XEON E5-1410 v2 - 2.80GHz - clockspeed doubts
Hello, I pay for a dedicated server with a Xeon E5-1410 v2 @ 2.80GHz.
This CPU should run at 2.80 GHz, but all benchmarks show me only about 1200 MHz:
Processor: Intel(R) Xeon(R) CPU E5-1410 v2 @ 2.80GHz
CPU cores: 8
Frequency: 1236.791 MHz
RAM: 62G
CPU: SHA256-hashing 500 MB - 1.788 seconds
CPU: bzip2-compressing 500 MB - 4.865 seconds
CPU: AES-encrypting 500 MB - 1.440 seconds
I have the following questions:
1) What does it mean, or why is the clock only 1.2 GHz instead of the normal 2.8 GHz?
Could it suggest that my provider intentionally underclocked the CPU?
(For example to save on electricity costs? Or the CPU is worn out and can't work at full speed anymore, so they underclocked it?)
2) Low benchmark values: I think the values above are not bad at all, but they still seem significantly lower than what this CPU should deliver when working normally at full performance. Am I right or not?
Do you think it should give better values / better performance, or are these accurate?
3) If the values above (SHA256, bzip2 and AES) are low, is that probably caused simply by the low clock speed / underclocking?
4) I have the same issue (roughly half the clock speed such a CPU should have) with other dedicated servers, most of them also from the same provider.
Is this just my unfamiliarity, and my assumption that they should run at the clock speed the manufacturer declares is wrong and all is OK? Or is this probably not OK, and does it rather suggest something is wrong with this provider?
BTW: I'm no expert on CPUs, clock speeds, server performance etc., so please bear with me if these are stupid questions, thanks.
Many thanks in advance for any explanation.
Comments
Have you checked the frequency with any real methods?
The CPU clock speed steps down when you're not doing anything computationally expensive. Your CPU governor is probably set to powersave. Set it to performance and reboot.
Apologies, but I'm not sure what a real method is..
I only tested the speed with benchmarks:
The output above is from nench; git.io/benhc.sh gives:
Kernel : 3.10.0-1062.1.1.el7.x86_64
CPU Model : Intel(R) Xeon(R) CPU E5-1410 v2 @ 2.80GHz
CPU Cores : 8 cores @ 1202.612 MHz
CPU Cache : 10240 KB
I have KVM-over-IP (iDRAC, it's a Dell), so I can also have a look in the BIOS.
Or what should I do / check for a more trustworthy and realistic measurement of CPU speed and performance?
Thanks
You mean this should be done in the BIOS, in the CPU settings, am I right?
What is the output of cpufreq-info?
This is what I see in the KVM console:
BIOS - I don't see anything there to configure CPU frequency or performance (as suggested by @CyberneticTitan):
https://ibb.co/VD4C3L7
Info about the clock frequency at reboot:
https://ibb.co/Pwy6YpM
It's quite strange and confusing for me that these images (system in KVM) show the right frequency of 2.8 GHz, but the benchmarks give roughly half :-/
Thanks for all explanations and comments
No, in the OS. Steps for Ubuntu: https://askubuntu.com/questions/1021748/set-cpu-governor-to-performance-in-18-04
Generally speaking this shouldn't have to be done through the BIOS, though it can be.
If Debian/Ubuntu, try "cpufreq-set -g performance" and then see what "cat /proc/cpuinfo | grep -i mhz" reports.
If CentOS, try "tuned-adm profile latency-performance".
Also "cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor" should tell you which governor is currently set.
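The per-core checks above can be wrapped in one small script. This is only a sketch using the standard Linux cpufreq sysfs paths; inside some VMs that interface doesn't exist at all, which the script reports instead of failing:

```shell
#!/bin/sh
# Print the active cpufreq governor and current frequency for every core.
# Uses the standard Linux sysfs interface under /sys/devices/system/cpu;
# on hosts without cpufreq support (common in VMs) the files are absent.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  gov="$cpu/cpufreq/scaling_governor"
  freq="$cpu/cpufreq/scaling_cur_freq"
  if [ -r "$gov" ]; then
    printf '%s: governor=%s, current=%s kHz\n' \
      "$(basename "$cpu")" "$(cat "$gov")" "$(cat "$freq" 2>/dev/null || echo '?')"
  else
    printf '%s: no cpufreq interface (common in VMs)\n' "$(basename "$cpu")"
  fi
done
```

Run it before and after changing the governor to confirm the change actually took effect on every core, not just cpu0.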
That would be the processor at idle; when it needs more oomph it clocks up to full speed. Some processors can idle down to about 800 MHz, so it varies by processor.
No foul play here, I don't think.
Thanks for suggestion.
BTW: It's the latest CentOS 7.7, freshly installed from ISO via the iDRAC HTML console (not using the OS installed by the provider or from their automated installation template) - shouldn't this already be set up for best performance by default in the OS?
https://www.certdepot.net/rhel7-get-started-cpu-governor/
I found this:
yum install -y kernel-tools
# cpupower frequency-set -g performance
That's all?
THANKS
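One caveat worth knowing: cpupower frequency-set only changes the governor for the running session, so it reverts at reboot. A sketch of making it persistent on CentOS 7, assuming the stock kernel-tools layout (a cpupower.service unit that reads CPUPOWER_START_OPTS from /etc/sysconfig/cpupower) - verify that file exists on your box first:

```shell
# Install the tools and apply the governor immediately (as root).
yum install -y kernel-tools
cpupower frequency-set -g performance

# Persist across reboots: the shipped cpupower.service runs
# "cpupower $CPUPOWER_START_OPTS" at boot, so set that variable.
echo "CPUPOWER_START_OPTS='frequency-set -g performance'" >> /etc/sysconfig/cpupower
systemctl enable cpupower.service
systemctl start cpupower.service
```

If the sysconfig file or unit name differs on your install, `rpm -ql kernel-tools` will show what the package actually shipped.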
Thanks! (Now I realize I had just read the very same page.)
You are very right:
This was it:
THANKS!
I'll change it to "performance" mode
and then post a new benchmark showing how the measured values change / improve.
Thanks to everyone for such quick help, comments and useful suggestions.
Update:
Changing the CPU governor from powersave to performance did not bring any improvement in performance.
In other words: after the change I measured the same CPU benchmark times as above (the only difference is that the benchmark now displays the full CPU clock frequency).
Conclusion - I think it was caused by the following: Intel CPUs can change frequency very quickly (within a few milliseconds) when load arrives, exactly the same behaviour as my workstation. The benchmark probably first measures the clock before any load is applied (and since my server was idling, it was running in powersave mode at the lowest frequency) and displays those values; but once the CPU performance test starts, the measurements and the whole test already run at full CPU performance and maximum frequency.
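That ramp-up is easy to observe directly. A minimal sketch, Linux-only, reading the live clock from /proc/cpuinfo (the "cpu MHz" field is present on x86; on other architectures it may be missing, which the fallback handles):

```shell
#!/bin/sh
# Compare the reported clock at idle vs. under load.
grep -m1 -i 'MHz' /proc/cpuinfo || echo 'no MHz field in /proc/cpuinfo'  # idle: likely the powersave floor

( while :; do :; done ) &   # busy-loop to load one core
BURN=$!
sleep 1                     # give the governor time to ramp the clock up
grep -m1 -i 'MHz' /proc/cpuinfo || echo 'no MHz field in /proc/cpuinfo'  # under load: near the rated speed
kill "$BURN"
```

On a powersave system the first line typically shows the floor (~1200 MHz here) and the second the full clock, which matches why the benchmark header and the actual test results disagree.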
So I think it is mostly useless to set the governor to performance; in most cases it will bring no improvement at all, or only a negligible one.
On the contrary, powersave (and similar hybrid modes) saves energy - more friendly and considerate to the environment - and at the same time extends the lifespan of the hardware (at least when the server is not running at full load non-stop).
Given these facts, I think this is why the OS default is the powersave mode.
And I had the same experience and result with five other, also mostly idle, dedicated servers.
Apologies for my poor English.
It is a 4-core/8-thread processor. When you're running single-core operations the speed can go up to 2.8 GHz, but when multitasking or using all the cores the speed might only be 1.2 GHz per core. This might explain what you're seeing.
Yep, you pretty much got it - the CPU will almost immediately raise the clock speed if load calls for it. There is some benefit to keeping it at the highest speed all the time (generally less latency), but it's hard to measure without good tools and simulated workloads. There's measurable 'jitter' when things are aggressively raising and lowering the clocks, but you kind of have to look for it.
As long as the hardware is run within spec, it'll last pretty much just as long either way. The main difference is power consumption and heat output, and there are tolerances in place before electrical degradation sets in (e.g. raised voltages, poorly managed heat).