Is this normal? Can 4 CPUs load exactly the same?

jvnadr Member
edited June 2014 in General

This is weird (at least, for me), as I haven't seen it in any of the boxes I have.
In a VPS of mine with 4 cores, the cores are loading exactly the same, as you can see in the attachments. The box is OVZ with 1GB memory and access to 4 cores (E3-1240 v3 3.40GHz).

The snapshots were taken a few seconds apart, as you can see.

Comments

  • Some of my high-loaded instances with ChicagoVPS and BuyVM have the same behavior.

  • jvnadr Member

    My CVPS VPSes do not show the same behavior; the CPUs there do have different loads.

  • wych Member

    I noticed this a few days ago with an MC box; never thought anything of it.

    For the record, it's not hosted at CVPS.

  • wojons Member

    I have a few questions.

    Can you check whether you also have any (or a lot of) CPU steal going on? Is the node responsive, or slow in any way? What type of application are you running?
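    Here is a rough sketch of how to check for steal, assuming vmstat and a reasonably recent top are available inside the container:

    ```
    # Steal time shows up as the "st" column (CPU time stolen from this VM by the host).
    vmstat 1 5              # five 1-second samples; watch the last ("st") column
    top -b -n 1 | head -n 5 # the Cpu(s) summary line also includes %st
    ```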

  • jvnadr Member
    edited June 2014

    @wojons The response of the box is now normal, I think. This is a box that had some outages over the last couple of weeks (12.5 hours last week!). I opened a ticket with the provider and got the response that an abuser on the node caused the trouble ("This is because of other VPS taken high CPU resources on the Host-node" is the answer). Now the VPS is running well, but the CPUs are still loaded equally.

    And for the record also, it is not hosted at CVPS.

  • wojons Member

    @jvnadr Do you have any monitoring for this server, by any chance, something that can show some history on it? Also, you never said what you were running.

  • jvnadr Member
    edited June 2014

    @wojons A single website with ISPConfig under it. I monitor it with Pingdom (only for outages, not for performance), but I do have the Solus internal monitoring for CPU, HDD and MEM installed.

    (The load you see in the graph is normal, with some minor peaks when optimizing the DB. The traffic spikes are the offline backups running. The outages were all monitored externally by Pingdom and were the reason for the ticket I opened, which is now resolved. The server runs nicely, but the weird behavior of equally loaded CPUs is still there and does not seem to affect performance.)

  • wojons Member

    Without more information, I can try to explain this problem the way I have seen it. If I create a 1-core VM and then give it 4 visible cores, I end up with exactly equal or about equal CPU usage across all cores. I would recommend restarting the server and seeing if it persists, and then taking snapshots of /proc/stat, which gives the count of jiffies per core (see the sketch below). If they are about equal and stay within range, then it is probably what I suspect.
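    Something like this should do it (just a rough sketch; the 5-second gap is arbitrary):

    ```
    # Grab two snapshots of the per-core lines from /proc/stat, 5 seconds apart.
    grep '^cpu[0-9]' /proc/stat > /tmp/stat1
    sleep 5
    grep '^cpu[0-9]' /proc/stat > /tmp/stat2

    # Print how many user jiffies (2nd field) each core accumulated in between.
    # Identical deltas on every core suggest the per-core numbers are synthesized
    # from a single counter rather than measured per core.
    awk 'NR==FNR { a[$1] = $2; next }
         { printf "%s: +%d user jiffies\n", $1, $2 - a[$1] }' /tmp/stat1 /tmp/stat2
    ```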

  • AnthonySmith Member, Patron Provider

    Can you give us the 'top' header please when this happens (not htop)?

  • jvnadr Member
    edited June 2014

    @wojons So, if I understand well, those are not 4 actual cores I have access to, but virtual cores that are loaded equally? I have to mention that in /proc/cpuinfo I can see 4 physical "GenuineIntel" cores.

    This is the output:

    ```
    processor : 0
    vendor_id : GenuineIntel
    cpu family : 6
    model : 60
    model name : Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
    stepping : 3
    cpu MHz : 3399.997
    cache size : 8192 KB
    fpu : yes
    fpu_exception : yes
    cpuid level : 13
    wp : yes
    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx hypervisor lahf_lm arat epb xsaveopt pln pts dts
    bogomips : 6799.99
    clflush size : 64
    cache_alignment : 64
    address sizes : 40 bits physical, 48 bits virtual
    power management:

    processor : 1
    vendor_id : GenuineIntel
    cpu family : 6
    model : 60
    model name : Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
    stepping : 3
    cpu MHz : 3399.997
    cache size : 8192 KB
    fpu : yes
    fpu_exception : yes
    cpuid level : 13
    wp : yes
    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx hypervisor lahf_lm arat epb xsaveopt pln pts dts
    bogomips : 6799.99
    clflush size : 64
    cache_alignment : 64
    address sizes : 40 bits physical, 48 bits virtual
    power management:

    processor : 2
    vendor_id : GenuineIntel
    cpu family : 6
    model : 60
    model name : Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
    stepping : 3
    cpu MHz : 3399.997
    cache size : 8192 KB
    fpu : yes
    fpu_exception : yes
    cpuid level : 13
    wp : yes
    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx hypervisor lahf_lm arat epb xsaveopt pln pts dts
    bogomips : 6799.99
    clflush size : 64
    cache_alignment : 64
    address sizes : 40 bits physical, 48 bits virtual
    power management:

    processor : 3
    vendor_id : GenuineIntel
    cpu family : 6
    model : 60
    model name : Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
    stepping : 3
    cpu MHz : 3399.997
    cache size : 8192 KB
    fpu : yes
    fpu_exception : yes
    cpuid level : 13
    wp : yes
    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx hypervisor lahf_lm arat epb xsaveopt pln pts dts
    bogomips : 6799.99
    clflush size : 64
    cache_alignment : 64
    address sizes : 40 bits physical, 48 bits virtual
    power management:
    ```

    And this is /proc/stat

    ```
    cpu 307998 0 82519 15423519 366334 0 0 203603
    cpu0 76999 0 20629 3855879 91583 0 0 50900
    cpu1 76999 0 20629 3855879 91583 0 0 50900
    cpu2 76999 0 20629 3855879 91583 0 0 50900
    cpu3 76999 0 20629 3855879 91583 0 0 50900
    intr 0
    swap 0 0
    ctxt 4724657
    btime 1402903188
    processes 6821
    procs_running 2
    procs_blocked 0
    ```

    AnthonySmith said: Can you give us the 'top' header please when this happens

    This is the output (it happens all the time)

  • wojons Member

    @jvnadr said:

    So, just for top: when you are in top, press 1 and it will break down each CPU.
    But yes, since you are on OpenVZ, these are going to be virtual cores. I don't know how the hosting provider has it set up, but based on what I see in /proc/stat you may really only have one CPU core, which is very odd. I doubt the host will tell you, but ask them which scheduler they are using for the machine, because something is clearly wrong if every process you run appears to run on all 4 cores.
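    If mpstat from the sysstat package happens to be installed in the container, it gives the same per-core breakdown non-interactively:

    ```
    # Per-core utilization, three samples at 1-second intervals (needs sysstat).
    mpstat -P ALL 1 3
    ```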

  • jvnadr Member

    wojons said: So, just for top: when you are in top, press 1

    As you can imagine, the results in top (apart from the process list) are the same as in htop (how could they be different?). So, I have doubts that this is a real split of load between cores, as /proc/cpuinfo seems to suggest. It is odd to have equally loaded CPUs for all of the processes.

  • wojons Member

    @jvnadr said:
    As you can imagine, the results in top (apart from the process list) are the same as in htop (how could they be different?). So, I have doubts that this is a real split of load between cores, as /proc/cpuinfo seems to suggest. It is odd to have equally loaded CPUs for all of the processes.

    Yeah, I am sure everything is the same in the top breakdown; I was just letting you know. Also, in a VM, apart from a few exceptions, they will pass down the direct CPU data from the host.

  • andrew Member

    This issue has happened for me with crissic.net. Did you find any solution?

  • FrankZ Veteran

    +1 what @andrew said

  • jvnadr Member

    @andrew @FrankZ No, not really. The VPS is working well, I guess, but I still do not have any valid answer. Some friends of mine had different opinions on whether this is an issue, whether it is caused by a particular setup, or whether it is normal behavior for some OpenVZ installations. The odd thing is that most of my other boxes (as far as I can remember, I have not checked in depth) do not have this behavior; each CPU shows a different load.

    Thanked by 1 FrankZ
  • Profforg Member
    edited July 2014

    I recently saw this on one VPS provider. It was announced by the host as a feature: they provide one (or a few) real CPU cores, but the visible number of cores is 4x the real count. They said it is for performance reasons, to parallelize processes across cores. I don't think that is good for performance :) It's on OpenVZ.

    In your case, it would mean that you have only 1 real CPU core, not 4.
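    One rough way to test that (just a sketch, and it will eat CPU while it runs): start a single-threaded busy loop and watch the per-core numbers. A single thread can only max out one real core, so if all four visible cores climb to about 25% together, the per-core figures are being split up rather than measured.

    ```
    # Start one single-threaded CPU burner in the background.
    dd if=/dev/zero of=/dev/null &

    # Now watch htop (or top after pressing 1) for a while. One real core
    # should sit near 100% busy; four "virtual" cores showing ~25% each means
    # the per-core view is just the total divided by the visible core count.

    # Stop the burner when done.
    kill %1
    ```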

    Thanked by 1 FrankZ
  • DBA Member
    edited July 2014

    Short Answer: A bug introduced in fairly recent OpenVZ kernel releases and now fixed in 042stab092.1. It seems it was introduced when some changes were made to the scheduler.

    Long Answer: I have seen this behaviour on my VPS after they updated the host node's kernel. (Note: the uname text comes from the host but is only updated when the container starts; if the host node is updated, the container will still show the old version until it is restarted.)

     # uname -a
    Linux ##### 2.6.32-042stab090.5 #1 SMP Sat Jun 21 00:15:09 MSK 2014 i686 i686 i386 GNU/Linux
    

    There was a security fix included in this release for containers using simfs so any nodes using ploop didn't need it and might not have been updated.

    Since 042stab090.4:

    > Fixed a critical vulnerability in the legacy simfs container filesystem (ploop is not affected) (CVE-2014-3519, PSBM-27641)

    An OpenVZ installation at home using the same kernel version shows the same odd behavior. After updating the host to 2.6.32-042stab092.1 it shows separate CPU usage again in the containers.
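    For anyone checking their own container, the kernel string it reports is easy to compare against the fixed release (with the caveat above that it only refreshes after the container restarts):

    ```
    # Kernel build as reported inside the container; anything older than
    # 2.6.32-042stab092.1 can still show the equal-per-CPU readings.
    uname -r

    # Crude version comparison: if your running kernel is printed first,
    # it sorts older than the fixed release.
    printf '%s\n' "$(uname -r)" 2.6.32-042stab092.1 | sort -V
    ```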

    From the CU-2.6.32-042stab092.1 Parallels Virtuozzo Containers 4.7 Core Update (http://kb.parallels.com/en/122229), one of the bug fixes listed is:

    • The top utility run inside a Container could show confusing equal values of used CPU power for every CPU available inside that Container; even though the total CPU power used inside the Container was shown correctly. (#PSBM-26714)
    Thanked by 1 FrankZ
  • andrew Member

    @SkylarM what do you think? Is it a kernel issue or something else?

  • black Member

    Profforg said: In your case, it would mean that you have only 1 real CPU core, not 4.

    This makes the most sense.

    Thanked by 1 Profforg
  • @andrew said:
    @SkylarM what do you think? Is it a kernel issue or something else?

    I'd say what DBA posted above is accurate. As it's not a mission-critical bug or security update, we won't be rebooting nodes quite yet to apply the fixed kernel.

  • FrankZ Veteran
    edited July 2014

    After doing some checking, this does seem to be a kernel bug present since at least 042stab090.3.
    My Proxmox 2.6.32-30-pve machines do the same thing.

    Thanked by 1 andrew
  • DBA Member

    I see 042stab092.2 is now out and is a security update to fix CVE-2014-4699.
