~431 MB total RAM on 512 MB VPS?

I've got a KVM VPS with 512 MB RAM, but /proc/meminfo shows only ~431 MB total:

daniel@vps10:~$ cat /proc/meminfo
MemTotal:         442264 kB
MemFree:          295916 kB
...

That matches what free -m shows:

daniel@vps10:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:            431          42         290           3          99         368
Swap:           509           4         505

I asked the provider about it and they said, "Yes, there are overheads to a virtual machine that will reduce the usable memory available to the guest operating system."

Is it normal to have ~15.8% overhead like that? It seems a bit misleading to include invisible overheads in the amount of RAM they advertise. I just checked a box at DigitalOcean (who apparently also use KVM) and the value is much closer to the full amount (for a 1 GB VPS, it shows 1031824 kB as MemTotal).
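
For reference, that's (512 − 431) / 512 ≈ 15.8%. You can also compute it straight from /proc/meminfo (a rough sketch; 524288 kB is the advertised 512 MB, and it comes out ~15.6% this way because free -m rounds to whole megabytes):

# percentage of the advertised 512 MB (524288 kB) missing from MemTotal
awk '/^MemTotal/ {printf "%.1f%% missing\n", (524288 - $2) * 100 / 524288}' /proc/meminfo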


Comments

  • name and shame and help the community.

    Thanked by: eol
  • On OVH's 2 GB plan you only get 1.71 GB.

  • deank Member, Troll
    edited December 2018

    Yes, it's normal.

    If you don't like it, continue switching hosts.

  • 475.70 MB on my 512 MB KVM node.

  • name and shame and help the community.

    This is on a BudgetNode storage VPS. Other than the memory discrepancy, I'm really happy with it.

    I just noticed something similar on my "main" VPS (BuyVM Slice 4096) - it's advertised as 4096 MB RAM but actually only has 3948 MB accessible:

    12:00 daniel@vps03 /home/daniel
    % free -m
                  total        used        free      shared  buff/cache   available
    Mem:           3948        1874        1058         191        1016        1595
    

    deank said: If you don't like it, continue switching hosts.

    I haven't actually switched hosts in a while; I'm just getting more servers for other projects with different requirements :smile:

  • It is normal. You can switch to an OpenVZ provider and you will likely get a full 512 MB RAM.

  • dedipromo said: You can switch to an OpenVZ provider

    I tried that, but OpenVZ tends to use a ridiculously old kernel (2.6 series, which is nearly ten years old now) and Debian buster/testing refuses to run on anything older than 3.10.

    So I'll just deal with the memory difference :smile:

  • smile Member
    edited December 2018

    This actually doesn't seem normal to me; 15% is too much.

    proxmox, 768 MB:

    free -m
                  total        used        free      shared  buff/cache   available
    Mem:            751         148          77          23         525         447
    Swap:           764           0         764

    vmware, 1 GB:

    free -m
                  total        used        free      shared  buff/cache   available
    Mem:            991         290         168          55         533         466
    Swap:          1023          16        1007

    solusvm, 1 GB:

    free -m
                 total       used       free     shared    buffers     cached
    Mem:           992        437        555          0        260        121
    -/+ buffers/cache:         56        936
    Swap:         1987          0       1987

  • Shared VGA memory eats into the RAM.

    If the host wanted to steal from you, they'd do it by silently overprovisioning, not by trimming the advertised numbers, which are made as big as possible to bait customers anyway.

    Anyhow, I'm not rejecting your claim 100%. It could be anything.

    Thanked by: vimalware
  • TimboJones Member
    edited December 2018

    @smile said:
    This actually doesn't seem normal to me; 15% is too much.

    proxmox, 768 MB:

    free -m
                  total        used        free      shared  buff/cache   available
    Mem:            751         148          77          23         525         447
    Swap:           764           0         764

    vmware, 1 GB:

    free -m
                  total        used        free      shared  buff/cache   available
    Mem:            991         290         168          55         533         466
    Swap:          1023          16        1007

    solusvm, 1 GB:

    free -m
                 total       used       free     shared    buffers     cached
    Mem:           992        437        555          0        260        121
    -/+ buffers/cache:         56        936
    Swap:         1987          0       1987

    Someone actually read the OP and responded appropriately to the 15% in question.

    It would be good to understand where this overhead goes so it can be appropriately configured, like swap space.

    Thanked by: Falzo
  • SpeedBus Member, Host Rep
    edited December 2018

    You could try:

    dmidecode -t memory
    

    For example, on a KVM-based VM with 1 GB allocated to it:

    root@local:~# free -m
                  total        used        free      shared  buff/cache   available
    Mem:            985         376          75           2         533         465
    Swap:          1023          26         997
    

    Meanwhile, dmidecode shows a "1024 MB" stick present:

    root@local:~# dmidecode -t memory
    # dmidecode 3.1
    Getting SMBIOS data from sysfs.
    SMBIOS 2.4 present.
    
    Handle 0x1000, DMI type 16, 15 bytes
    Physical Memory Array
            Location: Other
            Use: System Memory
            Error Correction Type: Multi-bit ECC
            Maximum Capacity: 1 GB
            Error Information Handle: Not Provided
            Number Of Devices: 1
    
    Handle 0x1100, DMI type 17, 21 bytes
    Memory Device
            Array Handle: 0x1000
            Error Information Handle: 0x0000
            Total Width: 64 bits
            Data Width: 64 bits
            Size: 1024 MB
            Form Factor: DIMM
            Set: None
            Locator: DIMM 0
            Bank Locator: Not Specified
            Type: RAM
            Type Detail: None
    
  • Interesting, thanks @SpeedBus. dmidecode does indeed show 512 MB:

    # dmidecode 3.2
    Getting SMBIOS data from sysfs.
    SMBIOS 2.4 present.
    
    Handle 0x1000, DMI type 16, 15 bytes
    Physical Memory Array
            Location: Other
            Use: System Memory
            Error Correction Type: Multi-bit ECC
            Maximum Capacity: 512 MB
            Error Information Handle: Not Provided
            Number Of Devices: 1
    
    Handle 0x1100, DMI type 17, 21 bytes
    Memory Device
            Array Handle: 0x1000
            Error Information Handle: 0x0000
            Total Width: 64 bits
            Data Width: 64 bits
            Size: 512 MB
            Form Factor: DIMM
            Set: None
            Locator: DIMM 0
            Bank Locator: Not Specified
            Type: RAM
            Type Detail: None
    
    
    Thanked by: SpeedBus
  • crashkernel might take some memory. Not sure if it's enabled on your servers.

    Thanked by: vimalware
  • perennate Member, Host Rep
    edited December 2018

    It is the same as if you rent a dedicated server with 2 GB RAM, or if you purchase a desktop computer with 2 GB RAM. After you install your OS you'll see there's not quite 2 GB RAM available for use by applications since some memory is reserved for the kernel and stuff.

    So it isn't really an invisible overhead: the memory is visible to your system, it's just taken up by your OS rather than being available to your applications. If you installed a different OS you might see a bit less RAM used by the OS and more available for applications. But ultimately it's your OS that is using the RAM, not the provider's infrastructure.

    But I agree with what other people said: 81 MB is a bit high. What distribution are you using, and are you running the default kernel?

    Thanked by: uptime
  • perennate said: It is the same as if you rent a dedicated server with 2 GB RAM, or if you purchase a desktop computer with 2 GB RAM. After you install your OS you'll see there's not quite 2 GB RAM available for use by applications since some memory is reserved for the kernel and stuff.

    As far as I know, the full amount should still be reported to the OS, though? Or does the system reserved memory not even appear in /proc/meminfo or top?

  • @perennate said:
    ... since some memory is reserved for the kernel ...

    ... and drivers.

    Thanked by: tcp6
  • perennate Member, Host Rep
    edited December 2018

    Daniel15 said: As far as I know, the full amount should still be reported to the OS, though? Or does the system reserved memory not even appear in /proc/meminfo or top?

    I did a bit of Googling and according to https://stackoverflow.com/questions/20348007/how-can-i-find-out-the-total-physical-memory-ram-of-my-linux-box-suitable-to-b you can check it from /proc/meminfo:

    Add the last 2 entries of /proc/meminfo, they give you the exact memory present on the host.

    Example:

    DirectMap4k:       10240 kB
    DirectMap2M:     4184064 kB
    

    10240 + 4184064 = 4194304 kB = 4096 MB.
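
    Or let awk do the addition for you (a quick sketch; on larger machines a DirectMap1G line may also appear, which this sums too):

    # sum all DirectMap* entries from /proc/meminfo
    awk '/^DirectMap/ {sum += $2} END {printf "%d kB (%.2f MB)\n", sum, sum / 1024}' /proc/meminfo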

    Edit: dmesg | grep Memory might also give some clues as to where your RAM is going.

    Thanked by: Daniel15, uptime
  • Daniel15 Veteran
    edited December 2018

    @perennate said:

    Daniel15 said: As far as I know, the full amount should still be reported to the OS, though? Or does the system reserved memory not even appear in /proc/meminfo or top?

    I did a bit of Googling and according to https://stackoverflow.com/questions/20348007/how-can-i-find-out-the-total-physical-memory-ram-of-my-linux-box-suitable-to-b you can check it from /proc/meminfo:

    Add the last 2 entries of /proc/meminfo, they give you the exact memory present on the host.

    Example:

    DirectMap4k:       10240 kB
    DirectMap2M:     4184064 kB
    

    10240 + 4184064 = 4194304 kB = 4096 MB.

    Interesting! On my 512 MB system:

    DirectMap4k:       34792 kB
    DirectMap2M:      489472 kB
    

    which adds up to 511.98 MB.

  • perennate Member, Host Rep
    edited December 2018

    @Daniel15, run dmesg | grep Memory; it should print something like this:

    Memory: 2039116K/2096632K available (7502K kernel code, 1163K rwdata, 3440K rodata, 1376K init, 1448K bss, 57516K reserved)

    So at least then you'll know if most of it is kernel code, reserved, or what.
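
    In that example the gap between the two leading numbers is exactly the reserved figure: 2096632K − 2039116K = 57516K, i.e. roughly 56 MB held back before userspace starts.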

    Thanked by: Daniel15, eol, tcp6
  • Daniel15 Veteran
    edited December 2018

    Thank you! That's useful :smiley:

    Memory: 422184K/523872K available (6587K kernel code, 648K rwdata, 2044K rodata, 872K init, 428K bss, 101688K reserved, 0K cma-reserved, 0K highmem)
    

    I still don't know why so much RAM is reserved, but at least I know that it's reserved now. I'll try other kernels when I get some free time and see if that makes a difference.

  • IonSwitch_Stan Member, Host Rep

    If you're running CentOS, it's probably being used by crashkernel:

    $ ssh 66.11.126.xx -l root
    [root@memory-test-512 ~]# cat /etc/redhat-release
    CentOS Linux release 7.3.1611 (Core)
    [root@memory-test-512 ~]# free -m
                  total        used        free      shared  buff/cache   available
    Mem:            488          42         227           4         218         420
    Swap:             0           0           0
    

    If you have less, make sure crashkernel is off...

    [root@memory-test-512 ~]# sed -i 's/crashkernel=auto/crashkernel=no/' /etc/default/grub
    [root@memory-test-512 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
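    # (sketch) after rebooting, verify that the crash kernel reservation is gone:
    [root@memory-test-512 ~]# cat /proc/cmdline               # should no longer show crashkernel=auto
    [root@memory-test-512 ~]# dmesg | grep -i crashkernel     # empty output means nothing is reserved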
    
    

    That last bit of RAM is reserved for the kernel...

    [root@memory-test-512 ~]# dmesg | grep BIOS | grep reserved
    [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
    [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
    [    0.000000] BIOS-e820: [mem 0x000000001ffdc000-0x000000001fffffff] reserved
    [    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
    [    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
    
  • @IonSwitch_Stan I'm using Debian buster. I don't see anything relating to crashkernel in /etc/default/grub.

  • IonSwitch_Stan Member, Host Rep
    edited December 2018
    root@memory-test-512:~# cat /etc/debian_version
    9.4
    root@memory-test-512:~# free -m
                  total        used        free      shared  buff/cache   available
    Mem:            492          31         376           2          85         446
    Swap:             0           0           0
    root@memory-test-512:~# dmesg | grep Memory
    [    0.000000] Memory: 482064K/523752K available (6195K kernel code, 1137K rwdata, 2856K rodata, 1396K init, 688K bss, 41688K reserved, 0K cma-reserved)
    [    0.120576] x86/mm: Memory block size: 128MB
    root@memory-test-512:~# dmesg | grep reserv
    [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
    [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
    [    0.000000] BIOS-e820: [mem 0x000000001ffdc000-0x000000001fffffff] reserved
    [    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
    [    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
    
    
    Thanked by: uptime, Daniel15
  • deank Member, Troll

    Windows and Unix report RAM differently.

  • Windows artificially limits RAM.

    Thanked by: tcp6
  • Daniel15 Veteran
    edited December 2018

    @IonSwitch_Stan - Is that a 32-bit or 64-bit VPS?

  • Daniel15 Veteran
    edited December 2018

    I figured this out!

    I was using a PAE kernel (as that's what 32-bit Debian uses by default). PAE lets you use more than ~3 GB RAM on 32-bit systems. I guess it requires allocation of a mapping table to allow addressing the larger amount of RAM. I installed a non-PAE kernel (since I have no use for PAE) and now I have my RAM back:

    daniel@vps10:~$ uname  -a
    Linux vps10.d.sb 4.18.0-3-686 #1 SMP Debian 4.18.20-2 (2018-11-23) i686 GNU/Linux
    
    daniel@vps10:~$ free -m
                  total        used        free      shared  buff/cache   available
    Mem:            496          29         396           2          70         451
    Swap:           509           0         509
    

    Maybe I should use 64-bit, but I don't think 64-bit really has any advantages for systems with small amounts of RAM, and it can consume more RAM due to the larger pointer size.

    Anyways, in the end this was my fault, not the provider's fault :tongue:
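
    In case anyone else wants to check which flavour they're running and switch, this is roughly what I did (a sketch for 32-bit Debian; the metapackage name assumes the buster i386 archive):

    uname -r                               # ends in -686-pae for the PAE flavour
    grep -m1 -o '\bpae\b' /proc/cpuinfo    # the CPU flag (separate from the kernel flavour)
    sudo apt install linux-image-686       # non-PAE 686 kernel metapackage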

  • @Daniel15 said:
    I figured this out!

    I was using a PAE kernel (as that's what 32-bit Debian uses by default). PAE lets you use more than ~3 GB RAM on 32-bit systems. I guess it requires allocation of a mapping table to allow addressing the larger amount of RAM. I installed a non-PAE kernel (since I have no use for PAE) and now I have my RAM back:

    daniel@vps10:~$ uname  -a
    Linux vps10.d.sb 4.18.0-3-686 #1 SMP Debian 4.18.20-2 (2018-11-23) i686 GNU/Linux
    
    daniel@vps10:~$ free -m
                  total        used        free      shared  buff/cache   available
    Mem:            496          29         396           2          70         451
    Swap:           509           0         509
    

    Maybe I should use 64-bit, but I don't think 64-bit really has any advantages for systems with small amounts of RAM, and it can consume more RAM due to the larger pointer size.

    Anyways, in the end this was my fault, not the provider's fault :tongue:

    No, that's not it.

    proxmox:
    Linux suckit 4.9.0-8-686-pae #1 SMP Debian 4.9.110-3+deb9u4 (2018-08-21) i686 GNU/Linux

    768 MB:

    free -m
                  total        used        free      shared  buff/cache   available
    Mem:            751         148          98          23         504         447
    Swap:           764           0         764

  • smile said: No, that's not it.

    Hmm. I can consistently repro this on my VPS. I have both linux-image-4.18.0-3-686 and linux-image-4.18.0-3-686-pae installed: identical versions, the only difference being PAE vs. non-PAE. If I boot the PAE one, I have ~431 MB RAM available; with the non-PAE one, ~496 MB.

    There are probably other factors that affect it, like the SeaBIOS version, the virtualization platform (KVM vs. Proxmox), etc.
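
    If you want to compare flavours yourself, the quickest check after booting each kernel is simply (nothing fancy, just procfs):

    uname -r; grep MemTotal /proc/meminfo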
