RAM Caching

So, I was mucking about doing a few benchmarks for my latest offer and....

[root@orlando1 ~]# ioping / -c 10
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=1 time=14 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=2 time=6 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=3 time=7 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=4 time=9 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=5 time=8 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=6 time=6 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=7 time=6 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=8 time=7 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=9 time=8 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=10 time=6 us

--- / (ext4 /dev/mapper/vg_orlando1-root) ioping statistics ---
10 requests completed in 9.0 s, 129.9 k iops, 507.3 MiB/s
min/avg/max/mdev = 6 us / 7 us / 14 us / 2 us

[root@orlando1 ~]# ioping / -c 10 -s 16k
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=1 time=14 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=2 time=13 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=3 time=16 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=4 time=12 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=5 time=16 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=6 time=17 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=7 time=14 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=8 time=13 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=9 time=13 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=10 time=14 us

--- / (ext4 /dev/mapper/vg_orlando1-root) ioping statistics ---
10 requests completed in 9.0 s, 70.4 k iops, 1.1 GiB/s
min/avg/max/mdev = 12 us / 14 us / 17 us / 1 us

[root@orlando1 ~]# ioping / -c 10 -s 128k
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=1 time=33 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=2 time=55 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=3 time=43 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=4 time=51 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=5 time=58 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=6 time=56 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=7 time=42 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=8 time=44 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=9 time=53 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=10 time=63 us

--- / (ext4 /dev/mapper/vg_orlando1-root) ioping statistics ---
10 requests completed in 9.0 s, 20.1 k iops, 2.5 GiB/s
min/avg/max/mdev = 33 us / 49 us / 63 us / 8 us

[root@orlando1 ~]# ioping / -c 10 -s 4m
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=1 time=1.0 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=2 time=1.2 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=3 time=1.1 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=4 time=1.2 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=5 time=1.1 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=6 time=1.2 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=7 time=1.2 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=8 time=1.1 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=9 time=1.2 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=10 time=1.2 ms

--- / (ext4 /dev/mapper/vg_orlando1-root) ioping statistics ---
10 requests completed in 9.0 s, 872 iops, 3.4 GiB/s
min/avg/max/mdev = 1.0 ms / 1.1 ms / 1.2 ms / 46 us

Comments

  • Mark_R Member

    what?

  • @Mark_R said:
    what?

    he just showed how fast his empty node is.

  • how?

  • perennate Member, Host Rep
    edited May 2014

    Doesn't this break the expectation of file buffer flushing?

  • GoodHosting Member
    edited May 2014

    @Mark_R said:
    what?

    @alexvolk said:
    he just showed how fast his empty node is.

    Sadly no, that's on a full node.

    @perennate said:
    Doesn't this break the expectation of file buffer flushing?

    Yes, a little. Individual blocks are considered stale after 6,000 centiseconds of inactivity, at which time they are synchronized out to the disk. A block that has remained active but has not yet been flushed after 12,000 centiseconds is synchronized as well.

    Reads are delivered entirely from the cache, while writes follow the rules above. This is all easily configurable with the "cfq" I/O scheduler that has shipped with Linux for a long time; I'm surprised nobody has done it.
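    (For reference, the usual stock-kernel knobs for this kind of time-based flushing are the vm.dirty_* sysctls; the sketch below simply mirrors the numbers quoted above, and whether these are the exact parameters used on this node is an assumption.)

    # Minimal sketch of the standard writeback tunables; 6000 mirrors the
    # "6,000 centiseconds" above, the remaining values are kernel defaults.
    sysctl -w vm.dirty_expire_centisecs=6000     # dirty pages older than this get flushed
    sysctl -w vm.dirty_writeback_centisecs=500   # how often the flusher threads wake up
    sysctl -w vm.dirty_background_ratio=10       # background flushing starts at this % of RAM
    sysctl -w vm.dirty_ratio=20                  # writers block once this % of RAM is dirty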

    Drop 512GB of RAM into a dual E5 node [ RAM isn't expensive compared to the size of chassis required for 32 SSDs to make a proper RAID array... ] and back it with a 12-Disk RAID10 backend for the hard storage.

    The results are spectacular.


    Oh yeah, I guess it wasn't fair that I was testing the above on the bare host, so how about we test it on one of my KVM instances with the best-case I/O scenario as well:

    Relevant configuration:

    • Disk is configured with the following properties:

      DISK = [ CACHE=WRITEBACK, TARGET=VDA, PREFIX=VD, DRIVER=QCOW2 ]
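    (In rough qemu terms, that DISK attribute corresponds to a -drive option along the following lines; the image path, memory size, and everything other than the -drive flag are placeholders, not the actual deployment command.)

      # Approximate qemu equivalent of the DISK attribute above; path and
      # memory size are placeholders.
      qemu-system-x86_64 -enable-kvm -m 1024 \
        -drive file=disk.0.qcow2,format=qcow2,if=virtio,cache=writeback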

    The guest is running Debian 7 from the default template (Debian 7.1 AMD64 netinstall, no special settings), which makes for a fair comparison. The only change is that I've set the guest I/O scheduler to noop (it is a VM with a virtio disk, after all...).
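    Setting the scheduler inside the guest is just a sysfs write; a minimal sketch, assuming the virtio disk shows up as /dev/vda as in the template above:

    # Show the available schedulers; the active one is in brackets.
    cat /sys/block/vda/queue/scheduler
    # Switch to noop at runtime.
    echo noop > /sys/block/vda/queue/scheduler
    # To persist across reboots, add elevator=noop to the kernel command line
    # (GRUB_CMDLINE_LINUX in /etc/default/grub, then run update-grub).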

    On the software selection screen, "SSH server" and "Standard system utilities" were selected, nothing else.

    ( Obviously I also had to apt-get install ioping )

    # ioping -c 10 /
    4096 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=1 time=0.0 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=2 time=0.0 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=3 time=0.0 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=4 time=0.0 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=5 time=0.0 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=6 time=0.0 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=7 time=0.0 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=8 time=0.0 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=9 time=0.0 ms
    4096 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=10 time=0.0 ms
    
    --- / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec) ioping statistics ---
    10 requests completed in 9002.5 ms, 86207 iops, 336.7 mb/s
    min/avg/max/mdev = 0.0/0.0/0.0/0.0 ms
    
    # ioping -c 10 / -s 16k
    16384 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=1 time=0.0 ms
    16384 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=2 time=0.0 ms
    16384 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=3 time=0.0 ms
    16384 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=4 time=0.0 ms
    16384 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=5 time=0.0 ms
    16384 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=6 time=0.0 ms
    16384 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=7 time=0.0 ms
    16384 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=8 time=0.0 ms
    16384 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=9 time=0.0 ms
    16384 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=10 time=0.0 ms
    
    --- / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec) ioping statistics ---
    10 requests completed in 9002.8 ms, 64103 iops, 1001.6 mb/s
    min/avg/max/mdev = 0.0/0.0/0.0/0.0 ms
    
    # ioping -c 10 / -s 128k
    131072 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=1 time=0.0 ms
    131072 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=2 time=0.1 ms
    131072 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=3 time=0.1 ms
    131072 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=4 time=0.1 ms
    131072 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=5 time=0.0 ms
    131072 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=6 time=0.1 ms
    131072 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=7 time=0.0 ms
    131072 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=8 time=0.1 ms
    131072 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=9 time=0.1 ms
    131072 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=10 time=0.1 ms
    
    --- / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec) ioping statistics ---
    10 requests completed in 9003.3 ms, 18939 iops, 2367.4 mb/s
    min/avg/max/mdev = 0.0/0.1/0.1/0.0 ms
    
    # ioping -c 10 / -s 4m
    4194304 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=1 time=1.0 ms
    4194304 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=2 time=1.3 ms
    4194304 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=3 time=1.2 ms
    4194304 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=4 time=1.2 ms
    4194304 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=5 time=1.3 ms
    4194304 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=6 time=1.2 ms
    4194304 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=7 time=1.4 ms
    4194304 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=8 time=1.2 ms
    4194304 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=9 time=1.3 ms
    4194304 bytes from / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec): request=10 time=1.3 ms
    
    --- / (ext4 /dev/disk/by-uuid/a835a03d-d27a-46a3-8125-0088625748ec) ioping statistics ---
    10 requests completed in 9015.2 ms, 808 iops, 3231.5 mb/s
    min/avg/max/mdev = 1.0/1.2/1.4/0.1 ms
    

    If that's not a fair test, I'm not sure what would be.

  • I have no idea what this thread means

  • rds100 Member
    edited May 2014

    This is nice for benchmarks, but it's a single power failure or forced restart away from major data corruption / data loss.

  • @rds100 said:
    This is nice for benchmarks, but it's a single power failure or forced restart away from major data corruption / data loss.

    You don't have UPSs and backup power systems? Pretty much every datacenter nowadays does. As for "forced restart": sync is part of the shutdown init scripts, and we use ksplice for live kernel replacements.

  • rds100 Member

    @GoodHosting despite UPSs, a power failure can still happen, and then you will have a big mess to deal with. Murphy teaches us that there are many things that can break. I hope you and your customers don't have to learn this the hard way.

  • lbft Member
    edited May 2014

    GoodHosting said: You don't have UPSs and backup power systems? Pretty much every datacenter nowadays does.

    So you take pure DC power and assume their systems are perfect?

    Here's your datacentre for that node having a power outage less than two months ago: http://www.hostdime.com/blog/wp-content/uploads/2014/03/RFO-March-24-2014.pdf

    Do you even have A+B power while you're taking such a cavalier attitude to your clients' data? Or is it just so important to produce dat benchmark porn that it doesn't matter?

  • ricardo Member
    edited May 2014

    I'm no hardware guru, but I believe having battery/backup power on there will prevent nightmare scenarios where critical data isn't fully flushed to disk. It assumes the battery/backup will work as expected, though. I guess it's just a preference that trades stability for performance, in the end.

    I often use a RAM disk in applications to avoid touching the hard drives.
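    A minimal sketch of that kind of RAM disk (tmpfs; the mount point and size are arbitrary examples):

    # Create and mount a 1 GiB tmpfs; its contents live only in RAM and are
    # lost on unmount or reboot.
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=1G tmpfs /mnt/ramdisk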

  • GoodHosting Member
    edited May 2014

    @lbft said:
    Do you even have A+B power while you're taking such a cavalier attitude to your clients' data? Or is it just so important to produce dat benchmark porn that it doesn't matter?

    I've confirmed that our racks do in fact have A+B power. Thanks for the heads up that not all racks at HostDime did have this.

    As per "benchmark pr0n" ; besides the fact that it seems that's all LET members care about lately (check the countless "VPS io performance" threads where abusers "test" their I/O every other minute and graph the results), we have customers that actually need I/O performance in their applications; and are aware of the risks involved.

    This is why Nebula provides [ and we have configured ] off-site backups and snapshots, which the customer can schedule themselves using the "Scheduler" tab of their VM Details page. You can configure your VM to automatically perform any action you could perform manually, at any specified time of day, or every X [time].

    For example, Jon Doe could configure his scheduling as follows:

    EVERY MINUTE
    - IF STATE IS NOT RUNNING
    - - RESUME OR BOOT
    - FI
    
    EVERY SIX HOURS
    - IF STATE IS NOT HOLD
    - - DIFFERENTIAL SNAPSHOT `vda`(disk.0) "my-snapshot-%y%m%d-%H.qcow2"
    - FI
    
    EVERY THREE DAYS
    - FULL SNAPSHOT `vda`(disk.0) "my-snapshot-full-%y%m%d.qcow2"
    

    Many things you could never do with SolusVM...

  • Virtovo Member

    @GoodHosting said:
    Many things you could never do with SolusVM...

    I guess if you're open about this in your offers to clients, they can make their own choice as to what they want.

  • @Virtovo said:
    I guess if you're open about this in your offers to clients, they can make their own choice as to what they want.

    Exactly :). Scheduling and "Actions" are among the many useful features you'll never find in other VM control panels, as they're more of a SaaS / IaaS-geared feature set.

  • Minty Member

    @GoodHosting said:
    I've confirmed that our racks do in fact have A+B power. Thanks for the heads up that not all racks at HostDime did have this.
    ...
    Many things you could never do with SolusVM...

    This breaks all the fsync durability assumptions for virtually every database, ever.

    Please, please tell me you disclose this to your customers when they buy their VMs and make the implications abundantly clear?

  • GoodHosting Member
    edited May 2014

    @Minty said:

    The customers that specifically need I/O for their applications are moved to these nodes upon request [ or when confronted about their ridiculously high I/O usage on our offers, and asked if they would like to move to a more suitable node. ] The specifics are explained to them then.


    As for fsync durability, that's not much of an issue. If the application asks to sync, we sync. It's that simple. If your application never syncs, the centisecond-based flushing rules above apply instead. It's the standard Linux kernel page/buffer cache on the host, just tweaked to be a little more useful.
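    A quick way to sanity-check that yourself (the file names are throwaway examples): compare synced writes against plain buffered ones. If sync requests really reach the disks, the first run should be dramatically slower than the second.

    # Each write is followed by a sync to stable storage (O_DSYNC).
    dd if=/dev/zero of=./dsync-test bs=4k count=1000 oflag=dsync
    # Plain buffered writes that only land in the page cache.
    dd if=/dev/zero of=./buffered-test bs=4k count=1000
    rm -f ./dsync-test ./buffered-test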


    It's also worth noting that a fair few of the customers who end up on our RAM-Cached nodes are just running Jingling and other "don't care if it goes down / no data to lose" type applications. That's almost 90% of the customer base we get from LET, [ which is why we don't post offers here often. ]

  • Minty Member

    As long as they know what they are getting into. :)
