RAM Caching
GoodHosting
Member
in General
So, I was mucking about doing a few benchmarks for my latest offer and....
[root@orlando1 ~]# ioping / -c 10
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=1 time=14 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=2 time=6 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=3 time=7 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=4 time=9 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=5 time=8 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=6 time=6 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=7 time=6 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=8 time=7 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=9 time=8 us
4.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=10 time=6 us

--- / (ext4 /dev/mapper/vg_orlando1-root) ioping statistics ---
10 requests completed in 9.0 s, 129.9 k iops, 507.3 MiB/s
min/avg/max/mdev = 6 us / 7 us / 14 us / 2 us

[root@orlando1 ~]# ioping / -c 10 -s 16k
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=1 time=14 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=2 time=13 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=3 time=16 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=4 time=12 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=5 time=16 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=6 time=17 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=7 time=14 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=8 time=13 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=9 time=13 us
16.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=10 time=14 us

--- / (ext4 /dev/mapper/vg_orlando1-root) ioping statistics ---
10 requests completed in 9.0 s, 70.4 k iops, 1.1 GiB/s
min/avg/max/mdev = 12 us / 14 us / 17 us / 1 us

[root@orlando1 ~]# ioping / -c 10 -s 128k
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=1 time=33 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=2 time=55 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=3 time=43 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=4 time=51 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=5 time=58 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=6 time=56 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=7 time=42 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=8 time=44 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=9 time=53 us
128.0 KiB from / (ext4 /dev/mapper/vg_orlando1-root): request=10 time=63 us

--- / (ext4 /dev/mapper/vg_orlando1-root) ioping statistics ---
10 requests completed in 9.0 s, 20.1 k iops, 2.5 GiB/s
min/avg/max/mdev = 33 us / 49 us / 63 us / 8 us

[root@orlando1 ~]# ioping / -c 10 -s 4m
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=1 time=1.0 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=2 time=1.2 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=3 time=1.1 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=4 time=1.2 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=5 time=1.1 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=6 time=1.2 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=7 time=1.2 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=8 time=1.1 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=9 time=1.2 ms
4.0 MiB from / (ext4 /dev/mapper/vg_orlando1-root): request=10 time=1.2 ms

--- / (ext4 /dev/mapper/vg_orlando1-root) ioping statistics ---
10 requests completed in 9.0 s, 872 iops, 3.4 GiB/s
min/avg/max/mdev = 1.0 ms / 1.1 ms / 1.2 ms / 46 us
Comments
what?
He just showed how fast his empty node is.
how?
Doesn't this break the expectation of file buffer flushing?
Sadly no, that's on a full node.
Yes, a little. Individual blocks are considered stale after 6,000 centiseconds of inactivity, at which time they are synchronized out to the disk. A block that has remained active but has not yet been flushed after 12,000 centiseconds is synchronized as well.
Reads are delivered entirely from the cache, while writes follow the rules above. This is all easily configurable using the "cfq" I/O scheduler that has shipped with Linux for a long time now; I'm surprised nobody has done it.
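For readers who want to try something similar, here is a minimal sketch of the kind of host-side tuning being described. The timings quoted above are in centiseconds, the unit used by the kernel's dirty-page writeback sysctls; the exact knobs and values GoodHosting uses aren't shown in the thread, so everything below (every value except the quoted 6,000, and the device name sdX) is an illustrative assumption.

# Illustrative host tuning only -- values and device name are assumptions.
# Dirty pages become eligible for writeback once they are this old (60 s):
sysctl -w vm.dirty_expire_centisecs=6000
# How often the flusher threads wake up to write out expired pages:
sysctl -w vm.dirty_writeback_centisecs=1500
# Let a large share of RAM hold dirty data before writeback is forced:
sysctl -w vm.dirty_background_ratio=50
sysctl -w vm.dirty_ratio=80
# Select the cfq scheduler mentioned above for the backing device:
echo cfq > /sys/block/sdX/queue/scheduler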
Drop 512GB of RAM into a dual E5 node [ RAM isn't expensive compared to the size of chassis required for 32 SSDs to make a proper RAID array... ] and back it with a 12-Disk RAID10 backend for the hard storage.
The results are spectacular.
Oh yeah, I guess it wasn't fair that I was testing the above on a bare system, so how about we test it on one of my KVM instances with the best-case I/O scenario as well:
Related configuration of importance:
DISK = [ CACHE=WRITEBACK, TARGET=VDA, PREFIX=VD, DRIVER=QCOW2 ]
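That DISK line looks like an OpenNebula-style template attribute. For anyone more used to plain QEMU, a roughly equivalent invocation would look like the sketch below; the memory size and disk path are placeholders, not the actual deployment command.

# Hedged QEMU equivalent of CACHE=WRITEBACK, TARGET=VDA (virtio), DRIVER=QCOW2:
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=guest-disk.qcow2,if=virtio,format=qcow2,cache=writeback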
The guest system is running Debian 7 (may as well do a fair comparison), using the default template (Debian 7.1 AMD64 netinstall, no special settings). The only thing changed is that I've set the guest I/O scheduler to noop (it is a VM with a virtio disk, after all...); the guest-side commands are sketched below.
On the software selection screen, "SSH server" and "Standard system utilities" were selected, nothing else.
(Obviously I also had to apt-get install ioping.)
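For reference, the guest-side tweaks described above boil down to a couple of standard commands. The device name vda follows from TARGET=VDA; these are generic examples rather than the exact commands used in the test.

# Inside the Debian 7 guest: check the current scheduler and switch to noop
cat /sys/block/vda/queue/scheduler      # e.g. "noop deadline [cfq]"
echo noop > /sys/block/vda/queue/scheduler

# Install ioping and re-run the same tests as on the host
apt-get install ioping
ioping / -c 10
ioping / -c 10 -s 4m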
If that's not a fair test, I'm not sure what would be.
I have no idea what this thread means
This is nice for benchmarks, but you're a single power failure or forced restart away from major data corruption / data loss.
You don't have UPSes and backup power systems? Pretty much every datacenter does nowadays. As for "forced restart": sync is part of the shutdown init scripts, and we use Ksplice for live kernel replacements.
@GoodHosting, despite UPSes, a power failure can still happen, and then you will have a big mess to deal with. Murphy teaches us that there are many things that can break. I hope you and your customers don't have to learn this the hard way.
So you take pure DC power and assume their systems are perfect?
Here's your datacentre for that node having a power outage less than two months ago: http://www.hostdime.com/blog/wp-content/uploads/2014/03/RFO-March-24-2014.pdf
Do you even have A+B power while you're taking such a cavalier attitude to your clients' data? Or is it just so important to produce dat benchmark porn that it doesn't matter?
I'm no hardware guru, but I believe having battery/backup power on there will prevent nightmare scenarios where critical data isn't fully flushed to disk. It assumes the battery/backup will work as expected, though. I guess it's just a preference choice that risks stability for performance, in the end.
I often use a RAM disk in applications to avoid touching the hard drives.
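For anyone curious, the usual way to do that on Linux is a tmpfs mount; the size and mount point below are arbitrary examples, not anything specified in the thread.

# RAM-backed filesystem: contents live only in memory and vanish on reboot
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk

# Or make it persistent across reboots via /etc/fstab:
# tmpfs  /mnt/ramdisk  tmpfs  size=2g  0  0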
I've confirmed that our racks do in fact have A+B power. Thanks for the heads up that not all racks at HostDime did have this.
As per "benchmark pr0n" ; besides the fact that it seems that's all LET members care about lately (check the countless "VPS io performance" threads where abusers "test" their I/O every other minute and graph the results), we have customers that actually need I/O performance in their applications; and are aware of the risks involved.
This is why Nebula provides [ and we have configured for ] off-site backups and snapshots, which the customer can even schedule themselves using the "Scheduler" tab of their VM Details page. You can configure your VM to automatically perform any action you could perform manually, at any specified time of day, or every X [time].
For example, Jon Doe could configure his own schedule entirely from that tab.
Many things you could never do with SolusVM...
I guess if you're open about this in your offers, clients can make their own choice as to what they want.
Exactly. Scheduling and "Actions" are among the many useful features you'll never find in other VM control panels, as they're more of a SaaS/IaaS-geared feature set.
This breaks all the fsync durability assumptions for virtually every database, ever.
Please, please tell me you disclose this to your customers when they buy their VMs and make the implications abundantly clear?
The customers that specifically need I/O for their applications are moved to these nodes upon request [ or when confronted about their ridiculously high I/O usage on our offers, and asked if they would like to move to a more suitable node. ] The specifics are explained to them then.
As for fsync durability, that's not much of an issue: if the application asks to sync, we sync. It's that simple. If your application never syncs, then the centisecond-based flushing rules apply instead. It's just the standard Linux kernel host page/buffer cache, tweaked to be a little more useful.
It's also worth noting that a fair few of the customers that end up on our RAM-cached nodes are just running Jingling and other "don't care if it goes down / no data to lose" type applications. That's almost 90% of the customer base we get from LET [ which is why we don't post offers here often. ]
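If you want to see the difference being described on your own VM, one quick check is to compare a cached write against one that forces a sync. The file path and sizes below are arbitrary examples, and the numbers will obviously vary from node to node.

# Cached write: lands in the page cache, flushed later by the rules above
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256

# Synced write: dd calls fsync() at the end, so the flush is included in the timing
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fsync

# Direct I/O: bypasses the page cache entirely
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 oflag=direct

rm -f /tmp/ddtest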
As long as they know what they are getting into.