Comments
Regarding the benchmarks: please note that this particular benchmark (popular here; I've forgotten the name) isn't that great, as it only uses a simple dd call to test disk performance. Without the right block size, even NVMe SSDs don't perform that well here. With a larger block size, our NVMe servers deliver 6-12 GB/s (!) of sequential read performance without any problems. Of course the small servers here won't deliver the same, but when it comes to IOPS, the SAS setup really rocks. I personally like to test random read/write with fio:
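(A typical invocation for that kind of test might look roughly like this; filename and size are placeholders, and direct I/O bypasses the page cache:)

```
# 50/50 random read/write at 4K blocks, direct I/O
fio --name=randrw --filename=test.fio --size=1G --direct=1 \
    --ioengine=libaio --rw=randrw --rwmixread=50 --bs=4k \
    --iodepth=32 --runtime=60 --time_based --group_reporting
```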
(Performed on a Black Friday mini VM)
The IOPS might even increase if we add more SSDs to the nodes.
FYI: Our NVMe nodes deliver up to 80k random IOPS (read/write).
Sequential read:
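(The matching sequential read test would look roughly like this, again with placeholder filename and size:)

```
# sequential read with a large block size to show raw throughput
fio --name=seqread --filename=test.fio --size=1G --direct=1 \
    --ioengine=libaio --rw=read --bs=1M --iodepth=8 \
    --runtime=60 --time_based --group_reporting
```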
I think that's great, not only regarding the price but in general. And now, hammer the nodes with your own benchmarks!
Best Regards,
Tim
Here it is: https://browser.geekbench.com/v4/cpu/15001384
Thanks. Can you also post the output of a bench.sh test, as it does some basic network tests, and the CPU flags (cat /proc/cpuinfo)?
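(For the flags part, a one-liner along these lines, just an illustration, filters the interesting virtualization/crypto flags out of /proc/cpuinfo:)

```
# list AES-NI and hardware virtualization flags exposed to the VM
grep -oE 'aes|vmx|svm' /proc/cpuinfo | sort -u
```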
can confirm, getting about 20k+20k on 4k blocksize, random read/write mix 50%. this goes down to 9k+9k with 64k blocksize which still equals >500MB/s in rw speed.
definitely good numbers!
while I agree with @PHP_Friends that dd and wget are not the best tools for a benchmark, here is a quick nench anyway:
Thanks @Falzo for that nench. A nench/bench script outputs some network throughput numbers, and that is what I was after. The CPU is probably passed through too, but can you also post the CPU flags exposed to the VM?
here are more speeds with iperf from YABS:
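(for reference, YABS is usually run as a one-liner; assuming the standard yabs.sh location:)

```
# runs fio disk tests, iperf3 network tests and Geekbench
curl -sL yabs.sh | bash
```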
for CPU flags, AES-NI and VMX/VT-x are enabled/passed through...
I personally always test dd directly on the command line, with a large enough block size. Most benchmarks go for a very quick test; people don't like to wait anymore. Same goes for ioping.
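(A manual test along those lines might look like this; the file name is a placeholder, and oflag=direct bypasses the page cache so the numbers reflect the disk rather than RAM:)

```
# 1 GiB sequential write with a large block size
dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct
# quick latency check of the disk behind the current directory
ioping -c 10 .
```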
I didn't know fio, so I performed two tests. These were not run on the Black Friday VPS but on the Schnupperspecial, just to be perfectly clear.
Given that these are 4K blocks, I'd think that the IOPS on an SSD would be a bit better. That said, I don't know if I'm reading these correctly. The VM is probably not connected to the same SAS SSDs, so I understand I can't compare these directly to the BF Special, but still. Could be me, though. In my understanding of IOPS I would expect at least ~10K with a 4K block size.
(mind that these two tests were done 16 hours apart)
true. probably a full node with a lot going on (noisy neighbours?) or IOps limited from the beginning ;-)
what do you get as a result for 64k blocksize?
@debaser In the SSD G2 we still have some SATA nodes running. Could you open a ticket about that? We'll check.
It's good that you say this, because I'm obviously an idiot. I was on a very busy or full node, which led to some performance issues. To make sure I had tried everything before opening a ticket, I set the virtual disk driver to IDE. Tim from @PHP_Friends migrated me to another node within an hour. And I forgot to switch the driver back to virtio.
Big difference:
We all are. Sometimes.
I was curious and ran the test on a managed server running on an older SATA node. Even there we see way more IOPS:
So yes, the bad benchmark result was only caused by the IDE driver. However, if 15k IOPS are not enough, we always offer a free migration to a SAS node for G2 customers.
The Black Friday nodes (more specific: all machines we bought since the end of 2016) are always powered by SAS + datacenter SSDs.
Best Regards,
Tim
I read that a 4K block size may result in artificially inflated results? Would a higher block size be a better indicator of performance, say at least 32K?
The second test in my post is with 64K blocks.
in general 4k bs in fio should give you a good idea about the maximum iops possible.
as bandwidth equals bs*iops, a higher blocksize like 64k usually is a better indicator for bandwidth limitations.
obviously the iops with bigger bs will decrease accordingly once you reach the max bw...
TL;DR: multiple runs with different blocksizes like @debaser did make a lot of sense ;-)
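(plugging in the 9k+9k @ 64k numbers reported above shows where that >500MB/s figure comes from; a quick shell check:)

```
# bandwidth = blocksize * iops
# 9000 iops * 64 KiB = 576000 KiB/s ~= 562 MiB/s per direction
echo "$(( 9000 * 64 / 1024 )) MiB/s"
```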
Wow. Payload increased by a factor of 16, IOPS drop only by a factor of 2. Good stuff.
Based on what you said, if IOPS decline less rapidly relative to blocksize, we are looking at premium potassium, because much heavier loads can be pushed without a corresponding penalty on IOPS. @debaser is on an excellent node now, based on this reasoning.
exactly! it's not linear all the way, because you have two limits: one being the max iops your storage potato is able to achieve, and the second being the size of the sata/sas hose you need to get your watering data through.
in the example above you'd probably be able to achieve the same iops of ~40k for 4k, 8k, 16k and 32k bs, because iops is the major limiting factor. only after that it changes to bw, and that's why iops decrease proportionally with larger blocksizes from that point.
which probably does not matter much as that's then large files / sequential writes soon anyway ;-)
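(the crossover point can be estimated directly: it's where max_iops * bs hits the bandwidth limit. the numbers below are purely hypothetical, a 600 MB/s link and a 40k iops cap:)

```
# bs_crossover = max_bandwidth / max_iops
# 600000 KB/s / 40000 iops = 15 KB; smaller blocks are iops-bound,
# larger blocks are bandwidth-bound
echo "$(( 600000 / 40000 )) KB"
```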
Good to see the numbers, thanks @PHP_Friends
Is 500Mb/s good network speed for high traffic websites?
Absolutely.