URPad 1GB OpenVZ SSD Benchmark

Kevin kindly gave me a box to test today for his upcoming LEB exclusive offer (I'll not spoil the offer, but it's looking quite nice).
Here's a benchmark of the plan:
Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz
UnixBench
Benchmark Run: Wed Aug 22 2012 08:35:08 - 09:03:16
4 CPUs in system; running 4 parallel copies of tests

Dhrystone 2 using register variables       88844585.8 lps    (10.0 s, 7 samples)
Double-Precision Whetstone                    13030.8 MWIPS  (10.1 s, 7 samples)
Execl Throughput                              24155.0 lps    (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks       1187006.7 KBps   (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          324062.7 KBps   (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks       3470534.6 KBps   (30.0 s, 2 samples)
Pipe Throughput                             4504468.0 lps    (10.0 s, 7 samples)
Pipe-based Context Switching                 495742.0 lps    (10.0 s, 7 samples)
Process Creation                              54025.0 lps    (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                  22220.3 lpm    (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                   2434.9 lpm    (60.1 s, 2 samples)
System Call Overhead                        3157551.2 lps    (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   88844585.8   7613.1
Double-Precision Whetstone                       55.0      13030.8   2369.2
Execl Throughput                                 43.0      24155.0   5617.4
File Copy 1024 bufsize 2000 maxblocks          3960.0    1187006.7   2997.5
File Copy 256 bufsize 500 maxblocks            1655.0     324062.7   1958.1
File Copy 4096 bufsize 8000 maxblocks          5800.0    3470534.6   5983.7
Pipe Throughput                               12440.0    4504468.0   3621.0
Pipe-based Context Switching                   4000.0     495742.0   1239.4
Process Creation                                126.0      54025.0   4287.7
Shell Scripts (1 concurrent)                     42.4      22220.3   5240.6
Shell Scripts (8 concurrent)                      6.0       2434.9   4058.1
System Call Overhead                          15000.0    3157551.2   2105.0
                                                                     ========
System Benchmarks Index Score                                        3471.7
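If you'd rather reproduce this by hand instead of via ServerBear's script, a plain UnixBench run looks like this (just a sketch; the GitHub mirror URL is an assumption, grab the sources from wherever you usually do):

# Fetch and run UnixBench manually; ServerBear automates all of this.
# The byte-unixbench GitHub mirror below is an assumption -- any UnixBench 5.x tree works.
wget -O unixbench.tar.gz https://github.com/kdlucas/byte-unixbench/archive/master.tar.gz
tar xzf unixbench.tar.gz
cd byte-unixbench-master/UnixBench
./Run    # runs single-copy and N-copy passes (N = CPU count, 4 here)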
IOPS
ioping -c 10
request=1 time=0.2 ms
request=2 time=0.6 ms
request=3 time=0.5 ms
request=4 time=0.6 ms
request=5 time=0.5 ms
request=6 time=0.5 ms
request=7 time=0.5 ms
request=8 time=0.5 ms
request=9 time=0.6 ms
request=10 time=0.5 ms

10 requests completed in 9006.3 ms, 1970 iops, 7.7 mb/s

ioping -RD
5568 iops, 21.8 mb/s
min/avg/max/mdev = 0.0/0.2/1.9/0.0 ms
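For anyone unfamiliar with ioping, those two results come from two different modes (the trailing dot is just the directory to test; point it anywhere on the disk you care about):

ioping -c 10 .    # ten latency requests against the current directory
ioping -RD .      # seek-rate test; -D uses direct I/O, bypassing the page cache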
dd
dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
1.22977 s, 873 MB/s

dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync
1.25655 s, 855 MB/s
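Worth noting: conv=fdatasync makes dd flush the data to disk before it reports a speed, so the page cache can't inflate the figure. To repeat it yourself:

# 1 GiB sequential write, flushed to disk before the speed is reported
dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
rm -f sb-io-test    # clean up the test file afterwards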
Network
Cachefly                            10.2 MB/s
Linode, Atlanta, GA, USA            10.9 MB/s
Linode, Dallas, TX, USA             11.2 MB/s
Linode, Tokyo, JP                   6.53 MB/s
Linode, London, UK                  2.76 MB/s
OVH, Paris, France                  6.88 MB/s
SmartDC, Rotterdam, Netherlands     9.95 MB/s
Hetzner, Nuremberg, Germany         5.08 MB/s
iiNet, Perth, WA, Australia         3.33 MB/s
Leaseweb, Haarlem, NL               9.57 MB/s
Softlayer, Singapore                5.28 MB/s
Softlayer, Seattle, WA, USA         10.0 MB/s
Softlayer, San Jose, CA, USA        4.46 MB/s
Softlayer, Washington, DC, USA      10.5 MB/s
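These are single-threaded downloads of a 100 MB test file from each location. The Cachefly one you can run yourself as below; the other mirrors' URLs vary, so I won't guess at them:

# single-threaded pull of Cachefly's standard 100 MB test object
wget -O /dev/null http://cachefly.cachefly.net/100mb.test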
Trace
Traceroute (cachefly.cachefly.net):

traceroute to cachefly.cachefly.net (205.234.175.175), 30 hops max, 60 byte packets
 1  199.231.227.2 (199.231.227.2)  0.027 ms  0.008 ms  0.007 ms
 2  ae-10.border1.dal006.ionity.com (199.231.224.5)  0.283 ms  0.284 ms  0.276 ms
 3  63.251.44.9 (63.251.44.9)  0.539 ms  0.535 ms  0.526 ms
 4  core3.pc1-bbnet1.ext1a.dal.pnap.net (216.52.191.41)  1.381 ms  1.375 ms  1.375 ms
 5  dax-edge-03.inet.qwest.net (67.133.189.93)  1.355 ms  1.686 ms  1.343 ms
 6  dap-brdr-04.inet.qwest.net (205.171.25.30)  2.430 ms  3.734 ms  3.722 ms
 7  te8-4-10G.ar3.DAL2.gblx.net (64.208.110.201)  4.002 ms  1.450 ms  1.431 ms
 8  vip1.G-anycast1.cachefly.net (205.234.175.175)  1.113 ms  1.114 ms  1.106 ms
Full Report: http://serverbear.com/benchmark/2012/08/22/hq4pjlqjxz3vxuls
I'll probably pick up one of these myself, looking forward to the offer!
Comments
Nice!
By the way, the tags are < pre >
Ye, just fixed it up. Thanks!
Seems like many providers are having great I/O these days.
Not sure how many people are on that node, maybe @FTN_Kevin can confirm.
Just yourself.
{Removed per request}
There are quite a few now, and there were quite a few when I had Kevin give you the VM to run the tests. That goes to show how well these SSDs are performing on this node.
Even if the node were at 100% capacity, a typical user wouldn't notice any performance issues, even with several users running high-I/O applications.
This is exactly why I approve of this new "SSD VPS" fad. It's almost like you're on the box by yourself.
Nice sequential speeds. The ioping latencies look a bit high for SSDs, though.
Very nice results!
Is it RAID 1 or RAID 10?
I agree; it pretty much guarantees ultra-fast, stable speeds for the VPS container at all times.
Thanks guys.
10
@FTN_Kevin: When will this offer be available? I'm interested in trying some SSD VPS, was looking at RAMNode as well.
Hello Novocaine,
It already is: http://www.urpad.net/ssd-linux-vps.shtml
Let me know if you have any questions.