Ever wonder how an Amazon EC2 performs?

Caveman122 Member
edited June 2014 in General

[benchmark screenshot on Imgur]

m3.xlarge: 4 vCPU, 13 ECU, 15 GiB RAM (2 x 40 GB SSD)?

Thanked by Mark_R

Comments

  • That I/O has to hurt a little.

  • @GoodHosting said:
    That I/O has to hurt a little.

    I did a few benches, the results are consistent(ly low).

  • Mark_R Member

    Was your server busy doing stuff while you took that benchmark? Anything that could've heavily influenced the outcome?

  • lipanbaraku Member
    edited June 2014

    For me, both I/O and network speed are really good. I can get 20MB/s download speed from my online.net server on the free tier and 40MB/s on the c3.large plan (Oregon).

  • Get some provisioned IOPS and test again :P

  • @Mark_R said:
    Was your server busy doing stuff while you took that benchmark? Anything that could've heavily influenced the outcome?

    It was idle, and this is not my server either. Disk I/O could very well be limited for whatever reason.

  • vdnet Member

    @GoodHosting said:
    That I/O has to hurt a little.

    Why? If you're transferring over 65MB/s from the VPS, you're probably not using the disk anyways. The file is going to reside in memory.

    IOPS is what really matters.

  • @vdnet said:

    The IOPs will be interesting, you have a bit of a point there; but one is generally tied with the other [ slow throughput speed and low IOPs. ]

    @OP:

    Please run the following two non-destructive commands and post:

    Basic ioping latency test:

    ioping -c10 /
    

    I/O seek test (random reads, direct I/O):

    ioping -RD /
    
  • vdnet Member
    edited June 2014

    @GoodHosting said:
    The IOPs will be interesting, you have a bit of a point there; but one is generally tied with the other [ slow throughput speed and low IOPs.

    Not at all. Many hosts, including us, do not cache sequential operations and reserve the cache for commonly used files to boost performance. Sequential writing is a very bad way to test how fast a disk is, since in real-life scenarios it is the least common operation. About the only thing that generates large sequential writes is copying a file or uploading to the server, and how often do you do that at more than 65MB/s?
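
    To make that distinction concrete, here is the usual pair of tests side by side (a minimal sketch using the same tools discussed in this thread; the target path and scratch file are just examples):

    # Sequential write throughput (what most "I/O speed" numbers report):
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test

    # Random read latency/IOPS (much closer to a real hosting workload):
    ioping -RD /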

  • Caveman122 Member
    edited June 2014

    [benchmark screenshot on Imgur]

    It won't let me use direct I/O

  • @vdnet said:

    Please compare the following results, albeit just one sample.


    My personal Continuous Integration [ build ] server:

    Single i5-3470, 250Mbps down / 15Mbps up cable, 1x HDD

    # bench.sh
    CPU model :  Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
    Number of cores : 4
    CPU frequency :  2992.151 MHz
    Total amount of ram : 238 MB
    Total amount of swap : 991 MB
    System uptime :   4 days, 14:54,
    Download speed from CacheFly: 12.3MB/s
    Download speed from Coloat, Atlanta GA: 5.35MB/s
    Download speed from Softlayer, Dallas, TX: 11.5MB/s
    Download speed from Linode, Tokyo, JP: 9.98MB/s
    Download speed from i3d.net, Rotterdam, NL: 7.63MB/s
    Download speed from Leaseweb, Haarlem, NL: 10.4MB/s
    Download speed from Softlayer, Singapore: 7.35MB/s
    Download speed from Softlayer, Seattle, WA: 16.2MB/s
    Download speed from Softlayer, San Jose, CA: 15.9MB/s
    Download speed from Softlayer, Washington, DC: 9.52MB/s
    I/O speed :  290 MB/s
    
    # ioping -c10 /
    --- / (ext4 /dev/mapper/VolGroup-lv_root) ioping statistics ---
    10 requests completed in 9.0 s, 92.6 k iops, 361.7 MiB/s
    min/avg/max/mdev = 9 us / 10 us / 19 us / 2 us
    
    # ioping -RD /
    --- / (ext4 /dev/mapper/VolGroup-lv_root) ioping statistics ---
    93 requests completed in 3.0 s, 30 iops, 122.0 KiB/s
    min/avg/max/mdev = 48 us / 32.8 ms / 259.6 ms / 43.7 ms
    

    One of our minimum plan [256MB, 0.25 CPU] server instances:

    Single E5-1650 v2, 2x 1Gbps symmetric, 4x SSD [RAID10] config:

    # bench.sh
    CPU model :  QEMU Virtual CPU version (cpu64-rhel6)
    Number of cores : 1
    CPU frequency :  3499.998 MHz
    Total amount of ram : 241 MB
    Total amount of swap : 0 MB
    System uptime :   2 days, 4:18,
    Download speed from CacheFly: 21.8MB/s
    Download speed from Coloat, Atlanta GA: 44.3MB/s
    Download speed from Softlayer, Dallas, TX: 41.0MB/s
    Download speed from Linode, Tokyo, JP: 3.33MB/s
    Download speed from i3d.net, Rotterdam, NL: 3.71MB/s
    Download speed from Leaseweb, Haarlem, NL: 21.0MB/s
    Download speed from Softlayer, Singapore: 6.74MB/s
    Download speed from Softlayer, Seattle, WA: 20.6MB/s
    Download speed from Softlayer, San Jose, CA: 21.9MB/s
    Download speed from Softlayer, Washington, DC: 39.4MB/s
    I/O speed :  879 MB/s
    
    # ioping -c10 /
    --- / (ext4 /dev/vda2) ioping statistics ---
    10 requests completed in 9.0 s, 178.6 k iops, 697.5 MiB/s
    min/avg/max/mdev = 3 us / 5 us / 11 us / 2 us
    
    # ioping -RD /
    --- / (ext4 /dev/vda2) ioping statistics ---
    52.2 k requests completed in 3.0 s, 17.5 k iops, 68.5 MiB/s
    min/avg/max/mdev = 35 us / 57 us / 2.7 ms / 46 us
    
  • @Caveman122 said:
    It won't let me use direct I/O

    Ahh, without the DIRECT option it's going to be hard to compare, but those are some nice benchmarks either way. I've posted two comparison results above [ from 4x SSD and from 1x HDD. ] The results are skewed, however, in that the build server is currently idling [ first benchmark ] and the node is full [ second benchmark. ]
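
    If the filesystem refuses direct I/O (ioping's -D flag needs O_DIRECT support), two possible fallbacks, sketched here; the device name below is only an example:

    # Seek test through the page cache instead; the numbers will be
    # optimistic, but it still exercises random reads:
    ioping -R /

    # Or run the direct-I/O seek test against the block device itself:
    ioping -RD /dev/xvda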

  • vdnet Member
    edited June 2014

    @GoodHosting said:

    I don't get your point. One HDD should never get 290MB/s, so either something is wrong there or it is being cached. Anyway, it still shows a major difference between sequential operations and IOPS: the HDD was ~3x slower than the SSD array for sequential and ~500x slower for random I/O, which confirms what I was saying, that sequential write tests are pretty useless.
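
    One way to check whether a suspiciously high dd figure comes from a cache layer rather than the disk (a sketch; oflag=direct asks the kernel to bypass the page cache, though a hypervisor's own disk cache can still inflate the result):

    dd if=/dev/zero of=test bs=64k count=16k oflag=direct; unlink test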

  • @vdnet said:

    The first test was run on CentOS 6.5 x86_64 running on VirtualBox [ whatever hypervisor / virtualization engine it uses ] on Windows 8 x86_64; the single HDD is a Seagate 250GB SATA desktop drive. The second test was run on CentOS 6.5 x86_64 under KVM virtualization on a CentOS 6.5 x86_64 host.

    Either way, you're right; my results [ while they did show that lower/lower and higher/higher went together ] didn't really "mean" anything, and it could probably go either way. [ Although I've yet to see very low sequential with high IOPs, or very high sequential with low IOPs. ]

  • vdnet Member
    edited June 2014

    These are our advanced SSD cached systems:

    [root@vpsgrid ioping-0.6]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 8.23191 s, 130 MB/s

    [root@vpsgrid ioping-0.6]# ./ioping -RD /

    --- / (ext4 /dev/ploop38309p1) ioping statistics ---
    48878 requests completed in 3000.0 ms, 148400 iops, 579.7 mb/s
    min/avg/max/mdev = 0.0/0.0/0.1/0.0 ms

    We disabled caching for sequential operations, so those only hit the SATA array, while random I/O uses an SSD array. It looks like Amazon does something similar above.
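
    For anyone curious how that kind of split is commonly wired up, one generic approach is LVM's cache target, which puts an SSD cache pool in front of a slower volume (purely illustrative; the VG, LV and device names below are made up, and this is not necessarily what vdnet runs on their ploop/OpenVZ nodes):

    # SSD at /dev/sdb joins the existing volume group "vg0"
    pvcreate /dev/sdb
    vgextend vg0 /dev/sdb
    # Carve a cache pool out of the SSD and attach it to the HDD-backed LV "data"
    lvcreate --type cache-pool -L 50G -n ssdcache vg0 /dev/sdb
    lvconvert --type cache --cachepool vg0/ssdcache vg0/data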

  • My VolumeDrive has better I/O :O

  • vdnet Member

    @XxNisseGamerxX said:
    My VolumeDrive has better I/O :O

    I/O or sequential write? Lol. Here we go again.

    Thanked by netomx
  • @vdnet said:

    I/O or sequential write? Lol. Here we go again.

    I/O lol

  • vdnet Member
    edited June 2014

    So it has more than 150K IOPS? Or 260K, if you're referring to Amazon.

  • XxNisseGamerxX Member
    edited June 2014

    My VolumeDrive VPS

    
    [root@vdrive ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 6.14176 s, 175 MB/s
    [root@vdrive ~]#
    
  • vdnet Member
    edited June 2014

    @XxNisseGamerxX said:
    My VolumeDrive VPS

    You didn't read our conversation then. Whoever named that an I/O test was wrong. It only tests one narrow slice of I/O (sequential writes), and sequential writes are the least common operation in normal hosting situations.

    Random reads are the most common operation, the exact opposite of what dd tests for.
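
    To make that concrete, a random-read test can be aimed at an existing file instead of writing anything (a sketch; the path is an example, and a file larger than RAM keeps the page cache from answering every request):

    # Direct random reads from the file; nothing is written:
    ioping -RD /path/to/large.file

    # Or a fixed number of timed read requests against the same file:
    ioping -c 100 -D /path/to/large.file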

    Thanked by netomx
  • AlexBarakov Patron Provider, Veteran

    65MB/s of I/O is enough for pretty much everything; users just like to see highly tuned results up in the GB/s range.

    Thanked by Maounique
  • TACServers Member
    edited June 2014

    @Alex_LiquidHost said:
    65MB/s of I/O is enough for pretty much everything; users just like to see highly tuned results up in the GB/s range.

    I have to agree with you on that.

    One of my VPSes. This is on an FC array. I am aware that it is not useful as a benchmark!

    root@www:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.351 s, 457 MB/s
    root@www:~#

  • INIZ VPS (HDD)

    ioping -c10 /
    --- / (simfs /vz/private/6765) ioping statistics ---
    10 requests completed in 9.0 s, 2.4 k iops, 9.2 MiB/s
    min/avg/max/mdev = 81 us / 423 us / 1.9 ms / 526 us

    ioping -RD /
    --- / (simfs /vz/private/6765) ioping statistics ---
    42.5 k requests completed in 3.0 s, 15.6 k iops, 61.1 MiB/s
    min/avg/max/mdev = 27 us / 63 us / 4.6 ms / 115 us

    root@server2:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.97673 s, 361 MB/s

  • Is this good or bad? I think 7 MB/s isn't so good :(

  • vdnet Member
    edited June 2014

    @linuxthefish said:
    Is this good or bad? I think 7 MB/s isn't so good :(

    1800 IOPS is good for random I/O; SATA drives put out around 150-300. You may want to test with -RD though. It is doubtful that any one VPS needs more than that, but sometimes it is good to have SSD speeds, especially when you have a lot of virtual servers sharing one array. I'd say if your speed is consistent and you don't have any excess load or disk wait, then you have nothing to worry about. Benchmarking high numbers means little in most cases, as long as you have enough IOPS to support your demand.
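
    A quick way to check for disk wait under your real workload rather than under a benchmark (a sketch, assuming the sysstat package is installed):

    # Extended per-device stats refreshed every second; watch the await and
    # %util columns. Consistently high values under normal load mean the disk
    # really is the bottleneck.
    iostat -x 1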

  • jar Patron Provider, Top Host, Veteran

    I can transfer 3TB per second with a single SATA drive.

    The trick is to throw the drive a short distance so it doesn't take long to hit the target.

    I mean since we're benchmarking useless statistics. I win.

  • @Caveman122 said:
    I did a few benches, the results are consistent(ly low).

    Is that for the block storage or for the standard storage you get with each instance?

  • It seemed to be SAN. It was xvda/xvdb on iSCSI.

  • tchen Member

    Probably worthwhile to peruse the performance primer Datadog assembled:

    http://www.datadoghq.com/wp-content/uploads/2013/07/top_5_aws_ec2_performance_problems_ebook.pdf
