What is the average SSD IO with dd command?
ehabehab Member
edited April 2014 in Help

Hi,

What is the average IO on an SSD VPS with a dd command like:

$ dd if=/dev/zero of=tempfile bs=1M count=200 conv=fdatasync,notrunc

Thanks for any input.
ehab

Comments

  • nickyzainickyzai Member, Host Rep

    400MB/s to 1GB/s, I think. Got 1~1.3GB/s on my OAH 512MB OpenVZ VPS.

  • FlorisFloris Member
    edited April 2014

    @nickyzai said:
    400MB/s to 1GB/s, I think. Got 1~1.3GB/s on my OAH 512MB OpenVZ VPS.

    With the above command I get from 400MB/s to 1GB/s on SATAIII RAID1, so either those SATA disks are good, or the SSDs on your servers are being oversold. (Yes, this is an actively used node)

  • Intel 520 SSD 180GB in Dell 1121 http://namhuy.net/1541/intel-520-ssd-180gb-dell-1121-ubuntu.html

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 4.69718 s, 229 MB/s
    
  • Hmm... what's the point of using dd to test sequential IO?

  • @Floris said:
    With the above command I get from 400MB/s to 1GB/s on SATAIII RAID1, so either those SATA disks are good, or the SSDs on your servers are being oversold. (Yes, this is an actively used node)

    You're not getting that to the disk. Cache maybe, but certainly not disk.

  • ehabehab Member

    @eLohkCalb said:
    Hmm... what's the point of using dd to test sequential IO?

    What test do you suggest?

  • edited April 2014

    eLohkCalb said: Hmm... what's the point of using dd to test sequential IO?

    There isn't one. dd tests for performance are almost useless. As you say, dd tests sequential IO, which is a poor measure (especially for SSDs) because:

    a) Real-life load is random, not sequential. It's like testing your car on a rolling (fake) road. I don't care if my car can get 10000MPG on a rolling road if it's only going to get 5MPG under real-life load. It's real life that matters.

    b) There is no real way to get a true sequential result (even with fdatasync), as there is a lot of caching going on. fdatasync forces the data to be flushed through the filesystem cache, but there can be more caching layers underneath (RAID controller, drive cache) affecting your result.

    c) SSDs really, really shine in random (IOPS) performance. A standard HDD will give you only about 75-100 IOPS; an SSD can give you 10,000 IOPS! An SSD's IOPS rate doesn't really drop from sequential to random, as it's all electrical. An HDD's rate does vary, as seek times are much longer for random IO (the head has to physically move!).

    and lastly, one which is specific to hosting

    d) Every provider sets up their infrastructure differently, so it's hard to compare anything with dd. For example, we like to split our customers over small RAID10 arrays, meaning that if something goes wrong, the smallest number of customers is affected. Some providers have one huge, bejassive array with tens of disks that will give excellent dd results. But if that one array fails, goodbye everyone!

    The only thing a really, really low dd result will tell you is that a host's setup is very overloaded and about to fall over in a heap. But you don't need dd to tell you that, as your VPS will be running like crap anyway!

    I hope this helps you @ehab :)

    Thanks

    Jonny
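
    To make point (b) concrete, here is a minimal sketch of the cache effect: the same dd write run twice, once without and once with fdatasync (the file name and sizes here are arbitrary examples, not anything from the thread):

    ```shell
    # Same write, with and without fdatasync, to show how much the
    # page cache can inflate a dd result. Sizes and paths are arbitrary.
    tmp=$(mktemp)

    # Cached write: dd returns once the data is in the page cache,
    # so the reported speed can be absurdly high.
    dd if=/dev/zero of="$tmp" bs=1M count=64 2>&1 | tail -n 1

    # Synced write: conv=fdatasync flushes to the device before dd
    # reports a speed, so the figure is closer to real disk speed.
    dd if=/dev/zero of="$tmp" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

    rm -f "$tmp"
    ```

    On most boxes the first figure will be several times the second, which is exactly the gap between "cache" and "disk" being discussed here.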

  • FlorisFloris Member
    edited April 2014

    @Virtovo said:

    Might be some glitch or whatever, but:

    With the OP's dd test command:

    xxx@play:~$ dd if=/dev/zero of=tempfile bs=1M count=200 conv=fdatasync,notrunc
    200+0 records in
    200+0 records out
    209715200 bytes (210 MB) copied, 0.213542 s, 982 MB/s



    With normal DD usually done here on LE*:

    xxx@play:~$ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 5.42464 s, 198 MB/s

    The above is still very good for RAID1, I believe.

  • SaikuSaiku Member, Host Rep

    The first one is 210MB copied, the second one is 1.1GB copied.

  • @Saiku said:
    The first one is 210MB copied, the second one is 1.1GB copied.
    @Floris said:
    The above is still very good for RAID1, I believe.

    Missed the difference between the lines. I assumed it was the typical LET disk thrasher.

    198 MB/s is good for RAID1.
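
    As a sanity check on the two figures above (bytes and seconds come straight from the quoted dd output; the arithmetic is just bytes divided by seconds, using dd's decimal megabytes):

    ```shell
    # dd reports speed as bytes copied / elapsed seconds (decimal MB).
    # The 210 MB run fits in the page cache, hence 982 MB/s; the 1.1 GB
    # run is big enough to force real writes, hence the lower 198 MB/s.
    awk 'BEGIN { printf "small run: %.0f MB/s\n", 209715200 / 0.213542 / 1e6 }'
    awk 'BEGIN { printf "large run: %.0f MB/s\n", 1073741824 / 5.42464 / 1e6 }'
    ```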

  • ehabehab Member
    edited April 2014

    @Jonny, very informative. Thank you for your time and help.

    I also wish to thank the others who have commented so far.

  • Can't RAID fail as well?

  • emreemre Member, LIR

    eddynetweb said: Can't RAID fail as well?

    Everything will fail, sooner or later.

    That's why offsite backup is important.

  • @ehab said:
    What test do you suggest?

    Depends on what exactly you wish to measure.
    As was said above, random I/O is the strong side of SSDs.

    So, talking of tests, use iozone, fio, ioping (force cache off when testing). Even better, test real-life cases (i.e. run real-life large DB with complex concurrent queries).

    Also, see Phoronix suite, there are many random I/O tests there.
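
    For instance, a random-read run with fio might look like the sketch below (file name and sizes are arbitrary examples; --direct=1 bypasses the page cache, per the "force cache off" advice above):

    ```shell
    # Random 4K reads with fio, the access pattern where SSDs shine.
    # --direct=1 uses O_DIRECT so the page cache doesn't skew results.
    if command -v fio >/dev/null 2>&1; then
        fio --name=randread --filename=fio-testfile --size=64M \
            --rw=randread --bs=4k --ioengine=psync --direct=1 \
            --numjobs=1 --group_reporting
        rm -f fio-testfile
    else
        echo "fio is not installed; install it with your package manager"
    fi
    ```

    The IOPS figure in fio's summary line is the number to compare between hosts, not the MB/s that dd reports.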

    Thanked by: Maounique, ehab
  • ehabehab Member

    @Master_Bo, understood. I've copied your text and will try it out at some stage. Thanks.

  • MaouniqueMaounique Host Rep, Veteran

    Missed you lately.

    Thanked by: ErawanArifNugroho
  • @Maounique said:
    Missed you lately.

    So did I. At times reality's requests are hard to ignore.

    Glad to see you, as well.

    Thanked by: Maounique