DD Test Results - Page 2

24 Comments

  • AnthonySmith Member, Patron Provider

    @zon so why not just test the actual performance counter you need, which is read access speed?
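A read-oriented dd check along those lines is easy to sketch. This is illustrative only: `testfile` is a scratch name, and dropping the page cache needs root (without it the read may just come from RAM):

```shell
# Write a test file once, then time a sequential READ of it instead of a write.
dd if=/dev/zero of=testfile bs=64k count=16k conv=fdatasync 2>/dev/null
sync
# Flush the page cache so the read actually hits disk (root only; skipped otherwise).
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || true
dd if=testfile of=/dev/null bs=64k
rm -f testfile
```

Note this still only measures sequential reads; random-read latency is a different question again.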

  • @AnthonySmith actually I'm not at all worried about this dd result. Today, while working on some shell scripts, I found an easy way to test dd on all my VPSes, so I thought it would be nice to share the output with this community. That's all. I mainly monitor my VPSes through my own programs, like this:

    [image]

  • ChicagoVPS queries are showing zero because I recently changed the MySQL password.

  • jar Patron Provider, Top Host, Veteran
    edited November 2012

    @AnthonySmith

    if that is your opinion then why would you show results that are only 50% of your competitors?

    Not when my competitors are newer startups at the same price point in (mostly) quality datacenters with standard HDDs. I'm not afraid to say I'm high in the bracket of newbie VPS providers that make a profit at LEB pricing and actually have solid RAID controllers. None of that RAID1 junk here. If customers want it, give it to them. It's pointless to argue with a group of people who historically don't know why they want what they want, but if you don't humor them, someone else will. Explain all you want, but the high road here isn't paved in dollars. You won't be the first or last to complain about it; the expectations remain.

    That extra 50% you're referring to must be SSD, not my market, therefore not my competitor. Storage space > 1GB/s DD test. Even those customers seem to mostly agree in my experience.

  • KuJoe Member, Host Rep
    edited November 2012

    DD speeds will never impress me nor do I use them in my purchasing decision since they mean nothing to me in terms of what to expect for my real world usage. That being said, I always run a DD test when I buy a VPS but the information is forgotten once it's left the screen.

    As a provider, as long as dd speeds are above 60MB/s we don't worry about them. Considering we spend a fraction of what our competitors spend on hardware, we're happy with 200-300MB/s write speeds.

  • jar Patron Provider, Top Host, Veteran

    Considering we spend a fraction of what our competitors spend on hardware, we're happy with 150-300MB/s write speeds.

    You use SAS 15k drives, don't you?

  • @zon said: i found an easy way to test dd in all my vpses.. so i thought it would be nice sharing the output with this community..

    nice... if you'd also like to share what performance monitoring scripts (made by you or by others) you found useful, please do let us know! ;)

  • KuJoe Member, Host Rep

    @jarland said: You use SAS 15k drives don't you?

    On some of our servers, yes. We have a wide range of drives since we buy our servers with drives already installed, so they range from 7200RPM SATA drives to 15K SAS drives. Most of our servers have 6 drives; I think we have 2 nodes with only 4 drives, but we use the faster SAS drives in those.

  • jar Patron Provider, Top Host, Veteran
    edited November 2012

    @KuJoe That's why I'm sticking with SATA drives for now. It's a different angle that lets me be different enough that I don't really compete with as many here, since many of the startups are going the SAS or SSD route. I go for storage while everyone else gets an extra ounce of performance and capacity. Everyone wins; I step on fewer toes and get business.

  • zon Member
    edited November 2012

    @jan
    I don't use any third-party monitoring tools; I write my own in PHP, like the one displayed above.

    What I'm really worried about is support. Sometimes I get immediate replies, but sometimes it goes beyond 5-6 hours, especially during off hours in the US or Europe. I'm in a different time zone, so that's my prime time, and if there's a major issue it can go up to 2-3 days. My second concern is backups.

    Due to advancements in hardware, most servers now give good performance, so these are the 2 issues I'm more concerned about.

    So for every site I keep 3 rsync copies: 1 on another VPS, 1 in FTP backup, 1 on a local server, plus a few snapshots in AWS, and daily DB backups. If I don't get a reply within 1-2 hours, I'll change my DNS.
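A minimal sketch of that kind of multi-copy setup. Hostnames and paths below are placeholders, not zon's actual servers:

```shell
# Three copies plus a daily database dump, roughly as described above.
SITE=/var/www/mysite
rsync -az --delete "$SITE/" backup-vps:/backups/mysite/   # copy 1: another VPS
rsync -az --delete "$SITE/" /mnt/backup/mysite/           # copy 2: local server
mysqldump --all-databases | gzip > "/mnt/backup/db-$(date +%F).sql.gz"
# copy 3 (FTP upload) and the AWS snapshots would be separate jobs.
```

The `--delete` flag keeps each mirror exact, which is why the separate dated database dumps matter: rsync alone won't protect you from replicating a deletion.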

  • @zon said: for every site i keep 3 rsync copies - 1 in another vps, 1 in ftp backup, 1 in local server and few snapshots in aws.. and daily db backups.. so if i dont get a reply in 1-2 hours i will change my dns....

    i like this setup :) btw, using Dropbox could be another solution for you; data backup works in near-real-time and all file revisions are stored (for 30 days on a free plan)... Dropbox works fine even without a GUI and has been quite reliable (with the exception of ultra-low-memory VMs)

    back on the topic: RAMnode 128 performance varied between 54 and 804 (!) MB/s, i tried about 7 times with small pauses in-between... 800+ MB/s is not bad for this ultra-cheap VPS :)

    sorry @Nick_A for running this test several times, i was just curious about the variance in the results... other than this, the VPS is pretty much just sitting there, doing next to nothing...

  • @jan... thanks... I tried Dropbox once long back, then left it; I think I was too lazy at the time to complete all the steps. :) Another thing is that my main sites' data averages around 7GB, so it won't fit in the free plan... anyway, I'll try it again with some small sites first...

  • dd write test:
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 1.87778 s, 572 MB/s
    

    From a completely full node at RamNode (@Nick_A <3)

  • QHoster.com fresh UK node:

    [root@ddtest ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync;
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 4.41738 s, 243 MB/s
    [root@ddtest ~]#

  • Lee Veteran
    edited November 2012

    I must admit I do run dd on my VPS, but only because I recently had problems with my futurehosting box in the UK. It ran like a bag of shit, unresponsive and so on, and a dd test returned 4MB/s. So I do run them every so often just to make sure things look OK.

    Other than that, as long it runs smoothly and visitors/members are not having issues I don't care for it.

  • XSX Member, Host Rep
    edited November 2012

    This is a Pzea.com VPS test:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 6.57406 s, 163 MB/s

  • Here's a tip for all the "testers"....

    Don't do dd tests. All it does is flog the HDD for no reason, f*cking things up for your neighbours and yourself.

    Read what experienced hosters like @AnthonySmith and @KuJoe say above. Hopefully @prometeus really has developed a fake dd binary and shares it with all vps providers!

    dd == bad

  • @sleddog said: Don't do dd tests. All it does is flog the HDD for no reason, f*cking things up for your neighbours and yourself.

    Indeed. I haven't seen any of our clients continuously writing gigabyte files.

    ioping is much more useful, and gives good information on the responsiveness of a VPS.
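For reference, a basic ioping run looks something like this (`-c` is the request count and `-s` the request size, the same flags used later in this thread):

```shell
# Measure I/O request latency in the current directory, 10 samples.
ioping -c 10 .
# Same test with 1 MiB requests, closer to a throughput-style probe.
ioping -c 10 -s 1M .
```

Unlike dd, this reports per-request latency, which tracks how "snappy" a VPS feels far better than a sequential MB/s figure.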

  • letbox Member, Patron Provider
    edited November 2012

    Busy VPS on a busy node with SATA HDs.

    root@ns0 [/]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 5.39338 seconds, 199 MB/s
    root@ns0 [/]#

  • Patrick Member
    edited November 2012

    Nearly full Dallas node: 4x1TB RE3 RAID10

    [root@tx1 ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync;

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 5.57461 s, 193 MB/s

    I also don't like the hype about dd tests; most people will barely even use 30MB/s.

  • KuJoe Member, Host Rep

    This thread actually gave me a great idea for how to handle the DD testers. :D

  • Lee Veteran

    DD tests are not as bad as constantly getting hit by server bear tests ;)

  • Nick_A Member, Top Host, Host Rep

    @jan said: RAMnode 128 performance varied between 54 and 804 (!) MB/s, i tried about 7 times with small pauses in-between... 800+ MB/s is not bad for this ultra-cheap VPS :)

    I'll say...

    @TheHackBox - completely full SSD-Cached node mind you.

    @W1V_Lee said: DD tests are not as bad as constantly getting hit by server bear tests ;)

    We had someone run 4 (yes, 4) simultaneous ServerBear's on one of our SSD-Cached KVM VPSs a while back. I could not figure out why four tests in a row on ServerBear had mediocre results until I had @ServerBear confirm that they came from the same IP at the same time. I was disappoint. I'm glad people buy our VPSs and like testing out their awesome performance, but it can get ridiculous (like this whole thread is).

  • Maounique Host Rep, Veteran

    @Nick_A said: but it can get ridiculous

    Yes, some people run them on cron.
    And yeah, @serverbear might try to shorten them up a bit. The longer they take, the greater the chance that more people run them in parallel, and that will be the end of it...
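For illustration, "running them on cron" means a crontab entry along these lines, which hammers the disk on a schedule. This is the anti-pattern being complained about, not a recommendation:

```
# crontab entry: a 1GB dd benchmark every hour, on the hour. Don't do this.
0 * * * * dd if=/dev/zero of=/root/test bs=64k count=16k conv=fdatasync
```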

  • serverbear Member
    edited November 2012

    @Nick_A said: We had someone run 4 (yes, 4) simultaneous ServerBear's on one of our SSD-Cached KVM VPSs a while back.

    FYI they weren't simultaneous (there's at least 3 hours time difference between each). I told you they were the same IP but I don't post results if I see similar timestamps on multiple uploads.

    Benchmark Run: Tue Nov 06 2012 18:49:04 - 19:17:10
    Benchmark Run: Wed Nov 07 2012 00:01:49 - 00:29:49
    Benchmark Run: Wed Nov 07 2012 04:00:49 - 04:29:12
    Benchmark Run: Wed Nov 07 2012 07:45:05 - 08:13:10

  • jar Patron Provider, Top Host, Veteran
    edited November 2012

    @Damian said: ioping is much more useful

    While this is true, it's slightly harder for the average user to read. A user likes a raw "per second" number from a measurement they easily identify with; it sits nicely next to a wget bandwidth test. The dd test is the standard, no doubt, and changing that is proving to be even more difficult than anyone would have thought.

    However, the dd test isn't to be ignored. It is an absolute fact that consistently getting 2MB/s back from a dd test is a bad thing, and that consistently getting 200MB/s back is not. It's not a catch-all solution, but people aren't eager to leave it behind, because it is absolutely a valid way to determine whether a system is even remotely usable. It's just hard to use it to judge the true underlying quality; that doesn't make it useless.
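The wget bandwidth test mentioned above is usually just a throwaway download to /dev/null; wget prints an average rate at the end. The URL here is a commonly used public test file of the era, shown only as an example:

```shell
# Rough download-speed check; the file is discarded, only the rate matters.
wget -O /dev/null http://cachefly.cachefly.net/100mb.test
```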

  • @AnthonySmith said: 4) From that point on everything I do is based on having good read speeds which I never tested and I will never ever run anything ever that requires fast sequential write speeds from step 1

    Can't agree more!

    According to MySQL, I'm running 98% reads / 2% writes, so sequential writes aren't exactly helpful in this use case.
    That would apply to most people running a MySQL-based CMS.

    If you're running a VPN, apart from the occasional logs if turned on, you're not going to use the disk at all - You'll care more about the network and CPU performance.
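One way to check a read/write split like that is MySQL's own statement counters. A sketch, assuming a local server and credentials already configured (e.g. in ~/.my.cnf):

```shell
# Compare SELECT volume against write-statement volume since server start.
mysql -e "SHOW GLOBAL STATUS WHERE Variable_name IN
          ('Com_select','Com_insert','Com_update','Com_delete');"
```

Com_select divided by the total of the four counters gives roughly the read percentage quoted above.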

  • AnthonySmith Member, Patron Provider
    edited November 2012

    @ElliotJ and the unfortunate thing about this whole trend is that most hosts that have the option on their RAID cards (such as Smart Arrays) will be using a 25%/75% read/write split on the cache to keep the public dd posters happy, so it is hurting your performance.

    ioping is OK but still does not give an accurate picture. With Xen/KVM, for example, you will never beat ioping on OpenVZ, because on Xen/KVM it's a virtual block device behind a QEMU layer while on OpenVZ you're just accessing the local native storage; hence seeing such a massive difference on Xen and KVM between ioping -c 10 -s 1M /var/www and ioping -c 10 -s 1M /dev/sda

    I can only deduce that ioping was made with dedicated hardware or OpenVZ in mind, i.e. native disk access, not virtualization layers.

    My preferred testing method is: see how it feels, run a benchmark from an external source like LoadRunner (if web-based), and run something like hdparm once or twice to see how the reads are.

    Pure MB/s means nothing; you want fast seek rates and IOPS that aren't flooded by other users running daily benchmarks.
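The hdparm check mentioned above is typically run like this. It needs root and direct access to the block device, so it often won't work inside an OpenVZ container, and the device name /dev/sda is just the usual example:

```shell
# -T: cached reads (memory/bus speed), -t: buffered sequential reads from disk.
hdparm -tT /dev/sda
```

Comparing the two numbers separates what the node's RAM cache is doing from what the spindles can actually deliver.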

  • KuJoe Member, Host Rep

    I'll be working on my dd replacement tonight for our OpenVZ nodes. I'll post the code on Google Code or something when I'm done.

  • Here are some @serverbear benchmarks for our new KVM plans.

    KVM Starter http://serverbear.com/benchmark/2012/11/25/gusgSYu6FfVE5GZz

    KVM Mini http://serverbear.com/benchmark/2012/11/25/koO9UcfFlfTohUMh

    We're getting around 500MB/s and 5000 IOPS. :D
