
DD Test Results

Comments

  • AnthonySmithAnthonySmith Member, Patron Provider

    @KuJoe dd is a valid tool though so replacing it with something that no longer functions may not be the best idea?

    Perhaps you have considered that though :)

  • serverbearserverbear Member
    edited November 2012

    @KuJoe said: I'll be working on my dd replacement tonight for our OpenVZ nodes. I'll post the code on Google Code or something when I'm done.

    Sounds awesome, we've considered making some sort of daemon that runs at the node level and pushes small, bite-sized tests back to us over time (plus monitors uptime/RAM and all that other stuff too).

    Also Bonnie++ isn't bad.
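
    If anyone wants to try Bonnie++, a typical run looks something like this (the directory, sizes and user are just examples; the -s file size should be at least twice the RAM so the page cache doesn't skew the numbers):

    # run as root; writes its test files to /tmp as the 'nobody' user
    bonnie++ -d /tmp -s 4096 -r 2048 -u nobody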

  • KuJoeKuJoe Member, Host Rep

    @AnthonySmith said: Perhaps you have considered that though :)

    That I have. ;)

  • Wait... our 525MB/s DD is now bottom 11% on @serverbear?

    Wow.

  • AnthonySmithAnthonySmith Member, Patron Provider
    edited November 2012

    @SimpleNode indeed, but perhaps you should be more concerned with relevant counters and overall performance rather than juicing your sequential writes :)

    And where is my prize?

    1073741824 bytes (1.1 GB) copied, 0.585915 s, 1.8 GB/s
    
  • @SimpleNode said: Wait... our 525MB/s DD is now bottom 11% on @serverbear?

    Wow.

    Strange, maybe a bug. Will look into it.

  • @AnthonySmith I know, I'm just surprised. I expected 50% to be about 300MB/s, as that's what I see most regularly.

    Anyways, I'm happy that our BearScore is over 80 :D

  • SimpleNodeSimpleNode Member
    edited November 2012

    @ServerBear 180MB/s is bottom 1% too :P

  • serverbearserverbear Member
    edited November 2012

    @SimpleNode said: 180MB/s is bottom 1% too :P

    Yep yep, seems fucked. Hence BearScore being "experimental" :P

    Should be fixed in a few hours.

  • [root@fluxcapacitor ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 0.09090 seconds, 1.21 GW/s
    [root@fluxcapacitor ~]# 
    
  • 1073741824 bytes (1.1 GB) copied, 113.048 s, 9.5 MB/s

    Too damn slow. (free PaulVPS)

  • @connercg You need more capacitance.

  • This is getting ridiculous, do you guys really want a 22GB/s output from my rams? ;V

  • @Nick_A said: We had someone run 4 (yes, 4) simultaneous ServerBear's on one of our SSD-Cached KVM VPSs a while back.

    Yikes! Why would someone even do this? While a benchmarking service like @serverbear may give you some idea when researching a potential future host, the results need to be taken with a grain of salt for exactly the reason @Nick_A describes: when multiple people run the benchmarks (e.g. after a sales round), it ruins the experience for everyone. This dd test takes about 2-5 seconds to complete (a few seconds = not a big deal IMO, it doesn't stress the node too much), but the ServerBear benchmark took hours to complete (at least on another host's machine where I tried it), which is exactly why I didn't run it on my other VPSs.

    Sequential write tests are usually a poor reflection of actual VPS performance for typical use cases anyway, agreed.
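
    If you want something closer to a typical workload, a short random-read run with fio says a lot more than one big sequential write. A rough sketch, assuming fio is installed (the file name, size and runtime are arbitrary):

    fio --name=randread --filename=fio-test --size=256M \
        --rw=randread --bs=4k --direct=1 --ioengine=libaio \
        --runtime=30 --time_based --group_reporting
    rm -f fio-test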

  • Yes run dd on RAM tmpfs :)
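
    Something like this should do it (you need root, and the mount point and sizes are just an example); on tmpfs the result is basically your memory bandwidth, which is the whole joke:

    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk
    dd if=/dev/zero of=/mnt/ramdisk/test bs=64k count=16k conv=fdatasync
    rm /mnt/ramdisk/test && umount /mnt/ramdisk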

  • DamianDamian Member
    edited November 2012

    @KuJoe said: I'll be working on my dd replacement tonight for our OpenVZ nodes. I'll post the code on Google Code or something when I'm done.

    for ctid in /vz/private/*; do
        rm -rf "$ctid"/usr/bin/dd
    done

  • MaouniqueMaounique Host Rep, Veteran

    Erm, yeah, that will "replace" it with nothing :D

  • Perhaps mass ioping results across all of those VPSes would be more useful, because, as others have said, dd results are not always accurate.
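
    ioping is about as light as it gets, too. Assuming it's installed, and with the target directory just an example:

    ioping -c 10 .      # latency of a handful of random 4KiB reads
    ioping -RD .        # request-rate test using direct I/O (bypasses the page cache)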

  • @SimpleNode said: @ServerBear 180MB/s is bottom 1% too :P

    BearScore is fixed.

  • I saw our new KVM nodes break 1.8GB/s the other week... @Nick_A has some batshit crazy ideas for disk performance.

    I won the ServerBear competition with the submission from my $1/month Prometeus package that hit 525MB/s (so a 525MB/s-per-$1 score). And with that, I won 24 months of hosting ($24, haha).

  • @KuJoe said: I'll be working on my dd replacement tonight for our OpenVZ nodes.

    And then somebody will do a wget replacement (why actually download stuff from cachefly and other frequently used testing targets if it can be faked?), an ioping replacement, and so on. I think doing this is a fun "weekend project", but I'm not sure it's a great idea to use in production.

    On a related note (question for everyone): what do you think is a good way (friendly to other users using the shared resource) to detect performance degradation (not just the disk) on the node over time?
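
    The simplest thing I can think of is logging a tiny sample on a schedule and watching the trend rather than any single number. Something in this direction, maybe (the script name, paths, sizes and schedule are all just placeholders):

    #!/bin/sh
    # perf-trend.sh - append one small, cheap disk sample per run
    LOG=/var/log/perf-trend.log
    {
        date -u +%FT%TZ
        ioping -c 5 -q /                              # a few I/O latency samples, stats only
        dd if=/dev/zero of=/tmp/trend.tmp bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
        rm -f /tmp/trend.tmp
    } >> "$LOG"

    Run it from cron every hour or so and graph the log; a slow upward creep in latency over weeks tells you more than any one-off dd run.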

  • Cheating the client is not good practice. :p

  • MaouniqueMaounique Host Rep, Veteran
    edited November 2012

    Well, Uncle's DD binary managed to keep customers so happy that they voted Prometeus number one last quarter :P
    So, since it is all for the customers' happiness, how would that be cheating?
    :D

  • jarjar Patron Provider, Top Host, Veteran
    edited November 2012

    @Maounique said: how would that be cheating?

    @jarland said: It is an absolute fact that getting 2mb/s back in a dd test, consistently, is a bad thing. It is an absolute fact that getting 200mb/s back in a dd test, consistently, is not a bad thing. It is not the "catch all" solution, but people are not eager to leave it behind because it is absolutely a valid form of measurement to determine if the system is even remotely usable. It's just hard to use it to determine the true underlying quality, but that doesn't make it useless.

    Returning false results in a test that is well known and very commonly used to determine whether a system has decent disk caching enabled and/or whether the disk I/O is being heavily abused? Like I said, it's not even close to the best way to determine the quality of the available disk(s), but to discount it as useless is to throw the baby out with the bathwater. Common misunderstanding of its usefulness =/= useless.

    But that's just my opinion riding on an assumption as to what you're saying; I'm not even a client, so it doesn't matter to me ;)

    Hiding the real result from a customer = cheating; it's not exactly a question of 'happiness.'

    Regardless, whatever floats your boat.

  • But the modified DD binary was just a joke right?

  • MaouniqueMaounique Host Rep, Veteran
    edited November 2012

    That was a joke by both Uncle and me :P
    Seems like a few ppl fell for it :D

    However, it is only half a joke. If it were real, it would not make much of a difference, since the test is largely irrelevant; even on Atoms used for storage, dd doesn't go much below 80MB/s.

  • jarjar Patron Provider, Top Host, Veteran
    edited November 2012

    @Maounique said: That was a joke by both Uncle and me

    Haha good one, you got me ;)

    Was scratching my head as to how you'd pull it off anyway...

  • Veeeeeeery easy to do, just decompress the template - do your thing, pack it back up, put it in there.

    You don't even need a modified wget binary to fool most people who consider cachefly to be the epitome of performance.

    Just configure your own HTTP cache server: not only will this actually help reduce bandwidth usage (by caching frequently accessed bullshit, such as streams of zeroes as well as larger files), it'll also keep this select breed of 'benchmarkers' happy.
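
    The template part really is that simple, since an OpenVZ OS template is just a tarball of a root filesystem. Roughly (the paths and template name are only an example):

    mkdir /tmp/tpl && cd /tmp/tpl
    tar -xzf /vz/template/cache/centos-6-x86_64.tar.gz
    # ...swap in whatever binary you like here...
    tar -czf /vz/template/cache/centos-6-x86_64.tar.gz .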

  • budingyunbudingyun Member
    edited November 2012

    @Maounique said: Well, Uncle's DD binary managed to keep customers so happy that they voted Prometeus number one last quarter :P

    So, since it is all for the customers' happiness, how would that be cheating?
    :D

    That's because Uncle is awesome, not because of the dd test. :p
