New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
@KuJoe dd is a valid tool though, so replacing it with something that no longer functions may not be the best idea?
Perhaps you have considered that though
Sounds awesome, we've considered making some sort of daemon that runs at the node level and pushes small bite-sized tests back to us over time (plus monitors uptime/RAM and all that other stuff too).
Also Bonnie++ isn't bad.
That I have.
Wait... our 525MB/s dd is now bottom 11% on @serverbear?
Wow.
@SimpleNode Indeed, but perhaps you should be more concerned with relevant counters and overall performance rather than juicing your sequential writes.
And where is my prize?
Strange, maybe a bug. Will look into it.
@AnthonySmith I know, I'm just surprised. I expected 50% to be about 300MB/s, as that's what I see most regularly.
Anyways, I'm happy that our BearScore is over 80
@ServerBear 180MB/s is bottom 1% too :P
Yep yep, seems fucked. Hence BearScore being "experimental" :P
Should be fixed in a few hours.
Too damn slow. (free PaulVPS)
@connercg You need more capacitance.
This is getting ridiculous, do you guys really want a 22GB/s output from my RAM? ;V
Yikes! Why would someone even do this? While a benchmarking service like @serverbear may give you some idea when researching a potential future host, the results need to be taken with a grain of salt, exactly because of what you say, @Nick_A: when multiple people run the benchmarks (e.g. after a sales round), it ruins the experience for everyone. While this dd test takes about 2-5 seconds to complete (a few seconds = not a big deal IMO, doesn't stress the node too much), the serverbear benchmark took hours to complete (at least on another host's machine where I tried it), which is exactly why I didn't run it on my other VPSs.
Sequential write tests are usually a poor reflection of actual VPS performance for typical use cases anyway, agreed.
Yes run dd on RAM tmpfs
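The tmpfs trick can be sketched in a couple of lines; on most Linux boxes /dev/shm is already a tmpfs mount, so pointing dd at it reports memory bandwidth rather than disk speed (paths and sizes here are just illustrative):

```shell
# /dev/shm is normally tmpfs on Linux, so these writes never touch the disk;
# the MB/s figure dd prints is effectively a RAM-bandwidth number.
dd if=/dev/zero of=/dev/shm/ddtest bs=64k count=1k conv=fdatasync 2> dd.log
cat dd.log
rm -f /dev/shm/ddtest
```

Compare that against the same command pointed at a real filesystem to see how misleading a tmpfs-backed "disk" benchmark can be.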
for ctid in /vz/private/*; do
    rm -rf "$ctid/usr/bin/dd"   # delete dd inside each container
done
Erm, yeah, that will "replace" it with nothing
Perhaps mass ioping results across all of those VPSs would be more useful, because, as others have said, dd results are not always accurate.
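For anyone curious what ioping actually measures, here is a rough shell stand-in (filenames made up): it times small synchronous writes one request at a time, which is much closer to typical VPS workloads than a big sequential dd:

```shell
# Time five small synchronous writes, roughly what ioping reports per request.
: > latency.log
for i in 1 2 3 4 5; do
    start=$(date +%s%N)                                    # GNU date, nanoseconds
    dd if=/dev/zero of=ioping_test.tmp bs=4k count=1 conv=fsync 2>/dev/null
    end=$(date +%s%N)
    echo "request $i: $(( (end - start) / 1000 )) us" >> latency.log
done
cat latency.log
rm -f ioping_test.tmp
```

The real tool adds direct I/O, random offsets and nicer statistics, but the per-request latency idea is the same.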
BearScore is fixed.
I saw our new KVM nodes break 1.8GB/s the other week.. @Nick_A has some bat shit crazy ideas for disk performance.
I won the serverbear competition for the submission with my $1/month Prometeus package that hit 525MB/s (so a 525MB/s-per-$1 score). And with that, I won 24 months of hosting ($24, haha!)
And then somebody will do a wget replacement (why really download stuff from cachefly and other frequently used testing targets if it can be faked?), an ioping replacement, ... I think this is a fun "weekend project", but I'm not sure it's a great idea to use in production.
On a related note (question for everyone): what do you think is a good way (friendly to other users using the shared resource) to detect performance degradation (not just the disk) on the node over time?
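One low-impact approach, sketched below with made-up names and thresholds: sample a small fixed workload periodically and compare the latest timing against the best one seen, flagging only a large sustained slowdown rather than judging absolute numbers:

```shell
# Hypothetical degradation probe: run a tiny fixed workload repeatedly and
# watch the trend. A big blow-up versus the best sample suggests contention.
: > samples.log
for i in $(seq 1 10); do           # in production: one sample per minute, say
    start=$(date +%s%N)
    dd if=/dev/zero of=/dev/null bs=64k count=256 2>/dev/null   # fixed workload
    end=$(date +%s%N)
    echo $(( (end - start) / 1000 )) >> samples.log
done
baseline=$(sort -n samples.log | head -1)
latest=$(tail -1 samples.log)
if [ "$latest" -gt $((baseline * 3)) ]; then
    echo "possible degradation: latest=${latest}us baseline=${baseline}us"
else
    echo "ok: latest=${latest}us baseline=${baseline}us"
fi
```

Because each probe is tiny and infrequent, it stays friendly to other users on the shared node; the same pattern works for a small fsync'd disk write or a network round-trip.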
Cheating clients is not good practice.
Well, Uncle's dd binary managed to keep customers happy, so much so that they voted Prometeus number one last quarter :P
So, since it is all for the customers' happiness, how would that be cheating?
Returning false results in a test that is well known and very commonly used to determine whether a system has decent disk caching enabled and/or whether the disk I/O is being heavily abused? Like I said, it's not even close to the best way to determine the quality of the available disk(s), but to discount it as useless is to throw the baby out with the bathwater. A common misunderstanding of its usefulness =/= useless.
But that's just my opinion riding on an assumption as to what you're saying, I'm not even a client so it doesn't matter to me
Hiding the real result from a customer = cheating; it's not exactly a question of 'happiness.'
Regardless, whatever floats your boat.
But the modified DD binary was just a joke right?
That was a joke, by both Uncle and me :P
Seems like a few ppl fell for it
However, it is only half a joke. If it were real, it wouldn't make much of a difference, as the test is largely irrelevant; even on Atoms used for storage, dd doesn't go much below 80MB/s.
Haha good one, you got me
Was scratching my head as to how you'd pull it off anyway...
Veeeeeeery easy to do: just decompress the template, do your thing, pack it back up, and put it back in there.
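Roughly like this, with throwaway paths standing in for a real OpenVZ template under /vz/template/cache/:

```shell
# Build a tiny stand-in "template" so the trick can be shown end to end.
mkdir -p tpl/usr/bin && echo real > tpl/usr/bin/dd
tar -czf template.tar.gz -C tpl .                  # the template as shipped

mkdir -p work && tar -xzf template.tar.gz -C work  # 1. decompress the template
echo fake > work/usr/bin/dd                        # 2. do your thing
tar -czf template.tar.gz -C work .                 # 3. pack it back up
```

Every container created from the repacked template would then ship the swapped binary from day one.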
You don't even need a modified wget binary to fool most people who consider cachefly to be the epitome of performance.
Just configure your own HTTP cache server; not only will this actually help reduce bandwidth usage (by caching frequently accessed bullshit, such as streams of zeroes as well as larger files), it'll also keep this select breed of 'benchmarkers' happy.
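A minimal sketch of such a cache in nginx (hostnames, zone name and paths are placeholders; only the standard proxy_cache directives are used):

```nginx
# Cache upstream test files locally so repeated wget "benchmarks" are served
# from this box instead of being re-downloaded from the origin every time.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=speedtest:10m max_size=10g;

server {
    listen 80;
    server_name cache.example.com;

    location / {
        proxy_pass http://upstream.example.com;
        proxy_cache speedtest;
        proxy_cache_valid 200 24h;   # keep popular test files for a day
    }
}
```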
So, since it is all for the customers' happiness, how would that be cheating?
That's because Uncle is awesome, not because of the dd test.