Comments
I had a similar Virpus result, slightly faster:
Cloudstra Cloud VPS 512MB RAM
I suppose you missed the "kB/s" in the OP's screenshot.
I haven't seen worse than what Hostrail offered for several months, especially during their claimed maintenance stage. For some months, logging in, wget, processing, and I/O all took forever at the same time (speeds in kilobytes, if not bytes), so I vote Hostrail.
AlienVPS NY
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 123.531 s, 8.7 MB/s
My Cloudstra cloud VM 512MB is faster on big files but even slower on small files.
dd if=/dev/zero of=testfilex bs=16k count=1k oflag=dsync
1024+0 records in
1024+0 records out
16777216 bytes (17 MB) copied, 20.0514 s, 837 kB/s
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 16.9797 s, 63.2 MB/s
They should tune the storage area network. As a comparison, the following tests are from a cloud server with 512MB RAM, same OS, on VMware ESXi 4.1 + a PowerVault MD3200i SAS (Dell entry-level SAN).
RAID 5 performance:
dd if=/dev/zero of=testfilex bs=16k count=1k oflag=dsync
1024+0 records in
1024+0 records out
16777216 bytes (17 MB) copied, 3.35084 s, 5.0 MB/s
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 8.25361 s, 130 MB/s
RAID 10 performance:
dd if=/dev/zero of=testfilex bs=16k count=1k oflag=dsync
1024+0 records in
1024+0 records out
16777216 bytes (17 MB) copied, 3.21622 s, 5.2 MB/s
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 7.58367 s, 142 MB/s
Just out of curiosity, are these OpenVZ containers using vswap instead of burst RAM?
We had an incident on one of our nodes where vswap was causing extremely poor disk I/O by artificially slowing down the container.
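For anyone curious, a rough way to check from inside the container — assuming a typical OpenVZ setup — is that vswap plans usually expose swap in /proc/meminfo and set physpages/swappages limits, while classic burst-RAM plans rely on privvmpages. Something like:
# Quick check from inside an OpenVZ container (user_beancounters usually needs root)
grep SwapTotal /proc/meminfo
egrep 'physpages|swappages|privvmpages' /proc/user_beancounters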
People want to see big numbers. We get tagged for our network being "too slow" since each connection is limited to 20Mbit, but you can pull multiple 20Mbit connections, and most people find 2.2MB/s adequate for most things. Some people want to see 60MB/s even though they'll never use it.
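As a rough illustration of pulling several capped connections at once (the URL below is just a placeholder for any large test file on a fast mirror), the aggregate throughput should land well above a single 20Mbit stream:
# Hypothetical test URL — substitute any large file on a fast mirror
for i in 1 2 3 4; do wget -O /dev/null http://mirror.example.com/100MB.test & done
wait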
Read again, it says KB/s
123Systems
One of our crappier nodes is down to 45MB/s, which I think is atrociously slow. Then I see these results here... Are you guys still using these?
@DotVPS That's the worst I've ever seen it at; I have no doubt it has been ~9MB/s or less.
Ran it just now on a WEBTROPIA VPS:
268435456 bytes (268 MB) copied, 41.4583 s, 6.5 MB/s
Yours is better; mine is in kB/s...
FlipperHost:
root@flipper:~# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 7.40488 s, 36.3 MB/s
root@flipper:~# dd if=/dev/zero of=testfilex bs=16k count=1k oflag=dsync
1024+0 records in
1024+0 records out
16777216 bytes (17 MB) copied, 322.795 s, 52.0 kB/s
Not maintaining the same sync type between tests may skew the results a bit...
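To be clear about the difference: conv=fdatasync flushes the data only once at the end of the run, while oflag=dsync syncs after every block, so the two commands measure quite different things. A sketch of running the same pair of tests (as used elsewhere in this thread) side by side for an apples-to-apples comparison:
# One flush at the end of the run — mostly sequential throughput
dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
# Sync after every 16k block — small synchronous writes, much slower
dd if=/dev/zero of=testfilex bs=16k count=1k oflag=dsync
rm -f test testfilex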
PrivateLayer... but still OK for Squid, OpenVPN, etc.
root@srv:~# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 78.103 s, 3.4 MB/s
I got a result in kB/s with Virpus too. I tried to use the VPS for 2-3 days, but even SSH was super slow. I cancelled it and wasted $4 =D. I should have listened to others about them. Virpus = wasted $$ lol
With Ubservers:
root@serv2:~# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 33.456 s, 8.0 MB/s
DMBHosting, w00t!
Followed by IonVM
My worst I/O experience has been with 123Systems and Hostrail.
123Systems was between 4 and 6 MB/s; now it's a bit better, close to 16MB/s.
Hostrail was never higher than 4MB/s.
About 7kB/s @ Hostrail
KB? kb? Either way, that's insane lol
IPAP!
That's dead; did you save your dd results?
Surprisingly my 123Systems Minecraft VPS isn't THAT bad:
Apart from the random downtimes, this VPS hasn't been too bad. If BuyVM allows MC on their servers, I might try grabbing one.
What disk I/O is expected from LowEndBox servers in your opinion?
40 MB/s+
I cancelled it, but LeaseWeb gave me 8.0MB/s.
Apparently that was normal.
No, I found it on LEB post.
Some VPS in Sweden:
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 50.5277 s, 5.3 MB/s