Any hosts out there that have good I/O?

littleguy Member
edited April 2012 in General

I've changed hosts three times now. It always starts out nicely with good I/O performance, which then degrades further and further the longer I have the server.

Now I've been with Hetzner for a couple of months on their vServer VQ plan. They initially had >200MB/s I/O (it must have been an empty node), but it has been steadily declining, and this morning I was seeing a load of ~2.0 while idle and I/O of <4MB/s.

Can anyone recommend a European host that guarantees some kind of I/O performance? I don't care if it's more expensive, but I run a couple of web sites that need to be fast and responsive.

Approximate specs I need: XEN, 512MB, 15-20 GB disk, CentOS 6.


Comments

  • What did Hetzner say when you opened a support ticket with them?

  • @Damian said: What did Hetzner say when you opened a support ticket with them?

    Still waiting on response. I opened a ticket a couple of weeks ago when I was having I/O performance of about 20MB/s and they said it was acceptable and closed the ticket.

  • @littleguy said: I was having I/O performance of about 20MB/s and they said it was acceptable

    Wow, didn't expect that from them!

    I think most of the hosts that actively participate here would be more concerned about keeping up their I/O response, so any of them offering Xen should be acceptable.

  • onepound Member
    edited April 2012

    @Damian said: Wow, didn't expect that from them!

    I think most of the hosts that actively participate here would be more concerned about keeping up their I/O response, so any of them offering Xen should be acceptable.

    Agreed, most from here 'should' be fine, or they would be torn down in flames by the LET community!!

    Speaking for myself, I will cap signups to a node when it drops to 70MB/s.

  • littleguy Member
    edited April 2012

    @Damian said: I think most of the hosts that actively participate here would be more concerned about keeping up their I/O response,

    I have not had that experience. I was with both QuickWeb and ThrustVPS (on OpenVZ plans) and they both degraded this way. I once benched 327 KB/s on my ThrustVPS server. (See http://www.lowendtalk.com/discussion/1358/warning-thrustvps-openvz-plans#Item_9 )

  • @DotVPS said: How often do you check if it's gone down?

    I have a small VPS on each node for small monitoring tasks etc. Nagios will alert me to anything 'strange'.

    Manual disk check twice a week at present for speed and consistency.

  • fan Veteran

    root@host:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync && rm test

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.60236 s, 298 MB/s

    From Knownhost's unmanaged brand RocketVPS, the fastest I've got.

  • @littleguy said: I have not had that experience. I was with both QuickWeb and ThrustVPS

    I don't think the owners/representatives of QuickWeb or ThrustVPS are active participants here.

  • Jacob Member

    @onepound That's a bit over the top, but sure. ;)
    @littleguy That is unfortunate. Try UK2.net: the network reaches around 50MB/s with decent bandwidth mixes, and the hardware used is acceptable. UK2's support is terrible, though.

  • @Jacob said: Try UK2.net the network reaches around 50MB/s

    Disk, not network.

  • Sounds like your neighbors are killing the server. I believe anything over 60MB/s is acceptable, and what you experienced is either an oversold node or a bad neighbor.

    Did you tell the host about your issue?

  • Jacob Member

    @yomero I was simply pointing out all the good things. :)

  • Are you sure you guys are using the right benchmark to test IOPS?...

  • [root@testden tmp]# dd if=/dev/zero of=test10.dat bs=4k count=16k conv=fdatasync && rm test10.dat
    16384+0 records in
    16384+0 records out
    67108864 bytes (67 MB) copied, 0.222871 s, 301 MB/s

    [root@testorl tmp]# dd if=/dev/zero of=test10.dat bs=4k count=16k conv=fdatasync && rm test10.dat
    16384+0 records in
    16384+0 records out
    67108864 bytes (67 MB) copied, 0.293473 s, 229 MB/s

    @kakashi Disk IO is really the only limiting factor nowadays unless you're using some serious SSD arrays. IO for the rest of the machine is generally not a bottleneck unless everyone is maxing out the CPU.

    Poor IO can also happen on boxes that begin swapping; in fact, my first assumption with an oversold box would be disk swapping (a quick way to check is sketched below).

    For the OP: you're writing 1MB/s or a little more than that due to overhead, but you also have hundreds of ongoing reads for the outbound side. I'm willing to bet you were around 2MB/s or so depending on your outbound usage. If a blank VPS node can achieve 300MB/s and you put 100 disk-hungry VPSes on it, each user should be getting around 3MB/s if it was shared equally.

    For pushing the IO on the disk side, you really need to migrate into the SAN world, but that gets costly. Of course, I would be thrilled if I was getting enough business that I needed a SAN :)
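
    A minimal way to spot swapping or heavy I/O wait from inside the guest, assuming a standard Linux install with vmstat available (the interval and sample count here are arbitrary):

    # print memory/swap/IO stats every second, five samples;
    # non-zero si/so columns mean the box is swapping, a high wa column means it is waiting on disk
    vmstat 1 5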

  • Roph Member
    edited April 2012

    I currently get about 1GB (byte)/sec on my Quickweb box. I feel a little bad about this, like I'm not taking advantage of it, because all I use it for is Piwik stats and storing some database backups :P

    1073741824 bytes (1.1 GB) copied, 1.19009 s, 902 MB/s
    1073741824 bytes (1.1 GB) copied, 1.04502 s, 1.0 GB/s
    1073741824 bytes (1.1 GB) copied, 1.09241 s, 983 MB/s

    It's the Budget OVZ III from this page, in the Phoenix location. If you feel like signing up with them, I'm on Node 7; maybe if you asked them they'd put you on that node. Don't worry about me being your neighbour, I'm light on I/O there. :)

    I've seen Joel from QuickWeb post here before, maybe if you put in a word with him he could do something for you since they don't have any official offers going right now.

  • Kakashi Member
    edited April 2012

    I was referring to the actual test. Wouldn't code.google.com/p/ioping/ be a benchmarking utility?
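
    For reference, a minimal ioping run might look like the following; the target directory, request count, and the -R/-D flags (seek-rate test with direct I/O) are just illustrative:

    # measure per-request I/O latency with 10 requests against the current directory
    ioping -c 10 .
    # rough seek-rate test using direct I/O, bypassing the page cache
    ioping -R -D .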

  • littleguy Member
    edited April 2012

    I got moved to another host node.

    [root@s~]# dd if=/dev/zero of=test bs=64k count=2k conv=fdatasync
    2048+0 records in
    2048+0 records out
    134217728 bytes (134 MB) copied, 10.1493 s, 13.2 MB/s

    Still awful, but at least it's not 2MB/s. I am very disappointed in Hetzner though; I was expecting them to keep at least reasonable performance, but it's clear they are more interested in catering to people who run seedboxes and abuse the disks than to people who just want to run a couple of web sites with minimal disk I/O.

    @Roph said: I currently get about 1GB (byte)/sec on my Quickweb box.

    That's not possible. :)

  • Average disk I/O values at some providers (pfs):
    http://www.hostingwizard.net/?t=9
    The first European host is VPSDeploy, as they use SSD drives.
    The second one is FitVPS. A really great VPS.
    The third one is Glesys. Expensive but reliable.
    Hope it helps.

  • littleguy Member
    edited April 2012

    http://www.hostingwizard.net/?t=9

    Great resource. Very few European XEN providers though. :(

    Not liking the Bulgaria location.

    VPSDeploy looks good, but they only provide OpenVZ and are a little low on HDD space.

    Glesys are great, but as you said they are very expensive.

  • netomx Moderator, Veteran

    Chicago VPS:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync && rm test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 36.9581 seconds, 29.1 MB/s

  • @littleguy
    1) I tend to prefer OpenVZ because I get a better ROI from it.
    2) I took this VPS in Bulgaria as a test last year and I found it very good so I kept it. Best value for money in Europe. Very stable.
    3) SSD drives only so disk space is quite low.
    4) I use Glesys mainly to host my critical websites. You can adjust all the characteristics dynamically.

  • Thanks for elaborating hostingwizard_net!

    My main beef with OpenVZ is that Apache and MySQL tend to cross over into the burst memory and eventually get killed by the OpenVZ task scheduler (a quick way to check for this is sketched below).

    The peering to Bulgaria from my home country (Sweden) seems very poor. I am getting >100ms to FitVPS servers, as opposed to <60ms to Hetzner's location in Germany.
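
    Aside: inside an OpenVZ container, the failcnt column of /proc/user_beancounters shows which resource limits (such as privvmpages) have been hit. This is only a quick check and assumes you have root inside the container:

    # last column (failcnt) > 0 means that resource limit has been exceeded at some point
    cat /proc/user_beancounters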

  • @Roph said: 1073741824 bytes (1.1 GB) copied, 1.19009 s, 902 MB/s

    1073741824 bytes (1.1 GB) copied, 1.04502 s, 1.0 GB/s
    1073741824 bytes (1.1 GB) copied, 1.09241 s, 983 MB/s

    I definitely don't believe that.

  • @Roph said: 1073741824 bytes (1.1 GB) copied, 1.19009 s, 902 MB/s

    1073741824 bytes (1.1 GB) copied, 1.04502 s, 1.0 GB/s
    1073741824 bytes (1.1 GB) copied, 1.09241 s, 983 MB/s

    Hard to believe the disks (even if SSDs) are only 400MB/s slower than DDR3 RAM.

     dd if=/dev/zero of=/dev/shm/test bs=16k count=64k conv=fdatasync;rm -f /dev/shm/test
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 0.762426 s, 1.4 GB/s
  • quirkyquark Member
    edited April 2012

    @dmmcintyre3 said: are only 400MB/s slower than DDR3 RAM.

    Huhhh....what?

    DDR3 SDRAM gives a transfer rate of (memory clock rate) × 4 (bus clock multiplier) × 2 (data rate) × 64 (bits transferred per cycle) / 8 (bits per byte).

    DDR3-800 (the original DDR3) is 6.4 GB/s.
    DDR3-1600 (the current "standard") is 12.8 GB/s.

    For comparison, the lowest DDR2-400 is 3.2 GB/s and DDR-200 is 1.6 GB/s. So yes, it's close to DDR-200 ;)
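
    As a worked instance of that formula, taking DDR3-1600 with its nominal 200 MHz memory clock: 200 MHz × 4 × 2 × 64 / 8 = 12,800 MB/s ≈ 12.8 GB/s.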

  • @FRCorey said: dd if=/dev/zero of=test10.dat bs=4k count=16k conv=fdatasync && rm test10.dat

    Use a bigger file. That isn't accurate enough.
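
    As a rough sketch, a 1 GiB test like the ones elsewhere in this thread, or one with the page cache bypassed (assuming the filesystem supports O_DIRECT), gives a less cache-skewed number:

    # ~1 GiB written, flushed to disk before dd reports the rate
    dd if=/dev/zero of=test.dat bs=64k count=16k conv=fdatasync && rm test.dat
    # alternative: bypass the page cache entirely with direct I/O
    dd if=/dev/zero of=test.dat bs=64k count=16k oflag=direct && rm test.dat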

  • @littleguy said: My main beef with OpenVZ is that Apache and MySQL tend to cross over into the burst memory and eventually get killed by OpenVZ task scheduler.

    It will only get killed if the node is seriously out of memory, which would only happen if the provider has no idea what he is doing.

    @littleguy said: The peering to Bulgaria from my home country (Sweden) seems very poor. I am getting >100ms to FitVPS servers, as opposed to <60ms to Hetzner's location in Germany.

    It's hard to beat distance and the speed of light :) Still, we have 5 different upstreams, so maybe I will be able to do some route optimization for you. Can you please PM me an IP in Sweden to try to find a better route to?

  • @hostingwizard_net thanks for the kind words :) I think your table is a little skewed though, because you only show the price you pay, i.e. with the added cost for extra IPs. I can imagine you don't purchase the same amount of IPs in each location. Maybe you can add another column for "base price, without extra IPs" and then calculate the ratings against that.

  • @rds100 said: It's hard to beat distance and speed of light :)

    You're totally right, I didn't quite appreciate how far away Bulgaria is. The peering looks fine when you take the distance into consideration.

    I did eventually get moved to another node on Hetzner. The I/O still fluctuates pretty wildly; yesterday I had 12MB/s write speed and today it was 60MB/s. But I'm hoping it'll stay at the latter. :)

  • I wonder if everyone on that node is constantly running sequential write tests? :)

    I see far too many people get bogged down with sequential write performance when their actual application doesn't ever do a sequential write IO.

    Don't get me wrong, if you are getting 2MB/sec then something is horribly wrong, but there are a lot of people out there who think they need 200MB/sec and in reality it's just not needed.

    A lot of providers, especially ones using shared storage, will optimise their arrays for random IO, so random IO and latency tests are far more relevant (see the sketch at the end of this comment). The majority of applications will be doing more reads than writes, so again it's worth thinking about what you are trying to test and what you need out of the platform.

    Other storage availability features may also slow down IO, so again it's a trade-off between what you need: performance or availability. Just some things to consider when looking at disk IO.

    If your application does perform a lot of sequential write IO, then a multi-tenanted platform sharing the same disk spindles may not be the best option.
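
    For what it's worth, a random-read test with fio along these lines exercises the pattern most web workloads actually generate; the file name, size, and runtime below are just placeholders:

    # 4k random reads against a 256 MB test file, direct I/O, for 30 seconds
    fio --name=randread --rw=randread --bs=4k --size=256m \
        --direct=1 --ioengine=libaio --runtime=30 --time_based \
        --filename=fio-test.dat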
