
OVH VPS Cloud Disk I/O Performance


Comments

  • I think I might still have my 14.4 modem somewhere. It might be broken though; I had to put bags of ice on it when using it for sessions longer than ~1h or so.

  • @southy said:
    Guys,
    You're freaking me out.
    Can someone please tell me now which is faster and by what order of magnitude:

    • OVH SSD VPS
    • OVH cloud VPS

    I always choose the SSD VPSes over their Cloud VPSes due to bad experiences with their Ceph.

  • Can anyone send a link where I can read about allowed resource usage (CPU utilization) on their different VPS plans?

    Can I utilize 100% CPU on an OVH VPS for a long time?

  • @desperand said:
    Can anyone send a link where I can read about allowed resource usage (CPU utilization) on their different VPS plans?

    Can I utilize 100% CPU on an OVH VPS for a long time?

    Yes, you can utilize 100% CPU for a long time - they don't care, because generally speaking their hypervisors are not overloaded in terms of CPU - they're a premium provider.

  • desperand said: Can I utilize 100% CPU on an OVH VPS for a long time?

    It's in their TOS that this is not allowed, but in practice it doesn't seem to matter.
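
    If you want to check from inside the guest whether the host node is actually CPU-contended, steal time is the giveaway. A minimal sketch, assuming a typical Linux VPS:

    # Sample CPU stats once per second, five times; the "st" (steal) column is
    # the percentage of time the hypervisor gave our vCPU to another guest.
    vmstat 1 5

    # Cumulative counters; the eighth value on the "cpu" line is steal ticks.
    grep '^cpu ' /proc/stat

    Steal consistently above a few percent usually points at an oversold host node.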

  • This problem seems to be persistent. It was good for a few months, and once again something is horribly wrong.

    Servers get insanely slow (disk I/O), and during the last 7 days multiple servers have crashed each day due to Ceph failing. Really annoying and frustrating for everyone.

  • sin Member

    @WebDude said:
    This problem seems to be persistent. It was good for a few months, and once again something is horribly wrong.

    Servers get insanely slow (disk I/O), and during the last 7 days multiple servers have crashed each day due to Ceph failing. Really annoying and frustrating for everyone.

    Yeah, that's why I only use their SSD VPSes... I've used 2 providers that use Ceph and have had bad experiences with both.

  • Eased Member, Host Rep
    edited January 2018

    I'm not seeing any problems on BHS1 or BHS3 Public Cloud right now.

  • WebDude Member
    edited January 2018

    Yep, it's probably linked to this case:
    http://travaux.ovh.net/?do=details&id=29447

    I've still got dozens of servers in three European data centers. I guess many of those saying they don't have problems are operating fewer servers, or at different location(s), as in your case.

    But good to hear that BHS is problem free. Also servers in LIM1 have been working very well so far. But maybe that's just because there's adequate capacity and cloud isn't yet "stuffed full".

    Anyway, these problems come and go, and when they're bad, they're really bad. As stated earlier, a single I/O operation can take tens of seconds. Based on the ticket, it seems we'll have to endure a few more days, and then hopefully the problem wave we've encountered this time is over.

    Then we're just waiting for the next lag tsunami to hit.

  • MrH Member

    YouTube is here?

  • @Zerpy said:
    Doing 4k:4k fio tests doesn't give a real-world perspective - it's fine for having a baseline that you then run a month later to see if stuff is still shit or not.

    Got proof? Maybe your hardware is shit or you're overselling it, but I understand the type of people you're normally getting.

    I'm sorry - I forgot that most people do not have 128-256 gigabytes of memory in a box :') my mistake - this is LET.

    Even my desktop has 128GB and it's been 4 years. Most people? What country exactly are you from?

    If you do HLS streaming (which is quite common), you serve small chunks, as you probably know - these get loaded into memory automatically by the file system cache. If files on Linux are frequently accessed, they stay in memory for quite some time - so in the case of HLS, most of the disk IO you do comes from reading new HLS chunks; a few reads will still hit the disk from time to time, but it will be minimal.
    If you're serving larger files (480, 720, 1080, 1440 or 4k), the same thing happens - you'll be able to keep fewer files in memory, no doubt, but the OS is smart enough to keep "hot" content cached to decrease your disk IO - you can also optimize this further with sysctl settings, to keep even more in memory when possible (see the sketch after the tar example below).

    You can verify that every (decent) OS has a file system cache:
    sync; echo 3 > /proc/sys/vm/drop_caches; time tar cf blat.tar directory; rm blat.tar; tar cf blat.tar directory

    The second tar will be a lot faster because the file system cache kicked in during the first tar and put as much of the data into memory as possible.
    Same goes for video streaming.
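
    On the sysctl side, a minimal sketch of the kind of knobs involved; the values are illustrative assumptions, not recommendations, and need root:

    # Reclaim the dentry/inode cache less aggressively (default is 100).
    sysctl -w vm.vfs_cache_pressure=50
    # Let dirty pages accumulate longer before forced writeback kicks in.
    sysctl -w vm.dirty_background_ratio=10
    sysctl -w vm.dirty_ratio=20
    # Keep application memory resident instead of swapping it out; the page
    # cache gets reclaimed first under pressure.
    sysctl -w vm.swappiness=10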

    So you set the cache to some amount like 2199023255552 out of 128GB-256GB of RAM on an analog spindle drive, and that's how you design a 1080p HD streaming server?

    Chunks? Are you streaming little porn clips? Or website intros?

    What framework was used on your HLS streaming? nginx? node? what?

    Do you really think massive concurrent HD is possible on those proxies?

    YEAH, that's what I expected - you call 480p part of "larger files", lol.
    That proves you don't know anything about streaming. You probably suspended whichever customers streamed a gigabyte file, so you can't handle even 100 customers hosting gigabytes on your 128GB server, because you don't have the budget for SSDs.

    But... you said earlier you do video streaming at 1Gbps :-D
    I do have real-world experience - but anyway.

    No, I never said that I'm still doing streaming; it was part of my "experience" back in the day with 1Gbps. I do networks, I play the lower field with protocols, when you probably use a proxy to get $1-2 per month accounts and provide HTTP forwarding, and suspend accounts, banning kids.

    No, you do not have real world experience in HD streaming, maybe chunks of porn previews.

    If you have hot content with a long tail, IOPS will be worse - but we're talking about 1 gigabit, so the number of different videos you can push on a single gigabit is going to be minimal regardless.

    Again, it will be worse on your spinning, noisy hard drives.
    So, "amount" of "different" videos... jerking off is not good for your health.

    What is false about my statement? You're talking about formats that aren't outdated (so 720p, 1080p, 4k), and we both know those three formats have significantly higher bitrates than 240p.
    If you do a bit of calculation and take decent-quality 720p or 1080p video, the bitrate will be something like 3-5 megabit/s for 720p, 6-8 megabit/s for 1080p, and 4k will be 30 megabit/s+.

    Now let's do some numbers, you do 1 gigabit:
    720p: 1000 / 4 = 250 viewers
    1080p: 1000 / 7 = 142 viewers
    4k: 1000 / 30 = 33 viewers

    Okay, you are a noob at HD, but I agree that you are very experienced with small chunks of porn videos.
    4K is 16Mbps, 1080p is 4Mbps;
    here is the question: what could the 8Mbps be?

    So on a single gigabit with a decent bitrate you can do a maximum of 250 concurrent users when doing 720p streaming (this assumes clients stream at exactly the bitrate and don't buffer ahead, which would cause slight congestion on the link).
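
    Those numbers are easy to reproduce; a quick shell sketch using the bitrates assumed above:

    # viewers = link capacity / per-viewer bitrate, both in megabit/s
    link_mbps=1000
    for pair in 720p:4 1080p:7 4k:30; do
        echo "${pair%%:*}: $(( link_mbps / ${pair##*:} )) viewers"
    done
    # -> 720p: 250 viewers, 1080p: 142 viewers, 4k: 33 viewers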

    Ideally you want to balance your traffic based on URI (video file, chunks etc.) to maximize throughput - it makes no sense to serve, say, 1000 megabit/s of a video across 10 servers if you can put it on one; it's simply stupid to do so. In case a box dies, the ring calculations will change and the streaming will get allocated to another box.

    This way you get the most out of your hardware, and it actually improves the overall performance of your streams (because you start utilizing buffers and the system cache).
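
    That URI-based balancing maps naturally onto nginx's consistent-hash upstream; a minimal sketch (hostnames hypothetical), where "consistent" gives the ring-style remapping described above:

    # Send every request for the same URI (video file, chunk) to the same edge
    # box so its page cache stays hot; consistent (ketama-style) hashing only
    # remaps a fraction of URIs when a box dies or is added.
    upstream video_edges {
        hash $request_uri consistent;
        server edge1.example.com;
        server edge2.example.com;
        server edge3.example.com;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://video_edges;
        }
    }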

    I don't even know why I am replying to your useless guesses and arguments made without experience.
    What experience do you have, exactly, except for nginx and abusing proxy cache?
    lol, so is that how you CDN? Do you even pay for nginx? I doubt you do, because you play with spindle HGSTs.

    @Zerpy said:
    And spindles like the HGST Ultrastar can actually do it - we're talking about mostly sequential reads and reading very large chunks of data (because videos tend to be more than a megabyte).
    Even if you do HLS, which is purely chunks of 0.5 to 2-3 megabytes, spinning disks would be perfectly fine.

    Can I just stop answering your porn clip statements?

    It's quite common to have chunks of about 3-5 seconds, so a 1080p chunk with decent quality would be 3-3.5 megabytes.
    There aren't many people doing 4k HLS streaming these days - but anyway.

    Point is, my numbers actually make sense :-D

    So your 1080p chunks are only 3-5 seconds, and you call it common; here goes your lack of experience again.

    Many people have done 4K for years, but not in nginx, sucker. :dizzy:
    So now, like I would mention something to teach you how to achieve 4k? Hell no.
    Study from the bottom and you will get here.

    And again, many people have 128GB of RAM in their desktops.

    With 1080p HLS chunks, 18 gigabit (18000 megabit) is 2571 concurrent connections - pretty easy math: every 4 seconds we'd have to load a new chunk, so we read 3.5 megabytes every 4 seconds for a single 1080p stream.

    Can I have your porn site address?

    Now, if you were to stream huge MP4 files off an array, using the mp4 module in nginx for example

    Here you go, nginx. I knew you were nginx. Study some stream directives too, it might help you a little. Oh wait, maybe you don't have to; an HTTP proxy is enough for you, as all you do is collect $3 and suspend accounts.

    Lol, that comment proves the stupidity.

    Yeah, I'm proud to be stupid, not a noob sucker.

    Hey Zerpy, I don't live on this forum, so I might answer you after a month or a year.

    But you can go ahead and keep talking about your glorious nginx HTTP directive experiences here, or bring something new; I would be very interested.

    My lesson for you is: SSD is always better than your spindle-motor Ultrastar.

  • @sangdogg said:
    Got proof? Maybe your hardware is shit or you're overselling it, but I understand the type of people you're normally getting.

    Got proof of what? Everyone knows that your server doesn't do consistent 4k blocks - so using a 4k block size is really not very realistic for testing.
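
    If you want something less synthetic than uniform 4k, fio can mix block sizes within a single run; a sketch, with arbitrary illustrative percentages:

    # Random writes with a blend of block sizes (percentages must sum to 100):
    # 10% 4k, 40% 32k, 50% 64k - closer to a mixed workload than pure 4k.
    fio --name=mixed-write --ioengine=libaio --iodepth=32 --rw=randwrite \
        --bssplit=4k/10:32k/40:64k/50 --size=512m --runtime=120 --time_based \
        --direct=1 --group_reporting --numjobs=1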

    @sangdogg said:
    Even my desktop has 128GB and it's been 4 years. Most people? What country exactly are you from?

    Most people do not have 128GB of RAM in their desktop systems - some do. I run with 64GB, and that's still a pretty decent spec.

    @sangdogg said:
    So you set the cache to some amount like 2199023255552 out of 128GB-256GB of RAM on an analog spindle drive, and that's how you design a 1080p HD streaming server?

    If you're doing HLS as a part of a live stream, then sure - it's possible.

    @sangdogg said:
    Chunks? Are you streaming little porn clips? Or website intros?

    HLS is chunk based :-) A video can be hours long; it will still be split into chunks in the HLS stream.

    Most adult content websites use HLS because it's easy to deliver and it scales very well.
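
    For illustration, a minimal HLS media playlist might look like this (chunk names hypothetical); each #EXTINF entry is one short segment:

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:4
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:4.000,
    chunk-000.ts
    #EXTINF:4.000,
    chunk-001.ts
    #EXTINF:4.000,
    chunk-002.ts
    #EXT-X-ENDLIST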

    @sangdogg said:
    What framework was used on your HLS streaming? nginx? node? what?

    nginx for transmitting the chunks, yes.

    @sangdogg said:
    Do you really think massive concurrent HD is possible on those proxies?

    Sure - nginx is perfectly capable of doing 40g of HLS.

    @sangdogg said:
    YEAH, that's what I expected - you call 480p part of "larger files", lol.

    In a CDN context, files are usually split into small and large - if more than 10 megs, it's usually treated as a "large file" - and 480p easily exceeds this if you have a decent-length video.

    @sangdogg said:
    That proves you don't know anything about streaming. You probably suspended whichever customers streamed a gigabyte file, so you can't handle even 100 customers hosting gigabytes on your 128GB server, because you don't have the budget for SSDs.

    That's the funny thing - I've worked for one of the larger streaming CDNs in the world - I know pretty well what systems are capable of in terms of streaming, whether VOD via HLS, normal VOD, or actual live transcoding of video and audio streams over HLS.

    @sangdogg said:
    when you probably use a proxy to get $1-2 per month accounts and provide HTTP forwarding, and suspend accounts, banning kids.

    No, you do not have real world experience in HD streaming, maybe chunks of porn previews.

    You make a lot of assumptions - I have plenty of real world experience with HD streaming :-)

    @sangdogg said:
    Again, it will be worse on your spinning, noisy hard drives.
    So, "amount" of "different" videos... jerking off is not good for your health.

    Depends on your traffic pattern and how you design your systems. There would be customers with 2-3 different video or audio streams going that would do 15-18 gigabit/s per box - since all listeners or viewers of those streams would be within the same roughly 9-12 chunks (3x3, 4x3), spinning disks would be perfectly fine, because you'd only have to read a small number of chunks off the disk and they would end up in memory anyway. So spinning disks would be perfectly fine for such a setup.

    Could probably even push it to a 4x10g uplink per box and still do pretty well.
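
    The working-set arithmetic behind that is tiny; a quick sketch with the numbers above:

    # ~12 hot chunks (3 streams x 3-4 renditions) at ~3.5 MB each:
    echo "hot set: $(( 12 * 35 / 10 )) MB"      # ~42 MB, trivially cached
    # At 18 gigabit/s the NIC, not the disk, is the ceiling:
    echo "throughput: $(( 18000 / 8 )) MB/s served from page cache"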

    @sangdogg said:
    Okay, you are a noob at HD, but I agree that you are very experienced with small chunks of porn videos.
    4K is 16Mbps, 1080p is 4Mbps;
    here is the question: what could the 8Mbps be?

    If 4k is 16Mbps and 1080p is 4Mbps, that really depends on your format - YouTube tries to put 4k at roughly 16mbps with their webm format; that's about 25% smaller than mp4, so you'd be at about 20mbps in mp4 format - and your bitrate can be a lot higher depending on your encoding settings.

    You can even look at Google's own document about recommended bitrates for your 4k: https://support.google.com/youtube/answer/1722171?hl=en

    1440p can easily be at 8mbps :smile:

    @sangdogg said:
    What experience do you have, exactly, except for nginx and abusing proxy cache?
    lol, so is that how you CDN? Do you even pay for nginx? I doubt you do, because you play with spindle HGSTs.

    There are plenty of CDNs using nginx as their core software, with additional features on top such as custom nginx modules, possibly combined with Lua - there's nothing wrong with using nginx in a CDN; it's common practice.

    Why would you need to pay for nginx? Do you think Cloudflare pays for nginx, or KeyCDN, BunnyCDN, etc.? No, they don't.

    @sangdogg said:
    I doubt you do, because you play with spindle HGSTs.

    I play with both spindles and SSDs; the main question was whether you could actually use spindles or not - and it's very much possible in plenty of scenarios.

    @sangdogg said:
    So your 1080p chunks are only 3-5 seconds, and you call it common; here goes your lack of experience again.

    It's common for customers to deliver in 3-5 second chunks and distribute them via a CDN - that's simply based on what customers actually deliver via the network. So maybe it's the customers' experience that is lacking? Fun thing - it worked perfectly fine.

    @sangdogg said:
    Can I have your porn site address?

    You already have it, you're probably browsing it daily.

    @sangdogg said:
    Here you go, nginx. I knew you were nginx. Study some stream directives too, it might help you a little. Oh wait, maybe you don't have to; an HTTP proxy is enough for you, as all you do is collect $3 and suspend accounts.

    Yeah nope :-) I don't suspend anyone ;)

    @sangdogg said:
    Hey Zerpy, I don't live on this forum, so I might answer you after a month or a year.

    Perfectly fine.

    @sangdogg said:
    My lesson for you is: SSD is always better than your spindle-motor Ultrastar.

    Never said it wasn't - if you read back (maybe do it in less than a year, so your brain doesn't get rusty), you'll see I basically said that spindles can work perfectly fine - I never said SSDs are worse than spindles.

    Joke's on you.

  • If someone is interested in OVH benchmarks:
    OVH SSD 3 and OVH Cloud RAM 2 - I don't know why, but as many people have said, the performance of the Cloud options is definitely much worse than SSD.

    OVH SSD 3
    # fio --name=rand-write --ioengine=libaio --iodepth=32 --rw=randwrite --invalidate=1 --bsrange=4k:4k,4k:4k --size=512m --runtime=120 --time_based --do_verify=1 --direct=1 --group_reporting --numjobs=1
    rand-write: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
    fio-2.1.11
    Starting 1 process
    rand-write: Laying out IO file(s) (1 file(s) / 512MB)
    Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/8000KB/0KB /s] [0/2000/0 iops] [eta 00m:00s]
    rand-write: (groupid=0, jobs=1): err= 0: pid=10214: Tue Oct 2 00:56:46 2018
    write: io=909384KB, bw=7577.2KB/s, iops=1894, runt=120016msec
    slat (usec): min=5, max=544, avg=28.13, stdev=14.59
    clat (usec): min=166, max=85187, avg=16861.13, stdev=5947.64
    lat (usec): min=184, max=85254, avg=16889.63, stdev=5949.03
    clat percentiles (usec):
    | 1.00th=[10944], 5.00th=[13888], 10.00th=[15040], 20.00th=[15424],
    | 30.00th=[15680], 40.00th=[15808], 50.00th=[15936], 60.00th=[16064],
    | 70.00th=[16320], 80.00th=[16512], 90.00th=[17024], 95.00th=[19072],
    | 99.00th=[52992], 99.50th=[57600], 99.90th=[64768], 99.95th=[67072],
    | 99.99th=[76288]
    bw (KB /s): min= 2408, max= 9152, per=100.00%, avg=7580.76, stdev=1304.46
    lat (usec) : 250=0.01%, 500=0.09%, 750=0.06%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.34%, 20=95.26%, 50=2.87%
    lat (msec) : 100=1.37%
    cpu : usr=1.24%, sys=6.31%, ctx=227943, majf=0, minf=8
    IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
    issued : total=r=0/w=227346/d=0, short=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=32

    Run status group 0 (all jobs):
    WRITE: io=909384KB, aggrb=7577KB/s, minb=7577KB/s, maxb=7577KB/s, mint=120016msec, maxt=120016msec

    Disk stats (read/write):
    sda: ios=0/226905, merge=0/233, ticks=0/3822852, in_queue=18966088, util=100.00%

    OVH Cloud Ram 2
    rand-write: (groupid=0, jobs=1): err= 0: pid=7239: Tue Oct 2 00:57:28 2018
    write: io=671484KB, bw=5594.7KB/s, iops=1398, runt=120022msec
    slat (usec): min=4, max=6916.3K, avg=82.02, stdev=16884.47
    clat (usec): min=469, max=6938.4K, avg=22790.07, stdev=94430.28
    lat (usec): min=478, max=6938.4K, avg=22872.09, stdev=95922.19
    clat percentiles (usec):
    | 1.00th=[ 1320], 5.00th=[20608], 10.00th=[20608], 20.00th=[20864],
    | 30.00th=[20864], 40.00th=[21120], 50.00th=[21120], 60.00th=[21632],
    | 70.00th=[21632], 80.00th=[21888], 90.00th=[21888], 95.00th=[21888],
    | 99.00th=[35584], 99.50th=[55552], 99.90th=[183296], 99.95th=[197632],
    | 99.99th=[6914048]
    lat (usec) : 500=0.01%, 750=0.04%, 1000=0.26%
    lat (msec) : 2=1.72%, 4=0.38%, 10=0.55%, 20=1.53%, 50=94.96%
    lat (msec) : 100=0.13%, 250=0.42%, >=2000=0.02%
    cpu : usr=1.61%, sys=6.00%, ctx=145061, majf=0, minf=11
    IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
    issued : total=r=0/w=167871/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=32

    Run status group 0 (all jobs):
    WRITE: io=671484KB, aggrb=5594KB/s, minb=5594KB/s, maxb=5594KB/s, mint=120022msec, maxt=120022msec

    Disk stats (read/write):
    sda: ios=0/168102, merge=0/5587, ticks=0/5276740, in_queue=5276884, util=100.00%

  • Here's the same fio run in LIM1 with VPS Cloud 2.

    fio --name=rand-write --ioengine=libaio --iodepth=32 --rw=randwrite --invalidate=1 --bsrange=4k:4k,4k:4k --size=512m --runtime=120 --time_based --do_verify=1 --direct=1 --group_reporting --numjobs=1
    rand-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
    fio-3.1
    Starting 1 process
    rand-write: Laying out IO file (1 file / 512MiB)
    Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=6000KiB/s][r=0,w=1500 IOPS][eta 00m:00s]
    rand-write: (groupid=0, jobs=1): err= 0: pid=3432: Mon Oct 8 09:36:41 2018
    write: IOPS=1485, BW=5941KiB/s (6083kB/s)(696MiB/120021msec)
    slat (usec): min=3, max=37848, avg=34.89, stdev=242.82
    clat (usec): min=542, max=281083, avg=21504.00, stdev=10289.23
    lat (usec): min=567, max=281140, avg=21540.14, stdev=10311.44
    clat percentiles (usec):
    | 1.00th=[ 1336], 5.00th=[ 20579], 10.00th=[ 20579], 20.00th=[ 20841],
    | 30.00th=[ 20841], 40.00th=[ 21365], 50.00th=[ 21365], 60.00th=[ 21627],
    | 70.00th=[ 21627], 80.00th=[ 21627], 90.00th=[ 21627], 95.00th=[ 21890],
    | 99.00th=[ 38536], 99.50th=[ 58983], 99.90th=[175113], 99.95th=[179307],
    | 99.99th=[270533]
    bw ( KiB/s): min= 4456, max= 7200, per=100.00%, avg=5940.71, stdev=231.32, samples=240
    iops : min= 1114, max= 1800, avg=1485.17, stdev=57.83, samples=240
    lat (usec) : 750=0.08%, 1000=0.30%
    lat (msec) : 2=2.02%, 4=0.28%, 10=0.32%, 20=0.64%, 50=95.72%
    lat (msec) : 100=0.18%, 250=0.45%, 500=0.02%
    cpu : usr=1.49%, sys=5.68%, ctx=151715, majf=0, minf=11
    IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
    issued rwt: total=0,178249,0, short=0,0,0, dropped=0,0,0
    latency : target=0, window=0, percentile=100.00%, depth=32

    Run status group 0 (all jobs):
    WRITE: bw=5941KiB/s (6083kB/s), 5941KiB/s-5941KiB/s (6083kB/s-6083kB/s), io=696MiB (730MB), run=120021-120021msec

    Disk stats (read/write):
    sda: ios=0/178436, merge=0/5953, ticks=0/3874024, in_queue=3874328, util=99.97%

    As stated, it's something quite horrible, and it basically kills any system that requires disk I/O. Old dedicated servers with the cheapest possible WD Blue disks perform better.

  • WebDude said: As stated, it's something quite horrible.

    That VPS uses Ceph remote storage, which is likely to be slower than local storage but more redundant, etc. It's drastically slower than what I'm seeing on local-SSD VPSes, though.

  • Francisco Top Host, Host Rep, Veteran

    willie said: but more redundant, etc.

    You should see their public issue tracker with regard to Ceph and storage problems.

    Wasn't there something about Ceph storage not being available anymore and them moving back to local drives?

    Francisco

  • willie Member
    edited October 2018

    Hmm, https://www.ovh.com/world/vps/vps-cloud.xml still says ceph.

    I remember trying ceph on Hetzner cloud and it wasn't anywhere near that bad either. I might try it again.

  • Public Cloud servers by default use local disk, but all additional disks at OVH are still Ceph - and their VPS Cloud range is still Ceph - and they're still expanding their petabyte clusters.

  • Xei Member

    The OVH VPS lineup is kind of garbage, both on disk and on routes taken / latency when I tested. I know people complained about the routes taken for their newer DCs back then, and I guess not much has changed now.

  • @Xei said:
    The OVH VPS lineup is kind of garbage, both on disk and on routes taken / latency when I tested. I know people complained about the routes taken for their newer DCs back then, and I guess not much has changed now.

    If people send their routing issues to [email protected], the networking team actually tends to fix them when they're able to - sometimes they can't fix certain routes because no better routes are available. But following the list, I see the majority of route "complaints" being fixed, sometimes within a day.

  • Xei Member
    edited October 2018

    OVH VPS are still pretty horrible regardless; they're well behind the curve (disk I/O is a major complaint for most, and they were just slower in general for connectivity in countless tests when I checked them out). There are too many options these days which are better in every way, where you don't need to send random traceroutes daily... well, ever.

  • I was always happy with the performance of OVH VPS-SSD, which uses local SSD. The VPS Cloud series has high-availability Ceph, which is supposed to be more reliable but has had perennial reports of lousy performance. As someone says above, they have switched the Public Cloud to local disk, so maybe they'll eventually do the same with VPS Cloud.

  • @willie said:
    They have switched the Public Cloud to local disk, so maybe they'll eventually do the same with VPS Cloud.

    Yeah, but that's silly, because it's priced totally incorrectly. It's better to switch to a better provider than to a better and much more expensive service from the same provider.

    The public cloud offerings made no sense whatsoever compared to providers like UC.

  • @WebDude said:
    The public cloud offerings made no sense whatsoever compared to providers like UC.

    Depends - if you're doing a decent amount of traffic, then OVH Public Cloud can very much make sense :-)

    Always look at the use case; saying that it will never make sense is just being ignorant of other people's use cases.

    If it were all about synthetic benchmarks, then we'd all use whoever scores highest on Geekbench, for example.

  • WebDude said: The public cloud offerings made no sense whatsoever compared to providers like UC.

    UC doesn't have comparable products. The Public Cloud servers have dedicated CPU and unlimited bandwidth, so you'd have to compare them to the compute-intensive Vultr, DO, and Hetzner cloud servers. They're in the same general price range, though Hetzner is a bit cheaper. Overall, though, I'd rather use dedis.

  • Just backing up one server from OVH SBG Cloud, and disk I/O seems to average around 2 megabytes/second. True awesomeness. Yet compared to what the performance was earlier, this is "ok" performance; at least the I/O doesn't completely stall for minutes.

    Anyway, we're migrating away from OVH Cloud, because performance has been so devastatingly bad and nobody's happy.

  • qba82 Member, Patron Provider

    Doesn't OVH limit hard disk usage for a single user (iops/read/write) on cloud?

  • JamesF Member, Host Rep

    If this is their VPS Cloud range, they are very hit and miss.

    Their Public Cloud seems quicker, and they are in the process of increasing the I/O.

  • Xei Member
    edited November 2018

    The I/O is trash, yes. They had pretty horrible peering at some of their new DCs as well; I think someone claimed the peering stuff is better now, though, and/or fixed. But until they fix the I/O, it's not worth it.

  • @qba82 said:
    Doesn't OVH limit hard disk usage for a single user (iops/read/write) on cloud?

    On some level, yeah, but it's totally meaningless. The real problem is huge I/O latency. Sequential speed, IOPS and latency are all slightly different measures, but related.

    With providers that limit IOPS, the performance graph looks very different compared to systems with high random I/O latency.

    But at least OVH has now eliminated the worst part of this problem, the part which started this thread: single I/O latency of up to tens of seconds. Sure, it's fine if I/O is fast on average, but if 1 in 1,000 operations takes 5 seconds and 1 in 100,000 takes 30 seconds, you're still screwed.
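
    That kind of tail is exactly what fio's completion-latency percentiles expose; a sketch reusing the thread's earlier test, with an extended percentile list:

    # Same 4k random-write profile as earlier in the thread, but reporting the
    # extreme tail - a 1-in-1,000 or 1-in-100,000 stall shows up at 99.9/99.999%.
    fio --name=rand-write --ioengine=libaio --iodepth=32 --rw=randwrite \
        --bs=4k --size=512m --runtime=120 --time_based --direct=1 \
        --percentile_list=50:99:99.9:99.99:99.999 --group_reporting --numjobs=1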
