New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
I think I might still have my 14.4 modem somewhere. It might be broken though; I had to put bags of ice on it when using it for sessions longer than ~1h or so.
I always choose the SSD VPSes over their Cloud VPSes due to bad experiences with their Ceph.
Can anyone send a link where I can read about the allowed resource usage (CPU utilization) for their different VPS plans?
Can I utilize 100% CPU at OVH VPS for a long time?
Yes, you can utilize 100% CPU for a long time; they don't care, because generally speaking their hypervisors are not overloaded in terms of CPU. They're a premium provider.
It's in their TOS that this is not allowed, but in practice it doesn't seem to matter.
This problem seems to be persistent. It was good for a few months, and once again something is horribly wrong.
Servers get insanely slow (disk I/O), and during the last 7 days multiple servers have crashed each day due to Ceph failing. Really annoying and frustrating for everyone.
Yeah that's why I only use their SSD VPSes...I've used 2 providers that use ceph and have had bad experiences with both.
I'm not seeing any problems on BHS1 or BHS3 Public Cloud right now.
Yep, it's probably linked to this case:
http://travaux.ovh.net/?do=details&id=29447
I still have dozens of servers in three European data centers. I guess many of those saying they don't have problems are operating a bit fewer servers, or at different location(s), as in your case.
But good to hear that BHS is problem free. Also servers in LIM1 have been working very well so far. But maybe that's just because there's adequate capacity and cloud isn't yet "stuffed full".
Anyway, these problems come and go. And when they're bad, they're really bad. As stated earlier, a single I/O operation can take tens of seconds. Based on the ticket, it seems we'll have to endure a few days more, and then hopefully the problem wave we've encountered this time is over.
Then we're just waiting for next lag tsunami to hit.
YouTube is here?
Got proof? Maybe your hardware is bad or you're overselling it, but I understand the type of people you're normally getting.
Even my desktop has 128GB, and has for 4 years. Most people? What country are you from exactly?
So you set a cache of 2199023255552 bytes (2 TiB) with 128GB-256GB of RAM on analog spindle drives, and that's how you design a 1080p HD streaming server?
Chunks? Are you streaming little porn clips, or website intros?
What framework did you use for your HLS streaming? nginx? Node? What?
Do you really think massive concurrent HD is possible on such a proxy?
YEAH, that's what I was expecting. You call that 480p "part of larger files", lol.
That proves you don't know anything about streaming. You probably suspended any customer who streamed a gigabyte file, so you couldn't handle even 100 customers hosting gigabyte files from your 128GB server, because you don't have the budget for SSDs.
No, I never said I'm still doing streaming; it was part of my "experience" back in the days of 1Gbps. I do networks; I play at the lower level with protocols, while you probably use a proxy to get $1-2 per month accounts, provide HTTP forwarding, suspend accounts, and ban kids.
No, you do not have real-world experience in HD streaming, maybe just chunks of porn previews.
Again, it will be worse on your noisy spinning hard drives.
So, "amount" of "different" videos... jerking off is not good for your health.
Okay, you are a noob at HD, but I agree that you are very experienced with small chunks of porn videos.
4K is 16Mbps, 1080p is 4Mbps.
Here's the question: what could 8Mbps be?
I don't even know why I am replying to your baseless guesses and arguments made without experience.
What experience do you actually have, apart from nginx and abusing proxy caches?
lol, so is that how you CDN? Do you even pay for nginx? I doubt it, because you play with spindle HGSTs.
Can I just stop answering your porn clip statements?
So your 1080p chunks are only 3-5 seconds, and you call that common? There goes your lack of experience again.
Many people have been doing 4K for years, just not in nginx, sucker.
So now, you think I would mention something to teach you how to achieve 4K? Hell no.
Study from the bottom and you will get here.
And again, many people have had 128GB of RAM in their desktops.
Can I have your porn site's address?
Here you go: nginx. I knew you were nginx. Study some stream directives too; it might help you a little. Oh wait, maybe you don't have to, since HTTP proxying is enough for you, as all you do is collect $3 and suspend accounts.
yeah I'm proud to be a stupid, not a noob sucker.
Hey Zerpy, I don't live on this forum, so I might answer you after a month or a year.
But you can go ahead and keep talking about your glorious nginx HTTP directive experiences here. Or bring something new; I would be very interested.
My lesson for you: SSD is always better than your spindle-motor Ultrastar.
Got proof of what? Everyone knows your server doesn't do consistent 4k blocks, so using a 4k block size is really not very realistic for testing.
Most people do not have 128GB of RAM in their desktop systems; some do. I run with 64GB, and that's still a pretty decent spec.
If you're doing HLS as a part of a live stream, then sure - it's possible.
HLS is chunk based :-) A video can be hours long, it will still be split into chunks in the HLS stream.
Most adult content websites use HLS because it's easy to deliver and it scales very well.
nginx for transmitting the chunks yes.
Sure - nginx is perfectly capable of doing 40g of HLS.
In a CDN context, files are usually split into small and large: if it's more than 10 megs, it's usually treated as a "large file", and a 480p video easily exceeds that at a decent length.
That's the funny thing - I've worked for one of the larger streaming CDNs in the world - I know pretty well what systems are capable of doing in terms of streaming both VOD via HLS, normal VOD or actual live transcoding of video and audio streams over HLS.
You make a lot of assumptions - I have plenty of real world experience with HD streaming :-)
Depends on your traffic pattern and how you design your systems. There were customers with 2-3 different video or audio streams going that would do 15-18 gigabit/s per box. Since all listeners or viewers of those streams would be within the same roughly 9-12 chunks (3x3, 4x3), spinning disks would be perfectly fine for such a setup: you only have to read a small number of chunks off the disk, and they end up in memory anyway.
Could probably even push it to 4x10g uplink per box, and still do pretty well.
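The arithmetic behind that claim can be sketched; the chunk duration and per-stream chunk count below are illustrative assumptions, not figures from the thread:

```python
# Rough estimate of the "hot" working set when all viewers of a stream sit
# within the same ~9-12 HLS chunks: the spindle only reads each chunk once,
# after which it is served from the page cache.
def hot_set_bytes(streams, chunks_per_stream, chunk_seconds, bitrate_mbps):
    """Bytes of chunk data that stay hot in memory (illustrative model)."""
    chunk_bytes = bitrate_mbps * 1_000_000 / 8 * chunk_seconds
    return int(streams * chunks_per_stream * chunk_bytes)

# 3 streams, 12 hot chunks each (4x3), 4-second chunks at 4 Mbps (1080p-ish):
print(hot_set_bytes(3, 12, 4, 4) / 1_000_000, "MB")  # 72.0 MB -- easily cached
```

With the hot set this small, disk reads are rare and the uplink, not the spindle, becomes the bottleneck.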
If 4K is 16Mbps and 1080p is 4Mbps, that really depends on your format. YouTube tries to put 4K at roughly 16Mbps with their webm format, which is about 25% smaller than mp4, so you'd be at about 20Mbps in mp4 format, and your bitrate can be a lot higher depending on your encoding settings.
You can even look at Google's own document about recommended bitrates for your 4k: https://support.google.com/youtube/answer/1722171?hl=en
1440p can easily be at 8mbps
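The mp4 figure above follows directly from the 25% claim; a quick sanity check:

```python
# If webm is ~25% smaller than mp4 at equal quality, a 16 Mbps webm 4K stream
# corresponds to roughly 16 / 0.75 Mbps in mp4.
webm_mbps = 16
mp4_mbps = webm_mbps / (1 - 0.25)
print(round(mp4_mbps, 1))  # 21.3 -- i.e. "about 20 Mbps" as stated
```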
There's plenty of CDN's that are using nginx as their core software with additional features on top such as custom nginx modules or possibly combining it with lua - there's nothing wrong with using nginx in a CDN - it's common practice.
Why would you need to pay for nginx? Do you think CloudFlare pays for nginx, or KeyCDN, BunnyCDN etc? No they don't.
I play with spindles and SSDs, the main talk was whether you could actually use spindles or not - and it's very much possible in plenty of scenarios.
It's common for customers to deliver in 3-5 second chunks and distribute them via a CDN. This is simply based on what customers actually deliver over the network, so maybe it's your experience that is lacking? Funny thing: it worked perfectly fine.
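For reference, this is roughly what an HLS media playlist for that kind of fixed-length chunking looks like; the segment names here are made up for illustration:

```python
# Build a minimal HLS (VOD) media playlist for fixed-length chunks.
def make_playlist(chunk_uris, chunk_seconds=4):
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{chunk_seconds}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for uri in chunk_uris:
        lines.append(f"#EXTINF:{chunk_seconds:.1f},")  # per-chunk duration
        lines.append(uri)
    lines.append("#EXT-X-ENDLIST")  # marks the playlist as complete (VOD)
    return "\n".join(lines)

print(make_playlist(["seg0.ts", "seg1.ts", "seg2.ts"]))
```

The player fetches the playlist, then each `.ts` chunk as a plain HTTP GET, which is why ordinary HTTP servers and CDNs scale it so well.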
You already have it, you're probably browsing it daily.
Yeah nope :-) I don't suspend anyone
Perfectly fine.
Never said it wasn't. If you read back (maybe do it in less than a year so your brain doesn't get rusty), you'll see I basically said that spindles can work perfectly fine. I never said SSDs are worse than spindles.
Jokes on you.
If someone is interested in OVH benchmarks:
OVH SSD 3 and OVH Cloud RAM 2. I don't know why, but as many people have said, the Cloud options' performance is definitely much worse than the SSD ones.
OVH SSD 3
```
# fio --name=rand-write --ioengine=libaio --iodepth=32 --rw=randwrite --invalidate=1 --bsrange=4k:4k,4k:4k --size=512m --runtime=120 --time_based --do_verify=1 --direct=1 --group_reporting --numjobs=1
rand-write: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio-2.1.11
Starting 1 process
rand-write: Laying out IO file(s) (1 file(s) / 512MB)
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/8000KB/0KB /s] [0/2000/0 iops] [eta 00m:00s]
rand-write: (groupid=0, jobs=1): err= 0: pid=10214: Tue Oct 2 00:56:46 2018
  write: io=909384KB, bw=7577.2KB/s, iops=1894, runt=120016msec
    slat (usec): min=5, max=544, avg=28.13, stdev=14.59
    clat (usec): min=166, max=85187, avg=16861.13, stdev=5947.64
    lat (usec): min=184, max=85254, avg=16889.63, stdev=5949.03
    clat percentiles (usec):
     | 1.00th=[10944], 5.00th=[13888], 10.00th=[15040], 20.00th=[15424],
     | 30.00th=[15680], 40.00th=[15808], 50.00th=[15936], 60.00th=[16064],
     | 70.00th=[16320], 80.00th=[16512], 90.00th=[17024], 95.00th=[19072],
     | 99.00th=[52992], 99.50th=[57600], 99.90th=[64768], 99.95th=[67072],
     | 99.99th=[76288]
    bw (KB /s): min= 2408, max= 9152, per=100.00%, avg=7580.76, stdev=1304.46
    lat (usec) : 250=0.01%, 500=0.09%, 750=0.06%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.34%, 20=95.26%, 50=2.87%
    lat (msec) : 100=1.37%
  cpu : usr=1.24%, sys=6.31%, ctx=227943, majf=0, minf=8
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued : total=r=0/w=227346/d=0, short=r=0/w=0/d=0
     latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
  WRITE: io=909384KB, aggrb=7577KB/s, minb=7577KB/s, maxb=7577KB/s, mint=120016msec, maxt=120016msec
Disk stats (read/write):
  sda: ios=0/226905, merge=0/233, ticks=0/3822852, in_queue=18966088, util=100.00%
```
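As a sanity check, the reported bandwidth is internally consistent: it is just the IOPS figure times the 4 KiB block size:

```python
# fio reported iops=1894 at a 4 KiB block size; bandwidth should match.
iops = 1894
bw_kib_s = iops * 4  # KiB per second
print(bw_kib_s)  # 7576, in line with the reported bw=7577.2KB/s
```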
OVH Cloud Ram 2
```
rand-write: (groupid=0, jobs=1): err= 0: pid=7239: Tue Oct 2 00:57:28 2018
  write: io=671484KB, bw=5594.7KB/s, iops=1398, runt=120022msec
    slat (usec): min=4, max=6916.3K, avg=82.02, stdev=16884.47
    clat (usec): min=469, max=6938.4K, avg=22790.07, stdev=94430.28
    lat (usec): min=478, max=6938.4K, avg=22872.09, stdev=95922.19
    clat percentiles (usec):
     | 1.00th=[ 1320], 5.00th=[20608], 10.00th=[20608], 20.00th=[20864],
     | 30.00th=[20864], 40.00th=[21120], 50.00th=[21120], 60.00th=[21632],
     | 70.00th=[21632], 80.00th=[21888], 90.00th=[21888], 95.00th=[21888],
     | 99.00th=[35584], 99.50th=[55552], 99.90th=[183296], 99.95th=[197632],
     | 99.99th=[6914048]
    lat (usec) : 500=0.01%, 750=0.04%, 1000=0.26%
    lat (msec) : 2=1.72%, 4=0.38%, 10=0.55%, 20=1.53%, 50=94.96%
    lat (msec) : 100=0.13%, 250=0.42%, >=2000=0.02%
  cpu : usr=1.61%, sys=6.00%, ctx=145061, majf=0, minf=11
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued : total=r=0/w=167871/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
  WRITE: io=671484KB, aggrb=5594KB/s, minb=5594KB/s, maxb=5594KB/s, mint=120022msec, maxt=120022msec
Disk stats (read/write):
  sda: ios=0/168102, merge=0/5587, ticks=0/5276740, in_queue=5276884, util=100.00%
```
Here's the same fio run in LIM1 with VPS Cloud 2:
```
# fio --name=rand-write --ioengine=libaio --iodepth=32 --rw=randwrite --invalidate=1 --bsrange=4k:4k,4k:4k --size=512m --runtime=120 --time_based --do_verify=1 --direct=1 --group_reporting --numjobs=1
rand-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
rand-write: Laying out IO file (1 file / 512MiB)
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=6000KiB/s][r=0,w=1500 IOPS][eta 00m:00s]
rand-write: (groupid=0, jobs=1): err= 0: pid=3432: Mon Oct 8 09:36:41 2018
  write: IOPS=1485, BW=5941KiB/s (6083kB/s)(696MiB/120021msec)
    slat (usec): min=3, max=37848, avg=34.89, stdev=242.82
    clat (usec): min=542, max=281083, avg=21504.00, stdev=10289.23
    lat (usec): min=567, max=281140, avg=21540.14, stdev=10311.44
    clat percentiles (usec):
     | 1.00th=[ 1336], 5.00th=[ 20579], 10.00th=[ 20579], 20.00th=[ 20841],
     | 30.00th=[ 20841], 40.00th=[ 21365], 50.00th=[ 21365], 60.00th=[ 21627],
     | 70.00th=[ 21627], 80.00th=[ 21627], 90.00th=[ 21627], 95.00th=[ 21890],
     | 99.00th=[ 38536], 99.50th=[ 58983], 99.90th=[175113], 99.95th=[179307],
     | 99.99th=[270533]
    bw ( KiB/s): min= 4456, max= 7200, per=100.00%, avg=5940.71, stdev=231.32, samples=240
    iops : min= 1114, max= 1800, avg=1485.17, stdev=57.83, samples=240
    lat (usec) : 750=0.08%, 1000=0.30%
    lat (msec) : 2=2.02%, 4=0.28%, 10=0.32%, 20=0.64%, 50=95.72%
    lat (msec) : 100=0.18%, 250=0.45%, 500=0.02%
  cpu : usr=1.49%, sys=5.68%, ctx=151715, majf=0, minf=11
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,178249,0, short=0,0,0, dropped=0,0,0
     latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
  WRITE: bw=5941KiB/s (6083kB/s), 5941KiB/s-5941KiB/s (6083kB/s-6083kB/s), io=696MiB (730MB), run=120021-120021msec
Disk stats (read/write):
  sda: ios=0/178436, merge=0/5953, ticks=0/3874024, in_queue=3874328, util=99.97%
```
As stated, it's something quite horrible, and it basically kills any system that requires disk I/O. Old dedicated servers with the cheapest possible WD Blue disks perform better.
That VPS uses Ceph remote storage, which is likely to be slower than local storage but more redundant. It's drastically slower than what I'm seeing on local-SSD VPSes, though.
You should see their public issue tracker in regards to ceph and storage problems.
Wasn't there something about Ceph storage not being available anymore and they're moving back to local drives?
Francisco
Hmm, https://www.ovh.com/world/vps/vps-cloud.xml still says ceph.
I remember trying ceph on Hetzner cloud and it wasn't anywhere near that bad either. I might try it again.
Public Cloud servers by default use local disk, but all additional disks at OVH are still Ceph, and their VPS Cloud range is still Ceph; they're still expanding their petabyte clusters.
The OVH VPS lineup was kind of garbage, both in disk and in routes taken / latency, when I tested. I know people complained back then about the routes taken to their newer DCs, and I guess not much has changed now.
If people send their routing issues to [email protected], the networking team actually tends to fix these issues when they're able to; sometimes they cannot fix certain routes because no better routes are available. But following the list, I see the majority of route "complaints" being fixed, sometimes within a day.
OVH VPS are still pretty horrible regardless; they're well behind the curve (disk I/O is a major complaint for most, and they were just slower in general for connectivity in countless tests when I checked). There are too many options these days that are better in every way, where you don't need to send random traceroutes daily... well, ever.
I was always happy with the performance of OVH VPS-SSD which uses local SSD. The VPS Cloud series have high availability CEPH which is supposed to be more reliable but has had perennial reports of lousy performance. As someone says above, they have switched the Public Cloud to local disk, so maybe they'll eventually do the same with VPS Cloud.
Yeah, but that's silly, because it's priced totally incorrectly. It's better to switch to a better provider than to a better and a lot more expensive service on the same provider.
The public cloud offerings made no sense whatsoever compared to providers like UC.
Depends - if you're doing a decent amount of traffic, then OVH Public Cloud can very much make sense :-)
Always look at the use-case, saying that it will never make sense is just being ignorant about other people's use-cases.
If it was all about the synthetic benchmarks, then we'd all use who scores the highest on geekbench for example.
UC doesn't have comparable products. The public cloud servers have dedicated cpu and unlimited bw. So you'd have to compare them to the compute intensive Vultr, DO, and Hetzner cloud servers. They're in the same general price range, though Hetzner is a bit cheaper. Overall though I'd rather use dedis.
Just backing up one server from OVH SBG Cloud, and disk I/O seems to average around 2 megabytes / second. True awesomeness. Yet compared to what the performance was earlier, this is "ok" performance, at least the I/O doesn't completely stall for minutes.
Anyway, we're migrating away from OVH Cloud, because performance has been so devastating and nobody's happy.
Doesn't OVH limit hard disk usage for a single user (iops/read/write) on cloud?
If this is their VPS Cloud range, they are very hit and miss.
Their Public Cloud seems quicker, and they are in the process of increasing the I/O.
The I/O is trash yes. They had pretty horrible peering at some of their new DC's as well, I think someone claimed the peering stuff is better now though... and or fixed. But until they fix the I/O, not worth it.
On some level, yeah, but it's totally meaningless. The real problem is huge I/O latency. Sequential speed, IOPS, and latency are all slightly different measures, but related.
With providers that limit IOPS, the performance graph looks very different compared to systems with high random I/O latency.
But at least OVH has now eliminated the worst part of this problem, which is what started this thread: single-I/O latency of up to tens of seconds. Sure, it's fine if I/O is fast on average, but if 1 in 1,000 IOs takes 5 seconds and 1 in 100,000 IOs takes 30 seconds, you're still screwed.
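To put those tail numbers in perspective, here is their effect on the mean latency, assuming (hypothetically) a 1 ms typical IO:

```python
# Mean latency when rare IOs stall: 1 in 1,000 takes 5 s, 1 in 100,000 takes 30 s.
typical_ms = 1  # assumed baseline, not from the thread
avg_ms = typical_ms + (1 / 1_000) * 5_000 + (1 / 100_000) * 30_000
print(round(avg_ms, 2))  # 6.3 -- the tail alone multiplies the mean by ~6x
```

And the mean understates it: any request that happens to hit one of those stalled IOs is blocked for whole seconds, which is what users actually notice.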