Comments
Singapore is great.
I'm satisfied with their network and it's rarely down. Oh yes, my server is behind Cloudflare.
I need to make a choice: Singapore or Hillsboro?
Hillsboro has always sucked. Every other location is great.
Their network is Objectively Very Horrible.
High rate UDP causes severe packet loss.
Oregon is not the best location. For some reason they still route some traffic through Illinois.
http://hil.smokeping.ovh.net/smokeping?target=USA.AS852-3
70ms ping from Vancouver to Hillsboro lol.
Nowhere is good for China.
If you serve Asia excluding China, that's fine.
Maybe it's a measure to prevent ddos?
This routing information is very useful to me, thank you!
Maybe, but for websites it’s fine. Wouldn’t recommend Hillsboro really. For web hosting any location should work, I have plenty of Asian customers using our location in Canada. Of course, Singapore would be a good choice if you don’t mind bandwidth limitations
SG gets ~50ms ping from Bangladesh
Modern websites run over UDP:
TCP port 443 is only used for the initial handshake.
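The switch described above is negotiated via the Alt-Svc response header (RFC 7838): the server's first response over TCP advertises h3, and the browser moves to QUIC on UDP 443 for subsequent requests. As a rough illustration, here is a simplified parser for such a header; it is a sketch, not a full RFC 7838 implementation (quoting edge cases and the special `clear` value are ignored):

```python
# Simplified Alt-Svc parser: turns a header value such as
#   h3=":443"; ma=86400, h3-29=":443"; ma=86400
# into (protocol, authority, params) tuples. The browser uses the "h3"
# entry to learn it may retry the origin over HTTP/3 on UDP 443.

def parse_alt_svc(value):
    """Parse an Alt-Svc header value into a list of (protocol, authority, params)."""
    services = []
    for entry in value.split(","):
        parts = [p.strip() for p in entry.split(";")]
        proto, _, authority = parts[0].partition("=")
        params = {}
        for p in parts[1:]:
            k, _, v = p.partition("=")
            params[k] = v.strip('"')
        services.append((proto.strip(), authority.strip('"'), params))
    return services

header = 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'
for proto, auth, params in parse_alt_svc(header):
    print(proto, auth, params.get("ma"))  # prints: h3 :443 86400 / h3-29 :443 86400
```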
Percentage of traffic using the above (besides Google and the likes)? I wouldn't even bet on http/2 having crossed the 50% mark.
RTCPeerConnection is used by anything that uses WebRTC (mainly things that require real-time communication, such as calls). HTTP/3 is, IIRC, enabled by default on Cloudflare (at least it's not forced on you like HTTP/2 — not that you shouldn't be using it). QUIC is used by a few things, and WebTransport is a work in progress that's being rolled out and is already available in Chrome, with other browsers such as Firefox, and web server implementations, to follow.
Add to that, WireGuard uses UDP, which is more of an issue in a server context.
I do not at all doubt that bits and pieces (of the above mentioned) are used. My question is how many percent of http/1.x and 2 traffic has been replaced by http/3, etc.
You can check the latency from OVH data centers to their peering members at http://smokeping.ovh.net/smokeping
Yes, it's also harder to debug, which is why I am all for HTTP/1.1. Add in compression, and you can see the issue: it bogs down older systems even more.
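The debuggability point is easy to demonstrate: HTTP/1.1 is plain text, so you can speak it over a raw socket and read the whole exchange by eye, which the binary framing of HTTP/2 and HTTP/3 doesn't allow. A self-contained sketch (it spins up a throwaway local server so nothing external is needed):

```python
# Speak HTTP/1.1 "by hand" over a raw socket and read the reply as text.
# A local throwaway server keeps the example self-contained.
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as sock:
    # The request is literal readable text, typed out byte for byte:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk

server.shutdown()
print(reply.decode())  # the entire exchange is human-readable
```

Try doing the same against an h2 or h3 endpoint and you're staring at binary frames instead.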
27% of the web traffic my servers serve (shared hosting) is QUIC or HTTP/3. Higher than the amount of IPv6 traffic.
45% HTTP/2 traffic
27% HTTP/1.1 traffic
1% HTTP/1.0 traffic
All numbers are rounded, in reality 1.0 is sub-1%.
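A breakdown like the one above can be pulled straight from access logs. A rough sketch, assuming the common nginx/Apache "combined" log format in which the request line ends with the protocol version — your server's log format may differ, so treat the regex as an assumption:

```python
# Compute per-protocol traffic shares (in rounded percent) from access
# log lines of the form: ... "GET /path HTTP/2.0" status bytes ...
import re
from collections import Counter

REQUEST_RE = re.compile(r'"(?:[A-Z]+) [^"]+ (HTTP/[0-9.]+)"')

def protocol_shares(lines):
    counts = Counter()
    for line in lines:
        m = REQUEST_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    total = sum(counts.values())
    return {proto: round(100 * n / total) for proto, n in counts.items()}

sample = [
    '1.2.3.4 - - [01/Jan/2021:00:00:00 +0000] "GET / HTTP/2.0" 200 512 "-" "-"',
    '1.2.3.4 - - [01/Jan/2021:00:00:01 +0000] "GET /a HTTP/1.1" 200 128 "-" "-"',
    '1.2.3.4 - - [01/Jan/2021:00:00:02 +0000] "GET /b HTTP/3.0" 200 256 "-" "-"',
    '1.2.3.4 - - [01/Jan/2021:00:00:03 +0000] "GET /c HTTP/2.0" 200 512 "-" "-"',
]
print(protocol_shares(sample))  # {'HTTP/2.0': 50, 'HTTP/1.1': 25, 'HTTP/3.0': 25}
```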
AFAIK, OVH Singapore's connectivity to Indonesia is good enough, except via Telkom Indonesia.
OVH is a pretty good choice. As @MikeA said, I'd also stay away from Hillsboro. For me personally, the network was pretty problematic but maybe it's better now. Singapore was great on the other hand.
I had an EU server with them and never had any network issues. Not sure about now.
First, thanks for some cold hard data. Now, on to the matter itself ...
With all due respect, "X% of my sites run ..." means next to nothing wrt the full picture. From what I see, http/3 (just like /2 before it) is mainly used by a few big players, mainly Google itself. Reminder: it also was Google who developed and pushed SPDY as the "better and faster protocol"; unfortunately they gave us snake oil and had to come up with yet another "better and faster protocol", which is the current version of the miracle cure — if only there weren't some ugly issues. Probably the most important and obvious: UDP instead of TCP; oh well, who cares, after all TCP had no advantages anyway — or no, wait, TCP offered "guaranteed transmission" while UDP basically is a lottery. But certainly today one can do that better and faster than some decades ago (I'm serious, I mean it) — but there's a sneaky and very venomous "but": QUIC also does away with TLS and does its job itself. Yeah, sure, why not? After all, Google, the "don't be evil" (except when you feel like it) corporation, can be trusted blindly ...
Oh, and btw: http/3 / QUIC so far is not even faster than http/2. Oopsie.
Here's my POV: even http/1.1 would be damn good enough if it weren't for the 800-pound gorilla that all those oh-so-smart "we need a faster http!" people seem to just not see at all: bloated, fat web sites with bloated, fat advertising on top of everything.
Web site X 20 years ago: 200 KB (incl. images); the average web site today: multiple megabytes.
But I also see some good points with http/3 / QUIC, mainly the fact that Google finally has seen some light and is trying to clean up the http/2 mess which they very much co-created. So yes, http/3 is progress and probably much better than /2 all in all — but I won't accept and/or use it anyway, because experience shows that it's never good to put the "crown jewels" into the hands of one questionable corporation.
It's not that I think TLS is the peak of wisdom, but I don't want a protocol that force-feeds their corporate version of "TLS" to us and forces us to be members of the idiotic "encrypt everthang!!!" sect.
TL;DR: FU Google! You yourselves proved that you should not be let near important protocols. Besides, I'm interested in what's good for us, the people, and not in what's in the interest of a few questionable global corporations, so fuck off!
It's a reasonable latency.
Got it — the Singapore zone is the better choice.
I have used, and would love to use, OVH Elite VPS servers, but they just don't seem as powerful as they claim — or I have just been unlucky.
Website tests using GTmetrix always seem to show speed issues.
When the server would back up to rsync.net, it would take longer than on other VPS servers, and the speed seemed to max out at 100 Mb/s according to HetrixTools; I am not sure if it was a poor VPS or a network speed issue.
According to HetrixTools, the CPU and RAM usage on the VPS was always higher compared to other VPS servers of a similar spec from other providers.
Using the Geekbench script, the final score was half that of other providers.
Their "DDoS" protection would kick in if I started to restore a server from backups.
Websites for me would just seem slower and not as snappy.
Overall, I would say that depending on what you're doing, you should test the VPS against their bare metal to get a good indication of network speed and performance.
OVH connectivity is decent.
Good thing I didn't write "my sites" then. It's thousands of websites in my own stats.
But for the sake of it, another provider with 400k customers across Europe sees roughly the same metrics; the percentages differ a tiny bit, since they (due to their size) have a larger chunk of bot and crawler traffic that increases the 1.0 and 1.1 share.
It's used by any LiteSpeed host. Sure, that's only about 10% of the websites on the internet.
HTTP/2 sucks over high-latency links when there's congestion. There's nothing worse than when the single TCP stream suffers loss; it absolutely kills HTTP/2 performance.
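The head-of-line blocking being described can be sketched with a toy model (all numbers are made up for illustration): under TCP, one lost packet stalls every multiplexed stream until the retransmission arrives, because there is a single in-order byte stream; with QUIC's independent streams, only the stream that lost data waits.

```python
# Toy model of TCP head-of-line blocking vs QUIC's per-stream recovery.
# One packet per 1 ms slot; a lost packet costs one RTT to retransmit.
RTT = 100  # ms, assumed retransmission delay

def delivery_times(n_streams, n_packets, lost, quic):
    """Per-stream completion time in ms; `lost` is the (stream, seq) pair dropped."""
    done = {}
    for s in range(n_streams):
        t = 0
        for seq in range(n_packets):
            t += 1  # one transmission slot
            if (s, seq) == lost:
                t += RTT  # this stream waits for the retransmission
        done[s] = t
        if not quic and lost[0] != s and n_packets > lost[1] + 1:
            # TCP: data queued behind the lost packet in the shared byte
            # stream is held up too, so innocent streams also pay the RTT.
            done[s] += RTT
    return done

print("TCP :", delivery_times(3, 10, lost=(0, 2), quic=False))  # every stream delayed
print("QUIC:", delivery_times(3, 10, lost=(0, 2), quic=True))   # only stream 0 delayed
```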
Under ideal conditions, HTTP/2 is a lot faster than HTTP/3 — that's correct, and no one working on the QUIC draft has said otherwise.
56k modems would IMO also be damn good enough.
QUIC as a protocol is an effort by many companies, including Google — but Google by no means had the majority say in the design of the QUIC protocol. There are so many design decisions and changes in the QUIC protocol that there's zero compatibility with GQUIC; there's a reason Chrome, for example, runs two completely different implementations for QUIC and GQUIC.
There's nothing in the QUIC protocol that forces "their corporate version of TLS" on you — "their" doesn't apply when it's a shared effort between many companies.
Please understand there's a difference between QUIC Protocol (The draft), and GQUIC.
You're salty once again.
My point certainly was not to belittle you or your operations. It was that anyone's "X% of my sites run ..." is pretty much meaningless.
... that could do http/3.
(a) Boring. The classical "reduce something to a weakness".
(b) We lived well before http/2, and in fact the web largely became what it is today with "primitive, hardly bearable" http/1.x.
(c) Congestion control is not http's job but the OS's, and there are diverse CCAs (congestion-control algorithms) available.
Many, incl. you, strongly suggested otherwise. Just look above. Also, wasn't faster serving a major reason, if not the reason, to develop http/3? ... but hey, maybe I'm wrong and the true reason was a different one — dislike of TCP, maybe. Anyway, one usually finds a "via UDP is much faster than via TCP!!!" statement near the top of pro-http/3 sites.
And you call me salty?
But oh well, I'm patient: bloat is everywhere, e.g. in OSs too — and it makes things slow, be it computers or the web or ... But hey, probably all those Vim, Geany, etc. users (as opposed to BloatStudio) and people complaining that both Chrome and Firefox use insane amounts of memory should be given 56k modems too.
Or maybe, just maybe, you are capable of and willing to not utterly ignore the issue of extreme bloat almost everywhere.
Oh, I see, the (not at all) good old corporate hand-wavy argument line. In fact, Google brags about having invented and developed QUIC. And pretty much all the other (not that many) QUIC/http/3 players are innocent tiny ants too, like e.g. Microsoft, Mozilla, CloudF#&%! ...
Uhm, nope. The crypto layer (formerly known as "TLS") is now a part of QUIC. Their own diagrams clearly show that.
I don't care a flying f_ck about their funny zoo.
And, at the end of the day, what exactly does http/3 bring to the table? More speed — and not only in some weird corner cases — more security, better [whatever]? And I mean to our table, the people's (as opposed to corporate giants' and their lapdogs', like e.g. Mozilla)?
Offer me something convincing! It shouldn't be too hard, as I'm not exactly an http/2 fan.
I'm just annoyed at how long http 1, 2, 3, 4 ... takes to get implemented, and that I then have to reconfigure or redevelop just to be with the cool kids (especially if this is going to be a case where the performance difference is negligible to the naked eye).