New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Servers don't own IP addresses anymore! Cloudflare did it!
Long story short (from what I understood from their latest blog post):
What they did is share one /32 among many servers.
With a port slice of, say, 2,048 ports, we can share one IP among 31 servers.
Earlier: each server had its own IP.
Now: the entire datacenter has a single IP, and each server is reached via its own port range.
For example:
A datacenter in NY has the IP address 1.1.1.1:
server 1 is assigned 1.1.1.1:01
server 2 is assigned 1.1.1.1:02
server N is assigned 1.1.1.1:0N
...like that.
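The "31 servers per IP" arithmetic from the blog post can be sketched like this (a rough sketch: the exact slice boundaries are my assumption, not something the blog specifies):

```python
# One IPv4 address has 65,536 ports. Carving it into 2,048-port
# slices gives 32 slices; holding back one slice for the low,
# well-known ports (0-2047) leaves 31 slices, i.e. 31 servers per IP.
SLICE = 2048
TOTAL_PORTS = 65536

slices = TOTAL_PORTS // SLICE      # 32
servers_per_ip = slices - 1        # 31, with slice 0 reserved

def port_range(server_index: int) -> tuple[int, int]:
    """Inclusive egress port range for server N (1-based); slice 0 reserved."""
    start = server_index * SLICE
    return start, start + SLICE - 1

print(servers_per_ip)      # 31
print(port_range(1))       # (2048, 4095)
print(port_range(31))      # (63488, 65535)
```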
How would you find this useful in everyday use, at the datacenter or server-cluster level?
Comments
An insane amount of over-engineered bullshit to handle egress, just because v4 nets are expensive.
If this were v6 they wouldn't have problems with the number of IPs; and even if they needed some magical re-routing capability, guess what... they could do it with standard L3 routing, like God intended the internet to be.
"YoU'lL nEvEr Be AbLe To UsE AlL IpS iN A /64 sUbNeT, It'S wAsTeFuL"
Suck it
So they use some form of proxy tunnel to serve web requests, send mail, etc?
Yes, IPv6 everywhere is what we need!
Yeah, the blog calls this out: on IPv6 the servers have their own IPs because there's plenty to go around, and it all works fine. IPv4 is a hellscape of bodge after bodge just to get things working.
Most other CDNs just throw in a second proxy layer or route it to a peer server instead of what Cloudflare has done.
This simplifies many aspects of routing and caching, and in fact Cloudflare already does this with Argo, so I'm not sure why they couldn't simply replicate the same thing within a single datacenter.
It looks more like each server was given a range of ports (for server-initiated egress), so e.g. one server might have 4096-8191, another 8192-12287, a third 12288-16383, etc.
I've not tried it out, but I think you can probably get most of the way there on Linux by using
/proc/sys/net/ipv4/ip_local_port_range
on each server, and then you wouldn't even need any NAT to route the incoming reply packets to the right place.

Glorified NAT VPS
yup right!
Indeed, it seems so. I got a headache reading their blog post.
NAT requires a stateful firewall system to manage the ports, while this doesn't.
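Here's roughly what that `ip_local_port_range` trick mentioned above looks like on Linux (untested against Cloudflare's setup; the 4096-8191 slice is just the hypothetical example from the earlier comment, not anything Cloudflare documented):

```python
# Sketch: pin a server's ephemeral (egress) ports to its slice by
# writing /proc/sys/net/ipv4/ip_local_port_range. Reading needs no
# privileges; writing requires root.
from pathlib import Path

KNOB = Path("/proc/sys/net/ipv4/ip_local_port_range")

def read_port_range() -> tuple[int, int]:
    """Return the kernel's current (low, high) ephemeral port range."""
    lo, hi = KNOB.read_text().split()
    return int(lo), int(hi)

def write_port_range(lo: int, hi: int) -> None:
    """Restrict outgoing connections to ports lo..hi (root only)."""
    KNOB.write_text(f"{lo}\t{hi}\n")

if __name__ == "__main__":
    print(read_port_range())        # e.g. (32768, 60999) on a default kernel
    # write_port_range(4096, 8191)  # this server's hypothetical slice
```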
Isn't that similar to Gcore, where there's only one IP and all the countries resolve to the same IP? You can ping 123.mdzz.plus to see that there's only one IP.
I don't understand the benefits of this over IPv6 and regular routing
I read the article yesterday.
They need to use many source IP addresses, each tagged with a different country code, to pull content from origin servers.
For IPv6, they just give each server as many IPv6 addresses as needed, each tagged with a different country code.
For IPv4, the same method is too expensive, so they instead give each datacenter one IPv4 address for each country code needed.
Since the minimal BGP announcement for IPv4 is /24 but they don't need 256 addresses in each data center, they announce the same /24 from several nearby data centers.
If the response comes into the wrong data center, it's forwarded internally, incurring a higher latency.
This means offering IPv6 on your website would decrease latency for Cloudflare WARP customers, because the response would not come into the wrong data center and need to be forwarded.
Likewise, if you are using Cloudflare CDN for your website, you should have IPv6 on your origin server, such as shared hosting or VPS, to avoid incurring the additional latency.
Is anyone able to observe this behavior yet? Basically, the article says their origin-facing IPs are now anycast, but
netstat
shows unicast IPs from the entry colo in my case.

From an engineer's POV, they're getting paid for work they'll eventually throw away, and then paid again to do something better that they should have done in the first place. And probably decently paid. Cloudflare is probably the kind of place that has free team-building get-togethers that include alcohol. There's probably a meme for that, but I can't think of it right now.
I might disable IPv4 on my websites and show a 404 IPv6 Not Found page or something like that. Need to figure out how to force IPv6 if possible, maybe via a JavaScript DNS request?
Just remove the A record and use AAAA record?
But then I can't let people know why they can't access it if they're on IPv4 only. IPv6 was introduced 26 years ago, but there are still people who don't have access to it and don't use it.
Hm, that's easy.
Make your webserver listen on an IPv4 and an IPv6 address (like two server blocks in nginx); in the IPv4 server block use a doc root with whatever page you need, and in the IPv6 one your actual content. Though I'm highly against this: my shit ISP doesn't have IPv6 to provide. I opened a ticket about it and it's been 10 months with no reply. @yoursunny would have to get my ISP involved tbh
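A minimal sketch of that two-server-block idea in nginx (the domain and doc-root paths are placeholders, not from the thread):

```nginx
# IPv4 visitors get a static "this site is IPv6 only" notice page...
server {
    listen 80;                    # binds 0.0.0.0:80 (IPv4 only)
    server_name example.com;
    root /var/www/ipv4-notice;    # placeholder path for the notice page
}

# ...while IPv6 visitors get the real site.
server {
    listen [::]:80 ipv6only=on;   # binds [::]:80 (IPv6 only)
    server_name example.com;
    root /var/www/site;           # placeholder path for the actual content
}
```

`ipv6only=on` keeps the IPv6 socket from also accepting v4-mapped connections, so the two blocks stay cleanly separated.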
Thanks for explaining. I have IPv6 shared hosting; I'll use that on Cloudflare as an AAAA record.
If you use Cloudflare's proxy on the AAAA record, I believe IPv4 only clients would be able to access it.
I could be wrong though
No, IP4, if at all, is made a hellscape by exactly that (your) attitude. One major problem is that every Joe Anybody feels the need to have his own IP for his server (plus tons of IPs are still wasted in other ways, e.g. by high schools holding a /16).
Ignoring the root causes of the problem will lead to IPv6 addresses being scarce in some decades too. Not because there aren't enough, but because you can bet on plenty of careless idiots feeling they need at least a /64, and because software will be written on the assumption that IP addresses are limitless.
One good way to solve that problem is to provide what people actually need, as nowadays most ISPs do. You get a dynamic or a private IP and if you want a static one you have to pay extra for it - just like with other resources and goods.
I'm surprised, very surprised, but for once I have to commend CloudF%§# for doing the sensible thing.
Please note that the spec does mandate the usage of a /64 for the careless idiot.
I'm not surprised that careless idiots create specs tending to the "needs" of careless idiots.
Gladly, quite a few of the IPv6 fans in their fervor "go hard" against IP4 users by using only IPv6 on their servers - which I very much welcome, because I see it as a protection barrier (as long as I don't use any bridging services, which of course I do not).
The only way for the world to move from IPv4 to IPv6 is the removal of IPv4 from existence. Make IPv4 100,000 times more expensive and ISPs will share a single IPv4 address among 100,000 customers and never implement IPv6. And server providers will implement name-based routing after allocating each of the 65,536 ports to an individual server.
Is there any way to relate IPv4 usage to global warming? Maybe that way we'd see some progress in IPv6 implementation.
Edit: after writing this suggestion I checked Google and found out that that trick (global warming) didn't work either! https://www.razorblue.com/how-ipv6-is-similar-to-climate-change/
Yeah, it should be an A record + AAAA record (from my origin provider).
Why do you continue to refuse to learn about the IPv4 issues and keep whining about how many IPs are used? It's very simple: with IPv6 there are so many IPs available that you never have to worry about running out of IPs or needing to redesign your network to handle growth over decades.
You're not a Network Designer, but you act like you know better, despite doing fuck all research into what they do and why they need IPv6.
Whoosh. Static vs. dynamic assignment is irrelevant. You don't even know the difference between public IPs and static/DHCP (:facepalm:). Your network knowledge seems to be limited to less than high school level.
Whoosh. It's a backhanded complaint that the fucking world hasn't gotten onto IPv6, and therefore all this complex fuckery is needed. This is throwaway effort on an inferior design.
No. Think bridge as a piece of wire that just passes bits along. That won't affect routing or protocols.
Yeah I dunno if this is 'needed'. Cloudflare does a lot of shit for the fun of it, keeping people busy, a blog post/PR statement, or all of the above.
Cloudflare is in 200+ POPs. Do you think every single POP has more than 256 devices in it? Probably not. Maybe a few, but most are likely storage nodes that aren't on the public internet anyway. Still, a /24 in each location means they burn ~65k on those IPs; they still have another 1.5M-2M IPs to go. They're most def not allocating each device a /30 or a /29; there's no reason to.
I'm surprised CF hasn't gone the Netflix route and started offering ISPs on-prem caching nodes; that way they could offload the capital/infra cost and likely lean up their own deployments.
Francisco
@Francisco
How dare you utter your view after the Canadian Mr. "I know everything about everything" has dropped his wisdom in this thread?! Just because you're a successful and respected provider who actually knows what he's talking about, huh?
Actually knowing about a topic and having lots of concrete experience means nothing. Get more self-entitled, stupid, and arrogant already and drink more acorn syrup, boy!
In contrast to you (and me) he is presidential material in his country!
To be fair he'd be running for the liberals or NDP, and both leaders are fuckin' retarded, so it'd be a lateral movement at worst
Francisco