Comments
Does the router incur load from having that many IPv6 addresses? It sounds like a lot; some providers only give 1 or 10. What does real-world usage look like?
Excuse me, but I don't believe those reasons justify a single person receiving 65,536 IPs. It is a sort of hoarding / wasting of resources that will never be used in full.
It doesn't, and you need new routers anyway, so they are faster and actually support IPv6.
It will never be used in full anyway, so you might as well give them out.
Read this article http://www.networkworld.com/article/2223248/cisco-subnet/the-logic-of-bad-ipv6-address-management.html
This article was quoted above but I'll quote it again: https://www.networkworld.com/article/2223248/the-logic-of-bad-ipv6-address-management.html
IPv6 addressing works differently from IPv4.
Unless I missed it this article doesn't take IoT into account.
But why does Kimsufi only give a /128 of IPv6?
You can use the whole /64 they assign, but you can only set rDNS on the single IP
What would justification look like for a single VPS?
If anything it's the opposite... Routing IPv6 addresses is easier than IPv4 (since there's no weird subnetting like with IPv4), plus no NAT is needed, so the routing overhead should be lower compared to IPv4.
I don't know why you guys refer to that site, which talks about subnets for networks with multiple devices, not single ones. Its main point is future-proofing, so you don't need to worry about having too many devices per subnet.
It doesn't even mention the popular argument of /64 blacklisting (which is more valid than talking about future device counts in the thousands) as a reason to assign /64s to single devices.
It really is a management burden to have so many unique subnets to manage. As long as people still need IPv4, its limit of 254 devices per subnet will come into play long before you get large device counts on a single network.
So I asked my provider what their policy was and they didn't answer that exactly they just went ahead and gave me a /48. So I think I am good for a while.
So now I just need to learn how to implement this effectively in Virtualizor. Anybody have any tips for Virtualizor?
Needing to sub-allocate for VPN tunnels is usually what we set it up for.
With OpenVZ it seems common to give a prefix, which can be a /64, to the container. But then the client needs to configure each IPv6 address in the management panel. I don't understand why that's required and why the prefix isn't instead routed to the instance. Then the instance could use any address within the prefix.
We provide one /48 per customer. Then the customer can split it into different /64's which they route to the VPS.
That is, to the VPS we route only the /64.
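Roughly, the router-side part of that looks like this (a sketch with made-up prefixes and interface names, not our exact config):

    # Customer holds 2001:db8:1234::/48 and asks for 2001:db8:1234:1::/64 on their VPS.
    # On the router, that /64 is simply pointed at the VPS's primary IPv6 address:
    ip -6 route add 2001:db8:1234:1::/64 via 2001:db8:ffff::10 dev vlan100
    # Inside the VPS, the customer can then use any address out of 2001:db8:1234:1::/64
    # without the router being touched again.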
IPv6 is like IPv4 in the old days, when you didn't need an exceptional reason to get free addresses from ARIN.
The OpenVZ venet interface doesn't have a MAC address, so it doesn't work well with IPv6. You have to statically assign each address from the host node side.
https://wiki.openvz.org/IPv6
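For reference, the static assignment from the host side looks something like this (container ID and prefix are made up):

    # On the OpenVZ host node: push individual IPv6 addresses into container 101 over venet
    vzctl set 101 --ipadd 2001:db8:1:2::10 --save
    vzctl set 101 --ipadd 2001:db8:1:2::11 --save
    # ...one command per address, which is why /64-per-container rarely means a usable /64.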
OpenVZ veth interface would work properly just like KVM, but none of the providers offer that.
When I created TraceArt at Hack Arizona 2016, which requires routed IPv6, I resorted to disabling the provider's IPv6 and using the routed address space from Tunnel Broker.
Yes, I know it's a point-to-point interface, which means you can't use SLAAC. But I don't see why it wouldn't be possible to route a prefix to the interface; you don't even need a via gateway since it's point-to-point.
BTW, it shouldn't be any problem to have both a /64 route to the interface and a /128 route for each IPv6 address the customer configures in the management panel. That way, customers who want to use the whole prefix would be able to do so.
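Something like this is what I mean, with a hypothetical prefix, assuming the panel has already configured ::1 in the container (so its /128 route via venet0 exists on the host):

    # On the host node: route the whole /64 towards the container's existing address.
    # The next hop resolves through the /128 route that vzctl already installed.
    ip -6 route add 2001:db8:1:2::/64 via 2001:db8:1:2::1
    # The container could then bring up any other address from 2001:db8:1:2::/64 on venet0.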
I don't agree that KVM instances are usually configured properly. Often they only get a /64 prefix assigned on-link on the external interface, which means you need an NDP proxy to use it on another interface such as a Docker bridge. This is sort of broken. A KVM instance should get a routed IPv6 prefix.
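For anyone stuck with an on-link-only /64, the NDP proxy workaround looks roughly like this (addresses and interface names are examples):

    # On the KVM guest: the provider put 2001:db8:5::/64 on-link on eth0.
    # To use one of its addresses behind a Docker bridge, answer neighbor
    # solicitations for it on eth0:
    sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
    ip -6 neigh add proxy 2001:db8:5::100 dev eth0
    # ...and repeat the 'neigh add proxy' line for every address moved off eth0,
    # which is exactly why a routed prefix would be cleaner.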
https://wiki.openvz.org/Virtual_network_device
Venet drops IP packets from the container whose source address, and into the container whose destination address, does not correspond to an IP address of the container.
Thus, routing isn't possible with venet.
My workaround is using robbertkl/docker-ipv6nat for Docker IPv6.
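In case it helps anyone, my setup is roughly this; the daemon.json keys are standard Docker options, the ULA prefix is just an example, and the exact image name/flags are from memory, so check the project README:

    # /etc/docker/daemon.json - give containers ULA addresses, NAT66 handles the rest
    cat > /etc/docker/daemon.json <<'EOF'
    { "ipv6": true, "fixed-cidr-v6": "fd00:dead:beef::/64" }
    EOF
    systemctl restart docker
    # Run the NAT66 helper alongside the containers:
    docker run -d --name ipv6nat --privileged --network host --restart unless-stopped \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      -v /lib/modules:/lib/modules:ro \
      robbertkl/ipv6nat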
Let's ask @MaxKVM and @SpartanHost and @EvolutionHost why they don't have routed IPv6.
We provide a /64 per VPS, but we allow only 50 IPv6 addresses to be used from that /64 per VPS, as adding too many IPv6 addresses can saturate your router/switch.
We ran into an issue with one client who was running some IPv6 proxy daemon, "ndppd", and it was causing pfem (the PFE manager) on our switch to crash, causing short outages. We then instructed the client to limit themselves to 50 IPv6 addresses, restricted it on the VPS node as well, and never had issues again.
Most clients hardly use 1-2 IPv6 addresses, but even 1-2 abusive users can cause a lot of issues.
If you provide routed IPv6, the number of addresses in use would not affect the router in any way, since the router only holds the route for the prefix rather than a neighbor entry per address.
Then you can limit on-link IPv6 to a single address. It needs to be in another /64, and could simply be a link-local address.
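Concretely, on the router it's just this (hypothetical prefix), and the customer's /64 never shows up in the neighbor table:

    # Exactly one neighbor entry for this VPS: its link-local address on the shared segment.
    ip -6 route add 2001:db8:42:7::/64 via fe80::5054:ff:fe12:3456 dev vlan200
    # Whatever the VPS does inside 2001:db8:42:7::/64 is invisible to the router,
    # so 1 address or 10,000 addresses makes no difference to it.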
Our main goal was to make it possible to move /64 subnets between VPS nodes in the same VLAN, which rules out routed IPv6. But maybe no one cares about keeping the same IPv6 subnet if they're migrated to another VPS node? It would certainly be operationally easier on our end if we did routed IPv6 to each VPS node.
For many use cases, it is very important to keep the same IPv6 routed subnet during a live migration event. The subnet would be written into config files, DNS records, etc.
Changing the routed IPv6 subnet is as bad as changing the IPv4 address: you have to schedule a maintenance window, inform users in advance, and keep both subnets attached for a few days so that DNS updates take effect.
There wouldn't really be any technical way to do that: e.g. if we routed a /48 to a VPS node and gave each VPS a /64, it wouldn't be possible to migrate any of those /64s to a different VPS node. I'm aware of setups where L3 is run on the VPS node, e.g. a BGP session back to the upstream L3 switch/router, which would allow migrating /64s between VPS nodes, but Virtualizor doesn't natively support such a setup.
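For what it's worth, the node-side half of that setup isn't much config. A rough sketch with BIRD 2 and made-up ASNs/prefixes (Virtualizor would have no idea any of this exists):

    # /etc/bird.conf on the VPS node: announce the /64s that currently live on this node
    cat >> /etc/bird.conf <<'EOF'
    protocol static vps_prefixes {
      ipv6;
      route 2001:db8:1:2::/64 via "vmbr0";   # VPS A's /64; moves when the VPS moves
    }
    protocol bgp upstream {
      local as 65001;
      neighbor 2001:db8::1 as 65000;
      ipv6 { import none; export where source = RTS_STATIC; };
    }
    EOF
    birdc configure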
That's just a matter of ip route commands. Blame Virtualizor if a certain feature is missing.
What you mention did come to mind, but sadly it's completely unsupported by Virtualizor, so that's where we're let down.
How long is your grace period until providers not offering routed subnets get added to the list?
It's time to ditch Virtualizor and make new control software.
Let's call it hyperbrueggus.
Do one thing at a time. It will take a while.
/64 or /114; technically you don't need more than a /114.
Are you mad or just fucking with people?
Francisco