HostHatch issues after maintenance in LA
After the scheduled maintenance on 2023-02-01 that affected Los Angeles nodes, I noticed some issues with their nodes and network.
IPv6 is unreachable. Their router fe80::1 is not responding to neighbor discovery.
MTU on the private network was changed to 1500. It no longer passes 9000-byte packets.
If you have a private network in LA, check the link MTU; the mismatch may cause intermittent connectivity issues later.
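If you want to check, here's a quick sketch (the interface name ens3 and the peer address 10.0.0.2 are placeholders; substitute your own):

```shell
# Show the configured link MTU of the private interface
ip link show dev ens3 | grep -o 'mtu [0-9]*'

# Verify that jumbo frames actually pass to another host on the private net.
# ICMP payload = MTU - 20 (IPv4 header) - 8 (ICMP header) = 8972 for MTU 9000.
# -M do sets the don't-fragment bit, so oversized packets fail instead of fragmenting.
ping -M do -s 8972 -c 3 10.0.0.2
```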
Mine was rebooted yesterday, and IPv6 has been unreachable since. I also found that the MTU on the public network is lower than 1500 (1476), which causes connectivity issues for SSH and other protocols.
Edit: looks like both issues have been fixed.
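If the lower public MTU shows up again, path-MTU discovery from the VPS makes it easy to spot, and pinning the interface MTU down is a usable workaround (ens3 and example.org are placeholders):

```shell
# Show the path MTU hop by hop; a hop reporting pmtu 1476 confirms the issue
tracepath -n example.org

# Workaround: set the interface MTU to what the network actually passes (needs root)
ip link set dev ens3 mtu 1476
```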
It's really a mixed bag at the moment. I've got four VPSes with HostHatch in LA:
185.198.26.x (Intel) is working fine over IPv4 but not IPv6
185.197.30.x (Intel) is completely inaccessible, even though it's booted up and I can connect to it via VNC in their control panel
185.197.30.x (storage VPS, Intel) is working over both IPv4 and IPv6
45.67.219.x (AMD EPYC) is working over both IPv4 and IPv6
I'm not sure why they have IPv6 issues so often. I've got 31 different VPSes for dnstools.ws and the only ones I have IPv6 issues with are HostHatch ones.
220.127.116.11/24 ranges from any of the VPSes.
Yeah, it's really a mixed bag, but also evolving. From what I can tell, some VPSes still have the MTU<1500 issue, some have unreachable IPv6, but others are OK.
@Daniel15 On those with working IPv6, are you using the subnet::1 address? Do other addresses from your subnet work?
Maybe because the VPSes can't ping each other's public IPs, the IPv6 gateway needs to be fe80::1 rather than 2a04:bdc7:100::1.
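For anyone who needs to switch, the link-local gateway can be set by hand like this (the interface name ens3 is an assumption; make it persistent in your distro's network config):

```shell
# Replace the default IPv6 route with the link-local gateway.
# A link-local next hop is ambiguous without a device, so "dev" is required.
ip -6 route replace default via fe80::1 dev ens3

# Confirm it took effect
ip -6 route show default
```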
They entirely changed upstreams, network stability should be miles better now. I think it's worth giving them time to fix these issues.
I think I spoke too soon, because IPv6 is broken on all of them now. I'm using the ::1 address on all of them, I think, and fe80::1 as the gateway. Network stability may very well be better with the new upstream, but it's not stable if it's broken.
The private network is still broken too.
It seems the IPv6 router fe80::1 is still broken.
The router does not send a neighbor solicitation to subnet::1 when I ping the VM from the internet. It still randomly responds to NS from the VM and learns the MAC address if the NS packet was sent from subnet::1. That might be why IPv6 connectivity drops and recovers randomly, depending on when the kernel on the VM refreshes its neighbor/ARP tables.
Perhaps filter rules on their router or host nodes are dropping ICMPv6 packets from the router.
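You can watch that flapping from inside the VM: the kernel's neighbour cache shows whether fe80::1 currently has a learnt MAC (ens3 assumed):

```shell
# A healthy gateway entry looks something like:
#   fe80::1 dev ens3 lladdr 52:54:00:xx:xx:xx router REACHABLE
ip -6 neigh show dev ens3

# If it's stuck in FAILED/INCOMPLETE, flush and trigger a fresh solicitation
ip -6 neigh flush dev ens3
ping -c 3 fe80::1%ens3
```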
To check whether the router is responding to solicitations, try:
sudo apt install ndisc6
sudo ndisc6 -s $yournet::1 fe80::1 ens3
Connections between one of my servers in Las Vegas and my HostHatch storage VPS in Los Angeles seem... weird though. iperf just dies after a while? Maybe it's the MTU issue someone mentioned.
It's fine with another VPS I have in San Jose though (both over IPv4 and IPv6)...
It'll eventually all work properly again. I've learnt to take their maintenance windows and add a few extra days.
Everything seems good on my LA VPS.
The MTU 1500 seems to be a frequent problem. They need to kill it with fire and do 9000 everywhere by default. Learn from previous issues!
You can't use 9000 on anything public facing. If you want to use jumbo frames (9000 MTU), every router the data is going through needs to be configured to use them. That's doable on a local network, but not possible on the internet. On the internet, you'll just end up with fragmented packets, which will make performance even worse.
Switches aren't routers, and the MTU only matters to the switch if you need to communicate with the switch itself with packets over 1500. I know of no use case where a public-facing virtual switch would care (if your switch has a public IP reachable from the internet, you fucked up). An unmanaged switch operates with a higher MTU without doing a thing, because there's no management interface.
I bought a VPS 2 months ago and it has been stuck in a pending state ever since. My ticket has been open for 2 months with no response from the support team.
Sure, but both the switches and routers along the route from source to destination all need to support jumbo frames without fragmenting them. That's practically impossible, since the public internet path is limited to 1500-byte frames.
No, the switch just needs to support frames with a 9000 MTU. Most unmanaged switches support that out of the box, and on managed switches (and routers) it's configurable. No core internet routers have jumbo frames enabled, so your 9000-byte frames will just be fragmented into 1500-byte frames, adding extra overhead.
If you disagree, then feel free to try and show any successful ping across the internet (not just within the same data center) using a 9000 MTU. Make sure you use the 'do not fragment' flag.
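For example, on Linux (example.org is a placeholder destination; -M do is ping's don't-fragment flag):

```shell
# 8972 = 9000 - 28 bytes of IPv4+ICMP headers; across the public internet this
# should fail with "Message too long" at the first 1500-MTU hop
ping -M do -s 8972 -c 1 example.org

# 1472 = 1500 - 28; the largest payload that passes a standard 1500-MTU path
ping -M do -s 1472 -c 1 example.org
```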
Right, the router is the limiter, so having jumbo frames enabled on private and public switches won't make or break the connection. Setting 9000 on a 1500 network does no harm; setting 1500 on a 9000 network does, as repeatedly experienced at HH.
I haven't had an unmanaged switch that didn't support jumbo frames since hubs were a thing.
As explained above, you set 9000 on all switches by default with no harm done, and set the routers to whatever MTU is actually to be used. You're the one who brought up the router.
Get it now? Defaulting the switches to 9000 only helps and doesn't harm. It's still the routers that are the issue.
PM me if you still don't understand. It would help to make my point so HH could be convinced as well. (I've been inconvenienced multiple times, for weeks at a time, so I consider this a fixable issue that would make a difference.)
Private networking is still broken for me. Is it broken for anyone else?
OK, I understand what you're saying now. I thought you meant using MTU 9000 on everything for a public-facing system (the server, the router and any switches).