Comments
I'm not seeing any remarkable speed improvements. HostHatch Chicago to HostHatch NYC is only doing about 80 Mbit/sec right now for me.
I haven't had any issues with HostHatch in Chicago so far.
It really depends on what the ticket is about. In my experience, the simpler tickets are handled quickly while the more in-depth tickets take a lot longer. I've got a ticket about long-running connections dropping, with a Wireshark packet capture attached (where you can see the server is not responding to TCP retransmits), and it's been open for 5 months with no reply. On the other hand, one of my legacy VPSes that was migrated to their new control panel lost IPv6 connectivity during the move and they fixed it and replied a few hours after I opened the ticket (some issue about the network filters for the VPS).
Which location? This is what I'm currently seeing in Los Angeles on two legacy VPSes using iperf3:
NVMe VPS to NVMe VPS gets closer to the max 10 Gbps - it seems like the storage VPSes have slower network performance for whatever reason. Same with speed tests through the speedtest CLI - the storage VPSes get slower speeds than the NVMe VPSes, even though support says they're connected to the same switches and use the same internet connection.
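For anyone who wants to reproduce these numbers, this is roughly how an iperf3 test between two VPSes looks (the hostname here is a placeholder - substitute the other VPS's IP or hostname):

```shell
# On the receiving VPS: start an iperf3 server (listens on port 5201 by default)
iperf3 -s

# On the sending VPS: run a standard 10-second test against it
iperf3 -c nvme1.example.com

# A single TCP stream can be limited by per-flow throughput;
# -P 4 opens 4 parallel streams to check whether the path itself is the bottleneck
iperf3 -c nvme1.example.com -P 4

# -R reverses direction (server sends, client receives) without swapping roles
iperf3 -c nvme1.example.com -R
```

Comparing the single-stream and `-P 4` results is a quick way to tell a per-flow limit (like the single-thread issue mentioned below) apart from an overall port/link limit.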
I didn't put important data on it. The new panel flags it as a legacy instance: "The administrative features are limited. Please reinstall your server to upgrade and enable all features." So I reinstalled my server, and then the hard drive got smaller. I reinstalled it again during the night and it seems to have regained its original size.
Please share full information when making comments like this so you don't leave other people in panic, since your comment makes it seem like we shrank your volumes and lost 1.9TB of your data, which is not the case and would be frankly quite stupid of us to do.
We've ordered secondary (non-Psychz) transit there, since they have been useless in trying to fix this problem. It only affects their colo clients, most of whom appear to be gaming companies - which don't really care about single-thread speed.
Just waiting for the cross-connect to be run, which should hopefully happen in the next few days, and then this will be an issue of the past.
Appreciate the update on that! I had previously reported some speed issues via ticket and was told that extra transit was on the way in a few locations. I realize the setup for that isn't under your control and can take time.
In the future, if you get a smaller drive than expected right after a reinstall, you can usually fix that yourself by growing the partition to use the whole drive. Basically, a single template is used for all drive sizes, and on first boot it's supposed to expand to use all the available space. But sometimes people log in too early, or the first-boot process doesn't finish, or whatever, and you need to do it manually.
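For reference, the manual steps usually look something like this - a minimal sketch assuming a typical KVM VPS layout (root filesystem on `/dev/vda1`, ext4); check `lsblk` first and adjust the device names to match what you actually have:

```shell
# See the disk and partition layout; confirm which partition is undersized
lsblk

# Grow partition 1 on /dev/vda to fill the disk
# (growpart is in the cloud-guest-utils / cloud-utils-growpart package)
growpart /dev/vda 1

# Resize the ext4 filesystem to fill the enlarged partition
resize2fs /dev/vda1

# If the filesystem is XFS instead, grow it via the mount point:
# xfs_growfs /
```

Both `resize2fs` and `xfs_growfs` can grow a mounted filesystem online, so no reboot is needed after this.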
He's referring to how they migrated compute and storage VMs at different times. I lost a server in the legacy panel a couple of months back; it was recreated in the new panel without the private NIC, and I had to wait for the remaining servers to be migrated and switched from legacy private NICs to the new Private Network with VLAN support. But I think the migrations are done (in Chicago, anyway), so if it's not showing in the new panel, you should be able to open a ticket and have them enable the new Private Network feature.