Hosthatch, What's wrong with you?


Comments

  • aj_potc Member

    I'm not seeing any remarkable speed improvements. HostHatch Chicago to HostHatch NYC is only doing about 80 Mbit/sec right now for me.

  • mwt Member

    I haven't had any issues with HostHatch in Chicago so far.

  • default Veteran

    @mwt said:
    I haven't had any issues with HostHatch in Chicago so far.

    Thanked by: Logano, TimboJones
  • Daniel15 Veteran
    edited July 2022

    It really depends on what the ticket is about. In my experience, the simpler tickets are handled quickly while the more in-depth tickets take a lot longer. I've got a ticket about long-running connections dropping, with a Wireshark packet capture attached (where you can see the server is not responding to TCP retransmits), and it's been open for 5 months with no reply. On the other hand, one of my legacy VPSes that was migrated to their new control panel lost IPv6 connectivity during the move and they fixed it and replied a few hours after I opened the ticket (some issue about the network filters for the VPS).
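For anyone who wants to look for the same pattern in their own capture, a rough sketch of filtering for retransmissions with tshark (the filename here is a placeholder) looks like this:

```shell
# List TCP retransmissions in a saved capture file (capture.pcap is a placeholder).
tshark -r capture.pcap -Y "tcp.analysis.retransmission"
# The same expression works as a display filter in the Wireshark GUI:
#   tcp.analysis.retransmission
```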

    @jbuggie said: My compute VPS used to be able to connect to the storage node VPS via their 10Gbps network (separate NICs). That has not been possible for a while now.

    Which location? This is what I'm currently seeing in Los Angeles on two legacy VPSes using iperf3:

    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec   848 MBytes  7.11 Gbits/sec    3   1.21 MBytes
    [  5]   1.00-2.00   sec   599 MBytes  5.02 Gbits/sec    0   1.37 MBytes
    [  5]   2.00-3.00   sec   576 MBytes  4.83 Gbits/sec    0   1.19 MBytes
    [  5]   3.00-4.00   sec   591 MBytes  4.96 Gbits/sec    1   1.30 MBytes
    [  5]   4.00-5.00   sec   929 MBytes  7.79 Gbits/sec  102   1.33 MBytes
    [  5]   5.00-6.00   sec   859 MBytes  7.20 Gbits/sec    1   1.50 MBytes
    [  5]   6.00-7.00   sec   502 MBytes  4.21 Gbits/sec    0   1.35 MBytes
    [  5]   7.00-8.00   sec   564 MBytes  4.73 Gbits/sec    0   1.21 MBytes
    [  5]   8.00-9.00   sec   596 MBytes  5.00 Gbits/sec    0   1.38 MBytes
    [  5]   9.00-10.00  sec   522 MBytes  4.38 Gbits/sec    0   35.0 KBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.00  sec  6.43 GBytes  5.53 Gbits/sec  107             sender
    [  5]   0.00-10.00  sec  6.43 GBytes  5.52 Gbits/sec                  receiver
    

    NVMe VPS to NVMe VPS gets closer to the 10Gbps maximum; the storage VPSes seem to have slower network performance for whatever reason. Same with speed tests through the speedtest CLI: the storage VPSes get slower speeds than the NVMe VPSes, even though support says they're connected to the same switches and use the same internet connection.
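For reference, a minimal sketch of reproducing a test like the one above between two VPSes (the address is a placeholder for the other VPS's IP):

```shell
# On the far-end VPS: start an iperf3 server in the background.
iperf3 -s -D
# On the near-end VPS: run a 10-second test (the Retr column counts TCP retransmits).
iperf3 -c 203.0.113.10 -t 10
# Add -R to test the reverse direction (far end sends, near end receives).
iperf3 -c 203.0.113.10 -t 10 -R
```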

  • GuGuGee

    @TimboJones said:

    @GuGuGee said:

    @Chocoweb said:
    My servers in Chicago lost DHCP and the internal network interface
    after their migration to the new cloud portal.
    I figured nobody would reply to me, but I still opened 2 tickets.
    Surprisingly, the problems were solved a day later.
    The tickets were never answered, but I happily closed them.

    After migrating to the new panel, my Chicago VPS's 2T storage drive became 100G and my 40G NVMe drive became 10G. I sent a ticket; no reply yet :(

    Showing in the panel only (cosmetic) or your VPS really had data loss? Very different things.

    I didn't put important data on it. The new panel marks this as a legacy instance with limited administrative features: "The administrative features are limited. Please reinstall your server to upgrade and enable all features." So I reinstalled my server, and then the hard drive got smaller. But I reinstalled it again during the night and it seems to have regained its original size.

  • hosthatch Patron Provider, Top Host, Veteran

    @GuGuGee said:

    After migrating to the new panel, my Chicago VPS's 2T storage drive became 100G and my 40G NVMe drive became 10G. I sent a ticket; no reply yet :(

    Please share full information when making comments like this so you don't leave other people in panic, since your comment makes it seem like we shrank your volumes and lost 1.9TB of your data, which is not the case and would be frankly quite stupid of us to do.

    Thanked by: skorous, masteri
  • hosthatch Patron Provider, Top Host, Veteran

    @aj_potc said:
    I'm not seeing any remarkable speed improvements. HostHatch Chicago to HostHatch NYC is only doing about 80 Mbit/sec right now for me.

    We've ordered secondary (non-Psychz) transit there, since they have been useless in trying to fix this problem. It only affects their colo clients, most of which appear to be gaming companies, which don't really care about single-thread speed.

    Just waiting for the cross-connect to be run, which should hopefully happen in the next few days, and then this will be an issue of the past.

    Thanked by: skorous, masteri, aj_potc
  • @hosthatch said:

    @aj_potc said:
    I'm not seeing any remarkable speed improvements. HostHatch Chicago to HostHatch NYC is only doing about 80 Mbit/sec right now for me.

    We've ordered secondary (non-Psychz) transit there, since they have been useless in trying to fix this problem. It only affects their colo clients, most of which appear to be gaming companies, which don't really care about single-thread speed.

    Just waiting for the cross-connect to be run, which should hopefully happen in the next few days, and then this will be an issue of the past.

    Appreciate the update on that! I had previously reported some speed issues via ticket and was told that extra transit was on the way in a few locations. I realize the setup for that isn't under your control and can take time.

  • @GuGuGee said:

    @TimboJones said:

    @GuGuGee said:

    @Chocoweb said:
    My servers in Chicago lost DHCP and the internal network interface
    after their migration to the new cloud portal.
    I figured nobody would reply to me, but I still opened 2 tickets.
    Surprisingly, the problems were solved a day later.
    The tickets were never answered, but I happily closed them.

    After migrating to the new panel, my Chicago VPS's 2T storage drive became 100G and my 40G NVMe drive became 10G. I sent a ticket; no reply yet :(

    Showing in the panel only (cosmetic) or your VPS really had data loss? Very different things.

    I didn't put important data on it. The new panel marks this as a legacy instance with limited administrative features: "The administrative features are limited. Please reinstall your server to upgrade and enable all features." So I reinstalled my server, and then the hard drive got smaller. But I reinstalled it again during the night and it seems to have regained its original size.

    In the future, if you get a smaller drive than expected right after a reinstall, you can usually fix it yourself by growing the partition to use the whole drive. Basically, a single template is used for all drive sizes, and on first boot it's supposed to expand to use all the space. But sometimes people log in too early, or the first-boot process doesn't finish, and you need to do it manually.
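A minimal sketch of doing that manually, assuming an ext4 filesystem on /dev/sda1 (device names vary; check yours with lsblk first):

```shell
lsblk                 # compare the disk size to the partition size
growpart /dev/sda 1   # grow partition 1 to fill the disk (from cloud-utils)
resize2fs /dev/sda1   # grow the ext4 filesystem to fill the partition
# For an XFS root filesystem, use "xfs_growfs /" instead of resize2fs.
```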

  • @Daniel15 said:

    @jbuggie said: My compute VPS used to be able to connect to the storage node VPS via their 10Gbps network (separate NICs). That's has not been possible for a while now.

    Which location? This is what I'm currently seeing in Los Angeles on two legacy VPSes using iperf3:

    He's referring to how they migrated compute and storage VMs at different times. I lost a server in the legacy panel a couple of months back; it was recreated in the new panel without the private NIC, and I had to wait for the remaining servers to be migrated from the legacy private NICs to the new Private Network with VLAN support. I think the migrations are done (in Chicago, anyway), so if it isn't showing in the new panel, you should be able to open a ticket to have the new Private Network feature enabled.
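If the private NIC does show up but isn't configured, a rough sketch with iproute2 looks like this (the interface name, VLAN ID, and address are placeholders; use the values shown in the panel):

```shell
ip -br link                                           # list interfaces
ip link add link eth1 name eth1.100 type vlan id 100  # create the VLAN subinterface
ip addr add 10.0.0.2/24 dev eth1.100                  # assign the private address
ip link set eth1.100 up                               # bring it up
```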
