Black Friday 2021 - NVMe and Storage deals - Deploy in 16 global locations (APAC/EU/US)
This discussion has been closed.
Comments
I could not find a use for this server; it has constant problems and very low performance compared to other identical HostHatch servers. It's paid for another 1.5 years, and I would gladly transfer it to someone.
Same here. Can't wait to play with my new servers.
Do you mean SG location server?
Yes
In SG location
Mine is working great though.
It's down for me, too. As usual, no response to my ticket yet.
Can you please share price and specs?
Mine is using old Intel, from their offer in August 2021.
so, who wants to transfer out their SG VPS?
I have the same, no issues so far except the recent 4 hour downtime.
https://hetrixtools.com/report/uptime/6fef4813c8aa2014589827e29a5f0ea9/
Don't know
IPv6 is still unreachable; I tried reconfiguring the networking and still get the same result:
PING google.com(sd-in-f101.1e100.net) 56 data bytes
From sd-in-f101.1e100.net icmp_seq=1 Destination unreachable: Address unreachable
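(Aside for anyone debugging the same thing: a minimal IPv6 sanity check, assuming a typical Linux guest and eth0 as the interface name, which may differ on your VM:

    ip -6 addr show dev eth0              # is a global (2000::/3) address assigned?
    ip -6 route show default              # is there a default IPv6 route?
    ping -6 -c 3 2001:4860:4860::8888     # Google public DNS by address, skips name resolution

If the address and route look right but the ping still fails, the problem is usually upstream of the VM.)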
IPv6 is not working for me either. Otherwise no problem.
Seems the new server has arrived. Normally, how long does it take to deploy a new server?
This outage was caused by human error by the data center remote hands who were conducting work in the rack to deploy new hardware for us. They unplugged the node that your VM is hosted on by mistake, and despite our submitting emergency requests, it took a few hours for them to react and get the node back online.
Mine is still available for transfer.
Opened a thread in the service transfer section, maybe someone can make use of it.
Anyone who wants to transfer their Singapore VPS can PM me.
anyone got their sg server provisioned?
SG: the second ETA is 11-14 Jan. Well, it's never on time.
Update on this issue. After opening the ticket 3 days ago and reporting issues daily (working, then not working, and so on), I received an update stating:
Needless to say, the outage persists at this moment, so the thresholds were not sufficient. The majority of my data is moved over TCP via SSH to the storage server, and not that much is going over the WireGuard UDP VPN, so I don't get it.
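(A quick way to check whether the WireGuard path itself is dead, assuming the interface is named wg0 and 10.0.0.1 is a placeholder for the peer's tunnel address:

    wg show wg0 latest-handshakes    # a timestamp that stops advancing means no UDP is getting through
    ping -c 3 10.0.0.1               # ping the peer inside the tunnel

WireGuard re-handshakes roughly every two minutes under traffic, so a stale handshake timestamp is a strong sign the UDP flow is being dropped.)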
Update: ipv6 is working now.
Singapore?
I'm not sure if my issue is related, but several months ago I spent significant time trying to get the HostHatch Chicago location to work as a destination for my Veeam backups. The symptoms were the same for any HostHatch VPS at the Chicago location (I have several), always resulting in retransmissions, slowdowns, and sometimes failures of the backups to complete. Any other destination (even other HostHatch locations) worked fine. But Chicago was always a problem. The issue also happened quite predictably: after a certain amount of data was sent at a high rate, I'd start seeing slowdowns, as Veeam needed to reconnect and retransmit. It happened with every backup job, as if something were triggering it when a certain amount or type of traffic was passed.
With the help of Veeam's support, I did some packet captures on both sides of the transfer and determined that, during the process of transferring backups, some packets were not arriving in Chicago. Veeam's suggestion was that there could be some filtering or deep packet inspection that was interfering, but HostHatch told me that nothing like that is present on their network.
The only way to find out for sure what was going on would be to do a packet capture directly on HostHatch's router to see if the packets were making it that far. (With my own captures, I could only say that packets weren't reaching the VPS.) Although HostHatch did agree to do this, their ticket responses weren't quick enough, and I had to find a solution quickly. So I never followed through with it, and just abandoned Chicago as a backup destination.
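(For reference, a sketch of that kind of two-sided capture, assuming tcpdump is available on both machines, eth0 interfaces, and placeholder addresses:

    # on the sending side (198.51.100.20 = the Chicago VPS)
    tcpdump -i eth0 -w sender.pcap 'tcp and host 198.51.100.20 and port 22'
    # on the receiving VPS (203.0.113.10 = the sender)
    tcpdump -i eth0 -w receiver.pcap 'tcp and host 203.0.113.10 and port 22'

Comparing the two .pcap files in Wireshark then shows which TCP sequence numbers left one side but never arrived at the other.)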
For what it's worth, the packet drops affected TCP packets. And I was not able to reproduce the issue for other types of transfers, like rsync over SSH. It was something about my Veeam backups that triggered it -- or so it seemed.
My theory is still that some filtering is going on -- especially when the same issue doesn't happen at other HostHatch locations with identically configured servers. Otherwise I can't explain it.
There's no theory needed here: HostHatch admitted that their Chicago upstream provider is mis-classifying my up-to-1Gbps rsync (via SSH) transfers as DDoS attacks.
I assume the trigger for @aj_potc and me is the same in this case: high-speed inbound data to the VPS over TCP. I'm trying to rsync backup data to my new 10TB VPS, and I'd expect many users are in a similar situation; what else would you do with a storage VPS other than store data? The symptoms are:
Unfortunately my overnight rsync transfers have been repeatedly interrupted, and the following shows when the UDP data for my WireGuard VPN has been flat-out broken:
I can try throttling the rsync transfers, or taking short breaks between some transfers, to avoid appearing as if I'm DDoSing myself when really I'm just backing up files. The guessing game is frivolous, though.
I have no doubt you're correct. The theory was referring to my particular experience. HostHatch never told me that Psychz was using DDoS protection -- I even asked about it, because I know HostHatch has offered this feature at some locations in the past.
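(A minimal sketch of that self-throttling idea, with placeholder paths and hostname; --bwlimit takes KiB/s by default but modern rsync accepts suffixes like 10M:

    # hard cap on throughput
    rsync -a --bwlimit=10M /data/ backup@chicago.example.com:/mnt/storage/
    # or batch per directory with idle gaps, so sustained inbound
    # traffic doesn't look like a sustained attack
    for dir in /data/*/; do
        rsync -a --bwlimit=10M "$dir" backup@chicago.example.com:/mnt/storage/"$(basename "$dir")"/
        sleep 60
    done

)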
Except for UDP, which I didn't test, I had exactly the same symptoms with Veeam. During my backups, after passing data at roughly 100 MB/sec for some time, there was a sudden connection reset. This caused the Veeam backups to stop momentarily and trigger a reconnect, followed by a much slower transfer speed for the remainder of the backup. If I recall, the connection reset usually happened around 20GB or so into the backup. After that, the remaining transfer would be slow and prone to packet retransmissions, according to the Veeam logs.
In addition, I saw another interesting symptom. When the connection reset happened, any existing SSH connections to the HostHatch server would be terminated. And they weren't normal session disconnections, either; the sockets just seemingly disappeared out from under SSHD, causing the session to appear frozen. I never saw any errors logged on the server when this happened, so could never connect it to anything actually happening on the server itself. It stands to reason that something external was messing with those connections.
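(Not something mentioned above, but a common way to make these silent socket drops visible from the client side is OpenSSH keepalives in ~/.ssh/config; the host alias and address here are placeholders:

    Host chicago-vps
        HostName 198.51.100.20
        ServerAliveInterval 15    # probe the server over the encrypted channel every 15s
        ServerAliveCountMax 3     # error out after 3 missed probes instead of hanging

With these set, a connection whose packets are being eaten fails with an explicit error after about 45 seconds rather than freezing indefinitely.)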
I never tried throttling. I did lots of iperf3 tests, though, and could never get the connection reset to happen with those, even with high-speed, multi-connection transfers. I'm sure I did some tests with rsync as well, but also don't recall being able to trigger a reset. For me, the connection reset seemed to be triggered by some method that Veeam uses to pass data. I could never reproduce it with other applications.
All I know for sure is that this issue is nothing I can fix myself, sadly. I switched to HostHatch London, which works without any hitches. I still use HostHatch Chicago for backups via rsync, but those are different from my Veeam backups, and thankfully I never have problems with them.
@aj_potc I'm confident we're seeing the same thing. Thanks a bunch for sharing, and FYI to others in Chicago!
This is pretty frustrating. At least the customer service person mentioned something believable to me.
Yes, roughly the same. I never quite got to 100 MB/sec, but have moved tens of GBs over SSH at 40-50 MB/s.
Same here as well. All SSH connections would drop, but I could reconnect. They seemed to be slower after this.
Did some poking around the Veeam docs, and it appears to support multi-threaded upload jobs? Maybe that raised your throughput and put you in the red with the DDoS thresholds.
The experiment continues.
I stopped my rsync this morning at 9:41 am, and at 10:46 am my UDP connections recovered. I've restarted the rsync with --bwlimit 10M (a 10 MB/s limit), so hopefully this keeps me under the radar; stopping the transfer does seem to let things recover.
Had the same issue, briefly. Minutes after I started a 2TB incoming transfer via SSH on a new Chicago storage VPS, all SSH connections got jammed up in a way I've never seen before. A second try completed the 8-hour transfer with no issue. There was no issue on the VPS itself; it did seem like the networking interrupted things. The cause remains a mystery; I doubt bandwidth was the trigger, since I got 500 Mbps for the entire transfer on the second attempt, or maybe I got lucky? It was TCP IPv4 SSH on port 22 both times. Haven't tried another transfer or rsync as of yet.
Still waiting after receiving their email that mentioned an ETA of 11-14 Jan...
Yes, indeed. Veeam uses multithreaded transfers to maximize bandwidth usage. It's pretty good at saturating a pipe if the network can support it.
I initially thought the connection resets had something to do with the number of concurrent connections and the volume of data, so that's why I tried testing with iperf3. But no dice. I could never reproduce it. It may be related to packet size/rate, but I'm not sure how to test that.
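(For anyone wanting to repeat that kind of test, the standard iperf3 invocation; the address is a placeholder:

    # on the VPS
    iperf3 -s
    # from the remote end: 8 parallel TCP streams for 60 seconds, then reversed
    iperf3 -c 198.51.100.20 -P 8 -t 60
    iperf3 -c 198.51.100.20 -P 8 -t 60 -R

Varying the MSS with -M might be one way to probe the packet-size theory, though whatever Veeam does on the wire evidently isn't captured by iperf3's traffic pattern.)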
At least now I know I'm not the only one who's seen this behavior out of Chicago. I assumed that with so many clients there, HostHatch/Psychz would already have heard about this.
Me too, for both Singapore and Sydney. The latter wasn't mentioned anywhere.
Around September I noticed a drastic drop in SFTP transfer speeds, down to something like 20-30 Mbps.
Recently I tried using rsync to see if it would make any difference, and it tops out at like 15-20 Mbps... wtf.
This was pulling files from my HostHatch Chicago server.