New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Black Friday 2021 - NVMe and Storage deals - Deploy in 16 global locations (APAC/EU/US)
This discussion has been closed.
Comments
No, they don't have their own network spanning datacenters. They claim to choose datacenters with very premium networks, so each one is independent in that regard.
I'm unclear whether they negotiate their peering and transit separately, or if they just take each datacenter's own BGP blend. I guess it might differ from one datacenter to the other. But this would certainly affect how proactive they can be (i.e., it's one thing if they are a direct client of Telia or GTT, but it might be another if they need to run everything through their colocation partner).
Honestly I can't condemn them so harshly. It's really the same with every provider I've used. The more premium (expensive) ones may be a little more active in contacting their upstreams, but unless the issue is within their own network (which almost never happens in my experience), it's a matter of luck whether anything gets solved. Sometimes it's a transient routing issue and goes away quickly; other times it's a capacity issue that can't be solved without some effort; and yet other times, nobody can seem to reproduce the issue or even cares about it. The last case is of course the worst when it affects you.
I wonder this about most of these problems -- again, not with HostHatch in particular. If I can reproduce slow transfers on certain paths, certainly the guys sitting in the NOCs can see it, too, right? In that case they can at least inform their customers that there's a problem, and not force everybody to do endless MTRs to prove it, and then act like it's a big surprise when they see that some link is at its capacity.
Depends on where you're moving it to I guess. Plus I generally throttle my transfers anyway.
I mean, it's possible it's path-specific, but it seems more widespread than that. They changed a bunch of routes, ranging from GTT to NTT, but it had no effect. At the end of the day, it's literally going from Chicago to New York. It's not like it's going from Chicago across the globe to Singapore/Tokyo or other Asia-Pacific regions.
Then you also have @aj_potc, who was transferring between HostHatch servers in NY and Chicago and still saw slowdowns.
HostHatch finally got back to me that they are investigating with Psychz for their Chicago location.
Awesome! I got a couple of servers with them in Chicago and they are slow. I just quit using them, period. Their NL location works great!
Yeah my Chicago servers have ended up just being idlers. Every other location is fantastic, big fan of their NL one as well, super good network there.
Anyone gotten their Oslo server(s) deployed yet? Mine is still pending at least.
Confirmed. Chicago bandwidth is absolute rubbish for quite some time now.
They might as well shut down the servers to idle in a cold state, to protect the planet by not consuming electricity for nothing.
Edit: tagging @hosthatch and @Emil without hope.
wow.....seems like quite a few people having issues with Chicago.......who would've known.....
My question aged well, it seems. My Oslo server was just deployed.
Also got my Oslo server provisioned today.
## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ##
Yet-Another-Bench-Script
v2021-12-28
https://github.com/masonr/yet-another-bench-script
## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ##
Thu Feb 3 20:04:55 UTC 2022
Basic System Information:
Processor : AMD EPYC 7413 24-Core Processor
CPU cores : 4 @ 2645.030 MHz
AES-NI : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM : 19.6 GiB
Swap : 0.0 KiB
Disk : 90.2 GiB
fio Disk Speed Tests (Mixed R/W 50/50):
iperf3 Network Speed Tests (IPv4):
Provider | Location (Link) | Send Speed | Recv Speed
| | |
Clouvider | London, UK (10G) | 1.19 Gbits/sec | 66.9 Mbits/sec
Online.net | Paris, FR (10G) | 2.64 Gbits/sec | 2.72 Gbits/sec
WorldStream | The Netherlands (10G) | 3.69 Gbits/sec | 3.93 Gbits/sec
WebHorizon | Singapore (400M) | 247 Mbits/sec | 426 Mbits/sec
Clouvider | NYC, NY, US (10G) | 910 Mbits/sec | 974 Mbits/sec
Velocity Online | Tallahassee, FL, US (10G) | 802 Mbits/sec | 1.53 Gbits/sec
Clouvider | Los Angeles, CA, US (10G) | 734 Mbits/sec | 838 Mbits/sec
Iveloz Telecom | Sao Paulo, BR (2G) | 119 Mbits/sec | 791 Mbits/sec
Geekbench 5 Benchmark Test:
Test | Value
|
Single Core | 1141
Multi Core | 4114
Full Test | https://browser.geekbench.com/v5/cpu/12569715
Yes, I do have a network issue with the Chicago VPS.
@hosthatch @Emil - all my Chicago servers are down. Any info?
EDIT: Did you decide to finally shut down that location until you get a better connection there?
Has IPv6 been deployed at all locations in the new panel?
Chicago seems to be back up now.
Chicago network had a 33-minute downtime since 08:00 Chicago time.
The sheer amount of push-ups rsync'ed through this network has crashed the router.
We apologize for the lost billions.
Well at least HostHatch is doing some work on their Chicago servers.
Ran another quick iperf, and it seems like Chicago -> New York (BuyVM) is getting roughly 300 Mbps, a decent improvement compared to the last test, in which speeds dropped to 40 Mbps.
On the other hand, BuyVM (NY) -> Chicago (HostHatch) is in excess of 600 Mbps.
So maybe HostHatch's solution isn't fully implemented yet....
Their twitter says that they are looking at a Chicago network issue.
HostHatch Chicago -> BuyVM NY
BuyVM NY -> HostHatch Chicago
I just tested HostHatch Chicago -> HostHatch New York, and I was able to average 1.04 Gbits/sec during a 60-second iperf test. A big improvement!
The opposite direction (HostHatch New York -> HostHatch Chicago) was a good bit slower: 300 Mbits/sec.
Think I may have spoken too soon... it starts out well, but still falls well below even 100 Mbps over a 60-second time frame.
Early Sunday afternoon Chicago HostHatch -> BUYVM NY
BuyVM NY -> HostHatch Chicago averages over 700 Mbps over a 60 second time frame.
Anybody else run some iperfs recently for their Chicago HostHatch servers?
The results seem highly variable on the route.
My previous test sending data from Chicago out to HostHatch NY is now running much slower (around 300 Mbits/sec). But if I test to another server I have in the NY area, it's saturating the 1 Gbit link.
Inbound to Chicago is the real issue. I can't push more than about 50-70 Mbit to HH Chicago from any New York location I try.
Seems I have the exact opposite issue. Going out of HostHatch Chicago, I cannot break 60 Mbps. Going into HostHatch Chicago, I can exceed 600 Mbps.
That is indeed odd! I'm testing from three different NY-area servers, and consistently get pretty good inbound speeds from HostHatch Chicago, but totally lousy outbound.
HostHatch Chicago storage from Nexril Dallas:
HostHatch Chicago storage from WebHorizon New York (ShockHosting Piscataway NJ):
Try plain old TCP. Rsync runs over SSH, which uses TCP.
TCP is not a reliable reflection of L3 network performance.
You should always measure with UDP.
You need both to get more information on the problem at hand.
YABS for $70/year (60GB storage, 8GB RAM + extra 2GB for two year payment) VPS in Los Angeles:
Does anyone know if HostHatch intentionally changed from GiB (1 GiB = 1024 MiB) to GB (1 GB = 1000 MB) for these NVMe VPSes?
sudo fdisk -l /dev/vda
on their older VPSes shows an exact number of GiB: a 60GB offer would have 60 GiB of space, i.e. 64424509440 bytes. However, on one of the new VPSes a 60GB disk is only 55.88 GiB (59995324416 bytes), meaning it's 6.875% smaller. Only a sample size of 1, so I'm not sure if this is consistently the case. I know hard drive manufacturers use decimal bytes rather than binary bytes, but HostHatch always used to use binary bytes...
Yes, I did ask; it is the case now.
I took over a 1TB STO storage plan and it came with 931 GiB.
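The arithmetic behind both observations, as a quick sketch (1 GB = 10^9 bytes, 1 GiB = 2^30 = 1073741824 bytes):

```shell
# A "60 GB" decimal disk expressed in binary GiB, and likewise 1 TB.
awk 'BEGIN {
    printf "60 GB = %.2f GiB\n", 60e9 / 1073741824   # -> 55.88 GiB
    printf "1 TB  = %.2f GiB\n", 1e12 / 1073741824   # -> 931.32 GiB
}'
```

So 55.88 GiB for a "60 GB" disk and 931 GiB for a "1 TB" plan both line up exactly with decimal marketing units.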
On that note, how are the STO storage users doing? Any issues on your end? I have high IOWait on my VPS that affects single-file upload speeds.