Comments
I'd had good interactions with them until now; they ALWAYS helped and were also super nice... but this time it looks like I had bad luck, and because I can't make it work, I've cancelled my box...
What was/is wrong with it?
After the migration IPv6 stopped working. I tried to fix it... but it keeps crashing after a few minutes...
I experienced the same. I tried everything possible without luck.
The only solution I found was reinstalling from the template instead of the image. Then IPv6 miraculously started to work.
Thanks for the hint... I didn't want to reinstall it... but I guess I have to...
If you haven't already reformatted it, have you posted your net config? Asking because I had something similar but didn't have to reformat. I'd be willing to take a stab if you're game. Which OS?
What was crashing? You probably have a bad kernel setting or something. Having the crash output would help point to a hardware or driver issue.
If it's what I remember, IPv6 comes up, works for ~10s, and then just stops working. Nothing wrong in any log files. If my admittedly sketchy memory serves, it had something to do with the .1 being your primary IP on that interface, but it wasn't 100% conclusive.
To be honest, I'm not sure what you're suggesting. The network setup was okay. It's a routed /64, so :1 is added anyway for IPv6 to work at all, as that's the assigned address the traffic is routed to. In mine and someone else's case IPv6 died after several minutes anyway, so that's not related.
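For anyone who wants to catch the moment it dies, one approach is to ping a known IPv6 address while watching the gateway's neighbour entry; if fe80::1 flips to FAILED when connectivity drops, that points at neighbour discovery rather than addressing. A rough sketch, assuming the interface is ens3:

# watch connectivity and the gateway's NDP state side by side (ens3 assumed)
while true; do
    date
    ping -6 -c 1 -W 2 2001:4860:4860::8888 >/dev/null && echo "v6 OK" || echo "v6 DEAD"
    ip -6 neigh show dev ens3    # watch whether fe80::1 flips to FAILED/STALE
    sleep 10
done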
The only difference between my image network setup and the template network setup, which works without interruptions, is that the template still uses the old eth0 interface. The problem must be elsewhere.
And to make it funnier, the issue is related only to some HH locations. Before I reinstalled my Norway VPS with the template, I had an identical image setup (with a different IP subnet, of course) at their Austrian and Norway locations. The Austrian /64 worked flawlessly; the Norway /64 died after a few minutes. If memory serves me correctly, someone reported the same issue with the Stockholm (Sweden) location.
I tried some other things (i.e. adding net.ipv6.conf.all.accept_ra = 0 and net.ipv6.conf.ens3.accept_ra = 0 to sysctl.conf, etc.) but no love, so I finally gave up and reinstalled to the template, where IPv6 doesn't die.
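For reference, the attempt above boils down to this (assuming /etc/sysctl.conf and the ens3 interface already mentioned):

# /etc/sysctl.conf -- stop accepting router advertisements (didn't help here)
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.ens3.accept_ra = 0

# apply without rebooting
sysctl -p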
Okay, never mind then.
In some HostHatch locations, fe80::1 didn't work as the IPv6 gateway, and I had to use the "legacy" gateway instead, for example 2a04:bdc7:100::1 in Los Angeles.
Does the template do this? It's a bad template then. ifconfig is legacy and not guaranteed to be on every system; it's not installed out-of-the-box on a minimal Debian install, for example. It should use ip addr add instead of ifconfig inet6 add, and every up command should also have a down command that does the opposite.
On Debian I've had better luck using the legacy approach (an interface alias for each IP) vs hacking scripts in to add IPs:
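Roughly like this; a sketch rather than the exact config, with a placeholder subnet and ens3 assumed:

# /etc/network/interfaces
auto ens3
iface ens3 inet6 static
    address 2a04:bdc7:xxxx::1/64
    gateway fe80::1

# one alias stanza per additional address
auto ens3:0
iface ens3:0 inet6 static
    address 2a04:bdc7:xxxx::2/64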
Having said that, I'm not sure adding the extra IPs should even be required if it's a routed /64.
@Daniel15
fe80::1 didn't work as the IPv6 gateway, and I had to use the "legacy" gateway instead
Interesting. They explicitly requested to set up fe80::1 after the migration, otherwise IPv6 won't work.
Well, fe80::1 worked in my case, but it seems like IPv6 went to sleep after some time.
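For comparison, the two gateway setups look like this (a sketch; ens3 is assumed, and the global gateway is the Los Angeles example from above):

# check the current IPv6 default route
ip -6 route show default

# link-local gateway, as requested after the migration
ip -6 route replace default via fe80::1 dev ens3

# "legacy" global gateway, e.g. the Los Angeles one
ip -6 route replace default via 2a04:bdc7:100::1 dev ens3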
Does the template do this? ifconfig is legacy and not guaranteed to be on every system.
Not the template. You're correct, but it doesn't really matter, as those were additional addresses anyway, and in this case the default "address 2a04:bdc7:xxxx::1/64" also died after some time.
I'm not sure adding the extra IPs should even be required if it's a routed /64.
It's not - you should be fine with just 2a04:bdc7:xxxx::1/64 if you only need IPv6 connectivity and don't require more addresses. For my usage it's always nice to have more addresses, to set up IRC bouncer vhosts and such.
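Adding extra addresses out of a routed /64 is just one command each; a sketch with placeholder addresses, assuming ens3:

# pick any addresses from the routed /64 (placeholders)
ip -6 addr add 2a04:bdc7:xxxx::2/64 dev ens3
ip -6 addr add 2a04:bdc7:xxxx::3/64 dev ens3
# then bind the bouncer to one of them per vhost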
I've used them in the past with good results. Although, I tried accessing their front page a couple weeks back when I was in the market for a cheap box and their frontend was down.
Checked a couple of days later and it was back up, but found that they aren't offering their 256MB OpenVZ boxes anymore, so I've looked elsewhere since then.
I have services with HH and I'm satisfied with what they provide; the problem here is the expectations... The provider should clearly say in the promotional thread that support is basic without any SLA (or whatever designation they want), and everyone would agree to that before buying any service.
HostHatch's offers are generous, but as far as I'm concerned, network reliability seems kind of lackluster in Singapore. Don't misunderstand: the network didn't go down, but it seems to have some kind of weird DDoS protection which times out incoming UDP every now and then. It also seems to fail on some TCP connections.
It could be a problem with my application, but I don't have this issue on Contabo.
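A rough way to check whether inbound UDP actually stalls (assumes iperf3 on the VPS and on a machine outside; the address is a placeholder):

# on the VPS
iperf3 -s

# from outside: push UDP at the VPS for five minutes, watch for gaps/loss
iperf3 -c <vps-ip> -u -b 5M -t 300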
So, what should be expected for promotional services? What's the ETA for support if the plan was promotional? I'll soon have been waiting for two weeks, for example; would that be within an acceptable time frame? And to be clear, I'm asking about a time without any crazy deals, so there should be at least a minimal expectation. @hosthatch
EDIT: Oh, before it’s even asked, ticket #230672
Don't expect anything, and react 'surprised/happy' WHEN they answer.
Is yours an IPv6 issue too?
Better yet, a rep from hosthatch contacted me on here asking for details.. before promptly ignoring my response. That's such a great look. Good job guys.
Two weeks is a good response time!
I haven't heard anything about migration from the Los Angeles storage server that experienced data loss - neither them migrating all VPS on that server to a different one with different hardware, nor them creating a new VPS on a different server with the same size HDD that I could manually migrate to. It's been 3 months since the problem occurred and 1 month since I last requested an update in the ticket.
Plus I have this one that just hasn't gotten a reply at all yet, about TCP connections dropping if they've been open for a while (I hit this a lot with SSH). I included a lot of detail plus a Wireshark capture:

I've been using HostHatch for one of my sites for a good half year, and I'm actually quite impressed. I find I don't need a lot of support, but so far, I've actually found HostHatch quite solid.
Is this happening at one of the Psychz locations like Chicago, by chance? I and others have observed some odd network anomalies (i.e., dropped packets) at that location, and I was able to record these events in Wireshark captures. I've moved affected applications to other providers/locations, and haven't tested it again recently, so I can't say if this is still happening.
Yeah, Los Angeles. If it's a Psychz issue, they should at least reply and say so, and forward the report to Psychz.
I did some more investigation since opening that ticket, and the inbound packets do still reach the server when this occurs; it's just the ACKs for those packets (plus any response from the server) don't make it back to the client for whatever reason.
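The simplest way to confirm that asymmetry is to capture on both ends at once and compare the two traces in Wireshark; a sketch assuming SSH traffic, an ens3 interface on the server, and placeholder addresses:

# on the server: inbound packets should still appear here during a stall
tcpdump -i ens3 -w server.pcap 'tcp port 22 and host <client-ip>'

# on the client: the server's ACKs/responses are what go missing here
tcpdump -i any -w client.pcap 'tcp port 22 and host <server-ip>'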
That's essentially what I observed as well, but it wasn't trivial to reproduce. It involved generating a certain amount of inbound traffic into the server with a commercial backup product. The data transfer always started well, but after some time, the server would stop sending out ACKs, causing the transfer to pause and triggering retransmits/retries. Eventually the backup would finish, but at greatly reduced speed and with multiple transfer errors reported in the backup debug logs.
Interestingly, I could never reproduce the issue with iperf3 or by sending data over SSH. Those always worked fine. So I guess some aspect of the protocol used by the backup software, in combination with the data volume, was not playing well with Psychz's network. I tried pointing my backups to another HH location and the problem disappeared.
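For what it's worth, tests along these lines are the kind that stayed healthy, i.e. sustained inbound traffic that never reproduced the stall (addresses are placeholders):

# iperf3 sends client -> server by default, i.e. inbound at the server
iperf3 -s                    # on the VPS
iperf3 -c <vps-ip> -t 600    # from outside

# sustained inbound data over SSH, also fine
dd if=/dev/urandom bs=1M count=2000 | ssh user@<vps-ip> 'cat > /dev/null'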