New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
The way I interpret it is that I can use one core 100% with no issues (or 1/3 of each core as you mentioned). That's essentially dedicated CPU in that it's one dedicated core (or at least one dedicated thread).
Load average is not always directly correlated with CPU usage percentage. Notably, you can have a high load average but low CPU usage if iowait is high.
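As a quick illustration on Linux, both numbers can be read straight out of procfs (a generic sketch, nothing HostHatch-specific):

```shell
# Load average: 1/5/15-minute averages, then running/total tasks, then last PID
cat /proc/loadavg

# iowait is the 6th field on the aggregate "cpu" line of /proc/stat:
# time the CPU sat idle while waiting on disk I/O
awk '/^cpu /{printf "iowait jiffies: %s\n", $6}' /proc/stat
```

Processes blocked in uninterruptible I/O sleep count toward load average without burning CPU, which is exactly the high-load/low-usage case described above.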
Has anybody got the internal IP working between 2 servers in the same DC?
I have it set up, but when I ping it says the host is not reachable. Tried ICMP and tcptraceroute but no luck.
@dosai said:
I have it working in Chicago and Stockholm with no issues. Make sure your netmask is /8 (255.0.0.0) and your firewall rules allow it.
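For illustration, a Debian-style `/etc/network/interfaces` stanza matching that /8 netmask might look like this (the interface name `eth1` and the `10.x.x.x` address are placeholders, not HostHatch-assigned values):

```
# /etc/network/interfaces (sketch; interface name and address are placeholders)
auto eth1
iface eth1 inet static
    address 10.0.0.2
    netmask 255.0.0.0
```

With a /8, both servers just need addresses anywhere in the same 10.0.0.0/8 range for the kernel to treat them as on-link.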
Thanks, I have verified both, but it's still not working! :/
What location are we talking about? I think they mentioned that there's one (or two?) locations which have two separate private networks.
Might have been Amsterdam or London - but I'm not sure.
AMS - AMS (same DC)
Yes, they have two different private networks in AMS, same DC. Open a ticket, they’ll migrate one of the VPS to the other network. Only the internal IP will change for you, nothing else.
I have a ticket open but have yet to get a response. I will get clarification on the different private networks in AMS. Thanks.
It's working for me in Chicago.
Note that it's not a private isolated network just for your servers. You can see other customers on the network too. There's a small but constant amount of broadcast traffic flying over the network. If you want to use an unencrypted protocol like NFS (without Kerberos), you likely still want to encrypt the connection, for example by using Wireguard between the servers, even over the internal network. I think HostHatch said somewhere that they'll be looking into isolated networks in H1 2021.
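As a sketch of that WireGuard-between-servers approach (the keys, tunnel addresses, and internal endpoint below are all placeholders):

```
# /etc/wireguard/wg0.conf on server A (sketch; keys and IPs are placeholders)
[Interface]
Address = 192.168.100.1/24
ListenPort = 51820
PrivateKey = <server-A-private-key>

[Peer]
PublicKey = <server-B-public-key>
# Point the endpoint at the peer's internal-network IP so encrypted
# traffic still flows over the internal network, not the public internet
Endpoint = 10.0.0.3:51820
AllowedIPs = 192.168.100.2/32
```

Then NFS (or any other unencrypted protocol) is bound to the tunnel addresses, so anything crossing the shared broadcast domain is ciphertext.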
Did you get an answer to your question?
Right, but only @hosthatch should be able to sniff the unencrypted traffic, making this network somewhat safer than the Internets.
Not yet. I don't want to bother them by asking in a ticket so I'll wait until the new year.
Nothing stops a bad actor from ARP poisoning and man-in-the-middling you. Take appropriate measures; it's not a dedicated line.
We do not enable nested virt for anyone at this time.
Thanks for your reply. Hope you're taking some much deserved time off during the holiday period! :smile:
Yeah, this is what I was thinking. I think all the VPSes in the same location would be in the same broadcast domain on the internal network.
Isolated networking (like what BuyVM are doing now) is better since it's an isolated network per customer, so you can choose whatever IPs you want for the private network, and it's just your traffic on it.
Does anyone else find that HostHatch VPSes kick them out of SSH relatively frequently, even when the connection isn't idle? I'm connected to a bunch of VPSes via SSH at the moment, yet just the HostHatch ones sporadically throw an error like "Connection reset by 2605:4840:xxxxxx port 22". I enabled keepalives in my client-side SSH config (`ServerAliveInterval 60`) and it didn't help. I guess I should probably install Eternal Terminal or mosh.

Yes, I've noticed long-running borg backups via SSH (IPv4) die with a connection reset error after some hours (only with HostHatch).
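For reference, the client-side keepalive setting mentioned above, plus a retry count, would look like this in `~/.ssh/config` (the values are just examples, not a confirmed fix):

```
# ~/.ssh/config (sketch; values are examples)
Host *
    # Send an application-level keepalive every 60 seconds of silence
    ServerAliveInterval 60
    # Only give up after 3 consecutive missed keepalives (~3 minutes)
    ServerAliveCountMax 3
    # TCP-level keepalives as well, in case a middlebox drops idle flows
    TCPKeepAlive yes
```

Note these keep the connection *alive*; they don't help against an active reset from the far side, which is what a "Connection reset" error suggests.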
I always use screen when doing long backups. That makes resuming easier.
My last full rclone backup from the server to Google Drive took a few days. Google Drive has a maximum write quota per day; `rclone --bwlimit 8.5M` works all the time.
Mine will stay connected for days. I'm using IPv4.
Today I checked my screen session and it had terminated. Maybe there was a soft reboot?
Yeah I usually use tmux for long-running things but I'm not used to requiring it for short-running things too. I've had ssh disconnect while I was actively editing a file.
Interesting. Next time I'll try connecting via IPv4 and see if it's any better. All my servers are dual-stack and I've got native IPv6 at home, so most of my connections are via IPv6.
I've noticed the same on some boxes. It seems like it only affects IPv6. A workaround seems to be running a ping to your gateway in a screen, but... meh.
Sounds like IP duplication, or spoofing.
Did you set the appropriate stuff in `sysctl`? That is something in my default setup which might be another difference.

What about CIFS? Is it sufficient to mount with the ‘seal’ option?
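The `sysctl` question above doesn't name specific settings; given the IPv6-only resets discussed earlier, one plausible guess (purely an assumption, not confirmed by the poster) is the router-advertisement handling knobs:

```
# /etc/sysctl.d/99-ipv6.conf (sketch; which knobs matter is an assumption)
# Keep accepting router advertisements even if forwarding is enabled
net.ipv6.conf.eth0.accept_ra = 2
# Accept the default route learned via RA so it doesn't expire silently
net.ipv6.conf.eth0.accept_ra_defrtr = 1
```

If the IPv6 default route expires between RAs, established connections can die in a way that looks like random disconnects, which would also explain why pinging the gateway works around it.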
I'm surprised this deal isn't sold out yet! Great value! Even if @hosthatch said it isn't as great of a deal as the original BF storage deals. I wish I had more money to buy more. LOL. Looking forward to the HK VPS I ordered on the 31st.
I haven't used CIFS much (is it the same as SMB?) so I'm not sure, sorry.
I'm currently trying NFS over Wireguard (at least until HostHatch have isolated private networks) and it seems to be working well. It increases CPU usage a bit, but there's always going to be some overhead with any sort of encryption. There was a proposal from Red Hat for NFS over TLS a few years ago which would have been similar to HTTPS, but I don't think anything ever came of it.
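A sketch of that NFS-over-WireGuard setup, assuming typical tunnel addresses (all paths and IPs below are placeholders):

```
# Server side: /etc/exports — export only to the WireGuard tunnel address
/srv/data 192.168.100.2(rw,sync,no_subtree_check)

# Client side: /etc/fstab — mount via the tunnel IP, not the public one
192.168.100.1:/srv/data  /mnt/data  nfs4  rw,_netdev  0 0
```

Restricting the export to the tunnel peer means plaintext NFS is never reachable from the shared internal network itself.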
SSHFS may be OK too, but I've also never tried it, and I've heard it's quite a bit slower than NFS.
It's only $5 more per year than the Black Friday deal. Still a great price. I grabbed one of the 16GB RAM 80GB NVMe boxes even though it was $10/year more than the BF price.
Among the 3 locations, which one is better for Singapore?
Test using Looking Glass:
Chicago: http://lg.chi.hosthatch.com/
Stockholm: http://lg.sto.hosthatch.com/
London: http://lg.lon.hosthatch.com/