Comments
They'll probably go to $20...
@hosthatch Hello, I've got one of the 10TB storage servers. Are the 2 CPU cores both dedicated?
@shunglay you probably have at most 50% of one core dedicated.
As far as I know, there's no dedicated CPU on storage servers, it's just a "fair use" policy. They're not designed for heavy compute use cases.
If you need a lot of CPU power, you can get both a storage VPS and a NVMe VPS, and mount the storage on the NVMe VPS using whichever technology you like (NFS, iSCSI, etc). I'm using NFS and it works great for this use case.
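For anyone wanting to try the storage-VPS-plus-NVMe-VPS setup, here's a minimal NFS sketch. The IPs and paths are hypothetical placeholders, not HostHatch's actual internal addressing:

```shell
# On the storage VPS -- export a directory, restricted to the NVMe VPS's
# internal IP. Add a line like this to /etc/exports:
#   /srv/data 10.0.0.2(rw,sync,no_subtree_check)
# then reload the export table:
exportfs -ra

# On the NVMe VPS -- mount the share:
mount -t nfs 10.0.0.1:/srv/data /mnt/data

# Or persist it across reboots via /etc/fstab:
#   10.0.0.1:/srv/data  /mnt/data  nfs  defaults,_netdev  0  0
```

The `_netdev` option tells the system to wait for networking before attempting the mount.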
Note that since their internal network is NOT private (that is, not isolated per customer), you should use encryption and authentication even over the internal network; otherwise there are potential vulnerabilities, such as interception/sniffing via ARP spoofing.
I run WireGuard over the internal network, which handles both encryption and authentication (verifying that the nodes are who they say they are via public key cryptography).
I've got a tutorial here: https://d.sb/2020/12/nfs-howto
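The tutorial has the full details, but the WireGuard side boils down to a small config on each VPS. Everything below (keys, tunnel IPs, port) is a placeholder sketch, not a working config:

```shell
# Hypothetical /etc/wireguard/wg0.conf on the storage VPS.
# Generate real keys with: wg genkey | tee privatekey | wg pubkey > publickey
#
# [Interface]
# Address    = 192.168.100.1/24        # tunnel IP, separate from the internal LAN
# ListenPort = 51820
# PrivateKey = <storage-vps-private-key>
#
# [Peer]                               # the NVMe VPS
# PublicKey  = <nvme-vps-public-key>
# AllowedIPs = 192.168.100.2/32
#
# Bring the tunnel up, then point NFS at the tunnel IPs (192.168.100.x)
# instead of the raw internal network:
wg-quick up wg0
```

Since NFS traffic only ever travels inside the tunnel, sniffing or spoofing on the shared internal network gets you nothing but ciphertext.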
Thanks for this, never even considered WireGuard for this use case. Much less complicated to set up than what I have been using previously!
Oh, thank you~ That's a good solution. I have an NVMe VPS too, so I can try it.
At work we use Kerberos with NFS, but there's an entire team whose only job is to maintain the corporate network. It's way too much setup for a simple use case like this. Kerberos is overkill for communicating between two servers with a small number of user accounts, in the same way that Active Directory is overkill for authentication on your home PCs (unless you have some sort of large fancy home lab).
I don't understand why, after so many years of use and development, NFS still doesn't have built-in encryption without requiring Kerberos.
While so many of the other core Linux tools use encryption -- or at least make it easy to enable -- NFS is like something from the dark ages. It's specifically intended to share data between servers, it's super stable, and yet it isn't suited for use on the public Internet. Really a shame, because it's quite easy to set up and use.
What about the blind, direct mapping of user IDs by number? Was that not enough of a hint at the above?
I find CIFS (Samba) to be more palatable than NFS in some aspects, not to mention easier cross-platform compatibility (even though Windows can have NFS enabled). And now Samsung is working on an in-kernel Samba server; once that's ready, performance will benefit as well: https://www.phoronix.com/scan.php?page=news_item&px=Samsung-KSMBD-v7
Likely yes, that is how much the IP pricing has gone up... I guess we'll try to add a bit more storage to give the plan some more value.
Why not remove IPv4 entirely, add /48 IPv6 only with private IPv4 (for people to transfer data between servers unmetered), and possibly put some cheap add-on of $1/year NAT for internet access. In the end, many people just want storage.
As for the 10 TB storage plan, the default bandwidth is 15TB, and I pay for the additional 10TB bandwidth option, so I have 25TB of bandwidth. If I pay for two years, bandwidth is doubled, so I'd have 25*2=50TB of bandwidth, right?
Will the additional 10TB be doubled too?
I'd like to know what percentage I can use without risking suspension.
Writing 5MB/s via SFTP is causing 15% CPU usage, sometimes lasting hours.
If this percentage would ruffle any feathers, I can throttle it to a lower speed.
For storage servers, I'd prefer both /64 IPv6 and IPv4-NAT included in base price.
I see no benefit for larger than /64 on a storage server.
IPv4-NAT is useful because IPv4 and IPv6 routing are different and one may work better than the other.
Transfers between IPv6 addresses at the same location should be unmetered too.
Switching to private IPv4 for local transfer would increase setup complexity.
@hosthatch what ranges would have ZRH and could I choose the IP?
can i have a test IP and LG?
Yeah, that's surely thrown me for a loop in the past.
I haven't seriously looked at Samba outside of situations where Windows is involved, and that was a very long time ago. Perhaps it's worth another look.
Another very easy option is SSHFS/Fuse, but since that runs in user space, it's not always compatible with every application.
IP is auto-assigned; no choosing (I can't recall any mass provider that allows choosing the exact IP address).
LG: lg.zrh.hosthatch.com (45.91.92.92)
You can train your reading skills at this site and find the required information yourself: https://hosthatch.com/features#datacenters
@hosthatch I ordered a bundle, are those requests getting any attention? I know it takes 3 days to set up VMs, but some sort of confirmation of the order would be nice in the meantime.
There's an RFC draft for it, originally published in March 2019 and most recently updated in November 2020. Hopefully that'll be approved and implemented one day: https://datatracker.ietf.org/doc/html/draft-ietf-nfsv4-rpc-tls
I remember Samba being really insecure, but that was maybe 15 years ago, so it's probably better now. When I get a chance I'll benchmark it vs NFS. For my use case I only needed to share from the storage VPS to the NVMe VPS, so Windows compatibility didn't matter.
(even though Windows can have NFS enabled).
For some strange reason, Windows only has an NFSv3 client, even though Windows Server has an NFSv4 server. huh.
Wow, I didn't know about that! I wonder why Samsung are working on it.
In theory, you could use an IPv6 unique local address (ULA) for the private network; no need to use IPv4. However, HostHatch doesn't support this yet. Maybe it'll be doable once they move over to a VLAN per customer (an isolated private network, rather than an internal network shared with all customers in the same DC).
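If a provider did support it, ULA addressing is just a matter of assigning addresses from fd00::/8 on the internal interface. The prefix and interface name below are made up for illustration (RFC 4193 says to pick a random 40-bit global ID rather than something memorable):

```shell
# Assign ULA addresses on the internal interface of each VPS.
ip -6 addr add fdab:cdef:1234::1/64 dev eth1   # on the storage VPS
ip -6 addr add fdab:cdef:1234::2/64 dev eth1   # on the NVMe VPS

# Verify reachability from the NVMe VPS:
ping -6 fdab:cdef:1234::1
```

ULA prefixes are never routed on the public Internet, so they play the same role as RFC 1918 ranges do in IPv4.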
@hosthatch Do you have this plan still available?
Yes. Ordering is possible.
Cheers,
Put an order in tonight. Will be good for my backups.
Starting in 2017, when Microsoft started killing SMBv1 with fire.
Is there no one who can handle the ticket? No reply in the past few weeks.
How to check and use the double resources without reinstalling the OS?
Do you have a ticket #? We get a bunch of tickets every time we post such a promotion, asking for some sort of strange customization, additional discounts, or faster setup; we don't usually offer any of these things.
You can also hard reboot your server and then resize the disk manually, though it's easier to just reinstall since it's a new server with nothing on it.
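If you do go the manual route, the resize is typically two commands once the virtual disk has been enlarged. Device names below are assumptions; check yours with `lsblk` first:

```shell
# Confirm the disk and partition layout before touching anything:
lsblk

# Grow partition 1 to fill the enlarged disk (growpart is in the
# cloud-guest-utils / cloud-utils-growpart package):
growpart /dev/vda 1

# Grow the filesystem to match -- for ext4:
resize2fs /dev/vda1
# For XFS, resize the mounted filesystem instead:
#   xfs_growfs /
```

Both steps are safe to run on a live system for the common ext4/XFS cases, but a backup beforehand never hurts.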
Do you check Tickets every few days? Is there too much that keeps you too busy?
Welcome to lowendtikets.. We've doubled your bandwidth..
Yeah, there are a bunch of people asking for additional discounts, variations on the special offer, or how to do basic things.
Ticket handling is a weakness, unfortunately. It seems to be hit-and-miss.
I've got a couple that have been open for a long time (one month+). One involves a reproducible network issue that HostHatch never acknowledged in the ticket -- maybe because they didn't have a solution, but it still deserved a reply. The other is something they could resolve in a couple of minutes, but for some reason haven't handled.