New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
Wrong department. The reason is simply one of priorities. Some providers go for safety and take the affected VMs or even the whole node offline, while others, to put it simply, aim for no downtime plus the earliest rebuild possible ("my uptime numbers and my equipment are the priority") and don't care much about performance for users during the rebuild.
Plus, evidently, it's a question of which segment a provider is in. RAID 5 and 6 are the cheapest, so in a segment (the low and mid end of LET) where price is a, and likely the, decisive factor, providers tend to go for RAID 6 plus one or two spare drives for the whole location (not per node or array). Why RAID 6? Because it's the "sweet spot" in that segment: just one additional drive gives "double protection" over RAID 5 (or so they like to think, and so they read in the marketing material), but in most cases it's still much cheaper than RAID 1 or 10.
Plus, evidently, "automatic array rebuild without technician intervention needed" is irrelevant with a decent provider, because they have someone in the NOC 24/7 anyway. But hey, soon you'll see marketing material saying that their equipment (incl. drives and RAID controllers) is AI controlled!!! Hurray. /sarcasm
Did anyone receive the compensation? Mine still says
That's only relevant if we know your service was already paid up for 1.5 years.
Yeah sorry, that wasn't clear, not sure why I just pasted that hahaha
This was from the last Black Friday deal with 2 years of prepayment, and it was deployed on 31-12-2021, even though they said this in their deployment email
But that didn't happen either, kinda makes me a bit confused.
Sounds like you have plenty of time to ticket and have that corrected.
Yeah, I'm just asking here first because I read that they take a very long time to reply to tickets, and I thought the info might be available elsewhere because of the new portal / old portal thing (I signed up when only the new portal was available, and it didn't have some features). I'll wait a couple of days to see if anyone else got the 3 months or if it's only me, and if so I'll contact them.
TBH I really like their offer, and I wouldn't be mad even if I didn't get the compensation.
For any backups you care about, I'd really recommend using Borgbackup's append-only mode. It prevents the client from deleting any data from the backup, which means that even if your server gets hacked and the attacker also deletes the backups, you can still recover. A lot safer than a blind rsync mirror with no historical backups. Borg's deduplication is pretty good, so I keep daily backups for the past month as well as monthly backups indefinitely.
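In case it helps anyone, here's roughly what the append-only setup looks like when the repo lives on a remote backup host reached over SSH. Hostnames, usernames, and paths below are made up; adjust to your setup:

```shell
# On the backup host: pin this client's SSH key to append-only mode in
# ~/.ssh/authorized_keys (key shortened; repo path is an example):
#   command="borg serve --append-only --restrict-to-path /srv/backups/myrepo",restrict ssh-ed25519 AAAA... client@example

# On the client: back up as usual. With that key, deletes and prunes are
# only recorded in the transaction log, so no data is actually freed:
borg create ssh://backup@backup-host/srv/backups/myrepo::daily-{now} /etc /home
```

Note that append-only applies to the access path, not the repo contents as such: an attacker with the restricted key can still append junk, and pruning has to be done from a separate, trusted machine that connects with full access.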
Will the new nodes in Los Angeles have the same config? I'm still using my storage VPS that experienced data loss, but I think I'd like to migrate to a newer node at some point (once my other VPSes can be migrated to the new panel, as I'm using the internal network between all of them).
My hacked-together-with-rsync backups use filesystem-based snapshots (not terribly efficient, but it works: many hardlinks) to keep old versions of files.
Unless they properly kill the backups by gaining access to the backup system, rather than pushing an empty volume from the client. I use a soft-offline arrangement to protect a bit against that, and true offline copies for a few really important bits of information.
I like to be wary of deduplication, as the whole point of backups is strategic redundancy. My snapshots do the same in a sense: many daily and other snapshots hardlink to the same unchanged files, but I have more than one snapshot chain, so there isn't just one physical copy of each. Otherwise you only really have one backup copy of a file that never changes; it could be a vital file, and if you get a bad sector where that one much-referenced copy is held, all copies are gone. This is less of an issue if you follow good practice (3-2-1 or similar) and have more than one physical backup location, which I do too, but storage is relatively cheap these days, so there's no harm in being paranoid enough to keep multiple copies at each location as well.
[I have nothing against Borg, BTW; it seems a well-regarded and very capable tool, but I've been rolling my own setup to achieve the layers of protection I need/want since long before it existed.]
Again, the provider is basically saying that customers are idiots.
HostHatch is down again for me. Anyone else facing the same issue?
Amsterdam Node
Looks to be down; got a message from support saying it's being looked into. Hope they didn't get hit by the bathtub curve on those new drives.
meeeeeee
Makes sense why whoever has their box is eager to transfer it off
Yeah, me too. Hopefully it's not a drive failure again. Is this the end of my journey?
I'll happily take any transfers
Also down again in Amsterdam since approx 5am UTC. Symptoms are different from last time: instantly dead rather than IO-related freezes.
Server page in their customer panel says: "The node this VM resides on has had a hardware failure that is preventing it from booting up. We are currently looking at replacing the failed hardware."
Except for this downtime and the node failure last time, the experience has been amazing. Great bandwidth and fast hardware.
Anyway, my server is up again.
Server is back up as per my previous post, but I noticed it has a slower CPU: it was an EPYC 7551P with a 671 single-core Geekbench score, now an E5-2620 v4 scoring 518.
Real world perf loss for me is within 10%, rather than the 30% suggested by geekbench, so not the end of the world. But at 1/4 the physical cores I guess there is less CPU to go around
Edit: Actually don't mind me, turns out it was never sold as specifically Epyc, just luck of the draw