Dedicated servers with single HDD/SSD - what's the point?
From my experience, hard drives are expendable. They die sooner or later, especially under load; "it's not a question of 'if' but 'when'" and all that. So why do so many providers offer dedicated servers with a single HDD? Not only the cheapest ones, but mid-range ones too. Would anyone ever use such a system for anything other than testing? Because for production, a single HDD means that even with ideal backups, when the HDD dies the server will be down for at least a day, or more. And not just that, but whoever manages the server will have to waste time on things like OS installation and backup recovery. At this point I would prefer a KVM VPS with proper RAID on the host node....
Comments
People are cheap.
This is LowEndTalk..
Well, after all this time, someone finally opened a thread about this. Because it doesn't make any sense to me.
When I see some offers here, I immediately discard most of them, because it's ironic how they have the latest processors and lots of memory, and just one cheap disk.
I would prefer to pay some extra bucks for having an extra disk. Or even having two small disks instead of a big one.
Sure it is, and because of that I expect no hardware RAID and consumer-grade disks on cheap offers, but a single disk seems like a bit too much...
Because all the data is redundant.
To be a Bawler, one must live on the edge. One disk is by far the most on-edge way of hosting customer data.
I mean, I have two or more cheap single-disk servers with different providers holding duplicate data.
The simple answer for most end users is that they really have no clue about hard drives, RAID, and so on. Then, when it comes to cost, they have the option to add more hard drives and RAID, but nine times out of ten they won't because of the cost.
As for hosts using cheap hard drives, that really depends on the host you are looking at. For some smaller hosts, mid-range might be all they can afford at the time.
But if they last any amount of time, they will see it's better to buy something that costs a little more upfront to save in the end.
While we always recommend a RAID1 or RAID10 setup, we get a minority of customers who are less interested in resiliency than in price, and so they decide to go ahead with a single-drive option. They might still be doing some distributed storage over the Internet, or very frequent backups.
It is a good solution, yes. But it still does not explain why those servers should have only one HDD.
I mean, keeping data perfectly in sync may be a challenging task with databases and all that, and the failure of one of the servers will still cause major inconvenience during recovery... and it is so simple to prevent... a simple soft RAID1 of two drives reduces the probability of such an event from 100% (a lone drive will eventually fail) to very low, and recovery after one HDD failure is as simple as asking the provider to replace the failed drive...
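A quick back-of-the-envelope sketch of why mirroring helps, using a made-up 5% annualized failure rate and assuming independent failures (drives from the same batch often are not independent, so treat this as optimistic):

```python
# Back-of-envelope odds of losing the data within a year.
# afr is a hypothetical 5% annualized failure rate per drive.
afr = 0.05

p_single = afr         # one disk: any failure takes the data with it
p_raid1 = afr * afr    # RAID1: both mirrors must die (ignoring the rebuild window)

print(f"single disk: {p_single:.4f}")
print(f"RAID1 pair:  {p_raid1:.4f}")
```

Even with these rough assumptions, the mirrored pair is an order of magnitude less likely to lose data in a given year than the lone drive.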
An OS install only takes a few minutes because most providers have an internal repository for OS installs over the network.
Often you can find a cheap VPS to make an incremental backup that's cheaper than a second hard drive.
So for a recovery to take more than 24 hours is not realistic.
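The incremental-backup idea can be sketched in a few lines of Python (the function and its logic are illustrative; in practice you'd use rsync or borg, which also handle deletions and partial-file transfers):

```python
import os
import shutil

def incremental_backup(src, dst):
    """Copy only files that are missing or newer in dst; return what was copied."""
    copied = []
    for root, _dirs, files in os.walk(src):
        for name in files:
            s = os.path.join(root, name)
            rel = os.path.relpath(s, src)
            d = os.path.join(dst, rel)
            os.makedirs(os.path.dirname(d), exist_ok=True)
            # copy2 preserves mtime, so unchanged files are skipped on the next run
            if not os.path.exists(d) or os.path.getmtime(d) < os.path.getmtime(s):
                shutil.copy2(s, d)
                copied.append(rel)
    return copied
```

Run it on a schedule and only changed files cross the wire, which is why a tiny remote VPS can keep up with a busy server.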
It depends on the time of the day when it fails as well as provider ticket response/hdd replacement time.
Especially with cheap servers/providers, IMO it is unwise to expect that this will happen "instantly"; it is more likely to take those 24 hours just to replace the HDD.
Then, after OS installation, there is still data to be transferred back to the server, which can take some time too...
Wholesaleinternet has replaced drives on $10 servers in under 20 minutes from time of opening ticket and Dacentec has replaced my last drive in around 10-15 minutes on a $25 server.
A single Intel datacenter (DC) SSD is probably more reliable than a RAID6 HDD array.
Happy with my Dedibox XC2015. (They have 100GB FTP space too, in a mirrored DC.)
Sure, RAID is always better, but with a failover plan always running (a MySQL slave on a cheap VPS with enough resources to keep a light version of your website online while you fix the broken server) it can be fine, at least for small/personal projects.
But it's surprising that so many providers don't even offer the option of a secondary drive!
I don't think so. Do you have any data to back this up?
I don't see how a single drive, which can have a defect that may only show up when you write a large chunk of data, for example, can be more reliable than a number of drives in an N+2 config.
That's not even taking into account the TBW limit on every SSD drive.
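For a sense of scale, a rough estimate with entirely hypothetical numbers (a 600 TBW endurance rating and 50GB of writes per day, both assumptions):

```python
# Hypothetical numbers: a drive rated for 600 TBW, written at 50GB/day.
tbw_limit_tb = 600.0
daily_writes_tb = 0.05  # 50GB/day

days_to_limit = tbw_limit_tb / daily_writes_tb
print(f"~{days_to_limit:,.0f} days (~{days_to_limit / 365:.0f} years) to hit the rated TBW")
```

With light write loads the TBW ceiling is decades away; with heavy database or logging workloads, multiply `daily_writes_tb` by 20 and the picture changes quickly.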
It all depends on your requirements and budget. Not all customers need multiple drives, hence providers usually offer a single drive as the default for low-end servers and make additional drives optional.
I don't. I just picked an easy to attack example. Hyperbole, if you insist.
A RAID6 array with large 4TB+ HDDs is pretty vulnerable during the long rebuild, no?
Yeah, but you ain't gonna have a single 8TB Intel enterprise SSD either, plus that's a huge data loss if one would fail.
It's better to be at risk while rebuilding, with a chance of successful completion, than to be at risk of not being able to rebuild at all, as with a single-drive option. No matter how reliable the enterprise drive is perceived to be.
I can understand when a second/additional drive is optional; after all, someone may need a server for testing purposes. But there are a lot of providers who do not offer this option at all...
Also, enterprise SSDs are not as different from consumer ones as one may suppose. They use exactly the same memory; the only differences are that the data retention requirements are lowered (and as a result the number of write cycles they are certified for is increased) and additional spare memory is added to increase the total amount of data that can be written.
And no single device will ever be more reliable than a redundant array of devices...
What happened to the "RAID is not backup" mantra?
Personally, I don't do RAID for 2 reasons:
a) I don't know how to do it with ESXi.
b) I have automated backups.
c) I have redundancy where it matters. Where it matters less, it can wait for a restoration from backups.
(b, c) is one reason
Yes, RAID is not backup.
Backups are needed to avoid data loss; RAID is needed to avoid downtime and extra recovery work, where that matters.
If one disk is the most on-edge, then what do you make of my RAID0 setup?
LOL
High speed bawling
We have a few dedis that work as crawlers, they just need a single HDD or SSD with a solid CPU and port speed. Most are E3's with single 120-240GB SSD.
Our Hetzner boxes come with a pair of SSDs and we just RAID0 them because... fuck it. Deployment takes almost no time; when a box drops off, its pending jobs just time out and go back into the queue.
Because it's an idling server and it's LET; people love buying servers.
or
maybe most are web servers that automatically back up data to another provider, and if something happens you just migrate to a new server in a few minutes, or have HA, point to the new server, and done. But it all depends on what you are running.
Are they backups or just copies? An untested backup is not really a backup, so it is a good idea to have some automated testing and reporting, as well as automating the backups themselves.
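One simple form of that testing, sketched in Python (the checksum approach and function names are illustrative): restore the backup somewhere and compare hashes against the source.

```python
import hashlib

def sha256(path):
    """Stream a file through SHA-256 so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original, restored):
    """A backup only counts once a restored copy matches the source."""
    return sha256(original) == sha256(restored)
```

Wire a check like this into a cron job that restores a sample of files and reports mismatches, and silent backup rot gets caught before you need the backup.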
But similarly I have boxes with only one drive. Their content is such that it can be rebuilt on a replacement from other sources (backups etc) quickly enough should something bad happen. And in some cases "quickly enough" does not need to be very quick at all as they exist for convenience. More important stuff (data/service that I need/want higher availability for, primary backup locations) is always on RAID1/10/5 volumes of course.
Um, people would launch entire VPS companies on such setups.
Heck, 32mb.club ran on a single 500GB SATA...
https://borgbackup.readthedocs.io/en/stable/
I do like automation in general so I'm not in principle against having some test suite running against the backups but I'm too lazy to worry about the possibility of the software not working properly.
The answer comes down to cost for me. Hard to beat 2TB of storage for 12/month. I'm aware of the risks.
I am taking large quantities of readily available source code, indexing it, and hosting it and the index as a reference for myself and others to use. It is a non-critical service that, if need be, can be down for multiple days.
Would I like something faster than an Atom, with redundant storage? Yes, but I scripted everything when setting it up, so building it back up again will only be a minor inconvenience WHEN this dies.