New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
It should never take days; the provider should have been proactively monitoring the situation and should have ordered new drives/servers/whatever was required to fix the problem before customers needed to start complaining.
Who's to say they didn't? Do you also think that they are required to order overnight shipping? Maybe they are trying out a new build on the next server and they ran into some problems and had to order more parts.
@Corey Yes, I have to tell you that this has been the first major issue, which is probably why I can have some patience with them. Also, you are right again: they did provide responses to every message I sent them, so I must give them some kudos for that. I've been with providers that answer with "We're working on it" and then, when I message them back a day later to find out what's going on, they say "It's been fixed", as if my human brain wouldn't want to know what was up.
@JoeMerit You do make a valid point there, but I guess sometimes people make mistakes and don't do what they have to. Plus, this mistake or unforeseen event didn't cause me to lose my data, and I guess I would be forgiving as long as my backup works. But I'm not dismissing what you say... there should be a reasonable level of proactive approach.
30 MB/s was a normal value for a Virpus VPS when I was with them (over a year).
Does not look good.
Whatever, I'd be out of there. If a single drive failing causes days of slow-as-molasses disk I/O, then someone is cutting corners on hardware, and I don't want to be part of it, and I certainly don't want to risk being there for the next occurrence.
What do you mean, cutting corners on hardware? When you lose a disk out of any array, the array becomes slow as molasses. On top of that, they are probably a small business; when they initially bought the server they probably didn't have an unlimited budget like cvps-chris #winning. I thought this community welcomed small businesses and startups.
You have to look at this in the big picture, not from just a simple client's point of view that doesn't care about their provider and just wants service 24/7 365 with no hiccups. ( I know a lot of users here are pretty savvy and know there will be hiccups and are rsyncing their vps to other providers for when there are issues like this. )
I'm all for supporting small businesses and startups. However, it doesn't take an "unlimited budget" to have a disk array setup that doesn't make a node dog-slow if a disk is lost. I'm not going to stay with a host out of pity just because they are small if there is another provider doing it better for the same price.
I'm pretty sure a disk can be replaced in at most a couple of hours. If the datacenter or the provider can't organize this, something's wrong. Isn't a spare kept onsite?
The issue here is the performance of the node after the drive failed, and then the 3 days (and counting) of crappy performance that risharde is getting, which is presumably due to the array rebuilding. Corey said he thinks risharde's provider is "great" for how they are handling the situation and appears to be suggesting that mediocre providers should be given a pass if they are small businesses. I don't.
@risharde
1. dd tests writes only, while most IOPS (by roughly 5:1) are reads.
2. dd tests sequential writes, which will almost never happen at that speed in real use; only a copy from one directory to another comes close, and even then...
3. ioping shows the "responsiveness" of the storage, i.e. how long it takes to acknowledge a request and start serving it. It depends on more factors than just raw storage speed, and it is a much better indicator of how your app will behave if it needs frequent IOPS.
Since most IOPS in a real-life scenario are not sequential, and are reads rather than writes, a dd test can at most give a vague idea of how good the storage is. For example, in a SAN the sequential operations are not going to be blazing fast, yet response time is great for a big array on Fibre Channel. But nothing beats local SSD in any situation.
Well, except a big SSD array :P
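For anyone wanting to run the two kinds of test being compared above, here is a minimal sketch (the target path, block size, and count are arbitrary choices; conv=fdatasync matters because without it dd reports page-cache speed, not disk speed):

```shell
# Sequential write test: conv=fdatasync forces a flush before dd reports
# its throughput, so the number reflects the disk, not the page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest

# Latency test: ioping issues small requests and reports how long each one
# takes to be acknowledged -- closer to what a real app with frequent small
# IOPS experiences. Guarded because ioping may not be installed.
command -v ioping >/dev/null 2>&1 && ioping -c 5 /tmp
```

Note that the dd number and the ioping number measure different things, which is exactly why a box can post an "acceptable" dd result while the ioping is terribad.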
That dd result is acceptable, but the ioping is terribad.
I don't either. If the array of disks is going to be HUGE, at least get RAID10 or something, not RAID1. 3 days is a long time.
@Maounique Thank you for the explanation, much appreciated (I miss the thank button as well lol).
@SimpleNode agreed
@concerto49 hmm I'm really not sure what they are using, I just assumed it was RAID10 since most (not all) of the providers I come across use RAID10.
@JoeMerit I understand what you mean; you never know, maybe they might be kind to me later on... who knows. For now I'll wait and see; if it becomes a habit then I'll surely have to think about moving.
I'm wondering if it was multiple disks that started to malfunction, but right now I'm just speculating. I think it's a good idea now to use ioping more often to check how the disk responds. I'll keep you guys posted on what they say, if they say anything further.
Definitely seen hosts with RAID1. We don't do it, but seen it. SwitchVM did before they got sold.
Our oldest node is running RAID1
^ On RAID1 Node.
^ On RAID10 Node.
Degraded arrays should not be left for any period of time.
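On Linux software RAID, a degraded mirror is easy to spot: /proc/mdstat shows [UU] for a healthy two-disk set and [U_] (or [_U]) when a member has dropped out. A sketch of checking that status; the mdstat excerpt below is hypothetical sample data, so the snippet runs even on a box with no md arrays:

```shell
# Hypothetical /proc/mdstat excerpt for a two-disk RAID1 with one failed
# member; on a real box you would read /proc/mdstat itself.
sample='md0 : active raid1 sdb1[1] sda1[0]
      976630336 blocks super 1.2 [2/1] [U_]'

# [U_] or [_U] means one mirror half is missing -- the array is degraded.
case "$sample" in
  *'[U_]'* | *'[_U]'*) echo "md0: DEGRADED" ;;
  *)                   echo "md0: OK" ;;
esac

# With mdadm installed (and root), the same information comes from e.g.:
#   mdadm --detail /dev/md0
```

A cron job doing nothing more than this check and emailing on "DEGRADED" would catch the failure long before customers start posting dd results on a forum.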
^ H/W RAID 10 Node
^ S/W RAID 1 Node
^ S/W RAID10 Node
31 Mb/s (megabits) is awful. 31 MB/s (megabytes) is OK, depending on what I'm doing with the box.
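The capitalisation nit matters more than it looks: lowercase b is bits, uppercase B is bytes, an 8x difference. A quick sanity check:

```shell
# 31 megabits per second converted to megabytes per second (8 bits per byte).
awk 'BEGIN { printf "31 Mb/s = %.2f MB/s\n", 31 / 8 }'
# -> 31 Mb/s = 3.88 MB/s
```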
I'm not sure how they are mediocre for losing disk I/O performance when their disk fails, is what I'm getting at. Should every startup have 8-12 disk RAID10 arrays from the start? Are they stupid and mediocre for not doing this?
What I'm getting at is that you don't know their prior experience with server administration when they first start their business. People generally learn from their mistakes. Startups will make mistakes. It's how they handle these mistakes that makes them 'great'. Should everyone have worked for another provider for X years before starting up, to learn every detail of the industry?
In this case they are ordering a brand new server because evidently they learned that their old build IS in fact mediocre.
How does them learning from their mistake and getting new hardware in for risharde here make them mediocre? Of course - getting brand new hardware in after learning that they made a mistake is going to take a few days.
I'm not sure what you mean about a huge RAID1 array? RAID1 is only two disks, ever. If you've ever had a dedicated server, you know that you need 30 days' advance notice to the provider to cancel. If you've ever had Rackspace, you would know how big of an investment that is. Who's to say they aren't stuck with a slow datacenter because they don't have the money to move away? Startups do not always have #winning budgets.
err. err... err.... err...... ;(
I think @lele0108 also uses RAID1 RE4s
Oh, folks and their love of RAID... Not sure how people make a dollar with these strands of drives and schmancy controllers. Actually, I'm not sure how folks are packing that density of drives into low unit space; 1U servers don't afford much in the way of drive bays.
@Damian, the dd output stunk. 34 MB/s I'd call problematic. It depends on node load, though. If it typically lingers there, it might be tolerable for most folks.
The ioping times were all over the place. I see those sorts of wild deviations on many providers' VPSes. Very common. Again, is that typical, and what happens under normal load from the other VPSes on the node?
Could be active use, slow drives or drive failure in RAID.
Not quite
So mirrors are now 3 - 4 way? How?
Anyways, all of our newer nodes use 4 disks in hardware RAID10, as we can't fit any more into a 1U server.
Server density is how you make money in this market. You can't get as much density from 1U servers.
By mirroring it 3-4 ways?
By mirroring on 3 - 4 disks
http://www.supermicro.com/products/system/2U/2027/SYS-2027TR-HTRF_.cfm
We have some of those: good wattage per unit, good drive density, very fast with LSI RAID and SAS2 10k drives.
Downside? Costs an arm and a leg.