Why IO tests are overrated.


Maounique Member
edited September 2012 in General

OK, this is going to be long, but I feel it is an issue that has to be raised. We see more and more people posting dd tests, iopings and the like, and they all think that being unable to write a 1 GB continuous file at speeds above 100 MB/s, or an occasional ioping spike, means the storage is bad, the host oversold, and whatnot. I agree that I look more at bandwidth and don't care much about storage speed as long as it is in good supply, but by demanding huge speeds from our hosts at LEB prices we are actually lowering storage capacity indirectly. They put in expensive SSDs or SAS2 arrays, and such space is not plentiful (even though the SSD cache technique seems promising).

The question is, who needs that much speed? Will my blog load slower if the "dd" speed is below 100 MB/s? Or below 20, for that matter? Do I really need to write fast for my site to be snappy? First, write speed on RAID is usually much slower than read, and serving pages, or anything else over the web, needs read speed, not write speed. Writing big (1 GB+) files is not likely to happen in a production environment; it only matters for backups of some sort, and even then the speed will be limited by other factors, such as encryption or compression, making it highly unlikely you will be able to generate the file at speeds close to 100 MB/s, not to mention downloading it from somewhere over shared ports of 1 Gbps or below.
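For reference, the one-liner behind most of these posted numbers looks roughly like this (a sketch; the size is scaled down to 256 MB here, while forum posts usually use a 1 GB file):

```shell
# The usual forum benchmark: a sequential write, reported in MB/s.
# conv=fdatasync makes dd flush the data to disk before reporting; without
# it the number largely measures the RAM page cache rather than the array.
dd if=/dev/zero of=testfile bs=1M count=256 conv=fdatasync
```

Note that this only exercises large sequential writes, which, as argued above, is about the least representative workload for a typical web server.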

Agreed, these tests give an IDEA of the quality of the storage and the level of "crowding" on the node, but only an idea, not a verdict. A regular SATA 3 array will cost 5 times less per GB, and if I have to choose between 20 GB of SAS2 and 100 GB of SATA 3 with a 1.5-2 times lower "dd" speed, I will always take the larger storage. On the other hand, if I need a fast DB or a game server, I choose SSD and that is it.

Next time you are bashing hosts for a "slow" dd, think about whether you need a faster one; if not, the capacity would be a good trade-off for speed. Read operations are roughly 5 times more frequent than writes, and that is where performance should be monitored, along with ioping for responsiveness. But if you see one spike out of 10, it does not matter: on a shared server it ALWAYS happens that someone is doing some IO-intensive operation. If you need constant iopings, go dedicated.

In conclusion: I have high hopes for the SSD cache approach, which can speed up cheap SATA 3 arrays; with good quality drives it will mean cheap and solid storage. In the meantime, though, please don't force providers to buy SAS2 drives and reduce storage just because you want to win the dd race. We are supposed to set a trend for the market, to show the way for the thousands of people reading here who face the difficult choice of a provider. Even so, hosts should consider having more kinds of offers: some with high-speed storage (SSD or SSD-cached) and some with plenty of space (SATA 3, with or without SSD cache). I will always choose the higher-space offer over the one twice as fast. M

I am only representing myself :)


Comments

  • KuJoe Member
    edited September 2012

    The problem I see with these types of benchmarks is that hosts are focusing specifically on the benchmarks and not on real-world experience/usage. We're slaves to the consumers, and consumers want high DD speeds (which can be far from accurate).

    We tossed all of the 2TB drives we started with in favor of 500GB SAS drives, so you are right: the amount of storage does suffer for the sake of speed.

    Our original 128MB OpenVZ plan came with 30GB, now with our SAS drives we offer 10GB.

    Now of course I don't expect other companies to be as open as we are, but the 60 MB/s write speeds were causing a lot of bad press for us, so we had to make a change.

  • Maounique Member
    edited September 2012

    @KuJoe said: consumers want high DD speeds (which can be far from accurate)

    Yes, synthetic benchmarks can't tell the full story, but even if they could, why would we need such speeds? My site will not be able to tell the difference between 10 and 100 MB/s, especially on a slow network, unless some big DB query with write operations is needed to display the page. IF we need high speed, SSD is the way; for most regular usage, short of 300 or so users on a server, any RAID 10 or so will do, and people will not be able to tell the difference without those synthetic tests. There is no problem for me if hosts respond to the market, but in this case the market's request is unreasonable, and it forces hosts to give less space to satisfy the ego of a few people who want to win the dd race. M


  • @mitgib said he only pays for a gigabit because people like seeing more than 100mbit but he doesn't actually need it.

  • Well, it does help in case of small DDoSes, but it is becoming irrelevant as botnets get cheaper and cheaper. Anyway, the rates are not that big and 1 Gbps is trivial these days, while SAS2 disks are closing in on SSD prices per GB. It does cost a lot to build fast storage, and that is one of the main reasons plans are expensive. Let's see how the SSD cache pans out; I really hope it will make a big difference. M


  • @Jack said: only pays for a gigabit because people like seeing more than 100mbit but he doesn't actually need it.

    That was true in the beginning, but I do actually need over 100mbit now. But what has that got to do with I/O tests being overrated?

    Hostigation High Resource Hosting - SolusVM OpenVZ/KVM VPS
  • @miTgiB said: but what has that got to do with I/O tests being overrated?

    Welcome to LET, where threads derail fast. However, this is somewhat related, in that the pressure from customers is actually making the plans more expensive without real benefit... or actually creating a loss for both parties, as storage decreases and prices increase, making it harder for hosts to sell and for customers to find good storage at a low price. M


  • jarland Member
    edited September 2012

    For me it's always this question: what is the buffer between me and the abusers? Obviously I am going to trust my provider to deal with abuse, or they won't be my provider. What I want to know, though, is that it takes more than 1 or 2 average abuse cases to cripple performance. Obviously this depends on the type of abuse, but an array that gives me 10x the results of another (ioping strongly considered here as well) is going to take more abuse to cripple than one performing at a tenth of it. The reason this matters to the LEB market is that prices are low enough to attract abusers. What we want to know is that we're safe from a complete loss of reasonable performance every few days.

    While I agree with you that it has become excessive and misunderstood, you'd be wise to consider the reason such a thing became popular to begin with. Bad disk performance in the low-end market is becoming less of an issue, and though I can't prove it, I'd bet money that it's directly caused by the widespread use of dd tests to judge providers. It may be harsh, it may be misunderstood, and it may not tell the whole story, but it does more good than ignoring it and letting providers slip by with the lowest possible investment.

    Plus, is it just me or does ioncube run like pure trash on the performance of a single consumer SATA drive?

    jarland.me | Read about my new hosting experiment.

  • @jarland said: Obviously I am going to trust my provider to deal with abuse or they won't be my provider. However, what I want to know is that it takes more than 1 or 2 average abuse cases to cripple performance.

  • I am going to disagree that this is relevant here. The provider will still have to police abuse, for one thing; the second issue is that an abuser will not be obvious in any situation: 3 not-so-obvious abusers on a fast system will not raise the alarm, while the same 3 will matter on a slower one and will be visible, making the provider act. But how will this affect real life? The provider will simply need to act more often; slowdowns will last less but occur more frequently on lower-end storage. Will that matter? In some situations, perhaps, but if you need top-notch storage you will go with SSD; you will not put your money-making app on the lower-end storage. That is for personal use and storage, and the abuse will not make a difference for a website, monitoring application, VPN, streaming site or whatnot before the alarm rings and the provider kicks the abuser. So, while valid in some situations, your point will not make a difference in most. It is like buying a 32 GB laptop when we only ever use 4 GB, just in case some app has a memory leak and might need it... M


  • jarland Member
    edited September 2012

    @Maounique said: The provider will still have to police abuse, for one thing; the second issue is that an abuser will not be obvious in any situation: 3 not-so-obvious abusers on a fast system will not raise the alarm, while the same 3 will matter on a slower one and will be visible, making the provider act. But how will this affect real life? The provider will simply need to act more often; slowdowns will last less but occur more frequently on lower-end storage.

    We're talking about LEB providers I assume. As much as we respect several providers around here for providing incredible service, we're still paying for little. They can't always be expected to catch everything the second it happens, and even though many do (and sure I expect it of myself), I still don't ask that of LEB providers. If the system can handle the abuser without every other client on the node knowing it, everyone is better off. I don't think lowering performance so that an abuser causes issues for other clients in order to increase the chances that the abuse will be caught faster is reasonable.

    As for how that applies to a real world scenario, that is simple. Try to load a medium traffic website with a reasonable (not too large, not too small) SQL database when the I/O is crippled to single digits. Now take 40 other clients running similar websites. Even a bunch of reasonably light websites can begin to crawl at such low speeds. Now, if the array requires twice the abuse to bring me to that point, am I not theoretically 50% safer if the abuse is not specifically designed to hog all I/O regardless of how much is available? Obviously it's somewhat relative, but it then takes twice the abuse to get me there. This pleases me in a market where abuse is common.

    @Maounique said: Will that matter? In some situations, perhaps, but if you need top-notch storage you will go with SSD; you will not put your money-making app on the lower-end storage. That is for personal use and storage, and the abuse will not make a difference for a website, monitoring application, VPN, streaming site or whatnot before the alarm rings and the provider kicks the abuser.

    You can't really assume what everyone is going to use their VPS for, though. Maybe I need storage and reasonable performance. I'm not asking the world, but if the node is capable of pushing a max of 30 MB/s in a dd test, I am more likely to be one light abuser away from crippled I/O. Sure, I expect the provider to act, but until then I deal with what could amount to an outage, depending on what I use it for. If you are not prepared to host content that people would prefer not to see go down frequently, why would you be interested in doing business in the hosting industry? The whole point is getting quality for less, otherwise it's just "here's a crappy service for a cheap price, it doesn't work well but it's fun to play with." To say low disk I/O doesn't hurt a website is, quite frankly, odd. It absolutely affects a website. Now put 50 websites on the same system and cripple the I/O. Maybe I can keep running at 3 MB/s, but can everyone on the node? I bet WHMCS would time out, and I know a lot of providers host their WHMCS elsewhere because I've seen them state it around here a few times.

    @Maounique said: So, while valid in some situations, your point will not make a difference in most.

    How is SQL not a common situation? How is a bunch of people on a single node hoping to maintain at least acceptable performance so that their services don't go down not a common situation in reference to an LEB? Just curious. If people here honestly do not mind their hosted content going down every time an abuser hits a node, I've misread the community entirely.


  • @miTgiB said: That was true in the beginning, but I do actually need over 100mbit now. But what has that got to do with I/O tests being overrated?

    I was on my phone and thought he said something about the port speed as well, sorry.

  • @Maounique said: as storage decreases and prices increase, making it harder for hosts to sell and for customers to find good storage at a low price.

    OK, now you have me totally lost as to how this relates network throughput to IO.

  • @miTgiB said: OK, now you have me totally lost as to how this relates network throughput to IO.

    The whole thread was too long to read, however

    Imagine that your IO write speed is around 15 MB/s. In that case, unless you are writing to RAM, you cannot utilise the GigE port. It's a long shot though; I've got no idea what exactly Maounique meant.
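The rough arithmetic behind that point, as a quick sketch:

```shell
# A gigabit port moves at most ~125 MB/s (1000 Mbit / 8 bits per byte),
# so a disk that writes at 15 MB/s caps transfers at roughly 12% of the link.
awk 'BEGIN { port = 1000 / 8; disk = 15
             printf "port %.0f MB/s, disk limits you to %.0f%% of it\n",
                    port, 100 * disk / port }'
```

In practice protocol overhead shaves the usable link rate below 125 MB/s, which only narrows the gap slightly.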

  • @jarland said: but if the node is capable of pushing a max of 30 MB/s in a dd test

    1. 30 MB/s is not much, and single digits is not what I am defending here. It would still work for my needs, but I don't expect everyone to accept that. What I protest against is calling anything below 100 or even 70 "junk".
    2. If the write test shows 10 MB/s, the read is likely 3 times better, and that is what matters most in a typical scenario.

    You are talking about extremes there; I say 50 MB/s is okay. Besides, it is also a question of how many people are on the node: if regular tests show 50 MB/s and the node is full, you are more than 3 abusers away from seeing the difference in a typical usage scenario. 30 MB/s is also OK if it is sustained at all times and/or doesn't drop below 20-25 in high-load situations (backups and such). M
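Since the usual one-liner only measures writes, a rough read-side check can be paired with it (a sketch; without root to drop the page cache, the read largely comes from RAM, so treat the number as an upper bound):

```shell
# Create something to read back, flushed to disk.
dd if=/dev/zero of=testfile bs=1M count=64 conv=fdatasync
# With root, drop the page cache first so the read actually hits the disk:
#   sync; echo 3 > /proc/sys/vm/drop_caches
dd if=testfile of=/dev/null bs=1M   # read-side throughput
rm -f testfile
```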


  • @Alex_LiquidHost said: I've got no idea what exactly Maounique meant.

    That was one way to look at it, but the idea is that the market's demands are unreasonable: users do not need such performance for their needs, yet if they demand it, the price will increase and the resources will shrink; capacity is lowered to make faster storage that most people don't need, and those who do need it would be better off with SSD anyway. The network port (I saw someone asking for 10 Gbps at LEB prices) is just another example of demanding something we don't need in a regular scenario, just because it is "trendy". M


  • @Maounique I agree with you there.

    Price point is a big factor as well. I know it sounds crazy budget-wise, but I prefer higher I/O from lower-cost providers. The simple reason is that a higher-priced provider can pay more people to monitor for abuse, and they're less likely to get the type and frequency of abusers that LEB providers are constantly fending off.

    jarland.me | Read about my new hosting experiment.

  • @Alex_LiquidHost said: The whole thread was too long to read, however

    My take on the thread: while it does look like a fun thread I would like, I'm starting to form the opinion that prices for quality hardware are now cheap enough that those providing single-disk or RAID1 setups aren't even saving that much cash anymore.

    The general corners that were safe to cut a year or two ago, are they even worth cutting anymore? Not that long ago it was $200 or more to buy a 4-port SAS/SATA2 RAID card; I bought NIB Adaptec 3405's last week for $25. Sure, it's an older card, but I still have them in service on loaded nodes providing over 100 MB/s dd tests. So what does RAID1 save you? Nothing, in my estimation; it is costing you extra to use RAID1. On a RAID10 array you can put over twice as many users as on a RAID1 array, while the RAID1 array only saves you the cost of a RAID card and 2 drives. So you've saved $200-300, and over the 3-year life I like to place on a node, do you think you can earn a multiple of that savings in added income?

    This whole pure-SSD fad going on currently will, I think, pass once all the testers finish testing and users who want to actually use their VPS find that 5 GB is crap for hosting their applications.

  • @Maounique said: They put in expensive SSDs or SAS2 arrays, and such space is not plentiful (even though the SSD cache technique seems promising).

    Actually, the primary reason providers increase IOPS is that the more VPSes per server, the higher the ROI on the server. If using SSD/SAS2, which are one-time costs with a 12-36 month lifespan, can increase the monthly revenue per server, it makes perfect business sense. Considering that on the newer E5 servers you run out of IOPS or CPU rather than RAM, you'll definitely want to push IOPS as high as possible so that you can sell the most VPSes (RAM).

    @Maounique said: The question is, who needs that much speed? Will my blog load slower if the "dd" speed is below 100 MB/s? Or below 20, for that matter? Do I really need to write fast for my site to be snappy?

    Very true. Considering that a 2x HDD RAID1 does no more than 60-100 MB/sec even when you're the sole user of the server, comparing numbers ends up as a provider-vs-provider benchmark contest that doesn't make sense. Websites can run decently on 60 MB/sec RAID1 arrays; there's no real need for 300 MB/sec throughput on a 1 GB file.

    @Maounique said: Next time you are bashing hosts for a "slow" dd, think about whether you need a faster one; if not, the capacity would be a good trade-off for speed.

    This really depends. As explained, IOPS is probably the first bottleneck on a VPS, and these speeds are good indicators of how loaded the node is. With some real cases as comparison: I believe there were a number of oversold servers whose ioping and dd results went bad over time even though the initial results were good; without the initial benchmarks it would be difficult to show how good they were before. On the other hand, some providers use SAN-based storage, which has great IOPS due to spindle count but lower single-thread dd speeds. They still performed to expectation even with lower-than-normal dd speeds, and those speeds were consistent over time.
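One way to make those numbers meaningful over time, rather than as a one-off bragging figure, is to log them periodically and compare day-one results with the node once it fills up. A sketch (the file names are my own choice, and the field-grabbing assumes GNU dd's summary line):

```shell
# Append a timestamped dd result to a local history file, e.g. from cron.
speed=$(dd if=/tmp/../dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 \
        | awk 'END { print $(NF-1), $NF }')   # last two fields, e.g. "85.3 MB/s"
rm -f /tmp/ddtest
echo "$(date -u '+%Y-%m-%d %H:%M:%S') $speed" >> dd-history.log
```

A node that posts 100 MB/s on day one and 20 MB/s three months later tells you far more than either number alone.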

    @Maounique said: I will always choose the higher-space offer over the one twice as fast.

    May or may not be the right thing to do. I'll explain.

    Storage capacity is cheap; 3TB SATA drives are everywhere and decently priced. If a provider can offer high space, high RAM and a low price, they look good, right? But what about CPU/IOPS? It's simple to buy an E3, 32GB, 2x 3TB RAID1 server from WHT at $150 or less, grab maybe 30 IPs and sell 30x 1GB RAM + 100GB HDD servers at $10 each. That's a good 100% profit, but it's also roughly 100 IOPS shared between 30 servers, barely 3-4 IOPS each. Given that you have 1GB RAM + 100GB HDD, I'm pretty sure you're going to be doing some form of hosting to maximize the server. Assume 15 of those servers install cPanel and host websites: a single cPanel installation uses about 3 IOPS at idle due to logging, watchdog, etc., probably 5-10 IOPS when actually hosting, and even more if it hosts more sites.
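The IOPS arithmetic above, spelled out (the ~100 IOPS figure for a 2-disk 7200rpm SATA mirror is a rule-of-thumb assumption, not a measurement):

```shell
# ~100 random IOPS from a SATA RAID1 pair, split across 30 containers.
awk -v total=100 -v guests=30 \
    'BEGIN { printf "%.1f IOPS per guest\n", total / guests }'
```

Against cPanel's ~3 idle IOPS per installation, the budget is exhausted before anyone serves a single busy site.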

    The reason providers consider SAS/SSD drives may not always be just about making more money off customers, but also about keeping customers' performance within reasonable standards. IOPS is something most providers don't want to talk about publicly; it's like some hidden success story or dirty secret, depending on which provider you talk to. Even Dell/HP don't tell my corporate customers that they need to increase IOPS when running multiple VMs; they just sell the highest-spec (CPU/RAM) server and leave it with 2 HDDs, because the customer doesn't need the storage capacity of multiple drives. The customer happily loads up 30x 4GB VMs, then asks why their server is so slow. facepalm

    Asia VPS | Asia Dedicated Server OneAsiaHost - Singapore based Asia-Centric VPS & Dedicated Servers
  • @miTgiB said: I'm starting to form the opinion that prices for quality hardware are now cheap enough that those providing single-disk or RAID1 setups aren't even saving that much cash anymore.


    I see your point there.

    However, as a normal user I would be perfectly fine with a RAID1 array. Let's face it: I never use more than 10 MB/s, nor do I need big storage. This comes from the perspective of an average user hosting, for example, a site on a VPS. As long as it is consistent and the dd tests do not fall under 70-80 MB/s, I will most likely be absolutely satisfied. Network as well: I usually do not consume over 10 Mbps at a time. I do prefer gigabit VMs though, because, like any other provider, my main site gets DDoSed on an hourly basis. Anyway, after putting a tunnel in front of the site that issue was resolved, and as I monitor my usage, it never went over 10 Mbps, even at the traffic spikes I got from my last KVM offer.

    On the other side, as a provider, I believe that RAID10 is not an industry standard (or maybe it is spelled "standart", not sure). I have recently upgraded all my nodes from RAID1 to RAID10; however, I rarely had anyone complain about dd tests of ~70-80 MB/s on the filled nodes. I have to agree that if you have good hardware, RAID10 is an absolute must; on such a node IO is the most common bottleneck (at least in my experience; I might be wrong, of course), and in the end the profit would be much higher than going with RAID1. On the other hand, if you cannot afford $2000-3000 per node, you might, for example, go with 2 lower-end configurations that cost much less. You can easily achieve a reasonable dd result with good drives and a RAID1 configuration; of course, you will only be able to deploy perhaps a third of the clients that you'd otherwise deploy on the higher-end node. I hope you understand what I mean, as I did not explain it really well.

    Regards

  • :O long long long explanation.... :O

    www.erawanarifnugroho.com - powered by Prometeus XenBiz | Server Uptime status - powered by Prometeus Xen Pune
    I'm not working for any providers in here, all my comments just my own opinion.
  • KuJoe Member
    edited September 2012

    @Alex_LiquidHost said: if you cannot afford $2000-3000 per node

    We run 6 SAS drives in RAID10 with hardware RAID and don't even spend anywhere close to $2k per node.

  • You all make good points... but I think the need for disk speed stems from users not wanting to buy very much RAM and compensating with disk reads/writes when they run out :)

    __BitAccel__ - OpenVZ VPS / TUN, PPP 24/7 Support!
  • We've set up two DNS servers on RamNode VPS servers in Atlanta and Dallas. Average install time for cPanel DNS Only was 25 minutes. Never mind the close to 1GB/s transfer rates. So I would say that I/O speed matters.

  • Taz Disabled

    @vpsnodebox And how many times a day do you reinstall cPanel DNS Only? Zero.

    Average users don't even need speeds that great.

    Time is good and also bad. Life is short and that is sad. Dont worry be happy thats my style. No matter what happens i won't lose my smile!

  • Taz Disabled

    I think @Francisco and @miTgiB spoiled everyone with their great service, and now people can't even think of anything below 200 :P

  • It's just another performance metric that people can use to make a decision, but imo very high IO speeds end up benefiting the host more than the user.

    ServerBear - Easy UnixBench/dd/IOPS/FIO (NEW) & Network Benchmarks | Example Report | Compare Low End Boxes
    Gleam - Run kick-ass viral competitions & rewards to grow your userbase. Free until Sept.
  • risharde Member
    edited September 2012

    Well, if you're hosting a small blog the IO probably wouldn't matter as much, but I've noticed during some of my testing that when I get IO speeds around 22 MB/s, my websites do in fact take longer to "generate", particularly with Drupal, perhaps because of the numerous PHP includes and MySQL interactions, compared with IO around 60-70 MB/s or higher. In any event, I'm not particularly confident in going with an SSD VPS provider even though I know SSDs are faster. I'm just still not confident about the technology, the same way I am about USB thumb drives (even though they are convenient); I've seen USB thumb drives fail without warning. I suppose my mind will change in the future when I see solid proof and test some myself.

    Risharde.com - I AM THE FUTURE
  • I started viewing LEB last March and joined LET in early April. Back then, very few providers could do 100 MB/s+ dd IO. Right now, I don't think anyone would dare to post something below 100 in LET, or else they will get it....

  • @vpsnodebox said: We've set up two DNS servers on RamNode VPS servers in Atlanta and Dallas. Average install time for cPanel DNS Only was 25 minutes. Never mind the close to 1GB/s transfer rates. So I would say that I/O speed matters.

    Isn't it fun?!

    I feel like I should add my two cents to this thread, seeing as we're one of the bad guys, but I'm still reading and absorbing.

    RamNode: High Performance SSD and SSD-Cached VPS
    Atlanta - Seattle - Netherlands - IPv6 - DDoS Protection - AS3842
    @jcaleb said: I started viewing LEB last March and joined LET in early April. Back then, very few providers could do 100 MB/s+ dd IO. Right now, I don't think anyone would dare to post something below 100 in LET, or else they will get it....

    And is this a good thing? IIRC the average storage offered was more generous then, though I wasn't looking for storage. Certainly I am not looking for speeds above 30-40 MB/s in dd, but hey, it looks like 1 ms less when loading a page is more important than double the storage for some people... Well, in that case I hope providers will make 2 offers, one with bigger storage and one with faster storage, and put the premium price on the fast storage. We will see then what matters, and whether people put their money behind their beliefs. M


  • @Nick_A Thanks again for the great service. Having 4 DNS servers scattered around the US (Phoenix, AZ; Dallas, TX; Atlanta, GA; and Chicago, IL) is awesome. I would say hats off to the performance of your VPS servers. We are moving in the direction of providing more managed services, like high-end shared hosting, so we'll more than likely get more VPS servers when we launch our high-speed dedicated MySQL service for cPanel/shared hosting.

  • I have noticed the SSD trend catching on a bit with providers.

    I'd like to see a provider offering large amounts of slower storage with their plans, because large storage would benefit me a lot more than high speed.

  • @titanicsaled said: I'd like to see a provider offering large amounts of slower storage with their plans, because large storage would benefit me a lot more than high speed.

    Yes. I come from the same place and, hopefully, SSD cache can solve both problems to a degree. Let's hope for the best :) M


  • jcaleb Moderator
    edited September 2012

    Another way of looking at this, hopefully related to the topic: when a provider has high IO, is it indicative that they more likely own and control the hardware, and that they are more serious about the longevity of the service? I'm assuming it's more troublesome to put together the sauce for a good-IO service.

  • @jcaleb said: I started viewing LEB last March and joined LET in early April. Back then, very few providers could do 100 MB/s+ dd IO. Right now, I don't think anyone would dare to post something below 100 in LET, or else they will get it....

    That's funny, because I know providers with 700 MB/s+ IO :P

    This signature is brought to you by the NSA. Spying on the entire world since 1952!

  • jarland Member
    edited September 2012

    @titanicsaled said: I'd like to see a provider offering large amounts of slower storage with their plans, because large storage would benefit me a lot more than high speed.

    It's like @miTgiB said: it's so cheap to set up a RAID that you really won't get much more storage, if any, by skipping it. A RAID controller only costs once, until, God forbid, it goes out. Most providers here are using SATA drives on hardware RAID10 from what I see. By no means do I think we're losing out on storage with places like BuyVM and Hostigation (admittedly my two preferred providers aside from RamNode, speaking as a customer, not a provider). I mean, I have 100GB for $10 with Tim. The only way to top that is to get a Kimsufi.

    I'm holding near 300 MB/s on 4x 2TB SATA drives on HW RAID10 in Denver right now, so I'm not sure what lowering my I/O speed would do to help boost storage levels. The controller didn't even cost me anything. I guess lower I/O would force me to put fewer people on a node and therefore increase allotments, but I don't think it's beneficial to the client if I can't make a node profitable. Unless we're not talking LEB prices.


  • OAS does ~1.5 GB/s on their test boxes, lol.

    But yeah, I do agree. The problem is, people think that the higher the number, the better the service, which is occasionally true, but in no way a permanent gauge of overall service quality.

    -- BOFH
