Why is it hard to get high TBs storage hosting at low prices?

xade Member

Why is it hard to get high TBs storage hosting at low prices?


Comments

  • Tripleflix Member
    edited May 2019

    Because (decent) storage costs money..?

    And why can't I find a Tesla for the price of a beer...??

  • Francisco Top Host, Host Rep, Veteran

    @theblackesthat said:
    How many TBs are you trying to store?

    And for what price?

    Francisco

  • mrclown Member

    Are you just looking for storage?

    There are many choices depending on your budget: S3, DigitalOcean object storage, etc., if you're in the more expensive tier.

  • Milon Member

    Same question here. There's a lack of simple bulk storage offers that keep the other specs (and the price) low.

  • pike Veteran
    edited May 2019

    My university uses an IBM TS3500 tape library for its backups. I guess it can store around 50PB = 50,000TB. No idea what they paid for it, though.

    Edit: I looked it up and they claim it's 150€/yr for 1.5TB of data.

  • willie Member

    Storage is frighteningly cheap these days if you shop a little bit. You can get to around $2/TiB/mo with Hetzner SX servers, and almost that low with a few others. What are you looking for?

  • pike Veteran

    @willie said:
    Storage is frighteningly cheap these days if you shop a little bit. You can get to around $2/TiB/mo with Hetzner SX servers, and almost that low with a few others. What are you looking for?

    At first I doubted that, remembering how I was looking for a storage dedi in the auctions a year ago and not finding anything that cheap. Then I looked at the new SX line offerings... damn Hetzner, why are you so good?

    Thanked by: Hetzner_OL
  • KuJoe Member, Host Rep
    edited May 2019

    High-capacity hard drives only come in 3.5", and you can only fit so many per U of space, not to mention they require more power to run than 2.5" HDDs and significantly more than SSDs. Most providers are still using local storage for their servers, so you're limited in how many terabytes you can fit per server, and if you're running a NAS/SAN then you're still limited on CPU, RAM, and network throughput on the compute side of things. Companies like Google, Amazon, Backblaze, and other cloud storage providers can get much higher density because they have invested millions of dollars into their infrastructure and have engineers whose sole focus is getting higher density for less money.

    This is why, for most LET providers, you'll be better off with dedicated servers with a few 2-4TB HDDs if you need high capacity: if a VPS provider only has, say, 20TB of storage on a 2U server (not factoring in RAID), they're losing money by only being able to sell 5 high-capacity VPSs on it at low prices, so it makes more sense to sell 20/40/60/etc. smaller VPSs.
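    A rough back-of-envelope Python sketch of that trade-off (the plan sizes and prices are made-up placeholders, not any provider's real offers):

    # Revenue per 2U server with 20TB of sellable space: a few big storage VPSs
    # vs. many small general-purpose VPSs. All numbers are hypothetical.
    TOTAL_TB = 20

    big_plan   = {"tb": 4,    "price": 7.0}   # hypothetical 4TB storage VPS at $7/mo
    small_plan = {"tb": 0.25, "price": 5.0}   # hypothetical 256GB VPS at $5/mo

    big_count   = TOTAL_TB // big_plan["tb"]      # 5 VPSs fill the box
    small_count = TOTAL_TB // small_plan["tb"]    # 80 VPSs fill the box

    print(f"Big storage plans: {big_count:.0f} x ${big_plan['price']} = ${big_count * big_plan['price']:.2f}/mo")
    print(f"Small plans:       {small_count:.0f} x ${small_plan['price']} = ${small_count * small_plan['price']:.2f}/mo")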

    Thanked by: uptime, eva2000
  • Hetzner_OL Member, Top Host

    pike said: Hetzner, why are you so good?

    In case this was not a rhetorical question I'm gonna answer this and hope it's not gonna sound too big-headed: Because we can 😎
    No seriously, it's nice to see you cheer about our offerings. So thank you for your feedback :smile:
    --Julia, Marketing

    Thanked by: Wolf, dragon1993, beagle
  • rcxb Member

    @KuJoe said:
    you can only fit so many per U of space

    You can find storage servers with 60 3.5" bays in 4U. With 14TB drives and a slightly risky RAID-5 configuration, you could have about 700TB of usable space in 4U, or 7.7PB per rack.

    I don't believe power is a major issue. NAS-style hard drives run cool, and it takes quite a few to add up to a power budget larger than a couple multi-core CPUs.
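    One way the arithmetic above can work out, as a quick Python sketch, assuming five 12-drive RAID-5 groups per chassis and ten 4U chassis per rack (adjust for your own layout):

    # Usable capacity for a 60-bay 4U chassis full of 14TB drives.
    DRIVES_PER_4U = 60
    DRIVE_TB = 14
    RAID5_GROUP = 12          # assumed group size: 12 drives, 1 of them parity
    CHASSIS_PER_RACK = 10     # ~40U of the rack used for storage chassis

    groups = DRIVES_PER_4U // RAID5_GROUP
    usable_per_4u = groups * (RAID5_GROUP - 1) * DRIVE_TB   # lose one drive per group
    print(f"Usable per 4U chassis: {usable_per_4u} TB")                          # 770 TB
    print(f"Usable per rack: {usable_per_4u * CHASSIS_PER_RACK / 1000:.1f} PB")  # 7.7 PB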

  • Francisco Top Host, Host Rep, Veteran

    @rcxb said:

    @KuJoe said:
    you can only fit so many per U of space

    You can find storage servers with 60 3.5" bays in 4U. With 14TB drives and a slightly risky RAID-5 configuration, you could have about 700TB of usable space in 4U, or 7.7PB per rack.

    I don't believe power is a major issue. NAS-style hard drives run cool, and it takes quite a few to add up to a power budget larger than a couple multi-core CPUs.

    Slightly?

    You're 1000% guaranteed to hit an unrecoverable read error if you have to rebuild a drive, or if a drive has a bad sector.

    Francisco

  • rcxb Member

    Very risky with hardware RAID, but software RAID options are getting better and smarter all the time.

  • KuJoe Member, Host Rep

    @rcxb said:

    @KuJoe said:
    you can only fit so many per U of space

    You can find storage servers with 60 3.5" bays in 4U. With 14TB drives and a slightly risky RAID-5 configuration, you could have about 700TB of usable space in 4U, or 7.7PB per rack.

    I don't believe power is a major issue. NAS-style hard drives run cool, and it takes quite a few to add up to a power budget larger than a couple multi-core CPUs.

    You're talking about servers like Backblaze uses which are really high in storage density but extremely low in compute power comparatively. No VPS provider would be able to sell enough VPSs on a box like that to make it profitable. Those servers are better suited for network storage (i.e. NAS or SAN) which then adds the costs for at least a 10Gbps network and the compute nodes.

    As for power, I don't know if this is correct or not, but Backblaze stated they'd need 90 amps of power for 45 4TB drives, which is just mind-boggling to me considering the cheapest I've seen is $10 per amp in most data centers. There's no way that's accurate, but I can't find anything to compare it with, except that I know the difference between 6 SSDs and 6 2.5" SAS drives is 0.5-0.75 amps, so they add up quickly.

  • Neoon Community Contributor, Veteran

    Well, 1.59EUR per TB, just get a Hetzner box; sadly there is still a setup fee.

  • willie Member

    The issue is the likelihood of another drive failing during rebuild in a RAID 5 system with such large drives. I worked at a big installation like that, and the head guy was terrified of RAID after some misadventures. So we used no RAID at all, but mirrored each entire server to another server so we had 2 independent copies of everything, or more than 2 for important stuff. That worked pretty well and kept things simple. But it used up a lot of drive space on mirrors.

    That was done with old technology by today's standards. If I were doing it today I'd try for something like Online C14, with stuff spread across a bunch of servers using erasure codes.

    We still don't know what OP (@xade) is trying to do or how much storage is required, other requirements, budget, etc. If it's less than 100TB or so (probably already high TB by LET standards) then just get dedi(s) from the usual suspects.

  • donli Member

    @xade said:
    Why is it hard to get high TBs storage hosting at low prices?

    Please specify what you mean by "low price" ($/TB/year?).

  • willie Member

    And "high TBs".

  • willie Member

    Looked at previous threads from OP. OP appears to be running a porn site so can't use Hetzner. Mentioned Romania as an ok location, so probably should talk to Cociu if actually looking for something rather than just "reflecting".

  • willie Member

    KuJoe said: 90 Amps of power for 45 4TB drives

    That's 12-volt power, not 120-volt, and it's only at startup. Normally a big server won't start all the drives at the same time, though, so no idea why they quote that much current. In operation the drives use around 7 watts each, depending.

    Data sheet for 12TB Seagate Ironwolf drive:

    https://media.flixcar.com/f360cdn/Seagate-32764491-ironwolf-12tbDS1904-9-1707US-en_US.pdf
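    A quick Python sanity check on those figures; the ~2 A per-drive spin-up current on the 12 V rail is an assumed typical value, not taken from the sheet linked above:

    # Why "90 A" only makes sense as a 12 V spin-up figure, not wall current.
    DRIVES = 45
    SPINUP_A_12V = 2.0   # assumed peak 12 V current per drive while spinning up
    RUN_W = 7.0          # approximate per-drive power once spinning (see above)

    print(f"Worst-case spin-up: {DRIVES * SPINUP_A_12V:.0f} A @ 12 V "
          f"= {DRIVES * SPINUP_A_12V * 12:.0f} W")            # 90 A @ 12 V ~ 1080 W
    print(f"Steady state: {DRIVES * RUN_W:.0f} W "
          f"~= {DRIVES * RUN_W / 120:.1f} A at 120 V mains")  # ~315 W, under 3 A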

  • Francisco Top Host, Host Rep, Veteran
    edited May 2019

    @rcxb said:
    Very risky with hardware RAID, but software RAID options are getting better and smarter all the time.

    The R5 issue exists for both software & hardware. Someone did the math, and unless you use high-end enterprise drives with a crazy high URE rating, you are more or less guaranteed to have a rebuild failure on any array over 15TB or so.

    We used to use RAID50 on our old storage nodes and what a bloody headache that was after a year or two on the drives.

    More than once I was in the datacenter doing a ddrescue from one drive to another, just to help with a rebuild.

    We swapped things to RAID6 after all of that, and while it was stable, the IO performance was rough for storage VMs.

    Francisco

    Thanked by: JTR, marrco, that_guy
  • willie Member

    Oh wow, it hadn't occurred to me that UREs were the cause of rebuild failures. I thought it just meant that in a large array there was a high likelihood of another drive failing outright during the stress of a rebuild while also serving clients online. UREs make sense, and I always thought the specs were uncomfortably high. Even enterprise drives spec 1 unreadable sector per 1e15 bits read (1e14 for regular drives), and a drive's capacity is already around 1e14 bits, so that's cutting things close.
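    For what it's worth, here is a rough Python sketch of that rebuild math, treating the quoted URE spec as an independent per-bit probability (a simplification) and using a hypothetical 6 x 4TB RAID-5 array:

    # Probability of hitting at least one URE while reading every surviving
    # drive to rebuild a RAID-5 set after a single drive failure.
    def p_rebuild_ure(drives_total, drive_tb, ure_bits):
        bits_to_read = (drives_total - 1) * drive_tb * 1e12 * 8   # all surviving drives
        return 1 - (1 - 1 / ure_bits) ** bits_to_read

    for ure_bits, label in [(1e14, "consumer 1e14"), (1e15, "enterprise 1e15")]:
        p = p_rebuild_ure(drives_total=6, drive_tb=4, ure_bits=ure_bits)
        print(f"6 x 4TB RAID-5 rebuild, {label}: ~{p:.0%} chance of a URE")   # ~80% / ~15%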

    I wonder what the distribution of those errors is like. It might be interesting to implement an object store that included some software level ECC for every file or every track or something, maybe spread across the array. That would make rebuilding safer and maybe make raid-5 tolerable. That in turn makes various cheap 4-drive storage servers much more attractive since 75% of the raw drive space becomes usable instead of 50%.

    What raid/redundancy are you using for storage slabs now?

  • Corey Member

    @Francisco said:

    @rcxb said:
    Very risky with hardware RAID, but software RAID options are getting better and smarter all the time.

    The R5 issue exists for both software & hardware. Someone did the math, and unless you use high-end enterprise drives with a crazy high URE rating, you are more or less guaranteed to have a rebuild failure on any array over 15TB or so.

    We used to use RAID50 on our old storage nodes and what a bloody headache that was after a year or two on the drives.

    More than once I was in the datacenter doing a ddrescue from one drive to another, just to help with a rebuild.

    We swapped things to RAID6 after all of that, and while it was stable, the IO performance was rough for storage VMs.

    Francisco

    Why not raid 60? :P

  • jsg Member, Resident Benchmarker

    @KuJoe said:
    High capacity hard drives only come in 3.5" and ... they require more power to run than 2.5" HDD and significantly more than SSDs.

    Kind of. Old drives were power pigs. Nowadays - I did happen to calculate it - the power/volume ratio is actually better than with 2.5" drives, at least in the capacity range beyond what 2.5" drives cover. Example: at a given point in time one needed 3 x 2.5" disks for 6 TB but only 1 x 3.5". Of course, all of that has to be calculated for a given point in time and a given/desired capacity and RAID combination.
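    A minimal Python sketch of that comparison; the per-drive wattages are placeholder datasheet-style numbers, so plug in real figures for the drives you'd actually compare:

    # Watts per TB for the two configurations mentioned above.
    configs = {
        '3 x 2.5" 2TB': {"count": 3, "tb": 2, "watts": 2.0},   # assumed ~2.0 W each active
        '1 x 3.5" 6TB': {"count": 1, "tb": 6, "watts": 5.3},   # assumed ~5.3 W active
    }

    for name, c in configs.items():
        total_w = c["count"] * c["watts"]
        total_tb = c["count"] * c["tb"]
        print(f"{name}: {total_w:.1f} W for {total_tb} TB -> {total_w / total_tb:.2f} W/TB")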

    @rcxb said:
    I don't believe power is a major issue.

    Wrong.

    @willie said:
    That's 12 volt power, not 120 volt, and it's only at startup. Normally a big server won't start all the drives at the same time though, so no idea why that much current. In operation the drives use around 7 watts each, depending.

    This.

    Side note: The high-power rail is needed because (a) a power supply must be sized for the worst-case scenario (e.g. all drives spinning up at the same time), and (b) because those "7 W" are an average (as in "that drive will typically consume about 7 W") - but a drive consumes way more when starting up.

    Plus: power is always an issue, and on multiple levels. First, the power supply must be able to deliver it. Next, the backup (UPS) must be able to deliver it. Then, the more power, the more cooling (another important cost factor). Finally, power is one of the major cost factors in a colo - and, of course, colo power is more expensive than normal power.

    @Francisco said:
    We swapped things to RAID6 after all of that, and while it was stable, the IO performance was rough for storage VMs.

    RAID 6 is computationally considerably more expensive than R5: R5 parity is a simple XOR, while R6's second parity is a function over a Galois field. Also, there are plenty of Arm and Power processors with a built-in XOR engine, which lets them do R5 at near wire speed.
    About your only friend in that case (storage servers) is the fact that the sequential vs. random read/write ratio is far higher than on an average system, which means good (and lots of) caching usually increases (read) performance noticeably.
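    A tiny Python illustration of the XOR point (a toy, not how a real controller does it): RAID-5 parity is just an XOR of the data blocks, and a lost block comes back by XOR-ing everything that's left; it's RAID-6's second parity that needs the Galois-field math.

    import os
    from functools import reduce

    def xor_blocks(blocks):
        # Byte-wise XOR across all blocks in a stripe.
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    data = [os.urandom(16) for _ in range(4)]   # 4 data blocks in a stripe
    parity = xor_blocks(data)                   # the RAID-5 parity block

    lost = 2                                    # pretend the drive holding block 2 died
    survivors = [b for i, b in enumerate(data) if i != lost] + [parity]
    assert xor_blocks(survivors) == data[lost]  # the missing block is recovered exactly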


    Generally speaking, almost all wishes could come true. The real problem is users who want speed, excellent reliability, and low cost all at once.

    Thanked by: that_guy
  • willie Member
    edited May 2019

    There is no reason to spin up all the drives at the same time. Server motherboards and raid cards know how to spin them up in sequence to avoid that big power surge. Server builders rely on that so they don't have to put ridiculously large power supplies into the boxes.

  • Francisco Top Host, Host Rep, Veteran

    Corey said: Why not raid 60? :P

    Sorry, that should've been a 60 :)

    Even still.

    Francisco

  • jsg Member, Resident Benchmarker

    @willie said:
    There is no reason to spin up all the drives at the same time. Server motherboards and raid cards know how to spin them up in sequence to avoid that big power surge. Server builders rely on that so they don't have to put ridiculously large power supplies into the boxes.

    We can discuss this all day long. Again: a power supply design must assume the worst case. Another question is how quickly the staggered drives spin up. Yet other questions are OLP sensitivity, main regulation loop frequency, resonant tank size, and so on. Or we could look at the supply connection to the backplane(s), which may or may not support the full (all drives) load. In particular we could look at the mainboard itself, because that 12V line is the same one that feeds the drives.
    That's why no server engineer ever said "we need 580W, let's go with a 600W supply". Nope, they'd use a 750W supply - not for the drives but for the power dynamics of the mainboard and particularly the processor. And that's exactly what we see in the real world: servers with seemingly too-beefy supplies that can cope with the worst case and still have some reserves.

  • jsg Member, Resident Benchmarker

    @Francisco said:

    Corey said: Why not raid 60? :P

    Sorry, that should've been a 60 :)

    Even still.

    R6 or R60 doesn't make that much of a difference as long as it's one (1) controller and many disks. Reasons: see above.
    If you are willing to spend the money, get 2 controllers with R6 each and do the R60 (striping) with an OS software RAID (don't worry, the R0 part is virtually free computationally).
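    A quick usable-capacity check for that layout in Python (drive size is a placeholder): two 8-drive RAID-6 sets, striped together in software.

    # Two hardware RAID-6 sets (2 parity drives each), RAID-0 across them in the OS.
    DRIVE_TB = 4
    SETS, DRIVES_PER_SET, PARITY_PER_SET = 2, 8, 2

    usable = SETS * (DRIVES_PER_SET - PARITY_PER_SET) * DRIVE_TB
    raw = SETS * DRIVES_PER_SET * DRIVE_TB
    print(f"{usable} TB usable of {raw} TB raw ({usable / raw:.0%})")   # 48 of 64 TB (75%)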

  • Francisco Top Host, Host Rep, Veteran
    edited May 2019

    jsg said: R6 or R60 doesn't make that much of a difference as long as it's one (1) controller and many disks. Reasons: see above.
    If you are willing to spend the money, get 2 controllers with R6 each and do the R60 (striping) with an OS software RAID (don't worry, the R0 part is virtually free computationally).

    That's exactly what we did. We had 2 cards, 8 drives each, and then just did the "0" side in software.

    Francisco

  • deank Member, Troll

    Enterprise class hardware + Lowend budget

    = Nothing.
