
Why is it hard to get high TBs storage hosting at low prices?


Comments

  • @rcxb said:
    Very risky with hardware RAID, but software RAID options are getting better and smarter all the time.

    This is a red flag indicating you don't know what you're talking about.

    Here's a tip. The algorithms are essentially the same. Hardware RAID uses dedicated hardware to handle the expensive parity math, freeing up the bus, CPU and interrupts so that the CPU and storage don't bottleneck each other. Controllers can also be battery-backed, preventing the corruption you'd otherwise either get with software RAID or have to give up significant performance to avoid.
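    For concreteness, here is a minimal sketch of the parity math in question: RAID-5-style parity is a running XOR across the data disks, and the same XOR rebuilds a failed disk from the survivors. This is illustrative only, not any controller's firmware or the Linux md code; the function names and data layout are made up for the example.

```c
/* Illustrative sketch only, not any controller's firmware or the Linux md
 * code. P parity is the byte-wise XOR of all data blocks in a stripe; the
 * same XOR over the survivors plus P rebuilds a lost block. This is the
 * math a hardware controller offloads and that software RAID runs on the
 * host CPU. */
#include <stddef.h>
#include <stdint.h>

/* p[i] = data[0][i] ^ data[1][i] ^ ... ^ data[ndisks-1][i] */
void raid5_parity(uint8_t **data, size_t ndisks, size_t len, uint8_t *p)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t x = 0;
        for (size_t d = 0; d < ndisks; d++)
            x ^= data[d][i];
        p[i] = x;
    }
}

/* Rebuild one failed block: XOR the parity with every surviving block. */
void raid5_rebuild(uint8_t **survivors, size_t nsurv, size_t len,
                   const uint8_t *p, uint8_t *out)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t x = p[i];
        for (size_t d = 0; d < nsurv; d++)
            x ^= survivors[d][i];
        out[i] = x;
    }
}
```

    The per-byte work is trivial; what actually costs is streaming every byte of the stripe through the CPU and its caches, which is the crux of the reply below.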

  • jsg Member, Resident Benchmarker

    @TimboJones said:

    @rcxb said:
    Very risky with hardware RAID, but software RAID options are getting better and smarter all the time.

    This is a red flag indicating you don't know what you're talking about.

    Here's a tip. The algorithms are essentially the same. Hardware RAID uses dedicated hardware to handle the expensive parity math, freeing up the bus, CPU and interrupts so that the CPU and storage don't bottleneck each other. Controllers can also be battery-backed, preventing the corruption you'd otherwise either get with software RAID or have to give up significant performance to avoid.

    Hardware RAID does not free up the bus or IRQs (more precisely, it frees them only insignificantly). Also, depending on how one looks at it, hardware RAID usually doesn't implement any particularly special hardware.

    A hardware RAID controller typically is an embedded PowerPC- or ARM-based system whose processors are very significantly less powerful than the main CPU. In better cases - but by no means in all cases - those controllers have processors with at least specialized or optimized functions for RAID. Note, however, that most of that functionality is still slower than it would be on, say, a Xeon.

    The two major reasons why hardware RAID is attractive anyway are:

    • To perform RAID the processor would need to read from and write to memory (trashing its caches along the way), do the computation, and handle the result message (which might incur additional processing). Note that although the RAID 6 algorithm is far more expensive than XOR, it's still not the major bottleneck (which is memory transfer). The reason the RAID 6 algorithm is a bottleneck on most hardware controllers is that their processor is weak (which was fine before RAID 6 and SATA-3 came up). A sketch of that parity math follows this list.
      With hardware RAID the system simply hands off reading and writing.

    • Dealing with disks actually is quite complicated. There are many things to know and consider (e.g. disk geometries), and it's not really a one-shot operation but a series of steps, most of which involve some back-and-forth communication with the disks. To make it worse, it's also slow (milliseconds rather than micro- or nanoseconds). So it makes sense to outsource that to a "sub-computer" and turn it into a one-shot operation (as seen from the main system).
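    To make the "way more expensive than XOR" remark in the first bullet concrete, here is a simplified per-byte sketch of the RAID 6 Q parity over GF(2^8) using the 0x11D reduction polynomial that Linux md also uses. It is an illustration under those assumptions, not the md implementation itself; real code vectorizes this over whole cache lines rather than working byte by byte.

```c
/* Simplified sketch of RAID-6 Q parity over GF(2^8), reduction polynomial
 * 0x11D (the field Linux md uses); not the md implementation itself.
 * Compared with the plain XOR of P, every byte needs a shift, a conditional
 * XOR and another XOR, which is why a weak embedded controller CPU becomes
 * the limit, while the data still streams through memory either way. */
#include <stddef.h>
#include <stdint.h>

/* Multiply by the generator g = 2 (i.e. x) in GF(2^8). */
static uint8_t gf_mul2(uint8_t v)
{
    return (uint8_t)((v << 1) ^ ((v & 0x80) ? 0x1D : 0x00));
}

/* Q = D0 + g*D1 + g^2*D2 + ... , evaluated with Horner's rule. */
void raid6_q(uint8_t **data, size_t ndisks, size_t len, uint8_t *q)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t qv = 0;
        for (size_t d = ndisks; d-- > 0; )
            qv = (uint8_t)(gf_mul2(qv) ^ data[d][i]);
        q[i] = qv;
    }
}
```

    Even here the arithmetic per byte is cheap; the dominant cost remains moving the whole stripe through memory, which is why the offload, not faster math, is the controller's real advantage.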

    I stress that a bit because software RAID actually is an alternative in many cases. That is particularly true for RAID 0 and RAID 1 (and RAID 10, of course). Formerly, a couple of years ago (before SSDs and SATA-3 took off in a big way), the cut-off point was RAID 5. Nowadays many systems, even servers, can do RAID 5 in software unless they are working seriously hard (like probably most VPS nodes), and the cut-off point is RAID 6 (which usually should still be done with hardware RAID).

  • willie Member
    edited May 2019

    jsg said: Again: a power supply design must assume the worst case.

    Yes, the worst case is the highest number of drives that can be spinning up simultaneously, and that is something the designer can decide to make lower than the number of drives physically in the box. You turn some on, wait, turn on some more, wait, etc. It's just like at home, where you can't turn on too many appliances at the same time without tripping the circuit breaker. Or the latest Intel CPUs that slow their core clocks down if you use too many cores or AVX-512 instructions at a time. Or an espresso machine where you can pull a shot and then use the steam wand, but not both simultaneously. Very normal and straightforward constraints.

    Here is a Supermicro box with 90(!) drive slots:

    https://www.supermicro.com/products/system/4U/6048/SSG-6048R-E1CR90L.cfm

    At 120 volts input, its 12 V rail maxes out at 66.7 amps, or about 0.74 amps per drive. It can't possibly spin up all 90 drives at the same time, and it doesn't have to. It just has to be able to keep them spinning after they are spun up.

    This is all just basic power management. They make hardware bigger and bigger until they are pushing the practical limits, and that means you have to take some measures to keep stuff inside the envelope. You can't drive an 18 wheel truck like a sports car. You have to understand the limitations and shape your usage around them.
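    As a quick back-of-the-envelope sketch of that staggering (the roughly 2 A spin-up and 0.6 A idle currents per 3.5" drive are assumed typical figures, not numbers from the Supermicro datasheet; only the 66.7 A budget comes from the spec above):

```c
/* Back-of-the-envelope staggered spin-up sketch. The 66.7 A budget on the
 * 12 V rail is the Supermicro figure quoted above; the ~2.0 A spin-up and
 * ~0.6 A steady-state currents per drive are assumed typical values, not
 * datasheet numbers. */
#include <stdio.h>

int main(void)
{
    const double rail_amps   = 66.7; /* 12 V rail budget             */
    const int    total       = 90;   /* drive slots in the chassis   */
    const double spinup_amps = 2.0;  /* assumed per-drive spin-up    */
    const double idle_amps   = 0.6;  /* assumed per-drive, spinning  */

    int spinning = 0, step = 0;
    while (spinning < total) {
        /* Budget left after keeping the already-started drives turning. */
        double spare = rail_amps - spinning * idle_amps;
        int group = (int)(spare / spinup_amps);
        if (group > total - spinning)
            group = total - spinning;
        if (group <= 0) {
            printf("12 V budget exhausted with %d drives running\n", spinning);
            return 0;
        }
        printf("step %d: start %2d more drives (%2d already spinning)\n",
               ++step, group, spinning);
        spinning += group;
    }
    printf("all %d drives up after %d staggered steps\n", total, step);
    return 0;
}
```

    With those assumed numbers it takes about five staggered groups to get all 90 drives turning, which is exactly the "turn some on, wait, turn on some more" pattern, and exactly what the next comment describes backplanes doing by rows.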

  • deank Member, Troll

    Server chassis with HDD backplanes are designed to spin up HDDs progressively by rows.

  • rcxb Member

    @TimboJones said:
    Here's a tip. The algorithms are essentially the same.

    There is nothing remotely close to RAID-Z's "algorithms" in any hardware RAID controller.

  • SirFoxy Member

    bc things cost money

  • jsg Member, Resident Benchmarker

    @willie (@deank)

    Sorry, no, that doesn't change things. Yes, normally HDDs power up serially with some time in between, but ...

    That delay is usually configurable, and it is quite common for users/admins to lower the default considerably.

    Hard drives may or may not be the major power consumer in a system. But even if they are, they are not the most demanding and tricky one. Processors of diverse kinds (of which there are many in a system), in particular the main CPU, but also some other circuitry, are far more demanding, tricky, and sensitive. Also, drives have evolved over the last decades, and better ("enterprise") ones often have buffering or slow-rise circuitry (or both).

    Keep in mind that it is quite common for chips and circuitry to need multiple Vcc rails in a tightly defined sequence (like "5 V, 20 ms after 3.3 V").

    Finally, we are talking about the professional/industry market here (it's rare, I guess, for end users to design their own server power supply), and in that market the price difference for being able to deliver a couple of hundred watts more (even beyond spec) is small money. The reason is that technically the difference isn't big; it's just a few different parameters, a somewhat different tank circuit for example, and some bigger capacitors.

    TL;DR - as I said - A server power supply must be able to cope with the worst case.

    @rcxb

    That's a bit too strongly worded. Algorithmically there is a lot in common between RAID 5 and RAID-Z, but overall there are major differences (e.g. over what block size the checksum is calculated; for RAID 5 that's fixed, for RAID-Z it isn't).

  • Janevski Member
    edited May 2019

    Money? What happened to love?
    Do you know how much love it takes to make a hard disk or SSD?
    Some poor human being works their fingers to the bone, tooth and nail, somewhere on an assembly line, just to get enough love to go by...
    But what about the supervisors, managers, owners, distributors, sellers, resellers - they need love too. Plus the materials involved require mining and slave labour, which is a lot more love still.
    Since they don't make containers big enough, sending that much love is pretty difficult and could even be considered quite festive and gay; instead you pay a "high" price in money for data storage, which is easy, or at least the easiest way to compensate people.
