SATA disk server and IOPS: best setup

tbs Member

I got a server with 6 x 8TB SATA 7200rpm drives and no hardware RAID.

I need to set up Ubuntu Server on it, but I also want some type of software RAID.
Now, you can't install the boot partition on software RAID, so I'm trying to figure out the best way to do this without losing too much space.

I tried a single disk for the OS and the rest as RAID 5, but it was pretty bad when it comes to random reads and writes. I understand the limitations of SATA, but I should be able to get 100MB/s read/write without hitting high I/O wait.

If anyone has suggestions, please share.

Comments

  • tbs Member

    Before anyone asks: I don't have the option to add an SSD or NVMe drive for the OS, or to use a hardware RAID setup.

  • vsys_host Member, Patron Provider

    You need to create a separate partition for /boot, for example 500MB, and make it RAID 1. Then build whatever RAID you need from the rest of the space.
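
    A minimal sketch of that layout with mdadm, assuming the six drives appear as /dev/sda through /dev/sdf (device names and sizes here are placeholders; adjust for your system):

    # Partition each drive identically: a tiny BIOS-boot partition, a small
    # /boot partition, and a big data partition. Repeat for sda through sdf.
    parted --script /dev/sda \
        mklabel gpt \
        mkpart grub 1MiB 2MiB set 1 bios_grub on \
        mkpart boot 2MiB 514MiB \
        mkpart data 514MiB 100%

    # /boot as RAID 1 across all six drives: any surviving drive can boot the box
    mdadm --create /dev/md0 --level=1 --raid-devices=6 /dev/sd[a-f]2

    # The rest as one big array, at whatever RAID level you pick (RAID 5 shown)
    mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sd[a-f]3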

  • PulsedMedia Member, Patron Provider
    edited March 2023

    You can install /boot and boot normally from MDADM software RAID, no problem; we've been doing that for more than a decade now :)

    You can use any RAID level.
    Caveat: remember to install GRUB on all devices.
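
    For example, on a BIOS-booting Debian/Ubuntu system, something along these lines (assuming member disks sda through sdf):

    # Put the bootloader on every member disk, so the machine still boots
    # even if the first drive dies
    for d in /dev/sd{a..f}; do
        grub-install "$d"
    done
    update-grub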

    RAID5 requires careful tuning to get high random performance, and alas, newer kernels have moved their optimizations towards SSDs, so performance has been plummeting for many years now, sadly.

    Carefully tuned, you still get decent performance, but it's far from the best. I think it was around the Debian 9 kernel that updates started chopping I/O requests into smaller pieces, whereas HDDs need big I/O request sizes to truly get the performance out.

    You can still achieve about 80% of raw random read performance with RAID5, though it depends heavily on usage. It used to be about 95% of raw in reads, and about 67% in writes.
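
    As a rough illustration of the kind of knobs involved (the values below are examples to benchmark against, not a tuned recipe):

    # Illustrative md RAID5 tuning; measure before and after each change.
    # A bigger stripe cache helps RAID5 writes, at the cost of RAM.
    echo 8192 > /sys/block/md1/md/stripe_cache_size

    # Large readahead suits big sequential HDD reads (unit: 512-byte sectors).
    blockdev --setra 65536 /dev/md1

    # Allow deeper queues and larger requests on each member disk.
    echo 1024 > /sys/block/sda/queue/nr_requests
    echo 1024 > /sys/block/sda/queue/max_sectors_kb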

  • tbs Member

    @PulsedMedia said:
    You can install /boot and boot normally from MDADM software RAID, no problem; we've been doing that for more than a decade now :)

    You can use any RAID level.
    Caveat: remember to install GRUB on all devices.

    RAID5 requires careful tuning to get high random performance, and alas, newer kernels have moved their optimizations towards SSDs, so performance has been plummeting for many years now, sadly.

    Carefully tuned, you still get decent performance, but it's far from the best. I think it was around the Debian 9 kernel that updates started chopping I/O requests into smaller pieces, whereas HDDs need big I/O request sizes to truly get the performance out.

    You can still achieve about 80% of raw random read performance with RAID5, though it depends heavily on usage. It used to be about 95% of raw in reads, and about 67% in writes.

    Do you have some type of guide on this for Ubuntu?

  • lanefu Member

    I'd embrace RAID10 and sacrifice half the storage if you need performance.

    You could put LVM on top and use XFS for performance volumes, and then btrfs with high compression for archive stuff. But no matter what you choose, there are plenty of knobs to tune, like increasing or decreasing readahead at the block device level, or controlling how often things are flushed to disk at the filesystem level... it all depends on your requirements.
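
    A rough sketch of that stack, with hypothetical device names /dev/sdc through /dev/sdf:

    # RAID10 across four drives, LVM on top, per-workload filesystems
    mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sd[c-f]1
    pvcreate /dev/md2
    vgcreate vg0 /dev/md2

    # An XFS volume for hot data, and btrfs with zstd compression for archives
    lvcreate -L 4T -n fast vg0
    lvcreate -l 100%FREE -n archive vg0
    mkfs.xfs /dev/vg0/fast
    mkfs.btrfs /dev/vg0/archive
    mount -o compress=zstd:7 /dev/vg0/archive /mnt/archive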

    Here are some quick benchmarks from my 4x8TB RAID10 setup (mdadm RAID10 + LVM on top); drives are ST8000NM0105.

    XFS

    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 2.79 MB/s      (697) | 35.82 MB/s     (559)
    Write      | 2.80 MB/s      (700) | 36.08 MB/s     (563)
    Total      | 5.59 MB/s     (1.3k) | 71.91 MB/s    (1.1k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 119.35 MB/s    (233) | 104.42 MB/s    (101)
    Write      | 125.69 MB/s    (245) | 111.38 MB/s    (108)
    Total      | 245.05 MB/s    (478) | 215.81 MB/s    (209)
    

    BTRFS defaults, compress=zstd:7

    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 8.59 MB/s     (2.1k) | 33.36 MB/s     (521)
    Write      | 8.63 MB/s     (2.1k) | 33.67 MB/s     (526)
    Total      | 17.23 MB/s    (4.3k) | 67.03 MB/s    (1.0k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 30.46 MB/s      (59) | 30.20 MB/s      (29)
    Write      | 32.60 MB/s      (63) | 32.88 MB/s      (32)
    Total      | 63.07 MB/s     (122) | 63.09 MB/s      (61)
    
  • PulsedMedia Member, Patron Provider

    @lanefu said:
    I'd embrace RAID10 and sacrifice half the storage if you need performance.

    You could put LVM on top and use XFS for performance volumes, and then btrfs with high compression for archive stuff. But no matter what you choose, there are plenty of knobs to tune, like increasing or decreasing readahead at the block device level, or controlling how often things are flushed to disk at the filesystem level... it all depends on your requirements.

    Here are some quick benchmarks from my 4x8TB RAID10 setup (mdadm RAID10 + LVM on top); drives are ST8000NM0105.

    [benchmark tables snipped; see the previous post]

    Your test results come from cache; once the test is large enough not to be cached, the performance is abysmal for a 4-drive setup.

    Here are some ways to test. The first command is DESTRUCTIVE (it writes straight to the block device), so don't just copy & paste:

    # DESTRUCTIVE: tests the raw md device directly, bypassing the filesystem
    fio --filename=/dev/md1 --direct=1 --rw=randrw --bs=512k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=16 --time_based --group_reporting --name=iops-test-job --eta-newline=1 --rwmixread=85 --rwmixwrite=15
    
    # Non-destructive: tests a 50GB file through the filesystem instead
    fio --filename=/home/fiotest --size=50GB --direct=1 --rw=randrw --bs=512k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=16 --time_based --group_reporting --name=iops-test-job --eta-newline=1 --rwmixread=85 --rwmixwrite=15
    

    Reading back through old RAID0 benchmarks, here are some results for 4x SATA 7200rpm drives; parameters tuned for seedboxes and read-optimized, similar to the benchmark commands above:

    FIO Direct R IOPS 463
    FIO Direct W IOPS 82
    FIO Direct R MiB/s 232
    FIO Direct W MiB/s 41.2
    

    Same test, but with caching:

    FIO 50G R IOPS 837
    FIO 50G W IOPS 148
    FIO 50G R MiB/s 419
    FIO 50G W MiB/s 74
    
    

    @tbs said: Do you have some type of guide on this for Ubuntu?

    We can tune the system for you if you want. We charge 165€/hour; this type of setup usually takes 1 hour.

    But apart from that, giving out the "secret sauce" for public use? Sorry, not going to happen. Google is your friend.
