Ways to improve performance with drives in RAID 10

I have a server with 4 regular hard drives in a RAID 10 configuration. I chose RAID 10 because I thought I would get better performance than with a single drive and some redundancy at the same time, but write speeds are quite poor. Are there any settings or something in Linux that can help improve disk performance with this configuration? It's software RAID btw. Thanks!


Comments

  • No offense, but maybe it's time to open a single thread where you could ask your 101 questions and also share some goods?

    Thanked by 3: thane, SinV, bench
  • @CalmDown said:
    No offense, but maybe it's time to open a single thread where you could ask your 101 questions and also share some goods?

    Don't you have anything better to do than be such an idiot?

  • Use our Raid7™ setup instead for just $7

    https://usd7.host

    Thanked by 1: tjn
  • host_c Member, Patron Provider

    ZFS? mdadm?

    Go with ZFS; mdadm is really bad.

  • @host_c said:
    ZFS? mdadm?

    Go with ZFS; mdadm is really bad.

    Mdadm, it's already all set up. Is there anything that can be done without reinstalling everything? Like some tweaks that can speed it up a bit?

  • host_c Member, Patron Provider

    Please give me an idea of how slow it is.

    A YABS, disk test only

  • NewToTheGame Member
    edited December 2023

    Actually yes.

    You can do a thing called short stroking: you limit the movement of the drive heads to a smaller portion of the drive surface. It's as easy as this: partition the drive so that the first partition is, say, 33% or less of the total available space.

    This only works on mechanical drives. You can still use the rest of the space with more partitions.

    The only other way is to add drives to the array
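    As a rough illustration, a short-stroked layout could be created like this (a hypothetical sketch; /dev/sdX is a placeholder, and this wipes the drive's partition table):

```shell
# Hypothetical short-stroking sketch: /dev/sdX is a placeholder drive.
# WARNING: mklabel destroys the existing partition table.
parted --script /dev/sdX mklabel gpt
# Keep the first partition on the outer ~33% of the platters (the lowest LBAs),
# where head travel is shortest and sequential rates are highest.
parted --script /dev/sdX mkpart fast 0% 33%
# The remaining space is still usable as a slower bulk partition.
parted --script /dev/sdX mkpart bulk 33% 100%
```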

  • HostEONS Member, Patron Provider

    @vitobotta said:
    I have a server with 4 regular hard drives in a RAID 10 configuration. I chose RAID 10 because I thought I would get better performance than with a single drive and some redundancy at the same time, but write speeds are quite poor. Are there any settings or something in Linux that can help improve disk performance with this configuration? It's software RAID btw. Thanks!

    You can disable bitmap

    mdadm --grow --bitmap=none /dev/md

    If you disable it, rebuilding the RAID will just be slower, but overall performance will improve.
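    For reference, you can check whether the array currently has a write-intent bitmap before changing anything (a sketch, assuming the array is /dev/md3 as elsewhere in this thread):

```shell
# Sketch, assuming the md device is /dev/md3 (adjust to your array).
mdadm --detail /dev/md3 | grep -i bitmap    # shows "Intent Bitmap : Internal" when enabled
mdadm --grow --bitmap=none /dev/md3         # drop the write-intent bitmap
# To restore the safer default later (slightly slower writes, much faster
# resync after an unclean shutdown):
mdadm --grow --bitmap=internal /dev/md3
```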

    Thanked by 1: darkimmortal
  • @vitobotta said:

    @CalmDown said:
    No offense, but maybe it's time to open a single thread where you could ask your 101 questions and also share some goods?

    Don't you have anything better to do than be such an idiot?

    CalmDown.

  • @host_c said:
    Please give me an idea of how slow it is.

    A YABS, disk test only

    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/md3):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 849.00 KB/s    (212) | 12.25 MB/s     (191)
    Write      | 887.00 KB/s    (221) | 12.85 MB/s     (200)
    Total      | 1.73 MB/s      (433) | 25.11 MB/s     (391)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 48.28 MB/s      (94) | 50.88 MB/s      (49)
    Write      | 50.75 MB/s      (99) | 54.51 MB/s      (53)
    Total      | 99.04 MB/s     (193) | 105.40 MB/s    (102)
    

    @HostEONS said:

    @vitobotta said:
    I have a server with 4 regular hard drives in a RAID 10 configuration. I chose RAID 10 because I thought I would get better performance than with a single drive and some redundancy at the same time, but write speeds are quite poor. Are there any settings or something in Linux that can help improve disk performance with this configuration? It's software RAID btw. Thanks!

    You can disable bitmap

    mdadm --grow --bitmap=none /dev/md

    If you disable it, rebuilding the RAID will just be slower, but overall performance will improve.

    This is exactly the kind of thing I was looking for! There's some improvement in the numbers:

    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/md3):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 1.26 MB/s      (316) | 16.49 MB/s     (257)
    Write      | 1.30 MB/s      (325) | 17.01 MB/s     (265)
    Total      | 2.56 MB/s      (641) | 33.51 MB/s     (522)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 67.92 MB/s     (132) | 74.54 MB/s      (72)
    Write      | 71.53 MB/s     (139) | 79.50 MB/s      (77)
    Total      | 139.46 MB/s    (271) | 154.04 MB/s    (149)
    

    And the app (Nextcloud) seems somewhat more responsive now.

    Are there any other magical settings like this that can help further?

  • host_c Member, Patron Provider

    I really do not like your values, but I have never used mdadm.

    It looks like your HDD cache is disabled or you have 5400 RPM drives; I'm not sure I could get values that low even then.

    Not even RAID 5/6 is that slow, and that has overhead for parity calculation.

    I would investigate this further.

    In a RAID 10 of 4 drives:

    write speed is N/2
    read speed is N

    N = total number of drives.
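    As a back-of-the-envelope check (the 150 MB/s per-drive figure below is just an assumed ballpark for a typical 7200 RPM drive):

```shell
# Rough RAID 10 sequential-throughput estimate; the per-drive rate is an assumption.
DRIVES=4
PER_DRIVE=150                                     # MB/s, hypothetical single-drive rate
echo "read:  $(( DRIVES * PER_DRIVE )) MB/s"      # reads can be spread over every member
echo "write: $(( DRIVES / 2 * PER_DRIVE )) MB/s"  # each write goes to a whole mirror pair
```

    So even modest drives should manage hundreds of MB/s sequentially; results in the low tens of MB/s point at something else, such as random I/O, a failing drive, or a disabled cache.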

  • @host_c said:
    I really do not like your values, but i never used mdadm.

    It is like your HDD cache is disabled or you have 5400 RPM drives, not sure I can get that low even on that.

    Not even raid 5/6 is that slow, and that has overhead for parity calculation.

    I would investigate this further,

    In a raid 10 of 4 drives:

    write speed is N-2
    read speed is N

    N = number of total drives.

    I wonder if the drives are oldish or something like that? This is an OVH dedicated server I bought for Nextcloud only, due to the amount of storage for the price (21€/mo for 4 x 2TB of storage). The app seems to work "decently" well, but it would be nice if I could improve the performance further without rebuilding the array. Is there any diagnostic tool or something like that I could use to investigate whether there is a performance bottleneck?
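    A per-drive triage along these lines might answer that (a sketch; the device names are assumptions based on the mdstat output shown later in the thread):

```shell
# Per-drive triage sketch; device names sda..sdd are assumed for this array.
# 1) Watch per-device utilisation and latency while the app is under load
#    (Ctrl-C to stop); one member far busier or slower than the rest is
#    the likely bottleneck:
iostat -x 5
# 2) Quick SMART health verdict for every member:
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do smartctl -H "$d"; done
# 3) Check whether the drives' on-board write cache is enabled:
hdparm -W /dev/sda
```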

  • host_c Member, Patron Provider

    @vitobotta

    I would help you out further, but my taxi just arrived, I have to go to "party" for 2024.

    Have no clue on OVH's drive policy, but if you do not have data on it, just do ZFS and do not waste any more time.

    Maybe someone can help you debug this. I personally would go for ZFS as the software RAID.

    Ah, just remembered: filesystem? Try out XFS. It is much better than EXT4 at handling reads and writes ( https://en.wikipedia.org/wiki/XFS )
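    If rebuilding with XFS, the basic steps would look something like this (a destructive sketch; the mount point and mount options are assumptions):

```shell
# Sketch only: reformatting /dev/md3 as XFS destroys all data on it.
mkfs.xfs -f /dev/md3
mkdir -p /mnt/data
mount -o noatime /dev/md3 /mnt/data
# To persist the mount in /etc/fstab, use the filesystem UUID printed by:
blkid /dev/md3
```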

  • @HostEONS said:

    @vitobotta said:
    I have a server with 4 regular hard drives in a RAID 10 configuration. I chose RAID 10 because I thought I would get better performance than with a single drive and some redundancy at the same time, but write speeds are quite poor. Are there any settings or something in Linux that can help improve disk performance with this configuration? It's software RAID btw. Thanks!

    You can disable bitmap

    mdadm --grow --bitmap=none /dev/md

    If you disable it, rebuilding the RAID will just be slower, but overall performance will improve.

    @host_c said:
    @vitobotta

    I would help you out further, but my taxi just arrived, I have to go to "party" for 2024.

    Have no clue on OVH's drive policy, but if you do not have data on it, just do ZFS and do not waste any more time.

    Maybe someone can help you debug this. I personally would go for ZFS as the software RAID.

    Ah, just remembered: filesystem? Try out XFS. It is much better than EXT4 at handling reads and writes ( https://en.wikipedia.org/wiki/XFS )

    Thanks, will do some reading since I haven't used ZFS/XFS before and perhaps rebuild the server. Enjoy your party!

  • @vitobotta - can you paste output of cat /proc/mdstat (shouldn't be anything sensitive but feel free to mask if you're concerned)

    Are your drives still syncing by any chance?

  • @nullnothere said:
    @vitobotta - can you paste output of cat /proc/mdstat (shouldn't be anything sensitive but feel free to mask if you're concerned)

    Are your drives still syncing by any chance?

    Sure, here it is

    Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
    md3 : active raid10 sdc3[2] sda3[0] sdb3[1] sdd3[3]
          3903610880 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
    
    md2 : active raid10 sdc2[2] sda2[0] sdd2[3] sdb2[1]
          2093056 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
    
    unused devices: <none>
    

    No, there is no syncing happening AFAIK.

    Thanked by 1: nullnothere
  • HostSlick Member, Patron Provider
    edited December 2023

    More performance...
    As always in IT: it depends.

    In this case it looks like pure SoftRAID, so not much of a performance increase is possible.

    The performance increase here depends on whether it's softraid or hw raid, the block size, the cache of your RAID, and the overall settings of the RAID card, etc.

    But it indeed looks very bad, so uhm...

  • @HostSlick said:
    More performance...
    As always in IT: it depends.

    In this case it looks like pure SoftRAID, so not much of a performance increase is possible.

    The performance increase here depends on whether it's softraid or hw raid, the block size, the cache of your RAID, and the overall settings of the RAID card, etc.

    But it indeed looks very bad, so uhm...

    The server is quite cheap, so probably I am expecting a bit too much from these disks?

  • HostSlick Member, Patron Provider

    @vitobotta said:

    @HostSlick said:
    More performance...
    As always in IT: it depends.

    In this case it looks like pure SoftRAID, so not much of a performance increase is possible.

    The performance increase here depends on whether it's softraid or hw raid, the block size, the cache of your RAID, and the overall settings of the RAID card, etc.

    But it indeed looks very bad, so uhm...

    The server is quite cheap, so probably I am expecting a bit too much from these disks?

    Is it HD or SSD?

  • @vitobotta said:

    @CalmDown said:
    No offense, but maybe it's time to open a single thread where you could ask your 101 questions and also share some goods?

    Don't you have anything better to do than be such an idiot?

    Brrr, calm down, I said no offense, and you're being aggro. C'mon, don't be a dick, I was not offending or insulting you. If you have something against my words, just DM me and we will resolve your issue.

  • @HostSlick said:

    @vitobotta said:

    @HostSlick said:
    More performance...
    As always in IT: it depends.

    In this case it looks like pure SoftRAID, so not much of a performance increase is possible.

    The performance increase here depends on whether it's softraid or hw raid, the block size, the cache of your RAID, and the overall settings of the RAID card, etc.

    But it indeed looks very bad, so uhm...

    The server is quite cheap, so probably I am expecting a bit too much from these disks?

    Is it HD or SSD?

    HDD

    @CalmDown said:

    @vitobotta said:

    @CalmDown said:
    No offense, but maybe it's time to open a single thread where you could ask your 101 questions and also share some goods?

    Don't you have anything better to do than be such an idiot?

    Brrr, calm down, I said no offense, and you're being aggro. C'mon, don't be a dick, I was not offending or insulting you. If you have something against my words, just DM me and we will resolve your issue.

    No offense? It feels like I am dealing with children here.

    I am TIRED of comments like yours. TIRED. What kind of "advice" (if you want to call it that) is it to put threads that have nothing to do with each other into a single thread? How useless are these comments? And it's not the first time; it's been happening often lately, and again, I'm TIRED. You and others have fun pointing the finger each time I open a thread, without any consideration for the value/meaning/usefulness of the thread, as if I were a spammer, and then take your precious time to add useless and totally irrelevant comments to those same threads. What's wrong with you people? Are you wishing that I leave the community, or what the hell? How on earth is me opening a thread on RAID issues - which again has nothing to do with other threads I have opened previously - affecting you or those other people? My question is still valid. Do you really have nothing better to do than chase me from thread to thread just to annoy me? I can't believe that ONCE AGAIN people like you bring NOTHING useful to the topic. NOTHING.

  • HostSlick Member, Patron Provider

    I could only guess that one of the drives is older, or has many hours on it, and is running slow, bottlenecking the complete array. Try checking with SMART.

    As it's OVH I guess it has enough hours. But yeah, there are not many ways to improve it with softraid.
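    Checking with SMART could look like this (a sketch; the device names are assumed, and the exact attribute names vary by drive vendor):

```shell
# Compare wear/error attributes across members to spot the weak drive
# (device names sda..sdd assumed for this array).
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
  echo "== $d =="
  smartctl -A "$d" | grep -Ei 'Reallocated_Sector|Current_Pending|Power_On_Hours|Raw_Read_Error'
done
```

    Note that "Pre-fail" and "Old_age" in smartctl output are attribute classes, not verdicts; what matters is the raw values and whether any attribute has crossed its failure threshold.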

  • vitobotta Member
    edited December 2023

    @HostSlick said:

    I could only guess that one of the drives is older, or has many hours on it, and is running slow, bottlenecking the complete array. Try checking with SMART.

    As it's OVH I guess it has enough hours. But yeah, there are not many ways to improve it with softraid.

    Ouch... I checked with smartmontools and all the drives have a lot of metrics flagged with "Old_age" and others "Pre-fail"

  • jbiloh Administrator, Veteran
    edited December 2023

    Software RAID on spinning disks is hardly ideal.

    To get much better performance, add a hardware RAID card with 512 MB or 1 GB of onboard cache plus a BBU (so you can enable write-back cache without risk of data loss).

    Thanked by 1: raza19
  • @jbiloh said:
    Software RAID on spinning disks is hardly ideal.

    To get much better performance, add a hardware RAID card with 512 MB or 1 GB of onboard cache plus a BBU (so you can enable write-back cache without risk of data loss).

    It's a cheap OVH dedi, I don't think it's possible to add a raid card etc

  • darkimmortal Member
    edited December 2023

    Perf looks fine to me; don't forget YABS tests read/write simultaneously, so HDDs will appear significantly slower than when read and write are tested separately.

  • jar Patron Provider, Top Host, Veteran
    edited December 2023

    Yeah software 10 on spinning drives mostly just is what it is. You’d be better off with SSDs.

    But if you need help destroying a RAID10 array let me know.

    Thanked by 1: M66B
  • @darkimmortal said:
    Perf looks fine to me; don't forget YABS tests read/write simultaneously, so HDDs will appear significantly slower than when read and write are tested separately.

    Any better benchmark?
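    For HDDs it may be fairer to measure read and write separately; something like this fio sketch would do it (the file path and sizes are assumptions, adjust to your mount):

```shell
# Separate sequential write and read passes with fio; path/size are assumptions.
fio --name=seqwrite --rw=write --bs=1M --size=2G --direct=1 \
    --filename=/mnt/data/fio.test
fio --name=seqread --rw=read --bs=1M --size=2G --direct=1 \
    --filename=/mnt/data/fio.test
rm -f /mnt/data/fio.test
```

    Running the write pass first creates the test file that the read pass then uses; --direct=1 bypasses the page cache so the drives themselves are measured.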

  • risharde Patron Provider, Veteran

    @vitobotta said:

    @host_c said:
    Please give me an idea of how slow it is.

    A YABS, disk test only

    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/md3):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 849.00 KB/s    (212) | 12.25 MB/s     (191)
    Write      | 887.00 KB/s    (221) | 12.85 MB/s     (200)
    Total      | 1.73 MB/s      (433) | 25.11 MB/s     (391)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 48.28 MB/s      (94) | 50.88 MB/s      (49)
    Write      | 50.75 MB/s      (99) | 54.51 MB/s      (53)
    Total      | 99.04 MB/s     (193) | 105.40 MB/s    (102)
    

    @HostEONS said:

    @vitobotta said:
    I have a server with 4 regular hard drives in a RAID 10 configuration. I chose RAID 10 because I thought I would get better performance than with a single drive and some redundancy at the same time, but write speeds are quite poor. Are there any settings or something in Linux that can help improve disk performance with this configuration? It's software RAID btw. Thanks!

    You can disable bitmap

    mdadm --grow --bitmap=none /dev/md

    If you disable it, rebuilding the RAID will just be slower, but overall performance will improve.

    This is exactly the kind of thing I was looking for! There's some improvement in the numbers:

    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/md3):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 1.26 MB/s      (316) | 16.49 MB/s     (257)
    Write      | 1.30 MB/s      (325) | 17.01 MB/s     (265)
    Total      | 2.56 MB/s      (641) | 33.51 MB/s     (522)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 67.92 MB/s     (132) | 74.54 MB/s      (72)
    Write      | 71.53 MB/s     (139) | 79.50 MB/s      (77)
    Total      | 139.46 MB/s    (271) | 154.04 MB/s    (149)
    

    And the app (Nextcloud) seems somewhat more responsive now.

    Are there any other magical settings like this that can help further?

    I experienced some slow performance as well on OVH SATA disks with RAID 1 software, but I don't think it was as slow as what you are reporting. Sorry, I don't have the numbers on hand. If I get a chance I will run it so you can compare, but that server is going to expire in a few days, and my kid is likely not going to let me use my computer this evening to get you these numbers. I moved to SSDs in my case, but I understand why you might not, since you're likely looking for space vs bang for buck.
