RAID10 on 8 x NVMe drives - Terrible Performance - Help Needed

Hannan Member, Host Rep

Hi,
With 8 x Samsung PM9A3 NVMe drives in software RAID10 we are getting terrible performance, around 80k IOPS, whereas we get around 1 million IOPS on a single drive.
The OS is Debian and the CPUs are AMD EPYC.
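
For reference, a fio run along these lines is a typical way to take that kind of measurement (the flags and device path below are illustrative, not necessarily the exact command we ran):

    # illustrative 4k random-read test at high queue depth; /dev/md0 is an example path
    fio --name=randread --filename=/dev/md0 --readonly --direct=1 --rw=randread \
        --bs=4k --ioengine=libaio --iodepth=64 --numjobs=8 \
        --time_based --runtime=60 --group_reporting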

What's your experience? Any help is really appreciated.

Thanks

Comments

  • speedypage Member, Patron Provider

    1 million IOPS on Linux? That's interesting in itself for a single drive. I've never been able to reach numbers that high, even with disk cache/writeback caching. I usually put it down to poor NVMe drivers on Linux.

  • Non-optimal block or sector size? Write cache disabled?
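
    Quick ways to check both from the OS (device names are just examples):

        # logical/physical sector size the drive is formatted with
        cat /sys/block/nvme0n1/queue/logical_block_size
        cat /sys/block/nvme0n1/queue/physical_block_size
        nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"
        # volatile write cache state (NVMe feature 0x06)
        nvme get-feature /dev/nvme0 -f 0x06 -H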

  • SagnikS Member, Host Rep

    @Hannan said:
    With 8 x Samsung PM9A3 NVMe drives in software RAID10 we are getting terrible performance, around 80k IOPS, whereas we get around 1 million IOPS on a single drive.
    The OS is Debian and the CPUs are AMD EPYC.

    Look into your chunk settings. There's a lot to read about it; try increasing/decreasing it and see if that helps at all.
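
    For example (device names are illustrative, and --create destroys the existing array):

        # current chunk size of the array
        mdadm --detail /dev/md0 | grep -i chunk
        # rebuild a throwaway test array with a different chunk, e.g. 64K instead of the 512K default
        mdadm --create /dev/md0 --level=10 --raid-devices=8 --chunk=64 /dev/nvme[0-7]n1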

  • Are you using single PCIe adapters or bifurcation?
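
    Either way, it's worth confirming each drive negotiated its full link width/speed (paths are examples):

        # negotiated PCIe speed and width per NVMe controller
        for d in /sys/class/nvme/nvme*; do
            echo "$d: $(cat $d/device/current_link_speed) x$(cat $d/device/current_link_width)"
        done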

  • Hannan Member, Host Rep

    We are using the ASUS RS700A-E11-RS12U chassis, but I have to dig into it further when I have time and a free server; then I can provide more information and try RAID1 and RAID0.

    Thanks

  • Hannan Member, Host Rep

    Could push it up to 800k IOPS on RAID1, but the problem is that CPU usage maxes out, and that's on dual 128-core CPUs. Any ideas?
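
    One thing I still have to check is whether the interrupts and the benchmark threads actually spread across both sockets; roughly along these lines (flags and device paths are just a sketch, adjust the CPU range to the actual core count):

        # are the NVMe interrupts piling up on a few cores?
        grep nvme /proc/interrupts
        # per-core utilisation while the test runs (mpstat is from the sysstat package)
        mpstat -P ALL 1
        # spread the benchmark itself: more jobs, pinned across all cores
        fio --name=randread --filename=/dev/md0 --readonly --direct=1 --rw=randread \
            --bs=4k --ioengine=io_uring --iodepth=32 --numjobs=32 \
            --cpus_allowed=0-255 --cpus_allowed_policy=split \
            --time_based --runtime=60 --group_reporting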
