Hi,
With 8 x Samsung PM9A3 NVMe drives in software RAID 10 we are getting terrible performance of around 80k IOPS, whereas a single drive gives us around 1 million IOPS.
The OS is Debian and the CPU is AMD EPYC.
What's your experience? Any help would be appreciated.
Thanks
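For reference, the numbers come from a 4k random-read test; a minimal sketch along the lines of what we run is below (device names are placeholders for our setup, and anything pointed at a raw device should only be run where data loss is acceptable):

```
# 4k random reads against the md array
fio --name=md-randread --filename=/dev/md0 --direct=1 \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=64 \
    --numjobs=8 --runtime=60 --time_based --group_reporting

# The same test against a single member drive, for comparison
fio --name=single-randread --filename=/dev/nvme0n1 --direct=1 \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=64 \
    --numjobs=8 --runtime=60 --time_based --group_reporting
```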
Comments
1 million IOPS on Linux? That's interesting in itself for a single drive. I've never been able to get that high, even with disk cache/writeback caching; I usually put it down to poor NVMe drivers on Linux.
Non-optimal block or sector size? Is the write cache disabled?
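If you want to check those, nvme-cli can show both; a quick sketch (device names are examples, and the 4K --lbaf index differs per drive):

```
# List the supported LBA formats and mark the one in use
nvme id-ns /dev/nvme0n1 -H

# Query the volatile write cache feature (feature id 0x06)
nvme get-feature /dev/nvme0 -f 0x06 -H

# Switch to a 4K LBA format if the drive supports it
# (DESTROYS all data on the namespace; index 1 is only an example)
nvme format /dev/nvme0n1 --lbaf=1
```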
Look into your chunk size setting. There's a lot to read about it; try increasing or decreasing it and see if that helps at all.
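Recreating the array with a different chunk size looks roughly like this (a sketch only: /dev/md0 and the member names are placeholders, and --create wipes the existing array):

```
mdadm --stop /dev/md0

# RAID 10 over 8 drives with a 64K chunk instead of the 512K default
mdadm --create /dev/md0 --level=10 --raid-devices=8 \
      --chunk=64 /dev/nvme[0-7]n1
```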
Are you using individual PCIe adapters, or bifurcation?
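Either way, it's worth confirming each drive actually trained at its full link width and speed, e.g. (the 41:00.0 address is just an example; use the first command to find yours):

```
# Find the PCI addresses of the NVMe controllers
lspci | grep -i 'non-volatile'

# Compare negotiated link state against capability for one drive
lspci -vv -s 41:00.0 | grep -E 'LnkCap|LnkSta'
```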
We are using the ASUS RS700A-E11-RS12U chassis, but I have to dig into it further when I have time and a free server; then I can provide more information and try RAID 1 and RAID 0.
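The comparison runs would be set up roughly like this (device names are placeholders, and both commands destroy existing data):

```
# Plain stripe across all eight drives, to find the upper bound
mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/nvme[0-7]n1

# A single mirrored pair, to isolate the mirroring overhead
mdadm --create /dev/md1 --level=1 --raid-devices=2 \
      /dev/nvme0n1 /dev/nvme1n1
```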
Thanks
I could push it up to 800k on RAID 1, but the problem is that CPU usage maxes out, and that's on dual 128-core CPUs. Any ideas?
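For reference, here's what I'm planning to look at next (standard tools only; /dev/md0 is a placeholder):

```
# The md write-intent bitmap is a known lock bottleneck at high
# IOPS; try one run with it removed (full resync after a crash!)
mdadm --grow --bitmap=none /dev/md0

# Check that NVMe interrupts are spread across cores
grep nvme /proc/interrupts

# Watch per-core utilization while the benchmark runs
mpstat -P ALL 1        # from the sysstat package

# Profile kernel hot spots (md threads, bio layer, locking)
perf top
```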