
ZFS or btrfs or ext4 for RAID1?

edited April 2023 in General

I am thinking of going with Proxmox, and it seems like that community likes using ZFS instead of ext4 + LVM. I don't know much about it, so I'm trying to understand it better before jumping in. It definitely simplifies the install, since the Proxmox UI gives me the option of selecting ZFS RAID 1 and then sets that all up for me. It also gives me the option of using Btrfs RAID 1, but says it's a technology preview.

One thing I don't like about ZFS is that it's not in the Linux kernel. Btrfs is. Everyone has been saying Btrfs is too new for years but at some point people will need to stop saying that. It's been in the mainline Linux kernel since March 2009.

Comments

  • @LosPollosHermanos said:
    It definitely simplifies the install, since the Proxmox UI gives me the option of selecting ZFS RAID 1 and then sets that all up for me.

    If that's an important help to you, why not go with it? The Proxmox UI for ZFS is quite good, imo. ZFS does take a bit more RAM than ext4.

  • amarc Veteran

    What specific questions do you have? Just like any other filesystem, 'it will work'. Keep in mind that half of your RAM will show up as "used" by default (this is configurable); that is the ARC cache. So if you are going to freak out about "omg Proxmox uses 64GB of RAM", don't :smile: (there is a small sketch for capping the ARC after this comment).

    Best feature of ZFS: snapshots. Use them as incremental backups, use them as a cheap way to "go back in time"...

    Also, if you are going to do it, do it properly: without HW RAID and with at least 2 disks in a mirror. Meaning your system, in the ideal scenario, will have: a) an OS drive (256GB enterprise SSD) + b) other disks for the ZFS datasets (at least 2, for example 2x 1.92TB enterprise SSDs). Have a decent amount of RAM; I would say your initial calculation + ~30% more. So if you are thinking about a 32GB system, go with 64GB.
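
A minimal sketch of the "half the RAM shows as used" point above: on OpenZFS for Linux the zfs_arc_max module parameter caps the ARC. The 25% fraction and the modprobe.d path here are illustrative assumptions, not a recommendation from this thread.

```python
#!/usr/bin/env python3
"""Sketch: compute a cap for the ZFS ARC instead of the default ~50% of RAM."""

def mem_total_bytes() -> int:
    # /proc/meminfo reports MemTotal in kB.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) * 1024
    raise RuntimeError("MemTotal not found in /proc/meminfo")

def arc_cap_line(fraction: float = 0.25) -> str:
    # Example fraction only; size the ARC to leave room for your VMs.
    cap = int(mem_total_bytes() * fraction)
    return f"options zfs zfs_arc_max={cap}"

if __name__ == "__main__":
    # Review the output, then add it to /etc/modprobe.d/zfs.conf and reboot,
    # or write the byte value into /sys/module/zfs/parameters/zfs_arc_max live.
    print(arc_cap_line())
```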

  • edited April 2023

    I got ahead of myself starting this thread before trying Btrfs on Proxmox, which is also an option in the install UI. So far I like it a lot more than ZFS, mostly because the setup looks and acts similar to ext4, so there is less of a learning curve.
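
For reference, roughly what the installer's "Btrfs RAID 1" option amounts to by hand; a sketch only, with placeholder device names, and it wipes those disks.

```python
#!/usr/bin/env python3
"""Sketch: a two-disk btrfs RAID 1, mirroring both data and metadata."""
import subprocess

DISKS = ["/dev/sdb", "/dev/sdc"]  # hypothetical empty disks -- adjust

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["mkfs.btrfs", "-f", "-d", "raid1", "-m", "raid1", *DISKS])  # mirror data + metadata
run(["mount", DISKS[0], "/mnt"])                                 # either device mounts the fs
run(["btrfs", "filesystem", "usage", "/mnt"])                    # confirm the RAID1 profiles
```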

  • edited April 2023

    I am digging Btrfs, but it doesn't look like Proxmox can do what I need. No idea why people keep recommending it for VPS hosting and sending me off on these tangents every few years when it can't even do the basics for that specific vertical market the way Virtualizor/Solus have always been able to.

  • @LosPollosHermanos said:
    I am digging Btrfs, but it doesn't look like Proxmox can do what I need. No idea why people keep recommending it for VPS hosting and sending me off on these tangents every few years when it can't even do the basics for that specific vertical market the way Virtualizor/Solus have always been able to.

    Proxmox is suitable for non-devs like me who just want a cluster up and running in a day. It certainly wastes a lot of resources and does not even allow hibernating VMs when the hypervisor restarts. Contabo is using it with a certain level of success.

    Also, it's Debian-based, so if a feature you need can be done in Debian, you can make it happen in Proxmox.

    The way I understand it, everyone suggests Xen when it comes to doing it as a service and/or efficiently (the free/open/freemium options, that is).

  • host_c Member, Patron Provider

    @LosPollosHermanos

    Either way you go, just leave EXT4 where it is, in the ground. It has lived long enough; time to move on.

    Also, ZFS on Linux takes some tweaking to get it performing well (a baseline tuning sketch follows this comment), so I would suggest going with btrfs or XFS. If you do hardware RAID with a dedicated RAID card, please just don't put ZFS on top of it; it will break your data at some point.

    As for the operating system of your hypervisor, Proxmox has come a long way in the past 3 years; today it is at the point of being a really good option. It still struggles with NVMe performance compared to the BIG BOYS like VMware or Hyper-V, which will outperform Proxmox on any level in NVMe performance.

    Other than this, Proxmox and Xen are both good options, although I would go for Proxmox as a hypervisor.
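
The "tweaking" usually meant here is a handful of dataset properties. A sketch under the assumption of an existing pool named "tank"; these are common community defaults, not host_c's actual recipe.

```python
#!/usr/bin/env python3
"""Sketch: baseline ZFS property tuning on an existing pool."""
import subprocess

POOL = "tank"  # hypothetical pool name -- adjust

for prop in ("compression=lz4", "atime=off", "xattr=sa"):
    cmd = ["zfs", "set", prop, POOL]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# ashift (physical sector size) can only be chosen at pool creation time, e.g.:
#   zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
```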

  • tentor Member, Patron Provider
    edited April 2023

    @host_c said: it still struggles with NVMe performance compared to the BIG BOYS like VMware or Hyper-V

    Can you provide any benchmarks confirming this statement? I have found only one, and it refutes this claim: https://kb.blockbridge.com/technote/proxmox-vs-vmware-nvmetcp/

  • host_c Member, Patron Provider

    On NVMe, VMware and Hyper-V will do 2.6-3.5 Gbps; Proxmox will max out at 1.8 Gbps. Same server, same NVMe. These were our tests; I cannot give any benchmarks, as the servers are already in production.

    We tried EXT4, ZFS, XFS, RAW & QCOW2 combinations in Proxmox. Results were the same, +/- 10%.
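
Since no benchmarks could be shared, here is one way to produce comparable in-guest numbers yourself; a sketch that assumes fio is installed in the VM, with a hypothetical scratch-file path and an arbitrary sequential-read profile.

```python
#!/usr/bin/env python3
"""Sketch: a repeatable fio run to compare guest NVMe throughput across hypervisors."""
import json
import subprocess

cmd = [
    "fio", "--name=seqread", "--filename=/root/fio.test", "--size=8G",
    "--rw=read", "--bs=4M", "--iodepth=32", "--ioengine=libaio",
    "--direct=1", "--runtime=60", "--time_based", "--output-format=json",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
job = json.loads(result.stdout)["jobs"][0]

# Recent fio versions report bw_bytes (bytes/s); older ones only have bw in KiB/s.
bw = job["read"].get("bw_bytes", job["read"]["bw"] * 1024)
print(f"sequential read: {bw / 1e9:.2f} GB/s")
```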

  • host_c Member, Patron Provider

    Ah, the servers were generally R630 Xeon V4 or R640 Xeon Gold, dual CPU. Again: same server, different operating system, better results. So for the moment I would say the NVMe driver in Proxmox is behind in performance. But again, the other setups are paid licenses, so for free, what you get in Proxmox is OK.

    On the other hand, 8 x 2TB Kingston DC500R SSDs in a RAID 10 give the same results in Proxmox as in VMware or Hyper-V.

  • host_c Member, Patron Provider

    @tentor said:

    @host_c said: it still struggles with NVMe performance compared to the BIG BOYS like VMware or Hyper-V

    Can you provide any benchmarks confirming this statement? I have found only one, and it refutes this claim: https://kb.blockbridge.com/technote/proxmox-vs-vmware-nvmetcp/

    Those tests were done on network-attached storage using NVMe/TCP, with Blockbridge 6 as the backend storage software.

    Our tests were on internal storage.

  • ezeth Member, Patron Provider

    @host_c said:
    On NVMe, VMware and Hyper-V will do 2.6-3.5 Gbps; Proxmox will max out at 1.8 Gbps. Same server, same NVMe. These were our tests; I cannot give any benchmarks, as the servers are already in production.

    We tried EXT4, ZFS, XFS, RAW & QCOW2 combinations in Proxmox. Results were the same, +/- 10%.

    I have had a similar experience with a new U.2 NVMe drive in my R630 server.

    1 GB/s on Proxmox, 3 GB/s on Hyper-V. I have not tried VMware; they don't support software RAID, and I'm not sure there's a RAID card for U.2 NVMe. I got 4 of them and want to do RAID 5.
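
For completeness, the Linux software-RAID route for four NVMe drives (the thing VMware lacks, and not the hardware PERC route the replies below end up on). A sketch with placeholder device names that assumes the drives are empty.

```python
#!/usr/bin/env python3
"""Sketch: a 4-drive NVMe RAID 5 with Linux md (mdadm)."""
import subprocess

DRIVES = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]  # placeholders

cmd = ["mdadm", "--create", "/dev/md0", "--level=5",
       f"--raid-devices={len(DRIVES)}", *DRIVES]
print("+", " ".join(cmd))
subprocess.run(cmd, check=True)
# Then put a filesystem or LVM on /dev/md0 and point Proxmox storage at it.
```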

  • host_c Member, Patron Provider

    @ezeth said: 1 GB/s on Proxmox, 3 GB/s on Hyper-V. I have not tried VMware; they don't support software RAID, and I'm not sure there's a RAID card for U.2 NVMe. I got 4 of them and want to do RAID 5.

    You need the NVMe cables from the motherboard to the backplane + an H730P card for that; it will work like a beast!

  • edited April 2023

    @host_c said:
    @LosPollosHermanos

    Either way you go, just leave EXT4 where it is, in the ground. It has lived long enough; time to move on.

    I'm not convinced XFS is an improvement over ext4 for general use. Red Hat defaults to it now, but Debian still defaults to ext4. Because I will be using this for KVM hosting and will be doing snapshots, there is also that consideration, which is apparently a limitation of XFS. I guess there are ways around it, but I haven't looked into that yet.
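
One common workaround people point to (an assumption here, not something the poster settled on): take the snapshot at the disk-image layer with qcow2, which behaves the same whether the directory underneath is ext4 or XFS. The image path is hypothetical, and the VM should be stopped, or the snapshot driven through the hypervisor, for a consistent state.

```python
#!/usr/bin/env python3
"""Sketch: qcow2 internal snapshots via qemu-img, independent of the host filesystem."""
import subprocess

IMAGE = "/var/lib/vz/images/100/vm-100-disk-0.qcow2"  # hypothetical Proxmox-style path

subprocess.run(["qemu-img", "snapshot", "-c", "before-upgrade", IMAGE], check=True)  # create
subprocess.run(["qemu-img", "snapshot", "-l", IMAGE], check=True)                    # list
# Roll back later with: qemu-img snapshot -a before-upgrade <image>
# Delete with:          qemu-img snapshot -d before-upgrade <image>
```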

  • host_c Member, Patron Provider

    We went with XFS because of data corruption on power loss: we never had this problem with XFS, while on EXT4 we had enough fsck runs at boot that we got bored of it. And since we use qcow2 for KVM, XFS handles IO better on single large files.

  • ezeth Member, Patron Provider
    edited April 2023

    @host_c said:

    @ezeth said: 1 GB/s on Proxmox, 3 GB/s on Hyper-V. I have not tried VMware; they don't support software RAID, and I'm not sure there's a RAID card for U.2 NVMe. I got 4 of them and want to do RAID 5.

    You need the NVMe cables from the motherboard to the backplane + an H730P card for that; it will work like a beast!

    Could you perhaps link to what I need?

    I currently have this https://www.ebay.com/itm/284524536712 installed in my server. It has no supported controller mode. The H730P only seems to support 2 drives?

  • host_c Member, Patron Provider

    OK, I got it wrong, sorry; you can do RAID with what you have + the onboard S130 HBA adapter.

    The video is for Gen 14; I remember we did it on Gen 13, but we had the 10-bay model.

    Give it a try with what you have; it should work. I am afraid that a Linux OS will not see the RAID, only Microsoft Windows Server + VMware with the driver loaded during setup.

  • host_c Member, Patron Provider

    Both SATA and NVMe drives can be used at the same time. In your case the SATA drives will be seen by the H730P and you can create a RAID 10 on the H730P. The NVMe drives will be seen by the S130 controller and you can create a RAID 10 under the S130 controller.

    https://www.dell.com/community/PowerEdge-HDD-SCSI-RAID/Dell-PowerEdge-R630-nvme-disks/td-p/7891457/page/6

  • host_c Member, Patron Provider

    Only the H755N PERC controller supports NVMe RAID, and it is supported in the PowerEdge R650.

  • Catixs Member, Host Rep

    @host_c said:
    The NVMe drives will be seen by the S130 controller and you can create a RAID 10 under the S130 controller

    So for Dell 14th Gen and 15th Gen servers with S150 controllers, it's just a bit tricky.

    We have tried multiple NVMe drives, including Intel P4510/P4610, Samsung PM9A3, and 980 Pro, but only a few of them could be found by the S150 controller. The rest of the drives still work normally, they just don't show up in the S150.

    I had a long discussion with Dell Pro Support, and the only solution they gave us was to use "Dell Certified Drives" instead of OEM or other brands.

  • Shazan Member, Host Rep

    @host_c said:
    We went with XFS because of data corruption on power loss: we never had this problem with XFS, while on EXT4 we had enough fsck runs at boot.

    I have the exact opposite experience. It's true that ext4 could need an fsck at boot, but it never failed me, while on the only server I ran with XFS, it corrupted ALL THE FILES, overwriting them with zeros... terrible.

  • host_c Member, Patron Provider

    @Catixs said: I had a long discussion with Dell Pro Support, and the only solution they gave us was to use "Dell Certified Drives" instead of OEM or other brands.

    They always give you this answer, we know.

  • host_c Member, Patron Provider

    @Shazan said: I have the exact opposite experience. It's true that ext4 could need an fsck at boot, but it never failed me, while on the only server I ran with XFS, it corrupted ALL THE FILES, overwriting them with zeros... terrible.

    Interesting. Did you have a power outage, or a memory error on the server?

    Or did you use Desktop systems in this case?

  • Some practical experience I have gained: while ZFS has many advanced features, it can consume a lot of memory, so if you do not have ample memory, or your programs require a lot of it, it may not be the best choice. XFS is generally sufficient for file management. I have used software RAID with bcache to enhance performance and achieved good results (a setup sketch follows below). It is not a cutting-edge or fancy solution, but it is tried and tested, as long as you configure it properly.
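
A sketch of the software-RAID + bcache layering mentioned above: an SSD cache in front of an md array. The device names are placeholders, bcache-tools is assumed to be installed, and this wipes the cache device.

```python
#!/usr/bin/env python3
"""Sketch: put a bcache SSD cache in front of an existing md software-RAID array."""
import subprocess

BACKING = "/dev/md0"     # existing software-RAID array (placeholder)
CACHE = "/dev/nvme0n1"   # SSD/NVMe used as cache (placeholder, will be wiped)

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["make-bcache", "-B", BACKING])  # register the backing device -> /dev/bcache0
run(["make-bcache", "-C", CACHE])    # format the cache device (prints a cache-set UUID)
# Attach the cache by writing that UUID to /sys/block/bcache0/bcache/attach,
# then mkfs and mount /dev/bcache0 as usual; switching cache_mode to writeback
# gives the biggest gain (and the biggest risk on power loss).
```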

  • rm_ IPv6 Advocate, Veteran

    @Shazan said: I have the exact opposite experience. It's true that ext4 could need an fsck at boot, but it never failed me, while on the only server I ran with XFS, it corrupted ALL THE FILES, overwriting them with zeros... terrible.

    I bet it was not "all" the files, but only the files that were open at the time of the power cut or sudden reboot. Secondly, they say it's no longer supposed to do that, so it could matter how recently this happened; if it was long ago, it might be fixed by now.

  • Shazan Member, Host Rep

    @host_c said:
    Interesting. Did you have a power outage, or a memory error on the server?

    A power outage.

    Or did you use Desktop systems in this case?

    No, it was a Supermicro server.

  • Shazan Member, Host Rep

    @rm_ said:

    @Shazan said: I have the exact opposite experience. It's true that ext4 could need an fsck at boot, but it never failed me, while on the only server I ran with XFS, it corrupted ALL THE FILES, overwriting them with zeros... terrible.

    I bet it was not "all" the files, but only the files that were open at the time of the power cut or sudden reboot. Secondly, they say it's no longer supposed to do that, so it could matter how recently this happened; if it was long ago, it might be fixed by now.

    Yes, sorry, not all of them, but definitely many more than just those that were open. It was a mail server; all the mailboxes became completely unusable and the corresponding files were full of zeros.

    It happened a few years ago, but given that I never had any issues with ext2/3/4, I decided not to try XFS again.

  • host_c Member, Patron Provider
    edited May 2023

    Well, we actually only use DELL and HP.

    On losing data: with ZFS we never, and I underline never, lost any data, nor with XFS over HW RAID. But we are colocated in a new datacenter, so we do not have any problems with power and cooling.

    Either way you go (xfs, zfs, ext4, ntfs...), if you use caching and have a power outage, you will lose some data, as whatever is in the memory of the server/storage/switches goes bye-bye. If you have a scenario where power is a problem, just don't use write cache, and always use sync writes. Yes, this will really impact your write performance, but you will be on the SAFE SIDE :smile:

    It matters very little which file system you use: if you lose power while writing data or while files are open, you will lose something. Some file systems are more resilient than others, but all of them suffer from the same problem on a power loss if you have open files or unsynced writes. This is not a fault of the file systems; it is a weakness of write caching (a small sketch of forcing sync writes on ZFS follows below).
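
What "always use sync writes" can look like on ZFS, as one concrete example; the dataset name is a placeholder, sync=standard is the default, and sync=always trades throughput for safety on power loss.

```python
#!/usr/bin/env python3
"""Sketch: force synchronous writes on a ZFS dataset used for VM storage."""
import subprocess

DATASET = "tank/vmdata"  # hypothetical dataset -- adjust

subprocess.run(["zfs", "set", "sync=always", DATASET], check=True)  # every write hits stable storage before ack
subprocess.run(["zfs", "get", "sync", DATASET], check=True)         # confirm the setting
# Revert to the default with: zfs set sync=standard tank/vmdata
```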
