ZFS or btrfs or ext4 for RAID1?
edited April 16 in General
I am thinking of going with Proxmox, and it seems like that community likes using ZFS instead of ext4 + LVM. I don't know much about it, so I'm trying to understand it better before jumping in. It definitely simplifies the install, since the Proxmox UI gives me the option of selecting ZFS RAID 1 and then sets that all up for me. It also gives me the option of using Btrfs RAID 1, but says it's a technology preview.
One thing I don't like about ZFS is that it's not in the Linux kernel. Btrfs is. Everyone has been saying Btrfs is too new for years but at some point people will need to stop saying that. It's been in the mainline Linux kernel since March 2009.
If that simpler install is an important help to you, why not go with it? The Proxmox UI for ZFS is quite good imo. ZFS takes a bit more RAM than ext4.
What specific questions do you have? Just like any other filesystem, 'it will work'. Keep in mind that half of your RAM will show as "used" (by default; it's configurable), because ZFS uses it for ARC caching. So if you are gonna freak out about "omg Proxmox uses 64GB of RAM", don't.
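If you do want to cap the ARC instead of letting it take half of RAM, a minimal sketch (the 8 GiB figure is just an example; size it for your own box):

```shell
# zfs_arc_max is specified in bytes; compute the cap for an assumed 8 GiB limit.
ARC_MAX_BYTES=$((8 * 1024 * 1024 * 1024))
# This is the line you would put into /etc/modprobe.d/zfs.conf:
echo "options zfs zfs_arc_max=${ARC_MAX_BYTES}"
# prints: options zfs zfs_arc_max=8589934592
# On a real Proxmox host, after writing that file, run `update-initramfs -u`
# and reboot for the new cap to take effect.
```

The echo is just illustrative; the actual change is the modprobe config file plus an initramfs rebuild.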
Best feature of ZFS: snapshots. Use them as incremental backups, use them as a cheap way to "go back in time"...
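As a sketch of that incremental-backup workflow (the pool, dataset, and host names here are made up; adjust for your own layout):

```shell
# Take a snapshot of a hypothetical dataset.
zfs snapshot tank/vmdata@nightly-2024-05-01

# Next night: another snapshot, then send only the delta to a backup host.
zfs snapshot tank/vmdata@nightly-2024-05-02
zfs send -i tank/vmdata@nightly-2024-05-01 tank/vmdata@nightly-2024-05-02 | \
    ssh backup-host zfs receive backup/vmdata

# "Go back in time" locally (discards everything after that snapshot):
zfs rollback tank/vmdata@nightly-2024-05-01
```

These commands need a real pool, so treat this as a command sketch, not something to paste blindly.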
Also, if you are going to do it, do it properly: without HW RAID and with at least 2 disks in a mirror. Meaning, your system, in the ideal scenario, will have: a) an OS drive (256GB enterprise SSD) + b) any other disks for ZFS datasets (at least 2, for example 2x1.92TB enterprise SSD). Have a decent amount of RAM; I would say your initial calculation + ~30% more. So if you are thinking about a 32GB system, go with 64GB.
I got ahead of myself starting this thread before trying Btrfs on Proxmox, which is also an option on the install UI. So far I like it a lot more than ZFS. Mostly because the setup looks and acts similar to ext4. So less learning curve.
I am digging Btrfs, but it doesn't look like Proxmox can do what I need. No idea why people keep recommending it for VPS hosting and sending me off on these tangents every few years when it can't even do the basics for that specific vertical market like Virtualizor/Solus have always been able to do.
Proxmox is suitable for non-devs like me who just want a cluster up and running in a day. It certainly wastes a lot of resources and does not even allow hibernating VMs when the hypervisor restarts. Contabo is using it with a certain level of success.
Also, it's Debian-based, so if a feature you need can be done in Debian, you can make it happen in Proxmox.
The way I understand it, everyone suggests Xen when it comes to doing it as a service and/or efficiently (the free/open/freemium option, that is).
Either way you go, just leave EXT4 where it is, in the ground. It lived enough, time to move on.
Also, on Linux, ZFS takes some tweaking to get it performing well, so I would suggest going with Btrfs or XFS. If you do hardware RAID with a dedicated RAID card, just do not use ZFS, please; it will break your data at some point.
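For what it's worth, the "tweaking" people usually mean starts with a few pool and dataset properties. A hedged sketch with hypothetical pool and disk names:

```shell
# At pool creation: ashift=12 aligns writes to 4K sectors, which suits most
# modern SSDs; it cannot be changed after the pool exists.
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# Cheap wins on the datasets: lightweight lz4 compression, and no access-time
# updates (which otherwise turn every read into a small write).
zfs set compression=lz4 tank
zfs set atime=off tank
```

Again, these need real disks and a real pool; the device names are placeholders.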
As for the operating system of your hypervisor, Proxmox has come a long way in the past 3 years; today it is at the point of being a really good option. It still struggles with performance issues on NVMe compared to the BIG BOYS like VMware or Hyper-V, which will outperform Proxmox on any level in NVMe performance.
Other than this, Proxmox and Xen are both good options, although I would go for Proxmox as a hypervisor.
Can you provide any benchmarks confirming this statement? I have found only one, and it refutes this claim: https://kb.blockbridge.com/technote/proxmox-vs-vmware-nvmetcp/
On NVMe, VMware and Hyper-V will do 2.6-3.5 Gbps; Proxmox will max out at 1.8 Gbps, same server, same NVMe. These were our tests; I cannot give any benchmarks, as the servers are already in production.
We tried EXT4, ZFS, XFS, raw & qcow2 combinations in Proxmox. Results were the same, +/- 10%.
Ah, the servers were generally R630 Xeon V4 or R640 Xeon Gold, dual CPU. Again: same server, different operating system, better results. So for the moment I'd say the NVMe driver in Proxmox is behind in performance. But again, the other setups are paid licenses, so for free, what you get in Proxmox is OK.
On the other hand, 8 x 2TB Kingston DC500R SSD drives in RAID 10 give the same results in Proxmox as in VMware or Hyper-V.
Those tests were done on network-attached storage using NVMe/TCP; the backend storage software is Blockbridge 6.
Our tests were on internal storage.
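Since neither side can share production numbers, the repeatable way to settle this is to run the exact same fio job in a guest on each hypervisor. A sketch (device path, job shape, and runtime are assumptions; point it at a scratch device you can afford to hammer):

```shell
# 4K random read against a raw NVMe device, queue depth 32, 30 seconds.
# --direct=1 bypasses the page cache so you measure the device, not RAM.
fio --name=nvme-randread \
    --filename=/dev/nvme0n1 \
    --direct=1 --rw=randread --bs=4k \
    --iodepth=32 --numjobs=1 \
    --time_based --runtime=30 \
    --group_reporting
```

Repeat with `--rw=randwrite` on a disposable device, and compare IOPS/bandwidth between the platforms with everything else held constant.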
I have a similar experience with a new U.2 NVMe in my R630 server.
1 GB/s on Proxmox, 3 GB/s on Hyper-V. I have not tried VMware; they don't support software RAID, and I'm not sure there's a RAID card for U.2 NVMe. I got 4 of them and want to do RAID 5.
You need the cables for NVMe from the motherboard to the backplane + an H730P card for that; it will work like a beast!
I'm not convinced XFS is an improvement over ext4 for general use. Red Hat defaults to it now, but Debian still defaults to ext4. Because I will be using this for KVM hosting and will be doing snapshots, there is also that consideration, which is apparently a limitation of XFS. I guess there are ways around it, but I haven't looked into that yet.
We went with XFS because of data corruption on power loss. We never had this problem on XFS; on EXT4, we had enough fsck runs at boot that we got bored. And as we use qcow2 for KVM, XFS handles IO better on single large files.
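On the snapshot concern with XFS: if the guest disks are qcow2, snapshots can live inside the image itself, so the host filesystem never has to support them. A sketch with hypothetical Proxmox-style paths (the VM must be shut off for offline `qemu-img snapshot` operations):

```shell
# Create a 32G qcow2 image (hypothetical path/VMID).
qemu-img create -f qcow2 /var/lib/vz/images/100/vm-100-disk-0.qcow2 32G

# Take an internal snapshot named "before-upgrade", then list snapshots.
qemu-img snapshot -c before-upgrade /var/lib/vz/images/100/vm-100-disk-0.qcow2
qemu-img snapshot -l /var/lib/vz/images/100/vm-100-disk-0.qcow2

# Roll back to it later with:
#   qemu-img snapshot -a before-upgrade <image>
```

Proxmox normally drives this through its own storage layer, so treat this as an illustration of why the filesystem choice doesn't block qcow2 snapshots.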
Could you perhaps link to what I need?
I currently got this https://www.ebay.com/itm/284524536712 installed in my server. It has no supported controller mode. The H730P only seems to support 2 drives?
OK, I got it wrong, sorry. You can do RAID with what you have + the onboard S130 HBA adapter.
The video is for Gen 14. I remember we did it on Gen 13, but we had the 10-bay model.
Give it a try with what you have; it should work. I am afraid the Linux OS will not see the RAID, only Microsoft Windows Server + VMware with the driver loaded during setup.
Both SATA and NVMe drives can be used at the same time. In your case, the SATA drives will be seen by the H730P, and you can create a RAID 10 in the H730P. The NVMe drives will be seen by the S130 controller, and you can create a RAID 10 under the S130 controller.
Only the H755N PERC controller supports NVMe RAID. It is supported in the PowerEdge R650.
So for DELL 14 Gen and 15 Gen servers with S150 controllers, it's just a bit tricky.
We have tried multiple NVMe drives, including Intel P4510/P4610, Samsung PM9A3, and 980 Pro, but only a few of them could be found by the S150 controller. The rest of the drives would still work normally, just not display in the S150.
I had a long discussion with Dell Pro Support, and the only solution they gave us is to use "DELL Certified Drives" instead of OEM or other brands.
I have the exact opposite experience. It's true that ext4 could need to fsck at boot but it never failed, while I had XFS corrupting ALL THE FILES overwriting them with zeros in the only server I had with XFS... terrible.
They always give you this, we know
Interesting. Did you have a power outage, or a memory error on the server?
Or did you use Desktop systems in this case?
Some practical experience I have gained is that while ZFS has many advanced features, it can consume a lot of memory. Therefore, if you do not have ample memory or your programs require a lot of memory, it may not be the best choice. XFS is generally sufficient for file management. I have used software RAID with bcache to enhance performance and achieved some good results. This is not a cutting-edge or fancy solution, but it has been tried and tested as long as you configure it properly.
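The md RAID + bcache setup mentioned above looks roughly like this (device names are placeholders; bcache wants a fast SSD as cache in front of a slower backing array):

```shell
# Build a RAID1 array from two hypothetical data disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Format the array as a bcache backing device, and an SSD as the cache device.
make-bcache -B /dev/md0
make-bcache -C /dev/nvme0n1

# Attach the cache set to the backing device (UUID comes from
# `bcache-super-show /dev/nvme0n1`), then put a filesystem on the result:
#   echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
mkfs.xfs /dev/bcache0
```

This destroys data on the named devices, so it is a sketch of the layering, not a paste-ready script.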
I bet not "all" the files, but only files which were open at the time of a power cut or a sudden reboot. Secondly, they say it's no longer supposed to do that, so it could matter how recently you had this happen, if too long ago, then it might be fixed.
A power outage.
No, it was a Supermicro server.
Yes, sorry, not all of them, but definitely many more than only those that were open. It was a mail server; all the mailboxes became completely unusable, and the corresponding files were full of zeros.
It happened a few years ago, but given I never had any issues with ext2/3/4, I decided not to try XFS again.
Well, we actually only use DELL and HP.
On losing data: with ZFS we never, and I mean never, lost any data, nor with XFS over HW RAID. But we are colocated in a new datacenter, so we do not have any problems with power and cooling.
Either way you go (XFS, ZFS, ext4, NTFS...), if you use caching and have a power outage, you will lose some data, as whatever is in the memory of the server/storage/switches goes bye-bye. If you have a scenario where power is a problem, just don't use write cache, and always use sync writes. Yes, this will really impact your write performance, but you will be on the SAFE SIDE.
It makes absolutely no difference what file system you use if you lose power while writing data or have open files. Some file systems are more resilient than others, but all of them suffer from the same problem on a power loss if the writes are not synced. This is not a fault of the file systems; it is a weakness of write cache.
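The sync-write point can be seen with plain dd: `oflag=sync` opens the output with O_SYNC, so every write is committed to stable storage before dd returns, instead of sitting in the page cache where a power cut would eat it. A small self-contained demo (the path is arbitrary):

```shell
# Write 1 MiB (256 x 4 KiB) with O_SYNC semantics: each write block is
# flushed to stable storage before dd moves on. Without oflag=sync, the
# same data would land in RAM first and only hit disk on a later flush.
dd if=/dev/zero of=/tmp/sync_demo.bin bs=4096 count=256 oflag=sync 2>/dev/null

# Confirm what actually landed.
stat -c %s /tmp/sync_demo.bin
# prints: 1048576
```

Run it against a real disk (not tmpfs) with and without `oflag=sync` and you will see exactly the throughput penalty described above.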