New on LowEndTalk? Please Register and read our Community Rules.
LVM Thin or qcow2?
LosPollosHermanos
Member
Lots of people have opinions but does anyone have recent real world experience? Just how much slower is qcow2 these days with SSDs and more recent Linux kernels and improvements in the drivers? I have read that it's much faster now than it was just a few years ago and I like the benefits of a file rather than having to deal with LVM and raw format.
Comments
Generally speaking, it makes sense to use LVM-thin with raw disk images: you can snapshot, and you don't have to reserve the space up front since it's thin.
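For anyone who hasn't set it up before, here's a minimal sketch of the thin-pool workflow — create a pool, carve a thin volume out of it, snapshot it. The volume group name `vg0` and LV names are just examples, adjust to your setup:

```shell
# Create a 100G thin pool inside volume group vg0
lvcreate --type thin-pool -L 100G -n thinpool vg0

# Create a 40G thin volume for a VM; blocks are only allocated as the guest writes
lvcreate -V 40G --thinpool vg0/thinpool -n vm1-disk

# Snapshot it instantly (snapshots of thin volumes are themselves thin)
lvcreate -s -n vm1-snap vg0/vm1-disk
```

Note you can over-provision: the sum of thin volume sizes can exceed the pool, so keep an eye on pool usage with `lvs`.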
I'd be curious to know the answer as well. I doubt performance can be really great but as you say dealing with files has advantages.
Why no love for btrfs, btw? Many advantages over LVM!
Because it is too young for production usage and requires modern kernel versions to be stable (at least 5.14). Keep in mind that some features like RAID-5/6 support are still experimental (as of Linux kernel 6.2).
Using btrfs with RAID10 for a few years, no problems, but I didn't have to take advantage of any of btrfs's features (snapshots etc.) either.
use lvm thin with raw disk image maybe
That is the default position I am starting from. I am just asking if it would make sense to consider qcow2 now that it is a lot faster with SSD drives and driver improvements. Lots of people with outdated opinions on it, but not a lot of people with recent real-world experience to back it up. Myself included.
I think qcow2 handles snapshots poorly. The problems we get with snapshots in our cloud are too frequent to be deemed normal or some freak accident. If snapshotting is very important and frequent, you would be better off avoiding it. I know it is, in theory, old and mature technology, but I don't think snapshotting was one of the main testing lines, especially under load.
Using btrfs for a few years, including snapshots, compression and stuff like that (no RAID). Never had a problem. Many people have been running btrfs for production stuff for a while with no trouble.
I just tried it on Proxmox and am really liking it so far. Very easy to learn coming from ext4. I'll have to try breaking RAID and recovering to see how it acts. The commands for that look straightforward enough.
It's been in the mainline kernel since 2009. Not sure why people still say it's too young. Sounds like a fair number of people are using it in production. Supposedly Facebook uses it for a lot of stuff.
linux mdadm raid is super simple (and very easy to recover on) once you have a general understanding.
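Since recovery came up: replacing a failed member really is only a few commands. A sketch, assuming an array `/dev/md0` with a failed disk `/dev/sdb1` — names are examples, adjust to your layout:

```shell
# Mark the failing disk as failed and pull it from the array
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# After physically swapping and partitioning the replacement to match,
# add it back; the array rebuilds automatically
mdadm /dev/md0 --add /dev/sdb1

# Watch the resync progress
cat /proc/mdstat
```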
Def gonna be a great option tho
Btrfs is not a good choice for VM images. The performance will be poor unless you enable "nodatacow" for the image files and also don't do any snapshots, but then you get no benefit from Btrfs compared to ext4 or XFS.
Secondly, there is a nasty extent reservation problem, which makes your actual disk usage grow over time if your scenario is to overwrite small pieces all over within a large file (exactly as you do with VM images).
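For reference, the usual way to get nodatacow per file (rather than mounting the whole filesystem with `-o nodatacow`) is the `C` file attribute, which must be set before the image files are created — new files inherit it from the directory. A sketch, using the conventional libvirt images path as an example:

```shell
# Set No_COW on the images directory; files created in it afterwards inherit it
mkdir -p /var/lib/libvirt/images
chattr +C /var/lib/libvirt/images

# Verify the attribute (look for 'C' in the output)
lsattr -d /var/lib/libvirt/images
```

Setting `+C` on an existing non-empty file has no reliable effect, which is why it goes on the directory first.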
One option that I find nifty these days is to run XFS and store VM images as raw format sparse files on it:
truncate -s 100G vm.img
(note: `fallocate -l 100G` would preallocate the blocks, which defeats the point of a sparse file — `truncate` just records the size)
It combines the best of both worlds to some degree, as you have the ease of dealing with files, but they also can be instantly snapshotted (via reflink support, that XFS has recently gained):
cp -a --reflink=always vm.img vm.img.snap
And consume only the allocated space, thanks to being sparse.
Sparse files can be used on Ext4 as well, but no reflink-snapshot support there.
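If you want to see the sparseness in action, compare the apparent size with the blocks actually used — a small illustration with plain coreutils (`truncate` creates the file sparse):

```shell
# Create a 100G sparse file: the size is recorded, but no blocks are allocated
truncate -s 100G vm.img

# Apparent size vs real disk usage
du -h --apparent-size vm.img   # shows ~100G
du -h vm.img                   # shows ~0, since nothing is written yet
```

As the guest writes into the image, the second number grows toward the first.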
qcow2 on top of xfs or zfs, xfs for local RAID 10, ZFS for SAN Storage.
After testing in production, we found a 5% performance hit on qcow2 vs raw, in some extreme cases maybe 10%. But storage handling and recovery is much easier with qcow2 images, at least in our opinion. We are using minimum Xeon V4 and Xeon Gold CPUs for nodes, and a minimum of 2x 10 Gbps ETH cards.
LVM Thin is convenient for snapshots and backups, and QCOW2 is more convenient if you need to transfer a VPS from one server to another
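On that last point: converting between the two formats for a transfer is a single `qemu-img convert` in either direction. A sketch — the LV path and file names are just examples:

```shell
# Raw LVM volume -> qcow2 file (easy to copy to another server)
qemu-img convert -p -f raw -O qcow2 /dev/vg0/vm1-disk vm1.qcow2

# qcow2 file -> raw LVM volume on the destination (LV must already exist and be large enough)
qemu-img convert -p -f qcow2 -O raw vm1.qcow2 /dev/vg0/vm1-disk
```

`-p` just shows progress; `-f` and `-O` are the source and output formats.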