
Experiences with ZFS?

What's your experience with ZFS, guys?

Just had my first experience with it. It was a nightmare.

I was using Proxmox, updated via apt-get dist-upgrade, then rebooted. After that it wouldn't boot. I got a KVM console and found that ZFS was stalling at the start of every boot and needed me to manually import the pool and exit for it to continue (roughly the commands at the end of this post).

Long story short, thus far ZFS has been a bit more work than the alternatives, but I recognize its benefits.

Do you guys use ZFS? If so, how has your experience been? If you used it and no longer do, why did you drop it?
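
For the curious, the dance at the initramfs prompt looked roughly like this (a sketch; "rpool" is the Proxmox default pool name, adjust to yours):

    zpool import -N rpool   # import the root pool without mounting anything
    exit                    # leave the initramfs shell and let boot continue

From what I've read, the usual permanent fix is to refresh the cachefile and the initramfs, something along these lines (again, adjust the pool name):

    zpool set cachefile=/etc/zfs/zpool.cache rpool
    update-initramfs -u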


Comments

  • ZFS on Linux or BSD?

    Thanked by: jh
  • @Tom said:
    ZFS on Linux or BSD?

    Linux, for me anyways.

  • I've been using ZFS on Linux for quite some time, and it's effectively replaced hardware RAID nearly everywhere I previously used hardware RAID for bulk storage.

    It's a different paradigm, but one worth your time. Remember to use HBAs only, remember to scrub, remember to set up notifications (rough sketch at the end of this post).

    Reviewing your statement:

    Crandolph said: Was a nightmare.

    I've found your problem:

    Crandolph said: Was using proxmox

    I'll pull no punches on this: Proxmox sucks at literally everything but disk images. ZFS, remote Gluster, RBD, thin LVM provisioning. All sucks on Proxmox.
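
    For the scrub and notification part, a rough sketch of what I mean on ZFS on Linux (the pool name, paths and variable names are examples and may differ by version):

        # weekly scrub from cron, e.g. in /etc/cron.d/zfs-scrub
        0 3 * * 0  root  /sbin/zpool scrub tank

        # email alerts from the ZFS Event Daemon, e.g. in /etc/zfs/zed.d/zed.rc
        ZED_EMAIL_ADDR="root@example.com"
        ZED_NOTIFY_INTERVAL_SECS=3600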

  • @Damian said:
    I've been using ZFS on Linux for quite some time, and it's effectively replaced hardware RAID nearly everywhere I previously used hardware RAID for bulk storage.

    It's a different paradigm, but one worth your time. Remember to use HBAs only, remember to scrub, remember to set up notifications.

    Reviewing your statement:

    Crandolph said: Was a nightmare.

    I've found your problem:

    Crandolph said: Was using proxmox

    I'll pull no punches on this: Proxmox sucks at literally everything but disk images. ZFS, remote Gluster, RBD, thin LVM provisioning. All sucks on Proxmox.

    But what's a better alternative to Proxmox? (free, open source, etc.)

  • Crandolph said: But what's a better alternative to Proxmox? (free, open source, etc.)

    Libvirt via virsh, or if you need point-and-click, virt-manager (quick sketch at the end of this post).

    That being said, I like Proxmox, and it's fine otherwise. Just not using it with any of their built-in "storage" offerings.
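
    To give an idea, a minimal libvirt workflow looks something like this (the VM name, ISO path and sizes are made-up examples):

        # define and install a VM in one go
        virt-install --name testvm --memory 2048 --vcpus 2 \
            --disk size=20 --cdrom /var/lib/libvirt/images/debian.iso \
            --os-variant debian9

        virsh list --all     # see defined VMs
        virsh start testvm   # boot an existing one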

  • My experience has been great; it's much more stable than btrfs, but the learning curve can be steep.
    The only limitation, assuming you are not using mirroring but raidz2/3, is that if you need to grow your zpool, you won't be able to do it the way you do with RAID-level expansion (see the example at the end of this post).
    So the downside is that you need to predict your growth with some confidence, or you will need to overprovision your pool. Growing it is possible but not recommended.
    So once created, the zpool is pretty much static.

    This reminds me that I need to find a provider that offers ZFS and integration with borg for my offsite backups.

    I will possibly open a new thread for that, but if anyone knows a good one, let me know.
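
    For context, a raidz2 pool is created in one shot with all of its member disks, and that vdev's width is then fixed (a sketch; the pool and device names are examples):

        zpool create tank raidz2 sdb sdc sdd sde sdf sdg
        zpool status tank    # one raidz2 vdev, six disks wide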

  • mmoris said: The only limitation, assuming you are not using mirroring but raidz2/3, is that if you need to grow your zpool, you won't be able to do it the way you do with RAID-level expansion.

    You can add more raidz vdevs to your pool, although that obviously has more capital expense for growth since you'd have to buy more drives. You can also add these via iSCSI targets in another chassis/expander if you fill your existing one.
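
    Roughly like this (pool and device names are examples):

        # grow the pool by adding a second raidz2 vdev
        zpool add tank raidz2 sdh sdi sdj sdk sdl sdm
        zpool status tank    # now shows two raidz2 vdevs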

  • Exactly, and beyond that the process takes ages...
    So I wouldn't suggest it...

  • @mmoris said:
    My experience has been great; it's much more stable than btrfs, but the learning curve can be steep.
    The only limitation, assuming you are not using mirroring but raidz2/3, is that if you need to grow your zpool, you won't be able to do it the way you do with RAID-level expansion.
    So the downside is that you need to predict your growth with some confidence, or you will need to overprovision your pool. Growing it is possible but not recommended.
    So once created, the zpool is pretty much static.

    This reminds me that I need to find a provider that offers ZFS and integration with borg for my offsite backups.

    I will possibly open a new thread for that, but if anyone knows a good one, let me know.

    I was using two mirrored vdevs, the ZFS equivalent of RAID 1.
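
    For anyone curious, that layout is roughly the following, and growing it later is just another mirror vdev (device names are examples):

        zpool create tank mirror sda sdb mirror sdc sdd
        zpool add tank mirror sde sdf    # grow the pool later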

  • Then you won't have any issues increasing your pool, but your storage efficiency will be lower: mirrors only give you 50% of the raw capacity, less than a reasonably wide raidz2/3 would.

  • @mmoris said:
    Then you won't have any issues increasing your pool, but your storage efficiency will be lower: mirrors only give you 50% of the raw capacity, less than a reasonably wide raidz2/3 would.

    Yeah, I'm definitely still new to ZFS, so I took the easiest path I saw to get my feet wet. I'm sure there are plenty of optimizations to be done, and I'm not sure yet that I'm sticking with ZFS (but, like I said above, I do recognize the benefits, so it's a possibility).

  • freerangecloud (Member, Patron Provider)

    I've used ZFS for years on my home NAS. Started out on FreeBSD and last year switched to ZFS on Linux. So far I've been thrilled: it's easy to export/import pools between different machines, and I've had a couple of drives fail, but replacing them was pretty straightforward (roughly the steps at the end of this post).

    All that said, I've never used ZFS on Proxmox; I've always separated my storage and my hypervisors.
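
    The replacement procedure I mean is roughly this (pool and device names are examples; stable /dev/disk/by-id names are a better idea in practice):

        zpool status tank              # identify the failed disk
        zpool replace tank sdd sdf     # resilver onto the new drive
        zpool status tank              # watch the resilver progress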

  • @freerangecloud said:
    I've used ZFS for years on my home NAS. Started out on FreeBSD and last year switched to ZFS on Linux. So far I've been thrilled: it's easy to export/import pools between different machines, and I've had a couple of drives fail, but replacing them was pretty straightforward.

    All that said, I've never used ZFS on Proxmox; I've always separated my storage and my hypervisors.

    I was able to get it working pretty well after I posted this, using a KVM over IP.

    Not too bad, think I'll stick with ZFS.

  • Been using it since 2008. Initially on Solaris, and on Linux since it became available as ZoL.

    All my Linux machines run now, as a matter of policy, on ZFS (including root/boot fs).

    My one and only hiccup happened in 2008 :-) and one of the original ZFS developers was able to help me via telnet, by doing some magic w/ zdb (from what I could gather he rolled back the FS by one transaction). Saved my a$$ - had all my family pics on it and no backup (I now have 4 backups, one of which off-site).

    zfs send/receive is magic (sketch in the P.S. below). Do not even think of using less than raidz2 (raidz3 is better).

    I have no doubt that zfs will eventually become the default Linux FS - IMO it is the most advanced FS available today.

    HTH.
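
    P.S. To illustrate the send/receive magic, a minimal sketch (dataset, snapshot and host names are made up; the receiving pool must already exist):

        zfs snapshot tank/data@2018-02-01
        zfs send tank/data@2018-02-01 | ssh backuphost zfs receive -u backup/data

        # later runs only ship the delta between snapshots
        zfs snapshot tank/data@2018-02-08
        zfs send -i tank/data@2018-02-01 tank/data@2018-02-08 | ssh backuphost zfs receive -u backup/data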

    Thanked by: vimalware, mmoris
  • Played a bit with it, of course, but never really used it. Multiple reasons. For one, I don't like fat multilayer stuff doing everything and the kitchen sink. And why should I anyway? I'm perfectly happy with ufs2 and geom.

    But then I'm just me. For seriously large operations zfs might be a way to go.

  • @bsdguy said:
    for seriously large operations zfs might be a way to go.

    small ones too: search my posts here, you will see I use it on a 256MB VM w/7GiB disk. Reason: extremely easy remote filesystem duplication (online too) via zfs send/receive.

    Thanked by 1lazyt
  • bsdguy (Member)
    edited February 2018

    @lwt said:

    @bsdguy said:
    for seriously large operations zfs might be a way to go.

    small ones too: search my posts here, you will see I use it on a 256MB VM w/7GiB disk. Reason: extremely easy remote filesystem duplication (online too) via zfs send/receive.

    You see, I know that game. systemd also has quite a few heroic success stories to tell, but that's not the full picture.

    I come from a different corner, namely the reliability/safety/security camp, and reading what you mention as an advantage I see flashing red lights all over the place. And btw it's not like that couldn't be done with other approaches too, albeit less "cool and extremely easily".

    zfs goes against what I've learned (and found to be true) as sound engineering paradigms, quite a few of which match well-known unix credos like "do one specific thing and do it well; for more complex tasks just have reliable tools cooperate".

    But hey, by all means go on, that's just my opinion and it's why we have forums to discuss diverse views. I'm not saying "zfs is shit!"; I'm saying "I don't trust it and I have reasons for that and I don't need it".

  • Running FreeBSD 11 with root-on-zfs, never skips a beat. Any specific questions?

  • @bsdguy said:
    ... I see flashing red lights all over the place
    ... zfs goes against what I've learned (and found to be true) as sound engineering paradigms

    OK you lost me - care to explain? What is wrong with zfs? I think it delivers quite well on its stated goals, which are not the same as those of ext4/ufs/etc. And I have not yet had any data loss on it - in 10 years - which I sadly cannot say about other file systems or RAID arrays.

    It may be luck, I agree, but I have never read a bad thing about zfs reliability.

  • msg7086 (Member)
    edited February 2018

    Mixing ZFS with Proxmox is too much of a risk IMO. If you want to play with ZFS, you're better off having a dedicated SAN running ZFS underneath.

    Even if you do use ZFS on the Proxmox server, make sure you don't put system partitions on it -- so the system can still boot and you can deal with any potential problems afterwards. I'd rather put system partitions on an mdadm soft RAID 1 than on a ZFS volume (rough sketch at the end of this post).

    If you want to run ZoL, do it on a regular distribution (and its kernel), not on the Proxmox kernel.
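
    A minimal sketch of that split, assuming two disks with a small partition each for the OS and the rest for data (device and pool names are examples):

        # system partitions on mdadm soft RAID 1
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
        mkfs.ext4 /dev/md0                        # root filesystem lives here

        # VM/data storage on a separate ZFS pool
        zpool create vmdata mirror /dev/sda3 /dev/sdb3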

  • @lwt said:

    @bsdguy said:
    ... I see flashing red lights all over the place
    ... zfs goes against what I've learned (and found to be true) as sound engineering paradigms

    OK you lost me - care to explain? What is wrong with zfs? I think it delivers quite well on its stated goals, which are not the same as those of ext4/ufs/etc. And I have not yet had any data loss on it - in 10 years - which I sadly cannot say about other file systems or RAID arrays.

    It may be luck, I agree, but I have never read a bad thing about zfs reliability.

    This is potentially a really deep rabbit hole, and not only in the tech realm. Let us start with some simple questions:

    You say you've never lost data with zfs. On what is that statement based? A better and more realistic way to state what you mean (or at least should mean) is "I never noticed a data loss in 10 years". I think you see where I'm going with this; moreover, I've heard that statement many times and for many systems.

    Why is there a problem in the first place? Why do we have imperfect storage devices and media? Answer: economy. It's not that we couldn't do better, it's rather that we "can't" (with a question mark, btw) do better within our society and economic system on a certain budget - and a low one at that. That problem, however, cannot possibly be sorted out or compensated for by technology. Yet that's what we try to do with zfs and other approaches, which are necessarily bound to never even get close to 100%.

    History and context. zfs, like every technology, was conceived and designed under a set of assumptions. One important assumption was "large data" (which probably made sense for Sun), another was full control of the hardware. Hence it was questionable (to avoid saying idiotic) to simply transplant that technology into a segment where far less than 1% is about large data ("large" as in "many exabytes") and where there not only is virtually no control of the hardware (for the zfs guys) but where, in fact, hardware is a brutal, cut-corners-everywhere commodity business.

    zfs is very memory-hungry and does a lot in memory - which, however, often has worse reliability than hard disks and is nowhere near highly reliable anyway (keep in mind that ECC is pretty much always single-error correction, with "error" typically meaning a single bit).

    A related problem is that of locality and responsibility. What we really want is a disk system that offers a simple interface and just works, and works reliably. Why? For diverse reasons, one of them being that we want a high payload vs admin ratio: we want as few resources (cpu cycles, mem, ...) as possible spent on housekeeping and as many as possible spent on user jobs. That was, in fact, one of the main reasons for raid hw controllers: a high payload vs admin ratio and locality (have some specialized device deal with specific (e.g. disk) details and offer a simple, resource-cheap interface to the main system).

    Looking at that, one finds another questionable view, namely something boiling down to "today's cpus can do checksumming and other (disk related) things much faster than hardware raid controllers anyway". While that's true, it quickly turns out to be questionable when properly framed: sure, a 20 core xeon is much, much faster than, say, an arm based raid processor, but a) that doesn't mean that the arm raid processor is too slow (after all, its job is quite limited), and b) more importantly, it's not eating away user cycles like the xeon! Moreover, it's bullshit anyway because most relevant algorithms run much faster on a lowly but specialized raid processor than in software on a xeon.

    Safety, security. Here we are at another assumption issue. Sun could and did have very experienced, high-level developers. Most foss developers, however, are mediocre; a few superstars don't change that fact. Moreover, we fucking know - and from plenty of pain - that complexity and size are the natural enemies of quality. In case you still have doubts, just have a look at openssl (or linux or windows or ...). So the unix fathers did have concrete and heavyweight pragmatic reasons for the "do 1 thing and do it well" credo.

    Religion and zealotry. Let's be honest: maybe not you, but the vast majority of people are herd animals, always following what's cool or fashionable. That is the single most important reason for everyone and his dog preaching how great zfs is. Looking closer one notes, to name but one example, that the maximum sensible net usage of storage devices with zfs is about 75%. Pardon me, you want to tell me to follow a religion that generously throws away the one holy resource it's all about? Thanks, no.

  • @bsdguy said:

    Religion and zealotry. Let's be honest: maybe not you, but the vast majority of people are herd animals, always following what's cool or fashionable. That is the single most important reason for everyone and his dog preaching how great zfs is. Looking closer one notes, to name but one example, that the maximum sensible net usage of storage devices with zfs is about 75%. Pardon me, you want to tell me to follow a religion that generously throws away the one holy resource it's all about? Thanks, no.

    So what do you suggest instead? Everything involves a compromise, and I'm currently not aware of any other filesystem that is as stable and offers all the features that ZFS does.

  • omelas (Member)
    edited February 2018

    @mmoris said:

    @bsdguy said:
    (TL;DR: skipped)

    So what do you suggest instead? Everything involves a compromise, and I'm currently not aware of any other filesystem that is as stable and offers all the features that ZFS does.

    I think he meant you should NOT be looking for such features in a filesystem, but should have separate programs for each feature you want.

  • omelas (Member)
    edited February 2018

    @bsdguy said:

    Looking at that, one finds another questionable view, namely something boiling down to "today's cpus can do checksumming and other (disk related) things much faster than hardware raid controllers anyway". While that's true, it quickly turns out to be questionable when properly framed: sure, a 20 core xeon is much, much faster than, say, an arm based raid processor, but a) that doesn't mean that the arm raid processor is too slow (after all, its job is quite limited), and b) more importantly, it's not eating away user cycles like the xeon! Moreover, it's bullshit anyway because most relevant algorithms run much faster on a lowly but specialized raid processor than in software on a xeon.

    HW raid doesn't make sense unless you have a storage server that needs to fit 8+ hdds (then you need a card anyway because you run out of ports). I already have enough ports for my hdds and the cpu will process raid fast enough, so why would I buy an expensive HW raid card? It's like having a dedicated sound card on your desktop because you're not fond of the cpu cycles used for software processing of your motherboard's sound ports. (Some audiophiles do buy sound cards, but that's another story.)

  • @omelas said:
    It's like having a dedicated sound card on your desktop because you're not fond of the cpu cycles used for software processing of your motherboard's sound ports. (Some audiophiles do buy sound cards, but that's another story.)

    Bad analogy. Those on-board Realtek sound chips on today's consumer mobos are total crap...

  • @Jarry said:

    @omelas said:
    It's like having a dedicated sound card on your desktop because you're not fond of the cpu cycles used for software processing of your motherboard's sound ports. (Some audiophiles do buy sound cards, but that's another story.)

    Bad analogy. Those on-board Realtek sound chips on today's consumer mobos are total crap...

    Yes, but cpu usage isn't one of their problems.

  • @mmoris said:

    @bsdguy said:

    So what do you suggest instead? Everything involves a compromise, and I'm currently not aware of any other filesystem that is as stable and offers all the features that ZFS does.

    I suggest carefully and knowledgeably choosing the right tool for each scenario.

    Again, why do you say "stable"? Based on what? Because you never noticed having corrupted data or data loss? Because many say zfs is super stable, reliable and whatnot? From what I see, zfs goes against pretty much every piece of holy engineering wisdom, and we fucking know (as in "scientifically, incl. empirically") that adding complexity is about the most reliable way to DEcrease reliability (which is almost certainly what you mean when talking about "stable").

    @omelas said:
    HW raid doesn't make sense unless you have a storage server that needs to fit 8+ hdds (then you need a card anyway because you run out of ports). I already have enough ports for my hdds and the cpu will process raid fast enough, so why would I buy an expensive HW raid card? It's like having a dedicated sound card on your desktop because you're not fond of the cpu cycles used for software processing of your motherboard's sound ports. (Some audiophiles do buy sound cards, but that's another story.)

    Well, for a start disk transfers are massively more intense than audio stuff in pretty much any respect. But I get your point.

    I'm not saying that everyone should buy a raid card, but keep in mind that the typical zfs use case is about (more or less) massive, multi-disk scenarios. Now add to that a non-trivial network and, e.g., a web server scenario, and it starts to make a lot of sense to "outsource" much of the disk stuff to a dedicated unit (like a raid card with its own specialized processor). Btw the same goes for network stuff, much of which is taken care of (via offloading) by a smarter network card (again with its own specialized processor).

    Now look at zfs: it adds a shitload of work to the main processor. For your home server or a small company's setup that might be quite OK, but in a data center it's often nonsensical.

    And btw, raid cards are the predecessors of zfs in a way. They, too, have been praised - and for decades - as being "the right way", the way to have "reliable data storage", "data storage made easy as pie". And now? Have raid cards that "kept us safe" for decades suddenly turned into cancerous beasts? I don't think so.

    Moreover, an important aspect of data storage I mentioned above just happens to bite us again: the fact that raid cards are so expensive is due to social and economic reasons. Technically, a raid card (I'm talking about a professional 6x sas/sata card here!) is a few chips, namely (typically) a specialized (xor, galois) arm core, typically 1 or 2 (cheap) port driver chips, and some memory, all together costing less than 50$, plus a bunch of (ridiculously) cheap electronics plus firmware, and that's it. Yet you pay 500$ for it.

    We could have cheap raid boards in 3 months if our system and society wanted that. We would not even need ASICs; nxp, to name but one example, makes arm cores with interesting hardware support for quite a bit of the needed functionality, plus they have plenty of SerDes, and I'm sure that some of them have all the sas/sata stuff in hw, too.

  • @bsdguy said:

    Well, for a start disk transfers are massively more intense than audio stuff in pretty much any respect. But I get your point.

    I'm not saying that everyone should buy a raid card, but keep in mind that the typical zfs use case is about (more or less) massive, multi-disk scenarios. Now add to that a non-trivial network and, e.g., a web server scenario, and it starts to make a lot of sense to "outsource" much of the disk stuff to a dedicated unit (like a raid card with its own specialized processor). Btw the same goes for network stuff, much of which is taken care of (via offloading) by a smarter network card (again with its own specialized processor).

    Or a dedicated storage server; then storage becomes its whole job.

    And btw, raid cards are the predecessors of zfs in a way. They, too, have been praised - and for decades - as being "the right way", the way to have "reliable data storage", "data storage made easy as pie". And now? Have raid cards that "kept us safe" for decades suddenly turned into cancerous beasts? I don't think so.

    I agree on that part.

    Moreover, an important aspect of data storage I mentioned above just happens to bite us again: the fact that raid cards are so expensive is due to social and economic reasons. Technically, a raid card (I'm talking about a professional 6x sas/sata card here!) is a few chips, namely (typically) a specialized (xor, galois) arm core, typically 1 or 2 (cheap) port driver chips, and some memory, all together costing less than 50$, plus a bunch of (ridiculously) cheap electronics plus firmware, and that's it. Yet you pay 500$ for it.

    We could have cheap raid boards in 3 months if our system and society wanted that. We would not even need ASICs; nxp, to name but one example, makes arm cores with interesting hardware support for quite a bit of the needed functionality, plus they have plenty of SerDes, and I'm sure that some of them have all the sas/sata stuff in hw, too.

    You can't expect people to make a thing for free, except when it has turned into a religion that people will work on. And I doubt the unix philosophy has enough of a congregation following it.

  • mmoris (Member)
    edited February 2018

    @bsdguy said:

    @mmoris said:

    @bsdguy said:

    So what do you suggest instead? Everything involves a compromise, and I'm currently not aware of any other filesystem that is as stable and offers all the features that ZFS does.

    I suggest carefully and knowledgeably choosing the right tool for each scenario.

    Again, why do you say "stable"? Based on what? Because you never noticed having corrupted data or data loss? Because many say zfs is super stable, reliable and whatnot? From what I see, zfs goes against pretty much every piece of holy engineering wisdom, and we fucking know (as in "scientifically, incl. empirically") that adding complexity is about the most reliable way to DEcrease reliability (which is almost certainly what you mean when talking about "stable").

    With all due respect, your arguments are not real arguments. There's no substance in anything you've mentioned so far; it's all very superficial and vague.
    So allow me to add some substance to the discussion.

    ZFS is a great filesystem. It's reliable, performant, featureful, and very well documented. Any other filesystem has only a subset of the ZFS feature set, and many fail on all the other counts. And yes, I know this based on my extensive usage: I've been hammering it over long periods and I never experienced any sort of data loss or performance problems. I've yet to encounter any problems with ZFS myself, but I've encountered many serious issues with other filesystems (like btrfs).

    And this isn't just my experience - ZFS is well known for its stability and reliability, there are very few data loss incidents, and there's nothing to dispute on this front.

    It might not be as fast on single disks as other filesystems (e.g. ext4), but there's a substantial performance gain as more disks are added. Overall you're getting resilience and reliability rather than raw speed, though it scales well as you add more disks; exactly what I want for storing my data.

    ZFS is also a FS that works on several operating systems: some years ago I took the disks comprising a ZFS zpool mirror out of my Linux system and slotted them into a FreeBSD NAS. One command to import the pool (zpool import) and it was all going. Later on I added l2arc and zil (cache and log) SSDs to make it faster; each was one command to add and entirely trouble-free (roughly the commands at the end of this post).

    I'm not aware of any other FS with this track record.
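
    For reference, the whole thing amounted to roughly this (the pool name and FreeBSD device names are examples):

        zpool import               # lists the pool found on the moved disks
        zpool import tank          # one command to bring it in on the FreeBSD box
        zpool add tank cache ada3  # l2arc on an SSD
        zpool add tank log ada4    # zil/slog on another SSD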

  • bsdguy (Member)
    edited February 2018

    @mmoris

    Well, my arguments are at least better than stubborn assertions by a fan. It's really funny to watch how you first assert that all my arguments, which go to some depth, somehow are not real arguments and then follow up with a series of subjective fanboy declarations.

    Funnily enough, you also still seem to fail to understand the difference between "I had no data loss" and "I noticed no data loss".

    I have not noticed any data loss with UFS (1 and 2) in 10+ years. Does that mean that UFS is just as good as zfs? Moreover, I have not noticed any data loss on notebooks with ext3 and ext4. Are ext3 and ext4 hence just as reliable and safe as zfs?

    Also, you seem to utterly confuse "ease of use"/comfort with reliability. Besides your totally unproven assertions ("I never lost data"), most of what you say boils down to "zfs offers a lot of comfortable features" (which, btw, I never denied).

    All I see is a pissed off zfs fanboy.

    P.S. My aunt has a spare key under a flower pot in front of the house (I'm serious) and there has never been a burglary. Does that somehow prove that keeping the key under a flower pot is safe and secure?

    Thanked by: bugrakoc