
ZFS! Comment if you have tried?

darknessends Member
edited May 2013 in General

Hi,

After getting into an extreme mess with our RAID arrays, I learned about ZFS.
I can see people using it for virtualisation.

Have you tried or used it? Maybe you are even running production on it.
Tell me about it. I would love to know if you have any experience with it.

Comments

  • zaxattk Member

    Must you create a new thread for everything...?

    Also I personally have not heard of it.

  • Shados Member

    I've been using it on my desktop for a while; I have four 1 TB HDDs in a stripe-of-mirrors configuration with a fast SSD acting as L2ARC and ZIL, and it works pretty awesomely.

    It is very different from the normal approach of layering a filesystem on top of a logical volume management/RAID system, but it can do some pretty awesome stuff as a result. One very neat side effect is that even if you're just running a few disks in a stripe (i.e. no built-in redundancy), you can still set copies=2 (or 3, or 4, or whatever you like) for critical sections of your filesystem, and it will spread those copies across the physical disks appropriately, letting you recover at least that information in the event of a drive failure.
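    For anyone curious, the copies property is a one-line setting per dataset. A minimal sketch, assuming an existing pool named tank (requires root and ZFS installed; the dataset name is just an example):

    ```shell
    # Create a dataset for critical data and keep two copies of every block:
    zfs create tank/important
    zfs set copies=2 tank/important

    # Verify the setting:
    zfs get copies tank/important
    ```

    Note that copies only applies to data written after the property is set; existing blocks are not duplicated retroactively.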

  • It's great. It just doesn't work well on Linux yet (not as well supported). It's a Solaris file system.

  • @Shados, how is the performance and all?

  • Shados Member

    @concerto49, I'm using it under Linux with the zfsonlinux out-of-kernel module project. It's definitely not production-ready or anything, but I've never had any data loss or corruption throughout the past two-three years, and at this point I'd say it's definitely good enough for hobby use/testing.

    @darknessends, not bad; I generally get 250-350 MB/s reads when doing a scrub (for those not familiar, a scrub essentially reads, checksums, and repairs the entire filesystem from the top down, like fsck on crack). Keep in mind the zfsonlinux port has known performance problems vs. the original Solaris version and the FreeBSD port.
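    A scrub is kicked off manually (or from cron) and runs in the background. A minimal sketch, assuming a pool named tank and root access:

    ```shell
    # Start a scrub of the whole pool; returns immediately:
    zpool scrub tank

    # Check progress, throughput, and any repaired or unrecoverable errors:
    zpool status tank
    ```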

  • @Shados said: @concerto49, I'm using it under Linux with the zfsonlinux out-of-kernel module project. It's definitely not production-ready or anything, but I've never had any data loss or corruption throughout the past two-three years, and at this point I'd say it's definitely good enough for hobby use/testing.

    Yes, I know, but he's hinting at production use, so just letting the OP know. I've used it too. It's cool. It doesn't require RAID cards :p

  • raindog308 Administrator, Veteran

    ZFS is supported on FreeBSD and OpenNAS

  • I use it with FreeBSD for my home array. Has been working fine for 2 years or so, I like it.

  • AnthonySmith Member, Patron Provider

    @zaxattk said: Must you create a new thread for everything...?

    This...

    Summer is coming, he needs to be ready.

  • TheLinuxBug Member
    edited May 2013

    Dude, how many threads on the same subject do you need? Please stop the madness!

  • Crab Member

    Go with FreeBSD or Solaris to get proper support. The FUSE implementation on Linux is mediocre at best.

  • Shados Member

    @Crab said: FUSE implementation on Linux is mediocre at best.

    Which is why you use the kernel module implementation instead. It's not as good as on Solaris or FreeBSD, but it is pretty good and it's getting better fast.

  • marrco Member

    @raindog308 also FreeNAS and NAS4Free

    Using and loving it on BSD; never tried it on Linux.

  • imperio Member

    You may try Nexenta if you are interested in ZFS.

    http://www.nexenta.com/corp

  • We're using zfsonlinux on our backup servers -- compression and snapshots are wonderful :) Haven't had any real issues at all; performance at times can be so-so, but then again, the machines we're using for it aren't that fantastic either.
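    Both features are per-dataset properties. A minimal sketch, assuming a hypothetical dataset tank/backups on a ZFS-enabled system (lz4 needs a reasonably recent build; older ones can use gzip instead):

    ```shell
    # Enable lz4 compression (only affects data written afterwards):
    zfs set compression=lz4 tank/backups

    # Take a cheap point-in-time snapshot, then list all snapshots:
    zfs snapshot tank/backups@2013-05-01
    zfs list -t snapshot
    ```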

  • luma Member

    I use NexentaStor (Solaris with a frontend) with ZFS and it's great.

    Just have to remember that ZFS is heavy on the CPU, as it is all in software, and it LOVES cache, so throw lots of RAM at it and/or give it an SSD or three for ZIL and L2ARC.

    I use a Xeon E3-1220 with 32 GB of RAM and 8x 2 TB WD Reds. The entire setup was cheap to build, yet it's good and fast (for what I do with it) and only sucks up a tiny bit of power (seriously, this thing sips power).
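    Attaching SSDs for ZIL and L2ARC is done with zpool add. A minimal sketch, with an assumed pool name tank and illustrative device paths:

    ```shell
    # Mirror the ZIL (SLOG) across two SSDs -- on older ZFS versions,
    # losing an unmirrored log device could cost you the whole pool:
    zpool add tank log mirror /dev/sdb /dev/sdc

    # L2ARC needs no redundancy; a dead cache device just drops the cache:
    zpool add tank cache /dev/sdd
    ```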

  • craigb Member

    @darknessends said: Have you tried or used it? Maybe you are even running production on it.

    Tell me about it. I would love to know if you have any experience with it.

    I've used ZFS on FreeBSD and Linux (native). The Linux version is now stable. Give ZFS plenty of RAM and use the ZFS performance tuning tools to figure out whether you would benefit from an SSD for caching (ZIL and L2ARC).

    TL;DR: Great filesystem for a summer host. Proceed to selecting your #lowenddedi
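    On zfsonlinux, the main thing to check before buying an SSD is the ARC hit rate. A rough sketch of where to look (paths and helper scripts as shipped with zfsonlinux; names may vary by version):

    ```shell
    # Raw ARC counters are exposed under /proc:
    grep -E '^(hits|misses)' /proc/spl/kstat/zfs/arcstats

    # The arc_summary.py script bundled with zfsonlinux gives a friendlier
    # view; a consistently low hit rate suggests more RAM or an L2ARC SSD:
    arc_summary.py
    ```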

  • dano Member

    I have used ZFS (FreeNAS) in production for a couple of years or so, and have become quite a fan. I use it on a machine with 16 GB of memory, dual L5420s, and 4x 1 TB SATA disks, booting from USB flash with no ZIL cache (yet). I have it configured in RAIDZ2, so I only have about 1.8 TB available, but I have the reliability I need and OK IO. I would say it's best to add a ZIL cache if you're planning on writing lots of data, as RAIDZ2 can be a bit slower than I expected. I do use this machine for the backend storage of Xen virtual machines via NFS, and also have many clients connected to NFS for file serving, and have not seen any ill effects. Memory used hovers around 12 GB, as ZFS does like to use memory. I am currently not doing any remote ZFS replication, but when I was, I think it took a hair more memory too.

    Overall, I am happy with the above machine; it's been reliable through power losses (whereas I have had RAID-attached disks die and had to rebuild the array). If I were to do it all over again, I would go with SSD ZIL cache disks and, of course, whatever is the largest affordable SATA drive available.
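    A setup like the one above can be sketched in a few commands. Pool and dataset names here are made up for illustration, and in practice you would use stable /dev/disk/by-id paths rather than sdX names:

    ```shell
    # Create a 4-disk RAIDZ2 pool (any two disks can fail without data loss):
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # Create a dataset for VM images and export it over NFS straight from ZFS:
    zfs create tank/vmstore
    zfs set sharenfs=on tank/vmstore
    ```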
