Piotr Has a Lot of Disk
raindog308raindog308 Administrator, Veteran
edited May 2023 in General

@AXYZE has 10x10TB on a Hetzner dedi.

Link to script: https://github.com/terem42/zfs-hetzner-vm

Comments

  • SirFoxySirFoxy Member

    "Piotr Has a Lot of Disk" Piotr Has a Big Disk

  • BilohBucksBilohBucks Member

    @AXYZE It is nice to see your videos on the LET YT channel but... when Zalecane?

  • PagePage Member

    However, the 10*10TB auction server does not have a RAID controller. If you plan to use RAID5 or RAID10, the CPU simply cannot bear the load.

  • amarcamarc Veteran

    Literally makes no sense without a link to that script.. and it's not present either here or in the YouTube description

  • raindog308raindog308 Administrator, Veteran

    @amarc said:
    Literally makes no sense without a link to that script.. and it's not present either here or in the YouTube description

    That's because I'm a dork. I added it.

    https://github.com/terem42/zfs-hetzner-vm

  • edited May 2023

    @Page said:
    However, the 10*10TB auction server does not have a RAID controller. If you plan to use RAID5 or RAID10, the CPU simply cannot bear the load.

    Given how RAID10 (and RAID5, which I used previously) performs on my little home media array, I would assume that the CPU in those machines should be able to keep up with what the spinning disks can handle, certainly for RAID10 (no parity computation needed, just mirroring). Unless the CPU load of ZFS's redundancy/checksum/etc. methods is significantly higher than that of mdraid. If I had a bit more time on my hands I might be tempted to grab one for a month and run some tests.

    With that many drives I'd be more likely to use RAID6 rather than 5, or 5 with at least one hot spare.
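The CPU cost being discussed is mostly parity arithmetic. As a toy illustration (sample data made up, not from the thread), RAID5-style parity is a single XOR pass over the stripes, which is also why RAID10 mirroring needs essentially no arithmetic at all:

```python
# Toy RAID5 parity: one XOR pass computes the parity stripe, and XORing
# the surviving stripes with parity rebuilds any single lost stripe.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data_disks = [b"AAAA", b"BBBB", b"CCCC"]      # three data stripes (made up)
parity = xor_blocks(data_disks)               # the parity stripe

# "Lose" disk 1, then rebuild it from the survivors plus parity.
rebuilt = xor_blocks([data_disks[0], data_disks[2], parity])
assert rebuilt == data_disks[1]
print("rebuilt stripe:", rebuilt)             # → rebuilt stripe: b'BBBB'
```

Real mdraid and ZFS use heavily vectorised versions of this (and Reed-Solomon maths for the second parity of RAID6/raidz2), so the per-byte cost is small next to spinning-disk throughput.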

    Update: a little extra research for software RAID benchmarks finds https://lore.kernel.org/all/[email protected]/T/ talking about RAID6 over 24 SSDs on a CPU which seems a little less capable than the Hetzner ones.

    Thanked by 1Page
  • AXYZEAXYZE Member
    edited May 2023

    @BilohBucks said:
    @AXYZE It is nice to see your videos on the LET YT channel but... when Zalecane?

    Never, I won't go back to reviewing physical products. People feel connected to them and get offended when you make valid critiques, plus PR companies will mass-dislike honest reviews etc. It's just not worth it for me anymore.
    I could write a whole book about the manipulative tactics that PR companies/shops use...

    With LEB it's completely different. I love this community and I hope we can grow to even bigger numbers than Zalecane had.

    Thanked by 2raza19 BilohBucks
  • TimboJonesTimboJones Member

    Is this preferred over something like TrueNAS that'll have more features and GUI for ZFS stuff? I would know to go to TrueNAS forum for help, not sure for roll your own setup.

    Is this fairly safe from lost/corrupt data from unexpected power outages?

    How do you back that up? Or do you?

    (Not a ZFS guy)

  • MeAtExampleDotComMeAtExampleDotCom Member
    edited May 2023

    @TimboJones said:
    Is this preferred over something like TrueNAS that'll have more features and GUI for ZFS stuff? I would know to go to TrueNAS forum for help, not sure for roll your own setup.

    Depends on two things: your confidence level and feature needs.

    For confidence, I recommend having a play first before using it for real data: maybe create a local VM with many virtual drives and tinker with that for a bit, just be aware that performance characteristics will be significantly different in such an arrangement. If you have a mix of SSDs and spinning disks, spread the vdisks over those so you can play with HDD->SSD->RAM caching options to see if they are worth it for your planned workloads. There is a lot of information out there, and sources of help like StackExchange and their ilk if you run into something that isn't clear from that info, though rolling your own might be just a touch less comfortable in that respect.
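One lightweight way to set up such a playground, without even a VM: ZFS pools can be built on plain files, so you can rehearse layouts and failure drills on any box with ZFS installed. A sketch (the paths and the pool name `lab` are made up; pool creation is guarded since it needs ZFS and root):

```shell
# Build a throwaway pool on sparse backing files instead of real disks.
mkdir -p /tmp/zfs-lab
for i in 0 1 2 3; do
    truncate -s 1G "/tmp/zfs-lab/disk$i.img"    # sparse: uses almost no space
done
ls -s /tmp/zfs-lab

# Pool creation needs ZFS and root, so it is guarded; the file setup
# above runs anywhere.
if command -v zpool >/dev/null 2>&1 && [ "$(id -u)" = "0" ]; then
    zpool create lab raidz2 \
        /tmp/zfs-lab/disk0.img /tmp/zfs-lab/disk1.img \
        /tmp/zfs-lab/disk2.img /tmp/zfs-lab/disk3.img
    zpool status lab
else
    echo "zpool unavailable or not root; pool-creation commands shown for reference"
fi
```

When you are done, `zpool destroy lab` and delete the files; nothing touches real disks.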

    For features: if all you are looking for is a storage server then go with TrueNAS for convenience. If you need anything else then look into how well that is supported. A quick look at https://www.truenas.com/compare/ (I've not used TrueNAS in anger, and last played with it years ago) shows a few differences in virtualisation support between the community and paid editions for instance (not listing KVM on the community edition list probably doesn't mean it won't work, just that you are back to rolling your own rather than having a helping hand).

    I might still suggest rolling your own virtual array to have a play, so you better understand what is going on under the hood. That way you will be more confident when something does go wrong and you don't get a quick, good response from the community forums. For that matter, do the same with TrueNAS so you can see how its recovery procedures react to things like a dead or corrupt drive.

    Is this fairly safe from lost/corrupt data from unexpected power outages?

    Just as much so as TrueNAS I expect, unless you pay for the features like clustering for HA and have the extra hardware to support that.

    You can use a VM to see how that behaves in some circumstances too: drop a disk from the VM and see what happens and what recovery procedures look like once you add a vdisk to replace it.
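That drop-a-disk drill looks roughly like the following against a hypothetical file-backed practice pool named `lab` (guarded so it does nothing on a machine without ZFS or that pool):

```shell
# Simulate a failed drive on a practice pool, then replace it and let
# ZFS resilver onto the new vdev. Pool/paths are hypothetical examples.
if command -v zpool >/dev/null 2>&1 && zpool list lab >/dev/null 2>&1; then
    zpool offline lab /tmp/zfs-lab/disk1.img      # "pull" a drive
    zpool status lab                              # pool now reports DEGRADED
    truncate -s 1G /tmp/zfs-lab/disk4.img         # a fresh "replacement disk"
    zpool replace lab /tmp/zfs-lab/disk1.img /tmp/zfs-lab/disk4.img
    zpool status lab                              # resilver starts
    status="drill complete"
else
    status="skipped: no zpool or no practice pool"
fi
echo "$status"
```

Watching `zpool status` through the degrade/replace/resilver cycle is most of what you would do in a real incident.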

    How do you back that up? Or do you?

    You most certainly do. ZFS has features to minimise the danger of some issues but not all and certainly the likes of “home or data-centre in flames”!

    For how: ZFS has features specifically to help with backups: snapshots, and the ability to do an incremental send/receive of filesystems to a pool hosted elsewhere. Or you can use any other backup method that you currently employ, like home-grown rsync scripts, rclone, borg, …
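The snapshot-plus-incremental-send workflow looks roughly like this (the dataset `tank/data`, the host `backuphost`, and the pool `backuppool` are all hypothetical placeholders; guarded so it is harmless where ZFS or the dataset is absent):

```shell
# Snapshot, one full send, then incremental sends of only changed blocks.
if command -v zfs >/dev/null 2>&1 && zfs list tank/data >/dev/null 2>&1; then
    zfs snapshot tank/data@base
    # First send ships the whole dataset to the remote pool.
    zfs send tank/data@base | ssh backuphost zfs receive backuppool/data
    # Later: ship only the delta since @base.
    zfs snapshot tank/data@day1
    zfs send -i tank/data@base tank/data@day1 | \
        ssh backuphost zfs receive backuppool/data
    status="incremental backup sent"
else
    status="skipped: no zfs or no tank/data dataset"
fi
echo "$status"
```

Because the delta is computed from the snapshots rather than by scanning files, incremental runs stay fast even on huge datasets.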

    (Not a ZFS guy)

    Same, but I keep considering it so have done a bit of reading around the matter and have had a play with it in virtual environments.

    Thanked by 1TimboJones
  • ChievoChievo Member

    piotr has a massive big extra long DICKS

  • defaultdefault Veteran

    @Chievo said:
    piotr has a massive big extra long DICKS

    True. And he has 10 of them, to satisfy all possible requirements.

    Thanked by 2bruh21 Chievo
  • TimboJonesTimboJones Member
    edited May 2023

    @MeAtExampleDotCom said:
    For features: if all you are looking for is a storage server then go with TrueNAS for convenience. If you need anything else then look into how well that is supported. A quick look at https://www.truenas.com/compare/ (I've not used TrueNAS in anger, and last played with it years ago) shows a few differences in virtualisation support between the community and paid editions for instance (not listing KVM on the community edition list probably doesn't mean it won't work, just that you are back to rolling your own rather than having a helping hand).

    SCALE, which I use, has built in KVM listed on that page.

  • raza19raza19 Veteran

    I've been learning the pros and cons of different file systems via ChatGPT for the last few weeks, and I was wondering why ZFS is not the default (or at least an optional) file system for most VMs offered on LET. The thing that intrigues me is the deduplication feature. Please, anyone, enlighten me: why is ext4, and not btrfs or ZFS, the file system of choice for most VMs?

  • @raza19 said:
    I've been learning the pros and cons of different file systems via ChatGPT for the last few weeks, and I was wondering why ZFS is not the default (or at least an optional) file system for most VMs offered on LET. The thing that intrigues me is the deduplication feature. Please, anyone, enlighten me: why is ext4, and not btrfs or ZFS, the file system of choice for most VMs?

    Google is fucked if ChatGPT doesn't explain it for you and you skip over Google and ask LET forum instead.

  • AstroAstro Member

    Tried to follow the exact steps but it keeps failing on

    checking whether inode_owner_or_capable() takes user_ns... configure: error:
    *** None of the expected "capability" interfaces were detected.
    *** This may be because your kernel version is newer than what is
    *** supported, or you are using a patched custom kernel with
    *** incompatible modifications.
    ***
    *** ZFS Version: zfs-2.1.11-1
    *** Compatible Kernels: 3.10 - 6.2

    Install failed, please fix manually!
    bash: line 499: zfs: command not found

  • @Astro said:
    Tried to follow the exact steps but it keeps failing on

    checking whether inode_owner_or_capable() takes user_ns... configure: error:
    *** None of the expected "capability" interfaces were detected.
    *** This may be because your kernel version is newer than what is
    *** supported, or you are using a patched custom kernel with
    *** incompatible modifications.
    ***
    *** ZFS Version: zfs-2.1.11-1
    *** Compatible Kernels: 3.10 - 6.2

    Install failed, please fix manually!
    bash: line 499: zfs: command not found

    apt get zfs? Hehe
