50-100TB video storage cluster

Comments

  • AXYZE Member

    @luckypenguin said:

    @AXYZE said: QEMU (with virtio to enable more than 4 disks) + Proxmox VNC installation got me "almost working" - it's stuck at 99% installation progress on "make bootable drive". htop shows there is still around 8% CPU usage by qemu, RAM usage is going up and down, but it's still stuck at 99%.

    I saw online that it's a bug with the floppy drive, so I'll try Q35 QEMU now, maybe that will fix it.

    Why?
    Ask them to connect IPMI to your server, you will have it for 3 hours, and you can mount
    real ISOs and have real VNC directly from UEFI and down. Why complicate it with QEMU?

    Is IPMI free on Hetzner?
    If so I'll try this method.

    Sorry, I've mainly used VPSes in recent years, so I completely forgot about IPMI :) Yeah, it should make this very simple.

  • Free for 3 hours, but you can request it several times a day if needed. You can request it via Robot.
    The credentials will come after a few minutes, they're fast.
    Make sure your popup blocker is disabled for it later.

    Thanked by 1AXYZE
  • Try Dropbox Advanced (unlimited space) and mount the storage on your VPS using rclone with a local cache.
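
    A minimal sketch of what that mount might look like (the remote name "dropbox", mount point and cache sizes are assumptions, not tested against Dropbox Advanced):

    # create the remote first with `rclone config` (e.g. a remote named "dropbox")
    mkdir -p /mnt/videos
    rclone mount dropbox: /mnt/videos \
        --vfs-cache-mode full \
        --vfs-cache-max-size 200G \
        --vfs-cache-max-age 24h \
        --daemon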

    Thanked by 1AXYZE
  • 0xbkt Member

    Can't your client afford to use B2 for storage?

    Thanked by 1AXYZE
  • m4nu Member, Patron Provider
    edited July 2022

    Ceph is pretty complex and needs a 10G network between nodes. 100 TB can easily fit on a single server. You just need to plan the bandwidth you want. Basically, HDDs keep getting larger, but bandwidth stays almost the same, so you need to balance space and bandwidth (= number of drives and RAID level).
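
    A rough back-of-the-envelope example (assuming ~200 MB/s sequential per HDD; real numbers vary):

    10 x 10 TB in RAID-Z2 -> ~80 TB usable, up to ~8 x 200 MB/s ≈ 1.6 GB/s sequential
     5 x 20 TB in RAID-Z2 -> ~60 TB usable, up to ~3 x 200 MB/s ≈ 0.6 GB/s sequential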

    Thanked by 1AXYZE
  • AXYZE Member

    @0xbkt said:
    Can't your client afford to use B2 for storage?

    They can! They increased the budget to 2k euro/mo (and said "have fun" lol), so we definitely could use reliable providers.

    But... they are tired of Twitch & YouTube - American corporations. They hate that even though they are "big names", the platform still owns everything: they have no rights, the terms are crazy, and the platform takes a huge percentage of their money.

    I gave one streamer the idea of a "European video platform" & he loved my UX/UI design work, asked a couple of YouTube/Twitch friends and they sponsored it :)

    For now this project is at the "fun stage" - I try to do it, gain experience, and they get a platform for their editors and supporters. If that succeeds then they want to hire programmers and make something like a "European YouTube+Twitch with good terms for creators". The platform I'm making right now is not business-critical, so I can mess it up, no problem, it's just for experience and fun :D

    That's why using AWS/Backblaze etc. is out of the question, they want a European provider. Initially they wanted to buy their own servers, but at this stage renting is the way better option.

  • AXYZE Member

    @luckypenguin said:
    Free for 3 hours, but you can ask it several times a day if needed. You can request via robot.
    The credentials will come after a few minutes, they're fast.
    Make sure to have popup blocker disabled for it later.

    Thanks! I've requested it and now I'm waiting for a reply :)

  • AXYZE Member

    @m4nu said:
    Ceph is pretty complex and needs 10G network between nodes. 100 TB can easily fit on a single server. You just need to plan the bandwidth you want. Basically HDDs keep getting larger, but bandwidth is almost the same. So you need to balance space and bandwidth (=number of drives and RAID level)

    Thanks for the information!
    For now I will use ZFS with RAID-Z2; if they like the idea I will explore how to make a very big array on MinIO/Ceph/GlusterFS. The budget is there, but I don't want them to waste money on the simple stuff I'm doing now.

    Any recommendations for European hosts that will provide a 10G/40G network between nodes at a good price? Should I stay with Hetzner (and rent a router + 10G cards) or go with another company?

  • Hxxx Member

    @Clouvider maybe?

  • dfroe Member, Host Rep

    You need to know what you are doing - it's not a copy & paste snippet - but for installing Debian Buster with root on ZFS you can follow this guide:

    https://openzfs.github.io/openzfs-docs/Getting Started/Debian/Debian Buster Root on ZFS.html

    I think the Hetzner rescue system is also Debian-based, so you should be able to add ZFS to the rescue system and install your system from there without QEMU.

    The Ubuntu installer even has/had experimental support for root on ZFS, but I don't have any personal experience with Ubuntu.

    https://ubuntu.com/blog/zfs-focus-on-ubuntu-20-04-lts-whats-new

    However, if you are not yet familiar with ZFS, I would not recommend a root-on-ZFS setup. It works, but it's not that easy and not that well integrated into Linux. Instead, installing a Linux distribution with the usual ISO installer onto the first 100 GB or so of your disks is much easier. Then you can add ZFS within your running OS and build the zpool on partitions covering the remaining space of the disks.
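
    A minimal sketch of adding the ZFS tooling on a Debian-based system (rescue environment or installed OS); this assumes the contrib component is enabled in your apt sources:

    # install ZFS (the kernel module is built via DKMS), then load it
    apt update
    apt install -y linux-headers-$(uname -r) zfs-dkms zfsutils-linux
    modprobe zfs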

    Thanked by 1AXYZE
  • AXYZE Member

    @dfroe said:
    Instead, installing a Linux distribution with the usual ISO installer onto the first 100 GB or so of your disks is much easier. Then you can add ZFS within your running OS and build the zpool on partitions covering the remaining space of the disks.

    I tried to do it via the Proxmox Web GUI yesterday, but the disk with the OS wasn't selectable in the list of disks. There was no "a", only "b", "c", "d", etc.

    So you're saying that instead of making that ZFS setup in the Proxmox Web GUI I should do it via SSH, and I can select all space except 200GB from the first drive? Won't that cut 200GB from every drive? I've never heard that partitions can be different sizes in RAID6/RAID-Z2...

  • AXYZE Member

    I've tried to install Proxmox via QEMU once again with different configs.
    I can't get past 99%.

    Now I'll try to install it via IPMI as suggested in this thread :)

  • dfroe Member, Host Rep

    I have never used Proxmox or any web GUI, so I can't say much about that.

    Example partitioning schema:
    /dev/sd?1: 1 GB, mdadm RAID-1, ext4 for boot
    /dev/sd?2: 100 GB, mdadm RAID-6, LVM, for root partition and other OS-related stuff.
    /dev/sd?3: remaining space, used for ZFS RAIDz2 pool.

    You should be able to install a Linux OS with any usual ISO installer this way.
    The ZFS pool is configured after installation.
    Basically, you install your preferred Linux distribution the same way you would with 100 GB disks, just keeping the remaining space for the ZFS partition.
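
    As a rough sketch of the ZFS part after installation (the device names /dev/sd[a-j]3 and the pool name are assumptions for a 10-disk box; the md arrays for boot and root are normally created by the installer):

    # build the RAID-Z2 pool on the third partition of every disk
    zpool create -o ashift=12 tank raidz2 /dev/sd[a-j]3
    zfs set compression=lz4 tank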

    Thanked by 1AXYZE
  • AXYZE Member
    edited July 2022

    The Proxmox installer on IPMI throws an error.

    I'll try once again to do it via the rescue system and custom partitioning like @dfroe suggested :)
    If that fails then I'll investigate why Proxmox on IPMI throws an error (they already removed IPMI, even though the 3 hours hadn't passed :/ )

  • AXYZE Member

    So now here's the problem.
    I've set up the RAID-6 partitions - boot, swap and the main one.
    That works exactly as intended.

    But I cannot create ZFS on the unused space.

    I understand "No disks unused", but I want to make a ZFS partition, not convert a whole drive to ZFS.
    Can I do that via the Proxmox Web GUI? Or do I need to use SSH?

  • AXYZE Member

    I've managed to do it!

    1. Proxmox installation via Hetzner 'installimage'.
    2. Create partitions for boot & OS, leave the rest of the drive untouched.
    3. Create raw partitions on the unused space via 'fdisk /dev/sd*': type 'n', 3x enter, 'y' to remove the signature.
    4. Create the ZFS pool:

    zpool create bigstorage1 raidz2 /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5 /dev/sdg5 /dev/sdh5 /dev/sdi5 /dev/sdj5

    5. Done, the pool will be visible in the Proxmox Web GUI :)

    Earlier I had a problem with disk signatures (zpool threw a 'disk is in use' error) even though I had completely removed the partitions, even with 'wipefs --all --force' & 'dd if=/dev/zero of=/dev/sda bs=512 count=1 conv=notrunc'.
    Maybe the Proxmox template on Hetzner is causing this problem with signatures, but the method I figured out above works perfectly!
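
    (A possible alternative to the fdisk signature dance, as a sketch only and not verified on this box: clear the old labels per partition rather than per whole disk.)

    # clear old ZFS labels and filesystem signatures on each target partition
    zpool labelclear -f /dev/sda5
    wipefs --all /dev/sda5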

    Thanked by 1donko
  • 0xbkt Member
    edited July 2022

    @AXYZE said: Create partitions for boot & OS, leave the rest of the drive untouched

    So, are you now running the root filesystem on traditional RAID for redundancy?

    Another quick question: can a server with a non-root ZFS installation automatically recover from a non-redundant root filesystem blowing up if I were to reinstall the OS with a replacement drive? In other words, does ZFS store any vital state information on the root filesystem that can cause a disaster if lost?

    Thanked by 1AXYZE
  • dfroe Member, Host Rep
    edited July 2022

    By the way, is there any particular reason to use a virtualization layer? Do you actually need multiple VMs on that machine?

    Using virtualization will change various parameters. In particular, you have to avoid double caching between the host node and the guest OS, and many operations like snapshots, scrubbing, resilvering etc. will be much less efficient if the ZFS layer only handles a block volume containing the VM's virtual hard disk rather than the real file system with the actual files. Probably most of ZFS's advantages won't really be usable in this setup, and performance might be significantly weaker.
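
    If you do keep VMs on top of ZFS, one commonly suggested mitigation (a sketch; the dataset name is an assumption) is to cache only metadata on the ZFS side and run the VM disk uncached:

    # cache only metadata in ARC for the dataset/zvol backing the VM disks
    zfs set primarycache=metadata tank/vmdata
    # and configure the VM's virtual disk uncached (e.g. QEMU cache=none)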

    My recommendation for this use case would be to install the Linux OS directly on the bare metal by using ISO installer or rescue system.

    P.S.: If zpool create refuses to create a new pool due to old signatures, you can simply --force it to do so.

    Thanked by 1AXYZE
  • AXYZE Member

    @0xbkt said:
    So, are you now running the root filesystem on traditional RAID for redundancy?

    Yes, but ZFS is separate from it and has its own partitions with no Linux RAID, just raidz2 inside.

    I'm benchmarking it, and next I'm going to try installing Debian with ZFS on the full disks, not partial like now.

    @dfroe said:
    By the way, any particular reason why to use a virtualization layer? Do you actually need multiple VMs on that machine?

    ZFS is installed on bare metal and all storage will be on that partition without additional layers.
    Proxmox is there only to test out different stuff quickly and to have a web GUI to monitor all of this without configuring netdata or something else.

    I'll try Debian with full ZFS now; there is a script on GitHub https://github.com/terem42/zfs-hetzner-vm but it has 'mirror' hardcoded. I modified it to 'raidz2' and I'll see if that works out :)

    P.S.: If zpool create refuses to create a new pool due to old signatures, you can simply --force it to do so.

    --force didn't work, same error. :(

  • AXYZE Member
    edited July 2022

    With the script I mentioned in the post above + my modification (changing 'mirror' to 'raidz2' in the code), everything went perfectly!

    Here's obligatory YABS B)

    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2022-06-11                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Fri 29 Jul 2022 08:15:25 PM CEST
    
    Basic System Information:
    ---------------------------------
    Uptime     : 0 days, 0 hours, 3 minutes
    Processor  : Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
    CPU cores  : 12 @ 1779.817 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ✔ Enabled
    RAM        : 125.7 GiB
    Swap       : 2.0 GiB
    Disk       :
    Distro     : Debian GNU/Linux 11 (bullseye)
    Kernel     : 5.10.0-16-amd64
    
    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 145.86 MB/s  (36.4k) | 499.55 MB/s   (7.8k)
    Write      | 146.25 MB/s  (36.5k) | 502.18 MB/s   (7.8k)
    Total      | 292.12 MB/s  (73.0k) | 1.00 GB/s    (15.6k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 4.09 GB/s     (7.9k) | 4.22 GB/s     (4.1k)
    Write      | 4.31 GB/s     (8.4k) | 4.50 GB/s     (4.4k)
    Total      | 8.40 GB/s    (16.4k) | 8.73 GB/s     (8.5k)
    
    Geekbench 5 Benchmark Test:
    ---------------------------------
    Test            | Value
                    |
    Single Core     | 1011
    Multi Core      | 6016
    Full Test       | https://browser.geekbench.com/v5/cpu/16323446
    

    ARC cache doing its job!

    100TB HDD + 128GB RAM + nice CPU for 100 euro incl. 23% VAT :) Great value

  • @AXYZE said:
    [quoted benchmark post snipped - see the post above]

    This is better than NVMe? Using HDDs?

    Thanked by 1AXYZE
  • If one or two disks fail in zfs, how do you recover the whole file system?

    Thanked by 1AXYZE
  • AXYZE Member

    @letlover said:

    @AXYZE said: [benchmark post snipped - see above]

    This is better than NVMe? Using HDDs?

    I have 128GB RAM.
    64GB is dedicated to the ARC cache in my config.

    Frequently used / fresh data is stored in the ARC cache automatically.
    That can indeed be faster than NVMe, but if you need to go back to the HDDs it will be slower (500MB/s in my Z2 array).

    The ARC cache makes sure that 64GB of the most important files are already in RAM, and that greatly improves the speed of everything - even if a file is not in the ARC cache it will still be served vastly faster, as the 64GB of most-requested data doesn't clog the HDD throughput!

    It works way better than the Linux page cache from what I see so far; I can push A LOT of data from this server. The 1Gbit network is now the limitation, not HDD speed :P
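
    For reference, capping ARC at 64GB is typically done with the zfs_arc_max module parameter (value in bytes; a sketch, not necessarily the exact path used on this box):

    # /etc/modprobe.d/zfs.conf - cap ARC at 64 GiB
    options zfs zfs_arc_max=68719476736
    # apply at runtime without a reboot:
    echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max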

    Thanked by 1letlover
  • AXYZE Member
    edited July 2022

    Now that I think about it... I'll go back to my earlier ideas.

    RAID-Z2 still gives me problems when an HDD fails and doesn't provide HA. These are 10TB drives, so resilvering will take too much time, and for all that time the data is vulnerable... too many compromises.

    So now, with these two servers, which idea do you guys think is better?
    1. RAID10 on both (still vulnerable to a whole machine failing, power spike etc.)
    2. No RAID at all, master/slave solution (possible one-way ethernet port clog)
    3. No RAID, two masters (rsync from one to the other into a 'backup' folder - only 50% upload usage compared to the above, as the other 50% will be download, so it should be a lot more balanced and unnoticeable; see the sketch below), load balanced between them in the web app/DNS
    4. Erasure coding on both - still waiting for some input from you guys...
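
    For option 3, a minimal sketch of the one-way sync (hosts, paths and the bandwidth cap are placeholders):

    # on server A, push new uploads into server B's backup folder (e.g. hourly via cron)
    rsync -a --partial --bwlimit=50000 /data/videos/ serverB:/data/backup/videos/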

  • @AXYZE said:

    [quoted exchange snipped - see the posts above]

    Amazing. $100 for such kind of performance.

    Thanked by 1AXYZE
  • @AXYZE said:

    [quoted exchange snipped - see the posts above]

    This is actually a very nice setup for database servers. DBs like MySQL servers typically have to put the whole database in RAM, sometimes 1TB-2TB, which is too expensive. This seems to keep the performance, but much cheaper.

    Thanked by 1AXYZE
  • AXYZE Member

    @letlover said:
    Amazing. $100 for such kind of performance.

    100 euro, just because of the 23% VAT. If you live in the US then it's just 80 euro/mo because of 0% tax xD

    But we can deduct the tax as a company, because it's a "running cost", so it will cost us 80 euro/mo.

    Thanked by 1letlover
  • Hxxx Member

    If you want HA you would be looking at other solutions, or a custom one. For instance, you could have some sort of load balancer in front, assuming you have two identical setups: if a file fails to load from server A, retry with server B.
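
    In its simplest form that fallback is just a retry against the second box (a sketch; hostnames are placeholders):

    # try server A first, fall back to server B if the file is missing or A is down
    curl -fsS https://a.example.com/video.mp4 -o video.mp4 \
      || curl -fsS https://b.example.com/video.mp4 -o video.mp4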

    Thanked by 2AXYZE letlover
  • AXYZE Member

    @letlover said:
    This is actually a very nice setup for database servers. DBs like MySQL servers typically have to put the whole database in RAM, sometimes 1TB-2TB, which is too expensive. This seems to keep the performance, but much cheaper.

    Yes, but you can also set up Redis/Memcached in such an instance and tweak it better (cache invalidation etc.).

    But ZFS with ARC cache is useful everywhere; it just works without setup, and that's important. With Redis you need to spend time on it.

    Thanked by 1letlover