Comments
Is IPMI free on Hetzner?
If so I'll try this method.
Sorry, I've mainly used VPSes in recent years, so I completely forgot about IPMI. Yeah, it should make this very simple.
Free for 3 hours, but you can request it several times a day if needed. You can request it via Robot.
The credentials will come after a few minutes, they're fast.
Make sure to have popup blocker disabled for it later.
Try Dropbox Advanced (unlimited space) and mount the storage on your VPS using rclone with a local cache.
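A minimal sketch of such a mount, assuming an rclone remote named `dropbox` is already configured (via `rclone config`) and `/mnt/dropbox` exists; the cache size and paths are placeholders:

```shell
# Mount the remote with a full VFS cache so reads and writes hit local disk first.
rclone mount dropbox: /mnt/dropbox \
  --vfs-cache-mode full \
  --vfs-cache-max-size 50G \
  --cache-dir /var/cache/rclone \
  --daemon
```

With `--vfs-cache-mode full`, recently used files are served from the local cache dir instead of round-tripping to Dropbox on every read.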
Can't your client afford to use B2 for storage?
Ceph is pretty complex and needs a 10G network between nodes. 100 TB can easily fit on a single server; you just need to plan the bandwidth you want. Basically, HDDs keep getting larger, but their bandwidth stays roughly the same, so you need to balance space and bandwidth (i.e. the number of drives and the RAID level).
They can! They increased the budget to 2k euro/mo (and said "have fun" lol), so we could definitely use reliable providers.
But... they are tired of Twitch & YouTube, American corporations. They hate that even though they are "big names", the platform still owns everything: they have no rights, the terms are crazy, and the platform takes a huge percentage of their money.
I gave one streamer the idea of a "european video platform". He loved my UX/UI design work, asked a couple of YouTube/Twitch friends, and they sponsored it.
For now this project is at the "fun stage": I build it, gain experience, and they get a platform for their editors and supporters. If that succeeds, they want to hire programmers and make something like a "european YouTube+Twitch with good terms for creators". The platform I'm making right now is not business-critical, so I can mess it up, no problem; it's just for experience and fun.
That's why using AWS/Backblaze etc. is out of the question; they want a European provider. Initially they wanted to buy their own servers, but at this stage renting is the far better option.
Thanks! I've requested it and now I'm waiting for a reply.
Thanks for information!
For now I will use ZFS with RAID-Z2; if they like the idea I will explore how to build a very big array on MinIO/Ceph/GlusterFS. The budget is there, but I don't want them to waste money on the simple stuff I'm doing now.
Any recommendations for European hosts that will provide 10G/40G networking between nodes at a good price? Stay with Hetzner (and rent a router + 10G cards), or another company?
@Clouvider maybe?
You need to know what you are doing (it is not a copy&paste snippet), but for installing Debian Buster with root on ZFS you can follow this guide:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
I think Hetzner rescue system is also Debian based so you should be able to add ZFS into the rescue system and install your system from there without qemu.
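A rough sketch of adding ZFS to a Debian-based rescue system; whether the module actually builds depends on the rescue kernel and available headers, so treat this as an assumption rather than a recipe:

```shell
# Debian ships ZFS in the contrib repo; the module is built via DKMS
# against the running kernel, so matching headers must be installable.
apt update
apt install -y linux-headers-"$(uname -r)" zfsutils-linux
modprobe zfs
zpool status   # on a fresh system this should report no pools available
```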
Ubuntu installer even has/had experimental support for ZFS on Root but I do not have any personal experience with Ubuntu.
https://ubuntu.com/blog/zfs-focus-on-ubuntu-20-04-lts-whats-new
However, if you are not familiar yet with ZFS, I would not recommend a ZFS-on-root setup. It works, but it's not that easy and not that well integrated into Linux. Instead, installing a Linux distribution with the regular ISO installer into the first 100 GB or so of your disks is much easier. Then you can add ZFS within your running OS and build the zpool on partitions covering the remaining space of the disks.
I tried to do it via the Proxmox web GUI yesterday, but the disk with the OS wasn't selectable in the disk list. There was no "a", only "b", "c", "d" etc.
So you're saying that instead of doing that ZFS setup in the Proxmox web GUI I should do it via SSH, and I can select all space except 200 GB from the first drive? Won't that cut 200 GB from every drive? I've never heard that partitions in RAID6/RAID-Z2 can be different sizes...
I've tried to install Proxmox via QEMU once again with different configs.
Can't go past 99%.
Now I'll try to install it via IPMI as suggested in this thread.
I have never used Proxmox or any web GUI, so I cannot say much about that.
Example partitioning schema:
/dev/sd?1: 1 GB, mdadm RAID-1, ext4 for boot
/dev/sd?2: 100 GB, mdadm RAID-6, LVM, for root partition and other OS-related stuff.
/dev/sd?3: remaining space, used for ZFS RAIDz2 pool.
You should be able to install a Linux OS with any usual ISO installer this way.
The ZFS pool will be configured after installation.
Basically you install your preferred Linux distribution the same way you would with 100 GB disks, just keeping the remaining space for the ZFS partition.
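The schema above could be sketched like this for ten disks sda..sdj; the device names and sizes are placeholders, so double-check them against your hardware before running anything destructive:

```shell
# Partition each disk: 1 GB boot, 100 GB root, remainder reserved for ZFS.
for d in /dev/sd{a..j}; do
  parted -s "$d" mklabel gpt \
    mkpart boot 1MiB 1GiB \
    mkpart root 1GiB 101GiB \
    mkpart zfs 101GiB 100%
done

# RAID-1 over the boot partitions, RAID-6 over the root partitions.
mdadm --create /dev/md0 --level=1 --raid-devices=10 /dev/sd[a-j]1
mdadm --create /dev/md1 --level=6 --raid-devices=10 /dev/sd[a-j]2
mkfs.ext4 /dev/md0            # /boot

# LVM on top of the RAID-6 device for root and other OS partitions.
pvcreate /dev/md1
vgcreate vg0 /dev/md1
```

The third partitions (/dev/sd[a-j]3) stay untouched until the raidz2 pool is created after installation.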
The Proxmox installer on IPMI throws an error.
I'll try once again via the rescue system and custom partitioning like @dfroe suggested.
If that fails, I'll investigate why Proxmox on IPMI throws the error (they already removed IPMI, even though 3 hours didn't pass).
I usually do this, maybe it will help:
https://gist.github.com/jaantaponen/df74eca76e6a3cffb129e826727ce22a
So now here's the problem
I've set up the RAID-6 partitions: boot, swap and the main one.
It works exactly as intended
But, I cannot create ZFS on unused space
I understand "No disks unused", but I want to make a ZFS partition, not convert a whole drive to ZFS.
Can I do it via the Proxmox web GUI, or do I need to use SSH?
I've managed to do it
zpool create bigstorage1 raidz2 /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5 /dev/sdg5 /dev/sdh5 /dev/sdi5 /dev/sdj5
Earlier I had a problem with disk signatures (zpool threw an error that the 'disk is in use') even though I completely removed the partitions, even with 'wipefs --all --force' and 'dd if=/dev/zero of=/dev/sda bs=512 count=1 conv=notrunc'.
Maybe the Proxmox template at Hetzner is causing this signature problem, but the method I figured out above works perfectly!
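One likely reason the dd above didn't help: ZFS writes labels at both the start and the end of a device, so zeroing only the first sector never touches them. A hedged sketch, using /dev/sda5 as a stand-in partition name:

```shell
# Remove stale ZFS labels (stored at the start AND the end of the device).
zpool labelclear -f /dev/sda5
# Remove any remaining filesystem or RAID signatures.
wipefs --all /dev/sda5
```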
So, are you now running the root filesystem on traditional RAID for redundancy?
Another quick question: can a server with a non-root ZFS installation automatically recover from a non-redundant root filesystem blowing up if I were to reinstall the OS with a replacement drive? In other words, does ZFS store any vital state information on the root filesystem that can cause a disaster if lost?
By the way, any particular reason why to use a virtualization layer? Do you actually need multiple VMs on that machine?
Using virtualization will change various parameters. Especially you have to avoid double caching on the host node and the guest os, and many operations like snapshots, scrubbing, resilvering etc. will be much less efficient if the ZFS layer only handles a block volume containing the VM's virtual hard disk and not the real file system with the actual files. Probably most of the advantages by ZFS won't be really usable in this setup and performance might be significantly weaker.
My recommendation for this use case would be to install the Linux OS directly on the bare metal by using ISO installer or rescue system.
P.S.: If zpool create refuses to create a new pool due to old signatures, you can simply --force it to do so.
Yes, but ZFS is separated from it and has its own partition: no Linux RAID, raidz2 inside.
I'm benchmarking it, and next I'm going to try to install Debian with full ZFS disks, not partial like now.
ZFS is installed on bare metal and all storage will be on that partition without additional layers.
Proxmox is there only to test out different stuff fast and to have web gui to monitor all of this stuff without configuring netdata or something else.
I'll try Debian with full ZFS now; there is a script on GitHub https://github.com/terem42/zfs-hetzner-vm but it has 'mirror' hardcoded. I modified it to 'raidz2' and I'll see if that works out.
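The modification itself is a one-liner; the script name and contents below are stand-ins for illustration, not the real repo layout:

```shell
# Demo of the tweak on a stand-in script containing the hardcoded topology.
echo 'zpool create -f rpool mirror "$DISK1" "$DISK2"' > zfs-setup.sh
sed -i 's/\bmirror\b/raidz2/g' zfs-setup.sh
cat zfs-setup.sh   # zpool create -f rpool raidz2 "$DISK1" "$DISK2"
```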
--force didn't work, same error.
With the script I mentioned in the post above plus my modification (changing 'mirror' to 'raidz2' in the code), everything went perfectly!
Here's obligatory YABS
ARC cache doing its job!
100 TB HDD + 128 GB RAM + a nice CPU for 100 euro incl. 23% VAT. Great value!
This is better than NVMe? Using HDDs?
If one or two disks fail in ZFS, how do you recover the whole filesystem?
I have 128GB RAM.
64GB is dedicated to ARC Cache in my config.
Frequently used data / fresh data is stored in ARC Cache automatically.
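For context on "64GB is dedicated to ARC" above: the ARC limit is a ZFS module parameter set in bytes. A sketch assuming a 64 GiB cap:

```shell
# Persist across reboots: 64 GiB = 64 * 1024^3 = 68719476736 bytes.
echo "options zfs zfs_arc_max=68719476736" > /etc/modprobe.d/zfs.conf
# Apply immediately on a running system via the sysfs parameter.
echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max
```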
That can indeed be faster than NVMe, but whenever a read has to fall back to the HDDs it will be slower (500 MB/s in my Z2 array).
The ARC cache makes sure that 64 GB of the most important files are already in RAM, and that greatly improves the speed of everything. Even a file that is not in the ARC cache is still served vastly faster, as the 64 GB of most-requested data doesn't clog the HDD throughput!
It works way better than the Linux page cache from what I see now; I can push A LOT of data from this server. The 1 Gbit network is now the limitation, not HDD speeds :P
Now that I think about it... I'll go back to my earlier ideas.
RAID-Z2 still gives me problems when an HDD fails and doesn't provide HA. These are 10 TB drives; resilvering will take too much time, and for all that time the data is vulnerable... too many compromises.
So now with these two servers, which idea do you guys think is better?
1. RAID10 on both (still vulnerable to whole machine failing, power spike etc.)
2. No raid at all, master/slave solution (possible one-way ethernet port clog)
3. No RAID, two masters (rsync from one to the other into a 'backup' folder; only 50% upload usage compared to the above, as the other 50% will be download, so it should be a lot more balanced and unnoticeable), with load balancing between them in the web app/DNS
4. Erasure coding on both, still waiting for some input from you guys...
Amazing. $100 for this kind of performance.
This is actually a very nice setup for database servers. DBs like MySQL typically have to fit the whole database in RAM, sometimes 1-2 TB, which is too expensive. This seems to keep the performance but much cheaper.
100 euro, just because of the 23% VAT. If you live in the US then it's just 80 euro/mo because of 0% tax xD
But we can deduct the tax as a company, because it is a "running cost", so it will cost us 80 euro/mo.
If you want HA you would be looking at other solutions or a custom one. For instance you could have some sort of load balancer in front and assuming that you have two identical setups. If one file failed to load from server A, retry with Server B.
Yes, but you can also set up Redis/Memcached in such an instance, and you can tweak it better (cache invalidation etc.).
But ZFS with ARC cache is useful everywhere; it just works without setup, and that's important. With Redis you need to spend time configuring it.