Using the BTRFS filesystem on a VPS

david Member

Is anybody else using BTRFS on their VPS? I am successfully using it on my RackNerd VPS, and I'm planning to try it on another soon.

To start, I have a backup of my system made with borg (it's helpful to install btrfs-progs on the system before taking the backup, so it's available on the restored system later).

I booted into a Rescue Environment. The newest kernel available was 4.x, so some of the new features aren't available. Only lzo or zlib compression is available (no zstd) with the old kernel.

You need static builds of btrfs (and of borg, in my case, since I'm using borg backup).

For btrfs:

https://github.com/kdave/btrfs-progs

Grab btrfs.static and btrfs.box.static from the release assets. I renamed btrfs.static to btrfs and btrfs.box.static to mkfs.btrfs (the box build selects its behavior based on the name it's invoked as) and uploaded them to /root in the rescue environment.
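As a sketch, fetching the static binaries into the rescue environment might look like this (the release tag is a placeholder; check the releases page for the current asset names):

```shell
# <tag> is a placeholder for the actual release tag on the
# btrfs-progs releases page; asset names as described above
cd /root
wget -O btrfs      "https://github.com/kdave/btrfs-progs/releases/download/<tag>/btrfs.static"
wget -O mkfs.btrfs "https://github.com/kdave/btrfs-progs/releases/download/<tag>/btrfs.box.static"
chmod +x btrfs mkfs.btrfs
```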

For borg:

https://github.com/borgbackup/borg/releases

Many different versions. I found the one that worked for me was:

borg-1.2.8-linuxold64

"linuxold" uses an older version of glibc, and the newer ones didn't work in the rescue environment.

I found the data partition (/dev/vda1 in my case):

lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1 1024M  0 rom
vda    254:0    0   40G  0 disk
├─vda1 254:1    0 38.8G  0 part
└─vda2 254:2    0  1.3G  0 part
vdb    254:16   0  1.1G  0 disk
└─vdb1 254:17   0  1.1G  0 part /

Formatted with BTRFS (with the -f option to force, make sure the device name is correct):

/root/mkfs.btrfs -L label -f /dev/vda1

Mount the new filesystem:

mkdir /mnt/temp
mount -o compress=lzo,noatime /dev/vda1 /mnt/temp

In my case, I chose compress=lzo. You could also just use "compress" for zlib. I'd prefer zstd, but lzo is still OK. You could also leave it out, for no compression. The compress option is set at mount time, so if you want it to compress consistently, you'll want to always mount it with the compress option.

I created a root subvolume:

cd /mnt/temp
/root/btrfs subvol create @

Unmounted:

cd /
umount /mnt/temp

Mounted the new btrfs @ root subvolume:

mount -o subvol=@,compress=lzo,noatime /dev/vda1 /mnt/temp

Then I did the borg restore to /mnt/temp.

I back up everything I need with borg, except for these directories, which I just recreate:

mkdir dev
mkdir media
mkdir mnt
mkdir proc
mkdir run
mkdir sys
mkdir tmp
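One caveat when recreating these by hand: /tmp needs mode 1777 (sticky bit, world-writable), or software that writes temp files will misbehave after the reboot. A minimal sketch, run from inside the restored root:

```shell
# recreate the directories excluded from the backup
mkdir dev media mnt proc run sys tmp
chmod 1777 tmp   # sticky + world-writable; the default 0755 would break /tmp users
```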

I also created an @home subvolume after the restore:

cd /mnt/temp
mv home home2

/root/btrfs subvol create @home
/root/btrfs subvol list /mnt/temp

cd @home
mv /mnt/temp/home2/* .

cd /mnt/temp
rmdir home2

mkdir home

Modify the fstab under /mnt/temp/etc. Mine looks like this:

# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/vda1       /               btrfs subvol=@,compress=lzo,noatime             0 0
/dev/vda1       /home           btrfs subvol=/@/@home,compress=lzo,noatime      0 0
/dev/vda2       swap            swap     defaults       0       0

I also found I needed to update the swap partition info here:

/mnt/temp/etc/initramfs-tools/conf.d/resume

Set RESUME=UUID= to the UUID of the swap partition.
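A sketch of doing that in one step with blkid (device and paths as above; verify the UUID it prints before relying on it):

```shell
# read the swap partition's UUID and write it into the resume conf
swap_uuid=$(blkid -s UUID -o value /dev/vda2)
printf 'RESUME=UUID=%s\n' "$swap_uuid" > /mnt/temp/etc/initramfs-tools/conf.d/resume
```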

If the IP is changing, now is also the time to edit /mnt/temp/etc/network/interfaces.

Install and update grub:

cd /
mount --rbind /dev /mnt/temp/dev
mount --rbind /sys /mnt/temp/sys
mount --rbind /proc /mnt/temp/proc
chroot /mnt/temp

grub-install /dev/vda
update-grub

At some point you'll need to run "update-initramfs -u". I ended up doing it later, after rebooting and getting a "resume" error (though the system eventually boots without swap). This point, inside the chroot before rebooting, would be the better place to do it.

I saved around 3.8 GB disk space (with the compression) on about 12.6 GB restored (took up ~ 8.8 GB).
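If you want to check the savings on a running system, the compsize tool (packaged in Debian, if I remember right, as btrfs-compsize) reports compressed vs. uncompressed usage for a btrfs tree:

```shell
# summarize on-disk (compressed) vs. uncompressed sizes; needs root
compsize /mnt/temp
```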

I'm also taking snapshots, so I can roll back if desired, though I'm doing it with my own scripts, so rolling back would require booting into the rescue environment first.
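For reference, the core of a roll-your-own snapshot script can be as small as this (a sketch, not the author's actual script; it assumes a /.snapshots directory exists on the same btrfs filesystem):

```shell
# create a read-only, timestamped snapshot of the root subvolume
snap_name="root-$(date +%Y%m%d-%H%M%S)"
btrfs subvolume snapshot -r / "/.snapshots/$snap_name"
```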

Please note, I'm using Debian 12. Some things may be slightly different on a different distro.


Comments

  • Motion3549 Member
    edited May 29

    RHEL moving from btrfs to xfs, isn't that an indication that it isn't meeting requirements?

  • david Member

    @Motion3549 said: RHEL moving from btrfs to xfs, isn't that an indication that it isn't meeting requirements?

    I'm not sure why they did that. I used RedHat and CentOS years ago, but switched to Debian. As far as I know xfs doesn't have compression or snapshots.

  • quicksilver03 Member, Host Rep

    I used BTRFS on almost all of my systems until 1 year ago, when I noticed that MariaDB databases were getting corrupted quite often. I switched to XFS and the problem went away immediately.

    Today I use XFS whenever I need a database, that is, something that reads and writes blocks frequently in the same files, and BTRFS for backup data or Jellyfin content, that is, large files that do not change often. BTRFS's ability to dynamically add or remove disks and to change a volume's profile has solved quite a few problems.

  • Commend7 Member
    edited May 29

    I've been using btrfs across all hosts without ever thinking about the underlying host/provider, but I had to write a nixos module that selectively disables CoW in directories with frequent reads/writes (database, storage servers). Unfortunately, nodatacow also disables corruption checksums. XFS with LVM might be easier if you don't do IaC. I'm considering trying zfs.

  • tentor Member, Host Rep

    Btrfs, from my experience, is not suitable for any databases due to CoW. Disabling CoW makes btrfs quite useless IMO, so I am not sure why you would use it on servers. I do use it as the rootfs on my personal laptop, btw.

  • vastness4594 Member

    I haven't tried any other filesystem other than ext4. It just works for me.

  • david Member

    @quicksilver03 said: I used BTRFS on almost all of my systems until 1 year ago, when I noticed that MariaDB databases were getting corrupted quite often. I switched to XFS and the problem went away immediately.

    Thanks for this info. I haven't noticed an issue, yet, with MariaDB and BTRFS. I suppose I could disable copy-on-write for the mysqldb directory, or create a separate ext4 or xfs partition just for mariaDB.

  • emgh Member, Megathread Squad

    XFS on all serious setups here. EXT4 on some idlers I cba to XFS.

  • david Member
    edited May 29

    @vastness4594 said: I haven't tried any other filesystem other than ext4. It just works for me.

    It's the default option in Debian, and it is simple. BTRFS complicates things a little bit, but in return it gives you compression and snapshots. And due to the way it works, there's no fsck.

    I've also switched my home system to BTRFS, but not my raspberry pi and nano pi. I don't think the rpi kernel even supports it. And for simplicity's sake, anyway, I think it's OK.

  • david Member
    edited May 29

    @quicksilver03 said: I used BTRFS on almost all of my systems until 1 year ago, when I noticed that MariaDB databases were getting corrupted quite often. I switched to XFS and the problem went away immediately.

    I decided to use the nodatacow option for the mysqldb directory, to be safe.

    I stopped mariadb, backed up the /mysqldb files, deleted the old /mysqldb directory, created a new one and:

    chattr +C /mysqldb
    

    Then I moved the /mysqldb files back, and checked with lsattr (the +C attribute only applies to files created after it's set, which is why the directory is recreated empty first).

    ---------------C------ ./ibdata1
    (etc)
    
  • emgh Member, Megathread Squad

    I think MongoDB recommends XFS for what it’s worth

  • jperkins Member

    zfs ftw

  • david Member

    ZFS is interesting, too. I also considered it, but my impression was that it was heavier on resources than BTRFS, but maybe that's not true.

  • darkimmortal Member
    edited May 29

    Mentally strong people use a separate XFS partition (and LVM to manage things) for use cases that don't perform well on Btrfs.

    Disabling CoW is a shit hack that should never have been added, and especially never integrated by default into distros, systemd-journald, etc.

    For all the bad rep that Btrfs gets for dangerous RAID 5/6, running with CoW disabled is more dangerous, especially if you disable CoW anywhere on a Btrfs RAID partition, in which case data loss is guaranteed.

  • david Member

    A side note, my swap partition UUID didn't change, but the value in:

    /etc/initramfs-tools/conf.d/resume
    

    was incorrect. I wonder how many VPS installations have slow boot times due to that resume timeout. After updating it and running update-initramfs -u, the boot is fast. I noticed it because I was connected to VNC.

  • raindog308 Administrator, Veteran

    @emgh said: MongoDB recommends XFS

    Despite that, XFS is a good filesystem.

  • jperkins Member
    edited May 29

    @david said:
    ZFS is interesting, too. I also considered it, but my impression was that it was heavier on resources than BTRFS, but maybe that's not true.

    I've never used BTRFS, but zfs does use more resources than ext4. The licensing issue with zfs is not a problem for me.

    It will use half your memory as cache (ARC) but release it as needed. I did have memory-release issues on RHEL/Alma, but it works fine on Debian.

    I use it on VPSes that have 3-4 old Xeon processors and 3-4 GB of RAM. Works really great as an encrypted backup where the remote VPS has never seen the keys. Sanoid with syncoid simplifies the snapshots and remote transfer of the raw encrypted dataset (-w).

    https://openzfs.github.io/openzfs-docs/man/master/8/zfs-send.8.html#w
    https://github.com/jimsalterjrs/sanoid

    You don't even need a partition to assign to a zpool for testing; it will work with a flat file:
    https://openzfs.github.io/openzfs-docs/man/master/8/zpool-create.8.html#Example_4_:_Creating_a_ZFS_Storage_Pool_by_Using_Files
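    On the ARC point above: if half of RAM is too much on a small VPS, the ceiling can be lowered with the zfs_arc_max module parameter; a sketch (the 1 GiB value is just an example, and it takes effect once the zfs module is reloaded):

```shell
# cap the ZFS ARC at 1 GiB (example value, in bytes)
echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf
```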

  • omelas Member

    Can I trust random filesystem benchmarks when choosing one for VMs? Does the nested nature of a VPS filesystem (it's just a file in the host's filesystem) screw some filesystems?

  • emgh Member, Megathread Squad

    @raindog308 said: Despite that, XFS is a good filesystem.

    MongoDB is awesome

  • jperkins Member

    @omelas said:
    Can I ....

    If you are replying to me, idk about benchmarking. Regarding nested zfs, it works for me with disk partitions. My use case is primarily backup with some limited file serving. Not much of a benchmarker. When I have troubles I check the rsync speed or run iperf. I only used a flat file as a zpool for testing purposes.

  • tentor Member, Host Rep

    @emgh said: MongoDB is awesome

    Genuine question - are you just lazy doing data normalization and migrations or why you use NoSQL database engine like MongoDB?

  • davide Member
    edited May 29

    @quicksilver03 said:
    MariaDB databases were getting corrupted quite often.

    MariaDB :D

    # cat /usr/local/sbin/mariadb_is_garbage.sh
    #!/bin/bash
    # Restart MariaDB when replication looks stuck: the IO thread has
    # stopped, or the exec position hasn't advanced since the last run.

    status_file=/tmp/mariadb_is_garbage.status

    broken=false
    out=$(mysql -u root -pass -e "show slave status\G")
    pos=$(echo "$out" | grep -Pio "(?<=Exec_Master_Log_Pos: )[0-9]+")

    echo "$out" | egrep -iq "Slave_IO_Running: Yes" || broken=true

    if [ -f "$status_file" ]; then
        pos_old=$(< "$status_file")
        [ "$pos" == "$pos_old" ] && broken=true
    fi

    echo -n "$pos" > "$status_file"

    if [ "$broken" == true ]; then
        /usr/sbin/service mariadb restart >/dev/null
    fi
    
  • emgh Member, Megathread Squad

    @tentor said: Genuine question - are you just lazy doing data normalization and migrations or why you use NoSQL database engine like MongoDB?

    Because if the whole frontend is based on JSON data, normalizing into a rational database makes no sense, only for it to be converted back to JSON for every request.

  • tentor Member, Host Rep

    @emgh said: Because if the whole frontend is based on JSON data, normalizing into a rational database makes no sense, only for it to be converted back to JSON for every request.

    I will note it as laziness

  • emgh Member, Megathread Squad

    @tentor said: I will note it as laziness

    :D

    Honestly, I did sketch up how it would all work rational-style, but it would be a horrible mess. If I went into further details I’d probably expose the website, but rational-style, a query that right now is exactly one query, would be a query full of subqueries, alternatively, I’d have to store JSON in a rational database, which make no sense.

    Also, as said, whole API is JSON and the website is data-driven, so converting it back and forth constantly, idk…

  • tentor Member, Host Rep

    @emgh said: Honestly, I did sketch up how it would all work rational-style, but it would be a horrible mess. […]

    I am not blaming you!

  • emgh Member, Megathread Squad

    @tentor said: I am not blaming you!

    When I took over this project I had only worked with rational databases before and hated that only the basics was normalized. Json + database made no sense to me and that’s why I sketched up how to migrate.

    But over time tbh I’ve started to like it :D

  • e2bs2k1 Member

    XFS is better on a VPS since you normally do not need to scale.
    I always use ext4 with lvm2 on my dedicated server.

  • raindog308 Administrator, Veteran

    @emgh said: MongoDB is awesome

  • raindog308 Administrator, Veteran

    @emgh said:
    Honestly, I did sketch up how it would all work rational-style, but it would be a horrible mess. If I went into further details I’d probably expose the website, but rational-style, a query that right now is exactly one query, would be a query full of subqueries, alternatively, I’d have to store JSON in a rational database, which make no sense.

    It’s relational, not rational.

    And you can do what you’re doing in MongoDB in Postgres, which is free and has a much better track record. Pg has excellent JSON support. People store JSON in relational DBs and muck with it using DB JSON functions all day long.

    MongoDB is indeed easy to work with, but…
