Base OS for lxc containers

I'm considering setting up an Ubuntu 20 server to host a bunch of LXC containers. Any other suggestions? I've used Proxmox before but prefer to keep things as simple as possible; I'm always afraid something in Proxmox will break. I really don't like Ubuntu, but it seems the best choice in this case. Container OSes would be CentOS/Alma/Rocky/etc.
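
For what it's worth, a minimal sketch of what I have in mind on the Ubuntu host (distribution and release names are whatever the LXC download template lists, so treat these as placeholders):

  # on the Ubuntu host
  apt install lxc
  # pull a Rocky Linux rootfs from the images server and start it
  lxc-create -n rocky1 -t download -- -d rockylinux -r 8 -a amd64
  lxc-start -n rocky1
  lxc-attach -n rocky1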

Also open to any recommendations on where to host this, especially something for the testing phase (hourly billing or cheap monthly, and more than one IP to test the containers).

Comments

  • Shakib Member, Patron Provider

    My oldest E3 Proxmox node worked fine for 4 years.

    My current oldest Proxmox node has been running fine for 2 years.

    I am not sure what you mean by "Proxmox will break".

  • whl Member

    Thanks for your input.

    I don't mean to dismiss Proxmox at all. However, for me it adds extra layers that could, in theory, go wrong. I worry that a bad update could break things I would not be able to fix, whereas with a more basic Linux install I feel I could fix nearly anything that goes wrong.

    Given a broken Proxmox (broken by my own mistakes) and a broken CentOS/Debian/Ubuntu, I feel I'd eventually be able to repair the non-Proxmox machines (and find much more help on the internet if needed). I'm afraid I'd have a lot more trouble fixing the Proxmox machine.

    Another example is using pct instead of something like lxc-create: I feel Proxmox adds its own layer that restricts most help to the Proxmox forums.
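
    To make the contrast concrete, a rough sketch of the two workflows (the container ID and the template filename are placeholders):

      # plain LXC: any generic LXC documentation applies
      lxc-create -n test -t download -- -d almalinux -r 8 -a amd64

      # the Proxmox equivalent via pct; help mostly lives on Proxmox's wiki/forums
      pct create 101 local:vztmpl/<template>.tar.gz --hostname test \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp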

    Thanked by 1 darkimmortal
  • Maounique Host Rep, Veteran

    Yes, the KISS strategy works in practice. However, when we talk about something so widely used and tested by millions, the other simplicity argument (let others worry about stability and security) works too.
    I am all for a low attack surface and using only what you need. But unless you have the time, remember to keep checking and updating, and are sure you won't miss anything, a stable and tested platform might be better than a home-brewed one, even if you keep it as simple as possible.

  • If you want to keep things really simple, there is always systemd-nspawn.
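
    A minimal sketch, assuming debootstrap is installed and /var/lib/machines exists:

      # build a minimal Debian tree for the container
      debootstrap stable /var/lib/machines/test
      # boot it; -b runs the container's own init
      systemd-nspawn -b -D /var/lib/machines/test
      # or let machinectl manage it via systemd-nspawn@.service
      machinectl start test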

  • I am putting LXC on an Alpine base. Apart from the obvious advantage of being a very lightweight layer, it has a relatively modern kernel, a relatively recent LXC in the repo, support for btrfs, and so on.
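
    Roughly what the host setup looks like (the package and OpenRC service names here are from memory, so verify with apk search lxc on your release):

      apk add lxc lxc-templates lxcfs
      # enable cgroups and the lxc service at boot
      rc-update add cgroups
      rc-update add lxc
      # then create guests with the download template as usual
      lxc-create -n alma1 -t download -- -d almalinux -r 8 -a amd64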

    Thanked by 1 pbx
  • yoursunny Member, IPv6 Advocate

    My dedi is running Debian 11.
    It hosts several LXC containers, mostly Ubuntu 20.
    All configurations are manual, but I prefer to live closer to metal.
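
    The manual part is mostly the network defaults; something like this in /etc/lxc/default.conf (assuming the lxc-net bridge lxcbr0 is enabled on the host):

      lxc.net.0.type = veth
      lxc.net.0.link = lxcbr0
      lxc.net.0.flags = up
      # xx:xx:xx is expanded to a random MAC per container
      lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx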

  • The main contributors to LXC are employed by Canonical. Canonical also created LXD, and its website is on the same linuxcontainers.org domain as LXC. So they are obviously best tested on Ubuntu.

    That said, I've been using LXC (or LXD via snap) on Debian without any big issues since 2014.
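
    The snap route is only a couple of commands (the image alias is a placeholder; pick one from lxc image list images:):

      snap install lxd
      lxd init --auto     # accepts defaults: dir storage pool, lxdbr0 NAT bridge
      lxc launch images:debian/11 web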

  • Neoon Community Contributor, Veteran
    edited January 2022

    Proxmox does not break, especially with KVM.
    Containers can break and cause a total mess, especially LXD.

    Thanked by 2 devp, Shakib
  • pbx Member
    edited January 2022

    @tetech said: I am putting LXC on an Alpine base. Apart from the obvious advantage of being a very lightweight layer, it has a relatively modern kernel, a relatively recent LXC in the repo, support for btrfs, and so on.

    LXC is nice as it's light and you can get everything you need from the repos. The host node will use as little RAM as possible.

    Debian/Ubuntu will need more RAM but come with unattended-upgrades, which is great for automagically updating your host node, including the kernel, and rebooting if needed.
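
    Enabling it is a one-liner plus one setting if you want the automatic-reboot part (the config lines go in /etc/apt/apt.conf.d/50unattended-upgrades):

      apt install unattended-upgrades
      dpkg-reconfigure -plow unattended-upgrades

      # in 50unattended-upgrades, to reboot after kernel updates:
      Unattended-Upgrade::Automatic-Reboot "true";
      Unattended-Upgrade::Automatic-Reboot-Time "04:00";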
