Tools to manage a set of VMs as an end user?
What tools are you using to manage a set of low end VMs on different hosts, with different things running on them? Not for reselling, but for your own usage.
Namely for: configuration and updates, monitoring resources and usage, backups.
Are you using some sort of orchestration panel/tool, Ansible, ssh + custom scripts, other things?
Comments
Ansible. It's free, does not require an agent, and works over ssh keys.
There are others: puppet, salt, cfengine, chef, nix. Of those, only cfengine is LEB-friendly. However, cfengine is also much older, and you'll find more community support and examples for ansible. Besides, ansible has a cool name from science fiction history.
I haven't used nix - I think @joepie91 has. But there you're into a different distro, while other options I mentioned work with all the mainstream distros.
I find ansible really easy to use - set up the ssh keys, write or copy/paste some rules, and away you go. Using jinja2 templates makes things really easy as well.
Typically when I get a new VPS, I either have the panel install my ansible ssh key or I login and do it. After that, it's ansible-playbook for nearly everything - setting up apt, putting in my preferred profile/bashrc/sshd_config/blah blah, setting up cron and accounts...everything. I run it nightly so boxes are always in policy.
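A minimal playbook along those lines might look like this - a sketch only, where the hostnames, file names, and paths are placeholders, not the poster's actual setup:

```yaml
# site.yml -- run with: ansible-playbook -i hosts site.yml
- hosts: all
  become: true
  tasks:
    - name: keep base packages current
      apt:
        update_cache: yes
        upgrade: safe

    - name: push a universal bashrc
      copy:
        src: files/bashrc
        dest: /root/.bashrc

    - name: template sshd_config, restart sshd only if it changed
      template:
        src: templates/sshd_config.j2
        dest: /etc/ssh/sshd_config
      notify: restart sshd

  handlers:
    - name: restart sshd
      service:
        name: ssh
        state: restarted
```

The handler pattern is what makes the nightly run safe: sshd only restarts on the nights the templated config actually changed.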
Only thing to keep in mind is that now you have a box that has root on every other box...but you will have the same issue regardless of which platform you choose.
Also, ansible is not nix, so if you install something and never uninstall it, it will stay installed unless you later write an uninstall rule...but everything other than nix works that way.
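For instance (the package name here is just for illustration), removal has to be declared explicitly as its own rule:

```yaml
# ansible only converges what you declare; undoing an install needs a rule too
- name: remove squid now that it's no longer wanted
  apt:
    name: squid
    state: absent
```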
If you're looking for job control (i.e., batch jobs)...it kind of sucks. There's some commercial stuff, but even the free options assume you're running on big boxes. RunDeck is one option I looked at...seems like it's still maturing. In the end, I ended up writing my own, but it depends what you're looking for - jobs as in computational farm, or jobs as in "run this job to transfer media, this one to encode, these dependencies, then notify, etc" job flow, etc.
I should add that the move to ansible has been transformative in my world.
For example, I've easily recouped the time I wasted logging into boxes and slowly evolving them towards my personal preferences. Now it's always one universal vimrc, one universal bash_profile, that annoying damn bell is turned off everywhere in inputrc, etc.
Marry ansible with a simple CMDB (list of your servers with basic config info - you could keep this in sqlite or just a text file) and you have a lot of power. For example, after ansible I have firewalls everywhere...before, it was hit or miss to be honest, but now that I have one central place to manage everything, and I have a definitive database of host/IP/role/etc., it's easy to write a script to put out firewall rule jinja templates and push them out. Very easy to say "these three galera hosts should have these four ports open but only to each other, please spit out the rule".
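As a sketch of that idea (the group name, ports, and variable names are made up, not the poster's actual setup), an inventory group can stand in for the CMDB's role field and drive a jinja2 firewall template:

```jinja
{# templates/galera-fw.rules.j2 -- "galera" is a hypothetical inventory group,
   galera_ports a group variable, e.g. [3306, 4567, 4568, 4444] #}
{% for host in groups['galera'] %}
{% for port in galera_ports %}
-A INPUT -s {{ hostvars[host]['ansible_host'] }} -p tcp --dport {{ port }} -j ACCEPT
{% endfor %}
{% endfor %}
```

Rendered per host, that spits out exactly the "these peers, these ports" rules, and regenerates itself whenever the inventory changes.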
Likewise, now everything in my world is encrypted. I created my own CA and installing its CA .pem is a standard part of my deployments, so every node recognizes my CA and has its own key, crt, etc.
I have a couple standard OSes (Deb 8, OpenBSD 6.0) and a bunch of roles, so it's just mix and match to setup a new host. Everyone gets common + an OS-specific one, then a set of roles depending on what that box is doing.
I have set ansible to run nightly, which is nice from the perspective of "I made this change on node1, I'll just drop the config file in the ansible source dir and then at o-dark-thirty, ansible will push that out to all my hosts, restart the service, etc." In other words, you can do an immediate push if you want, but if you have it automated, you can do a lazy push as well.
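The nightly run itself is just a crontab entry on the control box (paths here are hypothetical):

```
# run the full playbook at 03:30 every night, log the output
30 3 * * * ansible-playbook -i ~/ansible/hosts ~/ansible/site.yml >> ~/ansible/nightly.log 2>&1
```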
I'm a pretty light user in terms of what the power of these kinds of config tools are...people are doing even more exotic things (e.g., Kubernetes) but if you just want to keep your host aligned and start to manage your VMs as a fleet instead of one-offs (cattle instead of pets), ansible or equivalent is awesome.
thanks, nice post, i will try Ansible
It's worth noting that there's a good reason for that. Pretty much every orchestration tool will leave behind lots of state and cruft, which is pretty much inherent to the way they work. To have a deterministically configurable system, you must have a custom distribution, simply because standard Linux distributions don't provide you with the tools or guarantees you need to implement that.
This article goes into a lot more detail about that problem (which affects Ansible, Puppet, and all the others, but not NixOS).
rsync
apt-get
top, df, vnstat, smokeping
rsync
Unless you have literally hundreds of VMs, that shit is very likely to be more trouble than it's worth.
So just this. But actually my custom scripts are now an orchestration toolkit on their own, just a bare simple one which does exactly what I need, and nothing more.
Asyd - https://asyd.eu/
Jenkins is more trouble than it's worth, Ansible... is ok but meh.
Ultimately I have a single bash script:
if [ "$(id -u)" == "0" ]; then
    set_hostname
    install_sources
    update_system
    install_pkg
    get_sysinfo
    install_ssh_keys
    secure_sshd
    secure_system
    install_gpg
    install_scripts
    configure_snmpd
    configure_bashrc
    install_cron
    generate_ssh_key
    #install_cjdns
    install_ovpn
    install_3proxy
    install_squid
    external_script
    exit 0
else
    # no root, do only some things
    install_ssh
    install_gpg
    install_scripts
    configure_bashrc
    generate_ssh_key
    external_script
fi
What would be the best way to only monitor resource usage and show the status back to the end user?
I have 6 VPSes with software installed, some using 5% CPU, some more. Something that reports all metrics back to a control panel where I can see this across all VPSes?
I don't agree at all. You could get ansible managing account passwords and keys in a handful of lines. Once you start, it's easy to bolt on more things.
I have 20-30 servers, I started using ansible to manage basic things like chronyd, resolv.conf, passwords, packages. All of that is pretty simple to get going with ansible.
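A "handful of lines" for passwords and keys really can look like this - a sketch, with the account name and key path made up for illustration:

```yaml
# users.yml -- manage one account and its ssh key across all hosts
- hosts: all
  become: true
  tasks:
    - name: ensure my account exists with a pre-hashed password
      user:
        name: admin
        password: "{{ admin_password_hash }}"  # hashed value, e.g. kept in ansible-vault
        shell: /bin/bash

    - name: install my ssh public key
      authorized_key:
        user: admin
        key: "{{ lookup('file', 'files/id_ed25519.pub') }}"
```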
When I rolled out Sensu recently, I used my existing ansible setup to easily push it to all my servers.
Getting ansible to be useful doesn't take much work. Getting it to do everything, though, takes a lot more.