Config Capture - what do you grab?
Like most of you (I hope!), I backup my systems nightly.
Before backing up, there are some prep steps - mostly db dumps (mysql or pg or whatever).
One other thing I do is dump the results of various commands into a backup dir, which the backup then includes. I find it's very helpful to have a dump of what packages were installed or what services were running. Some of this you can get from config or etc files, but even so, it only takes seconds for a script to run these commands, and I figure it's better to have too much than too little.
The LET web filter would block this, so I put it on pastebin. What else would you add? I'm sure I'm missing some things.
You can assume that all the files on the system are backed up...with the usual exceptions (proc, tmp, etc.) So for example I don't grab crontabs because those are in var, but I do cat cpuinfo, since that wouldn't be backed up. Etc.
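A minimal sketch of such a pre-backup capture script, assuming Debian-ish tooling; the capture directory and exact command set are illustrative, not the OP's actual script:

```shell
#!/bin/sh
# Sketch of a pre-backup "config capture" script, roughly as described above.
# The capture directory and command list are assumptions; adjust to taste.
out="${CAPTURE_DIR:-/tmp/config-capture}"
mkdir -p "$out"

# Things that live only in /proc and would otherwise not be backed up:
cat /proc/cpuinfo > "$out/cpuinfo.txt" 2>/dev/null || true
cat /proc/meminfo > "$out/meminfo.txt" 2>/dev/null || true

# Installed packages and running services (Debian/systemd assumed,
# guarded so the script still exits cleanly elsewhere):
command -v dpkg >/dev/null && dpkg -l > "$out/packages.txt"
command -v systemctl >/dev/null && systemctl list-units --type=service > "$out/services.txt"

echo "capture written to $out"
```

The nightly backup then simply includes that directory along with everything else.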
Comments
Interesting. I don't do that in general. I just do a bog-standard nightly backup of all files.
One thing I'd think of to include that isn't on the list is lsof -i for open ports.
Interesting though, really. Might include it in my backups.
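For illustration, one way to capture listening sockets at backup time (filenames are placeholders; ss is shown as a fallback since lsof isn't always installed):

```shell
# -nP skips host/port name resolution; -iTCP -sTCP:LISTEN restricts lsof
# to TCP listeners. Guarded in case lsof isn't installed.
command -v lsof >/dev/null && lsof -nP -iTCP -sTCP:LISTEN > /tmp/listening-tcp.txt

# ss is a common fallback for the same information:
ss -tulpn > /tmp/open-ports.txt 2>/dev/null || true
```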
I dump a list of installed packages to a text file, which is then included in my backup. I used to grab individual config files I had updated, but that got tedious after a while, so now I just include the whole /etc directory, some of /home and some of /var. I can cherry-pick the files I want from that archive to get things up and running within an acceptable (to me) timeframe.
Edit: Obviously I also include database dumps.
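On Debian/Ubuntu, one common way to make that package list restorable (a sketch, guarded so it's a no-op on other systems):

```shell
# Dump the package selection list (Debian/Ubuntu assumed; the redirect
# still creates the file even where dpkg is absent):
dpkg --get-selections > /tmp/packages.list 2>/dev/null || true

# On a rebuilt machine the list can be replayed:
#   dpkg --set-selections < packages.list
#   apt-get dselect-upgrade
echo done
```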
I generally keep a list of which VPS has what: in KeePass I note which services are running on each VPS and any special ports for that server. If you back up all files aside from tmp/proc/dev, what else do you need?
I'd say grab /etc/, but you're already doing that. The list of installed packages is a good idea, but I don't understand the need to back up some of those things like CPU/memory info, uptime, etc. That data would all be included in your monitoring software's stats and isn't necessary to restore a server or create a new one from the backup.
I run my MySQL backup script, which checks for broken tables and creates individual as well as a full MySQL dump, grab some dirs from /etc/, the /home/ content obviously, and some other random things like crontabs, then rsync it all over to another server, delete backups on the remote server older than 30 days, and send off the email with the results.
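The "delete remote backups older than 30 days" step can be demonstrated with a throwaway directory (the real target would be the rsync destination on the remote server; filenames here are stand-ins):

```shell
# Demo of the 30-day prune, using fake dump files with old/new mtimes:
d=/tmp/demo-backups
mkdir -p "$d"
touch -d '40 days ago' "$d/old.sql.gz"   # stand-in for a stale dump
touch "$d/new.sql.gz"                    # stand-in for today's dump

find "$d" -type f -mtime +30 -delete     # the actual prune command
ls "$d"                                  # only new.sql.gz survives
```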
Config files are rarely edited once services are set up and running. I suggest creating a "vietc" (a "vipw" twin) so as to make a backup before editing configs, crontabs, etc.
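A minimal sketch of what such a "vietc" could look like (the history directory and naming scheme are assumptions, not an existing tool):

```shell
#!/bin/sh
# Sketch of a "vietc" wrapper: snapshot the file with a timestamp,
# then open it in $EDITOR. The history directory is an assumption.
vietc() {
    f="$1"
    hist="${CONFIG_HISTORY:-/tmp/config-history}"
    mkdir -p "$hist"
    cp -p "$f" "$hist/$(basename "$f").$(date +%Y%m%d-%H%M%S)"
    "${EDITOR:-vi}" "$f"
}

# Demo with a throwaway file and a no-op "editor":
echo 'Port 22' > /tmp/demo.conf
EDITOR=true
vietc /tmp/demo.conf
```

Each edit then leaves a timestamped copy behind, which the nightly backup picks up along with everything else.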
Another habit I grew is to not put all into one backup but to rather break it up. So, at say 2:20 am I pull my DB backup, at 3:00 am I (rotate and) pull web logs and reports, and so on ...
Moreover I scp all my backups to another server, too, where I also automatically check them for certain issues.
I always add any newly created or modified config to my auto-installer or notes, so I just back up files and the DB daily.
I don't back up /proc, so that's the only way to get them. uptime is probably not needed.
ansible :-)
But your suggestion is a good one. People also use git, etc. for /etc
That suggestion I honestly do not get. I see no advantage. I understand frequent DB backups to enable point-in-time recovery (though FOSS DBs suck for this) and things like that, but not breaking up backups.
duplicity :-)
I don't have a verify step atm, though perhaps I will add one, either on the vault side or by the client.
Managing backups is like a separate part-time job. Even when they work flawlessly, there's still all the checking to make sure they're flawless...
@raindog308
Two main reasons:
a) Most backup data I don't care about in terms of looking at it; others however, like logs, I do care about and do look at. So I don't want to decompress/untar the whole backup just to get at them.
b) Having reports and logs as the last batch also helps me find out comfortably whether - and if not, why - each of the backup steps worked well.
See -> batched backups above; plus: all my backup scripts write to a backup.log with the date in its name, which is sent as the last piece. All I need to do then is check that backup.log for the given day for success or error codes (and finally send myself the relieving daily "backups went O.K." email).
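The daily check might look something like this sketch (the log name and the OK/ERROR markers are assumptions about what the backup scripts write):

```shell
# Stand-in log entries; in reality the backup scripts append these.
log="/tmp/backup-$(date +%Y-%m-%d).log"
printf 'db dump: OK\nweb logs: OK\n' > "$log"

# The actual check: any ERROR line means trouble, otherwise all clear.
if grep -q 'ERROR' "$log"; then
    verdict="backup FAILED, see $log"
else
    verdict="backups went O.K."
fi
echo "$verdict"
```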
See, I'm more paranoid...I have One True List of Servers in a database, and I ask the question "did every server back up as it was supposed to?" Otherwise, if you comment something out for some reason, you'll never get a failure.
There are lots of ways to do it. As I say, it can be a part-time job...
I do check whether all servers backed up properly; I just do it on a central machine to which all backups are scp'd. Plus some reasonability checks are made; for example: today's backup must be between 98% and 105% of the size of yesterday's backup, or else an alarm is triggered.
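The 98%-105% check boils down to a couple of lines of shell arithmetic. A sketch with stubbed sizes (in reality they'd come from something like stat -c %s on the two archives):

```shell
# Stubbed sizes in bytes; replace with stat -c %s on the real archives.
yesterday=1000000
today=1020000

low=$(( yesterday * 98 / 100 ))
high=$(( yesterday * 105 / 100 ))

if [ "$today" -lt "$low" ] || [ "$today" -gt "$high" ]; then
    echo "ALARM: today's size $today is outside [$low,$high]" > /tmp/size-check.txt
else
    echo "size check passed" > /tmp/size-check.txt
fi
cat /tmp/size-check.txt
```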
More paranoid than me? Forget it. Example: I wouldn't trust perl to tell me the date let alone to run anything important (nor would it come to my mind to use a crap shell like bash).
But that "contest" is off anyway as according to Murphine's law (Murphy's sister; far more positive than him) the chance of backups failing is indirectly proportional to the thinking and the effort put into them (translation: Yours won't fail, neither will mine).
Let us be worried about those who apt/yum install some backup thingy and occasionally run that maybe once every 2 months.
I'd add a few "extras" that I find very useful:
(Debian) apt-mark output (this, when managed correctly, will allow you to get rid of unneeded packages when you uninstall things). I don't know the equivalent (or even if it's necessary) in other systems.
In case you have/use lm-sensors, it is useful to dump the output of sensors (again, just like cpuinfo etc., as a reference measure). Ditto for "ipmitool sdr".
/proc/mounts (again just a useful reference)
df output (granted this rightfully belongs in monitoring but it helps anyway) (do it twice to get blocks and inodes or else use something like di).
iptables --list -n : ADD -v to get counter output (again extra information is always better!). What about ip6tables? Also don't forget the nat/mangle table(s) in case you use it (or even if you don't - again better more than less). It may also be very handy to just dump/capture the output of iptables-save/ip6tables-save.
Where applicable, I've also found lsblk (customize output) and iostat -m to be very useful. After a few err...ummm... situations, I realized that I'd rather have all these data points than try to "figure" them out on the fly (when half the brain isn't working because of kernel panic and the other half is simply stuttering and muttering "I told you so...")
Where applicable, SMART output of the drives on the system (in case something else doesn't already capture it). Needless to say, please ensure that you ARE running smartd checks regularly (this will reflect in the SMART output as well).
One last point - in case you use chattr, I've not (yet) found a reliable way to save/capture chattr state using (userspace) tools like rsync etc. I think even tar doesn't cut it. So generating a chattr list is useful (also add getfattr if that helps or is used). Capabilities are nicely saved/restored via rsync so no worries there.
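The extras above could be tied together in one capture pass along these lines; a sketch where every command is optional (only what's installed gets run), and the output directory, filenames, and the /dev/sda example are assumptions:

```shell
#!/bin/sh
# Sketch of a capture pass for the "extras" listed above.
out="${EXTRAS_DIR:-/tmp/extras}"
mkdir -p "$out"

capture() {  # capture <outfile> <command> [args...] - skip if command absent
    name="$1"; shift
    command -v "$1" >/dev/null 2>&1 && "$@" > "$out/$name" 2>&1
}

capture apt-mark.txt   apt-mark showmanual
capture sensors.txt    sensors
capture ipmi-sdr.txt   ipmitool sdr
capture df-blocks.txt  df -h
capture df-inodes.txt  df -hi
capture iptables.txt   iptables-save
capture ip6tables.txt  ip6tables-save
capture lsblk.txt      lsblk
capture iostat.txt     iostat -m
capture chattr.txt     lsattr -R /etc
capture smart-sda.txt  smartctl -a /dev/sda   # device name is an example

cp /proc/mounts "$out/mounts.txt" 2>/dev/null || true
echo "extras captured in $out"
```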
I think you have everything covered :-) (until you don't of course - please share when that happens).
Thanks.
P.S: It's great that you're running so many "variants". I find it hard enough to master and keep track of plain-old-lovely-Debian (and no, I'm quite happy thank you. I intend to first become a debian+vim master hopefully in this lifetime. Other OSs/Editors are reserved for reincarnations).
As always, interesting questions @raindog308
I feel monitoring and backups are two distinct things.
You should certainly monitor your backups, to ensure they're running properly. And you should backup your monitoring logs (e.g., your Nagios database). But monitoring is for detecting problems, and backups are for recovering from problems; they serve different purposes.
Regarding the OP's question about backing up config files, I backup /etc to be on the safe side, but all config is in a git-backed ansible repository. If a server dies, I can resurrect it from scratch in less than a minute with ansible. As the saying goes, cattle, not pets....