Happy SysAdminDay :)

_MS_ Member

Time to (if not already):

  • Install available patches
  • Disable password logins
  • Disable root login
  • Change the default SSH port
  • Schedule and test backups
  • Consolidate servers
  • Install htop :)
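
The SSH items above boil down to a few lines in sshd_config. A minimal sketch (the port number is an arbitrary example; pick your own):

```
# /etc/ssh/sshd_config
Port 2222                   # non-default port (example value)
PermitRootLogin no          # no direct root logins
PasswordAuthentication no   # keys only
PubkeyAuthentication yes
```

Validate with `sshd -t` and keep your current session open while restarting sshd, so a typo can't lock you out.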

Any other tips?

Comments

  • _MS_ Member

    SysAdminDay Will Be Happy

  • Find the nearest sysadmin, even if you are one yourself, and buy them pizza/beer or just
    say a few words of gratitude.

    Thanked by 3: _MS_, hostdare, Zyra
  • LTniger Member

    @MS said: Schedule and test backups

    Scheduling backups is clear, but the testing part is tricky. What methods do you use to verify that a backup is actually correct (checksums?) and that no files were corrupted in transit?

    Thanked by 1: hostdare
  • @LTniger said: Scheduling backups is clear, but the testing part is tricky. What methods do you use to verify that a backup is actually correct (checksums?) and that no files were corrupted in transit?

    You deploy it on a secondary environment, which is ideally identical to the original one.
    Practice it until you can do it half asleep with your eyes closed.
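
For the integrity part specifically, a checksum manifest recorded at backup time is a cheap first check before the full restore drill. A sketch using sha256sum; the /tmp/demo-* paths are throwaway stand-ins for the live and restored trees:

```shell
#!/bin/sh
# Sketch: verify a restored tree against a checksum manifest taken at backup time.
set -eu

src=/tmp/demo-src          # stands in for the live data
restore=/tmp/demo-restore  # stands in for the restored copy
mkdir -p "$src" "$restore"
printf 'important data\n' > "$src/a.txt"

# 1. At backup time, record a checksum for every file in the live tree.
( cd "$src" && find . -type f -print0 | xargs -0 sha256sum ) > /tmp/demo-manifest.sha256

# Simulate a restore (in real life this comes out of your backup tool).
cp "$src/a.txt" "$restore/a.txt"

# 2. After restoring, verify the restored tree against the manifest.
( cd "$restore" && sha256sum -c /tmp/demo-manifest.sha256 )
```

A clean run only proves the bytes match; it doesn't prove the app actually starts from the restored data, which is why the secondary-environment drill still matters.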

    Thanked by 4: _MS_, LTniger, Liso, FrankZ
  • @LTniger said:

    @MS said: Schedule and test backups

    Scheduling backups is clear, but the testing part is tricky. What methods do you use to verify that a backup is actually correct (checksums?) and that no files were corrupted in transit?

    @luckypenguin said:

    @LTniger said: Scheduling backups is clear, but the testing part is tricky. What methods do you use to verify that a backup is actually correct (checksums?) and that no files were corrupted in transit?

    You deploy it on a secondary environment, which is ideally identical to the original one.
    Practice it until you can do it half asleep with your eyes closed.

    I got lazy and decided that the best way for me to manage backups is to run EVERYTHING as either a KVM guest or an LXC container. Backups and restores are simple, and as a bonus that makes quick migration easy too.

    In my case I use Proxmox to manage the backups (saved to Google Drive) and can restore on any other Proxmox node by restoring that backup and hitting start.
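
For reference, the CLI side of that workflow is short. A sketch assuming stock Proxmox tooling; the IDs, storage name, and dump filenames are made-up examples:

```
# Back up guest 100 in snapshot mode with zstd compression to the 'local' storage
vzdump 100 --mode snapshot --compress zstd --storage local

# Restore the dump as a new KVM guest with ID 101 on any node, then start it
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst 101
qm start 101

# LXC containers restore with pct instead of qmrestore
pct restore 101 /var/lib/vz/dump/vzdump-lxc-100-2024_01_01-00_00_00.tar.zst
```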

    Thanked by 1: _MS_
  • Happy?

    Thanked by 1: _MS_
  • ralf Member

    I've often wondered about the practicalities of doing a full restore on a live system, just because it happens so infrequently that it's always a manual process, and stuff always goes wrong.

    One of the reasons I kept my crappy old KS-1 around so long was that I had build environments on there that hadn't been touched for years. One of them, for instance, was a complicated FPGA synthesis setup and assembler for the embedded CPU etc. that I last built in 2013 when I was still running Debian 7. But I tried a borg restore of the necessary subdirectories onto a clean VM and it worked, so piece by piece I've been testing legacy projects on a clean VM with the latest Debian and making sure they can still be restored and built. It was still a bit scary dd'ing urandom over the drive in preparation for not renewing that machine, but I was remarkably at peace with it!
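
That borg restore drill is worth writing down. A sketch with made-up repo and archive names:

```
# See what archives exist, then extract just one project's subtree on the clean VM
borg list /path/to/repo
cd /restore/target
borg extract /path/to/repo::daily-2024-01-01 home/ralf/fpga-project

# borg can also verify repository consistency, including the data chunks themselves
borg check --verify-data /path/to/repo
```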

    But part of the mindset shift of that is that I've shifted my process from environments that I try to back up to having environments that can be easily recreated. Since I got a better dedi and a couple of KVMs with nested virtualisation, I've been setting that up with a VM per function, and making sure that all the steps to getting that image are easy to recreate from scratch. Creating a new worker VM for me now is just running make, and all the stuff that drives it is in a git repo, so I can easily replicate it on a new host. If I do any manual local configuration on a VM, I religiously record everything I did in a text file in my "hosting config" repo.

    So increasingly now, the only thing that really needs backing up is my private git repos. Of course, I still do daily borg backups of all my VMs, just in case I've missed something, but I'm not sure I'll ever use them apart from grabbing the odd file here and there if I've done something stupid.

    Anyway, TLDR: happy sysadmin day!

    Thanked by 3: _MS_, dahartigan, FrankZ
  • deqi Member

    Have a great sysadmin day, folks. Good thing I'm on vacation, just for that one single day.

    Thanked by 2: _MS_, ralf
  • Mumbly Member

    Where are the deals?!

  • LTniger Member

    @luckypenguin said:

    @LTniger said: Scheduling backups is clear, but the testing part is tricky. What methods do you use to verify that a backup is actually correct (checksums?) and that no files were corrupted in transit?

    You deploy it on a secondary environment, which is ideally identical to the original one.
    Practice it until you can do it half asleep with your eyes closed.

    Unfeasible due to the volume of backed-up data and prod systems. Prod costs $10,000/month. Adding another $10k would be ridiculous.

  • @LTniger said: Prod costs $10,000/month. Adding another $10k would be ridiculous.

    If you run a $10k/mo prod and are still unsure about the quality/integrity of your backups,
    you are doing something wrong. With adequate planning and scalability, that $10k should
    consist of $5k clusters in HA that load-balance each other. That also gives you a way to
    test backups during off-peak hours.

  • Yayyy!! Let's get everything down!

    Thanked by 1: _MS_
  • @LTniger said:

    @luckypenguin said:

    @LTniger said: Scheduling backups is clear, but the testing part is tricky. What methods do you use to verify that a backup is actually correct (checksums?) and that no files were corrupted in transit?

    You deploy it on a secondary environment, which is ideally identical to the original one.
    Practice it until you can do it half asleep with your eyes closed.

    Unfeasible due to the volume of backed-up data and prod systems. Prod costs $10,000/month. Adding another $10k would be ridiculous.

    What's ridiculous is that this was your showstopper, rather than how much time, money, productivity, and opportunity is lost when that server shits the bed.

    Thanked by 1: FrankZ
  • terrahost Member, Patron Provider

    @Mumbly said:
    Where are the deals?!

    In our slack community ;)

    Thanked by 1: FrankZ
  • @terrahost said: In our slack community

    Did you have any downtime yesterday? I couldn't access my VM for around 30 minutes. Terrahost was not accessible either. When the VM came back online, I checked the uptime and saw that the node had been restarted.

  • LTniger Member

    @Boogeyman said:

    @terrahost said: In our slack community

    Did you have any downtime yesterday? I couldn't access my VM for around 30 minutes. Terrahost was not accessible either. When the VM came back online, I checked the uptime and saw that the node had been restarted.

    Where is the work order number?

  • Boogeyman Member
    edited July 30

    @LTniger said: Where is work order number?

    There shouldn't be a work order for this, and I don't feel entitled to any response. I'm not losing millions :#

  • LTniger Member

    @Boogeyman said: Not losing millions

    Then you are not in the lowend club :O omg!
