Very Simple Backup Script

djvdorpdjvdorp Member
edited February 2012 in General

Hi,

This morning when I woke up, a thought came to mind: a Very Simple Backup Script. It's for one of my dedis running on hardware RAID, so the backup could just stay on the same server.

What I was thinking was to make a cron script (.sh, I guess) which would run mysqldump and tar the public_html directory once a day, and name both files after the current date (e.g. public_html-01-02-2012.tar.gz).

What are your thoughts on this?

Regards, Daniel

I use http://tuxlite.com to configure all my VPSes and I love it!

Comments

  • NickMNickM Member
    edited February 2012

    Never, never, never, never, never store your only backups on the same server. I actually use rsync for my backups. I use the following script, which basically makes versioned backups of my entire /var/www directory, and hard links files that haven't changed to save space. It requires you to set up key-based authentication on your backup server, of course. I run it from a cronjob every night.

    #!/bin/bash
    date=`date "+%Y-%m-%dT%H_%M_%S"`
    user=nick
    ipaddr=127.0.0.1
    rsync -e 'ssh -i /root/.ssh/backup.id_rsa' -az \
      --delete \
      --delete-excluded \
      --link-dest=/home/$user/backups/websites/current \
      /var/www $user@$ipaddr:/home/$user/backups/websites/incomplete_$date \
      && ssh -i /root/.ssh/backup.id_rsa $user@$ipaddr \
      "mv /home/$user/backups/websites/incomplete_$date /home/$user/backups/websites/$date \
      && rm -f /home/$user/backups/websites/current \
      && ln -s /home/$user/backups/websites/$date /home/$user/backups/websites/current"
    

    Lead Developer - HostGuard Control Panel

  • I wrote a little PHP script a while ago that runs from a cron every 6 hours or so; it zips up my whole www directory, renames the archive with a timestamp, and uploads it to a remote FTP directory.

    I can send it to you if you want

    Thanked by 1djvdorp
  • @NickM

    You are THE man. I am looking for something like this. If you Google something like "versioning backup system" you get complicated stuff. If this does what you say, wonderful =) Most of my content doesn't change, so the backup process will be very fast.

    Some time ago someone mentioned a backup system with versions, but I can't remember what it's called.

  • Note that my script doesn't back up any databases, so you'll probably want to add something to do that if you need that functionality.

  • KuJoeKuJoe Member
    edited February 2012

    Took @NickM's script and made it a bit simpler (with database backup):

    #!/bin/bash
    date=`date "+%Y-%m-%dT%H_%M_%S"`
    mysqldump --opt -h localhost --user=USER --password=PASS DATABASE | gzip > /backup/mysql_$date.sql.gz
    tar -zcf /backup/home_$date.tar.gz /home
    rsync -avz -e "ssh -i /root/.ssh/SSHKEY" /backup/ USER@HOSTNAME:/backup

    Thanked by 2djvdorp yowmamasita
  • And where is the hardlinking and so? :(

  • @yomero said: And where is the hardlinking and so? :(

    None. This is a complete backup of a website/database; I don't see the point in conserving space since it's so plentiful. The biggest website I've ever backed up was only 50GB, and 99% of them are less than 1GB.

  • My VPS at SecureDragon has only 15GB...

  • @yomero said: My VPS at SecureDragon has only 15GB...

    That's not a lot of data to back up. ;)

  • And it isn't a lot of space to create backups in...

  • KuJoeKuJoe Member
    edited February 2012

    @yomero said: And it isn't a lot of space to create backups in...

    It all depends on how you look at it. If you're backing up a website it's plenty of space. If you're backing up a fileserver then it's not a lot. My advice is if you're going to back something up, make sure your backup server has more space than the server you're backing up. ;)

  • I'm happy using rsnapshot with a custom script to dump databases and erase the previous day's dumps. It has everything I need: hard links and easy central management for all my VPSes.

  • djvdorpdjvdorp Member
    edited February 2012

    @NickM said: Never, never, never, never, never store your only backups on the same server. I actually use rsync for my backups. I use the following script, which basically makes versioned backups of my entire /var/www directory, and hard links files that haven't changed to save space. It requires you to set up key-based authentication on your backup server, of course. I run it from a cronjob every night.

    I know backups on the same server are not good, but the point is that the www dir is like 400 MB, which is quite a lot to back up. The server itself has hardware RAID mirroring, and I can partition it how I want. Offsite is still better, though.

    @titanicsaled said: I wrote a little PHP script a while ago that runs from a cron every 6 hours or so; it zips up my whole www directory, renames the archive with a timestamp, and uploads it to a remote FTP directory.

    I can send it to you if you want

    You made me curious, can you please send it to me?

    @camarg said: I'm happy using rsnapshot with a custom script to dump databases and erase the previous day's dumps. It has everything I need: hard links and easy central management for all my VPSes.

    I know, but rsnapshot is quite a bit of work to set up, so I thought it might be possible to do it more easily.

  • Thanks all for your suggestions, scripts and time. Will make sure to try some of these suggestions out, as I really might need it one day :)

  • KuJoeKuJoe Member
    edited February 2012

    @djvdorp said: I know backups on the same server are not good, but the point is that the www dir is like 400 MB, which is quite a lot to back up.

    Why not back it up to your home network? 400MB is not a lot, and if you're concerned about bandwidth, don't use the date in the file name; instead use MON, TUE, WED, THU, FRI, SAT, and SUN for the days of the week. This gives you 7 days' worth of backups, and rsync will only transfer the bits that change, so you only download the full 400MB once (download it once, then copy/rename it for the other days).

    Alternatively, you can back up 400MB of data to a 10GB VPS (assuming 1GB for the OS) 22 times, more if you can compress that 400MB. Backing up 400MB of data over a 100Mbps link wouldn't take long at all.
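    A minimal sketch of this day-of-week scheme (paths are illustrative; GNU date's %a gives the abbreviated weekday name):

```shell
#!/bin/sh
# Day-of-week backup naming: %a cycles through Mon..Sun, so at most 7
# backup files ever exist and each run overwrites last week's file.
day=$(date +%a)                      # e.g. "Tue"
backup="/backup/home_${day}.tar.gz"  # hypothetical destination path
echo "would write $backup"
```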

    Thanked by 1djvdorp
  • I use:

    #!/bin/bash
    cd /var/backup
    rm -f backup.10.tgz
    mv backup.9.tgz backup.10.tgz
    mv backup.8.tgz backup.9.tgz
    mv backup.7.tgz backup.8.tgz
    mv backup.6.tgz backup.7.tgz
    mv backup.5.tgz backup.6.tgz
    mv backup.4.tgz backup.5.tgz
    mv backup.3.tgz backup.4.tgz
    mv backup.2.tgz backup.3.tgz
    mv backup.1.tgz backup.2.tgz
    tar czvpf backup.1.tgz files  # -z compresses, matching the .tgz name
    rsync -avz -e "ssh" host1:/var/www/ /var/backup/files/host1/www
    rsync -avz -e "ssh" host2:/var/www/ /var/backup/files/host2/www

    FreeVPS.us - The oldest post to host VPS provider
  • @dmmcintyre3 said: I use

    Awesome. The simplest solutions are sometimes the most overlooked.

  • For MySQL I use http://sourceforge.net/projects/automysqlbackup/
    Thanked by 3yomero tux yowmamasita
  • @dmmcintyre3 said: For MySQL I use http://sourceforge.net/projects/automysqlbackup/

    How did you manage to not re-invent the wheel?

    Hostigation High Resource Hosting - SolusVM OpenVZ/KVM VPS
  • Just use rsnapshot on the backup server and pull the files from the other hosts to it. It is very easy to set up.

    New free DDNS service skipIP.com | Invites @ blog.srvbox.com | Availability status of my servers.
  • @djvdorp

    Forgot I had it in my GitHub repo; download it from there.

    https://github.com/titanicsaled/backuphp

  • InfinityInfinity Retired Staff

    @titanicsaled said: https://github.com/titanicsaled/backuphp

    I prefer rsync to be honest; FTP is a pain in the ass.

    I am a troll (;

  • I just make the date command output the day of the week and use that in the filename, so there are only ever 7 backups, which get rsynced off the box.

    mysqldump -u user -ppass dbname | gzip -9 > dbname_date +%a.sql.gz

    same for tarball of the apache folder.

  • eh.. that got screwed up.. but you get the idea.

  • @charliecron said: I just make the date command output the day of the week and use that in the filename, so there are only ever 7 backups, which get rsynced off the box.

    How would you get the date command's output into the filename like that? (noob mode)

  • Just enclose the date command in backticks (usually the key next to your 1 key).

    The board erased the backticks from my earlier post.

    Thanked by 1djvdorp
  • @charliecron enclosing text in backticks is the markdown equivalent of HTML code tags. Simply escape them with a backslash like I did below :)

    mysqldump -u user -ppass dbname | gzip -9 >dbname_\`date +%a\`.sql.gz

    mysqldump -u user -ppass dbname | gzip -9 >dbname_`date +%a`.sql.gz
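    As an aside, the POSIX $(...) form of command substitution does the same job as backticks, nests cleanly, and contains no backtick characters for the board to eat:

```shell
#!/bin/sh
# $(...) is equivalent to backtick command substitution but nestable.
# "dbname" is a placeholder, as in the examples above.
file="dbname_$(date +%a).sql.gz"
echo "$file"
```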

  • ah nice. Thanks Kuro!

  • Aren't backups fun? :)

    The best backup system (I think) is the one you are familiar with and have confidence in.

    Don't forget to test or examine your backups from time to time; it's good to know they're actually valid. :)

    Thanked by 1yomero
  • @sleddog said: Don't forget to test or examine your backups from time to time; it's good to know they're actually valid. :)

    100 times this. One time one of my backups was corrupted when I actually needed it, so I had to use an older version of it.

    Hello, World.

  • Hi djvdorp

    First thing: please do not ever treat hardware RAID as a backup; you will cry at some point if you do! Hardware RAID should be seen as OS rebuild protection at most. That means if your server drops a disk, you should in most cases be able to keep providing the service while you fix the problem.

    If you want to use RAID as your main form of data protection, it is possible but not very cost effective.

    I hope this does not get me into trouble, but I think the restraint period has passed!

    The best way is to have the data live in three places and to make one of the copies read-only. So you should have round-robin access and have one copy be stateful. This is the idea behind one of the high-end "unified email management" solutions.

    Must say the idea is not that far out: basically you keep 3 copies of everything in one place ("so it's a three-disk mirror across three storage servers") and do the same on the other side. At current network speeds you can have a system that is almost perfect, give or take a couple of milliseconds. So you will have 2 cluster heads with 2 writable copies and one read-only copy on one side, and the same on the other side. Then you will adopt a grey-listing policy that... OoOPPssss.

    Sorry, almost got myself killed.

    Have a look at http://affa.sourceforge.net/

    It does snapshot-style backups that should be good for almost-realtime shots; just do a dump before the snap and you will have a good solution.

    Sorry, I am rambling on because there is a rerun of Oprah distracting me ("what a gift to the world") and I am wondering if I should go back to VMware pre-sales consulting. LOL

    I think the best thing Harpo ever did is Dr. Phil. He really has an open mind, and so does that Doc Oz, I think. He said that a good quality tomato sauce is a must every day, and since then my better half has beaten me with that stick every day. Thanks, doc.

    Just got this from Groupon,

    Much like giving an impromptu motivational speech, romantic gestures can either turn out perfectly or fall as flat as sling-shot crepe.  While staging a stadium rock serenade complete with roses in your teeth or spelling their name in ...

    Discount: 50%

    I wonder what they pay the copywriter?

    YourVZ.com

    Thanked by 1djvdorp
  • For my backups I use git. It's fast and easy. It only has two downsides: I don't know how to delete old versions (but I'm sure it's possible, I just never Googled it; it's not a big enough problem to justify the effort), and I had a problem with the size of the repository after some time. I was doing a commit every 30 minutes with very small changes to a log file, and my 200 MB of files became 8 GB on my Amazon EC2 instance, too much for the free tier. The command git gc takes care of that, but it can take time (20 minutes in my case).
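    This git workflow can be sketched end-to-end on a throwaway repository (everything happens in a temp dir; the -c flags supply a commit identity so the sketch runs anywhere git is installed):

```shell
#!/bin/sh
# Git-as-backup sketch: commit two "backup" snapshots, then repack.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo "v1" > site.html
git add -A
git -c user.name=backup -c user.email=backup@localhost commit -qm "backup 1"
echo "v2" > site.html
git add -A
git -c user.name=backup -c user.email=backup@localhost commit -qm "backup 2"
git gc --quiet                        # repack loose objects to save space
count=$(git rev-list --count HEAD)    # number of stored versions
echo "$count versions stored"
cd /
rm -rf "$tmp"
```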

  • @dwild git is awesome, I use it myself. I personally don't add/commit large files that don't change, such as .pdfs, .wmas, etc. I love that it only stores the changes; it keeps the repository small.

  • After all these posts I still don't see that snapshot-based system we talked about some time ago. I think it starts with an 'A'.

  • @yomero said: After all these posts I still don't see that snapshot-based system we talked about some time ago. I think it starts with an 'A'.

    The http://affa.sourceforge.net/ mentioned above maybe?

    Thanked by 1yomero
  • @dmmcintyre3 and @KuJoe, you can do better than that! Since your script starts with

    mv backup.1.tgz backup.2.tgz
    tar cvpf backup.1.tgz files
    rsync -avz -e "ssh" host1:/var/www/ /var/backup/files/host1/www

    my guess is that you're doing a backup of many websites and keeping the most recent copy untarred plus 10 full tarred archives.

    And that's exactly what I did. Now I've found something better:

    rm -rf /var/old/backup.15
    mv /var/old/backup.14 /var/old/backup.15
    ...
    mv /var/old/backup.1 /var/old/backup.2
    cp -al /var/backup /var/old/backup.1

    I bet this uses just a fraction of the disk space of your backup set, even though I keep 15 copies and you keep just 10.
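    The effect of cp -al is easy to verify on a scratch directory: the "copy" shares inodes with the original, so an unchanged file is stored once no matter how many backup sets name it (GNU cp assumed for -al):

```shell
#!/bin/sh
# Hard-link copy demo: cp -al recreates the directory tree but links the
# files, so unchanged data costs no extra space per backup set.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/live"
echo "unchanged data" > "$tmp/live/file.txt"
cp -al "$tmp/live" "$tmp/backup.1"
# -ef tests whether two paths refer to the same inode:
if [ "$tmp/live/file.txt" -ef "$tmp/backup.1/file.txt" ]; then
  result="same inode"
else
  result="different files"
fi
echo "$result"
rm -rf "$tmp"
```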

  • @djvdorp said: The http://affa.sourceforge.net/ mentioned above maybe?

    We have a winner =)

    What do you think? Too complex?

  • @yomero said: We have a winner =)

    What do you think? Too complex?

    It kind of reminds me of rsnapshot, which I used in the past. It was some work to set up initially, but kinda awesome after that :)

    Does anyone know the advantages of or differences between rsnapshot and Affa?

  • @marrco said: I bet this uses just a fraction of the disk space of your backup set, even though I keep 15 copies and you keep just 10.

    I'm a bit confused... how is it saving space exactly?

    Thanked by 1djvdorp
  • OK, after ~40 posts, aren't we just doing the same thing rsnapshot does? rsnapshot is too simple to set up not to use it. Combine it with some simple mysqldump shell scripts and you've got a fully functioning backup system in about 15 minutes' worth of work.

    Why reinvent the wheel? Is rsnapshot not working or missing features? I didn't get that from the OP's post.
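    For reference, the rsnapshot-plus-mysqldump combination really is only a few config lines. A minimal sketch, assuming rsnapshot's stock conventions (fields in rsnapshot.conf must be tab-separated; the paths and dump-script name here are illustrative):

```
# /etc/rsnapshot.conf excerpt -- fields must be separated by TABs
snapshot_root	/var/backups/rsnapshot/
retain	daily	7
retain	weekly	4
backup	/var/www/	localhost/
# rsnapshot runs this hook and snapshots whatever it writes to its cwd:
backup_script	/usr/local/bin/dump-databases.sh	mysql/
```

    A nightly cron entry running `rsnapshot daily` then drives the whole thing, and `rsnapshot -t daily` dry-runs the config first.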

    Signatures are to identify who I am. I'm me. Who the hell are you?

  • @KuJoe the trick is the cp -al part: you're just creating links to the files already present. In the sample posted you are backing up /var/www, so it's safe to assume that most of your files don't change between backups.

    Thanked by 2djvdorp yomero
  • Backups - LEB's best friend!

    dog lover

  • @marrco said: @KuJoe the trick is the cp -al part: you're just creating links to the files already present. In the sample posted you are backing up /var/www, so it's safe to assume that most of your files don't change between backups.

    Thanks for explaining it ¬_¬ That's the idea: not just endless full copies of copies of copies...

  • @marrco Missed that line. Thanks.

  • Duplicity is hard to beat when it comes to incremental and secure backups. http://trick77.com/2010/01/01/how-to-ftp-backup-a-linux-server-duply/

  • Sorry to bump an old topic, but AutoMySQLBackup is a very handy tool for per-database or full MySQL backups into a directory, which you can then rsync to your remote server.

    http://sourceforge.net/projects/automysqlbackup/

  • @marrco said: I bet this uses just a fraction of the disk space of your backup set, even though I keep 15 copies and you keep just 10.

    I realize this is an old thread that got bumped, but I saw this comment and wanted to post my take on it:

    I REALLY hate the concept of using hard links for backup purposes. Yes, you're right that your 15 hard-linked versions will likely take up far less space than @dmmcintyre3's 10 tar/gzipped versions, but consider this:

    You have a REALLY_IMPORTANT.BIN file on your server. It never changes, so it gets hard linked 15 times. Yay for space savings!

    Now, in the (very unlikely) event that bit flips occur in such a way that the HD's error correction neither sees nor corrects the error, you have 15 hard-linked versions of the same corrupt file. Boo for space savings!

    Personally I will ALWAYS take a small number of true full backups over a large number of hard-linked backups. Sure, the above scenario may be so unlikely that I never encounter it in my lifetime, but why tempt Murphy? :)

    Thanked by 1KuJoe
  • One way to alleviate the risk of random data corruption (e.g., bit flips) is to store hashes of your data, either block-level or file-level.

    BackupPC, for instance (which is what I use), does this, along with file-level deduplication via hard links.

    The best advice is not to spend too much time trying to find the best backup system; a quick-and-dirty backup is better than no backup at all!
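    A minimal sketch of file-level hashing with GNU coreutils' sha256sum (the filename is borrowed from the example above); re-verifying the manifest later catches exactly the silent bit-flip scenario described there:

```shell
#!/bin/sh
# Write a checksum manifest next to the backup, then verify it later;
# a flipped bit in the file would make the re-check fail.
set -e
tmp=$(mktemp -d)
echo "precious payload" > "$tmp/REALLY_IMPORTANT.BIN"
( cd "$tmp" && sha256sum REALLY_IMPORTANT.BIN > manifest.sha256 )
verify=$(cd "$tmp" && sha256sum -c manifest.sha256)
echo "$verify"
rm -rf "$tmp"
```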

    Best,
