Very Simple Backup Script

Comments

  • Hi djvdorp

    First thing, please do not ever treat hardware RAID as a backup; you will cry at some point if you do!
    Hardware RAID should be seen as OS rebuild protection at most. That means if your server drops a disk, you should in most cases be able to keep providing the service while you fix the problem.

    If you want to use RAID as your main form of data protection, it is possible, but not very cost effective.

    I hope this does not get me into trouble, but I think the restraint period has passed!

    The best way is to have the data live in three places and to make one of the copies read-only.
    So you should have round-robin access and have one copy be stateful. This is the idea behind one of the high-end "unified email management" solutions.

    Must say the idea is not that far out: basically you keep 3 copies of everything in one place ("so it's a three-disk mirror across three storage servers") and do the same on the other side. At current network speeds you can have a system that is almost perfect, give or take a couple of milliseconds. So you will have 2 cluster heads with 2 writable copies and one read-only copy on one side, and the same on the other side. Then you adopt a grey-listing policy that .... OoOPPssss.

    Sorry, almost got myself killed.

    Have a look at http://affa.sourceforge.net/

    It does snapshot-style backups that should be good for almost-realtime shots; just do a dump before the snap and you will have a good solution.
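
    As a minimal sketch of that "dump before the snap" idea (not Affa-specific; the database name and paths here are made-up examples), the pre-snapshot step could be as small as:

    #!/bin/sh
    # dump-before-snap.sh: run just before the snapshot job so a
    # consistent database dump is captured inside the snapshot.
    # Assumes MySQL credentials in ~/.my.cnf; "mydb" is hypothetical.
    DUMP_DIR=/var/backup/dumps
    mkdir -p "$DUMP_DIR"
    mysqldump --single-transaction mydb > "$DUMP_DIR/mydb.sql"
    # ...then let the snapshot tool (Affa, rsnapshot, etc.) run as usual.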

    Sorry, I am mumbling on because there is a rerun of Oprah distracting me ("what a gift to the world") and I am wondering if I should go back to VMware pre-sales consulting. LOL

    I think the best thing Harpo ever did is Dr. Phil. He really has an open mind, and so does that doc, Oz I think.
    He said that a good-quality tomato sauce is a must every day, and my better half has beaten me with that stick every day since. Thanks, doc.

    Just got this from Groupon:

    Much like giving an impromptu motivational speech, romantic gestures can either turn out perfectly or fall as flat as sling-shot crepe.  While staging a stadium rock serenade complete with roses in your teeth or spelling their name in ...

    Discount: 50%

    I wonder what they pay the copywriter?

    Thanked by djvdorp
  • For my backups I use Git. It's fast and easy. It only has two downsides: I don't know how to delete old versions (I'm sure it's possible, I just never Googled it; it's not a big enough problem to justify it), and I had a problem with the size of the directory after some time. I was doing a commit every 30 minutes with very little change to a log file, and my 200 MB of files became 8 GB on my Amazon EC2 instance, too much for the free tier. The command git gc takes care of that, but it can take time (20 minutes in my case).
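
    For anyone wanting to try the same approach, a minimal sketch of such a git-driven backup cron job might look like this (the /var/backup path is a made-up example), including the periodic git gc mentioned above:

    #!/bin/sh
    # git-backup.sh: snapshot the backup directory as a git commit.
    cd /var/backup || exit 1
    git add -A
    git commit -m "backup $(date +%F-%H%M)" >/dev/null 2>&1
    # Repack now and then so the repository does not balloon
    # (the "200 MB became 8 GB" problem described above).
    git gc --auto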

  • @dwild git is awesome, I use it myself. I personally don't add/commit large files that don't change, such as .pdfs, .wmas, etc. I love that it only stores the changes; that keeps the repository small.

  • After all these posts I still don't see that snapshot-based system we talked about some time ago. I think it starts with an 'A'

  • @yomero said: After all these posts I still don't see that snapshot-based system we talked about some time ago. I think it starts with an 'A'

    The http://affa.sourceforge.net/ mentioned above maybe?

    Thanked by yomero
  • @dmmcintyre3 and @KuJoe, you can do better than that!
    Since your script starts with

    mv backup.1.tgz backup.2.tgz      # rotate the previous archive up one slot
    tar cvpf backup.1.tgz files       # tar the on-disk copy from the last run
    rsync -avz -e "ssh" host1:/var/www/ /var/backup/files/host1/www    # then refresh it from the host


    my guess is that you're doing a backup of many websites, and you're keeping the most recent copy untarred plus 10 full tarred archives.

    And that's exactly what I did. Now I found something better:

    rm -rf /var/old/backup.15                   # drop the oldest snapshot
    mv /var/old/backup.14 /var/old/backup.15    # shift each snapshot up one slot
    ...
    mv /var/old/backup.1 /var/old/backup.2
    cp -al /var/backup /var/old/backup.1        # hard-link the current copy into slot 1


    I bet this uses just a fraction of the disk space of your backup set, even though I keep 15 copies and you keep just 10.
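
    Written out as a loop instead of fifteen hand-typed mv lines, the same rotation could look roughly like this; a sketch of the technique only, using the /var/backup and /var/old paths from the post:

    #!/bin/sh
    # rotate.sh: keep 15 hard-linked snapshots of /var/backup in /var/old.
    KEEP=15
    rm -rf "/var/old/backup.$KEEP"          # drop the oldest snapshot
    i=$((KEEP - 1))
    while [ "$i" -ge 1 ]; do                # shift backup.N to backup.N+1
        [ -d "/var/old/backup.$i" ] && mv "/var/old/backup.$i" "/var/old/backup.$((i + 1))"
        i=$((i - 1))
    done
    # cp -al copies the directory tree but hard-links the files, so
    # unchanged files cost almost no additional disk space.
    cp -al /var/backup /var/old/backup.1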

  • @djvdorp said: The http://affa.sourceforge.net/ mentioned above maybe?

    We have a winner =)

    What do you think? Too complex?

  • @yomero said: We have a winner =)

    What do you think? Too complex?

    It kinda reminds me of rsnapshot, which I used in the past.
    It was some work to set up initially, but kinda awesome after that :)

    Does anyone know the advantages or differences between rsnapshot and Affa?

  • KuJoe (Member, Host Rep)

    @marrco said: I bet this uses just a fraction of the disk space of your backup set, even though I keep 15 copies and you keep just 10.

    I'm a bit confused... how is it saving space exactly?

    Thanked by djvdorp
  • Ok, after ~40 posts, aren't we just doing the same thing rsnapshot does? rsnapshot is so simple to set up that there's no reason not to use it. Combine it with some simple mysqldump shell scripts (see the sketch below) and you've got a fully functioning backup system in about 15 minutes' worth of work.

    Why reinvent the wheel? Is rsnapshot not working or missing features? I didn't get that from the OP's post.
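
    For reference, that rsnapshot-plus-mysqldump combination could look roughly like this; the retention numbers, paths, and host are illustrative, not from the thread:

    # /etc/rsnapshot.conf (excerpt) -- note: fields must be TAB-separated
    snapshot_root   /var/cache/rsnapshot/
    retain  daily   7
    retain  weekly  4
    # pull the web root over ssh
    backup  root@host1:/var/www/    host1/
    # run a dump script; rsnapshot archives whatever it writes
    backup_script   /usr/local/bin/mysql-dump.sh    mysql/

    And the dump script itself:

    #!/bin/sh
    # /usr/local/bin/mysql-dump.sh -- rsnapshot runs backup_script
    # commands in a temp directory and snapshots the files they create.
    mysqldump --single-transaction --all-databases > all-databases.sql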

  • @KuJoe the trick is the cp -al part: you're just creating hard links to the files already present, not new copies. In the sample posted you are backing up /var/www, so it's safe to assume that most of your files don't change between backups.
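
    An easy way to convince yourself that cp -al really shares storage is to compare inode numbers; a quick illustration (the file names and inode number shown are arbitrary):

    $ mkdir src && echo hello > src/file
    $ cp -al src snap1                  # archive-mode copy, but with hard links
    $ ls -i src/file snap1/file         # identical inode number = one copy on disk
    1234567 src/file
    1234567 snap1/file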

    Thanked by djvdorp, yomero
  • Backups - LEB's best friend!

  • @marrco said: @KuJoe the trick is the cp -al part: you're just creating hard links to the files already present, not new copies. In the sample posted you are backing up /var/www, so it's safe to assume that most of your files don't change between backups.

    Thanks for explaining it ¬_¬ That's the idea, not just backups and copies of copies piling up like hell...

  • KuJoe (Member, Host Rep)

    @marrco Missed that line. Thanks.

  • Duplicity is hard to beat when it comes to incremental and secure backups. http://trick77.com/2010/01/01/how-to-ftp-backup-a-linux-server-duply/
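
    Not from the linked article, but as a rough illustration of what duplicity usage looks like (GPG-encrypted incrementals over SFTP; the key ID, paths, and host are placeholders):

    # Incremental by default; start a new full chain once a month.
    duplicity --encrypt-key ABCD1234 --full-if-older-than 1M \
        /var/www sftp://user@backuphost//backups/www
    # Prune backup chains older than two months.
    duplicity remove-older-than 2M --force sftp://user@backuphost//backups/www
    # Restore: source and destination are simply swapped.
    duplicity sftp://user@backuphost//backups/www /tmp/restore-www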

  • Sorry to bump an old topic, but AutoMySQLBackup is a very handy tool for making per-database or full MySQL backups into a directory, which you can then rsync to your remote server.

    http://sourceforge.net/projects/automysqlbackup/
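
    The off-site half of that can be a one-line nightly cron job; the paths and host below are examples:

    # mirror AutoMySQLBackup's output directory to the remote box,
    # removing dumps that have been rotated out locally
    rsync -avz --delete -e ssh /var/backup/db/ user@remotehost:/backups/db/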

  • @marrco said: I bet this uses just a fraction of the disk space of your backup set, even though I keep 15 copies and you keep just 10.

    I realize this is an old thread that got bumped, but I saw this comment and wanted to post my take on it:

    I REALLY hate the concept of using hard links for backup purposes. Yes, you're right that your 15 hard-linked versions will likely take up far less space than @dmmcintyre3's 10 tar/gzipped versions, but consider this:

    You have a REALLY_IMPORTANT.BIN file on your server. It never changes, so it gets hard-linked 15 times. Yay for space savings!

    Now, in the (very unlikely) event that bit flips occur in such a way that the HD's error correction neither sees nor corrects the error, you have 15 hard-linked versions of the same corrupt file. Boo for space savings!

    Personally, I will ALWAYS take a small number of true full backups over a large number of hard-linked backups. Sure, the above scenario may be so unlikely that I never encounter it in my lifetime, but why tempt Murphy? :)

    Thanked by KuJoe
  • One way to alleviate the risk of random data corruption (e.g., bit flips) is to store hashes of your data, either block-level or file-level.
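
    As a file-level illustration of that idea (independent of any particular backup tool; the snapshot path is an example):

    # record a checksum manifest alongside each snapshot...
    find /var/old/backup.1 -type f -exec sha256sum {} + > backup.1.sha256
    # ...and verify it later; any file whose hash changed gets reported
    sha256sum --check --quiet backup.1.sha256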

    BackupPC, for instance (which is what I use), does this, along with file-level deduplication via hard links.

    The best advice is not to spend too much time trying to find the perfect backup system; a quick-and-dirty backup is better than no backup at all!

    Best,
