Hourly mySQL Backups
Bella Member
edited March 2014 in Help

Hi.

I am currently looking for a way to take automatic hourly MySQL backups and daily backups of /var/www/.

Can someone point me in the right direction?

I am still a newbie when it comes to backups.

I would like to store both a local copy on the server itself and a remote copy on another server.

Thanks.

By the way, it would be great if the file names indicated the date/time each backup was taken.

Comments

  • AlexanderM Member, Top Host, Host Rep

    MySQL Dump, then rsync.
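    A minimal sketch of that dump-then-rsync approach, with the date/time in the file name as the OP asked. The database name, user, password, paths, and remote host below are all placeholders, not anything from this thread:

```shell
#!/bin/sh
# Minimal dump-and-ship sketch -- DB name, credentials, paths,
# and the remote host are placeholders; adjust for your setup.
DB_NAME="whmcs"
DB_USER="backup"
BACKUP_DIR="/root/backups/mysql"
STAMP=$(date +%Y-%m-%d_%H%M)      # date/time in the file name, e.g. 2014-03-15_1400
FILE="$BACKUP_DIR/${DB_NAME}_${STAMP}.sql.gz"

mkdir -p "$BACKUP_DIR"
# --single-transaction gives a consistent snapshot for InnoDB tables
mysqldump --single-transaction -u "$DB_USER" -p'secret' "$DB_NAME" | gzip > "$FILE"

# ship the local copy to the remote server (SSH key auth assumed)
rsync -az "$FILE" backupuser@remote.example.com:/backups/mysql/
```

    Run it from cron; the timestamp in $STAMP keeps every hourly file distinct, and you keep a local copy plus the rsynced remote copy.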

  • Are the databases large? MySQL backups can have quite a hit on performance. If you need hourly backups, replicating the MySQL server to a slave may be a better option.

  • Radi Host Rep, Veteran

    Google for how to do it with a cron job.
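    For the cron side, assuming the dump and /var/www steps live in scripts of your own (the script paths below are hypothetical), it is just two crontab entries:

```shell
# crontab -e
# m  h  dom mon dow  command
0    *  *   *   *    /root/bin/mysql_backup.sh >/dev/null 2>&1   # hourly DB dump
30   3  *   *   *    /root/bin/www_backup.sh   >/dev/null 2>&1   # daily /var/www backup
```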

    @imtiax said:
    I am currently looking for a way to take automatic hourly MySQL backups and daily /var/www/ backups. [...] I would like to store both a local copy on the server itself and a remote copy on another server.

    This script is easy to use: http://sourceforge.net/projects/automysqlbackup/

    however, as @mpkossen said, MySQL backups can have a performance hit on a large database, and it may be necessary to replicate to a slave and take the backups from the slave.

  • Can't you just tar.gz /var/www/ and /var/lib/mysql/ then upload them?

  • Bella Member

    @mpkossen said:
    Are the databases large? MySQL backups can have quite a hit on performance. If you need hourly backups, replicating the MySQL server to a slave may be a better option.

    It's currently 30 MB. It's my WHMCS database, and I get sales every 5-10 minutes.

    My site was recently hacked and they deleted everything, so I had to roll back my database by one day, which caused a lot of trouble. So I want to take hourly backups in case it happens again in the future.

  • tchen Member

    @hostnoob said:
    Can't you just tar.gz ... and /var/lib/mysql/ then upload them?

    No.

  • Makenai Member
    edited March 2014

    @tchen said:
    No.

    Why so?

  • tchen Member

    @Makenai said:
    Why so?

    Consistency. If you shut down the server first, then fine, copy it. If it's MyISAM and you do a FLUSH TABLES WITH READ LOCK, then copy, then UNLOCK TABLES, then fine. Anything else and you're playing Russian roulette.

    These scripts people mention basically wrap an attempt to get the database into a consistent state somewhere on disk so that you can do the copy. Naked tarballing, because it takes a certain amount of time, leaves a window open while the database is still up and writing. tar will happily archive away, thinking it's on records 1-10000, while the database decides to reshuffle some pages during the write. The sad thing is, you'll never really know whether you actually got a good backup or not.
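    To make the lock-then-copy dance concrete, here is a deliberately fragile sketch of the MyISAM case described above (credentials and paths are placeholders). The lock is only held while the background mysql session is alive, so the SLEEP must outlast the tar, which is exactly why the wrapper scripts people link to are the safer route:

```shell
#!/bin/sh
# Fragile MyISAM lock-and-copy sketch -- illustration only.
# The lock lives in a background mysql session; it is released
# as soon as that session ends, so SLEEP must outlast the tar.
mysql -u root -p'secret' -e \
  'FLUSH TABLES WITH READ LOCK; SELECT SLEEP(60);' &
LOCK_PID=$!
sleep 2                                # give the lock a moment to take hold
tar czf /root/backups/mysql-data.tar.gz -C /var/lib mysql
kill "$LOCK_PID" 2>/dev/null           # end the session, releasing the lock
```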

  • pbalazs123 Member
    edited March 2014

    @imtiax I have a backup script that I'm using on all of my servers. I will share it with you tomorrow. It is based on the mysqldump command, so it's safe.

  • If your tables are MyISAM, you can use mysqlhotcopy to make a local backup -- faster and less of an impact than mysqldump. Then rsync the hotcopy to a remote location and build redundancy there.

  • I agree with @mpkossen - look into a master-slave replication setup if you want to do it hourly.

  • tchen Member

    There's also the option of mysql-zrm, which can do it hourly as well, using the binlogs in incremental mode. Not the prettiest thing to recover from, but it works.

  • Tarring up the DB folder is risky at best, and would only really work with MyISAM tables; anything that uses ibdata would likely become corrupt.

    Master-slave makes the most sense. Doing a MySQL dump of only the latest primary keys wouldn't be too intensive either, but it's less practical.

  • ricardo said: master-slave makes the most sense. doing a mysql dump of the latest primary keys wouldn't be too intensive either but less practical.

    I am no expert, but this method makes the most sense. Dumping a backup from a slave will have no impact on the production database. Be sure to test integrity on the slave frequently (daily at least), and test your backups!

    There is percona.com/software/percona-xtrabackup, which is able to do a non-locking hot backup. You should check it out.

  • A slave was suggested up there ^^^^

    Just wanted to comment that simply copying the MySQL data folder can be dangerous.

  • 30 MB is too small to worry about; I don't think his site is a busy one. He needs hourly backups for peace of mind because the data (sales) is critical. There's probably no need for master-slave here.

    Thanked by: mpkossen
  • tchen Member

    Slaves are a tad dangerous if you're not monitoring properly too.

  • @tchen said:
    Slaves are a tad dangerous if you're not monitoring properly too.

    May I know why it's dangerous if it isn't monitored properly?

    Thanked by: daxterfellowes
  • @tchen said:
    Slaves are a tad dangerous if you're not monitoring properly too.

    I'd like to know why it's dangerous too. I have been doing that for months without any problems.

  • tchen Member

    @xDragonZ said: May I know why it's dangerous if didn't monitor properly.

    Especially if you've set it up as statement-based replication, not all statements get replicated exactly. If a statement fails differently on the slave, it terminates the replication thread and just sits there after writing to the log. Row-based replication is a bit better, although don't ever use it with MyISAM, as you can corrupt the slave if it dies abruptly. Etc, etc... hire a DBA :P
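    A minimal cron check along these lines will at least tell you when a replication thread has died (the monitoring user and password are placeholders; SHOW SLAVE STATUS is the pre-5.7-era syntax the thread is using):

```shell
#!/bin/sh
# Alert when either replication thread has stopped on the slave.
# Monitoring user and password are placeholders.
STATUS=$(mysql -u monitor -p'secret' -e 'SHOW SLAVE STATUS\G')
echo "$STATUS" | grep -q 'Slave_IO_Running: Yes'  || echo "ALERT: IO thread stopped"
echo "$STATUS" | grep -q 'Slave_SQL_Running: Yes' || echo "ALERT: SQL thread stopped"
```

    Pipe the output to mail (or anything that pages you) and you avoid the silent-dead-slave trap mikho mentions below.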

    Thanked by: xDragonZ
  • mikho Member, Host Rep

    If you go the replication path, you need to monitor that the replication is really happening, or you would be backing up a never-changing database.
    With a 30 MB database I would dump it locally and then send it somewhere else. That's what I do on my WHMCS installation.

    Thanked by: jcaleb
  • shyaminayesh Member
    edited March 2014

    https://github.com/shyaminayesh/SQLBackup/blob/master/backup.sh

    I wrote a script to take MySQL backups for my site 3 months ago. Here it is; you can configure it and set a cron job to run it every hour, and it will automatically upload your SQL backup to a remote FTP server :)

  • haphan Member
    edited March 2014

    shyaminayesh said: i write a script for get mysql backups for my web 3 months ago. here is my script you can configure this and set cron to every 1 hours. so it will automatically upload your SQL Backup to Remote FTP :)

    You should also add --single-transaction to your mysqldump command to ensure integrity.

    Personally, I had an issue with how long I should store all the backups. Keeping the last 30 days of hourly backups can take quite a lot of storage (30 MB * 24 * 30 = ~22 GB).

    My solution is to store backups in separate pools. Currently I organize them into 4 pools:

    • hourly pool: db.h01.sql.gz,db.h02.sql.gz (for mission critical db, rotate for 24hours)
    • daily pool: db.20140301.sql.gz (keep last 30 files)
    • monthly pool: db.201403.sql.gz (keep last 12 files)
    • yearly pool: keep last 3 years for audits

    --> so the total storage would be ~2 GB.
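    The pool idea above can be sketched for the hourly and daily tiers like this (database name and paths are made up; the monthly and yearly tiers follow the same prune pattern):

```shell
#!/bin/sh
# Pool-style rotation sketch -- database name and paths are placeholders.
BASE="/root/backups"
DB="whmcs"
mkdir -p "$BASE/hourly" "$BASE/daily"

# Hourly pool: fixed names db.h00 ... db.h23 overwrite themselves,
# so the pool rotates automatically over 24 hours.
HOURLY="$BASE/hourly/$DB.h$(date +%H).sql.gz"
mysqldump --single-transaction "$DB" | gzip > "$HOURLY"

# Daily pool: dated names, pruned to the newest 30 files.
cp "$HOURLY" "$BASE/daily/$DB.$(date +%Y%m%d).sql.gz"
ls -1t "$BASE/daily/$DB".*.sql.gz | tail -n +31 | xargs -r rm -f
```

    The fixed hourly names cap that pool at 24 files with no pruning logic at all; only the dated pools need the ls/tail/rm step.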

  • A master/slave setup will not help for the scenario mentioned: if the master DB is hacked, so is the slave once the change replicates.
