How do you backup?
serverbear
Member
RAID drives in Dedi for redundancy.
Daily rsync backup to a SecureDragon VPS via cron
Daily rsync of a redundant copy to Dropbox
Offsite quarterly snapshots which also get backed up to a Time Machine.
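The cron side of a setup like this can be a couple of entries; a hypothetical sketch (hosts, paths, and the Dropbox folder location are all invented), where the Dropbox copy simply lands in a locally synced Dropbox folder that the daemon then uploads:

```shell
# crontab -e  -- hypothetical daily jobs
# 02:00: rsync everything to the SecureDragon VPS over SSH
0 2 * * * rsync -az --delete /srv/data/ backup@securedragon-vps:/backups/dedi/
# 03:00: second copy into a locally synced Dropbox folder
0 3 * * * rsync -az --delete /srv/data/ /home/backup/Dropbox/dedi/
```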
How about you?
Comments
Why so much backup? I would assume even two of those sources are safe enough.
How do you rsync to dropbox?
git init
git add .
git commit -m "backup"
and then I git pull
For my personal data on my computers:
Realtime sync using Synology Cloud Station (3 copies synced between desktop, laptop, and NAS).
Nightly rsync from NAS to USB drive.
For my personal websites:
Hourly backup of databases.
Daily, weekly, and monthly backup of all data.
Nightly rsync to local backup server.
Nightly rsync to home NAS.
Nightly rsync of local backup server to off-site backup server.
Realtime sync of local backup server to CrashPlan (with versioning).
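A crontab for the hourly-dump and nightly-rsync tiers above might look like this sketch (database name, paths, and hosts are invented; note that `%` has to be escaped in crontab entries):

```shell
# Hourly database dump, timestamped and compressed
0 * * * * mysqldump --single-transaction mydb | gzip > /backups/db/mydb-$(date +\%F-\%H).sql.gz
# Nightly rsync to the local backup server, then to the home NAS
30 1 * * * rsync -az /backups/ backup@local-backup:/srv/backups/site/
30 2 * * * rsync -az /backups/ nas@home-nas:/volume1/backups/site/
```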
For business stuff... I had to build a diagram and documentation just for it to make sense (you can never have too much security/redundancy).
https://github.com/meskyanichi/backup
rsync to multiple servers and my alarm clock (old pentium 4 laptop)
For personal use I'm using GoodSync
For my blog/website, I tar and gzip the files and database and send them to Stylexnetwork + Prometeus + FrontRangeHosting + OVH + Host1Free
8TB RAID5 node for some stuff, a few FDCServers boxes for other things, and a little Seagate USB HDD for local files.
Right now I'm just doing a simple remote FTP of my MySQL database to my FTP server at home every 24 hours.
I was going to use Gmail but the 25MB attachment limit won't work for me. I have a lot of space at home and I am fortunate to have a static IP for now... :P
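That dump-and-FTP cycle can be one cron line with curl doing the upload; a hypothetical sketch (host, database name, and credentials are invented):

```shell
# Daily at 04:00: dump, compress, push over FTP with curl
0 4 * * * mysqldump --single-transaction mydb | gzip > /tmp/mydb.sql.gz && curl -sS -T /tmp/mydb.sql.gz ftp://home.example.com/backups/ --user backup:secret
```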
Nothing compared to you guys though
It's not about who has the most backups, it's about having a backup that works for you. Any backup is better than no backup at all so you're already better than most of the people out there.
Shameless plug of something I wrote when I was the forum admin for GigeNET -> jmd.cc/servers/backups-do-them-yes-now-yes-always/
+1 for dropbox backup.
For my personal iMac I use two 2TB HDs. I run aRsync to back up my Pictures folder from my mobile drive to one of the 2TB drives, and then weekly back that up to the other 2TB using Carbon Copy Cloner. I rotate a third 2TB drive, which I keep at work.
For client backups, websites, etc., I use s3cmd (from s3tools.org) to push to Amazon S3.
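With s3cmd that can be as simple as a sync of the docroots plus a put for one-off dumps; a sketch with an invented bucket name and paths:

```shell
# Mirror client docroots to S3, removing files deleted locally
s3cmd sync --delete-removed /var/www/clients/ s3://my-backup-bucket/clients/
# One-off upload of a database dump
s3cmd put /backups/db/mydb.sql.gz s3://my-backup-bucket/db/
```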
Part of the reason for this thread: backups are something that hardly anyone does the same way, so I'm keen to see how people do it.
Duplicity (gpg encrypted) for backup of local endpoints to home NAS running FreeBSD/ZFS (schedule depends on endpoint)
Hourly ZFS snapshots locally in case of human error
SFTP the Duplicity archives to a remote server and to Amazon Glacier (just implementing this part; I was using Google Drive via the API up to now)
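The ZFS and Duplicity halves of a layout like this each come down to one command; a hypothetical sketch (pool/dataset names, GPG key ID, and the remote host are invented):

```shell
# Hourly local ZFS snapshot, to roll back human error cheaply
zfs snapshot tank/home@hourly-$(date +%H)
# Nightly GPG-encrypted Duplicity run over SFTP; incremental by
# default, forcing a fresh full chain once a month
duplicity --encrypt-key ABCD1234 --full-if-older-than 1M \
  /home sftp://backup@remote.example.com//backups/home
```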
There's never enough backup. :P Better safe than sorry.
My backup procedure is one I like to toy around with and change every now and then. Right now, for shared hosting it's daily backups to the same system, weekly to another dedicated system in another DC used purely for backups, and monthly to a local external drive (paranoid about security, of course). For VPS nodes it's weekly to that same dedicated backup server, mirrored to Kimsufi, and I can't do the local backup because my cable connection has a 250GB/month cap.
My backups need a little more redundancy, but the risk of three dedicated servers in three locations losing data at once is fairly low. Kimsufi is the only one not in RAID. I'm also going to outgrow this soon and be forced to either scale this plan or go another direction.
I have a server at home where I store all of the files from all of the computers I own. That server runs RAID, and its contents are sent off to my dedicated server where they meet up with all of the backups of my VPSes, and then all of that is uploaded to CrashPlan.
VPS nodes are backed up daily to a local server in the same VLAN, and the same data is rsynced to an offsite location once per week. When dealing with TBs of data it is hard to have a truly flexible backup solution. However, if it's site backups, you can easily do hourly/daily backups to a huge load of locations.
Redundant SAN storage and R1soft backup to 2x RAID6 NAS. I'm proud of my backups!
RAID and redundancy are not backup.
Anyone doing incremental backups of data, or is everyone doing one-shots of entire image sets?
It's part of an overall solution for the safety of data, which is the purpose of backups as well, so I'd say it's a valid mention in the discussion.
Yes, with R1soft. Every 15 minutes and held for 30 days.
@damain Yes - nightly ZFS snapshots are effectively incremental (ZFS uses reference counting), and Duplicity supports both full and incremental modes.
Me! Rsyncin' the public_html folders every 10 minutes, although I do make snapshots of the backup directory daily.
It keeps the bandwidth down to a minimum, and the backups are pretty speedy too.
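Those daily snapshots stay cheap if they're hardlink copies of the rsync target: unchanged files cost almost no extra disk. A runnable sketch of the idea with throwaway local paths (in the real setup the source would be the remote public_html, synced with rsync rather than cp):

```shell
#!/bin/sh
set -e
# Throwaway working area standing in for the real backup disk
work=$(mktemp -d)
mkdir -p "$work/live" "$work/backup"
echo "v1" > "$work/live/index.html"

# The frequent sync (rsync -a against the remote in the real setup;
# plain cp -a here so the sketch runs without a remote host)
cp -a "$work/live" "$work/backup/current"

# The daily snapshot: hardlinks, so unchanged files share storage
snap="$work/backup/snap-$(date +%F)"
cp -al "$work/backup/current" "$snap"
```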
How do you find CrashPlan?
@raindog308 please share how you backup
@serverbear This is a good tutorial on how to run a headless version of CrashPlan on Ubuntu server. I used the tutorial for my backup setup. I also followed another tutorial (which I can't find right now) that shows how to change how much memory each CrashPlan Java instance will use so that it won't eat up all of the server memory.
My definition of backup is offsite and versioned.
At home: crash plan, dropbox, tarsnap (for a couple small things, it's expensive).
VPSes: the backup VPS pulls via rsync. Each client runs a read-only rsyncd, with iptables allowing connections only from the backup VPS. Pushing from individual VPSes to the backup VPS is a bit dangerous because if a client is compromised, all backups could be wiped. If the backup VPS is compromised, the worst they can do is nuke the backups.
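On each client that pull-only setup is just a read-only rsync module plus two firewall rules; a hypothetical sketch (module name, path, and the backup VPS address 203.0.113.10 are all invented):

```shell
# Read-only rsync module: the backup box can pull but never modify
cat > /etc/rsyncd.conf <<'EOF'
[www]
    path = /var/www
    read only = yes
    uid = nobody
EOF

# Only the backup VPS may reach the rsync daemon (port 873)
iptables -A INPUT -p tcp --dport 873 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 873 -j DROP
```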
Databases are dumped before backups run. Retention varies. The same script that does the dumps also cleans up old versions.
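The cleanup step can be a single find at the end of the dump script; a runnable sketch with simulated dump files in a throwaway directory (names and the 14-day window are invented):

```shell
#!/bin/sh
set -e
# Throwaway directory standing in for the real dump destination
dumps=$(mktemp -d)
touch -d "30 days ago" "$dumps/db-old.sql.gz"  # simulated stale version
touch "$dumps/db-new.sql.gz"                   # today's dump
# Retention: keep two weeks of versions, delete anything older
find "$dumps" -name '*.sql.gz' -mtime +14 -delete
```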
I get a daily report saying what's been backed up. Alerting only when a backup fails is not sufficient - if a backup job is commented out, you'd never know.