Solution for migrating large websites?
Hi
I have a few websites with a large number of files. Every time I back up, I use tar+7z: tar the whole folder and split it with 7z into numbered volumes, like the following:
tar cf - folder | 7z a -si file.tar.7z -v256m
My folder is 20 GB+ (and will reach 30 GB soon), so the backup is a pain every time.
This backup method is no problem on a KVM VPS. A few OpenVZ providers limit I/O and CPU usage, so the backup can't succeed there, though some OVZ nodes are fine. (On a node with slow I/O, it's extremely likely to end in failure.)
I'm used to the snapshots from DigitalOcean, and at the moment I'm trying bandwagonhosting's new snapshot function for their OVZ.
I wonder whether, besides the snapshots from some providers, there is an efficient alternative for large backups that won't upset the provider/node/neighbors.
Thanks
Comments
rsync
No need to transfer/back up the whole thing every time; only transfer the differences.
Thanks for the reply. I don't think I made the question clear enough: I didn't mean frequent daily backups. I mean I regularly need to move the whole thing from one node to others to duplicate these websites. Any easier solution?
I've changed the title to migrate large websites
I've used rsync to throw that amount of data between servers as a one-time only thing, was plenty quick enough with a large collection of small files. I once shifted my digital music collection (22GB) in about 25 minutes going from Ramnode in Atlanta to OVH in France.
So you want to synchronize multiple slaves from one master?
What is the data, does it change often? If most of the data doesn't change, the suggestion for rsync is still valid.
The data consists of 25 GB of jpg files. I need to spread these pics to different nodes without upsetting the node. Should I just rsync them, or even scp them all, with no tar and no zip? What if I need them compressed into a few files? Any way to save the node's time and resources?
I would if they were music files, but they are all small jpgs. Lots of them.
Compressing jpeg files makes no sense, they are already compressed.
One option is to rsync from one server to the other.
Another option is to use tar over the network, without storing the archive locally. Storing the archive locally is unnecessary IO.
I mean piping tar straight to the destination host over ssh, so the archive never touches the local disk.
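A minimal sketch of that idea (hostnames and paths are placeholders; shown here as a local pipe, with the ssh form in a comment):

```shell
# Stand-in for the website folder
mkdir -p src/site dest
echo "data" > src/site/img.jpg

# Stream the archive straight into a receiving tar; nothing is written
# to disk in between. Over the network it would be e.g.:
#   tar cf - site | ssh user@remote 'tar xf - -C /var/www'
(cd src && tar cf - site) | tar xf - -C dest
```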
There is the ionice command you could use to set your tar process to "idle" I/O priority so it doesn't cause too much I/O strain for the node.
30GB?
You underestimate scp & rsync
I take it by that you've actually tried? I've migrated large-ish ownCloud installations, which included 8 GB of photos, and they seemed to go equally quickly.
You could get a box.com account; a few are actually free, or €4/month for 100 GB. Mount it on every server, then just copy data from the mounted Box folder. There was at some point a promo where Box gave 50 GB free; if you have such an account, just mount it on your server.
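A sketch of that mount, assuming the `davfs2` package and Box's WebDAV endpoint at dav.box.com (mount point is a placeholder; needs root, and it will prompt for your Box credentials):

```shell
# Mount Box over WebDAV via davfs2
mkdir -p /mnt/box
mount -t davfs https://dav.box.com/dav /mnt/box
# ...copy to/from /mnt/box like a normal directory, then:
umount /mnt/box
```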
You could try ionice
ionice -c2 -n7 tar cf - folder | 7z a -si file.tar.7z -v256m
That might bring it in line, if it works in your VM.
btsync might work; however, I've noticed that on some RAID configurations it eats up all the I/O. You may have to find a way to cap it.
Try this: https://www.cloudconverter.com/