Comments
I meant customer has 4 different locations plus roadwarriors that need to be moved.
What is obvious is that much more info is needed. It's all well and good having a massive backup server, but with that volume of data you seriously need to consider the backup mechanism: speed, the WAN, restore time, how long it will actually take to restore over gigabit, what the local restore-point network can handle, and a hundred other things.
I have seen so many requests in the past for massive backup servers: "my client needs to back up 4TB daily", and then you find out they only have a 100mbit WAN at the source (or significantly less).
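To put a number on that, a back-of-envelope transfer-time estimate (plain Python, assuming the link is fully saturated and ignoring protocol overhead, so real transfers will be slower):

```python
def transfer_time(bytes_to_move: float, link_mbit: float) -> float:
    """Seconds to move `bytes_to_move` bytes over a `link_mbit` megabit/s link."""
    bits = bytes_to_move * 8
    return bits / (link_mbit * 1_000_000)

# 4 TB (decimal) daily backup over a 100 Mbit WAN:
seconds = transfer_time(4e12, 100)
print(f"{seconds / 3600:.1f} hours")  # ~88.9 hours, i.e. far more than a day
```

So a "daily" 4TB backup over 100mbit can never finish before the next one is due, which is exactly why the source-side WAN matters more than the size of the backup box.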
Based on the very limited info, it sounds like you would be much better off with a local solution, even if that is in the building next door behind a fire wall (a literal fire wall, not a firewall).
You could look at TransIP Big Storage:
https://www.transip.eu/vps/big-storage/
If I understand right, it's a ZFS NAS and you buy it in 2TB chunks at 10 euro/month each, up to 400TB according to their page. That could save you some operational headache and if I understand it, you can add to it anytime you like.
You can reach it only from their VPS though, and be careful of their VPS bandwidth limits.
Maybe better to duplicate all the data on the backup server, and switch over if something happens to the primary. Then there could be an archive copy elsewhere in case of disaster, with potentially slow restores.
Yes, that amount of data has "mass". It's not easy to move it around even with 10gbit network.
@willie yep, without knowing the specifics of the live system/data set and the environment it lives in, how it is used, the impact per day of being down, etc., it's impossible to create a real solution.
I haven't seen your post before @ehab.
We prefer Python over PHP, but we prefer transparent architecture over obscure and Seafile is pretty obscure.
I could even serve Pydio files without Pydio, since it uses the WebDAV protocol: lots of ready-to-use integrations, native access from Windows, Mac and Linux, and it can include data from other storage filesystems. It's great. We have made some modifications to the code for a multi-instance setup.
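Because the files are exposed over WebDAV, any generic client can talk to them. A minimal sketch of building a WebDAV PROPFIND request (directory listing) with only the Python standard library; the username, password, and the idea of pointing this at a Pydio share are illustrative assumptions, not Pydio-specific API:

```python
import base64

def propfind_request(user: str, password: str, depth: int = 1):
    """Build the headers and XML body for a WebDAV PROPFIND that lists a collection."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Depth": str(depth),  # 1 = the collection plus its immediate children
        "Content-Type": "application/xml",
    }
    body = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<d:propfind xmlns:d="DAV:">'
        '<d:prop><d:displayname/><d:getcontentlength/></d:prop>'
        '</d:propfind>'
    )
    return headers, body

# Hypothetical usage against a WebDAV endpoint: send with any HTTP library
# that supports custom methods, e.g.
#   requests.request("PROPFIND", url, headers=headers, data=body)
headers, body = propfind_request("alice", "secret")
```

That is the whole point of a standard protocol: the server side could be Pydio, Apache mod_dav, or anything else, and the client does not care.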
Pydio works great, code is organized and design choices are correct, our kudos to Pydio team.
Pydio is written in PHP whereas Seafile is written in Python & C. Both are FOSS and have been around for about the same time if Ajaxplorer (former Pydio) is not counted.
Pavin.
A minor note: I just wanted to throw out that you never really get the advertised amount of storage. For example, Hetzner's SX291 is advertised as 15x6TB, so it appears to be 90TB. Once you start formatting, that is more like 15x5.45TiB, which is actually 81.75TiB. Make a RAID6 array and you end up with 70.85TiB.
Physical disk drives are always sold in decimal terabytes, i.e. 1TB = 10^12 bytes, which is about 0.91 TiB (1 TiB = 2^40 bytes). That's the main reason for the discrepancy: 1 TiB (a binary terabyte, what software often reports as a terabyte) is about 1.1x10^12 bytes.
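The arithmetic is easy to sanity-check. A minimal sketch (my own, not Hetzner's numbers; filesystem overhead is ignored, so real usable space is slightly lower still, and RAID6 is modeled simply as losing two disks' worth of capacity to parity):

```python
def usable_tib(disks: int, disk_tb: float, raid6: bool = False) -> float:
    """Binary TiB available from `disks` drives of `disk_tb` decimal TB each.
    RAID6 sacrifices two disks' worth of capacity for parity."""
    tib_per_disk = disk_tb * 1e12 / 2**40  # decimal TB -> binary TiB
    data_disks = disks - 2 if raid6 else disks
    return data_disks * tib_per_disk

print(f"{usable_tib(15, 6):.2f} TiB raw")               # ~81.85 TiB
print(f"{usable_tib(15, 6, raid6=True):.2f} TiB RAID6")  # ~70.94 TiB
```

Which lines up with the 81.75/70.85 figures above once per-disk rounding (5.45 vs 5.457 TiB) is accounted for.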
Sorry @mailcheap, but AjaXplorer and Pydio are the same, and AjaXplorer's age must be taken into account. It was just a rebranding; even some AJXP variable names persist in the current Pydio code.
Well... if data integrity, security, performance or scalability is your goal, Seafile is the obvious choice.
Pavin.