
Store around 30TB x 2


Comments

  • n3storm said: 4 different locations plus roadwarriors.

    I meant the customer has 4 different locations plus road warriors whose data needs to be moved.

  • AnthonySmith Member, Patron Provider

    What is obvious is that much more info is needed. It's all well and good having a massive backup server, but with that volume of data you seriously need to consider the backup mechanism.

    Speed/WAN/restore time/how long it will actually take to restore over gbit, what the local restore point network can handle, and 100 other things.

    I have seen so many requests for massive backup servers in the past: "my client needs to back up 4TB daily", and then you find out they only have a 100mbit WAN at the source (or significantly less). A rough transfer-time sketch follows below.

    Based on the very limited info, it sounds like you would be much better off with a local solution, even if that is in the building next door behind a fire wall (a literal fire wall, not a firewall).

    Thanked by: willie, estnoc, n3storm
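    A back-of-the-envelope sketch of the point above. The 4TB daily figure is from this post; the link speeds and the 0.8 protocol-overhead factor are assumptions, and real throughput varies widely:

    ```python
    # Rough transfer/restore-time estimate for a given data volume and WAN link.

    def transfer_hours(data_tb: float, link_mbit: float, efficiency: float = 0.8) -> float:
        """Hours to move `data_tb` decimal terabytes over a `link_mbit` link."""
        data_bits = data_tb * 1e12 * 8                # decimal TB -> bits
        effective_bps = link_mbit * 1e6 * efficiency  # assumed 80% effective throughput
        return data_bits / effective_bps / 3600

    for link_mbit in (100, 1_000):                    # 100mbit WAN vs gigabit
        print(f"4 TB over {link_mbit} Mbit: ~{transfer_hours(4, link_mbit):.0f} h")
    # 4 TB over 100 Mbit: ~111 h
    # 4 TB over 1000 Mbit: ~11 h
    ```

    In other words, a 100mbit source link cannot even keep up with a 4TB daily backup, let alone a restore.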
  • You could look at TransIP Big Storage:

    https://www.transip.eu/vps/big-storage/

    If I understand right, it's a ZFS NAS and you buy it in 2TB chunks at 10 euro/month each, up to 400TB according to their page. That could save you some operational headache, and if I understand correctly you can add to it any time you like; a quick cost sketch follows below.

    You can reach it only from their VPS though, and be careful of their VPS bandwidth limits.
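    A quick sketch of what the quoted pricing implies for the thread's 30TB x 2; the chunk size and price come from the post above, the rest is plain arithmetic:

    ```python
    import math

    # TransIP Big Storage as described above: 2 TB chunks at 10 EUR/month each.
    CHUNK_TB, CHUNK_EUR = 2, 10

    def monthly_cost_eur(needed_tb: float) -> int:
        """EUR/month for `needed_tb` of storage, rounded up to whole chunks."""
        return math.ceil(needed_tb / CHUNK_TB) * CHUNK_EUR

    print(monthly_cost_eur(30))       # 150 EUR/month for one 30TB copy
    print(monthly_cost_eur(30) * 2)   # 300 EUR/month for both copies in the title
    ```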

  • AnthonySmith said:

    Speed/WAN/restore time/how long it will actually take to restore over gbit, what the local restore point network can handle, and 100 other things.

    Maybe it would be better to duplicate all the data on the backup server and switch over if something happens to the primary. Then there could be an archive copy elsewhere in case of disaster, with potentially slow restores.

    Yes, that amount of data has "mass". It's not easy to move around even on a 10gbit network; see the numbers below.
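    Putting a number on that "mass" (line-rate arithmetic only; disk and protocol overhead make real copies slower):

    ```python
    # Time to move 30 decimal TB over fast links at a perfect line rate.
    data_bits = 30 * 1e12 * 8            # 30 TB in bits
    print(data_bits / 10e9 / 3600)       # ~6.7 hours at full 10 Gbit
    print(data_bits / 1e9 / 3600)        # ~66.7 hours (nearly 3 days) over gigabit
    ```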

  • AnthonySmith Member, Patron Provider

    @willie yep, without knowing the specifics of the live system/data set and the environment it lives in, how it is used, the impact per day of being down, etc., it's impossible to create a real solution.

  • n3storm Member
    edited February 2017

    ehab said: @n3storm since you have experience with Pydio, have you tried Seafile? Have you modified any Pydio sources to adapt it to your needs?

    I hadn't seen your post before, @ehab.
    We prefer Python over PHP, but we prefer a transparent architecture over an obscure one, and Seafile is pretty obscure.

    I could even serve Pydio files without Pydio, as it uses the WebDAV protocol: lots of ready-to-use integrations, native access from Windows, Mac and Linux, and it can include data from other storage filesystems. It's great. We have made some modifications to the code for a multi-instance setup; a WebDAV sketch follows below.

    Pydio works great, the code is well organized and the design choices are correct; our kudos to the Pydio team.
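    A minimal sketch of the "serve Pydio files without Pydio" idea: WebDAV is plain HTTP verbs, so a generic client is enough. The endpoint and credentials here are placeholders, not details from this thread:

    ```python
    import requests

    # Hypothetical WebDAV endpoint exposed by a Pydio workspace -- placeholder values.
    BASE = "https://pydio.example.com/dav/workspace"
    AUTH = ("user", "password")

    # PROPFIND with Depth: 1 lists a collection (directory).
    listing = requests.request("PROPFIND", BASE + "/", auth=AUTH, headers={"Depth": "1"})
    print(listing.status_code)    # 207 Multi-Status on success

    # Plain PUT uploads a file; plain GET downloads it.
    requests.put(BASE + "/notes.txt", data=b"hello", auth=AUTH)
    print(requests.get(BASE + "/notes.txt", auth=AUTH).content)    # b'hello'
    ```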

  • mailcheap Member, Host Rep

    @n3storm said:

    ehab said: @n3storm since you have experience with Pydio, have you tried Seafile? Have you modified any Pydio sources to adapt it to your needs?

    I hadn't seen your post before, @ehab.
    We prefer Python over PHP, but we prefer a transparent architecture over an obscure one, and Seafile is pretty obscure.

    I could even serve Pydio files without Pydio, as it uses the WebDAV protocol: lots of ready-to-use integrations, native access from Windows, Mac and Linux, and it can include data from other storage filesystems. It's great. We have made some modifications to the code for a multi-instance setup.

    Pydio works great, the code is well organized and the design choices are correct; our kudos to the Pydio team.

    Pydio is written in PHP whereas Seafile is written in Python and C. Both are FOSS and have been around for about the same time, if AjaXplorer (Pydio's former name) is not counted.

    Pavin.

  • A minor note: I just wanted to throw out that you never really get the advertised amount of storage. For example, Hetzner's SX291 is advertised as 15x6TB, so it appears to be 90TB. Once you start formatting, it is more like 15x5.45TiB, which is actually 81.75TiB. Make a RAID6 array and you end up with 70.85TiB; the worked numbers are below.

    Thanked by: n3storm
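    The arithmetic behind those numbers: the disk count and advertised size are from the post above, RAID6 reserves two disks' worth of parity, and filesystem overhead is ignored:

    ```python
    # Usable capacity of 15 x 6 TB (decimal) disks, expressed in binary TiB.
    DISKS = 15
    DISK_BYTES = 6e12                        # "6 TB" as sold: 6e12 bytes
    TIB = 2**40

    per_disk = DISK_BYTES / TIB              # ~5.46 TiB per disk
    print(DISKS * per_disk)                  # ~81.9 TiB raw
    print((DISKS - 2) * per_disk)            # ~70.9 TiB usable in RAID6
    # The post above rounds each disk down to 5.45 TiB first,
    # which gives its 81.75 and 70.85 figures.
    ```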
  • willie Member
    edited February 2017

    Physical disk drives are always sold in decimal terabytes, i.e. 1TB = 1e12 bytes, which is about 0.91 TiB (1 TiB = 2**40 bytes). That's the main reason for the discrepancy. 1 TiB (a binary terabyte, what software often calls a terabyte) is about 1.1e12 bytes, as the snippet below shows.
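    The conversion in two lines of plain arithmetic:

    ```python
    TIB = 2**40                  # one binary terabyte (TiB) in bytes
    print(1e12 / TIB)            # 0.9095 -> 1 decimal TB is ~0.91 TiB
    print(TIB / 1e12)            # 1.0995 -> 1 TiB is ~1.1e12 bytes
    ```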

  • Sorry @mailcheap, but AjaXplorer and Pydio are the same, and AjaXplorer's age must be taken into account. It was just a rebranding; even some AJXP variable names persist in the current Pydio code.

  • mailcheap Member, Host Rep

    @n3storm said:
    Sorry @mailcheap, but AjaXplorer and Pydio are the same, and AjaXplorer's age must be taken into account. It was just a rebranding; even some AJXP variable names persist in the current Pydio code.

    Well... if data integrity, security, performance or scalability is your goal, Seafile is the obvious choice.

    Pavin.

    Thanked by: n3storm