How do you backup very large data? - Page 2

Comments

  • uptime Member
    edited February 2019

    @Jun said:
    I use a Raspberry Pi as a NAS and a primary backup node, and I feel no sympathy for this little one working day and night on daily cronjobs managing tens of terabytes of data. I bet you are a less abusive, more conscientious person, feeling sympathy for your machine running a backup. You don't have to worry about machines' rights until they rebel against us.

    treat your little beastie right with a nice heatsink and it'll happily work a bit harder for you.

    I haven't checked the scuttlebutt on later models, but the Raspberry Pi is notorious for throttling the CPU (reducing its clock speed) when it overheats or detects under-voltage.

    See for example https://walchko.github.io/blog/Raspbian/Under-Voltage/under-voltage.html

    EDIT2:

    Keep in mind other SBC options may be better suited for NAS use, such as the ODROID-HC2.

    For backing up systems at home I'm currently just using an 8 TB external USB 3.0 drive attached to an ESPRESSObin - nothing fancy, but cheap (total under $200) and simple enough to set up without too much thought. (Stuff I care about for the long term gets tarred up, encrypted, and uploaded to various storage KVMs and dedis.)

    (ok, I know OP is asking maybe more along the lines of software and online services - but a cheap home NAS is always nice to have in the mix as well.)
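    The tar-up-and-encrypt step might look something like this - all paths, the passphrase file, and the remote host here are made-up placeholders, so adjust to taste:

    ```shell
    #!/bin/sh
    # Hypothetical paths -- adapt for your own setup.
    SRC=/mnt/usb8tb/stuff-i-care-about
    PASSFILE=$HOME/.backup-pass            # one-line passphrase, chmod 600
    OUT=/tmp/backup-$(date +%Y%m%d).tar.gz.gpg

    # Tar, compress, and symmetrically encrypt in one pipeline,
    # so no plaintext archive ever touches the disk.
    tar -czf - "$SRC" \
      | gpg --batch --pinentry-mode loopback --passphrase-file "$PASSFILE" \
            --symmetric --cipher-algo AES256 -o "$OUT" -

    # Ship the encrypted archive off to a storage box over ssh.
    scp "$OUT" backup@storage-kvm.example.com:backups/
    ```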

    Thanked by 1banxix
  • Blockchain.

  • uptime Member
    edited February 2019

    Ft. Meade / Ogden Utah hoover + eventual FOIA request. (Wait 20 years for restore.)

    EDIT2: well, what else am I paying taxes for?

    Thanked by 2eol Letzien
  • @Letzien said:
    This is why slave DB servers exist.

    A corrupted master will corrupt the slave unless you delay replication, which in turn means potential data loss.
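    (For the record, MySQL 5.6+ supports this kind of delayed replication natively; on the slave it's a single config statement - the one-hour figure below is only an example:)

    ```sql
    -- On the slave: run one hour behind the master, so a destructive
    -- statement can be caught before it replicates here.
    STOP SLAVE SQL_THREAD;
    CHANGE MASTER TO MASTER_DELAY = 3600;
    START SLAVE SQL_THREAD;
    ```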

    Thanked by 3uptime banxix Letzien
  • uptime Member
    edited February 2019

    If power corrupts ...

    and absolute power corrupts absolutely ...

    And power loss also corrupts ...

    Then the low-end surely is nigh.

    EDIT2:

    That's why I'm yo dawg about my backups.

    But I move slooow. So I can keep up more easily.

    (If that makes any sense.)

    Thanked by 1eol
  • Thanked by 1uptime
  • uptime Member
    edited February 2019

    https://en.wikipedia.org/wiki/CAP_theorem

    I have a (slightly) different formulation:

    Consistency / Redundancy / Availability / Pick any two

    EDIT2:

    But I just pulled that out of my ass.

    Because I am full of it.

    Thanked by 1eol
  • I pick consistency and pick any two.

  • uptime Member
    edited February 2019

    Two out of three ain't bad.

    But it ain't good.

    It just is.

    EDIT2:

    I guess consistency and availability are the goal - and redundancy is the presumptive (imperfect) means to availability - but poses a challenge for consistency.

    I just like the acronym. Like I said, I am full of it.

    (as far as db replication goes at least).

    That's why I like to move slow.

    So I know I'm going to get to

    wherever I'm going to go.

    Eventually.

    Thanked by 1eol
  • True.
    And it is not.
    At the same time.

    Thanked by 1uptime
  • FAT32 Administrator, Deal Compiler Extraordinaire

    Google Spanner claims to provide nearly all 3 of them.

    Thanked by 2uptime eol
  • mfs Banned, Member

    FAT32 said: Spanner

    inhales

    Thanked by 3uptime FAT32 eol
  • uptime Member
    edited February 2019

    (dos equis guy):

    I don't always backup my data.

    But when I do ...

    I use Paxos and atomic clocks to ensure consistency.

    EDIT2: It's okay for you maybe ...

    Thanked by 2mfs eol
  • uptime Member
    edited February 2019

    raid 0 makes your backups twice as fast

    EDIT2:

    it's maybe not as bad as it sounds when used in a redundant array of inexpensive servers.

    Maybe running minio or even unison

    (something along those lines just might be my holy grail, my white whale, what led me here to LET.)

    (Not the raid 0, but hey ... yolo)

    Thanked by 1mfs
  • Levi Member
    edited February 2019

    @uptime said:
    raid 0 makes your backups twice as fast

    And failures with massive data loss 5 times faster. That's how you roll. No regrets. Debian.

    Thanked by 1uptime
  • But RAID is backup.

    Thanked by 1uptime
  • uptime Member
    edited February 2019

    Nah it's all good man

    I'm using this primo sd-card raid

    EDIT2:

    Thing is, I'm half cereal about at least half of this.

    Just have to figure out which half.

    But anyway ... as they say:

    what could possibly go wrong?

    Thanked by 2eol Levi
  • Looks reasonable.

    Thanked by 1uptime
  • @eol said:
    But AFRAID is backup.

    Thanked by 1eol
  • @laoban said:

    @eol said:
    But AFRAID is backup.

    LOL.
    Nice one.

  • @Tion said:

    @Letzien said:
    This is why slave DB servers exist.

    A corrupted master will corrupt the slave unless you delay replication, which in turn means potential data loss.

    This is true, but I was trying to keep my answer as basic as possible. I do complete dumps from a slave so it doesn't abuse the master node, and it's fairly up-to-date. Then you can always replay the binlogs from / up to a specific time and rebuild from there.
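    That dump-from-the-slave plus binlog-replay approach might look roughly like this - hostnames, file names, and timestamps here are placeholders:

    ```shell
    #!/bin/sh
    # Nightly: full dump taken on the slave, so the master is untouched.
    # --master-data=2 records the binlog position as a comment in the dump.
    mysqldump --single-transaction --master-data=2 --all-databases \
        | gzip > /backup/full-$(date +%F).sql.gz

    # Restore: load the dump, then replay binlogs up to just before the
    # bad statement (point-in-time recovery).
    gunzip < /backup/full-2019-02-01.sql.gz | mysql
    mysqlbinlog --stop-datetime="2019-02-02 03:14:00" \
        /var/log/mysql/binlog.000042 | mysql
    ```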

    Thanked by 1uptime
  • Rclone to multiple destinations. Works perfectly. Write a simple shell script to tar and gzip the folder you want, excluding the folders you don't need, name the file with a date, then cron an rclone job to the destination. Run a daily script to clean out old gzipped archives and keep the folder lean.
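    A minimal sketch of such a script - the remote name, paths, and retention window are invented, so adjust to taste:

    ```shell
    #!/bin/sh
    # Hypothetical paths/remote -- adapt for your own setup.
    SRC=/var/www/mysite
    DEST=remote:backups          # an rclone remote you have configured
    KEEP_DAYS=7

    # Archive with a date in the name, skipping cache/log folders.
    OUT=/backup/site-$(date +%Y-%m-%d).tar.gz
    tar --exclude='cache' --exclude='logs' -czf "$OUT" "$SRC"

    # Ship it off-site, then prune local archives older than a week.
    rclone copy "$OUT" "$DEST"
    find /backup -name 'site-*.tar.gz' -mtime +"$KEEP_DAYS" -delete
    ```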
