Block storage: Internet accessible, single server iSCSI vs NBD vs ?
craigb Member

Assume a DC-hosted VPS running a modern Linux flavour with a decent chunk of disk that you want to mount as block storage on your local machine (over broadband).

  1. Which protocol/software have you found most reliable/robust in terms of handling network slowdowns and interruptions?
  2. Assume the block device has a LUKS container created on it (from the client device so the password never touches the VPS). Which filesystem would you use within the LUKS container and why?
  3. Has anyone done this with OpenBSD at one or both ends? What was your experience?
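For question 2, a minimal sketch of the client-side setup (hostnames, device names, and mount points are all hypothetical, and nbd-client syntax varies a little between versions):

```shell
# On the laptop: attach the remote export as a local block device
# (assumes nbd-server is already exporting on the VPS)
sudo modprobe nbd
sudo nbd-client vps.example.com /dev/nbd0 -persist

# Create the LUKS container locally, so the passphrase and master key
# never touch the VPS; it only ever sees ciphertext
sudo cryptsetup luksFormat /dev/nbd0
sudo cryptsetup open /dev/nbd0 remote_crypt

# Any filesystem goes inside the mapped device
sudo mkfs.ext4 /dev/mapper/remote_crypt
sudo mount /dev/mapper/remote_crypt /mnt/remote
```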

My first attempts with NBD (default settings) ended with me having to reboot my laptop to deal with a hung filesystem. I'm now testing SCST iSCSI, which is going much better so far with respect to network slowdowns and disconnects.
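For anyone following the iSCSI route, the client side with open-iscsi looks roughly like this (target name and hostname are hypothetical; the replacement timeout is the main knob for how long I/O queues during an outage before errors bubble up):

```shell
# Discover and log in to the target exported by the VPS
sudo iscsiadm -m discovery -t sendtargets -p vps.example.com
sudo iscsiadm -m node -T iqn.2022-04.com.example:storage \
    -p vps.example.com --login

# Seconds the initiator queues I/O across a dead connection before
# failing it up the stack (default is 120)
sudo iscsiadm -m node -T iqn.2022-04.com.example:storage \
    -o update -n node.session.timeo.replacement_timeout -v 300
```

A longer replacement timeout rides out brief disconnects at the cost of applications blocking longer before they see an error.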

I'm aware of and have used sshfs, Samba, NFS, Ceph, etc.; this question is specifically about block storage from a single VPS. And no, this will not be the only copy of the data.

Comments

  • rm_ IPv6 Advocate, Veteran
    edited April 2022

    I use NBD in such a scenario and it works fine. I like it better than iSCSI because it feels massively simpler.

    Did not try with a higher ping, only up to about 5 ms. Did not have many network disruptions during transfers either.

    @craigb said: having to reboot my laptop to deal with a hung filesystem

    If the remote end "goes away", then yes, the Linux block and/or FS subsystems can sometimes wedge themselves up like that, unfortunately. But I think I only had to reboot once or twice because of this, over a few years.
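    One thing that sometimes avoids the reboot (no guarantees once the FS layer has fully wedged): force-detach so queued I/O errors out instead of blocking forever. Mount point and device name here are hypothetical:

```shell
# Lazy-unmount the stuck filesystem, then tell the kernel to
# drop the NBD connection so outstanding I/O fails with errors
sudo umount -l /mnt/remote
sudo nbd-client -d /dev/nbd0
```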

    For the filesystem I use Btrfs, because I want multiple generations of data kept as snapshots on the remote end (backups stored for a month or three).
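    The snapshot rotation described above can be done with read-only Btrfs snapshots on the remote end, e.g. (paths hypothetical):

```shell
# Take a read-only snapshot of the backup subvolume, named by date
sudo btrfs subvolume snapshot -r /mnt/backup/current \
    "/mnt/backup/snap-$(date +%Y-%m-%d)"

# Delete a snapshot once it ages out of the retention window
sudo btrfs subvolume delete /mnt/backup/snap-2022-01-15
```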

  • @rm_ said: Did not try with a higher ping, only up to about 5ms. Did not have much network disruptions during transfers either.

    All my Internet machines are ~20ms+ away. I'm looking for a solution that handles network disruptions and temporary disconnects reasonably gracefully.

    I started with nbd for reasons of simplicity, but was surprised how poorly it handled a NIC down, sleep 5, NIC up test (left the dangling mount). I was planning to shovel nbd traffic via datagram socket with socat (with dynamic IP access rules), but didn't get past the first test (pure nbd).
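    For reference, the relay idea above would look something like this with socat (host and ports hypothetical). Worth noting that plain UDP gives no retransmission or ordering, so anything beyond a very clean link really wants a proper tunnel instead:

```shell
# Laptop: nbd-client connects to localhost; socat repackages the
# TCP stream into UDP datagrams toward the VPS
socat TCP-LISTEN:10809,bind=127.0.0.1,reuseaddr,fork \
    UDP:vps.example.com:10810

# VPS: unwrap the datagrams back into a TCP connection to nbd-server
socat UDP-LISTEN:10810,reuseaddr,fork TCP:127.0.0.1:10809
```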

    With nbd, do you use any non-default settings?

  • akb Member

    @craigb said: My first attempts with NBD (default settings) ended up with me having to reboot my laptop to deal with a hung filesystem.

    Have you tried using the -persist/-p flag of nbd-client?
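    Something like the following (export name and host are hypothetical; exact flags vary by nbd-client version):

```shell
# -persist makes nbd-client reconnect automatically instead of
# failing the device when the TCP connection drops
sudo nbd-client -N backup vps.example.com /dev/nbd0 -persist
```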

  • craigb Member
    edited April 2022

    @akb said: Have you tried using the -persist/-p flag of nbd-client?

    Yesterday, I tried with defaults then later with -p (but also with a connection count) and had the same problem.

    Today, I tested again and nbd is working faultlessly, with and without -p. This includes pulling the NIC whilst a dd is writing random data to the mounted block device (obviously this is hitting the local filesystem cache first). Plugged back in after a few seconds and observed nbd-server writing to disk via iotop on the VPS.

    So now to dig into my laptop logs and try to figure out what was different between yesterday and today (all commands issued were the same per shell history, and there were no software updates, etc.).

  • rm_ IPv6 Advocate, Veteran
    edited April 2022

    @craigb said: I was planning to shovel nbd traffic via datagram socket with socat

    You mean putting TCP over UDP, so the connection doesn't break when IP changes? That's fun, never thought of that. Usually a VPN connection achieves the same, or if you specifically don't want encryption, then perhaps a GRE tunnel?
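    A GRE tunnel as suggested is only a few ip(8) commands per side (addresses hypothetical; note both ends need a stable IP, so GRE alone doesn't solve the changing-IP case):

```shell
# VPS side; mirror the command on the client with local/remote swapped
sudo ip tunnel add gre1 mode gre \
    local 203.0.113.10 remote 198.51.100.20 ttl 64
sudo ip addr add 10.9.9.1/30 dev gre1
sudo ip link set gre1 up
# nbd-client then connects to 10.9.9.1 over the tunnel
```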

    @craigb said: With nbd, do you use any non-default settings?

    Yes, I forgot to mention that I'd recommend not just any random NBD server but, perhaps surprisingly, qemu-nbd from the qemu-utils package. It doesn't only export "QEMU images"; it serves raw disks or partitions just fine. And it has inherited QEMU's flexible configuration of AIO, caching modes, discard pass-through, and even "detect zeroes".

    It's the only server that works wonders at my largest deployment of this, which is exporting an 8 TB HDD from a small ARM-based NAS with 64 MB of RAM. :)
    --cache=none works well there; all the other cache modes would complain in dmesg about OOM conditions (page allocation failures).
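    A qemu-nbd invocation for a setup like that might look as follows (device path and export name are hypothetical; check `qemu-nbd --help`, since option names have shifted between releases):

```shell
# Export a raw partition, bypassing the page cache (the low-RAM NAS
# case), passing discards through, and staying resident across
# client disconnects
sudo qemu-nbd --format=raw --cache=none --discard=unmap \
    --detect-zeroes=unmap --persistent --export-name=backup /dev/sda1
```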

    But even aside from that setup, I use it elsewhere as well, it seems very reliable and problem-free.

    Client-side, I use the regular nbd-client, though an older version of it that still supports the so-called "old-style" exports in NBD (without support for export names and descriptions). qemu-nbd seems able to export both new and old styles, but I still use the previous one.

    Not sure if changing the server or client would help in your test case; I did not try to deliberately break it, and I don't often have connection issues (the NAS one is even on the LAN, but another destination isn't).

  • @craigb said: I'm looking for a solution that handles network disruptions and temporary disconnects reasonably gracefully.

    Does not exist. The best you would achieve is hotplug behavior, but every classic process will collapse when the device disappears.

    I tried ZFS RAID once with two network mounts as partitions, since that in theory should allow one mount to fail... but lol, nope.
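    For anyone retrying that experiment, the knob that decides what happens when one leg vanishes is the pool's failmode property (device names hypothetical; this still doesn't make a hung network block device behave like a cleanly failed disk):

```shell
# Mirror two network block devices; failmode=continue returns EIO
# for the broken leg instead of blocking the whole pool
# (the default, failmode=wait, hangs all I/O until it returns)
sudo zpool create -o failmode=continue netpool mirror /dev/nbd0 /dev/nbd1
```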
