High latency mounts

Pilzbaum Member
edited December 2022 in General

Howdy people,
I'd like to know how you handle high latency remote shares.
(I think) I am looking for a solution with local attribute and metadata caching, so that directory listing and traversal are faster than with the plain NFS mount I'm using now. I have looked up some options, like tweaking actimeo, acregmin or acregmax. Alternatively, I've seen someone recommend catfs, but I hadn't heard of it before and it's supposedly still in alpha. Some sort of asynchronous solution could work as well, since I have only one writing source and multiple reading servers (e.g. backups and rather static stuff).
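For context, the kind of tweak I mean looks roughly like this (server path, mountpoint and timeout values are just placeholders):

    # NFS mount with a longer attribute-cache lifetime (actimeo is in seconds)
    mount -t nfs -o ro,actimeo=600 fileserver:/export/data /mnt/data

    # or the individual knobs in /etc/fstab:
    # fileserver:/export/data  /mnt/data  nfs  ro,acregmin=60,acregmax=600,acdirmin=60,acdirmax=600  0  0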

Any ideas, recommendations or experiences in that direction? Anything could be helpful.
The only requirement for me is that I can mount it somehow.
Many thanks to all.

Comments

  • rclone mount

    Thanked by 1: Pilzbaum
  • rm_ IPv6 Advocate, Veteran
    edited December 2022

    Check out sshfs; surprisingly enough, it deals with high latency better than some more advanced options. I tried mounting my server with 75ms ping to watch 1080p MKVs from the network share. I could not do that with either NFS or CIFS/SMB (both over VPN) without constant micro-interruptions, but over sshfs it plays without a single hitch.

    Maybe that's only true for data streaming (i.e. it pre-buffers well); depending on your use case it might not be much better than those for metadata ops. However, it has a lot of tunables for cache timeouts on everything (see the man page), so maybe those can help.
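    For example (host, path and timeout values are made up; the option names come from the sshfs/FUSE man pages and may differ slightly between versions):

        sshfs user@remotehost:/srv/share /mnt/share \
            -o kernel_cache \
            -o cache_timeout=115 \
            -o attr_timeout=115,entry_timeout=115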

    Thanked by 3: Pilzbaum, loay, let_rocks
  • rclone mount and JuiceFS are both solid options

    Both can be configured with metadata caching.
    For JuiceFS you can store the entire metadata locally, so only the actual file data is stored in and retrieved from the storage location.
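    As a rough illustration (remote name, mountpoint and cache times are placeholders), rclone's VFS layer can cache directory listings and attributes like this:

        rclone mount myremote:backups /mnt/backups \
            --vfs-cache-mode writes \
            --dir-cache-time 1h \
            --attr-timeout 1m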

  • Thank you all for your input!
    I'm gonna try these and come back with my findings

  • @Erisa said:
    rclone mount and JuiceFS are both solid options

    Both can be configured with metadata caching.
    For JuiceFS you can store the entire metadata locally, so only the actual file data is stored in and retrieved from the storage location.

    Yup, I use JuiceFS myself with Cloudflare R2 S3-compatible storage: https://github.com/centminmod/centminmod-juicefs :)
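    For the curious, the general shape of a JuiceFS setup against S3-compatible storage like R2 is roughly this (bucket URL, keys and metadata URL are placeholders; check the JuiceFS docs for the exact flags):

        # format a filesystem: metadata goes into Redis, file data into the S3-compatible bucket
        juicefs format \
            --storage s3 \
            --bucket https://<bucket>.<account-id>.r2.cloudflarestorage.com \
            --access-key <key> --secret-key <secret> \
            redis://127.0.0.1:6379/1 myjfs

        # mount it; listings and metadata ops are served from Redis, not from object storage
        juicefs mount redis://127.0.0.1:6379/1 /mnt/jfs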

    Thanked by 2: Pilzbaum, Erisa
  • rm_ IPv6 Advocate, Veteran
    edited March 2023

    @rm_ said: Check out sshfs; surprisingly enough, it deals with high latency better than some more advanced options. I tried mounting my server with 75ms ping to watch 1080p MKVs from the network share. I could not do that with either NFS or CIFS/SMB (both over VPN) without constant micro-interruptions, but over sshfs it plays without a single hitch.

    Maybe that's only true for data streaming (i.e. it pre-buffers well); depending on your use case it might not be much better than those for metadata ops. However, it has a lot of tunables for cache timeouts on everything (see the man page), so maybe those can help.

    Bump! Recently I got unhappy with sshfs:

    • it could not reach more than ~20 Mbit/s when copying a file from the mounted share
    • while it let me watch a 1080p MKV, it was very choppy when trying to watch Bluray-level files

    Someone suggested to use rclone mount instead: https://superuser.com/a/1717047

    And indeed, it works just the same as sshfs, except without any of the above speed limitations. The file copy speed now saturates my connection, and after adding --max-read-ahead 8192k (less could be enough) on the command line, the Bluray files are now watchable as well! Browsing the network share in a GUI is also much more responsive.
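    For reference, the full mount command would look something like this (remote name and mountpoint are placeholders; tune the values to taste):

        rclone mount homeserver:/srv/media /mnt/media \
            --max-read-ahead 8192k \
            --dir-cache-time 30m \
            --daemon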

  • Sshfs was awesome before rclone mount got so good. Very happy with rclone here and it's super simple.

    Thanked by 1: loay
  • Maounique Host Rep, Veteran
    edited March 2023

    Many people complained about iSCSI over high latency, and I found that understandable, since the protocol is designed for low-latency, stable connections. That was what I thought until I used it myself and found it better than NFS.

    It was incredibly stable for my needs (WORM with rare changes, as is the case with backups), and I was even able to run a VM over a mount in France! It took minutes to load, of course, but it was not completely unusable.

    That being said, any database and similar running over iSCSI on high latency and unstable connections would probably shit itself but that is a heavy stretch which is probably unheard of in practice.
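    For anyone who wants to try the same, the client side with open-iscsi is roughly this (target address and IQN are placeholders):

        # discover targets exported by the remote box
        iscsiadm -m discovery -t sendtargets -p 203.0.113.10

        # log in to a discovered target; it then appears as a local block device (/dev/sdX)
        iscsiadm -m node -T iqn.2023-03.example:backup-lun -p 203.0.113.10 --login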

    Thanked by 2: loay, darkimmortal
  • @Maounique said:

    That being said, any database and similar running over iSCSI on high latency and unstable connections would probably shit itself but that is a heavy stretch which is probably unheard of in practice.

    Just for fun, and after reading about it in a different topic here on LET, I set up iSCSI between two VMs in different locations. Very cool to be able to share a raw block device like that. But with iSCSI being block-based, what is the possibility of corruption on a non-lossless connection? Could I wind up with a totally FUBAR, unmountable file system if something goes wrong?

    I don't worry about NFS or SSHFS because these are running on top of a remote file system and quite abstracted from the underlying storage.

  • Maounique Host Rep, Veteran
    edited March 2023

    @aj_potc said: Could I wind up with a totally FUBAR, unmountable file system if something goes wrong?

    It is highly unlikely; this is not bootable stuff (although it could have been), and it does not touch system files. At most, you would probably corrupt the file you were writing to, which is why I was thinking of databases, because there is a limited number of sectors involved. There can be extensive buffering and other techniques to mitigate it, but some risk is present.

  • rm_ IPv6 Advocate, Veteran
    edited March 2023

    @aj_potc said: But with iSCSI being block-based, what is the possibility of corruption on a non-lossless connection? Could I wind up with a totally FUBAR, unmountable file system if something goes wrong?

    iSCSI runs over TCP, which is a lossless connection. All lost packets are getting retransmitted until everything is received correctly. In case the link breaks down completely for a long period of time, that's similar to suddenly unplugging your USB drive. Modern filesystems should be mostly resilient against that, maybe only the data that was actively written will be partially lost. But the Linux storage/FS stack may be a bit brittle in some cases, and it is possible to get a "stuck" situation (with a device no longer accessible, but also unable to be unmounted), only solvable via a reboot.

  • Maounique Host Rep, Veteran

    @rm_ said: But the Linux storage/FS stack may be a bit brittle in some cases, and it is possible to get a "stuck" situation (with a device no longer accessible, but also unable to be unmounted), only solvable via a reboot.

    That happened to me over NFS so many times that I said, fuck it, I'd better use iSCSI, and it hasn't happened since. I am curious, did you have such a situation, and if you did, how often compared to NFS?

  • rm_ IPv6 Advocate, Veteran

    @Maounique said: That happened to me over NFS so many times that I said, fuck it, I'd better use iSCSI, and it hasn't happened since. I am curious, did you have such a situation, and if you did, how often compared to NFS?

    I use NBD, which is a much simpler version of iSCSI, and I've had the issue at least once. But since then I found there's a separate option to make it not hang indefinitely after all. Either way, I don't keep any such mount active 24x7 (which would increase the chances of catching network downtime), only for a short period each day to update incremental backups.

    For NFS, there is the "soft" mount option, which should also alleviate this to some extent.
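    Something like this, for example (server and timeout values made up; note that soft mounts can return I/O errors to applications instead of hanging, so they're safest for read-mostly use):

        mount -t nfs -o soft,timeo=100,retrans=3 backupbox:/export/backups /mnt/backups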

    Thanked by 1: Maounique
  • @Maounique said:

    @rm_ said: But the Linux storage/FS stack may be a bit brittle in some cases, and it is possible to get a "stuck" situation (with a device no longer accessible, but also unable to be unmounted), only solvable via a reboot.

    That happened to me over NFS so many times that I said, fuck it, I'd better use iSCSI, and it hasn't happened since. I am curious, did you have such a situation, and if you did, how often compared to NFS?

    I can't speak to iSCSI disconnections, but I've had quite good reliability with NFS when handling mounts with autofs, as opposed to mounting via /etc/fstab.

    Most of my use has been for scheduled backups, so the mount is used for a relatively short period once per day.
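    In case it helps anyone, a minimal autofs setup of the kind I mean looks like this (paths and server name are just examples):

        # /etc/auto.master — hand /mnt/nfs over to autofs, auto-unmount after 5 minutes idle
        /mnt/nfs  /etc/auto.nfs  --timeout=300

        # /etc/auto.nfs — "backups" under /mnt/nfs maps to the NFS export, mounted on first access
        backups  -fstype=nfs4,soft,ro  backupbox:/export/backups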

  • @rm_ said:

    @aj_potc said: But with iSCSI being block-based, what is the possibility of corruption on a non-lossless connection? Could I wind up with a totally FUBAR, unmountable file system if something goes wrong?

    iSCSI runs over TCP, which is a lossless connection. All lost packets are getting retransmitted until everything is received correctly.

    That's a good point, but I do worry about how other parts of the stack would react to a disconnection. When I'm running rsync over an NFS connection, I know that it won't leave me with silent corruption; it will simply fail to transmit the last file it was working on. I assume the same would be true of iSCSI, but I do worry about something going wrong with lower-level file system operations. While modern file systems are designed to recover from such sudden-disconnection scenarios, as you mentioned, I don't like the thought of testing that in production. :smile:

    It just feels like a protocol that's doing block-based operations could be more likely to foul something up than one that operates at a higher level. In other words, I'd feel a lot more confident pulling out a network cable during a transfer than I would pulling a SATA or SAS cable from an active hard drive. :smile:

  • Maounique Host Rep, Veteran
    edited March 2023

    @aj_potc said: In other words, I'd feel a lot more confident pulling out a network cable during a transfer than I would pulling a SATA or SAS cable from an active hard drive.

    It is not the same thing. The disk does not have the level of intelligent caching and buffering an entire OS can apply at many levels. There has been a lot of progress since MFM drives, but nowhere near what a modern OS can do regarding caching and error correction. There won't be half-written sectors in the FAT area, or any of the physical damage that yanking an electrical connector could cause.

    In short, yes, data corruption can occur in some usage scenarios and extreme cases of bad luck, and in most cases it is recoverable through various techniques after the fact. But rendering the whole storage unusable, as when the controller breaks in a physical disk or the data cable is yanked while specific areas are being written? No, not really.

  • dosai Member

    @rm_ said:

    @rm_ said: Check out sshfs; surprisingly enough, it deals with high latency better than some more advanced options. I tried mounting my server with 75ms ping to watch 1080p MKVs from the network share. I could not do that with either NFS or CIFS/SMB (both over VPN) without constant micro-interruptions, but over sshfs it plays without a single hitch.

    Maybe that's only true for data streaming (i.e. it pre-buffers well); depending on your use case it might not be much better than those for metadata ops. However, it has a lot of tunables for cache timeouts on everything (see the man page), so maybe those can help.

    Bump! Recently I got unhappy with sshfs:

    • it could not reach more than ~20 Mbit/s when copying a file from the mounted share
    • while it let me watch a 1080p MKV, it was very choppy when trying to watch Bluray-level files

    Someone suggested to use rclone mount instead: https://superuser.com/a/1717047

    And indeed, it works just the same as sshfs, except without any of the above speed limitations. The file copy speed now saturates my connection, and after adding --max-read-ahead 8192k (less could be enough) on the command line, the Bluray files are now watchable as well! Browsing the network share in a GUI is also much more responsive.

    Hi, which config with rclone? SFTP?

    https://rclone.org/docs/

  • rm_ IPv6 Advocate, Veteran

    @dosai said: Hi, which config with rclone? SFTP?

    Yes. I also set "disable_hashcheck = true"; I wasn't sure whether it would otherwise start rehashing entire multi-GB files before transfer.
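    For anyone following along, that option lives in the remote's section of rclone.conf, something like this (host, user and key path are placeholders):

        [homeserver]
        type = sftp
        host = 203.0.113.10
        user = media
        key_file = ~/.ssh/id_ed25519
        disable_hashcheck = true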

    Thanked by 2: dosai, let_rocks
  • dosai Member

    @rm_ said:

    @dosai said: Hi, which config with rclone? SFTP?

    Yes. I also set "disable_hashcheck = true"; I wasn't sure whether it would otherwise start rehashing entire multi-GB files before transfer.

    I will give it a try, thanks.
