how to speed up plexdrive + nginx with many concurrent users streaming mp4 videos

pike Veteran
edited January 2021 in Help

Hi all, since many people here use plexdrive and even more know how to use nginx, I'm bringing my issue here.

I got a Debian 10 VPS: 2x dedicated E5 cores, 12GB RAM, 50GB SSD, 1Gb/s

On that I run plexdrive that mounts my gdrive to a directory. This directory is then used as webroot for my nginx webserver so people can stream the files. All clients use chromium and sync their playback.

The issue starts when either >6 different files are accessed/streamed or >40 viewers access/stream a single file. The stream stutters for all users, and if they refresh the browser the video won't buffer. Since the files have bitrates of <2500 kbit/s and there are <100 peak viewers, I have no worries about bandwidth. I also encoded the mp4s with the +faststart flag to allow efficient streaming. My guess is that plexdrive serves each individual request coming from the nginx webserver in some inefficient way.

Ideally I would like to configure the nginx cache so that it always caches a good bit ahead of the "chunks" currently being requested and serves all client requests out of the cache.

This is my current nginx and nginx-cache-proxy config that creates the issues I described above: https://pastebin.com/raw/zjXjePbY
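For reference, the "cache ahead" behavior described above could be approximated with nginx's slice module in the caching proxy: byte-range requests are split into fixed-size slices, each slice is cached once, and proxy_cache_lock collapses concurrent misses for the same slice into a single upstream read. A minimal sketch (zone name, paths, ports, and sizes are placeholders, not taken from the pastebin config):

```nginx
# Hypothetical caching-proxy vhost sitting in front of the vhost that
# serves the plexdrive mount. All names and sizes are illustrative.
proxy_cache_path /var/cache/nginx/mp4 levels=1:2 keys_zone=mp4cache:100m
                 max_size=20g inactive=6h use_temp_path=off;

server {
    listen 80;

    location / {
        slice              1m;                  # split range requests into 1 MiB slices
        proxy_cache        mp4cache;
        proxy_cache_key    $uri$is_args$args$slice_range;
        proxy_set_header   Range $slice_range;  # request only the current slice upstream
        proxy_http_version 1.1;
        proxy_cache_valid  200 206 6h;
        proxy_cache_lock   on;                  # collapse concurrent misses into one read
        proxy_cache_lock_timeout 10s;
        proxy_pass         http://127.0.0.1:8080;  # backend vhost rooted on the fuse mount
    }
}
```

With this, each slice would be read from the plexdrive mount at most once regardless of viewer count. It is not true read-ahead, but in practice the next slice is usually requested while the current one is still being played.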

Thank you for taking time to read into my issue! Hopefully someone has an idea or faced the same situation.

Edit: It may be important that I'm serving multiple copies of the same mp4 with different audio tracks, based on rewrite rules and the nginx GeoIP module (each language has a subdirectory of /var/www/html; filenames are the same).
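For context, a language selection like the one described (GeoIP country code mapped onto a per-language subdirectory with identical filenames) is typically done with a map block; the directory names and country codes below are made up for illustration:

```nginx
# Assumes nginx was built with ngx_http_geoip_module and a GeoIP
# country database is installed; paths and languages are examples.
geoip_country /usr/share/GeoIP/GeoIP.dat;

map $geoip_country_code $lang_dir {
    default en;   # fallback language
    DE      de;
    AT      de;
    FR      fr;
}

server {
    listen 80;
    # same filenames exist in each per-language subdirectory
    root /var/www/html/$lang_dir;
}
```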

Comments

  • Try adding

        aio threads;
        mp4;
    

    in the server section. Also, have you played around with the mp4_buffer_size settings etc. yet?

    Thanked by 1pike
  • pike Veteran

    @FoxelVox I tried enabling aio threads but I get this error message: "aio threads" is unsupported on this platform [...]
    Do I have to enable mp4 and mp4 buffer in the proxy or the webserver that actually serves the files?
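    For what it's worth, those directives belong in the vhost that actually reads the files (the one rooted on the plexdrive mount), not in the caching proxy, since the proxy never parses the mp4 container. A sketch with illustrative buffer sizes; note that `aio threads` only works if nginx was built with `--with-threads`, which matches the "unsupported on this platform" error above:

    ```nginx
    # Backend vhost serving the plexdrive/fuse mountpoint directly.
    # Mountpoint and buffer sizes are hypothetical; tune against real bitrates.
    server {
        listen 127.0.0.1:8080;
        root /mnt/gdrive;               # hypothetical plexdrive mountpoint

        location ~ \.mp4$ {
            mp4;                        # enable pseudo-streaming (?start= seeks)
            mp4_buffer_size     1m;     # initial buffer per request
            mp4_max_buffer_size 10m;    # cap for moov-atom processing
            # aio threads;              # requires nginx built with --with-threads
            sendfile on;
        }
    }
    ```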

  • yoursunny Member, IPv6 Advocate
    edited January 2021

    You can switch to my video app:
    https://github.com/yoursunny/NDNts-video
    https://github.com/yoursunny/NDNts-video-server

    Demo sites:
    https://ndnts-video.ndn.today/
    https://pushups.ndn.today/

    We use the Named Data Networking global network to deliver content. If multiple users are watching the same video clip, the server only sends the content once, while the NDN network can multicast content to all the viewers. This greatly saves server resources and bandwidth.

    Videos are prepared with Shaka Packager. It's possible to deliver different audio track based on location through client side logic in JavaScript.

    Thanked by 1pike
  • I don't know the solution to this particular problem, but if the end result is that the files are accessed via NGINX may I suggest something like this: https://github.com/alx-xlx/goindex

    The files will be served via Cloudflare in front of Google Drive direct download links, which avoids needing a VPS.

    Thanked by 2pike ferri
  • pike Veteran

    Thank you @yoursunny, unfortunately all clients run chromium (embedded in another piece of software), which pretty much limits me to the codecs it supports.

    @CyberneticTitan that looks interesting, if I can acquire direct links over cloudflare that cannot be linked back to my gsuite that'd be superb. I will have a look at this!

  • yoursunny Member, IPv6 Advocate

    @pike said:
    Thank you @yoursunny, unfortunately all clients run chromium (embedded in a different software) which pretty much limits me to stick with the available codecs.

    My app can support any codec compatible with Shaka Packager.
    The VP9 codec works in Chromium, but H264 would require "Google Chrome".
    https://pushups.ndn.today/ demo site has at least one VP9 video, but I forgot which.

    The main limitation is that it doesn't work on iOS, because iOS lacks Media Source Extensions.
    iOS could play HLS content, but not from WebSockets.

    Thanked by 1pike
  • dfroe Member, Host Rep
    edited January 2021

    As your whole setup is "rather complex" (meaning you have multiple components interacting with each other), have you already verified which component is the root cause of your issues? IMHO this would be the most important first step: know what to tune before you start tuning.

    It's just an educated guess, but based on your description I could imagine that the FUSE-mounted gdrive via plexdrive is the worst-performing bottleneck in this setup. Maybe you can verify this by debugging plexdrive/FUSE and checking whether you are seeing unnecessary reads on the filesystem?

    If you confirm this is the case, getting rid of the unnecessary reads should be an effective optimization. I don't have any experience with nginx_cache myself, but maybe it is not caching very well, leading to unnecessary file reads on the FUSE mountpoint?

    It may sound a bit old school, but you could give nginx + squid a try. My personal experience with squid is quite good. Some things might look a bit old-fashioned from today's perspective, but once you know the most important knobs, squid is very powerful in terms of configuration. It can cache in memory and on disk and allows fine-tuned proxy caching policies to reduce requests towards the backend.

    You would need an HTTP backend for squid. I don't know much about mounting gdrive, but if there is no better way, a very simple and lightweight nginx vhost on localhost would be sufficient. You can configure squid as a reverse proxy using this HTTP backend and point your main nginx instance at squid.

    It may sound a bit complicated, and maybe it is. But with squid in between you would have a tool that lets you minimize requests to the FUSE mountpoint as much as possible. Properly configured, you will end up with a single read attempt per file/chunk, no matter how many users are accessing it.
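    The squid setup sketched above might look roughly like this; the ports, cache sizes, and backend address are assumptions, and `collapsed_forwarding` plus `range_offset_limit` are the knobs that reduce reads on the FUSE mount to one per object:

    ```
    # Hypothetical squid.conf fragment: squid as reverse proxy between the
    # public nginx and a lightweight nginx backend on the fuse mount.
    http_port 3128 accel defaultsite=localhost
    cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=backend

    cache_mem 4096 MB                  # keep hot chunks in RAM
    cache_dir ufs /var/spool/squid 20000 16 256
    maximum_object_size 4 GB           # allow full-size mp4 files to be cached

    collapsed_forwarding on            # one backend fetch for concurrent misses
    range_offset_limit -1              # fetch the whole object on range requests
    quick_abort_min -1 KB              # keep fetching even if a client aborts
    ```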

    That's what I would do. :)

    Thanked by 2default pike
  • pike Veteran

    Thank you David for your educated guess, that's exactly the sort of knowledge I was looking for. I'm testing a new nginx configuration at the moment; the high-load test will likely happen this weekend. If the issues keep appearing, I'll definitely try it with a squid proxy in front of nginx.

    The nginx config I'm testing now, which seems to have helped: https://pastebin.com/raw/M2m7PnaD

    Thanked by 1dfroe