nginx required resources per number of visitors?

afn Member

Hi there,

Is there any sort of formula for the amount of resources (RAM, CPU load, etc.) required for a given number of concurrent visitors? (Something like: X simultaneous connections would require one 2 GHz core and 1 GB of RAM.)

Assume no PHP/database-intensive tasks are being executed.

I am trying to estimate how many resources I need for two projects: the use case for one of them is "load balancing/reverse proxy" and the other is basic file hosting (again, for simplicity, assume files are served directly via nginx; do not take into account Nextcloud or whatever resources the CMS uses).

Thanks

Comments

  • edited November 2021

    @afn said:
    Is there any sort of formula for the amount of resources (RAM, CPU load, etc.) required for a given number of concurrent visitors? (Something like: X simultaneous connections would require one 2 GHz core and 1 GB of RAM.)

    Not really, no. There are so many possible variables involved in how your applications are executed and configured, and in the pattern of data they'll contain and the use they will see, that it is practically impossible to break it down to a simple formula that isn't simplified to the point of not really being useful at all.

    A common way to gauge these needs is load testing (see https://en.wikipedia.org/wiki/Load_testing and search for the phrase elsewhere) and other benchmarks with different use patterns and configurations, to see how your particular applications behave in given environments. To avoid signing up for a bunch of VPSes, you can run VMs locally to perform most of these tests, though some testing over the public network is a good idea too, as latency and other limits can make a difference to response characteristics.
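
    As a rough, hypothetical illustration only (a purpose-built tool such as wrk, ab or autocannon is a better choice in practice; the URL and numbers below are made-up placeholders), a minimal Python sketch of such a concurrent test against a box you control might look like this:

        # Minimal load-test sketch (illustration only; URL and numbers are placeholders).
        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL = "http://127.0.0.1/"   # hypothetical target; point it at your own test box
        CONCURRENCY = 64            # simultaneous worker threads
        REQUESTS = 5000             # total requests to issue

        def fetch(_):
            # Each call opens a fresh connection and reads the whole response body.
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
                return resp.status

        start = time.time()
        with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            statuses = list(pool.map(fetch, range(REQUESTS)))
        elapsed = time.time() - start

        ok = sum(1 for s in statuses if s == 200)
        print(f"{ok}/{REQUESTS} OK, {REQUESTS / elapsed:.0f} req/s in {elapsed:.1f}s")

    Because each request here opens a new connection and Python threads are relatively heavy, treat the printed figure as a floor rather than as what nginx can actually deliver.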

    Of course, while systematic load tests are a common way to judge this, the most common way is just to stick the work on a cheap server/VM and see what happens (perhaps complaining to the provider if it doesn't seem fast enough!).

    Thanked by dahartigan and AndrewL64
  • Perhaps you could use the experience of WordPress.com as a starting point:

    Load Balancer Update
    https://barry.blog/2008/04/28/load-balancer-update/

    "Only software we tested which could handle 8000 (live traffic, not benchmark) requests/second on a single server"

  • If you're not doing any backend operations (i.e. no application server or database), you can easily get thousands of requests per second on 1 GB of RAM and 1 CPU.

    If you're serving large files over secure connections, the main bottlenecks would be I/O, network bandwidth, and encryption overhead.

    I did a quick test on a DigitalOcean 1GB/1CPU ($5) droplet and I got around 5k req/sec with 128 concurrent connections using autocannon for the nginx default page without SSL.
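
    If the concern is large static files over TLS rather than small pages, a similar sketch (again only an illustration; the URL is a made-up placeholder for a test archive you host yourself) can report throughput instead of request rate:

        # Rough throughput check for one large static file.
        # Assumption: FILE_URL points at a test object on your own server and
        # the HTTPS certificate is valid.
        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        FILE_URL = "https://files.example.com/test-1g.bin"  # hypothetical test object
        DOWNLOADS = 4                                        # parallel downloads

        def download(_):
            bytes_read = 0
            with urllib.request.urlopen(FILE_URL, timeout=600) as resp:
                while True:
                    chunk = resp.read(1024 * 1024)  # stream 1 MiB at a time
                    if not chunk:
                        break
                    bytes_read += len(chunk)
            return bytes_read

        start = time.time()
        with ThreadPoolExecutor(max_workers=DOWNLOADS) as pool:
            total_bytes = sum(pool.map(download, range(DOWNLOADS)))
        elapsed = time.time() - start

        print(f"{total_bytes / 1e6:.0f} MB in {elapsed:.1f}s = {total_bytes / 1e6 / elapsed:.1f} MB/s")

    Watching CPU, disk and network utilisation on the server while it runs is what actually shows which of the three is the limit.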

  • Even if two CPUs have the same clock speed, the performance can be very different depending on the CPU architecture (branch predictors, for example). Also, different applications require different resources. Knowing how many resources a service needs is why experience is gold.

  • Thanks for the answers and the links all.

    Even if two CPUs have the same clock speed...

    too much precision

    @MeAtExampleDotCom said: There are so many possible variables involved in how your applications are executed and configured, and in the pattern of data they'll contain and the use they will see

    I perfectly understand these two comments, but I am not looking for exact science here. If you tell me server X with config Y can do roughly 100 visitors, I won't complain if it only does 70-80, but at least it would be weird if it handled only 10 and crashed!

    I am mainly looking for reasonably tight lower bounds.

    As I mentioned in my post, there are usually a lot of variables, but mine are very simplified: one of my use cases is only a reverse proxy and the other is only serving static archived files (the DB and similar variables are handled by the backend, which I don't care about for now).
    When serving static files, what could the "lots of variables" possibly be?

    That works fine if there is no question :wink:

  • @ehhthing said: If you're serving large files over secure connections, the main bottlenecks would be I/O, network bandwidth, and encryption overhead.

    Yes, indeed, I thought about this for the archive server; that's why I was thinking a RAID config might be necessary instead of JBOD.

    I did a quick test on a DigitalOcean 1GB/1CPU ($5) droplet and I got around 5k req/sec with 128 concurrent connections using autocannon for the nginx default page without SSL.

    Thanks for sharing this info

  • You have to test it yourself through your website, as not all pages have the same number of queries or amount of logic.

  • @afn said:

    @MeAtExampleDotCom said: There are so many possible variables involved in how your applications are executed and configured, and in the pattern of data they'll contain and the use they will see

    I perfectly understand these two comments, but I am not looking for exact science here. If you tell me server X with config Y can do roughly 100 visitors, I won't complain if it only does 70-80, but at least it would be weird if it handled only 10 and crashed!

    Unfortunately in the general case it isn't practical to be even that accurate.

    When serving static files, what could the "lots of variables" possibly be?

    Even ignoring uncontrollable things like noisy neighbours in a shared environment, there are a large number of differences to track: CPU (cores, speed, cache, ...), RAM (amount, speed), I/O (disk throughput, latency, RAID?, locally mounted or a SAN serving many hosts?), and that is before considering anything about what your application is actually doing: even just handing out static content has a pile of variables (what is serving it (Apache, nginx, your own code, ...), the size of objects, the likelihood of cache hits (a mix of app/data properties and the amount of RAM), ...).

    Any general result is going to have error bars so wide that it is rendered meaningless, and any attempt to collect a useful set of more specific data would likely be both a gargantuan effort and a rapidly moving target. If it were easy to give a good general answer, or collection of benchmarks, you would find many, many hosting articles (and click-bait blog-spam copies) talking about it (or, in the more likely case that there were several common answers/sets, arguing with religious fervour as to why their favourite is best and the others less useful).

    I did a quick test on a DigitalOcean 1GB/1CPU ($5) droplet and I got around 5k req/sec with 128 concurrent connections using autocannon for the nginx default page without SSL.

    Thanks for sharing this info

    That is the sort of test you can do yourself, but with content more akin to what you expect to serve (the nginx default page is unlikely to be a close match). Running a basic test against a specific site/app is many orders of magnitude easier than trying to come up with useful generic answers.
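
    For example (hypothetical URLs again, with the same caveats as the sketches above), running the same loop against both the default page and a more representative asset makes the difference content choice makes visible directly:

        # Compare request rates for different content on the same box.
        # Assumption: both URLs are test endpoints you control; only the
        # relative numbers are meaningful.
        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URLS = {
            "default page": "http://127.0.0.1/",                 # hypothetical
            "typical asset": "http://127.0.0.1/sample-5mb.bin",  # hypothetical
        }
        CONCURRENCY = 32
        REQUESTS = 500

        def fetch(url):
            with urllib.request.urlopen(url, timeout=60) as resp:
                resp.read()

        for name, url in URLS.items():
            start = time.time()
            with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
                list(pool.map(fetch, [url] * REQUESTS))
            print(f"{name}: {REQUESTS / (time.time() - start):.0f} req/s")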

    Thanked by afn