How many real-time users can hosting handle?

Hi,
Some of my customers keep asking me the same question: "How many real-time users can it handle?" I know it depends on the type of website and a few other factors. I'm sure many of you get the same question. What do you answer them? P.S. Most of them measure real-time users with Google Analytics, which isn't very accurate.
Any tips would be appreciated.

TIA

Comments

  • AnthonySmith Member, Patron Provider
    edited March 2017

    Simple answer: 4

    Real answer: it's impossible to answer in a multi-tenant environment where loads and use cases simply cannot be predicted in real time; you monitor performance and adapt as required.

    It's like asking how many steps you will take next month versus how many steps you could take next month; you simply cannot answer that.

    It's not a few factors, it's thousands of factors, most of them unpredictable and changing in real time.

  • @AnthonySmith By real-time users I mean users visiting the site simultaneously. Suppose it takes the webserver 1 second to generate and serve the content for one page, and 5 visitors hit the site in that same second; for that 1 second there are 5 users on at once.
    Or, if I am wrong, should visitors actively reading content on the site be counted? When customers refer to real-time, I think they mean visitors reading content at the same time. Anyway, is there any set of answers you give to customers when needed?

  • Nginx on an i7-4790K / 32GB RAM / Samsung SSD PRO can handle close to ~30,000 requests per second over plain HTTP with static 5KB HTML files. When you switch from HTTP to HTTPS, that drops dramatically to ~5,000 requests. With php-fpm and HTTPS, it's less than 1,000 requests per second.
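
    Something like this with ApacheBench gives comparable numbers for your own box (the hostname and file paths below are just placeholders, point them at whatever the server actually serves):

        # static 5KB HTML over plain HTTP
        ab -n 10000 -c 100 http://example.com/static.html

        # the same file over HTTPS (TLS handshakes burn CPU)
        ab -n 10000 -c 100 https://example.com/static.html

        # a php-fpm page over HTTPS
        ab -n 2000 -c 100 https://example.com/index.php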

  • jetchirag said: Anyway, is there any set of answers you give to customers when needed?

    It still depends very much on the software you are running, the content you are serving, the type of server, the distance to clients, etc. If your customers don't comprehend this, they are probably running WordPress with hundreds of plugins, so assume the worst.

  • @mrmoor said:
    Nginx on an i7-4790K / 32GB RAM / Samsung SSD PRO can handle close to ~30,000 requests per second over plain HTTP with static 5KB HTML files. When you switch from HTTP to HTTPS, that drops dramatically to ~5,000 requests. With php-fpm and HTTPS, it's less than 1,000 requests per second.

    I never knew this made such a big difference.

  • @jetchirag Yes, it's a huge difference. SSL uses a lot of CPU cycles, and so does PHP. So the better way is to cache the pages PHP generates as static HTML. Nginx also has a nice option where you can gzip a generated file once and send it in response to every request, without losing more CPU cycles compressing on the fly.
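
    A rough sketch of what that looks like in an nginx config (the cache zone name, socket path and cache directory below are made up, adjust to your own setup):

        # in the http {} block: compress once, serve a pre-built .gz instead of gzipping every request
        gzip_static on;

        # cache zone for pages generated by php-fpm
        fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=PHPCACHE:10m inactive=60m;

        # inside the server {} block
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/run/php/php-fpm.sock;   # your php-fpm socket here

            # serve cached copies instead of hitting PHP on every request
            fastcgi_cache PHPCACHE;
            fastcgi_cache_key "$scheme$request_method$host$request_uri";
            fastcgi_cache_valid 200 10m;
        }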

  • AnthonySmith Member, Patron Provider

    OK, give me the page load size and the size of every asset on every page, full hardware and network specs, the full php/mysql/httpd config, cache config, nameserver details, and an average from the last 30 days of the type of device used, browser, country and city the users are from; break every day down into 5-second chunks and show me how many hits there were and which pages they accessed. I also need to know the bounce rate and details of any caching.

    Then you need to pay me $30,000, because it's going to take me at least 6 months of solid work to give you some figures that might be about 50% accurate, which will only be valid for the time period you gave me and will change without a doubt :)

    There are literally thousands of things that affect this; the reality is, it is not a question that can be answered.

    The best way to get 'a number' is to shut down all but 1 site, use some benchmarking scripts and see how much it can handle. It's a bit artificial, but it will give you a number.

    Here are some tools: http://xmodulo.com/web-server-benchmarking-tools-linux.html
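
    For example, something like this with wrk against the one site you left running (the URL is a placeholder):

        # 4 threads, 100 open connections, 30 seconds
        wrk -t4 -c100 -d30s http://example.com/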

  • @AnthonySmith You can't predict it.
    Simple example:
    MySQL.

    1) There is a huge difference between reading records sequentially and using filtering/ordering. The bigger the table is, the worse the results you'll get.

    2) You can get a high rate of sequential reads, but what happens when there is just one UPDATE on a huge table (especially with TEXT columns)? I'll tell you. MySQL with a MyISAM table blocks any other I/O operations on that table until the write is done. So if that UPDATE takes 1s, you will get just 1 request per second. In that case, no matter how many cores you have, each one will be waiting for the UPDATE to finish.
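
    Quick illustration (the table and column names are made up):

        -- session 1: one big write on a MyISAM table takes a table-level lock
        UPDATE posts SET body = CONCAT(body, ' [edited]') WHERE id < 100000;

        -- session 2, at the same time: this read waits until the UPDATE finishes
        SELECT COUNT(*) FROM posts WHERE author_id = 42;

        -- InnoDB locks rows instead of the whole table, so readers are not blocked like this
        ALTER TABLE posts ENGINE=InnoDB;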

  • AnthonySmith Member, Patron Provider

    @mrmoor that was kind of the point I was trying to 'over' make.

  • Multiple experiments at diverse universities (the first one being in southern France, iirc) clearly demonstrate that there is no real time.

    That said, the correct answer to OP's question is "On what day of the week?"

  • PepeSilvia Member
    edited March 2017

    Wouldn't you be able to give a more accurate estimate after performing a few load tests with services like Blitz or Loader.io?

    Granted, they mostly measure how many visitors arrive at the site, and there's no further interaction to more accurately simulate the typical user experience, but it's still better than nothing!
