need help and insight about huge visitor traffic spikes.


Comments

  • @bsdguy said:
    I admire you. There is just this little thing that this isn't about 5 ms or 25 ms or 50 ms per request but about 10k req/s. Good luck trying that with your droplet thingy.

    I'm not showing off response times. I'm just showing that there were no slowdowns and that everything stayed stable and responsive.
    That was with 3,000 req/s (as you can see from the green line), on a $20 droplet. If the OP gets a dedicated server with more resources, which is fairly easy, I'm sure he can get 10,000 to run without issues.
    Even if he can't, load balancing two servers that can each handle about 5k visitors when you peak at 10k should be an easier task than

    @emre said:
    so I need at least 20 servers per traffic spike.

  • The only way to deal with that many requests per second is memcached, nginx, and a better-designed PHP script that uses caching. An nginx reverse proxy can also use memcached as a backend, if I remember correctly.
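
    A rough sketch of that nginx + memcached pattern (the key scheme, ports, and backend address are assumptions; the PHP script would have to store rendered pages in memcached under the same key):

        # Serve a page straight from memcached if it's there; otherwise fall back to PHP.
        location / {
            set $memcached_key "$uri?$args";          # key the PHP app is assumed to write
            memcached_pass 127.0.0.1:11211;           # local memcached instance
            error_page 404 502 504 = @php_backend;    # cache miss, or memcached is down
        }

        location @php_backend {
            proxy_pass http://127.0.0.1:8080;         # placeholder PHP/Apache backend
        }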

  • bsdguy Member
    edited August 2017

    @muratai

    YESS! The TRUTH and the ONLY and ALWAYS VALID TRUTH is ... (yawn)

  • eva2000 Veteran
    edited August 2017

    @bsdguy said:
    YESS! The TRUTH and the ONLY and ALWAYS VALID TRUTH is ... (yawn)

    LOL .. yeah, there is no truth or only way - there are so many ways to do this and not one of them is the only correct way - get creative!

  • emre Member, LIR

    Updates about the situation:

    The coder was contacted today and given an exact description of the problem.

    It looks like he can re-code the basket stuff and make the site as static as possible.

    Once the coding updates are done and tested, I will make the site live again using 4+1 servers and some kind of "stupid" load balancing.
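
    For reference, a minimal sketch of what that "stupid" round-robin balancing could look like in nginx (the backend IPs are placeholders; the fifth box would run the balancer itself):

        upstream app_pool {
            # four app servers, plain round-robin
            server 10.0.0.11;
            server 10.0.0.12;
            server 10.0.0.13;
            server 10.0.0.14;
        }

        server {
            listen 80;
            location / {
                proxy_pass http://app_pool;
            }
        }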

    Let's see how it goes this time...

    @eva2000 I will be using Centmin Mod as I said before.

  • Nice @emre, interesting to see how the current Apache baseline scaling/benchmarks handled the load versus Centmin Mod's nginx config :) You said in testing you managed 2,000 requests/s with Apache?

  • For some easy wins: Enable noatime on your database partition. For recent kernel versions, enable BBR (https://www.cyberciti.biz/cloud-computing/increase-your-linux-server-internet-speed-with-tcp-bbr-congestion-control/). Profiling your DB query plans and adding indexes where required can be a huge performance boost.
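
    A quick sketch of those first two wins (the device, mount point, and filesystem are placeholders; BBR needs kernel 4.9+):

        # /etc/fstab - mount the DB partition with noatime so reads stop triggering metadata writes
        # /dev/sdb1  /var/lib/mysql  ext4  defaults,noatime  0  2

        # Enable BBR congestion control with the fq qdisc, then persist the settings in /etc/sysctl.conf
        modprobe tcp_bbr
        sysctl -w net.core.default_qdisc=fq
        sysctl -w net.ipv4.tcp_congestion_control=bbr
        sysctl net.ipv4.tcp_congestion_control   # should now report "bbr"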

    Some notes on benchmarking/testing: the average/mean latency is far less important than the 95th percentile, so optimize for the worst case. I use Apache Bench (ab) while profiling my services, and the histogram is useful. You might have locking/synchronization problems that worsen as the number of cores increases. For instance, Python has a global interpreter lock, and PHP might have something similar.
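
    For example, a run like this (URL and numbers are placeholders) prints a "Percentage of the requests served within a certain time" table, where the 95% and 99% rows are the ones worth optimizing:

        # 10,000 requests, 200 concurrent, keep-alive enabled
        ab -n 10000 -c 200 -k https://example.com/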

    Simpler server logic can be offloaded to special-purpose servers (https://lwan.ws/ with LuaJIT support). I believe Centmin Mod also comes with Lua support. The C10k problem (http://www.kegel.com/c10k.html) seems relevant.
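
    As a toy example of that offloading idea, assuming an nginx built with the ngx_lua module (e.g. OpenResty), a trivial endpoint can be answered without the request ever reaching PHP:

        # Answer a trivial endpoint directly from nginx's Lua module; PHP never sees it.
        location = /ping {
            default_type text/plain;
            content_by_lua_block {
                ngx.say("pong")
            }
        }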

  • WSS Member

    @rincewind said:

    Learn what the fuck you're doing.

    Well, that's both helpful, AND useful!
