New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
I'm not showing off response times. I'm just showing that there were no slowdowns and that everything stayed stable and responsive.
That was done at 3,000 req/s (as you can see from the green line), on a $20 droplet. If OP gets a dedicated server with more resources, which is fairly easy, I'm sure he can get 10,000 to run without issues.
Even if he can't, load balancing two servers that can each handle about 5k visitors when you peak at 10k should be an easier task than
The only way to deal with that many requests per second is memcached plus nginx plus a better-designed PHP script that uses caching. If I remember correctly, an nginx reverse proxy can also use a memcached backend.
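For what it's worth, nginx's stock `ngx_http_memcached_module` does support this: it can serve responses straight out of memcached and only fall back to the PHP backend on a cache miss. A minimal sketch (the backend name and memcached address are placeholders, and the PHP app is assumed to populate the cache itself, since this module only reads):

```nginx
server {
    listen 80;

    location / {
        # Key must match whatever the PHP script stores under.
        set $memcached_key "$uri?$args";
        memcached_pass 127.0.0.1:11211;
        # On a miss (404) or memcached trouble, fall through to PHP.
        error_page 404 502 504 = @app;
    }

    location @app {
        # Hypothetical upstream running the actual PHP application.
        proxy_pass http://php_backend;
    }
}
```

Note the module is read-only: the application has to write rendered pages into memcached under the same key, or the fallback will be hit every time.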
@muratai
YESS! The TRUTH and the ONLY and ALWAYS VALID TRUTH is ... (yawn)
LOL .. yeah, there is no single truth or only way - there are so many ways to do this and not one of them is the only correct way - get creative!
Updates about the situation:
I contacted the coder today and gave him an exact description of the problem.
It looks like he can re-code the basket logic and make the site as static as possible.
Once the coding updates are done and tested, I will make the site live again using 4+1 servers and some kind of "stupid" load balancing.
Let's see how it goes this time...
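For a "stupid" (plain round-robin, no health-check smarts) 4+1 setup, the simplest option is probably one nginx front box proxying to the four app servers. A rough sketch, with entirely hypothetical backend IPs:

```nginx
# Front server (+1): dumb round-robin across the 4 app servers.
upstream app_servers {
    server 10.0.0.11;
    server 10.0.0.12;
    server 10.0.0.13;
    server 10.0.0.14;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Round-robin is nginx's default balancing method, so no extra directives are needed; nginx also temporarily skips a backend that stops responding, which is about as much intelligence as "stupid" load balancing needs.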
@eva2000 I will be using centmin mod as I said before..
Nice @emre - interesting how your current Apache baseline scaling/benchmarks handled the load versus Centmin Mod's nginx config. You said in testing you managed 2,000 requests/s with Apache?
For some easy wins:
- Enable `noatime` on your database partition.
- For recent kernel versions, enable BBR (https://www.cyberciti.biz/cloud-computing/increase-your-linux-server-internet-speed-with-tcp-bbr-congestion-control/).
- Profiling your DB query plans and adding indexes where required can be a huge performance boost.

Some notes on benchmarking/testing:
- The average/mean latency is far less important than the 95th percentile. Optimize for the worst case. I use Apache Bench (`ab`) while profiling my services, and the histogram is useful.
- You might have locking/synchronization problems that worsen as the number of cores increases. For instance, Python has a global interpreter lock, and PHP might have something similar.
- Simpler server logic can be offloaded to special-purpose servers (https://lwan.ws/ with LuaJIT support). I believe Centmin Mod also comes with Lua support.
- The C10k problem (http://www.kegel.com/c10k.html) seems relevant.
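To illustrate the mean-vs-p95 point: a single slow outlier barely moves the mean past "fine" while the 95th percentile exposes it. A small sketch (the latency numbers are made up, e.g. as you might parse them from `ab -g` gnuplot output):

```python
import math

# Hypothetical request latencies in milliseconds; one outlier.
latencies = [12, 14, 15, 13, 16, 14, 15, 900, 13, 14]

def percentile(data, p):
    """Nearest-rank percentile: the value at rank ceil(p/100 * n) when sorted."""
    s = sorted(data)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

mean = sum(latencies) / len(latencies)
p95 = percentile(latencies, 95)
print(f"mean={mean:.1f}ms p95={p95}ms")  # mean looks tolerable, p95 does not
```

Here the mean is 102.6 ms, but the p95 is 900 ms - that tail is what 1 in 20 of your visitors actually experiences, which is why it's the number to optimize.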
Well, that's both helpful, AND useful!