What do you do to handle sudden bursts of traffic?

One of my clients had a TV commercial this weekend and contacted me about it a week before.

Because I had already set the shop up with caching and failover, it was not a problem: whereas they normally get about 8,000 unique hits a day, there were about 190,000 this weekend. Load rose a little, but everything held up quite well; speedy, even.

I have seen it go the other way, where 30% more hits crash the site. (Yes, you, Magento.) What do you guys do when a situation like this comes up? Upgrade the VPS for more specs? Set up caching (memcached/Redis, etc.)? Scale out to more servers? Cloudflare? And why?

Comments

    • cloudflare
  • edited March 2015

    @Raymii said:
    One of my clients had a TV commercial this weekend and contacted me about it a week before. [...]

    1. Start caching as much as possible to reduce load. Placing Varnish in front of HAProxy does wonders if you have enough RAM and strip cookies where needed.

    2. Always be prepared to spin up for more load. Set up some ready-to-go images of the server or application needed to run the site. Test the VMs beforehand and make sure they work. A central repository such as git may be useful if you don't want to have to update your VMs each time you change something.

    3. If it uses a MySQL backend, test whether master-master replication on each VPS is fast enough or if a dedicated MySQL cluster is needed. If it's a DB-heavy application, see if setting up a slave for read-only requests is possible, to reduce load on the master. I recommend Percona XtraDB Cluster either way, which requires at least 3 servers. After the 3 servers, it is relatively easy to create an image that can be added to the cluster at will using automation tools. HAProxy can balance traffic between the MySQL servers to make sure all receive equal traffic.

    4. Stuff another HAProxy in front of everything (or create two HAProxies for redundancy, etc.) and create alarms with whatever monitoring server you use for each of the backend servers. When the load gets to a certain point, spin up a new VM using the image. Make sure everything is synced (again, I recommend using an automation tool for this), test that the site works, and add it to HAProxy as a lower-priority server. If errors start popping up, yank it back out of HAProxy and see what is wrong. If nothing, let it be and set it to equal priority.
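    The cookie stripping in item 1 can be expressed in a few lines of VCL. This is only a sketch for Varnish 4; the /static/ prefix, extension list, and 1-hour TTL are assumptions to adapt to your shop:

```
# Varnish 4 VCL sketch: strip cookies on static assets so Varnish can cache them.
# The /static/ path and extension list are assumptions; adjust for your site.
sub vcl_recv {
    if (req.url ~ "^/static/" || req.url ~ "\.(css|js|png|jpg|gif|woff)$") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    if (bereq.url ~ "^/static/") {
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 1h;   # assumed TTL; tune per asset type
    }
}
```

    Without this, a single session cookie is enough to make Varnish pass every request straight to the backend.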
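    Items 3 and 4 can both live in one HAProxy config. A rough sketch (all hostnames, IPs, and weights are made up for illustration): TCP-mode balancing across a 3-node Percona cluster, and a web backend where a freshly spun-up VM enters at low weight until it proves healthy:

```
# HAProxy sketch; IPs, names and weights are placeholders.

# Item 3: TCP load balancing across a 3-node Percona XtraDB Cluster
listen mysql
    bind 127.0.0.1:3306
    mode tcp
    balance leastconn
    server db1 10.0.0.11:3306 check
    server db2 10.0.0.12:3306 check
    server db3 10.0.0.13:3306 check

# Item 4: new VMs join with a low weight first
backend web
    balance roundrobin
    server web1 10.0.0.21:80 check weight 100
    server web2 10.0.0.22:80 check weight 100
    # freshly spun-up image, kept at low priority until it proves itself
    server web3 10.0.0.23:80 check weight 10
```

    Raising `weight` on the new server once it behaves is the "set it to equal priority" step from item 4.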

    Thanked by aglodek, nitrosrt10
  • rm_ IPv6 Advocate, Veteran
    edited March 2015

    All of this should be done long before you get a burst of traffic like that. Especially caching: it's the simplest thing ever. WTF were you thinking if you weren't doing it in the first place?

    The last thing you want to do when your site is in its 15 minutes of fame (got posted on a news site, etc.) is to start mucking about with MySQL master-master replication, experimenting with adding proxies everywhere, and so on.

    About the only feasible answer as far as emergency measures go is to just get behind CloudFlare.

  • aglodek Member
    edited March 2015

    Here's what I'm putting together for similar scenarios:

    (1) Multiple Varnish (or your preferred equivalent) reverse proxies as frontends

    (2) DigitalOcean or an equivalent provider with an API, allowing you to automagically spin up additional droplets/reverse proxies (see item (6) below)

    (3) at least 2 synced webserver backends

    (4) MariaDB or Percona + Galera master-master replication DB backend (minimum 3 nodes, but using VMs I recommend at least 4 nodes; I'm going with 5)

    (5) GeoDNS - if you have a global audience

    (6) Ansible (or your preferred equivalent) + monitoring - to automate and run everything from. IMHO, too complex a setup to run/react to events manually.

    At first glance this looks (and is) complicated, but with the right plan in place it can, and in fact should, be set up incrementally.
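    Item (2) can be driven from the DigitalOcean v2 API. A minimal sketch that only builds the droplet-creation request (the image slug, region, and size here are placeholders, and nothing is sent; actually firing it needs a real API token):

```python
import json
import urllib.request

API_URL = "https://api.digitalocean.com/v2/droplets"

def droplet_request(name, token, image="my-proxy-snapshot",
                    region="ams3", size="s-1vcpu-1gb"):
    """Build (but don't send) a droplet-creation request for the DO v2 API.

    image/region/size are placeholder values; use your own snapshot and region.
    """
    payload = {"name": name, "region": region, "size": size, "image": image}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + token,
        },
        method="POST",
    )

req = droplet_request("proxy-02", token="YOUR_TOKEN")
print(req.full_url)                  # https://api.digitalocean.com/v2/droplets
print(json.loads(req.data)["name"])  # proxy-02
```

    In practice you would let Ansible or your monitoring hooks (item (6)) call this when an alarm fires, then add the new droplet to the proxy pool.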

  • You basically want to cache at every level you can. I've managed a lot of high traffic WordPress sites where you could expect thousands of visits to hit within minutes (tech news sites during announcements/live blogs/etc).

    A lot of it depends on the type of site, but for WP I implement opcode caching, DB/Object caching with memcached, and then create static pages and make sure the TTL/Rebuild time is appropriate for the type of site. And depending on the site, I'll sometimes serve the static files from a ram disk.

    It all depends on the type of site/application though. If the DB is getting hit a ton you're going to need to spread the load out more across multiple boxes, etc. A pull CDN is also easy to implement and will take load off the main web server.
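    For the WordPress case above, the static-page layer with a TTL can be done entirely in nginx with its FastCGI cache. A sketch (the paths, zone name, and 10-minute TTL are assumptions; a real WP setup also needs cache-bypass rules for logged-in users and carts):

```
# nginx page-cache sketch for WordPress; paths and TTLs are placeholders.
# /var/cache/nginx can be mounted as tmpfs to serve the cache from RAM.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WPCACHE:64m inactive=60m;

server {
    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm.sock;
        include fastcgi_params;
        fastcgi_cache WPCACHE;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;   # the TTL/rebuild time mentioned above
    }
}
```

    The TTL is the knob to tune per site: a news site during a live blog might want 30 seconds, a shop front page can often take minutes.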
