Comments
It is proxy_cache (for caching files from my central server), which is a disk-based cache AFAIK, not an in-memory cache.
This is how the data actually flows:
Central server with 12x4TB HDDs -> data is cached by proxy_cache on a production server (capped at 3TB via the nginx config file) -> files are served via HTTP GET requests.
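For context, a setup like the one described above is usually just a `proxy_cache_path` plus a `proxy_pass` to the origin. A minimal sketch (the paths, zone name, and origin hostname here are made up for illustration, not taken from the thread; nginx size suffixes top out at `g`, so 3TB is written as `3000g`):

```nginx
# Disk-backed cache, capped at ~3TB, entries evicted after 30 days idle.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=central:100m
                 max_size=3000g inactive=30d use_temp_path=off;

server {
    listen 80;

    location / {
        # Hypothetical origin -- stands in for the 12x4TB central server.
        proxy_pass http://central.example.com;
        proxy_cache central;
        proxy_cache_valid 200 7d;                 # cache successful responses
        add_header X-Cache-Status $upstream_cache_status;  # HIT/MISS for debugging
    }
}
```

The `X-Cache-Status` header makes it easy to confirm from a client whether a given request was a cache hit or had to go back to the origin.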
The setup is a bit weird to me. On a cache miss, the production server has to ask the main server for the content, then pass it to the client. That's disk I/O on two servers, not to mention the client has to wait for your production server to pull the data from your central server. What kind of caching are you doing on your production server?
@black -- it is not a weird setup, it's a very common one. The way I am using nginx proxy_cache is one of the main uses of nginx.
Nginx proxy_cache streams the remote content to the client while caching it in real time, without any extra wait, so there is no added delay for the end user; in fact, the end user has no idea where the actual content is hosted.
To everyone who has helped me through this: it turns out it was a silly mistake. That 200MB MySQL DB had one table with around 2M rows (of tiny hashes) that was updated with every page request, and since every page had to make at least one MySQL call to that table, row locking meant the table was often not ready for a read, resulting in very slow page loads and ultimately timeouts.
Disk I/O was not the culprit, and this is how I ended up with another SSD server with no
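The failure mode described above (every request writing to the same hot table, so requests pile up behind a lock) can be reproduced in miniature. This sketch uses Python's built-in SQLite rather than MySQL, and the `hits` table is hypothetical, so the locking details differ from the real setup, but the shape of the problem is the same: a second "request" cannot get the write lock while the first one holds it.

```python
import os
import sqlite3
import tempfile

# Hypothetical stand-in for the per-request counter table from the thread.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE hits (hash TEXT PRIMARY KEY, count INTEGER)")
writer.execute("INSERT INTO hits VALUES ('abc123', 1)")
writer.commit()

# Request A: starts its per-request UPDATE and holds the write lock.
writer.execute("BEGIN IMMEDIATE")
writer.execute("UPDATE hits SET count = count + 1 WHERE hash = 'abc123'")

# Request B: tries to do its own UPDATE and times out almost immediately.
other = sqlite3.connect(path, timeout=0.1)
try:
    other.execute("BEGIN IMMEDIATE")
    contended = False
except sqlite3.OperationalError:  # "database is locked"
    contended = True

writer.rollback()
print(contended)  # -> True: concurrent writers serialize behind one lock
```

Under real traffic, every page view is "Request B" here, which is why the symptom looked like slow disk rather than a database design issue.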