Nginx limit traffic
gsrdgrdghd
Member
Hey
Is there an easy way to limit traffic usage in Nginx, e.g. tell it to only use 300GB per month?
Comments
What do you want to happen when you hit the 300GB limit?
Just stop serving the files. Since I have a few unused VPSes idling, I figured it would be nice to donate the bandwidth to smaller open source projects and create a mirror for them.
Isn't there something for FTP?
EDIT: Never mind, that only limits speeds...
You could use PHP for this... start serving the file, and for every download, add the file's size in MB to a running total... if the total gets above X MB, you can program it to stop serving the file.
Serving files with PHP is VERY slow unless the number of requests is low, which is not the TS's case.
Use cron to check the logs for the amount of traffic served. When the limit is hit, make the cron job tell iptables to drop traffic there. Or stop with around 500MB left and give an error saying why it won't download?
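For what it's worth, a rough sketch of the log-parsing variant (assuming the default "combined" log format, where the 10th field is body bytes sent, and the usual Debian log path; both are guesses about your setup):

#!/bin/sh
# Sketch: sum bytes sent from the nginx access log and block port 80
# once a limit is passed. Log path, limit and the field-10 assumption
# all depend on your setup (field 10 matches the default combined format).
LIMIT=$((300 * 1000 * 1000 * 1000))   # 300 GB in bytes
SENT=$(awk '{ sum += $10 } END { print sum+0 }' /var/log/nginx/access.log)

if [ "$SENT" -gt "$LIMIT" ]; then
    iptables -A INPUT -p tcp --dport 80 -j DROP
fi

One caveat: log rotation resets the count, so a counter-based tool like vnstat (mentioned below) is more robust.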
Or limit the total number of downloads, for example 10,000 downloads; after that it won't download anymore.
Why slow? I haven't seen any performance hit.
Even if your files are as large as 700MB each (I think smaller is better to save bandwidth) and they aren't heavily requested, I'd suggest serving them without monitoring at first, since most of the methods mentioned so far aren't accurate, especially with big file sizes. Instead, check the bandwidth and traffic graphs in Solus/HyperVM once or twice per day: if the bandwidth graphs only show something like 1MB, and only at certain hours, I wouldn't be worried, and if the traffic is less than 10GB/day, that would be fine for the monthly bandwidth.
I guess you just need to test it for a few days and decide if limiting traffic/connections is needed.
Serving with PHP is around 3% slower and increases CPU usage from what I've seen, as PHP needs to read and write the file.
Must you use nginx? This is a one-line configuration in Apache with mod_cband.
You could run vnstat, then a periodic root cronjob that (a) checks the monthly transfer and (b) if it's over a specified limit, writes iptables rules to drop connections on port 80 and sends you an email.
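A minimal sketch of such a cronjob, assuming vnstat 2.x (whose --oneline b mode prints plain byte counts, with field 11 being the month's total); the limit and the mail address are placeholders:

#!/bin/sh
# Hypothetical cron job: drop port 80 once vnstat's monthly total
# exceeds the limit, and send a one-time notification email.
LIMIT=$((300 * 1000 * 1000 * 1000))   # 300 GB in bytes
USED=$(vnstat --oneline b | cut -d';' -f11)   # field 11 = month total (bytes)

if [ "$USED" -gt "$LIMIT" ]; then
    # -C checks whether the rule already exists, so it is only added once
    if ! iptables -C INPUT -p tcp --dport 80 -j DROP 2>/dev/null; then
        iptables -A INPUT -p tcp --dport 80 -j DROP
        echo "Monthly transfer limit hit: $USED bytes" | mail -s "VPS over quota" you@example.com
    fi
fi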
Well yeah... but you need to sacrifice something to gain this advantage...
let me think about something fast:
Simple, isn't it?
Or insert a global redirect rule into the nginx domain configuration, which redirects everything to a "Damn, we're outta bandwidth/transfer allowance" page. Then at 12:01am on the 1st, a script checks for that redirect and removes it.
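A sketch of how that reset could work, assuming the vhost's server block pulls the redirect in via a wildcard include (nginx accepts a wildcard include that matches nothing, so the file can simply be deleted):

#!/bin/sh
# Hypothetical monthly reset, run from cron at 00:01 on the 1st, e.g.:
#   1 0 1 * * /usr/local/bin/reset-quota.sh
# Assumes the server block contains: include /etc/nginx/overquota/*.conf;
# and that overquota.conf holds the catch-all redirect, e.g.:
#   rewrite ^ /outta-transfer.html break;
OVERQUOTA=/etc/nginx/overquota/overquota.conf

if [ -f "$OVERQUOTA" ]; then
    rm "$OVERQUOTA"     # drop the "out of transfer" redirect
    nginx -s reload     # reload the config without dropping connections
fi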
That's a better solution for this situation... nice! And your new rule too.
Spawning a PHP interpreter every time you need to send a file takes a lot of CPU and memory. Webservers like nginx and Apache were invented for a reason.
Or just shut down nginx.
Though if somebody sees a sign like "no more bw" or notices an offline mirror, they won't use it any more.
The idea with vnstat sounds good, I'll look into that.
Nginx isn't necessary, so I'll also look into the Apache module (although I'd prefer nginx due to lower resource usage).
My idea was to build a small CDN of 3-4 servers. But I don't really know how to "disable" the servers that run out of traffic :S
Make a script on the main server to poll the nginx port on each one... if it's open, the mirror still has traffic left.
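A sketch of that poller, using netcat's zero-I/O port check (-z) with a 3-second timeout; the mirror hostnames are placeholders:

#!/bin/sh
# Hypothetical check from the main server: a mirror whose nginx port
# still answers is assumed to have traffic left (the limit script on
# each mirror stops nginx or drops port 80 when it runs out).
for host in mirror1.example.com mirror2.example.com mirror3.example.com; do
    if nc -z -w 3 "$host" 80; then
        echo "$host: up"
    else
        echo "$host: down or out of traffic"
    fi
done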
Sounds like a plan
Is it also possible to automatically remove the A record for IPs that ran out of traffic? Or will browsers automatically try the different servers (when they receive multiple A records for a domain) when one of them is down?
Link the file as a shared document on Google Docs
No, that's too easy. Also, I'd like a wget-friendly download.
For anyone interested, I've uploaded the webserver traffic-limiting script here. It's terribly hackish and the first bash script I've ever written, so you probably shouldn't use it, but it seems to work.
Now I just gotta find a way to remove the offline servers, use GeoLocation and sync the folders.
@sleddog: Not a bad solution, but it applies blindly to all virtual hosts.
@netomx: Instead of doing the bandwidth check from database/file, have a cron job run the check every 15 minutes, and if it fails, reconfigure the virtual host to point to the error page. It's still overhead, but it's no longer applying directly to every transaction.
Guess you missed my second post
Just a small update: I found a rather elegant way to deal with this.
The cronjob now writes a file "traffic_left" to the root dir of the website and deletes the file when the traffic limit is exceeded.
gdnsd as the DNS server checks every 10 seconds whether the file "traffic_left" exists, and removes the server from the zone file if it doesn't.
In addition to that, it does a GeoIP lookup and always returns the server closest to the user.
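For reference, the marker-file half of this could look like the sketch below; the web root, limit and vnstat field are assumptions, and gdnsd itself is configured separately to fetch the file over HTTP (e.g. with its http_status monitoring plugin), so a failing fetch marks the server as down:

#!/bin/sh
# Hypothetical cronjob for the scheme above: keep a "traffic_left"
# marker in the web root while under the monthly limit, delete it once
# the limit is exceeded. gdnsd polls the file and pulls the server out
# of the zone when the fetch starts failing.
WEBROOT=/var/www/html
LIMIT=$((300 * 1000 * 1000 * 1000))   # 300 GB in bytes
USED=$(vnstat --oneline b | cut -d';' -f11)   # field 11 = month total (bytes)

if [ "$USED" -lt "$LIMIT" ]; then
    touch "$WEBROOT/traffic_left"
else
    rm -f "$WEBROOT/traffic_left"
fi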
If anyone is interested, I can post a small tutorial once the remaining stuff (syncing) is done.
Please post a tutorial on this. It sounds pretty interesting.
+1 for your tutorial!