Nginx limit traffic

gsrdgrdghd Member
edited March 2012 in Help

Hey
Is there an easy way to limit traffic usage in Nginx, e.g. tell it to only use 300 GB per month?

Comments

  • dmmcintyre3 Member
    edited March 2012

    What do you want to happen when you hit the 300 GB limit?

  • @dmmcintyre3 said: What do you want to happen when you hit the 300 GB limit?

    Just stop serving the files. Since I have a few unused VPSes idling, I figured it would be nice to donate the bandwidth to smaller open-source projects and create a mirror for them.

  • BlazeMuis Member
    edited March 2012

    Isn't there something for FTP?

    EDIT: Never mind, that only limits speeds...

  • netomx Moderator, Veteran

    @gsrdgrdghd said: Just stop serving the files. Since I have a few unused VPSes idling, I figured it would be nice to donate the bandwidth to smaller open-source projects and create a mirror for them.

    You could use PHP for this: serve the file through a script, and for every download add the file's size in MB to a running total. If the total gets above X MB, you can program it to stop serving the file.

  • @netomx said: You could use PHP for this: serve the file through a script, and for every download add the file's size in MB to a running total. If the total gets above X MB, you can program it to stop serving the file.

    Serving files with PHP is VERY slow unless the number of requests is low, which is not the TS's case.

  • Use cron to check the logs for the amount of traffic served. When the limit is hit, make the cron job tell iptables to drop traffic on that port. Or stop with around 500 MB left and give an error saying why it won't download? A rough sketch of the log-based idea follows below.
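
    A minimal sketch of that cron job, under a couple of assumptions: nginx is using the default "combined" access log format (where field 10 is $body_bytes_sent), the log at /var/log/nginx/access.log only covers the current month, and the cap is 300 GB.

        #!/bin/sh
        # Sum the bytes sent this month from nginx's access log and block
        # port 80 once the cap is reached.
        LIMIT_BYTES=$((300 * 1024 * 1024 * 1024))    # 300 GB
        LOG=/var/log/nginx/access.log

        USED=$(awk '{ sum += $10 } END { printf "%.0f\n", sum }' "$LOG")

        if [ "$USED" -ge "$LIMIT_BYTES" ]; then
            # Drop incoming HTTP traffic; add the rule only if it is not already there.
            iptables -C INPUT -p tcp --dport 80 -j DROP 2>/dev/null || \
                iptables -A INPUT -p tcp --dport 80 -j DROP
        fi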

  • Or limit the total number of downloads, for example 10,000 downloads; after that it won't serve downloads anymore.

  • netomx Moderator, Veteran

    @breton said: Serving files with PHP is VERY slow unless the number of requests is low, which is not the TS's case.

    Why slow? I haven't seen any performance hit.

  • Intcs Member
    edited March 2012

    Even if your files are as large as 700 MB each (I think smaller is better for saving bandwidth) and they aren't heavily requested, I'd suggest starting to serve them without monitoring, since most of the methods mentioned so far are not accurate, especially with big file sizes. Instead, check the bandwidth and traffic graphs in Solus/HyperVM once or twice a day. If the bandwidth graphs only show around 1 MB at some hours I wouldn't be worried, and if the traffic is less than 10 GB/day that would be fine for the monthly bandwidth.

    I guess you just need to test it for a few days and decide if limiting traffic/connections is needed.

  • Serving with PHP is around 3% slower and increases CPU usage from what I've seen, as PHP needs to read the file and write it back out.

  • MrDOS Member
    edited March 2012

    Must you use nginx? This is a one-line configuration in Apache with mod_cband.

  • You could run vnstat, then periodically run a root cronjob that (a) checks the monthly transfer and (b) if it's over a specified limit, writes iptables rules to drop connections on port 80 and sends you an email. A sketch of that cron job is below.
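
    Roughly like this, assuming vnstat is already watching the right interface, "vnstat --oneline" reports the current month's total as field 11, the cap is 300 GB, and the email address is a placeholder:

        #!/bin/sh
        # Check vnstat's monthly total; once over the cap, block HTTP and mail a warning.
        LIMIT_GB=300
        IFACE=eth0
        ADMIN=you@example.com        # hypothetical address

        TOTAL=$(vnstat -i "$IFACE" --oneline | cut -d';' -f11)    # e.g. "287.4 GiB"
        OVER=$(echo "$TOTAL" | awk -v lim="$LIMIT_GB" \
            '{ v = $1; if ($2 ~ /TiB/) v *= 1024; else if ($2 !~ /GiB/) v = 0; print (v >= lim) }')

        if [ "$OVER" = "1" ]; then
            # Add the DROP rule only once (-C checks whether it already exists).
            iptables -C INPUT -p tcp --dport 80 -j DROP 2>/dev/null || {
                iptables -A INPUT -p tcp --dport 80 -j DROP
                echo "Monthly transfer limit reached on $(hostname)" \
                    | mail -s "mirror over 300 GB" "$ADMIN"
            }
        fi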

  • netomx Moderator, Veteran

    @exussum said: Serving with PHP is around 3% slower and increases CPU usage from what I've seen, as PHP needs to read the file and write it back out.

    Well yeah... but you need to sacrifice something to gain this advantage...

    Let me think something up quickly:

    1. Check the date (maybe using MySQL or a tmp file). If it's from last month, reset it (in the file, using touch) and reset the remaining-MB counter (file or database row) back to the full allowance (or, if you wish, to 0).
    2. Check (by reading a file, or querying MySQL) how many MB are left.
    3. If there's enough bandwidth, add the file's size in MB to the running total.
    4. If there's enough bandwidth, send the headers and the file contents.
    5. If there's not enough bandwidth, send text saying the bandwidth is used up.
    

    Simple, isn't it?

  • sleddog Member
    edited March 2012

    @sleddog said: writes iptables rules to drop connections on port 80...

    Or insert a global redirect rule into the nginx domain configuration, which redirects everything to a "Damn, we're outta bandwidth transfer allowance" page.

    Then at 12:01am on the 1st, a script checks for that redirect and removes it.
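
    One way that toggle could be wired up, as a sketch: the include path and the page name are made up, and the site's server {} block is assumed to already contain "include /etc/nginx/overbudget.conf;". The "off" branch is what the 12:01am cron job on the 1st would call.

        #!/bin/sh
        # Toggle an "out of bandwidth" redirect on or off, then reload nginx.
        CONF=/etc/nginx/overbudget.conf   # hypothetical include file

        if [ "$1" = "on" ]; then
            # Redirect every request to the bandwidth notice page.
            cat > "$CONF" <<'EOF'
        location = /out-of-bandwidth.html { }
        location / { return 302 /out-of-bandwidth.html; }
        EOF
        elif [ "$1" = "off" ]; then
            # Run from cron at 12:01am on the 1st to lift the redirect again.
            : > "$CONF"
        else
            echo "usage: $0 on|off" >&2
            exit 1
        fi

        nginx -t && nginx -s reload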

  • netomx Moderator, Veteran
    edited March 2012

    @sleddog said: You could run vnstat, then periodically run a root cronjob that (a) checks the monthly transfer and (b) if it's over a specified limit, writes iptables rules to drop connections on port 80 and sends you an email.

    That's a better solution for this situation... nice! And your new rule too.

  • @exussum said: Serving with PHP is around 3% slower and increases CPU usage from what I've seen, as PHP needs to read the file and write it back out.

    Spawning a PHP interpreter every time you need to send a file takes a lot of CPU and memory. Web servers like nginx and Apache were invented for a reason.

    @sleddog said: Or insert a global redirect rule into the nginx domain configuration, which redirects everything to a "Damn, we're outta bandwidth transfer allowance" page.

    Or just shut down nginx.

    Though if somebody sees a "no more bandwidth" notice, or finds the mirror offline, they won't use it anymore.

  • The vnstat idea sounds good, I'll look into that.

    Nginx isn't necessary, so I'll also look into the Apache module (although I'd prefer nginx due to lower resource usage).

    My idea was to build a small CDN of 3-4 servers. But I don't really know how to "disable" the servers that run out of traffic :S

  • netomx Moderator, Veteran

    @gsrdgrdghd said: But I don't really know how to "disable" the servers that run out of traffic :S

    Make a script on the main server to poll the nginx port on each one... if it is open, it has traffic ;)
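
    Something like this could run on the main server (the mirror hostnames are placeholders):

        #!/bin/sh
        # Poll port 80 on each mirror. A mirror that shut nginx down (or
        # firewalled the port) after running out of traffic shows up as down.
        MIRRORS="mirror1.example.com mirror2.example.com"   # hypothetical hostnames

        for host in $MIRRORS; do
            if nc -z -w 5 "$host" 80 2>/dev/null; then
                echo "$host: up"
            else
                echo "$host: down (out of traffic or offline)"
            fi
        done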

  • @netomx said: Make a script on the main server to poll the nginx port on each one... if it is open, it has traffic ;)

    Sounds like a plan :)

    Is it also possible to automatically remove the A record for IPs that ran out of traffic? Or will browsers automatically try the other servers (when they receive multiple A records for a domain) when one of them is down?

  • raindog308 Administrator, Veteran

    Link the file as a shared document on Google Docs :)

  • gsrdgrdghd Member
    edited March 2012

    @raindog308 said: Link the file as a shared document on Google Docs :)

    No, that's too easy :) Also I'd like a wget-friendly download.

    For anyone interested, I've uploaded the webserver traffic limiting script here. It's terribly hackish and the first bash script I've ever written, so you probably shouldn't use it, but it seems to work.

    Now I just gotta find a way to remove the offline servers, use GeoLocation and sync the folders :D

  • MrDOS Member

    @sleddog: Not a bad solution, but it applies blindly to all virtual hosts.

    @netomx: Instead of doing the bandwidth check from database/file, have a cron job run the check every 15 minutes, and if it fails, reconfigure the virtual host to point to the error page. It's still overhead, but it's no longer applying directly to every transaction.

  • @MrDOS said: @sleddog: Not a bad solution, but it applies blindly to all virtual hosts.

    @netomx: Instead of doing the bandwidth check from database/file, have a cron job run the check every 15 minutes, and if it fails, reconfigure the virtual host to point to the error page. It's still overhead, but it's no longer applying directly to every transaction.

    Guess you missed my second post :)

  • Just a small update: I found a rather elegant way to deal with this.

    The cronjob now writes a file "traffic_left" to the root dir of the website and deletes the file when the traffic limit is exceeded.

    gdnsd, as the DNS server, checks every 10 seconds whether the file "traffic_left" exists and removes the server from the zone if it doesn't.

    In addition to that, it also does a GeoIP lookup and always returns the server closest to the user.

    If anyone is interested, I can post a small tutorial once the remaining stuff (syncing) is done. The cron half is sketched below.
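
    The cron side of that could look roughly like this (a sketch, not the OP's actual script: the document root is a made-up path, the cap is 300 GB, and vnstat's --oneline monthly total is parsed as in the earlier example). gdnsd's HTTP health check for each mirror is then pointed at /traffic_left.

        #!/bin/sh
        # Run from cron every few minutes. While under the monthly cap the file
        # "traffic_left" exists in the web root; once the cap is hit it is removed,
        # so the health check for this mirror starts failing.
        LIMIT_GB=300
        DOCROOT=/var/www/mirror        # hypothetical document root
        FLAG="$DOCROOT/traffic_left"

        TOTAL=$(vnstat --oneline | cut -d';' -f11)    # e.g. "287.4 GiB"
        OVER=$(echo "$TOTAL" | awk -v lim="$LIMIT_GB" \
            '{ v = $1; if ($2 ~ /TiB/) v *= 1024; else if ($2 !~ /GiB/) v = 0; print (v >= lim) }')

        if [ "$OVER" = "1" ]; then
            rm -f "$FLAG"
        else
            touch "$FLAG"
        fi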

  • Please post a tutorial on this. It sounds pretty interesting.

  • Amitz Member

    +1 for your tutorial!
