I wrote a module that limits the rate by a given variable rather than per connection as limit_rate does. Most of its code is copied from the limit_conn module. I'm unsure whether to keep this as a new module or to modify the limit_conn module instead: modifying limit_conn would be easier to write, but a separate module is easier to install. So, what do you think of this new module?
README:

The nginx directive limit_rate limits a single connection's speed, and limit_conn limits the number of connections by a given variable. If the client is a browser, it opens only one connection to the server, so its speed is limited to limit_rate; that is not the case when the client is a multi-threaded download tool.

The limit_traffic_rate module records the number of connections by a variable, like the limit_conn module, and sets limit_rate to limit_traffic_rate / cur_conn_number. So if the client is a browser, the maximum download rate is limit_traffic_rate; if the client is a multi-threaded tool, the total rate across its connections is also limited to limit_traffic_rate.

The limit_traffic_rate module needs a shared memory pool. The directive syntax is the same as limit_zone's.
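
As a rough illustration of the README's calculation, a hedged sketch of the per-request assignment might look like this (the function and parameter names are mine, not the module's actual fields):

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/* Sketch of the README's calculation: the configured limit_traffic_rate
 * is split evenly across all connections that currently share the same
 * variable value.  Names here are illustrative only. */
static void
ngx_http_ltr_set_rate(ngx_http_request_t *r, size_t traffic_rate,
    ngx_uint_t cur_conn_number)
{
    if (cur_conn_number == 0) {
        cur_conn_number = 1;
    }

    /* a browser's single connection gets the full rate; a downloader
     * with N connections gets traffic_rate / N on each of them */
    r->limit_rate = traffic_rate / cur_conn_number;
}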
On Fri, Dec 17, 2010 at 09:17:07AM -0500, bigplum wrote:

> Hello guys:
>
> I wrote a module that limits the rate by a given variable rather than
> per connection as limit_rate does. Most of its code is copied from the
> limit_conn module. I'm unsure whether to keep this as a new module or
> to modify the limit_conn module instead: modifying limit_conn would be
> easier to write, but a separate module is easier to install. So, what
> do you think of this new module?
You return NGX_DECLINED from the body filter. That's just wrong.

You set r->limit_rate dynamically to do the actual rate limiting. Note that this isn't going to work well, because limit_rate limits the average download rate for the whole request and is not expected to change during downloading.

E.g. with a 1 MB/s limit per $remote_addr, suppose a client connects and starts downloading file1. After downloading 1 GB (at a 1 MB/s rate, i.e. after 1024 seconds) it starts downloading file2 while still downloading file1. Now r->limit_rate is set to 0.5 MB/s for both requests. This is not a problem for the file2 request, but the file1 request will suddenly find that the client has already downloaded twice as much as it should have been allowed, and will pause downloading for 1024 seconds.
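
To make that stall concrete, here is a small stand-alone arithmetic sketch (plain C, not nginx code) of the average-rate accounting limit_rate performs: the allowance is rate times elapsed time, and whatever has already been sent beyond that allowance translates into a delay.

#include <stdio.h>

/* Simplified illustration of average-rate accounting, using the
 * numbers from the example above; not the actual write-filter code. */
int main(void)
{
    double elapsed_s = 1024.0;   /* file1 has been running for 1024 s */
    double sent_mb   = 1024.0;   /* 1 GB already sent for file1       */

    /* the second request appears, so the per-request rate is halved */
    double rate_mb_s = 0.5;

    double allowed_mb = rate_mb_s * elapsed_s;   /* 512 MB  */
    double excess_mb  = sent_mb - allowed_mb;    /* 512 MB  */
    double delay_s    = excess_mb / rate_mb_s;   /* 1024 s  */

    printf("file1 request pauses for %.0f seconds\n", delay_s);
    return 0;
}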
I have modified the calculation method. Now all connections with the same $vara, as defined by "limit_traffic_rate_zone", are recorded in a queue. The number of bytes sent is summed up every cycle to get the last_rate of all connections sharing that $vara.

For a single connection:

    rate = (limit - last_rate) / conn + last_rate

So the first download will not be paused, and it still gets close to the maximum speed.

For example: the rate limit is 300 KB/s. The first download starts at 300 KB/s. After the second and third downloads are issued, the first gets 250+ KB/s, while the second and third get 10~20 KB/s until the first finishes.
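
If I read the formula right (my interpretation, not necessarily the module's: conn is the number of connections sharing the variable, the subtracted last_rate is their summed rate over the previous cycle, and the added last_rate is the connection's own previous rate), a stand-alone sketch of one update cycle looks like this:

#include <stdio.h>

/* Hypothetical per-connection rate update for one accounting cycle,
 * following "rate = (limit - last_rate)/conn + last_rate".
 * Assumed meanings: "limit" is the configured limit_traffic_rate in
 * KB/s, "total_last" is the summed last-cycle rate of all connections
 * sharing the same $vara, "own_last" is this connection's own
 * last-cycle rate, and "conn" is how many connections share it. */
static long
next_rate(long limit, long total_last, long own_last, long conn)
{
    long rate;

    if (conn <= 0) {
        conn = 1;
    }

    /* remaining headroom (possibly negative) is split evenly, and each
     * connection keeps what it already achieved, so an established
     * download is adjusted gently instead of being stalled */
    rate = (limit - total_last) / conn + own_last;

    return rate > 0 ? rate : 0;
}

int main(void)
{
    /* the 300 KB/s example: one established download at ~300 KB/s,
     * plus two new downloads at ~10 KB/s each */
    long total = 300 + 10 + 10;

    printf("first:  %ld KB/s\n", next_rate(300, total, 300, 3));
    printf("second: %ld KB/s\n", next_rate(300, total, 10, 3));
    return 0;
}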
Some days ago, one of this module's users complained about the unfair speed across multiple connections, so I found some time to implement a new method of limiting the download rate.

In the new code, if the speed over the last second is larger than the maximum rate, a timer is added in this module and the body filter returns, so nothing is sent to the client. r->limit_rate is no longer used in the write filter to limit the sendfile size; instead, clcf->sendfile_max_chunk is adjusted from the maximum rate.

I am not sure: are there any side effects to this method?
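
If I understand the new method, the body filter now behaves roughly like the sketch below. This is heavily simplified and based on my assumptions rather than the module's actual source: when the byte count of the last second exceeds the allowance, the write event is delayed with a timer and the filter returns without passing anything downstream. A real filter would also have to buffer "in" and resend it once the timer fires.

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/* The module symbol and context fields below are my guesses for
 * illustration, not the real module's names. */
extern ngx_module_t  ngx_http_limit_traffic_rate_module;

typedef struct {
    size_t  bytes_last_sec;    /* bytes sent during the last second */
    size_t  max_rate;          /* allowed bytes per second          */
} ngx_http_ltr_ctx_t;

/* set in the filter init function (omitted in this sketch) */
static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;

static ngx_int_t
ngx_http_ltr_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    ngx_http_ltr_ctx_t  *ctx;

    ctx = ngx_http_get_module_ctx(r, ngx_http_limit_traffic_rate_module);

    if (ctx == NULL) {
        return ngx_http_next_body_filter(r, in);
    }

    if (ctx->bytes_last_sec > ctx->max_rate) {
        /* over the limit: delay the write event and send nothing now;
         * a real filter must buffer "in" here and resend it when the
         * timer fires */
        r->connection->write->delayed = 1;
        ngx_add_timer(r->connection->write, 1000);

        return NGX_AGAIN;
    }

    return ngx_http_next_body_filter(r, in);
}

Capping clcf->sendfile_max_chunk, as mentioned above, then keeps a single sendfile() call from pushing far more than one second's allowance in one go.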
> So the first download will not be paused, and it still gets close to
> the maximum speed.
>
> For example: the rate limit is 300 KB/s. The first download starts at
> 300 KB/s. After the second and third downloads are issued, the first
> gets 250+ KB/s, while the second and third get 10~20 KB/s until the
> first finishes.
I wonder whether that is good or not. It would be great if it could share the limit fairly between all connections. Anyway, I am looking forward to your module. Thanks.

– Piotr.