We use nginx's rate limiting feature, but noticed that it does not work
for files smaller than the limit specified in the limit_rate directive.
Analyzing the source code, it turned out that the limit is enforced with
one-second granularity. We modified the source so that the rate limit is
applied more precisely, and it also works for smaller files.
Here is the patch:
leki@szunyog:~/src/agwlimit/nginx-1.2.2$ diff -u src/http/ngx_http_write_filter_module.c{.old,}
--- src/http/ngx_http_write_filter_module.c.old	2012-07-30 08:32:13.155354399 +0200
+++ src/http/ngx_http_write_filter_module.c	2012-07-30 13:55:01.074823215 +0200
@@ -211,8 +211,15 @@
}
As we tested the patched nginx it worked fine. Could we use it in
production, or do you see any mistakes we made? Is there any reason you
don't count in milliseconds?
On Mon, Jul 30, 2012 at 08:22:42AM -0400, leki75 wrote:
src/http/ngx_http_write_filter_module.c{.old,}
if (ngx_cached_time->msec < r->start_msec) {
Using ngx_cached_time directly is a bad idea. While it will likely
work correctly right now (unless you are really unlucky and the
expression is interrupted mid-evaluation by a timer signal,
assuming timer_resolution is used on Linux), it's quite likely to
break in the future.
This basically limits the rate to 0 at request start unless
limit_rate_after is set to something non-zero, resulting in a
first-byte delay for all requests.
}
if (limit <= 0) {
c->write->delayed = 1;
As we tested the patched nginx it worked fine. Could we use it in
production or do you see any mistakes we made?
The above might be ok for your particular use case, though
problems outlined above (and below) might cause suboptimal
performance.
Is there any reason you don't count in milliseconds?
Current code uses seconds as it's easier to handle, faster (it
implies fewer syscalls and fewer packets over the network), and
provides some inherent size for the first write.
With milliseconds and a 4k/s limit (as used in the example in the
docs, see Module ngx_http_core_module) there would be a 4-byte
write each millisecond, which is quite inefficient.
On Tue, Jul 31, 2012 at 11:18:18AM -0400, leki75 wrote:
[…]
Can we avoid network segmentation by setting the following configuration
directives?
tcp_nodelay off;
tcp_nopush on;
No.
Do you have any other suggestions on how to rate limit objects whose
size is smaller than the limit specified in limit_rate?
You may try to delay the last byte of a response (or something like
this; ideally, just a partially filled last packet) instead.
Alternatively, you may introduce some granularity for limits and
always allow a chunk of data (something like one packet at least)
if at least one byte is allowed. This should mitigate problems
introduced by millisecond resolution.
On Wed, Aug 22, 2012 at 09:24:03AM -0400, leki75 wrote:
    off_t size, sent, nsent, limit, nlimit;

    if (last && size == 1) {
    }

    if (last && limit > size - 1) {
        if (size > 1) {
            limit = size - 1;
        }
    }

    if (clcf->sendfile_max_chunk
        && (off_t) clcf->sendfile_max_chunk < limit)
    {
One obvious problem I see is that the patch won't delay the last
byte if the last flag hasn't yet reached the write filter. This
may happen, e.g., when returning a proxied response, as the last
buffer is sent separately there, from
ngx_http_upstream_finalize_request().
You may also want to avoid delaying the last byte (and the
separate syscall for it) when there is no reason to.
Otherwise it should work, and it looks much better than the
previous one.
        return NGX_OK;
    }
Instead we could use this if I am right:
    if (size == 0 && !(c->buffered & NGX_LOWLEVEL_BUFFERED)) {
        if (last || flush) {
            r->out = NULL;
            c->buffered &= ~NGX_HTTP_WRITE_BUFFERED;
            return NGX_OK;
        }
Yes, thank you, the loop is for sure unneeded as is. It was
probably left there as incomplete code to free the chain links;
the following patch should be ok (and it saves some memory in
some special cases):
    if (limit <= 0) {
        c->write->delayed = 1;
        ngx_add_timer(c->write,
@@ -224,6 +242,12 @@
        return NGX_AGAIN;
    }

    if (last && limit > size - 1) {
        if (size > 1) {
            limit = size - 1;
        }
    }

    if (clcf->sendfile_max_chunk
        && (off_t) clcf->sendfile_max_chunk < limit)
    {
It also turned out that setting sendfile_max_chunk to a small enough
value is also a solution to our problem, but this patch also works
with the default sendfile_max_chunk = 0 setting.
Anyway, in the nginx 1.2.3 source we found this:
    if (size == 0 && !(c->buffered & NGX_LOWLEVEL_BUFFERED)) {
        if (last) {
            r->out = NULL;
            c->buffered &= ~NGX_HTTP_WRITE_BUFFERED;
            return NGX_OK;
        }

        if (flush) {
            do {
                r->out = r->out->next;
            } while (r->out);

            c->buffered &= ~NGX_HTTP_WRITE_BUFFERED;
            return NGX_OK;
        }
Instead we could use this if I am right:
    if (size == 0 && !(c->buffered & NGX_LOWLEVEL_BUFFERED)) {
        if (last || flush) {
            r->out = NULL;
            c->buffered &= ~NGX_HTTP_WRITE_BUFFERED;
            return NGX_OK;
        }
Thanks for your help.
Regards,
Gabor
Posted at Nginx Forum: