Exception for NGINX limit_req_zone

I have a problem with NGINX limit_req_zone. Can anyone help? I want to
limit user access to some specific URLs, for example:

/forum.php?mod=forumdisplay?
/forum.php?mod=viewthread&***

But I want to add an exception for the URL below:

/forum.php?mod=image&*

Below is the location section of my configuration. The problem is that,
for URLs starting with /forum.php?mod=image&*, the limit is still
applied.
Can anybody help?

location ~ ^/forum.php?mod=image$ {
    root            /web/www;
    fastcgi_pass    unix:/tmp/nginx.socket;
    fastcgi_param   SCRIPT_FILENAME /scripts$fastcgi_script_name;
    include         fastcgi_params;
}
location ~ ^/(home|forum|portal).php$ {
    root            /web/www;
    limit_conn      addr 5;
    limit_req       zone=refresh burst=5 nodelay;
    fastcgi_pass    unix:/tmp/nginx.socket;
    fastcgi_param   SCRIPT_FILENAME /scripts$fastcgi_script_name;
    include         fastcgi_params;
}
location ~ .php$ {
    root            /web/www;
    fastcgi_pass    unix:/tmp/nginx.socket;
    fastcgi_index   index.php;
    fastcgi_param   SCRIPT_FILENAME /scripts$fastcgi_script_name;
    include         fastcgi_params;
}

On Fri, Jul 20, 2012 at 09:48:04PM +0800, fhal wrote:

Hi there,

I have a problem with NGINX limit_req_zone. Can anyone help? I want to
limit user access to some specific URLs, for example:

/forum.php?mod=forumdisplay?
/forum.php?mod=viewthread&***

But I want to add an exception for the URL below:

/forum.php?mod=image&*

For nginx “location” matches, these three are all the same, and are all
exactly “/forum.php”. The “location” goes from the first / to just before
the first ? or #.
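
As a rough illustration (a sketch, not taken from your configuration), a
temporary debug location like the one below would answer all three of
those requests, because only $args / $arg_mod differ between them:

# Illustration only: all three URLs above match this single location,
# since the query string is never part of the location match.
location = /forum.php {
    return 200 "uri=$uri mod=$arg_mod\n";
}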

That is why your configuration is not doing what you want.

location ~* ^/(home|forum|portal).php$ {
    root            /web/www;
    limit_conn      addr 5;
    limit_req       zone=refresh burst=5 nodelay;
    fastcgi_pass    unix:/tmp/nginx.socket;
    fastcgi_param   SCRIPT_FILENAME /scripts$fastcgi_script_name;
    include         fastcgi_params;
}

I do not know what the solution is, but I expect it will involve doing
something different based on the value of $arg_mod, within that location
block.

How is limit_req_zone zone=refresh defined? Would using “map” to set
the relevant $variable to empty if $arg_mod is “image” be appropriate?
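
For example, something along these lines might be appropriate (an untested
sketch; the variable name, zone name, size and rate are only placeholders):

# Sketch: the key is empty when mod=image, and a request whose key is
# empty is not counted against the zone at all.
map $arg_mod $limit_key {
    default  $binary_remote_addr;
    image    '';
}
limit_req_zone $limit_key zone=refresh:10m rate=3r/s;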

Good luck with it,

f

Francis D. [email protected]

Hi Francis.

Thanks very much.
I tried to use map, but it doesn’t work. Below are my settings.

map $arg_mod $forum_limit {
    default  $binary_remote_addr;
    image    '';
}
limit_conn_zone  $forum_limit  zone=addr:128m;
limit_req_zone   $forum_limit  zone=refresh:128m  rate=3r/s;

server {
    location ~* ^/(home|forum|portal).php$ {
        root            /web/www;
        limit_conn      addr 5;
        limit_req       zone=refresh burst=5 nodelay;
        fastcgi_pass    unix:/tmp/nginx.socket;
        fastcgi_param   SCRIPT_FILENAME /scripts$fastcgi_script_name;
        include         fastcgi_params;
    }
    location ~ .php$ {
        root            /web/www;
        fastcgi_pass    unix:/tmp/nginx.socket;
        fastcgi_index   index.php;
        fastcgi_param   SCRIPT_FILENAME /scripts$fastcgi_script_name;
        include         fastcgi_params;
    }
}

Hi Francis,

Thanks very much for your reply.
It still doesn’t work.
The URL I want to exclude looks like
/forum.php?mod=image&aid=1289568&size=300x300&key=6831686b88a927b0d9646529f88f5052&nocache=yes&type=fixnone
Could it be that the map is not working because there are too many parameters?

On Sat, Jul 21, 2012 at 10:26:23AM +0800, fhal wrote:

Hi there,

I tried to use map, but it doesn’t work. Below are my settings.

It seems to work for me, in a slightly different test:

map $arg_mod $forum_limit {
    default  $binary_remote_addr;
    image    '';
}
limit_conn_zone  $forum_limit  zone=addr:128m;
limit_req_zone   $forum_limit  zone=refresh:128m  rate=3r/s;

I use “rate=1r/m”, because that makes it very easy to see whether things
are being blocked or not. But otherwise, those 6 lines are in nginx.conf.

server {
    location = /file {
        limit_req zone=refresh burst=2 nodelay;
    }
}

Now test:

for i in a b c d; do curl http://127.0.0.1:8000/file?mod=im; sleep 1;
done

gives me “file contents” 3 times, and then the 503 Service Temporarily
Unavailable message.

for i in a b c d; do curl http://127.0.0.2:8000/file?mod=im; sleep 1;
done

is the same, while

for i in a b c d; do curl http://127.0.0.1:8000/file?mod=image; sleep 1;
done

gives me “file contents” 4 times. Repeat the first command, and I get
503 4 times; repeat the last, and I get “file contents” 4 times.

That looks to me like the thing is restricted by ip address to 1 request
per minute, bursting to 2 extra; unless “mod=image” is included, in
which case there is no restriction.

Note that I haven’t tested limit_conn here, just limit_req, because that
is easier to build a test case for.

Does that test work or fail for you?

When I repeat the test with a fastcgi_pass configuration, instead of just
loading a file, I see the same results.
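
If the test fails for you, one quick way to see what the map is actually
producing (a debugging sketch, using a made-up /limit-debug location) is to
have nginx echo the variables back:

# Debugging sketch: shows the map input and output for any query string.
location = /limit-debug {
    return 200 "arg_mod=$arg_mod forum_limit=$forum_limit\n";
}

Requesting /limit-debug?mod=image should then show an empty forum_limit.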

All the best,

f

Francis D. [email protected]

Hi Francis,

Thanks very much for your kind help.
I tested the configuration with the following steps:

  1. Modify nginx.conf:

http {
    map $arg_mod $forum_limit {
        default  $binary_remote_addr;
        image    '';
    }

    limit_conn_zone  $forum_limit  zone=forum_conn:10m;
    limit_req_zone   $forum_limit  zone=forum_req:10m  rate=1r/s;

    server {
        location ~* ^/(home|forum|portal).php$ {
            root            /web/www;
            limit_conn      forum_conn 5;
            limit_req       zone=forum_req burst=5 nodelay;
            fastcgi_pass    unix:/tmp/nginx.socket;
            fastcgi_param   SCRIPT_FILENAME /scripts$fastcgi_script_name;
            include         fastcgi_params;
        }
        location ~ \.php$ {
            root            /web/www;
            fastcgi_pass    unix:/tmp/nginx.socket;
            fastcgi_index   index.php;
            fastcgi_param   SCRIPT_FILENAME /scripts$fastcgi_script_name;
            include         fastcgi_params;
        }
    }
}

  2. Then I used webbench to test my website.
    First, I tested forum.php:
    webbench -c 10000 -t 10 [the forum.php URL]
    Then I checked the nginx log, and I could see that webbench got lots
    of 503 errors, so the limit works.

    Then I tested the exception:
    webbench -c 10000 -t 10 [the forum.php?mod=image URL]
    I checked the nginx log, and there are still 503 errors for this
    access. That means the exception doesn't work.

On Tue, Jul 24, 2012 at 09:25:29AM +0800, fhal wrote:

Hi there,

Thanks very much for your kind help.
I tested the configuration with the following steps:

Thanks for the description.

I am not able to reproduce your failure case.

When I use “siege -c 100 -t 10” against /forum.php, I rapidly start
seeing HTTP 503 messages. When I use it against /forum.php?mod=image,
I see only HTTP 200 messages, until I start seeing HTTP 502 – which is
because my php fastcgi service has died.

I then test against /env.php, and I see about the same number of HTTP
200 messages before they revert to HTTP 502 after the php service dies.

“siege” is not the same as “webbench”, but it was trivially available
on my test system, and it seems to do approximately the same thing.

Since you have a test case which reliably shows failure, can I suggest
you make some quick changes and repeat the test, in order to see at what
point things change from “failure” to “success”? That will hopefully
provide information that may point at the fix.

First: you are testing /forum.php. What happens if you test /mytest.php?
Ideally, the php script should do roughly the same as /forum.php; possibly
just copying forum.php to mytest.php will work. Your config puts no limits
on /mytest.php. Does it show any failures, or does it run perfectly?

Next: you are testing with two different limits. What happens if you
comment out the limit_conn directive and restart nginx? Then instead
comment out the limit_req directive? Can you see whether it fails only
when both directives are active, or when either directive is active, or
whenever one particular directive is active?

Next: can the problem be due to the work done by the php script? (This
one strikes me as unlikely.) Replace the contents of forum.php with
just the single word “forum”. Repeat the “unlimited” test. Does it
fail differently?

If that doesn’t help find a configuration which doesn’t fail: what
happens if you use “-c 10” instead of “-c 10000”? How about “-c 100”?

And finally, for testing: change the limit to be (say) 1r/m, and manually
make the requests. Does it fail for you now? If a mod=image request shows
an HTTP 503 within the first 10 requests, you now have a much more easily
reproducible test case.
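
As a concrete sketch combining the limit-toggling and low-rate suggestions
above (reusing the zone names from your latest mail; this is a temporary
test configuration, not the final one):

# In http{}: drop the rate so manual tests are easy to interpret.
limit_req_zone  $forum_limit  zone=forum_req:10m  rate=1r/m;

# In server{}: enable one limit directive at a time while testing.
location ~* ^/(home|forum|portal)\.php$ {
    root            /web/www;
    #limit_conn     forum_conn 5;
    limit_req       zone=forum_req burst=2 nodelay;
    fastcgi_pass    unix:/tmp/nginx.socket;
    fastcgi_param   SCRIPT_FILENAME /scripts$fastcgi_script_name;
    include         fastcgi_params;
}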

Good luck with it,

f

Francis D. [email protected]

On Sun, Jul 22, 2012 at 11:05:12PM +0800, fhal wrote:

Hi there,

Thanks very much for your reply.
It still doesn’t work.

What, precisely, do you mean by “doesn’t work”?

Be specific. (Do you mean “this one is being limited and I don’t want
it to be”, or “that one is not being limited and I do want it to be”,
or something different?)

Read my mail again. I included the exact (small) nginx.conf fragment
used. I included the exact test command. I described what I saw. I
described what I expected to see.

Can you do the same, just so that I can make sure that I am repeating
exactly what you did?

The URL I want to exclude looks like

/forum.php?mod=image&aid=1289568&size=300x300&key=6831686b88a927b0d9646529f88f5052&nocache=yes&type=fixnone

Could it be that the map is not working because there are too many parameters?

It still works for me.

for i in $(seq 4); do
  curl -I 'http://127.0.0.1:8000/forum.php?mod=imag&aid=1289568&size=300x300&key=6831686b88a927b0d9646529f88f5052&nocache=yes&type=fixnone'
  sleep 1
done

(Note: that has mod=imag, and so should be limited)

I see 3x HTTP 200 + 1x HTTP 503

for i in $(seq 4); do
  curl -I 'http://127.0.0.1:8000/forum.php?mod=image&aid=1289568&size=300x300&key=6831686b88a927b0d9646529f88f5052&nocache=yes&type=fixnone'
  sleep 1
done

(Note: that has mod=image, and so should not be limited)

I see 4x HTTP 200.

My nginx.conf http section:

===
http {
    map $arg_mod $forum_limit {
        default  $binary_remote_addr;
        image    '';
    }
    limit_req_zone  $forum_limit  zone=refresh:128m  rate=1r/m;

    include fastcgi.conf;

    server {
        location ~* ^/(home|forum|portal).php$ {
            limit_req zone=refresh burst=2 nodelay;
            fastcgi_pass unix:php.sock;
        }
    }
}

f

Francis D. [email protected]