On Sat, Nov 26, 2011 at 5:03 AM, talisto [email protected] wrote:
That should really be all that is necessary, right? I’m using Nginx
1.1.6.
Maxim D. wrote:
How do you test it?
I’ve been testing it with Siege, but I can actually generate error 503
responses just by refreshing my web browser fast enough. My results
with Siege are somewhat random; sometimes the very first request fails,
then a couple will pass, then another will fail, all within a couple
seconds. That’s with my rate set to 15r/s, which should be more than
enough. In Siege I’m only using 5 concurrent users at 1 request per
second, yet it still fails, often on the first or second hit. In my
browser, it’s a bit more reliable; I have a simple page which makes 2
ajax requests; the 2nd ajax request will always fail.
As soon as I remove the limit_req line, I can flood my server with as
many requests as Siege can handle and it never generates a 503, so I
know that the errors aren’t being caused by something else. I’m not
sure why the limit_req isn’t working properly though.
You are truncating your response. Please don’t top-post or truncate the
quoted text; it makes it difficult to track the thread or for us to
help you.
From a review of the documentation, this behavior is by design.
Siege with 5 concurrent users at 1r/s could easily exceed 15r/s if
there is other traffic or if it is not behaving predictably.
Are you sure there are no other requests hitting the site, that the
browser test only makes 2 requests, and that Siege only sends 10
requests in total? We need logs.
From what you have sent, you will get 503s because you are limiting
requests and have not defined a burst. Without burst, limit_req rejects
any request that arrives sooner than the average spacing allows (at
15r/s, closer than about 67 ms apart). Set a burst value, optionally
with the nodelay flag, and see if it then sustains 15 r/s.
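As a reference point, here is a minimal sketch of such a setup. The zone name, zone size, and burst value are illustrative assumptions, not taken from the poster’s actual configuration:

```nginx
http {
    # 10 MB shared zone keyed on client IP, average rate 15 requests/second
    # (zone name "one" and size are placeholders)
    limit_req_zone $binary_remote_addr zone=one:10m rate=15r/s;

    server {
        location / {
            # Allow short bursts of up to 20 requests above the average rate;
            # "nodelay" serves the burst immediately instead of queueing it
            limit_req zone=one burst=20 nodelay;
        }
    }
}
```

Two back-to-back AJAX requests from the same page will almost always violate the 67 ms spacing, which would explain the second request failing every time when no burst is configured.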
And you cannot claim a bug without evidence; please see the DEBUG
README, show the relevant configuration, set limit_req_log_level
appropriately, and send the output.
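To make the rejections visible in the logs, something along these lines should work, assuming your nginx build supports the limit_req_log_level directive (values here are again illustrative):

```nginx
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=15r/s;

    # Log rejected (503) requests at "warn" rather than the default "error"
    limit_req_log_level warn;

    # Make sure error_log captures that level (path is a placeholder)
    error_log /var/log/nginx/error.log warn;
}
```

The resulting error_log entries show which client and zone triggered the limit, which is exactly the evidence needed to tell a misconfiguration from a genuine bug.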
Stefan C.
http://scaleengine.com/contact
“People who enjoy having meetings should never be allowed to be in
charge of anything.”