Forum: NGINX fastcgi timeout at big requests

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Robert G. (Guest)
on 2009-04-10 16:04
I have nginx 0.6.36 with php-fastcgi. I'm using SquirrelMail and have a
mail that is 25M, no attachment, just a text mail that big. When I try to
read it, FastCGI ends up with
2009/04/10 13:55:35 [error] 22626#0: *537 recv() failed (104: Connection
reset by peer) while reading response header from upstream, client:

Earlier the problem was that PHP didn't have enough memory, or
max_execution_time was too low. I fixed that and also set
keepalive_timeout to 32, but it still dies even so. Is it possible
FastCGI is limited in how big a request can be, or something like that?

How could I set up nginx and/or php to be able to read that mail?

Thanks!
Maxim D. (Guest)
on 2009-04-10 16:47
(Received via mailing list)
Hello!

On Fri, Apr 10, 2009 at 02:04:09PM +0200, Robert G. wrote:

> I have nginx 0.6.36 with php-fastcgi. I'm using SquirrelMail and have a
> mail that is 25M, no attachment, just a text mail that big. When I try to
> read it, FastCGI ends up with
> 2009/04/10 13:55:35 [error] 22626#0: *537 recv() failed (104: Connection
> reset by peer) while reading response header from upstream, client:

From the error message it seems that php died even before it was
able to send the header to nginx.

> Earlier the problem was that PHP didn't have enough memory, or
> max_execution_time was too low. I fixed that and also set
> keepalive_timeout to 32, but it still dies even so.

keepalive_timeout is a completely unrelated setting; nginx only uses
it for client connections.
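To make the distinction concrete, here is a minimal nginx sketch (directive names are real nginx directives; the values are illustrative, not recommendations):

```nginx
# keepalive_timeout applies only to client (browser) <-> nginx connections:
keepalive_timeout 32;

# Talking to the FastCGI upstream is governed by separate directives,
# shown here with their 60s defaults:
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_connect_timeout 60s;
    fastcgi_send_timeout    60s;
    fastcgi_read_timeout    60s;
}
```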

> Is it possible FastCGI is limited in how big a request can be, or
> something like that?

No, at least not the protocol itself.

> How could I set up nginx and/or php to be able to read that mail?

Try looking into php.  It seems that it just dies due to errors or
you haven't tuned limits enough.

Maxim D.
Robert G. (Guest)
on 2009-04-10 17:24
Maxim D. wrote:

[...]

> Try looking into php.  It seems that it just dies due to errors or
> you haven't tuned limits enough.

Like I said before, at the beginning I got this in the logs:

PHP Fatal error:  Allowed memory size of 67108864 bytes exhausted

I modified this to 128M and it didn't complain anymore, but it did
complain about max_execution_time, like this:

PHP Fatal error:  Maximum execution time of 60 seconds exceeded

I raised this to 240 seconds and after that I didn't get any errors
anymore from PHP, but I got the errors in nginx's error_log and it
just didn't do anything... also the php-cgi process just hung in the
background at 100% CPU usage. I had to stop FastCGI and then kill the
process that was using 100% CPU.

So I really don't know what else to tune in php.ini.
Anıl Çetin (Guest)
on 2009-04-10 18:09
(Received via mailing list)
Are you connecting over a unix socket or over TCP? If it is TCP, try a
socket connection. The problem may be related to something other than
fastcgi and nginx.


Robert G. wrote:
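A sketch of the switch Anıl suggests, assuming the PHP FastCGI process was started bound to a socket path (the path below is invented for illustration):

```nginx
location ~ \.php$ {
    # Instead of fastcgi_pass 127.0.0.1:9000; (TCP), point nginx at the
    # unix domain socket the php-cgi/php-fastcgi process listens on:
    fastcgi_pass unix:/var/run/php-fastcgi.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```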
Robert G. (Guest)
on 2009-04-10 18:15
Anıl Çetin wrote:
> Are you connecting with a unix socket or by tcp? If it is TCP, try the
> socket connection. Problem may be related something other than fastcgi
> and nginx.
>
>
> Robert G. yazmış:

I'm using TCP/IP. I'm not sure it would make such a big difference,
and if it would, why?
Maxim D. (Guest)
on 2009-04-10 18:22
(Received via mailing list)
Hello!

On Fri, Apr 10, 2009 at 03:24:18PM +0200, Robert G. wrote:

> Maxim D. wrote:
> > Hello!
> >
> > On Fri, Apr 10, 2009 at 02:04:09PM +0200, Robert G. wrote:
> >
> >> I have nginx 0.6.36 with php-fastcgi. I'm using SquirrelMail and have a
> >> mail that is 25M, no attachment, just a text mail that big. When I try to
> >> read it, FastCGI ends up with
> >> 2009/04/10 13:55:35 [error] 22626#0: *537 recv() failed (104: Connection
> >> reset by peer) while reading response header from upstream, client:

[...]

> anymore from PHP, but I got the errors in nginx's error_log and it
> just didn't do anything... also the php-cgi process just hung in the
> background at 100% CPU usage. I had to stop FastCGI and then kill the
> process that was using 100% CPU.

So you got the nginx error above after killing the php process,
right?  That's expected.

> So I really don't know what else to tune in php.ini.

I believe it's a SquirrelMail issue: it's just unable to render
(highlight URLs in the text and so on) the message in question in a
reasonable time.  I'm not sure how SquirrelMail's rendering works,
but if it's regexp-based it may take forever on such a big
message.

You may try to wait a bit more, but if I'm right, 240 seconds of
execution time won't be enough; it may take days to complete.  Also
make sure you have tuned nginx's fastcgi_read_timeout (it's 60s by
default).  Your browser's timeouts may be a problem too.
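The directive in question lives in the FastCGI location block; a minimal sketch with an illustrative raised value:

```nginx
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    # How long nginx waits between reads of the FastCGI backend's
    # response; the 60s default is easily exceeded by a slow PHP render:
    fastcgi_read_timeout 240s;
}
```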

Anyway, it doesn't look like an nginx issue.

Maxim D.
Anıl Çetin (Guest)
on 2009-04-10 19:22
(Received via mailing list)
Sometimes kernel- and TCP-related settings become the bottleneck. First of
all, you must be sure that all the data sent by nginx is actually
received by PHP successfully; to determine whether TCP is the problem,
you may try unix sockets. After being sure about sending and receiving,
you can watch the php process using "strace -p <pid-of-php>", which will
tell you what is going on on the PHP side.


Robert G. wrote:
Michael S. (Guest)
on 2009-04-10 20:01
(Received via mailing list)
On Fri, Apr 10, 2009 at 5:04 AM, Robert G. 
<removed_email_address@domain.invalid>
wrote:
>
> How could I set up nginx and/or php to be able to read that mail?

I would modify SquirrelMail to use X-Accel-Redirect so it isn't using
readfile() or whatever it is that keeps PHP busy churning on 25M
attachments :)
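A hypothetical sketch of what Michael suggests (the location name and paths here are invented): the PHP download handler sends an X-Accel-Redirect header instead of streaming the file itself, and nginx then serves the file while PHP finishes immediately.

```nginx
# An internal-only nginx location mapped to where the messages live:
location /protected/ {
    internal;                          # reachable only via X-Accel-Redirect
    alias /var/spool/squirrelmail/;    # hypothetical on-disk path
}

# On the PHP side, instead of readfile($path), the handler would send
# roughly:
#   header('Content-Type: application/octet-stream');
#   header('X-Accel-Redirect: /protected/' . basename($path));
# and exit; nginx streams the file without tying up a PHP process.
```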
Robert G. (Guest)
on 2009-04-10 22:25
Michael S. wrote:
> On Fri, Apr 10, 2009 at 5:04 AM, Robert G. <removed_email_address@domain.invalid>
> wrote:
>>
>> How could I set up nginx and/or php to be able to read that mail?
>
> I would modify SquirrelMail to use X-Accel-Redirect so it isn't using
> readfile() or whatever it is that keeps PHP busy churning on 25M
> attachments :)

Fixed it, with the help of memory_limit set to 256M, max_execution_time=300s
and fastcgi_read_timeout=240s; finally, after about 5 mins or so, I
got the mail, fully :)

Seems it needs quite a lot of memory and time to be able to do this.

Thx guys!
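For reference, the PHP side of the working setup described above corresponds to these php.ini values (the nginx side is `fastcgi_read_timeout 240s;` in the relevant location block):

```ini
; php.ini values from the post above
memory_limit = 256M          ; was 64M originally, then 128M
max_execution_time = 300     ; seconds; was 60, then 240
```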
Michael S. (Guest)
on 2009-04-10 22:55
(Received via mailing list)
On Fri, Apr 10, 2009 at 11:25 AM, Robert G. 
<removed_email_address@domain.invalid>
wrote:

> Fixed it, with the help of memory_limit set to 256M, max_execution_time=300s
> and fastcgi_read_timeout=240s; finally, after about 5 mins or so, I
> got the mail, fully :)
>
> Seems it needs quite a lot of memory and time to be able to do this.

(cough) Not if you had tweaked it to use X-Accel-Redirect :)
This topic is locked and can not be replied to.