FastCGI timeout on big requests

I have nginx 0.6.36 with php-fastcgi. I'm using SquirrelMail and have a mail that is 25M, no attachments, just a plain-text message that big. When I try to read it, FastCGI ends up with:

2009/04/10 13:55:35 [error] 22626#0: *537 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:

Before this, the problem was that PHP didn't have enough memory or max_execution_time was too low. I fixed that and set keepalive_timeout to 32, but it still dies the same way. Is it possible that FastCGI is limited in how big a request can be, or something like that?

How could I set up nginx and/or PHP to be able to read that mail?

Thanks!

Hello!

On Fri, Apr 10, 2009 at 02:04:09PM +0200, Robert G. wrote:

I have nginx 0.6.36 with php-fastcgi. I'm using SquirrelMail and have a mail that is 25M, no attachments, just a plain-text message that big. When I try to read it, FastCGI ends up with:

2009/04/10 13:55:35 [error] 22626#0: *537 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:

From the error message it seems that PHP died even before it was able to send the response header to nginx.

Before this, the problem was that PHP didn't have enough memory or max_execution_time was too low. I fixed that and set keepalive_timeout to 32, but it still dies the same way.

keepalive_timeout is a completely unrelated setting; nginx uses it only for client connections.
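
For reference, a minimal sketch of the distinction (the value is illustrative):

    # keepalive_timeout applies to idle *client* connections only;
    # it has no effect on how long nginx waits for PHP
    keepalive_timeout 32;

The timeout that governs waiting on a FastCGI backend is a different directive, fastcgi_read_timeout, which comes up below.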

Is it possible that FastCGI is limited in how big a request can be, or something like that?

No, at least not the protocol itself.

How could I set up nginx and/or PHP to be able to read that mail?

Try looking into PHP. It seems that it just dies due to errors, or you haven't raised its limits enough.

Maxim D.

Maxim D. wrote:

[…]

Try looking into PHP. It seems that it just dies due to errors, or you haven't raised its limits enough.

Like I said before, at the beginning I got this in the logs:

PHP Fatal error: Allowed memory size of 67108864 bytes exhausted

I raised the limit to 128M and PHP didn't complain about that anymore, but it did complain about max_execution_time, like this:

PHP Fatal error: Maximum execution time of 60 seconds exceeded

I raised that to 240 seconds, and after that I didn't get any more errors from PHP, but I still got the errors in nginx's error_log and nothing happened… the php-cgi process just hung in the background at 100% CPU usage. I had to stop FastCGI and then kill the process that was using 100% CPU.

So I really don't know what else to tune in php.ini.
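
For reference, the two changes described above look like this in php.ini (the values are the ones that were tried, not recommendations):

    ; php.ini
    memory_limit = 128M        ; was 64M (67108864 bytes) when PHP died
    max_execution_time = 240   ; seconds; the previous limit of 60 was exceeded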

Are you connecting with a unix socket or over TCP? If it is TCP, try the socket connection. The problem may be related to something other than FastCGI and nginx.
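
If you try that, both sides have to agree on the socket; a rough sketch, where the socket path is an assumption:

    # start php-cgi bound to a unix socket instead of a TCP port
    php-cgi -b /tmp/php-fastcgi.sock

    # nginx: point fastcgi_pass at the same socket
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/tmp/php-fastcgi.sock;
    }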


Hello!

On Fri, Apr 10, 2009 at 03:24:18PM +0200, Robert G. wrote:

[…]

…I didn't get any more errors from PHP, but I still got the errors in nginx's error_log and nothing happened… the php-cgi process just hung in the background at 100% CPU usage. I had to stop FastCGI and then kill the process that was using 100% CPU.

So you got the nginx error above after killing the php process, right? That's expected.

So I really don't know what else to tune in php.ini.

I believe it's a SquirrelMail issue - it's just unable to render the message in question (highlight URLs in the text and so on) in a reasonable time. I'm not sure how SquirrelMail's rendering works, but if it's regexp-based it may take forever on such a big message.

You may try waiting a bit longer - but if I'm right, 240 seconds of execution time won't be enough; it may take days to complete. Also make sure you've tuned nginx's fastcgi_read_timeout (it's 60s by default). Your browser's timeouts may be a problem too.
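
Concretely, that directive goes in the location that does fastcgi_pass; the 300s here is only an illustration:

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        # default is 60s; a message that renders for minutes needs more
        fastcgi_read_timeout 300s;
    }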

Anyway, it doesn't look like an nginx issue.

Maxim D.

Sometimes kernel- and TCP-related settings become the bottleneck. First of all, you must be sure that all the data sent by nginx is actually received by PHP, and vice versa. To determine whether TCP is the problem, you can try unix sockets. Once you are sure about sending and receiving, you can watch the php process using "strace -p <pid>"; it will tell you what is going on on the PHP side.
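
For example, assuming the spinning process is the php-cgi one (the pid below is a placeholder):

    # find the pid of the busy php-cgi process
    ps aux | grep php-cgi

    # attach to it and watch its system calls
    strace -p 12345

A process stuck on the network shows blocked read()/write() calls; a process burning CPU inside a regexp shows few or no system calls at all.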


On Fri, Apr 10, 2009 at 5:04 AM, Robert G. [email protected]
wrote:

How could I set up nginx and/or PHP to be able to read that mail?

I would modify SquirrelMail to use X-Accel-Redirect so it isn't using readfile() or whatever else is keeping PHP busy churning through 25M attachments :)
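
A rough sketch of that idea, not SquirrelMail's actual code - the URL prefix and both paths are hypothetical:

    <?php
    // Hypothetical download handler: instead of streaming the file
    // through PHP with readfile(), hand it back to nginx.
    $file = basename($_GET['file']);   // basename() to avoid path traversal
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . $file . '"');
    // nginx intercepts this header and serves the file itself
    header('X-Accel-Redirect: /protected-attachments/' . $file);
    exit;

with a matching internal location on the nginx side:

    # "internal" means clients cannot request this URL directly,
    # only via the X-Accel-Redirect header above
    location /protected-attachments/ {
        internal;
        alias /var/spool/squirrelmail/attachments/;   # hypothetical path
    }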

Anıl Çetin wrote:

Are you connecting with a unix socket or over TCP? If it is TCP, try the socket connection. The problem may be related to something other than FastCGI and nginx.


I'm using TCP/IP. I'm not sure it would make such a big difference, and if it would, why?

Michael S. wrote:

[…]

I would modify SquirrelMail to use X-Accel-Redirect so it isn't using readfile() or whatever else is keeping PHP busy churning through 25M attachments :)

Fixed it, with the help of memory_limit set to 256M, max_execution_time=300s and fastcgi_read_timeout=240s, and finally, after about 5 minutes or so, I got the mail, fully :)

Seems it needs quite a lot of memory and time to be able to do this.

Thx guys!
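
For anyone hitting the same thing, the combination that worked, gathered in one place:

    ; php.ini
    memory_limit = 256M
    max_execution_time = 300

    # nginx, in the location doing fastcgi_pass
    fastcgi_read_timeout 240s;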

On Fri, Apr 10, 2009 at 11:25 AM, Robert G. [email protected]
wrote:

Fixed it, with the help of memory_limit set to 256M, max_execution_time=300s and fastcgi_read_timeout=240s, and finally, after about 5 minutes or so, I got the mail, fully :)

Seems it needs quite a lot of memory and time to be able to do this.

(cough) not if you tweaked it to use X-Accel-Redirect :)