Fastcgi_read_timeout with PHP backend

Hello,

I am trying to understand how fastcgi_read_timeout works in Nginx.

Here is what I want to do:
I list files (a few MB each) in a remote location and copy them one by one
(in a loop) to the local disk through PHP.
I do not know how many files I will need to copy, thus I do not know the
total amount of time the script needs to finish its execution. What I can
ensure is a processing time limit per file.
I would like my script not to be forcefully interrupted by either side
(PHP or Nginx) before completion.

What I did so far:

  • PHP has a ‘max_execution_time’ of 30s (the default?). In the loop copying
    files, I call set_time_limit() to reinitialize the limit before each file
    copy, hence each file copy has 30s to complete: more than enough!

  • The problem seems to lie on the Nginx side, with the
    ‘fastcgi_read_timeout’ configuration entry (see the sketch below).
    I can’t ensure what maximum time I need, and I would like not to use
    way-off values such as 2 weeks or 1 year there. ;o)
    What I understood from the documentation
    (http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_read_timeout)
    is that the timeout is reinitialized after a successful read: am I right?
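
    For context, a minimal sketch of the kind of location block involved
    (paths and values here are illustrative, not my actual configuration):

        location ~ \.php$ {
            include              fastcgi_params;
            fastcgi_pass         unix:/var/run/php5-fpm.sock;  # or 127.0.0.1:9000
            # Timeout between two successive reads from the backend; if PHP
            # stays silent longer than this, nginx aborts the request.
            fastcgi_read_timeout 60s;
        }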

The challenge is now to cut any buffering occurring on the PHP side and let
Nginx manage it (since the buffering will then occur after content has been
read from the backend). Here is what I did:

  • PHP’s zlib.output_compression is deactivated by default
  • I deactivated PHP’s output_buffering (the default is 4096 bytes)
  • I call PHP’s flush() at the end of each iteration of the copying loop,
    after a message is written to the output (see the sketch below)
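
A stripped-down sketch of the copying loop as described above (the remote
listing and the paths are placeholders, not my actual code):

    <?php
    // Make sure nothing is held in PHP's output buffers.
    while (ob_get_level() > 0) {
        ob_end_flush();
    }

    $files = list_remote_files();  // placeholder for the remote listing

    foreach ($files as $file) {
        // Give each file copy its own 30s budget.
        set_time_limit(30);

        $ok = copy('ftp://remote/' . $file, '/local/path/' . $file);
        echo ($ok ? 'Copied: ' : 'Failed: ') . $file . "\n";

        // Push the message out towards nginx after each iteration.
        flush();
    }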

Current state:

  • The script seems to still be cut after the expiration of the
    ‘fastcgi_read_timeout’ limit (confirmed by the error log entry ‘upstream
    timed out (110: Connection timed out) while reading upstream’)
  • The PHP loop is entered several times since multiple files have been
    copied
  • The output sent to the browser is cut before any output from the loop
    appears

It seems that there is still some unwanted buffering on the PHP side.
I also note that PHP’s flush() doesn’t seem to work, since the output in the
browser doesn’t contain any message written after each file copy.

Am I misunderstanding something about Nginx here (especially about the
‘fastcgi_read_timeout’ directive)?
Have you any intel/piece of advice on the matter?

Thanks,

B. R.

No ideas?

B. R.

Write a script that lists the remote files, then checks for the existence
of each file locally, and copies it if it doesn’t exist? That way no
internal loop is used - use a different exit code to note whether one was
copied, or there were none ready.

That way you scale to a single file transfer per request. There’s nothing to
be gained from looping internally - well, performance-wise that is.
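
Roughly something like this (the remote listing and the paths are just
placeholders):

    <?php
    // copy_next_file.php - copies at most one missing file per run.
    $remote = list_remote_files();            // placeholder
    foreach ($remote as $file) {
        $local = '/local/path/' . $file;
        if (!file_exists($local)) {
            $ok = copy('ftp://remote/' . $file, $local);
            exit($ok ? 0 : 2);                // 0: copied one file, 2: copy failed
        }
    }
    exit(1);                                  // 1: nothing left to copy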

Steve



Steve H. BSc(Hons) MNZCS [email protected]
http://www.greengecko.co.nz
MSN: [email protected]
Skype: sholdowa

Thanks for your answer.

I didn’t go into specifics because my problem doesn’t lie in the
application-level logic.
What you describe is what my script does already.

However, in this particular case I have 16 files, each weighing a few MB,
which need to be transferred back at once.

PHP allocates 30s for each loop turn (more than enough to copy the file +
echo some output message about successful/failed completion).
Nginx cuts the execution after the fastcgi_read_timeout delay even with my
efforts to cut down any buffering on the PHP side (thus forcing the output
to be sent to Nginx to reinitialize the timeout counter).
That Nginx action is the center of my attention right now. How can I get rid
of it in a scalable fashion (i.e. no fastcgi_read_timeout = 9999999)?

B. R.

Surely, you’re still serialising the transfer with a loop?




One way or another, even if an external script is called, PHP will need to
wait for the script’s completion, making the parallelization impossible or
at least useless (since waiting for the return code of an external script is
still blocking).

I am not trying to find a workaround; I need to know how
fastcgi_read_timeout works (if I understood it properly), whether I properly
disabled PHP buffering for my example case, and how to eventually control
those timeouts.
I’d like to address the central problem here, not close my eyes to it.


B. R.

OK, I leave you to it.

However, asynchronously spawning subprocesses will allow you to
parallelise the process. I’d call it design, rather than a workaround,
but there you go (:
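
For example, something like this rough sketch fires off the copies without
the parent request waiting on them (copy_one_file.php is a hypothetical
helper taking the file name as its argument):

    <?php
    // Spawn one background worker per file; the parent does not block,
    // because the output is redirected and the command is backgrounded.
    foreach (list_remote_files() as $file) {   // placeholder listing
        exec('php copy_one_file.php ' . escapeshellarg($file)
            . ' > /dev/null 2>&1 &');
    }
    echo "All copies started.\n";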

Steve



Hello!

On Sat, May 25, 2013 at 01:01:32PM -0400, B.R. wrote:

I would like my script not to be forcefully interrupted by either side
I can’t ensure what maximum time I need, and I would like not to use
way-off values such as 2 weeks or 1 year there. ;o)
What I understood from the documentation
(http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_read_timeout)
is that the timeout is reinitialized after a successful read: am I right?

Yes.

  • The script seems to still be cut after the expiration of the [...] the
    output in the browser doesn’t contain any message written after each
    file copy.

There is buffering on the nginx side, too, which may prevent the last part
of the response from appearing in the output as seen by a browser.
It doesn’t explain why the read timeout isn’t reset, though.

Am I misunderstanding something about Nginx here (especially about the
‘fastcgi_read_timeout’ directive)?

Your understanding looks correct.

Have you any intel/piece of advice on the matter?

You may try looking into the debug log, see
http://nginx.org/en/docs/debugging_log.html, and/or tcpdump
between nginx and php. It should help to examine what is actually
seen by nginx from php.
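
For example, something along these lines (the log path and the backend
address are just for illustration):

    # in nginx.conf (requires nginx built with --with-debug)
    error_log /var/log/nginx/error.log debug;

and, assuming php-fpm listens on 127.0.0.1:9000:

    tcpdump -i lo -s 0 -w /tmp/fastcgi.pcap port 9000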


Maxim D.
http://nginx.org/en/donation.html

Thanks for programming 101.
I’ll keep your advice in mind for when my goal is optimizing my current
work, which is not currently the case.
I do not simply want something to work here. I am fully capable of finding
workarounds whenever I need/want them.
I’ll leave the ‘I do not care how it works as long as it works’ motto to
business-related goals ;o)

I need to understand the PHP/Nginx communication. Searching the Web for it
showed me a lot of unsatisfying/dirty workarounds, but no real
solution/explanation.
If anyone could enlighten me on those Nginx timeouts, I’d be more than glad!

B. R.

Hello Maxim,

I spent a lot of time trying to figure out what is happening.
It seems that after some service restart, the problem sometimes disappears
before coming back again on the following try.

I finally managed to capture the debug log you’ll find as an attachment.
I’ll need your expertise on it, but it seems that the tcpdump shows stuff
which does not appear in the nginx output.
The archive attached is as self-contained as possible, including:

  • Server information (uname -a)
  • Nginx information (nginx -V): self-compiled from sources since I needed
    to activate --with-debug
  • tcpdump between php and nginx (+ a ‘control’ file containing the
    standard output of the tcpdump command, including interface and packet
    number information only)
  • nginx error_log (set to ‘debug’)
  • browser output (copy-pasted from source; you’ll see there is no end
    tag, thus proving the output is brutally cut off)

If I can be of any help by providing some other information, please let me
know.

B. R.

Hello!

On Tue, May 28, 2013 at 01:32:48PM -0400, B.R. wrote:

The archive attached is as self-contained as possible, including:

  • Server information (uname -a)
  • Nginx information (nginx -V): self-compiled from sources since I needed
    to activate --with-debug
  • tcpdump between php and nginx (+ a ‘control’ file containing the
    standard output of the tcpdump command, including interface and packet
    number information only)
  • nginx error_log (set to ‘debug’)
  • browser output (copy-pasted from source; you’ll see there is no end
    tag, thus proving the output is brutally cut off)

As per debug log, nothing is seen from php after 18:48:45, and
this results in the timeout at 18:50:45.

Unfortunately, the tcpdump dump provided looks corrupted - it shows only
the first 4 packets, both here and on cloudshark
(http://cloudshark.org/captures/bf44d289b1f6).

Overall I would suggest that this is just how your code behaves.
You may want to add some debugging to your application to debug this
further if you still think there is something wrong.


Maxim D.
http://nginx.org/en/donation.html

Hello,

I do not know if my private emails on the matter to Maxim went through.
Non-broken resources were included.

B. R.

Hello!

On Sat, Jun 01, 2013 at 01:23:03PM -0400, B.R. wrote:

Hello,

I do not know if my private emails on the matter to Maxim went through.
Non-broken resources were included.

Non-broken tcpdump just confirms what was already said based on
error log: nothing is seen from php after 18:48:45, and
this results in the timeout at 18:50:45. You have to dig into
your code.


Maxim D.
http://nginx.org/en/donation.html

Hello,

Non-broken tcpdump just confirms what was already said based on

error log: nothing is seen from php after 18:48:45, and
this results in the timeout at 18:50:45. You have to dig into
your code.

I agree.

However, if you look at the output, you’ll notice that the output is cut in
the middle of what is sent at 16:45:43.8 UTC.
The content of the array as printed by PHP into the TCP socket contains 29
elements (numbered from 0 to 28). The output is cut at the 24th.

All the following content sent by PHP (and… received by Nginx?) is not
displayed, which produces the faulty browser output.

I understand there is a timeout at some point (PHP runs out of memory). It
seems that the error is not sent through the FastCGI tunnel and PHP simply
stops answering.
But that is another problem, not the main one I want to outline here.


B. R.

Hello!

On Wed, Jun 05, 2013 at 08:50:49AM -0400, B.R. wrote:

stops answering.
But that is another problem, not the main one I want to outline here.

As long as the upstream server times out, nginx stops processing the
request without sending what was buffered by nginx but not yet sent to the
client.


Maxim D.
http://nginx.org/en/donation.html

Hello!

On Wed, Jun 05, 2013 at 10:28:26AM -0400, B.R. wrote:

oO

Is that a bug or a feature?
Wouldn’t it be nice not to lose information in the middle? PHP sends
information and probably wants the Web server to do its job forwarding it
to the browser. I’d like that, as a personal note.

This is how it works. With proxy, you may avoid buffering in
nginx with proxy_buffering off. With fastcgi you can’t, as
unbuffered mode isn’t implemented (well, fastcgi_keep_conn will do
something comparable, but not exactly).

As long as this only happens when the connection is broken anyway, this
isn’t considered to be a problem, as the information is lost anyway.
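
For reference, a rough sketch of a configuration using it (addresses are
illustrative; the upstream block goes into http{}, the location into a
server{}):

    upstream php_backend {
        server 127.0.0.1:9000;
        # keepalive connections to the backend are required
        # for fastcgi_keep_conn to have an effect
        keepalive 8;
    }

    location ~ \.php$ {
        include           fastcgi_params;
        fastcgi_pass      php_backend;
        # ask the FastCGI server to keep the connection open
        fastcgi_keep_conn on;
    }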

Thanks for that insight Maxim, that was one of the pieces of information I
was looking for. ;o)

This is what I wrote in my very first message in this thread:

: There is buffering on the nginx side, too, which may prevent the last part
: of the response from appearing in the output as seen by a browser.
: It doesn’t explain why the read timeout isn’t reset, though.


Maxim D.
http://nginx.org/en/donation.html
