How to disable output buffering with PHP and nginx

Hello,

In an effort to resolve a different issue, I am trying to confirm that
my stack is capable of servicing at least two simultaneous requests for
a given PHP script.

To confirm this, I have written a simple PHP script that runs for a
specified period of time and outputs the number of seconds elapsed since
the script was started.


<?php
$start = time();
echo 'Starting concurrency test. Seconds elapsed:' . PHP_EOL;
flush();
$elapsed = time() - $start;
echo $elapsed . PHP_EOL;
flush();
while ($elapsed < 60) {
    echo time() - $start . PHP_EOL;
    flush();
    sleep(5);
    $elapsed = time() - $start;
}
echo time() - $start . PHP_EOL;
flush();

-----------------------------------------------

For whatever reason, nginx *always* buffers the output, even when I set
output_buffering = off in the effective php.ini, *and* I set
fastcgi_keep_conn on; in my nginx.conf.

Of course, when I request the script via the command-line (php -f), the
output is not buffered.

Is it possible to disable PHP output buffering completely in nginx?

Thanks for any help!

-Ben

On 9/16/2013 1:19 PM, Ben J. wrote:



Sorry to bump this topic, but I feel as though I have exhausted the
available information on this subject.

I’m pretty much in the same boat as Roger from

. I have tried all of the suggestions mentioned and still cannot disable
output buffering in PHP scripts that are called via nginx.

I have ensured that:

1.) output_buffering = “Off” in effective php.ini.

2.) zlib.output_compression = “Off” in effective php.ini.

3.) implicit_flush = “On” in effective php.ini.

4.) “gzip off” in nginx.conf.

5.) “fastcgi_keep_conn on” in nginx.conf.

6.) “proxy_buffering off” in nginx.conf.

nginx 1.5.2 (Windows)
PHP 5.4.8 Thread-Safe
Server API: CGI/FastCGI

Is there something else that I’ve overlooked?

Perhaps there is someone with a few moments’ free time who would be
willing to give this a shot on his own system. This seems “pretty
basic”, but is proving to be a real challenge.

Thanks for any help with this!

-Ben

Have you seen this one;

Also try the NTS build of PHP; it might also be that a flush only works
with non-FCGI.


Hello,

On Mon, Oct 7, 2013 at 5:35 PM, Francis D. [email protected] wrote:

Run the fastcgi server like this:

env -i php-cgi -d cgi.fix_pathinfo=0 -q -b 9009

Use an nginx config which includes something like this:

I would recommend being careful with that experiment, since there is a
high probability that Ben uses php-fpm (it’s actually the recommended way
compared to the old FastCGI + php-cgi and its related issues).

Ben should first ensure that php-cgi and php-fpm share the exact same ini
configuration. That’s a common caveat… :o)

while also doing a

curl -i http://127.0.0.1:8080/php

and look at the network traffic from the fastcgi server.

If you don’t see a five-second gap between the two different response
packets, it is being buffered before it gets to nginx.

That’s the best way of proceeding, since it uses the exact environment
PHP will be using for production-ready code. Wireshark may be used to
read pcap dumps with a nice graphical presentation.
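
If it helps, one way to obtain such a dump (assuming the same backend port
as in Francis's example; the output file name here is arbitrary) is to let
tcpdump write a pcap file and open it in Wireshark afterwards:

tcpdump -nn -i any -s 0 -w fastcgi-backend.pcap port 9009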

Now make whichever please-don’t-buffer changes seem useful in the php code

I share the wish. :o)
Please share the results of every step with us so that we can help you
further.

B. R.

Hello!

On Mon, Oct 07, 2013 at 03:22:15PM -0400, Ben J. wrote:

[…]

1.) output_buffering = “Off” in effective php.ini.

2.) zlib.output_compression = “Off” in effective php.ini.

3.) implicit_flush = “On” in effective php.ini.

4.) “gzip off” in nginx.conf.

5.) “fastcgi_keep_conn on” in nginx.conf.

6.) “proxy_buffering off” in nginx.conf.

Just a side note: proxy_buffering is unrelated to fastcgi;
switching it off does nothing as long as you use fastcgi_pass.

nginx 1.5.2 (Windows)
PHP 5.4.8 Thread-Safe
Server API: CGI/FastCGI

Is there something else that I’ve overlooked?

Perhaps there is someone with a few moments’ free time who would be
willing to give this a shot on his own system. This seems “pretty
basic”, but is proving to be a real challenge.

There are lots of possible places where data can be buffered for
various reasons, e.g. postpone_output (see
Module ngx_http_core_module). In your configuration you
seem to disable the gzip filter, but there are other filters which
may buffer data as well, such as the SSI and sub filters, and likely
many 3rd party modules.

While it should be possible to carefully configure nginx to avoid
all places where buffering can happen, it should be much easier to
use

fastcgi_buffering off;

as available in nginx 1.5.6, see
Module ngx_http_fastcgi_module.
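
For what it's worth, a minimal sketch of how that might look in a location
serving PHP (the backend address and paths below are placeholders, not
taken from this thread):

location ~ \.php$ {
    include        fastcgi_params;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    fastcgi_pass   127.0.0.1:9000;   # placeholder php-cgi/php-fpm address
    fastcgi_buffering off;           # requires nginx 1.5.6 or newer
}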


Maxim D.
http://nginx.org/en/donation.html

On Mon, Oct 07, 2013 at 03:22:15PM -0400, Ben J. wrote:

On 9/16/2013 1:19 PM, Ben J. wrote:

Hi there,

For whatever reason, nginx always buffers the output, even when I set

Is it possible to disable PHP output buffering completely in nginx?

Have you shown that the initial problem is on the nginx side?

I suspect it will be more interesting to people on this list if you
have a simple test case which demonstrates that it is nginx which is
buffering when you don’t want it to.

Use a php script like this:

==

<? echo "The first bit"; sleep(5); echo "The second bit"; ?>

==

Run the fastcgi server like this:

env -i php-cgi -d cgi.fix_pathinfo=0 -q -b 9009

Use an nginx config which includes something like this:

==
location = /php {
    fastcgi_param SCRIPT_FILENAME /usr/local/nginx/test.php;
    fastcgi_pass 127.0.0.1:9009;
}
==

Then do something like

tcpdump -nn -i any -A -s 0 port 9009

while also doing a

curl -i http://127.0.0.1:8080/php

and look at the network traffic from the fastcgi server.

If you don’t see a five-second gap between the two different response
packets, it is being buffered before it gets to nginx.

Now make whichever please-don’t-buffer changes seem useful in the php code
and in the fastcgi server configuration. When you can see non-buffered
output getting to nginx, then you know the non-nginx side is doing what
you want. So now you can start testing nginx configuration changes;
and you can share the exact non-nginx configuration you use, so that
someone else can copy-paste it and see the same problem that you see.

(Change 127.0.0.1:9009 to be whatever remote server runs your fastcgi
server, if that makes it easier to run tcpdump.)

Good luck with it,

f

Francis D. [email protected]

Hello!

On Mon, Oct 07, 2013 at 10:57:14PM -0400, B.R. wrote:

[…]

I then noticed on the capture that PHP was rightfully sending the content
in 2 parts as expected, but somehow nginx was still waiting for the last
part to arrive before sending content to the client.

What makes you think that nginx was waiting for the last part
without sending data to the client?

Please note that checking with a browser, as in your checklist, isn’t
meaningful, as browsers may (and likely will) wait for a complete
response from a server. In my limited testing on Windows, IE needs a
complete response, while Chrome shows data on arrival.

Just in case, it works fine here with the following minimal
config:

events {}
http {
    server {
        listen 8080;
        location / {
            fastcgi_pass backend:9000;
            fastcgi_param SCRIPT_FILENAME /path/to/flush.php;
            fastcgi_keep_conn on;
        }
    }
}

But, again, testing with fastcgi_keep_conn is mostly useless now,
it’s an abuse of the unrelated directive. The fastcgi_buffering
directive is already here in 1.5.6, use

fastcgi_buffering off;

instead if you need to turn off buffering for fastcgi responses.
Just in case, documentation can be found here:

http://nginx.org/r/fastcgi_buffering


Maxim D.
http://nginx.org/en/donation.html

On 10/8/2013 11:48 AM, Maxim D. wrote:


Hi, everyone, so sorry for the delayed reply.

Thank you to ittp2012, Francis, Maxim, and B.R.

Well, after all of the configuration changes, both to nginx and PHP, the
solution was to add the following header to the response:

header('Content-Encoding: none;');

With this header in place (sent as the first output in the PHP test
script), I see the timing intervals from the test script printed to the
browser in real-time. This works even in nginx-1.5.2, with my existing
configuration. (This seems to work in Chrome and Firefox, but not IE,
which corroborates Maxim’s above observations re: individual browser
behavior.)
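
For reference, a rough sketch of where that workaround sits in the test
script (this is my reading of the description above, not Ben's exact file;
as Maxim notes further down, 'none' is not a valid content-coding, so this
is a debugging aid only):

<?php
// Debugging workaround only: sent before any other output so the gzip
// filter leaves the response alone.
header('Content-Encoding: none;');

$start = time();
echo 'Starting concurrency test. Seconds elapsed:' . PHP_EOL;
flush();
// ... rest of the timing loop as in the original script ...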

The whole reason for which I was seeking to disable output buffering is
that I need to test nginx’s ability to handle multiple requests
simultaneously. This need is inspired by yet another problem, about
which I asked on this list in late August: “504 Gateway Time-out when
calling curl_exec() in PHP with SSL peer verification
(CURLOPT_SSL_VERIFYPEER) off”.

Some folks suggested that the cURL problem could result from nginx not
being able to serve more than one request for a PHP file at a time. So,
that’s why I cooked up this test with sleep() and so forth.

Now that output buffering is disabled, I am able to test concurrency.
Sure enough, if I request my concurrency test script in two different
browser tabs, the second tab will not begin producing output until the
first tab has finished. I set the test time to 120 seconds and at
exactly 120 seconds, the second script begins producing output.

Also, while one of these tests is running, I am unable to request a
“normal PHP web page” from the same server (localhost). The request
“hangs” until the concurrency test in the other tab is finished.

I even tried requesting the test script from two different browsers, and
the second browser always hangs until the first completes.

These observations lend credence to the notion that my cURL script is
failing due to dead-locking of some kind. (I’ll refrain from discussing
this other problem here, as it has its own thread.)

Is this inability to handle concurrent requests a limitation of nginx on
Windows? Do others on Windows observe this same behavior?

I did see the Windows limitation, “Although several workers can be
started, only one of them actually does any work”, but that isn’t the
problem here, right? One nginx worker does not mean that only one PHP
request can be satisfied at a time, correct?

Thanks again for all the help, everyone!

-Ben

On 10/10/2013 11:26 AM, Maxim D. wrote:

[…]

that’s why I cooked up this test with sleep() and so forth.

Your problem is that you only have one PHP process running - and
it can only service one request at a time. AFAIK, php-cgi can’t
run more than one process on Windows (on Unix it can, with
PHP_FCGI_CHILDREN set). Not sure if there are good options to run
multiple PHP processes on Windows.

Thank you for clarifying this crucial point, Maxim. I believe that this
is indeed the crux of the issue.

PHP process which limits you.

Understood. This is so hard to believe (the lack of support for multiple
simultaneous PHP processes on Windows) that I had overlooked this as a
possibility.

And, now that you’ve explained the problem, finding corroborating
evidence is much easier:

.

An interesting excerpt from the above thread:

“I did look deeper into the PHP source code after that and found that
the section of code which responds to PHP_FCGI_CHILDREN has been
encapsulated by #ifndef WIN32 So the developers must be aware of the
issue.”

For the time being, I’ll have to run these cURL scripts using Apache
with mod_php, instead of nginx. Not the end of the world.

Thanks again for your valuable time and for clearing-up this major
limitation of PHP (NOT nginx) on Windows.

Best regards,

-Ben

I took a bit of time to do that… TBH I lost a lot of time finding a way
to record traffic to a locally hosted Web server in Windows… :o
Why would people host stuff with Windows? oO

Anyway. Here are the details:

Configuration:
nginx 1.5.6
PHP 5.4.20 Thread-Safe
Wireshark 1.10.2
I took the liberty of upgrading test components to the latest release in
the same branch, since some bugs of interest might have been corrected.

Synthesis:
I didn’t go far on the PHP side, but I noticed on early captures that
PHP was still sending everything after 5 seconds.

I cheated a little bit by modifying the test file to use the PHP flush()
procedure, which forces buffers to be emptied and content sent to the
client.
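
For clarity, here is a guess at what that modified test file might look
like (based on Francis's original two-part script; the actual attachment
isn't reproduced here):

<?php
// Assumed variant: flush() after each echo so PHP hands every chunk to
// the FastCGI connection immediately instead of at script end.
echo "The first bit";
flush();
sleep(5);
echo "The second bit";
flush();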

I then noticed on the capture that PHP was rightfully sending the
content in 2 parts as expected, but somehow nginx was still waiting for
the last part to arrive before sending content to the client.

There is still work to be done on the nginx side. Since we are on the
nginx mailing list, you may prioritize that and see to the PHP part
later on. :o)

Every modification I made to the original nginx.conf file is
self-contained in the location serving ‘.php’ files.

How to reproduce:
The main concern here was to record traffic between nginx and PHP. Here
are the steps for a successful operation.

  1. Use the nginx configuration provided as attachment (nginx.conf, to
     put in \conf, overwriting the default one)
  2. Place the test script in \html\
  3. Use the PHP configuration provided as attachment (php.ini, to put
     in )
  4. Modify Windows’ routing table to force local traffic to make a
     round trip to the nearest router/switch (local traffic can’t be
     recorded on modern Windows):
  5. In cmd.exe, type ‘route add ’ (you’ll find the required information
     with a quick ‘ipconfig’)
  6. Start PHP with the following arguments (either command-line or
     through a shortcut): ‘php-cgi.exe -b :9000’
  7. Start nginx (simply double-click on it)
  8. Check that 2 nginx processes and 1 php-cgi.exe process exist in the
     task manager.
  9. Check (through ‘netstat -abn’) that php-cgi.exe is listening on
     :9000
  10. Start Wireshark recording on the interface related to the IP
      address used before (or all interfaces) with capture filter
      ‘port 9000’
  11. Browse to http://localhost/test.php
  12. Stop Wireshark recording

You’ll find my recording of the backend traffic as attachment.
Please ignore the duplicated traffic (as traffic goes forth and back on
the network interface, it is recorded 2 times in total: that’s a drawback
of the ‘hack’ setup you need on Windows to record local traffic…).

Hope that’ll help

B. R.

Hello!

On Thu, Oct 10, 2013 at 11:13:40AM -0400, Ben J. wrote:

[…]

Well, after all of the configuration changes, both to nginx and PHP, the
solution was to add the following header to the response:

header('Content-Encoding: none;');

Just in case: this is very-very wrong, there is no such
content-coding. Never use this in real programs.

But the fact that it helps suggests you actually have gzip enabled
somewhere in your nginx config - as gzip doesn’t work if it sees
Content-Encoding set.

All this probably doesn’t matter, since you only used it as a
debugging tool.

[…]

I even tried requesting the test script from two different browsers, and
the second browser always hangs until the first completes.

These observations lend credence to the notion that my cURL script is
failing due to dead-locking of some kind. (I’ll refrain from discussing
this other problem here, as it has its own thread.)

Is this inability to handle concurrent requests a limitation of nginx on
Windows? Do others on Windows observe this same behavior?

Your problem is that you only have one PHP process running - and
it can only service one request at a time. AFAIK, php-cgi can’t
run more than one process on Windows (on Unix it can, with
PHP_FCGI_CHILDREN set). Not sure if there are good options to run
multiple PHP processes on Windows.

A quick-and-dirty solution would be to run multiple php-cgi
processes on different ports and list them all in an upstream{}
block.
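
A bare-bones sketch of that idea (the ports and pool name are made up for
illustration; start one 'php-cgi.exe -b 127.0.0.1:<port>' per entry):

upstream php_pool {
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
}

# then, in the location serving .php files:
#     fastcgi_pass php_pool;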

I did see the Windows limitation, “Although several workers can be
started, only one of them actually does any work”, but that isn’t the
problem here, right? One nginx worker does not mean that only one PHP
request can be satisfied at a time, correct?

Correct. One nginx process can handle multiple requests, it’s one
PHP process which limits you.


Maxim D.
http://nginx.org/en/donation.html

Correct. One nginx process can handle multiple requests, it’s one
PHP process which limits you.

Not really; use the NTS version of PHP, not the TS one, and use a pool as
suggested, e.g.:

# loadbalancing  php
upstream myLoadBalancer {
    server 127.0.0.1:19001 weight=1 fail_timeout=5;
    server 127.0.0.1:19002 weight=1 fail_timeout=5;
    server 127.0.0.1:19003 weight=1 fail_timeout=5;
    server 127.0.0.1:19004 weight=1 fail_timeout=5;
    server 127.0.0.1:19005 weight=1 fail_timeout=5;
    server 127.0.0.1:19006 weight=1 fail_timeout=5;
    server 127.0.0.1:19007 weight=1 fail_timeout=5;
    server 127.0.0.1:19008 weight=1 fail_timeout=5;
    server 127.0.0.1:19009 weight=1 fail_timeout=5;
    server 127.0.0.1:19010 weight=1 fail_timeout=5;
}

usage: fastcgi_pass myLoadBalancer;

For a 100 Mbit pipe this is enough to handle many, many concurrent
users.

runcgi.cmd

@ECHO OFF
ECHO Starting PHP FastCGI…
c:
cd \php

del abort.now

start multi_runcgi.cmd 19001
start multi_runcgi.cmd 19002
start multi_runcgi.cmd 19003
start multi_runcgi.cmd 19004
start multi_runcgi.cmd 19005
start multi_runcgi.cmd 19006
start multi_runcgi.cmd 19007
start multi_runcgi.cmd 19008
start multi_runcgi.cmd 19009
start multi_runcgi.cmd 19010

multi_runcgi.cmd

@ECHO OFF
ECHO Starting PHP FastCGI…
set PATH=C:\PHP;%PATH%
set TEMP=\webroot_other\xcache
set TMP=\webroot_other\xcache
set PHP_FCGI_CHILDREN=0
set PHP_FCGI_MAX_REQUESTS=10000

:loop
c:
cd \php

C:\PHP\php-cgi.exe -b 127.0.0.1:%1

set errorlvl=%errorlevel%
choice /t:y,3

date /t>>\webroot_other\fzlogs\ServerWatch.log
time /t>>\webroot_other\fzlogs\ServerWatch.log
echo Process php-cgi %1 restarted>>\webroot_other\fzlogs\ServerWatch.log
echo Errorlevel = %errorlvl% >>\webroot_other\fzlogs\ServerWatch.log
echo:>>\webroot_other\fzlogs\ServerWatch.log

if not exist abort.now goto loop

Hello!

On Thu, Oct 10, 2013 at 01:35:00PM -0400, itpp2012 wrote:

    server 127.0.0.1:19003 weight=1 fail_timeout=5;
    server 127.0.0.1:19004 weight=1 fail_timeout=5;
    server 127.0.0.1:19005 weight=1 fail_timeout=5;
    server 127.0.0.1:19006 weight=1 fail_timeout=5;
    server 127.0.0.1:19007 weight=1 fail_timeout=5;
    server 127.0.0.1:19008 weight=1 fail_timeout=5;
    server 127.0.0.1:19009 weight=1 fail_timeout=5;
    server 127.0.0.1:19010 weight=1 fail_timeout=5;

usage: fastcgi_pass myLoadBalancer;

}

Just in case, it would be a good idea to use the least_conn balancer
here.

http://nginx.org/r/least_conn
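
In other words (sketch only; the least_conn directive goes at the top of
the existing upstream block, everything else unchanged):

upstream myLoadBalancer {
    least_conn;
    server 127.0.0.1:19001 weight=1 fail_timeout=5;
    server 127.0.0.1:19002 weight=1 fail_timeout=5;
    # ... 19003 through 19010 as before ...
}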


Maxim D.
http://nginx.org/en/donation.html

On 10/10/2013 2:24 PM, Maxim D. wrote:

# loadbalancing  php
    server 127.0.0.1:19010 weight=1 fail_timeout=5;

usage: fastcgi_pass myLoadBalancer;

}

Just in case, it would be a good idea to use the least_conn balancer
here.

Module ngx_http_upstream_module

Cool, this looks great.

Thanks for providing a full, concrete example, itpp2012! That’s hugely
helpful!

I’ll bear in mind your advice regarding least_conn balancer, too, Maxim.

Thanks again, guys!

-Ben