Forum: NGINX How to disable buffering when using FastCGI?

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Nicolas Grilly (Guest)
on 2009-10-13 13:30
(Received via mailing list)
Is there a way to disable buffering when using FastCGI? (for a Comet
style application)

Is there an option "fastcgi_buffering off", working like the option
"proxy_buffering off" in the HTTP proxy module?

Thanks,

Nicolas Grilly
Phillip Oldham (Guest)
on 2009-10-13 13:32
(Received via mailing list)
Nicolas Grilly wrote:
> Is there a way to disable buffering when using FastCGI? (for a Comet
> style application)
>
> Is there an option "fastcgi_buffering off", working like the option
> "proxy_buffering off" in the HTTP proxy module?
>

Have you tried:

fastcgi_buffers 0 0;

?
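For context, fastcgi_buffers takes a buffer count and a buffer size, and would sit alongside the usual FastCGI settings. A minimal sketch (the location and socket path are hypothetical; small non-zero values shown, since a "0 0" value may be rejected by some nginx versions):

```nginx
location /comet {
    include fastcgi_params;                 # standard FastCGI parameter set
    fastcgi_pass unix:/tmp/backend.sock;    # hypothetical backend socket

    # fastcgi_buffers <count> <size>: per-connection response buffers.
    # Small values reduce (but do not eliminate) proxy-side buffering.
    fastcgi_buffer_size 4k;
    fastcgi_buffers 2 4k;
}
```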

--

*Phillip B Oldham*
ActivityHQ
phill@activityhq.com

Maxim Dounin (Guest)
on 2009-10-13 14:21
(Received via mailing list)
Hello!

On Tue, Oct 13, 2009 at 12:17:05PM +0200, Nicolas Grilly wrote:

> Is there a way to disable buffering when using FastCGI? (for a Comet
> style application)
>
> Is there an option "fastcgi_buffering off", working like the option
> "proxy_buffering off" in the HTTP proxy module?

No, there is no such option.  Buffering can't be disabled for
fastcgi.

Maxim Dounin
Nicolas Grilly (Guest)
on 2009-10-13 16:21
(Received via mailing list)
2009/10/13 Phillip Oldham <phill@activityhq.com>:
> Nicolas Grilly wrote:
> Is there a way to disable buffering when using FastCGI? (for a Comet
> style application)
>
> Is there an option "fastcgi_buffering off", working like the option
> "proxy_buffering off" in the HTTP proxy module?
>
> Have you tried:
>
> fastcgi_buffers 0 0;

Yes, I have tried this option, but it disables buffering only for the
FastCGI module, not for the output filters (especially gzip and SSL
modules). So, this option is not enough to completely disable
buffering.
Nicolas Grilly (Guest)
on 2009-10-13 16:26
(Received via mailing list)
On Tue, Oct 13, 2009 at 13:19, Maxim Dounin <mdounin@mdounin.ru> wrote:
> On Tue, Oct 13, 2009 at 12:17:05PM +0200, Nicolas Grilly wrote:
>
>> Is there a way to disable buffering when using FastCGI? (for a Comet
>> style application)
>>
>> Is there an option "fastcgi_buffering off", working like the option
>> "proxy_buffering off" in the HTTP proxy module?
>
> No, there is no such option.  Buffering can't be disabled for
> fastcgi.

Hello Maxim,

Is there no such option just because nobody has implemented it, or
because of some technical constraint?

Do you recommend that people developing Comet-style applications use
HTTP proxying instead of FastCGI?

Would it be difficult to implement a "fastcgi_buffering off" option,
using the same technique as in the HTTP proxy module's source code?

Thanks for your advice,

Nicolas Grilly
Denis F. Latypoff (Guest)
on 2009-10-13 16:54
(Received via mailing list)
Hello Nicolas,

Tuesday, October 13, 2009, 9:20:00 PM, you wrote:

>> fastcgi.
> Hello Maxim,

> Is there no such option just because nobody implemented it? Or is it
> because of some kind of technical constraint?

Yes. It's because of FastCGI protocol internals. The protocol splits
the "stream" into records of at most 32KB each. Each record has a
header (how many bytes it contains, etc.), so nginx can't send content
to the client until it gets the whole record from the upstream.

> Do you recommend to people developing Comet style application to use
> HTTP proxying instead of FastCGI?

Yes. Nginx can establish a pipe between the backend and the client
right after the headers are sent, provided proxy_buffering is off and
gzip is off.
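A minimal sketch of that setup, assuming a hypothetical HTTP backend on a local port:

```nginx
location /comet {
    proxy_pass http://127.0.0.1:8080;   # hypothetical HTTP backend

    # With buffering and gzip off, nginx streams the response body to
    # the client as it arrives, once the response headers are sent.
    proxy_buffering off;
    gzip off;
}
```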

> Is it difficult to implement the option "fastcgi_buffering off", using
> the same technique as in the source code of module HTTP proxy?

It is not possible. See comment above.

> Thanks for your advice,
Maxim Dounin (Guest)
on 2009-10-13 19:35
(Received via mailing list)
Hello!

On Tue, Oct 13, 2009 at 04:20:00PM +0200, Nicolas Grilly wrote:

> > fastcgi.
>
> Hello Maxim,
>
> Is there no such option just because nobody implemented it? Or is it
> because of some kind of technical constraint?

Something like this.  FastCGI requires buffer processing which
isn't compatible with current code for unbuffered connections.

> Do you recommend to people developing Comet style application to use
> HTTP proxying instead of FastCGI?

For now you should either close & reopen connections, or use HTTP
proxy instead.

> Is it difficult to implement the option "fastcgi_buffering off", using
> the same technique as in the source code of module HTTP proxy?

The current "proxy_buffering off" implementation is something weird
and should be nuked, IMHO.  The same implementation for FastCGI
just won't work.

I believe buffering control in the upstream module (which includes
fastcgi, proxy and memcached) should be changed to something more
flexible.  In particular, fastcgi should be aware of FastCGI
record boundaries, and shouldn't try to buffer too much once it
has received a full record.

I've posted some preliminary patches for this as a part of backend
keepalive support work, but they are a bit stale now.

Maxim Dounin
Nicolas Grilly (Guest)
on 2009-10-13 20:20
(Received via mailing list)
Hello Denis!

Thanks for your explanations.

2009/10/13 Denis F. Latypoff <denis@gostats.ru>:
> Tuesday, October 13, 2009, 9:20:00 PM, you wrote:
>> Is there no such option just because nobody implemented it? Or is it
>> because of some kind of technical constraint?
>
> Yes. It's because of FastCGI protocol internals. The protocol splits
> the "stream" into records of at most 32KB each. Each record has a
> header (how many bytes it contains, etc.), so nginx can't send content
> to the client until it gets the whole record from the upstream.

I agree that when a FastCGI backend sends a record to the web server,
the web server must wait for the complete record before forwarding it
to the client. This implies a lot of buffering if the records sent by
the FastCGI backend are very long.

Alternatively, the FastCGI backend can choose to send very short
records (for example 50 bytes) and then the web server must be able to
forward each record immediately after reception, without any
buffering.

Source: the FastCGI specification
(http://www.fastcgi.com/drupal/node/6?q=node/22) and its Python
implementation
(http://trac.saddi.com/flup/browser/flup/server/fcgi_base.py)

But even when the records sent by the FastCGI backend are very short
(around 50 bytes), Nginx doesn't send them immediately. Nginx seems to
buffer across FastCGI record boundaries. Am I correct?
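For illustration, here is how a backend might frame its output in deliberately short FCGI_STDOUT records, as described above (a Python sketch following the spec, not the flup source):

```python
import struct

FCGI_STDOUT = 6  # record type carrying the response body

def make_record(req_id, content):
    """Frame `content` as a single FCGI_STDOUT record (no padding)."""
    header = struct.pack(">BBHHBx", 1, FCGI_STDOUT, req_id, len(content), 0)
    return header + content

# Sending each chunk as its own short record would let a server that
# honors record boundaries forward it as soon as the record completes.
chunk = b"data: tick\n\n"          # e.g. one small Comet-style event
record = make_record(1, chunk)     # 8-byte header + 12-byte payload
```

Whether the web server actually flushes at each record boundary is exactly the question raised here: the framing permits it, but nothing in the protocol obliges the server to do so.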

>> Is it difficult to implement the option "fastcgi_buffering off", using
>> the same technique as in the source code of module HTTP proxy?
>
> It is not possible. See comment above.

Maybe we could force Nginx to send data back to the client right
after receiving each FastCGI record? Is that possible?

Thanks a lot for your advice,
Cheers,

Nicolas Grilly
Nicolas Grilly (Guest)
on 2009-10-13 20:36
(Received via mailing list)
Hello again Maxim,

2009/10/13 Maxim Dounin <mdounin@mdounin.ru>:
> On Tue, Oct 13, 2009 at 04:20:00PM +0200, Nicolas Grilly wrote:
>> Is there no such option just because nobody implemented it? Or is it
>> because of some kind of technical constraint?
>
> Something like this.  FastCGI requires buffer processing which
> isn't compatible with current code for unbuffered connections.

Understood.

>> Do you recommend to people developing Comet style application to use
>> HTTP proxying instead of FastCGI?
>
> For now you should either close & reopen connections, or use HTTP
> proxy instead.

So, for now, I guess my best bet is to use HTTP proxying :-)

> record boundaries, and shouldn't try to buffer too much as long as
> it got full record.
>
> I've posted some preliminary patches for this as a part of backend
> keepalive support work, but they are a bit stale now.

That would be a perfect solution! If the fastcgi module is aware of
FastCGI record boundaries and stops buffering after receiving a full
record, then the problem is solved. This gives the FastCGI backend
complete control over the amount of buffering: it can send short
records to limit buffering, or long records (around 8KB) for normal
buffering. Is that your plan for the future of the upstream module?

Cheers,

Nicolas
Maxim Dounin (Guest)
on 2009-10-13 21:52
(Received via mailing list)
Hello!

On Tue, Oct 13, 2009 at 08:24:53PM +0200, Nicolas Grilly wrote:

[...]

> > record boundaries, and shouldn't try to buffer too much as long as
> 8KB) for normal buffering. Is it your plan for the future of the
> upstream module?

Complete control isn't really a good thing, as it limits the ability
to optimize brain-damaged backends.  But as long as fastcgi has
finished a record and not started another one, it's probably a good
idea to pass the data we have so far downstream.  And the current
approach won't work with keepalive connections anyway.

But please keep in mind that I'm not Igor.

Maxim Dounin
Nicolas Grilly (Guest)
on 2009-10-14 14:31
(Received via mailing list)
Hello Maxim,

2009/10/13 Maxim Dounin <mdounin@mdounin.ru>:
> to optimize brain-damaged backends.  But as long as fastcgi
> finished record and not started another one - it's probably a good
> idea to pass data we got so far downstream.  And current approach
> won't work with keepalive connections anyway.
>
> But please keep in mind that I'm not Igor.

Thank you for your explanations! I will keep an eye on the evolution
of the FastCGI buffering, but I understand this is a complex topic. In
the meantime, I will use HTTP proxying.

Nicolas Grilly
This topic is locked and cannot be replied to.