Is it possible to send html HEAD early (chunked)?

Hi all,

inspired by the bigpipe pattern I’m wondering if it’s possible to send the
full html head early so that the browser can start downloading CSS and
javascript files.

An idea would be that the proxied backend uses chunked encoding and sends
the html head as the first chunk. The body would be sent as a separate
chunk as soon as all data is collected.

Not sure if this is relevant: In our particular case we’re using ssi in
the body to assemble the whole page, and some of the includes might take
some time to be loaded. The html head contains an include as well, but
this should always be loaded from the cache or served really fast by the
backend.

What do you think about this? Has anybody tried this already?

Cheers,
Martin

sounds more like a custom solution that might be achieved using lua +
nginx;

from what i understand you have a “static” part that should get sent
early/from cache and a “dynamic” part that must wait for the backend?

the only solution i could think of for such asynchronous delivery is
using nginx + lua, or maybe varnish (iirc you could mark parts of a page
cacheable, but i don’t know if you can deliver asynchronously though)

regards,

mex

Posted at Nginx Forum:

On 13.07.2014 15:40, “mex” [email protected] wrote:

sounds more like a custom solution that might be achieved using lua +
nginx;

Ok, I haven’t done anything with nginx+lua so far, need to check out
what can be done with lua. Can you give some direction how lua can be
helpful here?

from what i understand you have a “static” part that should get sent
early/from cache and a “dynamic” part that must wait for the backend?

Exactly.

Cheers,
Martin


On 13.07.2014 18:37, “mex” [email protected] wrote:

in your case i’d say the cleanest way would be a reengineering of your
application; the other way would imply a full regex on every request
coming back from your app-servers to filter out the stuff that has
already been sent. the problem: appservers like tomcat/jboss/rails and
so on usually send full html-pages;

We’re using the play framework, we can easily send partial content
using chunked encoding.

if you find a way to just send the <body> itself, the rest, like
sending html-headers early from cache, seems easy:

location /blah {
    content_by_lua '
        ngx.say(html_header)
        local res = ngx.location.capture("/get_stuff_from_backend")
        if res.status == 200 then
            ngx.say(res.body)
        end
        ngx.say(html_footer)
    ';
}

The html head, page header and page footer are dynamic as well and
depend on the current request (but are easy to calculate - sorry if my
previous answer was misleading here).
I think the cleanest solution would be if the backend could receive one
request and just split the content/response into chunks: send what’s
immediately available (html head + perhaps page header as well) as the
first chunk and send the rest afterwards.

do you refer to something similar to this?
GitHub - bigpipe/bigpipe: BigPipe is a radical new modular web pattern for Node.js

Not exactly this framework but the bigpipe concept. The idea I really
like is that the browser can start to download js + CSS and that the
user can already see the page header with navigation while the backend
is still working - therefore a much better perceived performance.

Cheers,
Martin



Ok, I haven’t done anything with nginx+lua so far, need to check out
what can be done with lua. Can you give some direction how lua can be
helpful here?

oh … lua might be used to manipulate every single phase of a request
coming to and processed by nginx; so basically a swiss army knife,
super-extended version :)

some stuff to skim through to get an impression:

in your case i’d say the cleanest way would be a reengineering of your
application; the other way would imply a full regex on every request
coming back from your app-servers to filter out the stuff that has
already been sent. the problem: appservers like tomcat/jboss/rails and
so on usually send full html-pages; if you find a way to just send the
<body> itself, the rest, like sending html-headers early from cache,
seems easy:

location /blah {
    content_by_lua '
        ngx.say(html_header)
        local res = ngx.location.capture("/get_stuff_from_backend")
        if res.status == 200 then
            ngx.say(res.body)
        end
        ngx.say(html_footer)
    ';
}

do you refer to something similar to this?



On Sunday 13 July 2014 14:49:18 Martin Grotzke wrote:

Not sure if this is relevant: In our particular case we’re using ssi in the
body to assemble the whole page, and some of the includes might take some
time to be loaded. The html head contains an include as well, but this
should always be loaded from the cache or should be served really fast by
the backend.

What do you think about this? Has anybody tried this already?

Have you tried nginx SSI module?
http://nginx.org/en/docs/http/ngx_http_ssi_module.html

wbr, Valentin V. Bartenev
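
For context, here is a minimal sketch of the kind of SSI setup being
suggested (the location and backend names are hypothetical):

```nginx
# hypothetical example: SSI processing of a proxied response
location / {
    ssi on;                     # process SSI directives in the response body
    proxy_pass http://backend;  # backend emits the page skeleton with includes
}
```

The page skeleton served by the backend would then contain directives
like <!--#include virtual="/head" -->, which nginx resolves while
producing the output.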

I think the cleanest solution would be if the backend could receive one
request and just split the content/response into chunks: send what’s
immediately available (html head + perhaps page header as well) as the
first chunk and send the rest afterwards.

sounds tricky … i must admit, i am not that deep into nginx-internals
to say if nginx does this already (send-chunks-as-they-arrive) or if it
is possible via an additional nginx-module; maybe some of the nginx-guys
might answer this?


On 13.07.2014 22:01, “Valentin V. Bartenev” [email protected] wrote:

Have you tried nginx SSI module?
Module ngx_http_ssi_module

We’re using the SSI module to assemble the page from various backends,
but how could SSIs help to send the head or page header early to the
client?

Cheers,
Martin

Hello!

On Sun, Jul 13, 2014 at 02:49:18PM +0200, Martin Grotzke wrote:

Not sure if this is relevant: In our particular case we’re using ssi in the
body to assemble the whole page, and some of the includes might take some
time to be loaded. The html head contains an include as well, but this
should always be loaded from the cache or should be served really fast by
the backend.

What do you think about this? Has anybody tried this already?

By default, nginx just sends what’s already available. And for
SSI, it uses chunked encoding. That is, if a html head is
immediately available in your case, it will be just sent to a
client.

There is a caveat though: the above might not happen due to
buffering in various places. Notably, this includes
postpone_output and the gzip filter. To ensure buffering will not
happen you should either disable the appropriate filters, or use
flushes. The latter is automatically done on each buffer sent when
using “proxy_buffering off” (“fastcgi_buffering off” and so on).
Flushing can also be done explicitly via $r->flush() when using
the embedded perl module.


Maxim D.
http://nginx.org/
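
Putting the above together, a sketch of a configuration that avoids the
buffering described (the location and backend names are hypothetical,
and this is illustrative, not a recommendation):

```nginx
# hypothetical sketch: stream SSI output to the client as it is produced
location / {
    ssi on;                     # assemble the page via SSI includes
    proxy_pass http://backend;
    proxy_buffering off;        # flush to the client on each buffer sent
    postpone_output 0;          # don't accumulate output before sending
    gzip off;                   # the gzip filter would otherwise buffer output
}
```

Switching buffering off has its own costs with slow clients, so it is
worth testing before relying on it.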

On 14.07.2014 14:54, “Maxim D.” [email protected] wrote:

By default, nginx just sends what’s already available. And for
SSI, it uses chunked encoding.

I don’t understand this. In my understanding SSI (the virtual include
directive) goes downstream (e.g. gets some data from a backend) so that
the backend defines how to respond to nginx. What does it mean that
nginx uses chunked encoding?

That is, if a html head is
immediately available in your case, it will be just sent to a
client.

Does it matter if the html head is pulled into the page via SSI or not?

There is a caveat though: the above might not happen due to
buffering in various places. Notably, this includes
postpone_output and the gzip filter. To ensure buffering will not
happen you should either disable the appropriate filters, or use
flushes. The latter is automatically done on each buffer sent when
using “proxy_buffering off” (“fastcgi_buffering off” and so on).

Ok. Might this have a negative impact on my backend when there are slow
clients? So that when a client consumes the response very slowly my
backend is kept “busy” (delivering the response as slowly as the client
consumes it) and cannot just hand off the data / response to nginx?

Thanks && cheers,
Martin

Hello!

On Mon, Jul 14, 2014 at 08:35:40PM +0200, Martin Grotzke wrote:

Am 14.07.2014 14:54 schrieb “Maxim D.” [email protected]:

By default, nginx just sends what’s already available. And for
SSI, it uses chunked encoding.

I don’t understand this. In my understanding SSI (the virtual include
directive) goes downstream (e.g. gets some data from a backend) so that the
backend defines how to respond to nginx. What does it mean that nginx uses
chunked encoding?

The transfer encoding is something that happens on a hop-by-hop
basis, and a backend can’t define the transfer encoding used between
nginx and the client.

The transfer encoding is selected by nginx as appropriate: if
Content-Length is known, it will be identity (or rather no transfer
encoding at all); if it’s not known (and the client uses HTTP/1.1),
chunked will be used.

In case of SSI, content length isn’t known in advance due to SSI
processing yet to happen, and hence chunked transfer encoding will
be used.
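
For illustration only (the markup, header values and layout here are
made up): a chunked response carrying the html head as its first chunk
looks roughly like this on the wire, with each chunk prefixed by its
size in hex:

```
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Type: text/html

3a
<html><head><link rel="stylesheet" href="/app.css"></head>
```

Further chunks follow as more of the page becomes available, and a
zero-size chunk terminates the response; the client can start fetching
the CSS as soon as that first chunk arrives.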

That is, if a html head is
immediately available in your case, it will be just sent to a
client.

Does it matter if the html head is pulled into the page via SSI or not?

It doesn’t matter.

and cannot just hand off the data / response to nginx?

Yes, switching off proxy buffering may have negative effects on
some workloads and it is not generally recommended.


Maxim D.
http://nginx.org/