Custom handler module - dynamic response with unknown content length

Hi there,

I learned a bit about how to write a handler module from [1] and [2].
[1]

[2] Emiller’s Guide to Nginx Module Development – Evan Miller

But I need to rewrite [1] to send a dynamically generated octet stream to
the client with unknown content length (it will usually be large). First I
tried:

 /* allocate a buffer for your response body */
 b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
 if (b == NULL) {
     return NGX_HTTP_INTERNAL_SERVER_ERROR;
 }

 /* attach this buffer to the buffer chain */
 out.buf = b;
 out.next = NULL;

 /* adjust the pointers of the buffer */
 b->pos = ngx_hello_string;
 b->last = ngx_hello_string + sizeof(ngx_hello_string) - 1;
 b->memory = 1;    /* this buffer is in memory */
 b->last_buf = 0;  /* not the last buffer; more output will follow */

 /* set the status line */
 r->headers_out.status = NGX_HTTP_OK;
 //r->headers_out.content_length_n = sizeof(ngx_hello_string) - 1;

 /* send the headers of your response */
 rc = ngx_http_send_header(r);

 if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
     return rc;
 }

 /* send the buffer chain of your response */

 int i;
 for (i = 1; i < 10000000; i++) {
     b->flush = (0 == (i % 1000));
     rc = ngx_http_output_filter(r, &out);
     if (rc != NGX_OK) {
         ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, "bad rc, rc:%d", rc);
         return rc;
     }
 }
 b->last_buf = 1;
 return ngx_http_output_filter(r, &out);

But it simply fails with the following errors:

2014/02/28 22:17:25 [alert] 25115#0: *1 zero size buf in writer t:0 r:0
f:0 00000000 080C7431-080C7431 00000000 0-0, client: 127.0.0.1, server:
localhost, request: "GET / HTTP/1.1", host: "localhost:8080"
2014/02/28 22:17:25 [alert] 25115#0: *1 bad rc, rc:-1, client:
127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host:
"localhost:8080"

WHAT IS THE CORRECT WAY TO ACCOMPLISH MY NEED? (I searched a lot but
only found [3], which has rc=-2 rather than -1)
[3] http://web.archiveorange.com/archive/v/yKUXMLzGBexXA3PccJa6

Thanks in advance!

Hello!

On Fri, Feb 28, 2014 at 10:44:38PM +0330, Yasser Zamani wrote:

Hi there,

I learned a bit about how to write a handler module from [1] and [2].
[1] ZhuZhaoyuan.com
[2] Emiller's Guide to Nginx Module Development – Evan Miller

But I need to rewrite [1] to send a dynamically generated octet stream to
the client with unknown content length (it will usually be large). First I
tried:

[…]

2014/02/28 22:17:25 [alert] 25115#0: *1 zero size buf in writer t:0 r:0 f:0
00000000 080C7431-080C7431 00000000 0-0, client: 127.0.0.1, server:
localhost, request: "GET / HTTP/1.1", host: "localhost:8080"
2014/02/28 22:17:25 [alert] 25115#0: *1 bad rc, rc:-1, client: 127.0.0.1,
server: localhost, request: "GET / HTTP/1.1", host: "localhost:8080"

You've tried to send the same chain with the same buffer multiple
times. After a buffer is sent for the first time, its pointers
are adjusted to indicate it was sent - b->pos is moved to b->last, and
the buffer's size becomes zero. A second attempt to send the same
buffer will expectedly trigger the "zero size buf" check.
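
That is, if the same buffer is to be reused, its pointers have to be
re-armed before every send - roughly like this (a sketch using the names
from the code above, not code from the original message):

 /* re-arm the buffer so it is non-empty again before the next send */
 b->pos = ngx_hello_string;
 b->last = ngx_hello_string + sizeof(ngx_hello_string) - 1;

 rc = ngx_http_output_filter(r, &out);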

WHAT IS THE CORRECT WAY TO ACCOMPLISH MY NEED? (I searched a lot but only
found [3], which has rc=-2 rather than -1)
[3] http://web.archiveorange.com/archive/v/yKUXMLzGBexXA3PccJa6

A trivial approach is to prepare the full output chain, and then send it
using a single ngx_http_output_filter() call.
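
For example, a rough sketch of that (the chunk[] array and nchunks count
are hypothetical placeholders for the already-generated data, not anything
from the thread):

 ngx_chain_t  *head = NULL, **last = &head;
 ngx_chain_t  *cl;
 ngx_buf_t    *b;
 ngx_uint_t    i;

 for (i = 0; i < nchunks; i++) {
     cl = ngx_alloc_chain_link(r->pool);
     b = ngx_calloc_buf(r->pool);

     if (cl == NULL || b == NULL) {
         return NGX_HTTP_INTERNAL_SERVER_ERROR;
     }

     b->pos = chunk[i].data;
     b->last = chunk[i].data + chunk[i].len;
     b->memory = 1;
     b->last_buf = (i == nchunks - 1);   /* mark only the final buffer */

     cl->buf = b;
     cl->next = NULL;

     *last = cl;                          /* append the link to the chain */
     last = &cl->next;
 }

 return ngx_http_output_filter(r, head);  /* one call for the whole chain */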


Maxim D.
http://nginx.org/

Thanks for your response…

On Sat 01 Mar 2014 03:41:24 AM IRST, Maxim D. wrote:

Hello!

You've tried to send the same chain with the same buffer multiple
times. After a buffer is sent for the first time, its pointers
are adjusted to indicate it was sent - b->pos is moved to b->last, and
the buffer's size becomes zero. A second attempt to send the same
buffer will expectedly trigger the "zero size buf" check.

Great! I tried:

 for (i = 1; i < 10000000; i++) {
     b->flush = (0 == (i % 100));
     rc = ngx_http_output_filter(r, &out);
     if (rc != NGX_OK) {
         ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, "bad rc, rc:%d", rc);
         return rc;
     }
     b->pos = ngx_hello_string;
     b->last = ngx_hello_string + sizeof(ngx_hello_string) - 1;
     b->memory = 1;
     b->last_buf = 0;
 }

which now fails with:

2014/03/01 12:23:39 [alert] 5022#0: *1 bad rc, rc:-2, client:
127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host:
"localhost:8080"
2014/03/01 12:23:39 [alert] 5022#0: *1 zero size buf in writer t:0 r:0
f:0 00000000 080C7431-080C7431 00000000 0-0, client: 127.0.0.1, server:
localhost, request: "GET / HTTP/1.1", host: "localhost:8080"
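
For reference, these rc values are nginx's core return codes, defined in
src/core/ngx_core.h:

 #define  NGX_OK          0
 #define  NGX_ERROR      -1   /* the rc:-1 from the first attempt */
 #define  NGX_AGAIN      -2   /* the rc:-2 seen here */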

And to resolve this I know I should follow the solution at [3].
[3] http://web.archiveorange.com/archive/v/yKUXMLzGBexXA3PccJa6

But is this a clean way to call 'ngx_http_output_filter' more than
once? (please see below to know why I have to call it multiple times)

FYI: the previous try did not fail on the SECOND attempt; it failed after
the client had successfully downloaded 936 bytes of repeated
"Hello, World!"s (72 attempts).

A trivial approach is to prepare the full output chain, and then send it
using a single ngx_http_output_filter() call.

The full output chain will usually be a long video, which is not a file
but is generated in memory on the fly. I have to send each chunk
as soon as it's ready, because the stream generation is time consuming
and the client CANNOT wait for all of it to be done. Suppose it's a 1 hour
video which has been generated dynamically and I would like to send
each minute as soon as it's ready, without waiting for the whole 1 hour
of transcoding. I'm aware of nginx's mp4 module, but it does not support
time-consuming, dynamically generated video in memory.

WHAT WILL BE THE CORRECT WAY TO DO THIS IN NGINX?

Thanks again!

On Sat 01 Mar 2014 03:01:42 PM IRST, Maxim D. wrote:

Hello!

The ngx_http_output_filter() function can be called more than
once, but usually it doesn't make sense - instead, one should
install r->write_event_handler and do subsequent calls once it's
possible to write additional data to the socket buffer. Working
with events isn't something trivial though.

Thanks a lot for write_event_handler.

Quoting Winnie-the-Pooh, “You needn’t shout so loud”.

Doing time-consuming transcoding in an nginx worker isn't correct in
any case, as it will block all connections in this worker process.
So you have to do the transcoding in some external process, and talk
to this process to get the transcoded data. This is basically what
the upstream module does (as used by proxy, fastcgi, etc.), and it
can be used as an example of "how to do this in nginx".

The transcoding is already being done by an external process, ffmpeg,
into an incomplete file. I just need to read chunks from the beginning
of this incomplete file, one by one, and send them to the client until
seeing a real EOF. I have the following plan for the nginx configuration:
location *.mp4 { mymodule; }
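
(As actual nginx configuration that would presumably be spelled as a regex
location, something like: location ~ \.mp4$ { mymodule; })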

So, did you mean that if two clients request x.mp4 and y.mp4 at the same
time, then one of them is blocked until the other one gets its complete
file?! I don't think so, since web servers usually spawn new threads.

I looked at './nginx-1.4.5/src/http/ngx_http_upstream.c' but it was too
complex for me to understand.

However, I found FastCGI simple enough to understand. So, do you
advise me to regularly read the ffmpeg output file in a FastCGI script
and have nginx fastcgi_pass requests to it?

Sorry for my questions… I think these are the last ones ;)

Thank you so much!

Hello!

On Sat, Mar 01, 2014 at 12:48:11PM +0330, Yasser Zamani wrote:

(please see below to know why I have to call it multiple times)
The ngx_http_output_filter() function can be called more than
once, but usually it doesn't make sense - instead, one should
install r->write_event_handler and do subsequent calls once it's
possible to write additional data to the socket buffer. Working
with events isn't something trivial though.
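
A very rough sketch of the shape this takes (the handler name is
hypothetical, and keeping the request alive and re-arming the write event
are deliberately left out):

 static void
 ngx_http_my_write_handler(ngx_http_request_t *r)
 {
     /* called again when the socket becomes writable: generate the
        next chunk here and push it with ngx_http_output_filter() */
 }

 /* in the content handler, instead of looping: */
 r->write_event_handler = ngx_http_my_write_handler;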

has been generated dynamically and I would like to send each minute as soon
as it's ready, without waiting for the whole 1 hour of transcoding. I'm
aware of nginx's mp4 module, but it does not support time-consuming,
dynamically generated video in memory.

WHAT WILL BE THE CORRECT WAY TO DO THIS IN NGINX?

Quoting Winnie-the-Pooh, “You needn’t shout so loud”.

Doing time-consuming transcoding in an nginx worker isn't correct in
any case, as it will block all connections in this worker process.
So you have to do the transcoding in some external process, and talk
to this process to get the transcoded data. This is basically what
the upstream module does (as used by proxy, fastcgi, etc.), and it
can be used as an example of "how to do this in nginx".


Maxim D.
http://nginx.org/

On Sat 01 Mar 2014 06:27:38 PM IRST, Yasser Zamani wrote:

think so, since web servers usually spawn new threads.

I looked at './nginx-1.4.5/src/http/ngx_http_upstream.c' but it was too
complex for me to understand.

However, I found FastCGI simple enough to understand. So, do you
advise me to regularly read the ffmpeg output file in a FastCGI script
and have nginx fastcgi_pass requests to it?

Thank you very much Maxim; good news: as you advised, I have finally
done it in a nice way via FastCGI:

  1. I wrote my code in a FastCGI structure with a lot of thanks to [4].
    [4] http://chriswu.me/blog/writing-hello-world-in-fcgi-with-c-plus-plus/
  2. I compiled and fcgi-spawn my executable on 127.0.0.1:8000 (see [4])
  3. I configured nginx to proxy requests to 127.0.0.1:8000 (see [4] and
    the config sketch after this list)
  4. I started my friend, nginx, and pointed the browser to
    localhost:8080.
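
Roughly, the nginx side of step 3 (a sketch assuming the FastCGI app from
[4] listens on 127.0.0.1:8000, inside the server block listening on 8080):

 location / {
     include       fastcgi_params;
     fastcgi_pass  127.0.0.1:8000;
 }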

RESULTS:

  1. Multiple simultaneous clients download the same file at a nicely
    balanced speed.
  2. There are no errors in the nginx error log.
  3. OK, the best result: we escaped from write_event_handler and
    NGX_AGAIN (=-2) :)

THE FASTCGI CODE: (for future researchers ;) )
#include <iostream>
#include "fcgio.h"

using namespace std;

int main(void) {
// Backup the stdio streambufs
streambuf * cin_streambuf = cin.rdbuf();
streambuf * cout_streambuf = cout.rdbuf();
streambuf * cerr_streambuf = cerr.rdbuf();

FCGX_Request request;

FCGX_Init();
FCGX_InitRequest(&request, 0, 0);

while (FCGX_Accept_r(&request) == 0) {
    fcgi_streambuf cin_fcgi_streambuf(request.in);
    fcgi_streambuf cout_fcgi_streambuf(request.out);
    fcgi_streambuf cerr_fcgi_streambuf(request.err);

    cin.rdbuf(&cin_fcgi_streambuf);
    cout.rdbuf(&cout_fcgi_streambuf);
    cerr.rdbuf(&cerr_fcgi_streambuf);

    cout << "Content-type: application/octet-stream\r\n";

    int i;
    for (i = 0; i < 1000000; i++)
        cout << "\r\n"
             << "<html>\n"
             << "  <head>\n"
             << "    <title>Hello, World!</title>\n"
             << "  </head>\n"
             << "  <body>\n"
             << "    <h1>Hello, World!</h1>\n"
             << "  </body>\n"
             << "</html>\n";

    // Note: the fcgi_streambuf destructor will auto flush
}

// restore stdio streambufs
cin.rdbuf(cin_streambuf);
cout.rdbuf(cout_streambuf);
cerr.rdbuf(cerr_streambuf);

return 0;

}