Internals: how do I send a large file to the client?

Hi,

I’m writing a filter module which expects the backend to send XML with
information about files that have to be concatenated and sent to the
client.

One way to send a file is to ngx_read_file() it into a buffer allocated
from the heap (pool) and push that buffer onto the chain. However, I
obviously can’t allocate ~10G on the heap; I have to send the file chunk
by chunk. How do I perform that kind of I/O?
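
For small files that approach is simple enough. A minimal sketch,
assuming file is an ngx_file_t that has already been opened (fd, name
and log set) and size is its length:

    ngx_buf_t   *b;
    ngx_chain_t  out;
    ssize_t      n;

    /* allocate the whole file's worth of memory from the request pool */
    b = ngx_create_temp_buf(r->pool, size);
    if (b == NULL) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    /* read the entire file in one shot (a real module would also
       handle short reads) */
    n = ngx_read_file(&file, b->pos, size, 0);
    if (n == NGX_ERROR) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    b->last = b->pos + n;
    b->last_buf = 1;

    out.buf = b;
    out.next = NULL;

    return ngx_http_output_filter(r, &out);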

Regards.

Do you know the split command in Linux? You can use it to split the
file, then send the parts; afterwards you can use the join command to
join the files.

I know those commands, but the question was about Nginx’s internals. I
thought somebody would suggest a pseudo-code snippet similar to the
following:

    ngx_buf_t             *b;
    ngx_chain_t           *cl, *out, **last_out;
    ngx_str_t             *name;
    ngx_uint_t             i;
    off_t                  length = 0;
    ngx_open_file_info_t   of;

    out = NULL;
    last_out = &out;

    /* "files" stands for whatever container the backend XML was
       parsed into; assume an ngx_array_t of ngx_str_t here */

    for (i = 0; i < files->nelts; i++) {
        name = &((ngx_str_t *) files->elts)[i];

        ngx_memzero(&of, sizeof(ngx_open_file_info_t));
        of.directio = NGX_OPEN_FILE_DIRECTIO_OFF;

        /* on failure, of.err holds the errno */
        if (ngx_open_cached_file(ccf->open_file_cache, name, &of, r->pool)
            != NGX_OK)
        {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }

        length += of.size;

        b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
        if (b == NULL) {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }

        b->file = ngx_pcalloc(r->pool, sizeof(ngx_file_t));
        if (b->file == NULL) {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }

        /* a file-backed buffer: nginx performs the disk I/O itself
           while writing the chain out, so the ~10G is never read
           into memory at once */
        b->file_pos = 0;
        b->file_last = of.size;
        b->in_file = b->file_last ? 1 : 0;
        b->file->fd = of.fd;
        b->file->name = *name;
        b->file->log = r->connection->log;
        b->file->directio = of.is_directio;

        cl = ngx_alloc_chain_link(r->pool);
        if (cl == NULL) {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }

        cl->buf = b;

        *last_out = cl;
        last_out = &cl->next;
        cl->next = NULL;

        ...
    }
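
To actually deliver it, I suppose the tail would look roughly like this,
assuming the code runs as a content handler (in a body filter one would
pass the chain to the next body filter instead of sending headers):

    ngx_int_t  rc;

    /* mark the end of the response body on the last buffer */
    cl->buf->last_buf = 1;

    r->headers_out.status = NGX_HTTP_OK;
    r->headers_out.content_length_n = length;

    rc = ngx_http_send_header(r);

    if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
        return rc;
    }

    /* nginx's event loop drains the chain to the client in
       nonblocking chunks (or via sendfile) */
    return ngx_http_output_filter(r, out);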

I’ve only found ngx_open_cached_file and ngx_alloc_chain_link recently.
I see there should be a way to chain open files without actually
performing the I/O myself, but I still have no clear understanding of
how it works and how one should use the cached files’ API.
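
From skimming ngx_http_static_module, the usual pattern seems to be:
zero an ngx_open_file_info_t, fill it from the core location conf, and
let the open-file cache own the descriptor (this is just my reading of
the source, so treat it as a sketch; name is the ngx_str_t path of the
file):

    ngx_open_file_info_t       of;
    ngx_http_core_loc_conf_t  *clcf;

    clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);

    ngx_memzero(&of, sizeof(ngx_open_file_info_t));

    of.read_ahead = clcf->read_ahead;
    of.directio = clcf->directio;
    of.valid = clcf->open_file_cache_valid;
    of.min_uses = clcf->open_file_cache_min_uses;
    of.errors = clcf->open_file_cache_errors;
    of.events = clcf->open_file_cache_events;

    /* the cache keeps the fd open across requests; the entry is
       released through a pool cleanup handler, so the module never
       closes the fd explicitly */
    if (ngx_open_cached_file(clcf->open_file_cache, &name, &of, r->pool)
        != NGX_OK)
    {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }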
