I am trying to understand the reason for buffering the incoming request
body (client_body_buffer_size), whereby nginx either keeps the body in
memory or writes it to a file under client_body_temp_path, depending on
the request size.

What are the performance advantages and/or technical challenges of this
approach compared to piping the body directly to the upstream server,
even for smaller requests, i.e. the unbuffered request proxying offered
by Tengine (http://tengine.taobao.org/)?

Nginx allows disabling buffering of the upstream response, in which case
the response is sent to the client synchronously as it is being received;
why isn't the opposite possible? What are the technical
challenges/pros/cons of writing to disk (client_body_temp_path) or to an
in-memory buffer (client_body_buffer_size)?
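
For concreteness, these are the directives in question, with purely
illustrative values (the sizes, paths, and the "backend" upstream name
below are examples, not recommendations):

    location /upload {
        # Request body buffering: bodies up to this size stay in memory,
        # larger ones are spooled to a file under client_body_temp_path
        # before nginx starts sending anything to the upstream.
        client_body_buffer_size 16k;
        client_body_temp_path   /var/cache/nginx/client_temp;

        # The response direction CAN be made unbuffered: the upstream's
        # reply is relayed to the client as it is received.
        proxy_buffering off;

        proxy_pass http://backend;
    }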
On Wednesday 19 November 2014 16:56:41 attozk wrote:
> why isn't the opposite possible? What are the technical
> challenges/pros/cons of writing to disk (client_body_temp_path) or to an
> in-memory buffer (client_body_buffer_size)?
Clients are usually slow, and the backend's resources are usually
expensive. So nginx tries not to keep the backend busy while a client is
slowly uploading data.

It's not an easy feature to implement and requires significant changes to
nginx internals. Part of this work was done when chunked transfer
encoding for requests was introduced.
wbr, Valentin V. Bartenev
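
The unbuffered direction asked about above is what Tengine implements,
and what later nginx releases expose as the proxy_request_buffering
directive. A minimal sketch, assuming an nginx version that supports it
(the location and upstream name are just examples):

    location /upload {
        # Stream the request body to the upstream as it arrives from the
        # client instead of buffering it to memory or disk first. This
        # ties up an upstream connection for the whole duration of a
        # possibly slow upload, which is the trade-off discussed above.
        proxy_request_buffering off;

        # Needed so the body can be forwarded with chunked transfer
        # encoding when its length is not known in advance.
        proxy_http_version 1.1;

        proxy_pass http://backend;
    }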