HttpLuaModule: create asynchronous subrequests

Hi,

With the help of HttpLuaModule I’m trying to duplicate every request
into two upstreams. Here is my configuration:

site.conf

upstream prod_upstream {
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
}

upstream dev_upstream {
    server 127.0.0.1:6000;
}

server {
    location /prod {
        proxy_pass http://prod_upstream/;
    }

    location /dev {
        proxy_pass http://dev_upstream/;
    }

    location / {
        error_log "/tmp/error.log";
        content_by_lua_file /etc/nginx/luas/duplicator.lua;
    }
}

duplicator.lua

ngx.req.read_body()
local arguments = ngx.var.args

local r1, r2 = ngx.location.capture_multi {
    { "/prod/", { args = arguments, share_all_vars = true } },
    { "/dev/", { args = arguments, share_all_vars = true } },
}

ngx.print(r1.body)

So far so good: all traffic destined for prod is duplicated to dev, and
ONLY the prod response is forwarded to the client.

From the ngx.location.capture_multi documentation:

" … This function will not return until all the subrequests terminate
…"

That is where my problem starts:

I’m working with a real-time distributed system, so the response time
can’t be longer than 50 ms. Dev is definitely slower, so I can’t wait
for the dev response. Also, imagine /dev is broken: the timeout alone
would take too long.
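
One way to at least bound the wait when /dev is broken would be to set
aggressive proxy timeouts on that location (a sketch only; the values
below are illustrative, not tested against my system):

```
location /dev {
    proxy_pass http://dev_upstream/;
    # Fail fast if dev is down or slow; nginx accepts ms units.
    proxy_connect_timeout 10ms;
    proxy_send_timeout    50ms;
    proxy_read_timeout    50ms;
}
```

But that only caps the delay; it doesn’t remove it.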

I’m thinking about making /dev calls in an asynchronous way if possible.

My second approach:

duplicator_v2.lua:

ngx.req.read_body()
local arguments = ngx.var.args

local r1 = ngx.location.capture("/prod/", { args = arguments, share_all_vars = true })
ngx.print(r1.body)

local r2 = ngx.location.capture("/dev/", { args = arguments, share_all_vars = true })

From the documentation of ngx.print:

" … This is an asynchronous call and will return immediately without
waiting for all the data to be written into the system send buffer …"

I was hoping that splitting the captures and using ngx.print before the
second call would do what I need: answer the client and then continue
by calling /dev. But that doesn’t happen; it works exactly like the
first approach.

My final test was this ugly configuration:

duplicator_v3.lua:

ngx.req.read_body()
local arguments = ngx.var.args

local r1 = ngx.location.capture("/prod/", { args = arguments, share_all_vars = true })
ngx.print(r1.body)
ngx.eof()

local r2 = ngx.location.capture("/dev/", { args = arguments, share_all_vars = true })

Here, the prod response is sent immediately, as I want, and dev
receives the traffic, but the connection is closed and I get a Broken
Pipe (which makes sense).

Is there a way to make capture calls in an asynchronous mode, or to
achieve this some other way?

Thank you in advance,

Hello!

On Wed, Nov 12, 2014 at 12:20 PM, Guido Accardo wrote:

Here, prod response is sent immediately as I want and dev receives the
traffic but the connection is closed the I got a Broken Pipe (which makes
sense).

For this error, maybe you should configure

proxy_ignore_client_abort on;

for your dev location with proxy_pass.
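
For instance (a minimal sketch; adapt it to your actual /dev location):

```
location /dev {
    proxy_pass http://dev_upstream/;
    # Keep the upstream request running even when the
    # downstream (client) connection has already been closed:
    proxy_ignore_client_abort on;
}
```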

But using ngx.eof() for this will still introduce extra delay for the
subsequent requests on the current downstream connection when HTTP
keepalive or HTTP pipelining is enabled.

Is there a way to make capture calls in an asynchronous mode, or to
achieve this some other way?

The recommended way is to use cosocket-based http library like Brian
Akins’s lua-resty-http-simple [1] (instead of subrequests and
ngx_proxy) in a 0-delay timer created by ngx.timer.at() [2] (instead
of the ngx.eof hack).
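
A rough sketch of that approach (untested; it assumes
lua-resty-http-simple is on your lua_package_path and that dev listens
on 127.0.0.1:6000 as in your config):

```lua
-- content_by_lua_file /etc/nginx/luas/duplicator.lua
local http = require "resty.http.simple"

ngx.req.read_body()
local args = ngx.var.args
-- May be nil if the body was buffered to a temp file; handling
-- that case is omitted from this sketch.
local body = ngx.req.get_body_data()

-- Fire-and-forget copy to dev in a 0-delay timer. The timer handler
-- runs detached from the current request, so the client never waits
-- for it.
local function mirror(premature, args, body)
    if premature then return end
    local res, err = http.request("127.0.0.1", 6000, {
        path    = "/",
        query   = args,
        method  = "POST",
        body    = body,
        timeout = 500,  -- ms; illustrative
    })
    if not res then
        ngx.log(ngx.ERR, "mirror to dev failed: ", err)
    end
end

local ok, err = ngx.timer.at(0, mirror, args, body)
if not ok then
    ngx.log(ngx.ERR, "failed to create timer: ", err)
end

-- Serve the client from prod only, as before.
local r1 = ngx.location.capture("/prod/", { args = args, share_all_vars = true })
ngx.print(r1.body)
```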

BTW, it’s recommended to join the openresty-en mailing list [3] for
such ngx_lua-related questions; that way you may get more responses,
and get them sooner.

For your very use case, maybe a lower-level tool like tcpcopy [4] is a
better fit? Not sure though :)

Regards,
-agentzh

[1] https://github.com/bakins/lua-resty-http-simple
[2] https://github.com/openresty/lua-nginx-module
[3] https://groups.google.com/group/openresty-en
[4] https://github.com/session-replay-tools/tcpcopy

Hello,

On Thu, Nov 13, 2014 at 5:24 PM, Yichun Z. (agentzh) [email protected] wrote:

proxy_ignore_client_abort on;

for your dev location with proxy_pass.

But using ngx.eof() for this will still introduce extra delay for the
subsequent requests on the current downstream connection when HTTP
keepalive or HTTP pipelining is enabled.

Using "proxy_ignore_client_abort on;" as you suggested worked for me.

From the doc of proxy_ignore_client_abort:

" … Determines whether the connection with a proxied server should be
closed when a client closes the connection without waiting for a
response …"

So basically I’m discarding dev’s responses?

of the ngx.eof hack).

I’m going for this solution; the ugly hack will be my second option,
but this seems great.

BTW, it’s recommended to join the openresty-en mailing list [3] for
such ngx_lua-related questions; that way you may get more responses,
and get them sooner.

I’ll definitely do it!

For your very use case, maybe lower level tools like tcpcopy [4] is a
better fit? Not sure though :)

I gave it a try, along with the duplicator npm package and traffic
mirroring with the iptables TEE module, but I’ll go with the library
you suggested anyway.

Thank you,

Best Regards,

Hello!

On Fri, Nov 14, 2014 at 11:20 AM, Guido Accardo wrote:

From the doc of proxy_ignore_client_abort:

" … Determines whether the connection with a proxied server should be
closed when a client closes the connection without waiting for a response
…"

So basically I’m discarding dev’s responses?

Well, you’re just discarding client connection closing events while
running your dev request. Otherwise, if the client closes the
connection while your dev request is still running (which is very
likely when HTTP keepalive is not in use and you use ngx.eof), your
dev upstream connection will also be aborted prematurely.

Regards,
-agentzh