Communications Sequence with Upstream

I have been studying the nginx source code and have read through Emiller’s
guides, and I’m interested in doing something more general than what I have
learned about so far.

Say I have a location /url that I want to construct responses to based
on a sequence of communications with an upstream, host:port. To be
specific, the upstream would be a database, and the sequence of
communications would comprise commands and responses. A simple example
of what I’d like to do would be:

  1. client sends a message to /url meaning “if key foo exists,
    add 1 to bar”
  2. the module handling /url sends a first database command to check whether
    foo exists; if it doesn’t, it replies “failed” to the client; if it does,
  3. the module sends a database command to increment bar and replies “succeeded”.

The details are arbitrary, but I would think there are myriad examples
where you’d want a module to construct a reply based on back and forth
communications.

I’m hoping there’s some nice way of handling this case, but from what I’ve
gathered looking at the code, it seems oriented toward “1 message to
module, filter as desired, send to upstream, get response, filter
again, then response back to client.”

Thanks

On Sun, Apr 25, 2010 at 4:58 PM, Magnus L. [email protected] wrote:

Say I have a location /url that I want to construct responses to based
on a sequence of communications with an upstream, host:port. To be
specific, the upstream would be a database, and the sequence of
communications would comprise commands and responses.

We’ve been doing this kind of thing via ngx_echo’s echo_location
directive in the test suites of our ngx_memc and ngx_drizzle modules.
Here’s an example of doing sequential memcached command execution against
the ngx_memc upstream module in a single location /main:

location /main {
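    # each echo_location issues a subrequest to /memc in turn; the three
    # responses are concatenated, in order, into the /main response body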
    echo_location '/memc?cmd=flush_all';
    echo_location '/memc?key=foo&cmd=set&val=';
    echo_location '/memc?key=foo&cmd=get';
}
location /memc {
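    # helper location that maps the query-string arguments onto ngx_memc's variables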
    set $memc_cmd $arg_cmd;
    set $memc_key $arg_key;
    set $memc_value $arg_val;
    memc_pass 127.0.0.1:11211;
}

You can find more examples here:
http://github.com/agentzh/memc-nginx-module/blob/master/test/t/

And here’s an example of sequential SQL queries against a MySQL
backend:

location /test {
    # each echo_location runs one SQL statement via the /mysql location;
    # each bare "echo" emits a newline between the JSON responses
    echo_location /mysql "drop table if exists foo";
    echo;
    echo_location /mysql "create table foo (id serial, name char(10), age integer);";
    echo;
    echo_location /mysql "insert into foo (name, age) values ('', null);";
    echo;
    echo_location /mysql "insert into foo (name, age) values (null, 0);";
    echo;
    echo_location /mysql "select * from foo order by id";
    echo;
}
location /mysql {
    drizzle_pass backend;
    drizzle_module_header off;
    drizzle_query $query_string;
    rds_json on;
}

The responses look like this (one JSON snippet per SQL statement, in the same order):

{"errcode":0}
{"errcode":0}
{"errcode":0,"insert_id":1,"affected_rows":1}
{"errcode":0,"insert_id":2,"affected_rows":1}
[{"id":1,"name":"","age":null},{"id":2,"name":null,"age":0}]

More examples can be seen here:
http://github.com/agentzh/rds-json-nginx-module/blob/master/test/t/sanity.t

A simple example
of what I’d like to do would be:

  1. client sends a message to /url meaning “if key foo exists,
    add 1 to bar”
  2. the module handling /url sends a first database command to check whether
    foo exists; if it doesn’t, it replies “failed” to the client; if it does,
  3. the module sends a database command to increment bar and replies “succeeded”.

It reminds me of the “safe incr” sample in my slides for my talk on
nginx.conf scripting:

http://agentzh.org/misc/slides/nginx-conf-scripting/nginx-conf-scripting.html

(Use the arrow keys on your keyboard to switch pages there.)
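
For flavor, here is a much simplified sketch along those lines (not the
sample from the slides; it assumes ngx_memc, and the location and key
names are made up). Since memcached’s incr command already fails with
NOT_FOUND when the key is missing, the “does it exist?” check and the
increment collapse into a single upstream command in this trivial case:

# hypothetical helper location, far simpler than the general multi-step case
location = /incr {
    set $memc_cmd incr;        # memcached "incr" fails if the key does not exist
    set $memc_key $arg_key;    # key to bump, e.g. GET /incr?key=bar
    set $memc_value 1;         # delta passed to the incr command
    memc_pass 127.0.0.1:11211;
}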

The details are arbitrary, but I would think there are myriad examples
where you’d want a module to construct a reply based on back and forth
communications.

Check out the ngx_echo module to see if it fits your needs:

http://wiki.nginx.org/NginxHttpEchoModule

I’m hoping there’s some nice way of handling this case, but from what I’ve
gathered looking at the code, it seems oriented toward “1 message to
module, filter as desired, send to upstream, get response, filter
again, then response back to client.”

IMHO, Evan M.'s Guide is very limited in expressing nginx’s full
power :wink: It’s just a good old introductory guide anyway :slight_smile: Still many
thanks to Evan M. because the guide has helped so many people
including me :slight_smile:

I must add that the echo_location and echo_subrequest thingies in
ngx_echo are still limited in power :slight_smile: We’re currently releasing the
full power of nginx’s subrequests in our ngx_lua module. Soon we’ll be
able to do things like this on the Lua land:

 res = ngx.location.capture('/sub1?id=32')
 if res ... then
     res2 = ngx.location.capture('/sub2?id=56')
 else
     res2 = ngx.subrequest.capture(ngx.HTTP_POST, '/sub2?id=56',
                                   { body = 'hello' })
 end

And all the capture operations are transparent non-blocking I/O
operations based on nginx’s subrequests, thanks to Coco Lua’s C-level
coroutine support!

There will also be equivalents of the echo_location and
echo_location_async directives on the Lua land, i.e.,
ngx.location.echo and ngx.location.echo_async :slight_smile:

Stay tuned!
-agentzh

IMHO, Evan M.'s Guide is very limited in expressing nginx’s full
power :wink: It’s just a good old introductory guide anyway :slight_smile: Still many
thanks to Evan M. because the guide has helped so many people
including me :slight_smile:

me too :slight_smile:

And I checked out the ngx_echo module’s source code.
It’s very cool for a newbie :slight_smile:

On Sun, Apr 25, 2010 at 6:29 PM, zhicheng [email protected] wrote:

And I checked out the ngx_echo module’s source code.
It’s very cool for a newbie :slight_smile:

Personally I dislike the coding style in that module :wink: It’s my first
non-trivial nginx module anyway. I do have plans to update it with my
latest nginx knowledge. Oh, well…

ngx_lua will reuse a good part of the ngx_lua codebase :slight_smile:

Cheers,
-agentzh

P.S. The current ngx_lua repository can be found here:
https://github.com/openresty/lua-nginx-module
We’ve merely checked in the set_by_lua and set_by_lua_file directives so far.
But these directives are already awesome and fast enough to do lots of
funny things that were impossible before :wink: And…it’s not releasable
yet and still under active development.

agentzh Wrote:

I do have plans to update it with my latest nginx knowledge. Oh, well…

ngx_lua will reuse a good part of the ngx_lua codebase :slight_smile:

Yeah, maybe, but I just want to learn how nginx modules work,
and I’m not very proficient with Lua.
I’m planning to do some work with Lua, but not now :slight_smile:

these directives are already awesome and fast enough to do lots of
funny things that were impossible before :wink: And…it’s not releasable
yet and still under active development.



On Sun, Apr 25, 2010 at 6:43 PM, agentzh [email protected] wrote:

ngx_lua will reuse a good part of the ngx_lua codebase :slight_smile:

Sorry, I mean the ngx_echo codebase :stuck_out_tongue:

Cheers,
-agentzh

On Sun, Apr 25, 2010 at 2:35 AM, agentzh [email protected] wrote:

You can find more examples here:
http://github.com/agentzh/memc-nginx-module/blob/master/test/t/

Thanks. I looked over echo_location. It uses ngx_http_subrequest,
which I had gathered to be limited to concatenating responses
together, and it still appears that way to me. It appears there’s no
hook for you to deal with the data the subrequest produces, the way
the various filter hooks provided by the upstream support do.

This is the flow it appears to me you’re limited to:
the handler does HTTP GET x_1, response y_1 comes back, …, HTTP GET x_n,
response y_n comes back, and the message the handler ultimately produces
is the concatenation y_1…y_n.

I’m interested in a more general ability:
the handler sends an arbitrary message x_1, response y_1 comes back, …,
repeated n times; then the handler sends the response f(y_1, …, y_n) to the
original query, where f is an arbitrary function. Which is to say, I want
the responses passed through a filter of mine as they’re generated, not
sent straight to the client.

Maybe I’ll have to write my own code from scratch for dealing with this.

On Mon, Apr 26, 2010 at 6:33 PM, Magnus L. [email protected] wrote:

On Sun, Apr 25, 2010 at 2:35 AM, agentzh [email protected] wrote:

Thanks. I looked over echo_location. It uses ngx_http_subrequest,
which I had gathered to be limited to concatenating responses
together, and it still appears that way to me. It appears there’s no
hook for you to deal with the data the subrequest produces, the way
the various filter hooks provided by the upstream support do.

And that’s why I said “echo_location” does not expose the full power
of nginx subrequests. We can use filters to capture all the outputs of
subrequests, as is done in my fork of the ngx_eval module:

http://github.com/agentzh/nginx-eval-module
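
For instance, here is a minimal sketch of the idea (untested here; it
assumes the ngx_eval fork above together with ngx_memc and ngx_echo, and
the /check location and key name are made up):

location = /check {
    # run the memc subrequest and capture its response body into $res
    # instead of sending it straight downstream
    eval $res {
        set $memc_cmd get;
        set $memc_key foo;
        memc_pass 127.0.0.1:11211;
    }

    # $res can now be inspected, rewritten, or fed into further
    # subrequests before anything is returned to the client
    echo "foo is: $res";
}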

We’ll use this trick again in our ngx_lua’s ngx.subrequest.capture
implementation.

I’m interested in a more general ability:
the handler sends an arbitrary message x_1, response y_1 comes back, …,
repeated n times; then the handler sends the response f(y_1, …, y_n) to the
original query, where f is an arbitrary function. Which is to say, I want
the responses passed through a filter of mine as they’re generated, not
sent straight to the client.

Sure you can do that. That’s the way to go :wink:

Maybe I’ll have to write my own code from scratch for dealing with this.

Indeed :slight_smile:

BTW, I looked over the related code in ngx_echo which was written 5
months ago and found that it does not get r->main->count right for
nginx 0.8.x >= 0.8.11. I’ll rewrite the main dispatcher and the
related parts for echo_location, echo_subrequest, and echo_sleep in
the next day or two and make a new release.

I apologize for not updating the ngx_echo codebase in time. People who
have copied the current implementation of
echo_subrequest/echo_location should update their code according to
the next release of ngx_echo. I’m sorry :stuck_out_tongue:

Cheers,
-agentzh

On Mon, Apr 26, 2010 at 7:43 PM, agentzh [email protected] wrote:

the next release of ngx_echo. I’m sorry :stuck_out_tongue:

I’ve completed the Big Refactoring of the ngx_echo core in the git HEAD:

http://github.com/agentzh/echo-nginx-module/

I think the new implementation of echo_location, echo_subrequest,
echo_sleep, and echo_read_request_body finally fits reasonably well
with the nginx event model as well as Igor’s way of thinking :slight_smile:

I’ll make a new release (v0.29) to include this major update once
we’ve done more extensive testing.

This refactoring also paves the way for implementing sequential
subrequest support in ngx_lua and “subrequests in subrequests” support
in ngx_srcache :slight_smile:

If anybody experiences any regressions with the git HEAD of the
ngx_echo module, please let me know :slight_smile:

Thanks!
-agentzh