Aggregating multiple upstream responses before finalizing the downstream request

Hello,
I am working on an nginx module. The simplest description of what it has
to do can be summed up as a “voting based fault tolerance algorithm”.
Example:
A) the client requests that a URL be tested. It does so by simply
requesting the URL with an HTTP GET
B) the headend nginx server (which has my module in it…) receives the
request and passes the request on to three or more upstream servers…
upstream1, upstream2, and upstream3
C1) upstream1 returns “ABCD” as a response
C2) upstream2 returns “ABCD” as a response also
C3) upstream3 returns “BLAHBLAH-I-AM-BROKEN”
D) during this time, the headend server is waiting for all three
upstream responses
E) when it has received all of them or timed out, it compares the
three responses
F1) If all three responses are the same, it simply passes the result on
to the downstream client and closes the request
F2) If, as in this example, one or more responses do not match, then my
module does some processing and composes an appropriate response to the
downstream client and closes the request
I humbly request your collective thoughts on the best way to accomplish
this.
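To make step F concrete, here is a minimal sketch of the comparison
logic in plain C (no nginx types; the function name and layout are
hypothetical, just to illustrate the majority vote):

    #include <stddef.h>
    #include <string.h>

    /* Returns the index of a response that at least two of the three
     * upstreams agree on, or -1 if all three responses differ. */
    static int
    majority_vote(const char *resp[3], const size_t len[3])
    {
        int  i, j;

        for (i = 0; i < 3; i++) {
            for (j = i + 1; j < 3; j++) {
                if (len[i] == len[j]
                    && memcmp(resp[i], resp[j], len[i]) == 0)
                {
                    return i;   /* resp[i] is the agreed answer */
                }
            }
        }

        return -1;              /* no two responses match */
    }

In the example above, majority_vote() would return 0 (upstream1 and
upstream2 agree on “ABCD”), and the module would pass “ABCD” on to the
downstream client while flagging upstream3.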
It appears that round-robin upstream requests are already possible, so
I could send the request to each upstream in turn sequentially
(simulating a failure condition on one upstream and moving on to the
next) - but I would prefer to create three or more -->parallel<--
requests. The upstream entries in the header files imply to me that
the nginx design is one request/one upstream at a time…
Question: do you think that multiple parallel upstream requests are
possible as a module and not a branch of the nginx source?
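For what it’s worth, here is a rough, untested sketch of how I imagine
this could work with subrequests rather than a source branch: fan the
request out to three internal locations (each of which would
proxy_pass to one upstream) and count completions in a per-request
context. The location names, context type, and finalization details
are assumptions on my part, and I am not sure the reference counting
is right across nginx versions:

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    typedef struct {
        ngx_uint_t  pending;          /* subrequests still in flight */
    } vote_ctx_t;

    /* Called by nginx when one subrequest finishes. */
    static ngx_int_t
    vote_done(ngx_http_request_t *r, void *data, ngx_int_t rc)
    {
        vote_ctx_t  *ctx = data;

        if (--ctx->pending == 0) {
            /* All upstreams have answered (or failed); compare the
             * buffered bodies here and finalize the parent request. */
        }

        return rc;
    }

    static ngx_int_t
    vote_handler(ngx_http_request_t *r)
    {
        ngx_uint_t                   i;
        vote_ctx_t                  *ctx;
        ngx_http_request_t          *sr;
        ngx_http_post_subrequest_t  *ps;

        static ngx_str_t  uris[3] = {
            ngx_string("/upstream1"),
            ngx_string("/upstream2"),
            ngx_string("/upstream3"),
        };

        ctx = ngx_pcalloc(r->pool, sizeof(vote_ctx_t));
        if (ctx == NULL) {
            return NGX_ERROR;
        }
        ctx->pending = 3;

        for (i = 0; i < 3; i++) {
            ps = ngx_palloc(r->pool, sizeof(ngx_http_post_subrequest_t));
            if (ps == NULL) {
                return NGX_ERROR;
            }
            ps->handler = vote_done;
            ps->data = ctx;

            /* NGX_HTTP_SUBREQUEST_IN_MEMORY should keep each body in
             * the subrequest's upstream buffer instead of sending it
             * downstream. */
            if (ngx_http_subrequest(r, &uris[i], NULL, &sr, ps,
                                    NGX_HTTP_SUBREQUEST_IN_MEMORY)
                != NGX_OK)
            {
                return NGX_ERROR;
            }
        }

        return NGX_DONE;   /* the module finalizes the request itself */
    }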
The design I am working on also has to create an arbitrary upstream
message, pretty much a GET or a POST, while doing the processing… This
upstream message does not have to be blocking, nor does it need any
special processing.
Question: Is there a mechanism by which I can inject a request into the
nginx queue as if it came from a downstream client?
Example: I have a module at location /test/something and location
/some/data - the latter being an upstream server. If my module has to
POST to /some/data, it would be nice to simply inject the POST request
back into the server and let the server manage finding /some/data.
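Something like the following is what I have in mind - an untested
sketch that re-enters the server with a subrequest so nginx itself
resolves /some/data. As far as I can tell, subrequests inherit the
parent’s method, so it is overridden here; attaching a request body
(sr->request_body) is omitted:

    /* Inject an internal POST to /some/data via a subrequest. */
    static ngx_int_t
    inject_post(ngx_http_request_t *r)
    {
        ngx_http_request_t  *sr;
        ngx_str_t            uri = ngx_string("/some/data");

        if (ngx_http_subrequest(r, &uri, NULL, &sr, NULL, 0) != NGX_OK) {
            return NGX_ERROR;
        }

        sr->method = NGX_HTTP_POST;            /* override inherited GET */
        ngx_str_set(&sr->method_name, "POST");

        return NGX_OK;
    }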
Thanks for any comments on my endeavor. I have studied Evan M.'s
guide to module development and have followed the forum intently when
internals have been discussed. I am still learning nginx internals
though…
Question: Is there any documentation, or better yet, flow graphs,
describing how requests are handled?
Thanks again,
Sincerely,
Daniel Chapiesky

dchapiesky@… <dchapiesky@…> writes:

I am working on an nginx module. The simplest description of what it has to do
can be summed up as a “voting based fault tolerance algorithm”.

B) the headend nginx server (which has my module in it…) receives the
request and passes the request on to three or more upstream servers…
upstream1, upstream2, and upstream3

Sorry if I’m not interpreting your idea correctly, but do you really
mean you want to issue the same request to each of the upstream
servers in parallel?

D) during this time, the headend server is waiting for all three upstream
responses

Do you mean actively waiting, or waiting for an event on the upstream
connection? If you have to actively wait for all of the upstream
servers to reply, it will be catastrophic: you have already divided
the processing power of the upstream servers by 3, and if one server
fails, extra latency will be added to every request due to error
processing!

Question: do you think that multiple parallel upstream requests are possible
as a module and not a branch of the nginx source?

What would be interesting, IMHO, would be a heartbeat protocol
checking upstream health, maybe as a thread inside nginx that
manipulates upstream status (it’s not so easy to do, because there can
be multiple workers and critical-section issues). But sadly, no
lightweight heartbeat protocol has been defined and standardized.

Question: Is there a mechanism by which I can inject a request into the nginx
queue as if it came from a downstream client?

Yes, I’ve done it, but for now it’s Linux [recent kernel] only, using
signalfd() and a special event module based on the epoll one, with
circular buffers, messaging, and dedicated worker threads.

Thanks for any comments on my endeavor. I have studied Evan M.'s guide to
module development and have followed the forum intently when internals
have been
discussed. I am still learning nginx internals though…

Yes, me too! My advice: take your time!

Question: Is there any documentation, or better yet, flow graphs, describing how
requests are handled?

That’s why we all ended up on this list ;-)

Best regards