Forwarding Requests to Multiple Upstream Servers?

Hello,

I am wondering whether it’s possible from within an nginx module to
forward a request to multiple upstream servers… AFAIU nginx’s
existing infrastructure normally only supports sending a request to
one upstream server. In my case I’d like to update data that resides
on multiple servers if it’s a POST request, and I want to send back
only the response of the first upstream server.

Best,

Sven C. Koehler

Does nginx support forwarding specific request types (POST requests
only, for example) to a specific backend?

Handling file propagation from one “master” backend to the other nodes
would be easier than having it arrive at a random backend.

For the original poster, I would recommend using an NFS server to share
files between the various backends. Keep in mind that NFS will be
slower than a local file system, so I would advise keeping a file and
directory index in the database to avoid some basic problems. As far as
reading and writing files goes, I’ve had very few problems over the
years with a setup that uses NFS extensively.

Best regards,
Tit

On Wed, Jul 09, 2008 at 09:15:25AM +0200, Sven C. Koehler wrote:

I am wondering whether it’s possible from within an nginx module to
forward a request to multiple upstream servers… AFAIU nginx’s
existing infrastructure normally only supports sending a request to
one upstream server. In my case I’d like to update data that resides
on multiple servers if it’s a POST request, and I want to send back
only the response of the first upstream server.

No, nginx does not support it.

On Wed, Jul 09, 2008 at 11:08:58AM +0200, Tit P. wrote:

Does nginx support forwarding specific request types (POST requests
only, for example) to a specific backend?

AFAIU nginx lets one customize the hashing that determines which
upstream server receives a request, and the type of the request could
be used for hashing.

-S.

Hi there,

Somewhat off-topic for nginx, but I’m really interested to hear more
about this. I recently tried this out by sharing a PHP code base
between 2 application servers over NFS, with the NFS server on app
server 1 and the client on app server 2. I found that as soon as I
added app server 2 (the NFS client) into the nginx upstream list, the
load on the app server immediately and dramatically increased. I
assumed it was something to do with insufficiently aggressive NFS
caching and tried various tweaks to the mount and export options,
including the sync settings, but it didn’t really make any difference.
When I switched app server 2 to use local PHP files instead, the load
dropped immediately.

Our application was built on our PHP framework, which uses a lot of
include files, so we were using the APC opcode cache to minimise
interpreter time. I guessed these factors might have been big
contributors to the load, as PHP would have been checking modification
times on a lot of files, and then APC was probably doing more checks.

Do you (or anyone) have any thoughts on whether what I was doing just
isn’t well suited to NFS sharing, whether it was possibly related to
the caching stuff, or whether, if I’d been able to spend more time
tuning the NFS configuration, I might have been able to get lower CPU
usage? I do seem to hear of people doing what I wanted to do
(obviously it’s better to have the code in one place and not have to
update it in multiple places if possible), so I’m sure there must be
ways to get it to work; quite possibly my NFS configuration was naïve …

Thanks,
Igor

2008/7/9 Igor C. [email protected]:

I’m sure there must be ways to get it to work; quite possibly my NFS
configuration was naïve …

We are sharing static files over NFS between three nginx servers. The
server side is heavily tuned by default, because it is served from a
NAS (an EMC Celerra). On the client side it is very important to mount
the partitions in async mode and to use an MTU of 9000 (also known as
“Jumbo Frames”); also remember the other common tricks: noatime and
nodiratime, increasing rsize and wsize, etc.
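
For reference, a client-side mount combining those options might look
something like this (the server name and export path are just
placeholders):

  mount -t nfs -o async,noatime,nodiratime,rsize=32768,wsize=32768 \
      filer.example.com:/export/static /var/www/static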

Our performance is very impressive: each nginx moves over 30 MB/s with
a load average of less than one. Every server has 16 GB of RAM, and the
OS (Linux) caches everything in it. The OS is also tuned, especially
several TCP/IP kernel values and the maximum number of open file
descriptors, but those tricks relate to nginx and the clients.
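
Just for illustration, that kind of OS tuning might look like this
(example values only):

  # raise the system-wide and per-process file descriptor limits
  sysctl -w fs.file-max=200000
  ulimit -n 65535

  # a couple of common TCP tweaks for busy web servers
  sysctl -w net.core.somaxconn=4096
  sysctl -w net.ipv4.ip_local_port_range="1024 65000"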

Of course, note that working with PHP is very different from working
with simple static files. The “dynamic side” is served by Apache and
PHP, but it is also shared over NFS. In the future we will test nginx +
FastCGI.

Hope this helps.
BR.

Igor C. wrote:

[...] whether it was possibly related to the caching stuff, or
whether, if I’d been able to spend more time tuning the NFS
configuration, I might have been able to get lower CPU usage? I do
seem to hear of people doing what I wanted to do (obviously it’s
better to have the code in one place and not have to update it in
multiple places if possible), so I’m sure there must be ways to get it
to work; quite possibly my NFS configuration was naïve …
I’m not sure; we use an NFS-mounted directory shared between 4 web
servers.

Our exported directory sits on hardware RAID10 (4x 15000 RPM Ultra320
disks on an LSI MegaRAID with 512 MB of battery-backed cache) with the
following export options:

  /opt/shared/htdocs x.x.x.x/255.255.255.0(rw,no_subtree_check,sync)

Clients mount it (over bonded gigabit ethernet) with the options:

  rw,hard,intr,udp,noatime,rsize=8192,wsize=8192,async,auto

Each of the 4 web servers uses it as the document root for
apache+mod_perl, as well as for nginx (for the static files).

Latency is tolerable, though certainly higher than a local filesystem;
read performance is OK, but write performance is lousy, even if (or
perhaps because) you get NFS locking working properly.

Everything pretty much works, but directories with heavy read/write
activity suffer (i.e. the Apache::Session directories).
Some benchmarks (somewhat old, with plain apache+mod_perl only):
three runs of “ab -n 10000 -c 10”

Test 1: no sessions, local file
Requests/sec: 480.34 478.35 481.22

Test 2: no sessions, file served on nfs
Requests/sec: 475.26 472.72 472.41

Test 3: local sessions using tmpfs (AKA shm fs)
Requests/sec: 122.68 120.15 112.74

Test 4: sessions on nfs mounted device (ext3)
Requests/sec: 21.87 22.32 21.57

Test 5: sessions on nfs mounted ramdisk (ext2)
Requests/sec: 94.96 88.75 108.80

We use sessions on only a very small subset of the files served, so the
performance loss is acceptable.
YMMV, of course.

Thanks, Thanos and Andan, interesting stuff.

It would also be great to hear of anyone using the APC cache or
similar in this situation.

On 7/9/08, Sven C. Koehler [email protected] wrote:

I am wondering whether it’s possible from within an nginx module to
forward a request to multiple upstream servers…

You can multiply one request out to many servers using SSI.

Make the POST target location enable SSI and serve a document like this:
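
A minimal sketch, with /server1 and /server2 standing in as placeholder
location names:

  <!--# include virtual="/server1" -->
  <!--# include virtual="/server2" -->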

Then each /serverN location should have the appropriate proxy_pass
setting. You’ll probably want to mark them all “internal”. Only one of
the servers should probably actually produce output, unless it’s just
something like “Upload to serverN: complete”.
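
Roughly, the corresponding configuration could look like this (the
document root, location names and backend addresses are all
placeholders):

  # POST target: serves the SSI fan-out document sketched above
  location = /replicate {
      ssi          on;
      default_type text/html;   # so the SSI filter processes the file
      root         /var/www/ssi;
  }

  # one internal location per backend
  location /server1 {
      internal;
      proxy_pass http://10.0.0.1:8080;
  }

  location /server2 {
      internal;
      proxy_pass http://10.0.0.2:8080;
  }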

Note that these subrequests will be using the GET method (since
0.6.26), but by default still include the original request headers and
body. As long as your environment can deal with GETs having a request
body (PHP had some problems with it in my case), this should do what
you want, if you haven’t yet decided on a different replication
method.

On Wed, Jul 09, 2008 at 11:08:58AM +0200, Tit P. wrote:

Does nginx support forwarding specific request types (POST request only
for example), to a specific backend?

yes

if ($request_method = POST) {
    proxy_pass http://my_post_backend;
    break;
}

On Wed, Jul 09, 2008 at 09:50:56PM +0200, piespy wrote:

On 7/9/08, Sven C. Koehler [email protected] wrote:

I am wondering whether it’s possible from within an nginx module to
forward a request to multiple upstream servers…

You can multiply one request to go to many servers using SSI.

Make the POST target location enable SSI and serve a document like this:

Thanks for the idea, piespy! In my case I want to do this from inside a
module. I’ve read a little of nginx’s source code and saw that
ngx_http_ssi_include handles these SSI requests and then calls
ngx_http_subrequest, which I assume I could also call directly in my
module without actually using an SSI document…
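
A rough sketch of such a handler (the handler name and URIs here are
just placeholders, and error handling is minimal):

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/* locations to fan the request out to; each would have its own
   proxy_pass, as in the SSI variant above */
static ngx_str_t  fanout_uris[] = {
    ngx_string("/server1"),
    ngx_string("/server2")
};

static ngx_int_t
my_fanout_handler(ngx_http_request_t *r)
{
    ngx_uint_t           i;
    ngx_http_request_t  *sr;

    /* issue one subrequest per backend location */
    for (i = 0; i < sizeof(fanout_uris) / sizeof(fanout_uris[0]); i++) {
        if (ngx_http_subrequest(r, &fanout_uris[i], NULL, &sr, NULL, 0)
            != NGX_OK)
        {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }
    }

    /* the subrequests generate the response; nothing more to do here */
    return NGX_DONE;
}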

-S.

That’s a neat idea. The problem is how to deal with failures, etc.

You probably couldn’t treat it like an all-or-nothing transaction, I
assume…