Upstream setup with proxy and fastcgi?

Hi,

I plan to do a basic load balancer setup and want to understand the
differences between fastcgi_pass and proxy_pass.

upstream proxy {
server 1.localserverip:8000;
server 2.localserverip:8000;
server 3.localserverip:8000;
server 4.localserverip:8000;
}

upstream php {
server 1.localserverip:9000;
server 2.localserverip:9000;
server 3.localserverip:9000;
server 4.localserverip:9000;
}

server {
listen publicserverip:80;
server_name example.com;
root /var/www/html;
index index.php index.html;

location / {
    proxy_pass http://proxy;
    try_files $uri $uri/ /index.php;
}

location ~ \.php$ {
    fastcgi_pass php;
    include fastcgi.conf;
}

}

My goal is to have 2 servers as load balancers (1.localserverip and
5.localserverip, set up with heartbeat) and direct the traffic through all
nodes using Nginx. The site content will be identical on all servers.
Can you let me know if using fastcgi_pass will do the same thing as
proxy_pass, or do I have to set everything up like in the above
configuration? I will serve static and php files through the backend.
Following this logic, it seems I need to set both fastcgi_pass and
proxy_pass in the config. Could you give me a quick example so I
understand how everything works?

Thanks

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,191346,191346#msg-191346

Igor? Can anyone help me with this configuration?

Thanks

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,191346,191782#msg-191782

On Thu, Apr 14, 2011 at 02:15:13PM -0400, TECK wrote:


Sorry, but I do not understand what you want to get.
I can only say that this location:

 location / {
     proxy_pass http://proxy;
     try_files $uri $uri/ /index.php;
 }

will not work: “try_files” looks up local files, while “proxy_pass”
passes the request to another server.
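
A minimal sketch of the distinction (hypothetical paths; the upstream name is the one defined earlier in the thread): “try_files” tests paths on the local disk, so it belongs on a server that actually holds the files, while “proxy_pass” hands the whole request to another server.

```nginx
# On a server that has the files on its own disk:
location / {
    root /var/www/html;               # try_files checks under this root
    try_files $uri $uri/ /index.php;  # local lookups, then a fallback URI
}

# On a pure balancer with no local copy of the files:
location / {
    proxy_pass http://proxy;          # forward the request upstream
}
```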


Igor S.

Thanks Igor. The goal is to have Nginx do the load balancing for
multiple servers that serve the same static content and php files.
Basically, I install Nginx on only one server, which will be the load
balancer, and PHP on all 4 servers, which will contain exactly the same
content (static files and php files):

upstream proxy {
server 1.localserverip:8000;
server 2.localserverip:8000;
server 3.localserverip:8000;
server 4.localserverip:8000;
}

upstream php {
server 1.localserverip:9000;
server 2.localserverip:9000;
server 3.localserverip:9000;
server 4.localserverip:9000;
}

Can you give an example of how the configuration should look while
serving php and static files at the same time?
In other words, I want to be able to output index.php from any of the
servers, as well as any static file. The Nginx balancer should be able
to do this, right?

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,191346,192087#msg-192087

In theory, all I have to do is this:

location / {
proxy_pass http://proxy;
}

location ~ \.php$ {
fastcgi_pass php;
include fastcgi.conf;
}

However, I have a script that is processed like this in a normal
single-server setup:
location /forum/ {
try_files $uri $uri/ /forum/data.php$args;
}

How would I make the try_files work with the load balancer scheme?

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,191346,192112#msg-192112

On Mon, Apr 18, 2011 at 12:30:00PM -0400, TECK wrote:

Hi there,

I’ve re-read the thread, and I’m not sure what you are trying to
do. Hopefully the following will be useful; if not, if you could include
a clear picture of what services should be running and how they should
interact, it might make it easier for the next person to offer help.

In theory, all I have to do is this:

location / {
proxy_pass http://proxy;
}

If your backend servers are nginx, each configured to talk to a fastcgi
server, then the above should be pretty much all you need.

location ~ \.php$ {
fastcgi_pass php;
include fastcgi.conf;
}

That will be useful if your frontend server (= load balancer) should
split the .php urls separately from the other urls.

However, I have a script that is processed like this in a normal
single-server setup:
location /forum/ {
try_files $uri $uri/ /forum/data.php$args;
}

How would I make the try_files work with the load balancer scheme?

I think you’ve said that your front-end doesn’t have files, so try_files
can’t work.

Perhaps using

proxy_intercept_errors on;

with

error_page 404 = /forum/data.php$args;

inside the “location /” block would work in a similar way? (Although
it might need to be /forum/data.php?$args, for the php-using location
to match.)
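
Put together, the balancer side of Francis's suggestion might look like the sketch below (assuming the "php" upstream from earlier in the thread; whether $args or ?$args is needed depends on how the php location matches):

```nginx
location / {
    proxy_pass http://proxy;
    proxy_intercept_errors on;               # let nginx handle backend errors
    error_page 404 = /forum/data.php?$args;  # retry 404s via the forum script
}

location ~ \.php$ {
    fastcgi_pass php;
    include fastcgi.conf;
}
```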

Good luck with it,

f

Francis D. [email protected]

On Mon, Apr 18, 2011 at 01:45:54PM -0400, TECK wrote:

It’s better to place static files on balancers and to use the following
configuration:

location / {
root /var/www/html;
}

location ~ \.php$ {
fastcgi_pass php;
}

There is no sense in transferring static files from the other nodes;
the balancer can serve them from its local disk.
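
Spelled out as a fuller sketch of Igor's suggestion for the balancer (it assumes the "php" upstream defined earlier, and that every php-fpm node has the same files under /var/www/html, since fastcgi.conf passes the balancer's $document_root to the backend):

```nginx
root /var/www/html;
index index.php index.html;

# Static files come straight from the balancer's local disk.
location / {
    try_files $uri $uri/ /index.php;
}

# Only .php requests are distributed across the php-fpm nodes.
location ~ \.php$ {
    fastcgi_pass php;
    include fastcgi.conf;
}
```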


Igor S.

Thanks for the reply, Francis.
Let me detail the scenario further, so you understand what I am trying to do.

[ 1.localserverip (main load balancer, with nginx and php-fpm installed) ]
|
---- + [ 2.localserverip (node with php-fpm installed) ]
|
---- + [ 3.localserverip (node with php-fpm installed) ]
|
---- + [ 4.localserverip (node with php-fpm installed) ]

A better graphical example can be viewed in this image:
http://farm6.static.flickr.com/5023/5631528237_9153f0d4a5_o.gif

Basically, 1.localserverip is the site entrance, where nginx load
balancing is done.
Each server (including 1.localserverip) will have exactly the same
content in the /var/www/html directory.
The served content consists of static files (images, css, js, etc.) and
php files.
So if a user accesses the site, 1.localserverip will pick a node and
serve the content based on the load-balancing setup:
upstream proxy {
server 1.localserverip:8000;
server 2.localserverip:8000;
server 3.localserverip:8000;
server 4.localserverip:8000;
}

My goal is to be able to load balance not only the proxy but also the
fastcgi.
What would be the configuration like?

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,191346,192130#msg-192130

On Mon, Apr 18, 2011 at 02:25:51PM -0400, TECK wrote:


location ~ \.php$ {
fastcgi_pass php;
include fastcgi.conf;
}

Yes.


Igor S.

Thanks Igor.
With your suggested configuration, can I use try_files as listed below?

location / {
root /var/www/html;
try_files $uri $uri/ /index.php;
}

location /forum/ {
try_files $uri $uri/ /forum/data.php$args;
}

location ~ \.php$ {
fastcgi_pass php;
include fastcgi.conf;
}

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,191346,192139#msg-192139

Thanks Francis. My goal is to install Nginx only on the load balancer.
I provided the graphical picture for sanity reasons, in case the text
description wasn't clear.

I will not use dual load balancers, like in the picture. At least not
for now.
Presuming that I want to add more servers designated for nginx load
balancing in the future, I will then add the proxy configuration.
What will happen then with my current php-related configuration?
location / {
proxy_pass http://proxy;
}

location /forum/ {
try_files $uri $uri/ /forum/data.php$args;
}

location ~ \.php$ {
fastcgi_pass php;
include fastcgi.conf;
}

Can you give me a real example? "location [php urls]" is not really
clear to me.

Thanks.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,191346,192148#msg-192148

On Mon, Apr 18, 2011 at 01:45:54PM -0400, TECK wrote:

Hi there,

http://farm6.static.flickr.com/5023/5631528237_9153f0d4a5_o.gif
There’s still something of a lack of explicitness here. The words show
one nginx (load balancing) http server plus four php-fpm fastcgi
servers. The picture shows the load balancer plus multiple web servers.

Putting them together, what I think you have is:

one nginx http server which does the load balancing, plus

many nginx http servers which do file serving, plus

many php-fpm fastcgi servers which do php processing,

where each fastcgi server is on the same machine as a file-serving
nginx; and also one of the file-serving nginx servers is on the same
machine as the load-balancing nginx server.

Basically, 1.localserverip is the site entrance, where nginx load
balancing is done.
Each server (including 1.localserverip) will have exactly the same
content in the /var/www/html directory.
The served content consists of static files (images, css, js, etc.) and
php files.

Igor mentions “don’t load-balance static stuff”.

If you want to do it anyway, for failover reasons for example, then the
load-balancing nginx server would have something like

location / {
proxy_pass http://proxy;
}

and the file-serving nginx servers would just have something like

root /var/www/html;

My goal is to be able to load balance not only the proxy but also the
fastcgi.
What would be the configuration like?

There are two different ways you can do this.

If you decide “load balancer shall know about fastcgi”, then you will
have something like

location [php urls] {
fastcgi_pass php;
}

where [php urls] is the appropriate string or regex. That will be on
the load-balancing nginx server; the file-serving nginx servers won’t
have any mention of fastcgi.

Alternatively, you could let the file-serving nginxen handle fastcgi –
in that case, the load balancer would have

location / {
proxy_pass http://proxy;
}

and the file servers would have

location [php urls] {
fastcgi_pass this_php_server;
}

It’s an extra level of proxying for the fastcgi stuff, and it means that
one http server is pretty much tied to one fastcgi server; but since the
fastcgi server is accessing the same filesystem as the http server, the
“-e $filename” and other file-related cleverness in nginx can work.
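
A sketch of that second arrangement (hypothetical: each file-serving nginx talks only to the php-fpm on its own machine, so the balancer needs no "php" upstream at all):

```nginx
# On the load-balancing nginx:
location / {
    proxy_pass http://proxy;      # everything goes to the file servers
}

# On each file-serving nginx (2.localserverip, 3.localserverip, ...):
root /var/www/html;

location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;  # the php-fpm on this same machine
    include fastcgi.conf;
}
```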

I suggest just setting up a quick test, and seeing if one plan doesn’t
work well enough for you.

Good luck,

f

Francis D. [email protected]

On Mon, Apr 18, 2011 at 02:45:15PM -0400, TECK wrote:

Hi there,

Thanks Francis. My goal is to install Nginx only into load balancer.

Ok. But you have mentioned

upstream proxy {
server 1.localserverip:8000;
server 2.localserverip:8000;
server 3.localserverip:8000;
server 4.localserverip:8000;
}

and proxy_pass talks to a http server. So, what http server will you
have listening on port 8000, if not nginx?

If you follow Igor’s “don’t proxy static”, then it doesn’t matter. And
if you let the load balancer know about the fastcgi stuff, then it also
doesn’t matter.

location ~ \.php$ {
fastcgi_pass php;
include fastcgi.conf;
}

Can you give me a real example? "location [php urls]" is not really
clear to me.

You want something that will match all of the requests that should be
handled by the fastcgi server, and will match none of the requests that
should not be.

If “location ~ \.php$” works for you, stick with it.

Exactly what you have working already should keep working. If it
doesn't, then the generated error information will be interesting.

What happened when you tried it?

Good luck,

f

Francis D. [email protected]

Ok, some updates on the testing. Using the configuration listed below
does allow me to display index.php, but not index.html:
http {

upstream backend {
server 192.168.0.2:8000;
server 192.168.0.3:8000;
server 192.168.0.4:8000;
}

upstream fastcgi {
server 192.168.0.2:9000;
server 192.168.0.3:9000;
server 192.168.0.4:9000;
}

server {
    listen publicip:80 default_server backlog=256 rcvbuf=32k sndbuf=8k;
    server_name domain.com;
    root /var/www/html;
    index index.php index.html;

    location / {
        proxy_pass http://backend;
    }

    location ~ \.php$ {
        fastcgi_pass fastcgi;
        include fastcgi.conf;
    }
}
}

I tried to use only one server as the main entrance, but the load goes
very high once I run siege on it.
How would I enable support for regular html files in the above
configuration?

Thanks.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,191346,193007#msg-193007

On Fri, Apr 22, 2011 at 05:02:56AM -0400, TECK wrote:

Hi there,

Ok, some updates on the testing. Using the configuration listed below
does allow me to display index.php, but not index.html:

In your setup, when you access http://publicip/file.php, you want the
request to go to nginx, and then have nginx talk to one of the fastcgi
servers to get the response.

You report that this is working.

When you access http://publicip/file.html, you want the request to go
to nginx, and then what?

Do you want this nginx to serve /var/www/html/file.html from its own
filesystem? Or do you want it to talk to one of the other backend web
servers, to let one of them provide the file from their filesystem? Or
something else?

location / {
proxy_pass http://backend;
}

location ~ \.php$ {
fastcgi_pass fastcgi;
include fastcgi.conf;
}

I tried to use only one server as the main entrance, but the load goes
very high once I run siege on it.
How would I enable support for regular html files in the above
configuration?

If you want nginx to serve the file itself, get rid of the “proxy_pass”
line.

If you want a backend server to handle it, this should already be
working. You report that it isn’t, so check the logs of nginx, and of
the backend servers, to see if you can see where it fails.
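
The "serve it yourself" variant is roughly the sketch below; it reuses the root and the "fastcgi" upstream already present in the posted config, and adds a try_files fallback in place of the proxy_pass:

```nginx
location / {
    # No proxy_pass: nginx serves files under /var/www/html directly,
    # falling back to /index.php for URLs that match no file.
    try_files $uri $uri/ /index.php;
}

location ~ \.php$ {
    fastcgi_pass fastcgi;   # still load-balanced across the php-fpm nodes
    include fastcgi.conf;
}
```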

(If that isn’t enough to solve this, then in your reply please include
what you do see instead of the content of /var/www/html/file.html when
you make the request. The http status code is probably significant.)

Good luck with it,

f

Francis D. [email protected]

I think I understand now. This configuration would require that I have
Nginx installed on all the servers listed below:

upstream proxy {
server 1.localserverip:8000;
server 2.localserverip:8000;
server 3.localserverip:8000;
server 4.localserverip:8000;
}

when in real life I will have Nginx installed on 1.localserverip
only.
Presuming that I plan to add another balancer to the equation:

upstream proxy {
server 1.localserverip:8000;
server 2.localserverip:8000;
}

while I keep serving the php files from all servers:

upstream php {
server 1.localserverip:9000;
server 2.localserverip:9000;
server 3.localserverip:9000;
server 4.localserverip:9000;
}

That will require installing Nginx on 1.localserverip and
2.localserverip. So far, it is simple and logical.
My concern is what will happen with my fastcgi and try_files
directives:

upstream proxy {
server 1.localserverip:8000;
server 2.localserverip:8000;
}

upstream php {
server 1.localserverip:9000;
server 2.localserverip:9000;
server 3.localserverip:9000;
server 4.localserverip:9000;
}

location / {
proxy_pass http://proxy;
}

location /forum/ {
try_files $uri $uri/ /forum/data.php$args;
}

location ~ \.php$ {
fastcgi_pass php;
include fastcgi.conf;
}

Can you post a quick configuration that will work with the above setup?
Thanks.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,191346,192160#msg-192160
