PHP file with no extension

hi,

i have ANOTHER issue :) sorry for this.

we have a file, called ‘get’, which should be treated as PHP.

does anyone know how to make nginx send that specific file, or all
requests that go through that file, to the apache proxy?

i am talking about this:

the file is named FILE. it generates a directory structure like the one
below. so i would like all requests that begin with FILE, to be treated
as php, and parsed accordingly.

FILE/video/1/thumb

    location ~* FILE$ {
        proxy_pass        http://localhost:8000;
        proxy_redirect    off;
        proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header  X-Real-IP        $remote_addr;
        proxy_set_header  Host             $http_host;
    }

this is the current ruleset.

    location ~* FILE\/*\/*\/*$ {
        proxy_pass        http://localhost:8000;
        proxy_redirect    off;
        proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header  X-Real-IP        $remote_addr;
        proxy_set_header  Host             $http_host;
    }

this doesn’t work either.

Hello!

On Fri, Jan 11, 2008 at 10:57:38PM +0100, Stefanita rares Dumitrescu
wrote:

i am talking about this:

the file is named FILE. it generates a directory structure like the one
below. so i would like all requests that begin with FILE, to be treated
as php, and parsed accordingly.

FILE/video/1/thumb

Try something like

 location /FILE/ {
     proxy_pass ...;
     ...
 }

Maxim D.

p.s. You are posting to a mailing list, not a forum. Please don’t post
multiple messages unless you really need to, and quote previous
messages.

Hi Igor and everyone,

I’m toying with some ideas for a multithreaded FastCGI application,
intended to work with nginx.
Seems that FCGI request/connection multiplexing would be an extremely
valuable feature.

I can’t find any reference to this on the wiki, in the mailing list or
on Google - I assume this is because nginx, like Apache, doesn’t
support FastCGI multiplexing. Is this correct? If so, are there any
plans to implement it at some point in the future?
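[Aside: multiplexing in the FastCGI protocol hinges on the 16-bit request ID carried in every record header, which in principle lets one connection interleave records belonging to different requests. A minimal sketch of packing and unpacking that fixed 8-byte header, with the field layout taken from the FastCGI specification (the helper names here are made up for illustration):]

```python
import struct

# version, type, requestId, contentLength, paddingLength, reserved
FCGI_HEADER = "!BBHHBB"
FCGI_VERSION_1 = 1
FCGI_STDOUT = 6

def pack_header(rec_type, request_id, content_length, padding=0):
    """Build the fixed 8-byte FastCGI record header."""
    return struct.pack(FCGI_HEADER, FCGI_VERSION_1, rec_type,
                       request_id, content_length, padding, 0)

def unpack_header(data):
    """Parse the header; requestId is what distinguishes multiplexed requests."""
    version, rec_type, request_id, content_length, padding, _ = \
        struct.unpack(FCGI_HEADER, data[:8])
    return {"version": version, "type": rec_type, "request_id": request_id,
            "content_length": content_length, "padding": padding}

hdr = pack_header(FCGI_STDOUT, request_id=42, content_length=11)
print(unpack_header(hdr)["request_id"])  # 42
```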

It’s a spare-time/hobby project, so this is just out of interest really.

Cheers
Igor

Maxim D. wrote:

Hello!


Try something like

 location /FILE/ {
     proxy_pass ...;
     ...
 }

Maxim D.

p.s. You are posting to a mailing list, not a forum. Please don’t post
multiple messages unless you really need to, and quote previous
messages.

sorry for erasing the quotes, i forgot about the mailing list.

i made some modifications to my setup: took apache out, and loaded
php-fastcgi.

so the current config looks like:

server {
    listen       1.2.3.4:80;
    server_name  host1.com;
    #charset koi8-r;
    access_log   logs/fs01.nl.eu.bioget.com.access.log  main;

    location /data {
        root   /home/fs01/storage;
        index  index.html index.htm index.php;
    }

    # serve static files directly
    location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|flv|zip|mp3)$ {
        root        /home/fs01/storage;
        access_log  off;
        expires     30d;
    }

    location /get/ {
        fastcgi_pass   127.0.0.1:8000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /home/fs01/www$fastcgi_script_name;
        include        /usr/local/etc/nginx/fastcgi.fs01.conf;
    }
}

the file ‘get’ generates a directory structure like:

http://host1.com/get/picture/5/data

but requests to it now produce these PHP notices:

/usr/home/fs01/www/get(25) : Notice - Undefined index: REQUEST_URI
/usr/home/fs01/www/get(26) : Notice - Undefined index: SCRIPT_NAME

below you have the fastcgi params config file.

[[email protected]:/home/fs01/www] cat /usr/local/etc/nginx/fastcgi.fs01.conf
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;

fastcgi_param SCRIPT_NAME //$fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;

fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;

fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;

# PHP only, required if PHP was built with --enable-force-cgi-redirect

fastcgi_param REDIRECT_STATUS 200;
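[Aside: the usual way to handle an extensionless script with trailing path info like /get/picture/5/data is to split the URI into script and path-info parts before filling in SCRIPT_FILENAME. A sketch, assuming the layout above — note the regex is an assumption, and the fastcgi_split_path_info directive only exists in later nginx versions:]

```nginx
location ~ ^/get(/.*)?$ {
    # everything after /get is PATH_INFO, not part of the script name
    fastcgi_split_path_info  ^(/get)(/.*)$;
    fastcgi_param  SCRIPT_FILENAME  /home/fs01/www/get;
    fastcgi_param  PATH_INFO        $fastcgi_path_info;
    fastcgi_pass   127.0.0.1:8000;
    include        /usr/local/etc/nginx/fastcgi.fs01.conf;
}
```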

Igor C. wrote:

are there any plans to implement it at some point in the future?

As far as I know the multiplexing support in FastCGI is broken “by
design”.

Read this response one of the authors of Twisted gave me about FastCGI
(among other questions):
http://twistedmatrix.com/pipermail/twisted-web/2006-April/002598.html


Manlio P.

On 14 Jan 2008, at 13:46, Manlio P. wrote:

As far as I know the multiplexing support in FastCGI is broken “by
design”.

Read this response one of the authors of Twisted gave me about
FastCGI (among other questions):
http://twistedmatrix.com/pipermail/twisted-web/2006-April/002598.html

Thanks Manlio, that’s very interesting. Lack of flow control in the
protocol is obviously an issue for multiplexing; now that it’s been
pointed out, it seems bizarre that it should have been missed out. One
wonders if the intention was for the application to send an HTTP 503
over the FCGI connection in the event of overloading? I guess this
would require a web server module to back off from overloaded
application instances based on their HTTP status code - which seems
like trying to patch up the shortcomings of the transport in the
application.

It’s a shame; it seemed that removing all the TCP overhead between the
web server and the application server would be a good thing, but
perhaps FCGI just isn’t the way. I’m still just researching the area
at the moment though so any further thoughts or experiences would be
very welcome.

Is there any plan to implement HTTP/1.1 & keepalive connections in
nginx’s conversations with upstream servers? Can’t see anything in the
wiki or feature request list.

Cheers,
Igor

Thanks Manlio.

Igor C. wrote:

Lack of flow control in the protocol is obviously an issue for
multiplexing; now that it’s been pointed out, it seems bizarre that it
should have been missed out. One wonders if the intention was for the
application to send an HTTP 503 over the FCGI connection in the event
of overloading?

Maybe they just thought that overflow is not possible, who knows.
TCP, as an example, has some form of flow control (but usually FastCGI
uses a UDP connection)

I guess this would
require a web server module to back off from overloaded application
instances based on their HTTP status code - which seems like trying to
patch up the shortcomings of the transport in the application.

It’s a shame; it seemed that removing all the TCP overhead between the
web server and the application server would be a good thing, but perhaps
FCGI just isn’t the way.

But with FCGI you can just execute one request at a time, using a
persistent connection.

The problems, with nginx, are:

  1. the upstream module does not support persistent connections.
     A new connection is created for every request.
     In fact the http proxy only speaks HTTP 1.0 with the backend.
  2. nginx does not support any form of connection queueing with
     upstream servers.
     This means that if nginx is handling 500 connections, then it will
     make 500 concurrent connections to the upstream server, and likely
     the upstream server (usually “a toy”) will not be able to handle
     this.

Fixing this will require a redesign of the upstream module.
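[Editor's note for readers coming to this later: nginx eventually grew an upstream keepalive module addressing the first point. A sketch of how persistent FastCGI upstream connections look in those later versions — the upstream name is illustrative:]

```nginx
upstream php_backend {
    server 127.0.0.1:8000;
    keepalive 8;             # keep up to 8 idle connections per worker
}

server {
    location /get/ {
        fastcgi_pass       php_backend;
        fastcgi_keep_conn  on;    # reuse the upstream connection
    }
}
```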

I’m still just researching the area at the
moment though so any further thoughts or experiences would be very welcome.

Is there any plan to implement HTTP/1.1 & keepalive connections in
nginx’s conversations with upstream servers? Can’t see anything in the
wiki or feature request list.

Igor S. has expressed his intentions to add support for persistent
upstream connections, try searching the mailing list archives.

Cheers,
Igor


Igor C. // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749
5355 // www.pokelondon.com

Manlio P.

Manlio P. wrote:

uses an UDP connection)

Oops, for some reason I wrote (and talked about) UDP instead of
Unix domain socket/TCP connections…

Manlio P.
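[Aside: for completeness, pointing nginx at a Unix domain socket backend instead of TCP looks like this — the socket path is illustrative:]

```nginx
location /get/ {
    # same idea as fastcgi_pass 127.0.0.1:8000, but over a local socket
    fastcgi_pass  unix:/tmp/php-fastcgi.sock;
}
```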

Denis F. Latypoff wrote:

Hello Manlio,

Hi.

create a special layer (say a multiplexing proxy) which holds all 500
concurrent connections and passes only the number of connections the
fcgi backlog allows.

Not sure if this is possible, without having to modify the fastcgi
module.

OR

create a second fastcgi library which is non-blocking and threaded: the
main process accepts and processes requests, and all other threads run
the app

The problem is the same: threads are not cheap, so creating 500 threads
is likely to destabilize the server and the system (nowadays good
multithreaded servers make use of thread pools).

Moreover some applications are not thread safe.

With threads you usually just save memory, but you add overhead caused
by synchronization.
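[Aside: the thread-pool point can be sketched in a few lines — instead of one thread per request, a fixed pool bounds concurrency. Names and sizes here are illustrative:]

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(req_id):
    # placeholder for real request handling
    return f"handled {req_id}"

# a fixed pool of 4 workers serves 500 queued requests without
# ever creating 500 threads
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(500)))

print(len(results))  # 500
```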

(similar to
mod_wsgi, isn’t it?)

No.
The WSGI module for nginx is embedded in nginx, and nginx just uses a
fixed number of worker processes (and all the processes accept new
requests on the same inherited socket).

[…]

Manlio P.

Hello Manlio,

Monday, January 14, 2008, 10:54:53 PM, you wrote:

Thanks Manlio, that’s very interesting. Lack of flow control in the
protocol is obviously an issue for multiplexing; now that it’s been
pointed out, it seems bizarre that it should have been missed out. One
wonders if the intention was for the application to send an HTTP 503
over the FCGI connection in the event of overloading?

Maybe they just thought that overflow is not possible, who knows.
TCP, as an example, has some form of flow control (but usually FastCGI
uses a UDP connection)

I guess this would
require a web server module to back off from overloaded application
instances based on their HTTP status code - which seems like trying to
patch up the shortcomings of the transport in the application.

It’s a shame; it seemed that removing all the TCP overhead between the
web server and the application server would be a good thing, but perhaps
FCGI just isn’t the way.

But with FCGI you can just execute one request at a time, using a
persistent connection.

The problems, with nginx, are:

  1. the upstream module does not support persistent connections.
     A new connection is created for every request.
     In fact the http proxy only speaks HTTP 1.0 with the backend.
  2. nginx does not support any form of connection queueing with
     upstream servers.
     This means that if nginx is handling 500 connections, then it will
     make 500 concurrent connections to the upstream server, and likely
     the upstream server (usually “a toy”) will not be able to handle
     this.

Fixing this will require a redesign of the upstream module.

OR

create a special layer (say a multiplexing proxy) which holds all 500
concurrent connections and passes only the number of connections the
fcgi backlog allows.

OR

create a second fastcgi library which is non-blocking and threaded: the
main process accepts and processes requests, and all other threads run
the app (similar to mod_wsgi, isn’t it?)
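[Aside: the "multiplexing proxy" idea above amounts to putting a concurrency limiter between the frontend and the backend. A toy sketch with a semaphore — the limit and function names are made up for illustration:]

```python
import threading

BACKEND_BACKLOG = 8                  # assumed backend capacity
slots = threading.BoundedSemaphore(BACKEND_BACKLOG)
in_flight = 0
peak = 0
lock = threading.Lock()

def forward_to_backend(req_id):
    """Placeholder for actually proxying one request to the FCGI backend."""
    global in_flight, peak
    with slots:                      # at most BACKEND_BACKLOG at once
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        # ... talk to the backend here ...
        with lock:
            in_flight -= 1

# 500 frontend connections arrive, but the backend never sees
# more than BACKEND_BACKLOG of them concurrently
threads = [threading.Thread(target=forward_to_backend, args=(i,))
           for i in range(500)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= BACKEND_BACKLOG)  # True
```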

I’m still just researching the area at the
moment though so any further thoughts or experiences would be very welcome.

Is there any plan to implement HTTP/1.1 & keepalive connections in
nginx’s conversations with upstream servers? Can’t see anything in the
wiki or feature request list.

Igor S. has expressed his intentions to add support for persistent
upstream connections, try searching the mailing list archives.
