Reading from a named pipe / fifo

Hi,

how could I configure (or change) nginx to serve the output
from a named pipe / fifo (that was created with mknod or mkfifo)
as if it were a regular HTML file?

Let’s say I have a fifo named fifo.html that gets input from an
application. This input will sometimes be read locally by another
application and sometimes remotely through nginx, but once the
information is read, it should be considered consumed (no longer
available).

Can the current version of nginx be configured to provide this
feature?

Limiting the number of times a certain file can be served would
also be an acceptable solution.

If this isn’t possible, could you point me in the right direction
so I could change the source code or write a module that would
allow nginx to serve output from named pipes / fifos?
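For illustration, the consume-once behavior of a fifo described above can be seen with a small Python sketch (the file name and payload here are made up):

```python
import os
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "fifo.html")
os.mkfifo(path)  # same effect as mkfifo(1) on the command line

def app_writer():
    # open() for writing blocks until some reader opens the other end
    with open(path, "w") as f:
        f.write("<p>one-shot payload</p>")

t = threading.Thread(target=app_writer)
t.start()

# The first reader drains the pipe; once read, the data is gone.
# A second open() for reading would block until the app writes again.
with open(path) as f:
    data = f.read()
t.join()
print(data)  # -> <p>one-shot payload</p>
```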

Thank you,
Max

nginx can HTTP proxy to a unix socket (as well as fastcgi, uwsgi, etc.). So
could the other app provide the data that way? With a fifo, you'd need a
way to specify when to stop reading, so you may as well use a known
protocol.
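If the app can speak HTTP over a unix socket, the nginx side is just a proxy_pass; a minimal sketch (the socket path is hypothetical):

```nginx
location /fifo.html {
    # hand the request to the app over its unix socket, speaking plain HTTP
    proxy_pass http://unix:/var/run/myapp/myapp.sock;
}
```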

January 21, 2012, 02:21, from Brian A.:

> nginx can HTTP proxy to a unix socket (as well as fastcgi, uwsgi, etc.). So
> could the other app provide the data that way? With a fifo, you'd need a
> way to specify when to stop reading, so you may as well use a known
> protocol.

Unfortunately, the other app can provide the data only through a fifo
and there's no easy way to change it due to licensing, so writing a fifo
module for nginx would be the easier and cleaner solution. Igor and
other developers, could you please give me a few tips to help me get
started with the development of the fifo module?

Thank you,
Max

Max Wrote:

> Unfortunately, the other app can provide the data only through a fifo
> and there's no easy way to change it due to licensing, so writing a
> fifo module for nginx would be the easier and cleaner solution. Igor
> and other developers, could you please give me a few tips to help me
> get started with the development of the fifo module? Thank you, Max

Personally, the way I'd do it is to write a bit of PHP (or Python, Perl,
etc.) called via FastCGI to read your fifo (be mindful of the blocking
nature of pipes, which can cause such things to hang waiting for input).

But
http://squirrelshaterobots.com/programming/php/building-a-queue-server-in-php-part-3-accepting-input-from-named-pipes/
is an example of reading a fifo from PHP, and it would be quite easy to
do something like this (kinda off the top of my head):

    location /location/of/my/fifo.html {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket;
        fastcgi_param SCRIPT_FILENAME /path/to/local/php/that/reads/pipe.php;
    }

within the nginx config…
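The blocking caveat mentioned above can also be handled by opening the fifo non-blocking and waiting with a timeout; a rough Python sketch of the reader side (all paths and the payload are illustrative):

```python
import os
import select
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "fifo")
os.mkfifo(path)

def producer():
    fd = os.open(path, os.O_WRONLY)  # blocks until a reader appears
    os.write(fd, b"hello from the fifo\n")
    os.close(fd)

threading.Thread(target=producer).start()

# O_NONBLOCK makes the read-side open() return immediately,
# even when no writer has connected yet.
fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
ready, _, _ = select.select([fd], [], [], 5.0)  # wait up to 5 s for data
data = os.read(fd, 4096) if ready else b""
os.close(fd)
```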

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,221499,221507#msg-221507

January 21, 2012, 21:41, from "takigama" <[email protected]>:

> Personally, the way I'd do it is to write a bit of PHP (or Python, Perl,
> etc.) called via FastCGI […]

Thanks for the suggestion, but using FastCGI is not an option for
many reasons. The most elegant solution I could come up with
without hacking nginx involves embedded Perl like this:

===<FIFOReader.pm>===
package FIFOReader;

use nginx;

sub handler {
    my $r = shift;

    # Give the blocking open() 5 seconds to find a writer.
    alarm 5;
    open(FIFO, "</path/to/fifo") or return 444;
    alarm 0;

    while (<FIFO>) {
        $r->print($_);
    }
    $r->rflush();
    close(FIFO);

    return OK;
}

1;
__END__
===</FIFOReader.pm>===

===<nginx.conf>===
http {
    perl_modules perl.modules/;
    perl_require FIFOReader.pm;

    server {
        location ~ ^/readfifo {
            ssi on;
            perl FIFOReader::handler;
        }
    }
}
===</nginx.conf>===


FIFOReader.pm is located in $nginx_conf_dir/perl.modules/
readfifo is located in $document_root/

In the FIFOReader Perl module I first set the alarm to allow
the blocking open() call 5 seconds to open the FIFO. After
5 seconds the blocking open() call gets interrupted and
nginx returns code 444, which tears down the connection
without sending any kind of response code, so there is
no need to check for response codes - the client gets
either nothing or the data from the FIFO.

If the blocking open() call succeeds before the timeout,
the alarm is turned off and the data is read from
the FIFO and passed on to the client, after which the
FIFO is closed and the OK code is returned by nginx.

There is no additional alarm to prevent read timeouts
(before the while (<FIFO>) loop) because that is left up
to the client by design.
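The same alarm-around-open() pattern can be sketched in Python for anyone following along without Perl (the timeout value and path are illustrative; the Perl version returns 444 where this raises):

```python
import os
import signal
import tempfile

path = os.path.join(tempfile.mkdtemp(), "fifo")
os.mkfifo(path)

def on_alarm(signum, frame):
    raise TimeoutError("no writer connected in time")

signal.signal(signal.SIGALRM, on_alarm)
signal.alarm(2)  # the Perl version gives open() 5 seconds
try:
    # Blocks here: nothing ever opens the write side of this fifo.
    fd = os.open(path, os.O_RDONLY)
    signal.alarm(0)  # open() succeeded, cancel the pending alarm
    os.close(fd)
    timed_out = False
except TimeoutError:
    timed_out = True  # where the Perl handler does "or return 444"
```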

Another side benefit of this approach is that it doesn’t
require additional processes (such as FastCGI wrappers),
so there is no need to supervise additional processes to
make sure they’re around. The embedded Perl module
will work for as long as nginx is running.

I have started working on a FIFO module to allow per-location
FIFO opening and reading. If I get it to production quality I’ll
post my patches to the list.

Max


On Jan 21, 2012, at 9:04 PM, Max wrote:

Thanks for the suggestion, but using FastCGI is not an option for
many reasons. The most elegant solution I could come up with
without hacking nginx involves embedded Perl like this:

Won’t embedded perl block in this case? I’m not as familiar with it. If
it does, this seems like a sub-optimal solution. FastCGI would not block
on the nginx side. It’s pretty trivial to manage fastcgi processes.

It doesn’t have to be fastcgi, it could be a mini-HTTP server, uwsgi, or
whatever. Or is there some restriction that nginx must talk directly to
the fifo?


On Jan 21, 2012, at 10:35 PM, Max wrote:

> It will block, but not indefinitely, only for as long as I want it to
> block. That’s why I used alarm to give open() 5 seconds to succeed.

5 seconds can be an eternity in internet time.

> Exactly, there are strict MAC (Mandatory Access Control) restrictions
> in place.

You can use MAC with a fastcgi script.

> Even if there weren’t, I’d still prefer the direct approach, it’s
> much simpler and cleaner, IMHO.

I guess we’ll just have to disagree about that. Doing things to make
nginx block for seconds at a time on purpose seems much worse than
managing a simple fastcgi script. IMNSHO.

–Brian

January 22, 2012, 06:26, from Brian A. <[email protected]>:

> Won’t embedded perl block in this case? I’m not as familiar with it. If it
> does, this seems like a sub-optimal solution. FastCGI would not block on the
> nginx side. It’s pretty trivial to manage fastcgi processes.

It will block, but not indefinitely, only for as long as I want it to
block. That’s why I used alarm to give open() 5 seconds to succeed - in
case it fails to open the FIFO within 5 seconds, it gets interrupted
(through SIGALRM), and since it fails, the “or return 444” part
returns control to nginx to close the client’s connection.

However, if the open() call succeeds within 5 seconds, the alarm
gets turned off (alarm 0) and then the FIFO is read from.

> It doesn’t have to be fastcgi, it could be a mini-HTTP server, uwsgi, or
> whatever, or is there some restriction that nginx must talk directly to
> the fifo?

Exactly, there are strict MAC (Mandatory Access Control) restrictions
in place. Even if there weren’t, I’d still prefer the direct approach;
it’s much simpler and cleaner, IMHO. Why would you want to use another
wrapper process, and yet another process to supervise the wrapper
process, and yet another process to read the FIFO, and at least
another pair of sockets, when you don’t have to?

Max
