NGINX serving data via NFS mount

Hi,

Can anybody tell me what nginx needs in order to serve requests from an NFS mount point? Are any changes to the config file required?

The config file looks like this:

http {

    server {
        listen *:80 default accept_filter=httpready;
        server_name vs0;
        root /var/home/diag;
        autoindex on;
    }
}

The NFS mount point sits under the directory given in the root entry above.

This config results in an error when I try to send a request using curl, as shown below:

[rakshith@cyclnb15 ~]$ curl -X GET -qvk
http://10.238.62.234:80/vol1_mnt_point/output.dat

< HTTP/1.1 404 Not Found

But the file actually exists:

bash-3.2# pwd
/var/home/diag/vol1_mnt_point

bash-3.2# ls
.snapshot nginx.tar output.dat

Any help on this is greatly appreciated!

Thanks,
Rakshith

Posted at Nginx Forum:

On 12 Aug 2013 08:34, “Rakshith” [email protected] wrote:

Hi,

Can anybody tell me what nginx needs in order to serve requests from an NFS mount point? Are any changes to the config file required?

Nothing special is needed in my experience. That’s the point of NFS: exposing a “normal” file system to user-space applications.

Some things you may wish to check:

Your curl invocation’s host doesn’t match the config. Are you sure you’re hitting that nginx server{}? Try it with vs0 instead of the IP (you may need a hosts file entry, of course).
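Without touching a hosts file you can also pin the name on the curl command line; a sketch using the address from the thread (either form should exercise the vs0 server{} block):

```
# send the request to the same IP, but with the Host header nginx matches on
curl -v -H 'Host: vs0' http://10.238.62.234:80/vol1_mnt_point/output.dat

# or let curl map the name to the address itself
curl -v --resolve vs0:80:10.238.62.234 http://vs0/vol1_mnt_point/output.dat
```

If the 404 disappears with the vs0 name, the original request was being answered by a different server{} block.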

Check the nginx error log.

Check that the permissions and the directory and file ownership are all correct and allow the nginx daemon access - not just on the file you’re accessing but on all the directories in the FS hierarchy leading to it. Ownership mismatches are a common operational NFS problem.
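That walk up the directory tree is easy to script; a minimal sketch (the path is the one from the thread - substitute your own):

```shell
#!/bin/sh
# Print ownership and permissions of a file and of every directory
# leading up to it: nginx's worker user needs execute (+x) on each
# directory and read (+r) on the file itself. Absolute paths only.
check_path_perms() {
    ls -ld "$1" || true          # keep going even if a component is missing
    dir=$1
    while [ "$dir" != "/" ]; do
        dir=$(dirname "$dir")
        ls -ld "$dir" || true
    done
}

# Path from the thread; replace with the file you are testing.
check_path_perms /var/home/diag/vol1_mnt_point/output.dat
```

Running the same ls commands as the nginx worker user (e.g. via sudo -u) shows what the daemon itself can actually reach.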

Lastly, I would never point nginx to the root of a filer’s FS. Even in
testing it’s a bad idea. Create a directory to hold your content.

Cheers,
Jonathan

It makes no difference what file system the file is on. You just need to
ensure that the files are accessible, so take care with uid/gid used to
mount, as well as file ownership. Standard entries in /etc/exports work
from what I remember.
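For reference, a minimal sketch of such entries; the export path, network, and host names below are illustrative assumptions, not details from the thread:

```
# /etc/exports on the NFS server
/export/content  10.238.62.0/24(ro,root_squash,sync)

# on the web server: mount it, then check how uid/gid map across the wire
mount -t nfs filer:/export/content /var/home/diag/vol1_mnt_point
ls -ln /var/home/diag/vol1_mnt_point
```

If ls -ln shows numeric owners that don’t exist on the web server, that is the uid/gid mismatch to take care with.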

You will have a performance hit to contend with. I usually use lsyncd, with rsync as a backup, and keep the files local.
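If you follow that keep-files-local approach, a minimal lsyncd sketch might look like the following; the paths and target host are assumptions, not details from the thread:

```
-- /etc/lsyncd/lsyncd.conf.lua (sketch)
settings {
    logfile = "/var/log/lsyncd.log",
}

sync {
    default.rsync,
    source = "/export/content",       -- canonical copy on the filer mount
    target = "web1:/var/www/content", -- local disk on the web server
}
```

nginx’s root would then point at the local copy, so requests never wait on NFS.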

hth,

Steve

So here is what the export policy looks like:

         Policy   Rule   Access    Client     RO
Vserver  Name     Index  Protocol  Match      Rule
-------  -------  -----  --------  ---------  ----
vs0      default  1      any       0.0.0.0/0  any

So I would like my nginx server to behave as follows:

Receive a GET/PUT request from a client.
Forward the request to the NFS client via the NFS mount point.
The NFS client, which has mounted the file system, would then use NFS to fetch the file.

So to summarize, the nginx server just acts like a proxy here…

FYI: I did try doing a GET and a PUT via the VFS and it worked… The config file looks something like below:

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    dav_methods PUT DELETE MKCOL COPY MOVE;
    create_full_put_path on;
    client_max_body_size 10G;

    server {
        listen *:80 default accept_filter=httpready;
        server_name vs0;
        root /clus/vs0;
        autoindex on;

        location = /favicon.ico {
            access_log off;
            log_not_found off;
        }
    }
}

On 12 Aug 2013 09:41, “Rakshith” [email protected] wrote:

So here is what the export policy looks like:

         Policy   Rule   Access    Client     RO
Vserver  Name     Index  Protocol  Match      Rule
-------  -------  -----  --------  ---------  ----
vs0      default  1      any       0.0.0.0/0  any

That means nothing to me (in this nginx context). You need to check file permissions/ownership at the Unix FS level.

So I would like my nginx server to behave as follows:

Receive a GET/PUT request from a client.
Forward the request to the NFS client via the NFS mount point.
The NFS client, which has mounted the file system, would then use NFS to fetch the file.

You need to explain this better. Nginx won’t give a damn that the file is on NFS, but what you’re explaining has nothing to do with nginx! Nginx doesn’t talk “NFS” in any way.

So to summarize, the nginx server just acts like a proxy here…

Given what you’ve explained, this is wrong. I /think/ you want Nginx to serve filesystem-accessible files (admittedly stored on a filer) and have the concept of a proxy wrong in your head.
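To make that distinction concrete, a sketch (not from the thread): serving files from a path - NFS-backed or not - uses root, while a real proxy hands the whole HTTP request to another server with proxy_pass.

```
# static files: nginx reads them through the normal file-system interface;
# whether the path is local disk or an NFS mount is invisible to nginx
location /files/ {
    root /var/home/diag;
}

# a proxy: nginx forwards the HTTP request to another HTTP server
# (backend.example.com is a placeholder)
location /app/ {
    proxy_pass http://backend.example.com;
}
```

Nothing in the first case involves nginx speaking the NFS protocol; the kernel’s NFS client does all of that underneath.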

FYI: I did try doing a GET and PUT via the VFS and it worked.

Demonstrate this test please.

The config file looks something like below:

So if that works, what’s the problem?

Jonathan