Stat() permission denied

I often get a stat() permission-denied error on certain directories (it
used to be a different one; now it is this one). The directory exists, is
not huge by any means, and the website loaded without issue.

2008/04/26 08:05:59 [crit] 9529#0: *3254134 stat()
"/home/mike/web/michaelshadle.com/" failed (13: Permission denied),
client: 71.123.33.22, server: michaelshadle.com, request: "GET /
HTTP/1.0", host: "michaelshadle.com"

What could cause this? How can I fix it? It is mounted over NFS… it
has the same permissions as all the other directories. I would expect
all of them to throw the same error then. It’s almost like it picks a
specific inode to hate or something.

[[email protected] nginx]# stat /home/mike/web/michaelshadle.com/
File: `/home/mike/web/michaelshadle.com/'
Size: 512 Blocks: 4 IO Block: 32768 directory
Device: 15h/21d Inode: 8997591 Links: 15
Access: (0711/drwx--x--x) Uid: ( 1000/ mike) Gid: ( 1000/ mike)
Access: 2008-04-26 01:01:05.000000000 -0700
Modify: 2008-03-04 02:17:22.000000000 -0800
Change: 2008-04-23 19:27:16.000000000 -0700
[[email protected] nginx]#

On Sat, Apr 26, 2008 at 08:09:31AM -0700, mike wrote:

> has the same permissions as all the other directories. I would expect
> Modify: 2008-03-04 02:17:22.000000000 -0800
> Change: 2008-04-23 19:27:16.000000000 -0700
> [[email protected] nginx]#

It seems that the nginx workers run under a user other than mike.

they do, they run as www-data

but why would it fail just on that one dir, and not the millions of
other files and dirs on the system owned by different users?

On Sat, Apr 26, 2008 at 12:28:14PM -0700, mike wrote:

> they do, they run as www-data
>
> but why would it fail just on that one dir, and not the millions of
> other files and dirs on the system owned by different users?

Because this dir has these access rights:

Access: (0711/drwx--x--x) Uid: ( 1000/ mike) Gid: ( 1000/ mike)

No one except the owner (mike) and root can read it.
Probably, the stat() operation somehow involves reading the directory as
a file. Or it may be NFS-specific.
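A quick way to see those access bits, and the common fix of granting read to group/other, can be sketched locally. This uses a temporary directory standing in for the real path; note that under plain POSIX semantics stat() only needs execute (search) permission on the parent directories, so the read bit mattering here may well be NFS-server behaviour, as suggested above.

```shell
# Sketch with a throwaway directory (hypothetical stand-in for the site dir).
tmp=$(mktemp -d)
mkdir "$tmp/site"

chmod 0711 "$tmp/site"        # drwx--x--x: others may traverse, but not read
stat -c '%a %A' "$tmp/site"   # -> 711 drwx--x--x

chmod 0755 "$tmp/site"        # drwxr-xr-x: others may read/list as well
stat -c '%a %A' "$tmp/site"   # -> 755 drwxr-xr-x

rm -rf "$tmp"
```

An alternative that avoids opening the directory to everyone is putting www-data in mike's group and using 0750 instead.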

BTW, NFS is not good to use with nginx. NFS may be slower than local
disk, so an entire worker will block on an NFS operation and will not
handle other connections.
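Independent of the permissions issue, one mitigation sometimes used when stat() over NFS is slow is nginx's open_file_cache, which caches open file descriptors and stat() results so workers touch the filesystem less often. The values below are purely illustrative, not tuned for any particular site:

```nginx
http {
    # Cache up to 1000 open()/stat() results; drop entries idle for 20s.
    open_file_cache          max=1000 inactive=20s;
    # Re-check cached entries against the filesystem every 30s.
    open_file_cache_valid    30s;
    # Whether to also cache failed lookups (errors like ENOENT/EACCES).
    open_file_cache_errors   off;
}
```

Note that caching errors (open_file_cache_errors on) would make a transient permission failure like the one above stick around until the cache entry expires.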

What would you suggest then for a central storage mechanism?

NFS and CIFS are the only ones I know of (that are free) - and CIFS is
even worse, AFAIK.

I’m actually going to try to migrate my clients to use MogileFS for
their data, and then have the website code be done using rsync - so I
will hopefully be reducing NFS reliance as much as possible.

If you’ve got any ideas though I am all for it!

mike wrote:

> What would you suggest then for a central storage mechanism?

I think CodaFS is a possible solution.

> I’m actually going to try to migrate my clients to use MogileFS for
> their data, and then have the website code be done using rsync - so I
> will hopefully be reducing NFS reliance as much as possible.
>
> If you’ve got any ideas though I am all for it!

Some links for OpenSolaris/Nexenta users:
http://docs.sun.com/app/docs/doc/817-5093/fscachefs-70682?l=en&q=cachefs&a=view
http://docs.sun.com/source/819-6148-10/index.html

I use pure NFS (mounted read only) for nginx under Solaris with no
problems.
