Igor C. wrote:
really make any difference. When I switched the app server 2 to use
caching stuff, or whether, if I’d been able to spend more time tuning the
NFS configuration, I might have been able to get lower CPU usage? I do
seem to hear of people doing what I wanted to do (obviously it’s better
to have the code in one place and not have to update it in multiple places
if possible), so I’m sure there must be ways to get it to work; quite
possibly my NFS configuration was naïve …
I’m not sure; we use an NFS-mounted directory shared between 4 web
servers.
Our exported directory sits on hardware RAID10 (4x 15,000 RPM
Ultra320 disks on an LSI MegaRAID with 512MB of battery-backed cache)
with the following export options:
/opt/shared/htdocs x.x.x.x/255.255.255.0(rw,no_subtree_check,sync)
Clients mount it with the options
rw,hard,intr,udp,noatime,rsize=8192,wsize=8192,async,auto
(on bonded gigabit ethernet)
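For reference, the client side of that boils down to an fstab entry roughly
like the one below (the server name and mount point here are made up):

  nfsserver:/opt/shared/htdocs  /opt/shared/htdocs  nfs  rw,hard,intr,udp,noatime,rsize=8192,wsize=8192,async,auto  0 0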
Each of the 4 web servers uses it as the document root for both
apache+mod_perl and nginx (which serves the static files).
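To give an idea of the split, the nginx side of one web server looks roughly
like this (the port, file extensions and expiry are only illustrative, not our
exact config):

  # one web server; docroot is the shared NFS mount
  server {
      listen 80;
      root /opt/shared/htdocs;

      # static assets served directly off the NFS mount
      location ~* \.(gif|jpe?g|png|css|js)$ {
          expires 30d;
      }

      # everything else handed to apache+mod_perl on the same box
      location / {
          proxy_pass http://127.0.0.1:8080;
      }
  }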
Latency is tolerable, though certainly higher than a local filesystem;
read performance is OK; write performance is lousy, even if (or perhaps
because) you get NFS locking working properly.
Everything pretty much works, but directories with heavy read/write
activity suffer (i.e. the Apache::Session directories).
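The session directories are just stock Apache::Session::File stores; a
minimal sketch of what each request does is below (the /dev/shm paths are
only illustrative; in the tests that follow, Directory points at tmpfs,
local disk or the NFS mount):

  use strict;
  use warnings;
  use Apache::Session::File;

  # Tie a session hash to a file-backed store; the directories must
  # already exist. Swapping Directory between tmpfs, local disk and the
  # NFS mount is what the benchmarks below compare.
  my %session;
  tie %session, 'Apache::Session::File', undef, {
      Directory     => '/dev/shm/sessions',
      LockDirectory => '/dev/shm/sessions/locks',
  };

  my $session_id = $session{_session_id};   # id generated for the new session
  $session{last_seen} = time();

  untie %session;   # writes the session back to the store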
Some benchmarks (somewhat old, with plain apache+mod_perl only),
three runs each of "ab -n 10000 -c 10":
Test 1: no sessions, local file
Requests/sec: 480.34 478.35 481.22
Test 2: no sessions, file served on nfs
Requests/sec: 475.26 472.72 472.41
Test 3: local sessions using tmpfs (AKA shm fs)
Requests/sec: 122.68 120.15 112.74
Test 4: sessions on nfs mounted device (ext3)
Requests/sec: 21.87 22.32 21.57
Test 5: sessions on nfs mounted ramdisk (ext2)
Requests/sec: 94.96 88.75 108.80
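For anyone wanting to reproduce Tests 3 and 5: the session stores there are
just a local tmpfs mount and an ext2 ramdisk on the NFS server, set up along
these lines (sizes, devices and paths here are illustrative):

  # Test 3: sessions in a local tmpfs (shm) mount on each web server
  mkdir -p /var/lib/sessions
  mount -t tmpfs -o size=64m tmpfs /var/lib/sessions

  # Test 5: ext2 on a ramdisk on the NFS server, exported alongside htdocs
  mke2fs -q /dev/ram0
  mkdir -p /opt/shared/sessions-ram
  mount /dev/ram0 /opt/shared/sessions-ram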
We use sessions on only a very small subset of the pages served, so that
loss is acceptable.
YMMV, of course.