Nginx NAS serving large files

Hello Ladies and Gentlemen,

We have a very frustrating problem with a NAS configuration that we
recently set up. After running a filesharing website on 8 servers for 1+
years, we decided to switch to a NAS configuration. We set up our NAS
(24x1TB RAID 5) to serve files to 9 front-end servers over a private
switch - so that's 9 servers on 1 (logical) disk… not sure if nginx can
handle this. Since we've moved to the NAS, bandwidth usage has dropped
significantly even though our traffic has stayed the same. Download
speeds are pretty bad as well, but this is due to IO issues being
created by nginx. So here's the problem we are facing…

In theory, nginx is creating an IO bottleneck because it's opening too
many files without closing them. We're using nginx on our front-end
servers to grab files off our NAS - but each file stays open for 45+
minutes. Instead of opening it for 10 seconds, reading it into memory,
and closing it, nginx is opening it for 10 seconds, reading it into
memory, streaming it for 45 minutes, THEN closing it. We ran a test with
nginx and were only able to download at 30kbps on port 182, however over
Apache we were able to reach speeds of 20mb/s.
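
To give an idea of the setup, the front-end servers basically just serve
static files straight off the NFS mount; a stripped-down sketch of that
kind of vhost (the paths and names below are placeholders, not our real
config) would be:

# Stripped-down front-end vhost sketch - paths and names are placeholders.
server {
    listen 80;
    server_name files.example.com;

    location / {
        # The document root sits on the NFS-mounted NAS volume.
        root /mnt/nas;
    }
}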

We’ve re-compiled nignx and installed the most recent stable version but
that didn’t fix anything.

Can someone please suggest a fix here? I've run out of ideas…

Thank you!

Do you have sendfile on or off? We generally run it off for NAS. Also,
are you using the open file cache?

Brian A. wrote in post #1027955:

Do you have sendfile on or off? We generally run it off for NAS. Also,
are you using the open file cache?

Hello, yes, at the moment we have sendfile set to off. And yes, we are
using the open file cache. And our readahead is 2 MB, in case you're
wondering.
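
For reference, the open file cache settings under discussion look roughly
like this - the values below are only illustrative, not necessarily what
we run:

# Illustrative open_file_cache settings (values are examples only).
# "inactive" sets how long an unused cached descriptor is kept around;
# open_file_cache_valid sets how often cached entries are rechecked.
open_file_cache          max=10000 inactive=30s;
open_file_cache_valid    60s;
open_file_cache_min_uses 2;
open_file_cache_errors   on;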

Any other ideas?

Graham,

Forgive my ignorance, what’s “port 182” in your original email?

In your tests did you use a single nginx instance and a single Apache
against that 1 logical disk on Netapp?

Can you show the sections of your nginx configuration relevant to disk
operations (also, how many workers per nginx server)?

Do you use any specific tunables to adjust the network settings for
Netapp?

How many files are on the Netapp partition?

What’s your OS?

Any ideas, guys - I need to get this resolved ASAP.

Otherwise I need to figure out another solution for our NFS NAS Nginx
configuration…

On Sat, Oct 22, 2011 at 08:29:05PM +0200, Graham D. wrote:

created by nginx. So here's the problem we are facing…
We've recompiled nginx and installed the most recent stable version, but
that didn't fix anything.

Can someone please suggest a fix here? I've run out of ideas…

What OS do you use?

Try the following:

worker_processes 32;

http {
    sendfile       off;
    output_buffers 1 1m;
}


Igor S.

Hi,

On 25.10.2011 06:19, Graham D. wrote:

Any ideas, guys - I need to get this resolved ASAP.

Otherwise I need to figure out another solution for our NFS NAS Nginx
configuration…

Which HW do you have (network card, …)?
Do you have anything in the messages/syslog file about any limits being hit?
Do you use connection tracking in iptables?

Cheers
Aleks

Andrew A. wrote in post #1028314:

Graham,

Forgive my ignorance, what’s “port 182” in your original email?

In your tests did you use a single nginx instance and a single Apache
against that 1 logical disk on Netapp?

Can you show the sections of your nginx configuration relevant to disk
operations (also, how many workers per nginx server)?

Do you use any specific tunables to adjust the network settings for
Netapp?

How many files are on the Netapp partition?

What’s your OS?

Hello Andrew, we're not using a high-end NAS, and our OS is CentOS
64-bit.

Aleksandar L. wrote in post #1028318:

Which HW do you have (network card, …)?
Do you have anything in the messages/syslog file about any limits being hit?
Do you use connection tracking in iptables?

Cheers
Aleks

Hello Aleksandar, our NAS is working fine and there are no errors in the
logs. The problem is with nginx itself.

What OS do you use?

Try the following:

worker_processes 32;

http {
    sendfile       off;
    output_buffers 1 1m;
}


Igor S.

Hello Igor, we are using CentOS 64-bit. We haven't tried using
output_buffers yet, so we will try that shortly…
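
If I understand the suggestion correctly, with sendfile off nginx reads
the file from disk itself and writes it out from userspace buffers, and
output_buffers sets the number and size of those buffers per connection.
Fleshed out a little (the listen/root values are made up for
illustration), it would look something like:

worker_processes 32;

http {
    # With sendfile disabled, nginx read()s the file and sends it from
    # these buffers - here a single 1 MB buffer per connection.
    sendfile       off;
    output_buffers 1 1m;

    server {
        # Placeholder values, for illustration only.
        listen 80;
        root   /mnt/nas;
    }
}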

Right now nginx is keeping the file handle open after the file is cached
in RAM. So we end up with 3,000 file handles open for the duration of a
45-minute download.
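
With that many descriptors open at once it's probably also worth
double-checking the per-worker limits on the nginx side; a sketch of the
relevant directives (the numbers are arbitrary examples, not a
recommendation):

# Per-worker limits worth checking when thousands of descriptors are open
# at once (values are arbitrary examples).
worker_processes     8;
worker_rlimit_nofile 65536;    # open-file limit per worker process

events {
    worker_connections 8192;   # simultaneous connections per worker
}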

This is a very frustrating problem.

On 25.10.2011 18:06, Graham D. wrote:

[snip]

the logs. The problem is with nginx itself.

I mean on your nginx server.

We are using an NFS appliance with a very simple nginx config and are
having no issues. We have this set:

sendfile off;
tcp_nopush off;

And that's all we really did as far as tuning is concerned.