Very poor performance for serving static files

We are a video hosting company, storing files in MP4 format. We have
another server, running LiteSpeed, that handles a nearly 1 Gbit connection
successfully. It is slow, but at least it responds to requests. But this new
server (nginx) can't handle requests: if bandwidth usage hits 150-200 Mbit it
goes down. Actually not down, it just doesn't respond to any HTTP request. We
tried, and also hired some people who tried, but with no success. This is the
last chance. Does anybody know what the problem is?

Here is the nginx.conf:


#user  nobody;
worker_processes  8;
worker_rlimit_nofile 20480;

error_log  /var/log/nginx/error.log  info;

#pid  logs/nginx.pid;

events {
    worker_connections  768;
    use epoll;
}

http {
    server_name_in_redirect off;
    server_names_hash_max_size 2048;
    include       mime.types;
    default_type  video/mp4;
    server_tokens off;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    tcp_nopush      on;
    tcp_nodelay     on;
    connection_pool_size  256;

    #keepalive_timeout  0;
    #keepalive_timeout  300;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  /var/log/nginx/access.log  combined;

        location / {
            root   /home/username/public_html;
            index  index.php index.html index.htm;

            location ~* \.(mp4)$ {
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
            }

            #location ~ \.flv {
            #    accesskey            on;
            #    accesskey_hashmethod md5;
            #    accesskey_arg        "key";
            #    accesskey_signature  "mypass$remote_addr";
            #    flv;
            #}

            #location ~ \.mp4 {
            #    accesskey            on;
            #    accesskey_hashmethod md5;
            #    accesskey_arg        "key";
            #    accesskey_signature  "mypass$remote_addr";
            #    mp4;
            #}

            types {
                video/mp4 mp4;
            }
        }

        location /nginx_status {
            stub_status on;
            access_log  off;
            allow all;
            #deny all;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        location ~ \.php$ {
            root           /home/cdn1280/public_html;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  /home/username/public_html$fastcgi_script_name;
            include        fastcgi_params;
        }

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }
}


And nginx -v:

nginx: nginx version: 1.0.4

Our server has a 1 TB HDD, 12 GB RAM, an 8-core CPU, and a 1 Gbit line.

Posted at Nginx Forum:

Any system information, like I/O stats?

By the way, "worker_connections 768", is this enough? Maybe you can try a
bigger limit.


Hello!

On Mon, Jul 18, 2011 at 11:46:41AM -0400, asdasd77 wrote:

#user nobody;
worker_processes 8;
worker_rlimit_nofile 20480;

error_log /var/log/nginx/error.log info;

#pid logs/nginx.pid;

events {
worker_connections 768;

You are using a really low number for worker_connections; with 8
workers you'll only be able to serve about 6k connections in
total. If you see nginx not responding to HTTP requests, you are
probably hitting this limit. Try looking at the error_log and
stub_status output to see if that's true.

use epoll;
}
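For reference, raising the limit is a one-line change in this block; the
number below is only an illustration (each worker can then hold up to that
many simultaneous connections, so 8 workers x 10240 is roughly 80k total):

```nginx
events {
    worker_connections  10240;   # per worker; 8 workers ~ 80k connections total
    use epoll;
}
```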

[…]

sendfile        on;

You rely on the OS to do the actual IO, and this may not be a good idea if
you are serving large files. Your OS will likely use something
like 16k read requests, and this will thrash your disks with IOPS
and seeks.

Try either AIO or at least normal reading with big buffers (and
without sendfile), i.e.

  sendfile off;
  output_buffers 2 512k;

or something like that.
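In context, applied to the existing mp4 location, this suggestion might look
like the following sketch (the directive values are just the ones suggested
above, not tuned numbers):

```nginx
# Sketch only: read large mp4 files through big userland buffers
# instead of sendfile's small kernel reads
location ~* \.(mp4)$ {
    sendfile        off;
    output_buffers  2 512k;    # two 512k buffers per connection
}
```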

[…]

our server has 1tb hdd and 12gb ram and 8core cpu and 1gbit line

Just 1 spindle isn't really good, but you should be able to get
something like 600 Mbit/s (raw disk speed on sequential
reading; test your disk to get more accurate numbers) with large
files and proper tuning, even if your working set is much bigger
than memory and effective caching isn't possible.
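To put a number on that raw disk speed, a rough sequential-read check can be
done with dd; the path and sizes below are assumptions for illustration. On
the real server, read a large existing file from the data disk with
iflag=direct so the page cache doesn't inflate the result.

```shell
# Write a 64 MiB test file, then read it back sequentially;
# dd prints an MB/s figure after each run.
dd if=/dev/zero of=/tmp/seqtest bs=1M count=64 conv=fdatasync
dd if=/tmp/seqtest of=/dev/null bs=1M
rm -f /tmp/seqtest
```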

Maxim D.

zls Wrote:

any system information like io stats?

btw, "worker_connections 768", is this enough?
maybe you can try a bigger limit

I don't know how I can provide that I/O stats information. What is the
command?
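(For the record, I/O stats can be gathered with something like the following;
iostat comes from the sysstat package and may need installing first, while
/proc/diskstats is always available:)

```shell
# Raw per-device counters behind the I/O stats
cat /proc/diskstats
# Nicer per-second view if sysstat is installed: utilization, await, IOPS
command -v iostat >/dev/null && iostat -x 1 2 || true
```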

We increased worker_connections to 10240, but nothing changed.

Maxim D. Wrote:

probably hitting this limit. Try looking at error_log and
stub_status output to see if it's true.

I looked at those files but couldn't see relevant errors, and we increased
worker_connections to 10240, but nothing changed.

sendfile off;
output_buffers 2 512k;

or something like that.

We applied these settings too, but nothing changed.



I am watching the server simultaneously with "top -c -d 1"; here is the
output:


[root@srv-46 ~]# top -c -d 1
top - 12:33:04 up 13 min, 1 user, load average: 13.82, 10.34, 5.30
Tasks: 158 total, 17 running, 137 sleeping, 0 stopped, 4 zombie
Cpu(s): 1.9%us, 45.7%sy, 0.0%ni, 11.4%id, 37.1%wa, 0.0%hi, 3.8%si, 0.0%st
Mem: 12299044k total, 8913512k used, 3385532k free, 33100k buffers
Swap: 6094840k total, 0k used, 6094840k free, 5757568k cached

 PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
3180 root  20 -5     0    0    0 R  5.6  0.0 0:53.27 [vmmemctl]
4478 nginx 25  0  103m  76m  800 R  4.9  0.6 0:24.45 nginx: worker process
4481 nginx 23  0  102m  75m  800 R  4.9  0.6 0:25.05 nginx: worker process
4483 nginx 25  0  111m  83m  808 R  4.2  0.7 0:25.92 nginx: worker process
4476 nginx 25  0  102m  75m  812 R  2.8  0.6 0:26.27 nginx: worker process
4480 nginx 25  0  102m  75m  800 R  2.8  0.6 0:26.22 nginx: worker process
4488 nginx 25  0  102m  75m  800 R  2.8  0.6 0:25.68 nginx: worker process
4485 nginx 25  0  101m  73m  800 R  2.1  0.6 0:25.19 nginx: worker process
 893 root  10 -5     0    0    0 D  1.4  0.0 0:01.23 [kjournald]
4477 nginx 25  0  106m  78m  808 R  1.4  0.7 0:24.86 nginx: worker process
  24 root  10 -5     0    0    0 S  0.7  0.0 0:00.04 [events/6]
4337 root  15  0 90168 3420 2660 R  0.7  0.0 0:00.28 sshd: root@pts/0
5657 root  15  0 12764 1168  832 R  0.7  0.0 0:00.29 top -c -d 1
   1 root  15  0 10372  696  580 S  0.0  0.0 0:00.46 init [3]
   2 root  RT -5     0    0    0 R  0.0  0.0 0:00.00 [migration/0]
   3 root  34 19     0    0    0 S  0.0  0.0 0:00.00 [ksoftirqd/0]
   4 root  RT -5     0    0    0 S  0.0  0.0 0:00.00 [migration/1]


Any other suggestions?

Actually I don't get it. The other server, with the same hardware (but
LiteSpeed), is doing its job, but this one doesn't? I thought nginx was
created especially for these situations. Why is setting it up so difficult,
and why doesn't the manual give enough information?


Hello!

On Tue, Jul 19, 2011 at 03:39:00AM -0400, asdasd77 wrote:

[…]

I am watching the server simultaneously with "top -c -d 1"; here is the
output:


[root@srv-46 ~]# top -c -d 1
top - 12:33:04 up 13 min, 1 user, load average: 13.82, 10.34, 5.30
Tasks: 158 total, 17 running, 137 sleeping, 0 stopped, 4 zombie
Cpu(s): 1.9%us, 45.7%sy, 0.0%ni, 11.4%id, 37.1%wa, 0.0%hi, 3.8%si, 0.0%st

System cpu usage looks really high…

Mem: 12299044k total, 8913512k used, 3385532k free, 33100k buffers
Swap: 6094840k total, 0k used, 6094840k free, 5757568k cached

 PID USER PR NI VIRT RES SHR S %CPU %MEM   TIME+ COMMAND
3180 root 20 -5    0   0   0 R  5.6  0.0 0:53.27 [vmmemctl]

… and it looks like you are running in a virtualized environment. This
may be a culprit (see e.g. [1]). Try taking virtualization out of
the picture.

[1] http://blog.boxedice.com/2011/06/15/mongodb-cpu-cores-and-vmmemctl/

Maxim D.

Maxim D. Wrote:

… and it looks like you are running in a virtualized environment. This
may be a culprit (see e.g. [1]). Try taking virtualization out of
the picture.

[1] http://blog.boxedice.com/2011/06/15/mongodb-cpu-cores-and-vmmemctl/

I read that page but I didn't understand anything. We don't use a database
on that server; it's not even installed.


I am asking the virtualization gurus: how can I solve this problem? Do I
need to install nginx again?


Hello!

On Tue, Jul 19, 2011 at 05:42:57AM -0400, asdasd77 wrote:

Maxim D. Wrote:

[…]

… and it looks like you are running in a virtualized environment.
This may be a culprit (see e.g. [1]). Try taking
virtualization out of the picture.

[1]
http://blog.boxedice.com/2011/06/15/mongodb-cpu-cores-and-vmmemctl/

I read that page but I didn't understand anything. We don't use a database
on that server; it's not even installed.

The link provided is an example of problems caused by
virtualization. Something similar may happen in your case as well
(at least from your top output it's clear that vmmemctl eats lots
of cpu).

Fighting virtualization problems may not be trivial, and at least
it's not something people here do on a daily basis. As already
suggested, you should try running nginx on real hardware. This
way it would be possible to narrow down the cause of the problem you
are seeing: it's either nginx itself (so we'll be able to track down
bottlenecks and suggest some tunings) or virtualization (so
you'll be able to ask the virtualization gurus what's going on, and
probably help us make nginx behave better in such
environments).

Maxim D.

Thanks everyone, but we will go back to LiteSpeed.


Hi!

There are many factors that can affect the performance of a VM.
Basically, a VM is like a black box: you cannot really measure its
performance and compare it to real hardware, as resources are
dynamically allocated as needed, and I/O is likely the weakest point of
a VM.

What is the physical number of CPUs in the virtual domain? How much
physical RAM is allocated to the VM guest? What are the versions of ESX
and vCenter?

First, I would suggest configuring the number of guest OS vCPUs (cores)
to be identical to the number of physical CPUs (cores) in the ESX
configuration. Then I would allocate a decent amount of RAM to the VM.

Make sure to use a fairly new kernel (2.6.38+) for the guest OS.

Also make sure that you use the appropriate drivers for the hard disk
controller and network card, e.g. PVSCSI and VMXNET3, in the guest kernel
configuration.
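A quick sanity check of both points from inside the guest; the interface
name eth0 is an assumption, and ethtool/lspci may need to be installed:

```shell
# Kernel version (the advice above asks for 2.6.38 or newer)
uname -r
# Driver bound to the NIC (vmxnet3 expected with paravirtual drivers)
command -v ethtool >/dev/null && ethtool -i eth0 2>/dev/null || true
# Storage controller and NIC models as the guest sees them
command -v lspci >/dev/null && lspci | grep -i -e scsi -e ethernet || true
```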

See if any of the tweaks above bring any benefit, and post your
results!

Andrejs


Igor S. Wrote:

Is your first litespeed server run under virtualization?
I don't know; I am not a server expert, I'm just a PHP coder.

I googled and tried to find a solution, because the hosting company didn't
solve the problem. Actually we wanted to use nginx for the
HttpAccessKeyModule, but it was eating CPU. Then we deactivated the module,
but the problem wasn't solved.


On Wed, Jul 20, 2011 at 04:22:49AM -0400, asdasd77 wrote:


I don't know; I am not a server expert, I'm just a PHP coder.

I googled and tried to find a solution, because the hosting company didn't
solve the problem. Actually we wanted to use nginx for the
HttpAccessKeyModule, but it was eating CPU. Then we deactivated the module,
but the problem wasn't solved.

Could you show top from the first server? It will show whether that server
is run in a virtualized environment.


Igor S.

On Wed, Jul 20, 2011 at 03:29:26AM -0400, asdasd77 wrote:

Thanks everyone, but we will go back to LiteSpeed.

Is your first LiteSpeed server run under virtualization?


Igor S.