Configuration issues with 100-150MB file downloads causing high CPU usage

Hello, I’m hoping someone can assist with my dilemma; I’ve been searching
for a fix for a few weeks with no success. I run Nginx in proxy
pass-through mode to Apache (I hope I got the terminology right).
Basically, we’ve been seeing issues with our media server: once we get
enough requests to push bandwidth above 100 Mbit/s, we start seeing
drops on our Cacti graphs. When I can get online at the time, I notice
that Nginx is using a ton of CPU, slowing the server down considerably.
I’m worried that my configuration has some issues, and I would really
appreciate help from the community. Details on the server: brand-new
Dell R515 (6 cores, 16 GB RAM, 2 x 15K SAS drives in RAID 0 for the OS,
12 x 2 TB HDDs in RAID 6 for the media). Average file size is 120 MB,
and files don’t deviate more than +/-5% from that size, so it’s quite
consistent.

Here is the config we use. We have a custom-built CDN that also uses
nginx, and it can serve the same files at over 120 Mbit/s with no
issues, so I’m sure it’s something I tweaked here that is wrong.

user nobody;
worker_processes 10;
error_log /var/log/nginx/error.log info;
worker_rlimit_nofile 20480;

events {
    worker_connections 5120; # increase for busier servers
    use epoll; # you should use epoll here for Linux kernels 2.6.x
}

http {
    server_name_in_redirect off;
    server_names_hash_max_size 10240;
    server_names_hash_bucket_size 1024;
    include mime.types;
    default_type application/octet-stream;
    server_tokens off;
    disable_symlinks if_not_owner;

    sendfile off;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 5;

    gzip on;
    gzip_vary on;
    gzip_disable "MSIE [1-6].";
    gzip_proxied any;
    gzip_http_version 1.1;
    gzip_min_length 1000;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_types text/plain text/xml text/css application/x-javascript
               application/xml image/png image/x-icon image/gif image/jpeg
               application/xml+rss text/javascript application/atom+xml;

    ignore_invalid_headers on;
    client_header_timeout 3m;
    client_body_timeout 3m;
    send_timeout 3m;
    reset_timedout_connection on;
    connection_pool_size 512;
    client_header_buffer_size 256k;
    large_client_header_buffers 4 256k;
    client_max_body_size 200M;
    client_body_buffer_size 128k;
    request_pool_size 64k;
    output_buffers 4 64k;
    postpone_output 1460;
    proxy_temp_path /nginx_temp/;
    client_body_in_file_only on;

    log_format bytes_log "$msec $bytes_sent .";

    include "/etc/nginx/vhosts/*";
}
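For context, each file under /etc/nginx/vhosts/ is a plain proxy
pass-through server block roughly along these lines (the hostname and
Apache backend address below are placeholders, not our real values):

```nginx
server {
    listen 80;
    server_name media.example.com;  # placeholder hostname

    location / {
        # hand every request straight to the Apache backend (placeholder address)
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```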

Thank you again for any help. I’m including graphs from the last 24
hours; you can see the system hiccup when the number of concurrent
downloads reaches a certain point.

Brad R.
Systems Engineer
FTW Entertainment LLC