Memcached problems

Hello folks again,

I'm trying to configure nginx to deliver swf files etc. and wanted to cache
them in memcached, but so far I haven't had much success with that.
What I already did:

included/excluded parts of the config (the flv part, the swf part & the php
part) and tested them one by one. No change… but if I try to open a flv file
in the browser I can see that nginx talks to memcached, and this error
message appears in the nginx error log (without the memcached part in the
config I get the file as a download, as expected):

2009/08/31 18:34:40 [info] 23808#0: *1 key: "/files/_somewhere/xxx/xxxx.flv"
was not found by memcached while reading response header from upstream,
client: 62.116.129.3, server: media,
request: "GET /files/_somewhere/xxx/xxxx.flv HTTP/1.1",
upstream: "memcached://127.0.0.1:11211", host: "ftp.xxx.org:81"

It seems that nginx looks for the file in memcached, but since the file has
never been read from disk and put into memcached, nginx can't find it there
and throws a 404.
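
For reference, the usual pattern here is to point error_page at a fallback
location, so that a memcached miss falls through to something that can still
produce the file. A rough sketch (untested; the extensions, port and path
just mirror the config below, and @disk is only an illustrative name):

    location ~ \.(flv|swf)$ {
        set $memcached_key $uri;
        memcached_pass     127.0.0.1:11211;
        # key not found in memcached (404) or memcached unreachable -> fall back
        error_page 404 502 504 = @disk;
    }

    location @disk {
        # serve the same URI straight from disk instead
        root /home/www/htdocs;
    }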

Here's my nginx.conf:

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$http_host $remote_addr - $remote_user [$time_local] $request '
                      '"$status" $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    tcp_nopush      on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       81;
        server_name  media;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   /home/www/htdocs;
            index  index.html index.htm;
        }

        error_page  404     http://failover.somewhere.de/404.html;

        location ~ \.flv$ {
            memcached_pass     127.0.0.1:11211;
            set $memcached_key $uri;
            root               /home/www/htdocs;
        }

        location ~ \.swf$ {
            set $memcached_key $uri;
            memcached_pass     127.0.0.1:11211;
            root               /home/www/htdocs;
        }

        location ~ /\.ht* {
            deny  all;
        }
    }
}

On Mon, Aug 31, 2009 at 08:05:07PM +0200, Juergen G. wrote:

[…] without the memcached part in the config I get the file as a download, as
expected. […] it seems that nginx looks for the file in memcached, but since
the file has never been read from disk and put into memcached, nginx can't
find it there and throws a 404.
Put your swf files on the filesystem and forget about memcached: in a modern
OS disk == memory if the content volume is comparable to physical memory (and
it seems to be comparable if you want to put the files in memcached).

Normally I would agree, but in this special case the system has trouble
handling the high I/O load due to a really large number of files (about 8
million… I know… crazy, not my content).

As a workaround for the disk I/O load, I just wanted to get nginx to load the
swf and flv files into memcached, which, to my understanding, should be
possible.

I think I found one of my problems: the missing fallback server for when the
content isn't already in memcached.

But now I get 502 Bad Gateway error messages, and strange error log entries
like this:

2009/08/31 23:21:17 30612#0: 8192 worker_connections is not enough while
accepting new connection on 0.0.0.0:81

? There are no connections yet… the box isn't even in production.
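
For reference, worker_connections is set in the events block of nginx.conf; a
minimal sketch, with the number only an example:

    events {
        worker_connections  8192;
    }

Whether the limit itself is really the problem here is another question.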

But anyway, I trimmed my config down to a minimum for testing:

server {
    listen       81;
    server_name  _;

    #access_log  logs/host.access.log  main;

    location / {
        set $memcached_key  $uri;
        memcached_pass      127.0.0.1:11211;
        error_page 404 502  = /fallback;

        default_type text/html;

        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    location /fallback {
        proxy_pass http://127.0.0.1:81;
    }

    location ~ \.php$ {
        set $memcached_key  $uri;
        memcached_pass      127.0.0.1:11211;

        root           /home/www/htdocs;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /home/www/htdocs$fastcgi_script_name;
        include        fastcgi_params;
    }
}

PHP works well, but it doesn't seem to get stored in memcached; why? And, for
example, if I try to download a zip file I get this in the error log:

2009/08/31 23:30:16 30815#0: *12285 recv() failed (104: Connection reset by
peer) while reading response header from upstream, client: 127.0.0.1,
server: _, request: "GET /koch/_banner/paldauer/paldauer.zip HTTP/1.0",
upstream: "http://127.0.0.1:81/yes/_this/doesnt/work.zip",
host: "127.0.0.1:81",
referrer: "http://ftp.nastyhost.de:81/koch/_banner/paldauer/index.php"

Please, Igor, if you have any idea how I can get nginx to use memcached to
store content…

Greetings & thanks

Juergen

Posted at Nginx Forum:

"…swf and flv files into memcached, which, to my understanding, should be
possible."

Are you sure?

I think so, yes.

Posted at Nginx Forum:

Igor S. wrote:


… and how much is the whole size of the flv/swf content?

I cannot say about Linux, but on FreeBSD this can be done by increasing
"sysctl kern.maxvnodes", which is 100,000 vnodes by default. The kernel
stores file pages by binding them to a vnode.

On Linux you want to increase either

/proc/sys/fs/inode-max (if you have it - I don't on my Ubuntu 9.04 x64 - not
sure why)

or

/proc/sys/fs/file-max

You can view these by, e.g.:

cat /proc/sys/fs/inode-max

or change them by e.g.:

echo 1000000 > /proc/sys/fs/file-max

If you use the inode-max option, a number 3-4 times as big as file-max would
be normal.

AFAIK this will reset upon reboot, so you’d probably want to add it to a
startup script.

Marcus.

On Mon, Aug 31, 2009 at 05:34:07PM -0400, JG wrote:


PHP works well, but it doesn't seem to get stored in memcached; why? And, for example, if I try to download a zip file I get this in the error log:

2009/08/31 23:30:16 30815#0: *12285 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: _, request: "GET /koch/_banner/paldauer/paldauer.zip HTTP/1.0", upstream: "http://127.0.0.1:81/yes/_this/doesnt/work.zip", host: "127.0.0.1:81", referrer: "http://ftp.nastyhost.de:81/koch/_banner/paldauer/index.php"

Please, Igor, if you have any idea how I can get nginx to use memcached to store content…

nginx cannot store data in memcached, it can only get data from it.
As to "recv() failed (104: Connection reset by peer)", this is some
connection problem between nginx and 127.0.0.1:81.

However, as I already said, memcached will not solve your problem; you will
just waste CPU time and memory. Instead you should tune the kernel to cache
as much as possible. How much physical memory does the host have, and how big
is the whole flv/swf content?

I cannot say about Linux, but on FreeBSD this can be done by increasing
"sysctl kern.maxvnodes", which is 100,000 vnodes by default. The kernel
stores file pages by binding them to a vnode.

On Tue, Sep 01, 2009 at 12:56:19PM +0300, Marcus C. wrote:

cat /proc/sys/fs/inode-max

or change them by e.g.:

echo 1000000 > /proc/sys/fs/file-max

If you use the inode-max option, a number 3-4 times as big as file-max would
be normal.

As I understand it, file-max is the number of files simultaneously open by
applications, including sockets, etc. As to inode-max, it seems it has been
removed in 2.4.

Igor S. wrote:


As I understand it, file-max is the number of files simultaneously open by
applications, including sockets, etc.

Yes (as I understand it also).

As to inode-max, it seems it has been removed in 2.4.

I see.

Oh @#!, forget it… my fault… I set the HTTP port to 81 and was calling port
80 all the time in Firefox. That's some way to spend time, too.

Posted at Nginx Forum:

On Wed, Sep 02, 2009 at 05:56:04PM -0400, JG wrote:

Oh @#!, forget it… my fault… I set the HTTP port to 81 and was calling port 80 all the time in Firefox. That's some way to spend time, too.

I predict that you will see more CPU load and probably more IO load.
The reasons:

  1. When nginx serves a request from memcached, three copy operations are
     done: from memcached to the kernel, from the kernel to nginx, and from
     nginx to the kernel. When a file is sent from the VM cache it is a
     zero-copy operation: sendfile() lets the network card fetch the data
     from memory using DMA (see the config sketch below).

  2. When a file is not in memcached, nginx passes the request to a backend.
     The backend reads the file and stores it in memcached with three copy
     operations. If your current IO load is already high, it means that
     physical memory is not enough to hold the hot content; memcached will
     further reduce the memory available for the VM cache, resulting in even
     more IO load.
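
A minimal sketch of the filesystem-only setup described above, with
open_file_cache added on top to cut down on open()/stat() calls for the many
small files (the numbers and paths are only placeholders, not tuned values);
the file data itself is served from the OS page cache via sendfile():

    location ~ \.(flv|swf)$ {
        root      /home/www/htdocs;
        sendfile  on;

        # caches open file descriptors and stat() results, not the file contents
        open_file_cache          max=10000 inactive=60s;
        open_file_cache_valid    120s;
        open_file_cache_min_uses 1;
    }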

Hello again,

OK, I'm now feeding memcached with this Perl script…

--------- code ----------

#!/usr/bin/perl

# Usual suspects
use strict;
use warnings;

# Additional modules needed
use File::Find;
use Cache::Memcached;

# Create memcached connection
my $cache = new Cache::Memcached {
    'servers'            => [ 'localhost:11211' ],
    'compress_threshold' => 10_000,
} or die("Failed to connect to memcache server");

my @content;

# Define root location to search, alter as appropriate
#my $dir = "/home/www/htdocs/";
my $dir = $ARGV[0] or die("Usage: $0 <directory>\n");
#$dir .= "/$ARGV[0]" if $ARGV[0];
#print "$dir\n";

# Go find those files
find(\&Wanted, $dir);

# Process the found files
foreach my $file (@content) {
    open(my $source, '<', $file) or next;
    binmode($source);                               # image files are binary
    read($source, my $contents, (stat($file))[7]);
    close($source);

    # strip $dir so the key is the path relative to the docroot;
    # this has to match whatever nginx uses as $memcached_key
    $file =~ s/^\Q$dir\E//;

    if ($cache->get($file)) {
        my $compare = $cache->get($file);
        if ($compare ne $contents) {
            print "Cached file $file is not up to date, updating cache\n";
            $cache->delete($file);
            $cache->set($file, $contents);
        }
    } else {
        $cache->set($file, $contents);
    }
}

sub Wanted {
    # only operate on image files
    /\.jpg$/ or /\.gif$/ or /\.png$/ or return;
    push(@content, $File::Find::name);
}

-------- /code --------------


…to push jpg, gif, etc. into memcached, which works as far as I can see, but
nginx still doesn't try to fetch anything from memcached.

Here's the relevant part of my config:

--------- config --------------

    location / {
        root   /home/www/htdocs/;
        index  index.html index.htm;
    }

    location ~* \.(jpg|png|gif)$ {
        expires            max;
        set $memcached_key $uri;
        memcached_pass     127.0.0.1:11211;
        error_page 404 502 504 @fetch;
    }

    location ~ \.php$ {
        root               /home/www/htdocs;
        set $memcached_key $uri;
        memcached_pass     127.0.0.1:11211;
        proxy_intercept_errors  on;
        error_page 404 502 504 @fetch;

        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /home/www/htdocs$fastcgi_script_name;
        include        fastcgi_params;
    }

    location @fetch {
        internal;
        expires max;

        proxy_pass http://backend;
        break;
    }

-------- /code --------------

To my understanding, nginx should first check whether the file is in
memcached, and if it gets a 404 there it should go to @fetch, which points at
a second server definition that looks like this:

------- config ------------

upstream backend {
    server 127.0.0.1:82;
}

server {
    listen      127.0.0.1:82;
    server_name localhost;

    location / {
        root   /home/www/htdocs/;
        index  index.html index.htm;
    }

    location ~ \.php$ {
        root           /home/www/htdocs;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /home/www/htdocs$fastcgi_script_name;
        include        fastcgi_params;
    }
}

Posted at Nginx Forum: