Nginx_slowfs_cache

Hi,

Today, after upgrading nginx from version 1.6.x to 1.7.x, I got a
segmentation fault. After a short investigation the culprit was found: the
nginx_slowfs_cache module by FRiCKLE.

Has anybody had the same issue? Is this module obsolete?

Cheers,
Vitaliy

On 19.04.2015 at 13:14, wishmaster [email protected] wrote:

Hi,

Today, after upgrading nginx from version 1.6.x to 1.7.x, I got a
segmentation fault. After a short investigation the culprit was found: the
nginx_slowfs_cache module by FRiCKLE.

Has anybody had the same issue? Is this module obsolete?

Can you describe your use-case for it?

And whether you saw a performance-boost from it, compared to other
alternatives?

I wouldn’t say it’s useless these days, but I view it as a bit “exotic”.

— Original message —
From: “Rainer D.” [email protected]
Date: 19 April 2015, 15:53:29

Can you describe your use-case for it?

And whether you saw a performance-boost from it, compared to other alternatives?

I wouldn’t say it’s useless these days, but I view it as a bit “exotic”.

Read this from the official website:

About

ngx_slowfs_cache is an nginx module which allows caching of static files
(served using the root directive). This enables one to create fast caches
for files stored on slow filesystems, for example:

  • storage: network disks, cache: local disks,
  • storage: 7.2K SATA drives, cache: 15K SAS drives in RAID0.

WARNING! There is no point in using this module when the cache is placed
on disk(s) of the same speed as the origin.

I use a RAM disk for this cache. Yes, it is fast enough.
Do you know of any alternatives?
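For reference, this is roughly how the module is wired up. A minimal sketch with hypothetical paths and zone name; the directives (slowfs_cache_path, slowfs_temp_path, slowfs_cache, slowfs_cache_key, slowfs_cache_valid) are the ones documented in the module’s README:

```nginx
http {
    # Metadata zone plus on-disk location for the cached copies
    # (here: a RAM disk mounted at /ramdisk -- hypothetical path).
    slowfs_cache_path /ramdisk/cache levels=1:2 keys_zone=fastcache:10m;
    slowfs_temp_path  /ramdisk/temp 1 2;

    server {
        location / {
            root               /slow/storage;   # files on the slow filesystem
            slowfs_cache       fastcache;       # enable caching for this location
            slowfs_cache_key   $uri;
            slowfs_cache_valid 1d;              # keep cached copies for one day
        }
    }
}
```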

WARNING! There is no point in using this module when the cache is placed
on disk(s) of the same speed as the origin.

I use a RAM disk for this cache. Yes, it is fast enough.
Do you know of any alternatives?

I’ve briefly toyed with it myself, at some point.

What is your “slow” filesystem?

At least in my experience, unless your most-used static files exceed your
available RAM in size, or are changing, they are effectively cached by the
OS anyway.

So storing them on a RAM disk does the same or a worse job than just
letting the OS store them and serve them from its file-cache memory pages.
Plus, the OS has the advantage of knowing which files are less frequently
used and can be purged.
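The page-cache effect is easy to observe. A quick sketch (hypothetical file path, size chosen arbitrarily):

```shell
# Create a 100 MB test file (hypothetical path).
dd if=/dev/zero of=/tmp/bigfile bs=1M count=100 2>/dev/null

# The first read may hit the disk; subsequent reads are served
# from the OS page cache and are typically much faster.
time cat /tmp/bigfile > /dev/null
time cat /tmp/bigfile > /dev/null
```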

On 19.04.2015 at 15:16, jb [email protected] wrote:

At least in my experience, unless your most-used static files exceed your available
RAM in size, or are changing, they are effectively cached by the OS anyway.

Normally, yes.
That’s why phk wrote Varnish, after he saw what Squid was (and still is)
doing…
But is that the case with NFS, too?
I thought there was some caching there as well, but I’m not sure.

So storing them on a RAM disk does the same or a worse job than just letting
the OS store them and serve them from its file-cache memory pages. Plus, the
OS has the advantage of knowing which files are less frequently used and can
be purged.

Yep, that’s why I was asking.

If his data set were very big (in the large multi-TB region) and he had
a couple of small SSDs to cache stuff, while at the same time the SSDs
were about the size of the most-requested files, it /could/ make sense.
But OTOH, you could also just install FreeBSD, use the SSDs as L2ARC,
and let the OS do the rest ;-)
Even the usefulness of L2ARC is often questioned by people familiar with
the matter…

OS caching is very hard to beat.

— Original message —
From: “Rainer D.” [email protected]
Date: 19 April 2015, 16:15:19

What is your “slow” filesystem?

SATA II single disk, UFS.

On 19.04.2015 at 15:24, wishmaster [email protected] wrote:

I’ve briefly toyed with it myself, at some point.

What is your “slow” filesystem?

SATA II single disk, UFS.

Just let the OS do its work.

https://openconnect.itp.netflix.com/software/index.html

AFAIK, almost all of the changes Netflix made to improve performance
for their use case are now back in the tree and available on stock
FreeBSD 10.1 with little or no tuning.
I assume the same is true for improvements made to nginx.

I’d upgrade to FreeBSD 10.1 and max out the RAM.

No need to go ZFS.
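In config terms, “letting the OS do its work” mostly means serving straight from disk and letting the kernel’s page cache do the caching. A minimal sketch with hypothetical paths:

```nginx
server {
    listen 80;
    root   /var/www;      # hypothetical docroot

    # Zero-copy file transmission; the kernel serves the file
    # out of its own page cache.
    sendfile   on;
    tcp_nopush on;        # send full TCP packets

    # Let clients cache static assets, too.
    expires 1h;
}
```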

Hi,

Today, after upgrading nginx from version 1.6.x to 1.7.x, I got a
segmentation fault. After a short investigation the culprit was found: the
nginx_slowfs_cache module by FRiCKLE.

Has anybody had the same issue? Is this module obsolete?

It’s not obsolete, but it’s not actively maintained either… I’ll
take a look later this week and fix it.

But as others have already suggested, don’t try to beat the OS with a cache in RAM.

The original use case for this module was a local cache for files served
from NFS storage.

Best regards,
Piotr S.
