Just wondering why there isn't a standard method of storing the
gzipped file in a cache directory in nginx. It seems to me that would
have been developed before the pre-compression module.
In Lighty, for example, there is a /var/tmp/lighttpd/cache/compress/ type
folder, and the server maintains its own cache there, as opposed to it
being up to the user to manually gzip their files in place (the
pre-compression module just checks whether a {$file}.gz exists).
1. pre-compression - available in 0.6.24
2. on-the-fly compression - standard
3. standard server-maintained one-time compression - ??
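For reference, the first two modes map onto real directives, while the third has no equivalent. A minimal config sketch (gzip_static comes from ngx_http_gzip_static_module, which must be compiled in; the gzip_types list here is just an example):

```nginx
# 1. pre-compression: serve an existing file.gz when the client accepts gzip
gzip_static on;

# 2. on-the-fly compression for responses that were not pre-compressed
gzip on;
gzip_types text/css application/x-javascript;

# 3. a server-maintained compression cache has no directive in nginx
```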
There could be a good reason for this, and I can see how #1 is pretty
neat (you control what is cached, you don't have N copies of the same file
gzipped in each webserver's temp cache dir, etc.), but I am surprised #3 is not offered.
I know #2 seems to meet most people's needs and, from what I've read,
has minimal overhead. I'm wondering why #3 was skipped altogether and #1
was done instead.
On Sun, Apr 13, 2008 at 12:01:31AM -0700, mike wrote:
As to the overhead of #2, I've tried to minimize it, but on my
production servers gzipping still takes about 30% of nginx CPU time
according to google-perftools.
I'm developing cache infrastructure in nginx. The first client of it
will be the proxy module, but eventually I will add cache capabilities
to other modules, including the gzip filter.
While it is inconvenient (and possibly error-prone if you have to
patch your production servers), pre-compressing your content gives you
two important things:
a. the best compression ratio - 7za -tgzip -mx=9 -mpass=15 gives the
best possible compression for js and css files
b. a Content-Length header, which the gzip module does not give you
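The precompression step itself can be sketched as follows (using plain gzip -9 as a stand-in where 7za is not installed, which compresses slightly worse than the 7za invocation above; the sample filename is just for illustration):

```shell
# Create a sample css file for illustration.
printf 'body { color: #333; }\n' > style.css

# Pre-compress next to the original; gzip_static will pick up style.css.gz.
gzip -9 -c style.css > style.css.gz

# Give both copies the same mtime so their timestamps match.
touch -r style.css style.css.gz
```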
gzip_static is what controls the precompression checks, right?
The wiki says this:
"You should ensure that the timestamps of the compressed and
uncompressed files match."
What is the side effect if they do not? Will it not work unless they
both match? It says "should", not "required".
Curious as to why that note is there. Thanks - so far I am enjoying
using nginx! I will soon be deploying it on some of a Fortune 100
company's webservers!
If you have to deploy a patch to a css or js file on your production
server and you forget to update the .gz version, browsers with
different Accept-Encoding: headers will see different results.
I have noticed that nginx uses the last-modified timestamp of
the .gz file if it delivers that file; this may or may not be a
problem in your setup.
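One way to guard against that stale-.gz failure mode is to regenerate the compressed copies as part of the deploy step. A minimal sketch (the sample file is hypothetical, and gzip -9 stands in for 7za):

```shell
# Sample asset standing in for a freshly patched file.
printf 'console.log("hi");\n' > app.js

# Regenerate any .gz companion that is missing or older than its source,
# then sync timestamps so nginx serves a consistent Last-Modified.
for f in *.css *.js; do
  [ -e "$f" ] || continue            # skip unmatched glob patterns
  if [ ! -e "$f.gz" ] || [ "$f" -nt "$f.gz" ]; then
    gzip -9 -c "$f" > "$f.gz"
    touch -r "$f" "$f.gz"
  fi
done
```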