Nginx as reverse proxy with "mod secdownload" feature - possible?

Hi,

Currently I'm looking for an easy way to set up some kind of content cache.
nginx is one of my favourites for this, because it seems to be one of the most
powerful and performant proxies out there.

The idea:

cache01 located in location abc

cache10 located in location xyz

main01 located in base DC

main04 located in base DC

The cacheXX servers should fetch the static content (pictures, videos) from one
of the main servers, store it locally and deliver it to the users. This does not
seem to be a big deal so far. But we use mod_secdownload from lighttpd to protect
the content from being hotlinked or “guessed”. If we now do 1:1 caching, this
would work fine so far, but it would also require the cache to fetch one object
multiple times, since the URL changes. And this is something I would like to
avoid.

So my idea was to set up some “internal” URL on the main server without
mod_secdownload (access limited to the cache servers!), so that there is always
the same URL for one object. But then the cache has to do the protection of the
content instead of the main servers.

nginx has no feature like mod_secdownload so far, right? Is there anything I
could do to get nginx working as a proxy with something like mod_secdownload?
Maybe some Perl module or so? The important thing would just be to have every
picture/video downloaded only once, or at least only once in xx days…

Thanks and regards,
Sven


I haven't followed what the current status is with the TTL (time to live)
secure downloads (there is some blog post:
Nginx secure link module with TTL - Masterzen’s Blog),
but we use Module ngx_http_secure_link_module.

The secure link configuration is put on the caching servers (while of course
you can also duplicate it on the backend in case you want to support direct
requests as well), which then fetch the object (if it doesn't exist in the
cache (local tree)) from the backend via the normal URL (and do proxy_store
locally) - the URL normalising (stripping out the hash parts) allows storing
the same object just once.
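
For reference, with secure_link_secret the protected URI looks like
/prefix/<hash>/<link>, where the hash is (if I read the module docs right) the
MD5 of the link concatenated with the secret word - a tiny Perl sketch for
generating such a link (prefix, secret and file name are just placeholders):

#!/usr/bin/perl
# Sketch: build a secure_link_secret style URL (all names are placeholders).
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

my $secret = "randomkey";            # must match secure_link_secret in nginx
my $link   = "videos/clip001.mp4";   # path below the protected prefix

my $hash = md5_hex($link . $secret); # hash = md5(link + secret)
print "/dlpath/$hash/$link\n";       # e.g. /dlpath/<32 hex chars>/videos/clip001.mp4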

But to implement the download-once feature you could use PHP (or any other
dynamic backend) and do it via X-Accel-Redirect (
XSendfile | NGINX ) - i.e. transparently route all the file download requests
(through either try_files or a rewrite) to the backend, let PHP decide whether
the file is available or not, and have nginx just do the transfer.
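
A minimal sketch of that pattern, assuming a FastCGI (PHP) backend on
127.0.0.1:9000 and a /protected/ alias (all names are placeholders): the
backend replies with an "X-Accel-Redirect: /protected/<path>" header and nginx
then streams the file from the internal location.

location /download/ {
    # every download request is handed to the dynamic backend first
    include        fastcgi_params;
    fastcgi_pass   127.0.0.1:9000;
}

location /protected/ {
    # only reachable via X-Accel-Redirect from the backend
    internal;
    alias  /webroot/files/;
}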

rr

Hi Reinis,

thanks for your reply.

On 19.10.2010 17:10, Reinis R. wrote:

I haven't followed what the current status is with the TTL (time to live)
secure downloads (there is some blog post:
Nginx secure link module with TTL - Masterzen’s Blog),
but we use Module ngx_http_secure_link_module.

We definitely need the expiring URLs, so the TTL module points in the right
direction.

The secure link configuration is put on the caching servers (while of course
you can also duplicate it on the backend in case you want to support direct
requests as well), which then fetch the object (if it doesn't exist in the
cache (local tree)) from the backend via the normal URL (and do proxy_store
locally) - the URL normalising (stripping out the hash parts) allows storing
the same object just once.

We currently use lighttpd on the servers, which works OK so far. We just want
to add caches that we can put in front of the lighty setup to serve the content
from different places.

But to implement the download-once feature you could use PHP (or any other
dynamic backend) and do it via X-Accel-Redirect (
XSendfile | NGINX ) - i.e. transparently route all the file download requests
(through either try_files or a rewrite) to the backend, let PHP decide whether
the file is available or not, and have nginx just do the transfer.

This would not help (at least I think so) because we do not want to sync any
content to the caches. We just want them to fetch the stuff from the main
servers if they haven't stored it locally in their proxy cache, and deliver it
from the cache if they already have it.

So from all my readings, I think (and might be wrong!) the easiest and maybe
best way for now would be having a small Perl module doing the secdownload
stuff (just a few lines of code, so no big deal) and rewriting the request to a
normalized URL which can be found on the backend.

So, is such a setup possible? Like → request to nginx → Perl module does some
check and rewrite of the request → request passed to the proxy module →
content fetched from the local cache if available or from the backend server.

If yes, some pointers/samples would be nice.

Thanks and regards,
Sven

On Thu, Oct 21, 2010 at 7:32 PM, Reinis R. [email protected] wrote:

And don’t miss our ngx_srcache and ngx_lua modules! :wink:

http://github.com/agentzh/srcache-nginx-module
http://github.com/chaoslawful/lua-nginx-module

Cheers,
-agentzh

This would not help (at least I think so) because we do not want to sync any
content to the caches. We just want them to fetch the stuff from the main
servers if they haven't stored it locally in their proxy cache, and deliver it
from the cache if they already have it.

You don't need to “store” anything directly (like pushing files beforehand) to
the cache servers - nginx can store the files on demand in the same tree
structure as on the backend by using the “proxy_store on” (
Module ngx_http_proxy_module ) directive; that way it is quite easy to see what
is getting fetched and to purge the cache with simple filesystem tools like
‘find / rm’. Alternatively it can store them in its own cache tree, but then
you need to adjust proxy_cache_key so that it doesn't include the default
$request_uri (which would contain the dynamic hash and that way store a single
file multiple times (someone correct me if I'm wrong here)) but just the real
path (do something like the $secure_link rewrite); the file should then be
fetched from the cache each time rather than from the backend. The advantage of
this is that you can have a dynamic garbage collector (cache cleaner) by
adjusting the overall size and time to live, rather than having to do it
yourself…

To give some example - some pseudo config for the first approach:

upstream backend {
    server backendip:8080;
}

server {
    root        /webroot;
    error_page  404 = @store;

    location /dlpath/ {
        # expects URIs like /dlpath/<md5(link + secret)>/<link>;
        # $secure_link is set to <link> when the hash matches
        secure_link_secret   randomkey;
        if ($secure_link = "") {
            return 403;
        }
        # normalize the URI so one object maps to one file on disk
        rewrite  ^ /dlpath/$secure_link  break;
    }

    location @store {
        # fetch a missing file from the backend and store it locally
        internal;
        proxy_pass           http://backend;
        proxy_store          on;
    }
}
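
And a comparable sketch for the second approach (proxy_cache instead of
proxy_store, with the key adjusted so the dynamic hash never becomes part of
it; zone name, sizes and times are just placeholders, and the backend upstream
from above is reused):

# http{} level: cache tree with its own garbage collection
proxy_cache_path  /var/cache/nginx  keys_zone=objects:10m  max_size=10g  inactive=30d;

server {
    location /dlpath/ {
        secure_link_secret   randomkey;
        if ($secure_link = "") {
            return 403;
        }

        # pass the normalized URI to the backend and cache by it,
        # not by the default $request_uri (which still contains the hash)
        proxy_pass         http://backend/dlpath/$secure_link;
        proxy_cache        objects;
        proxy_cache_key    $secure_link;
        proxy_cache_valid  200  30d;
    }
}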

So from all my readings, I think (and might be wrong!) the easiest and maybe
best way for now would be having a small Perl module doing the secdownload
stuff (just a few lines of code, so no big deal) and rewriting the request to a
normalized URL which can be found on the backend.

Since I am not aware of any third-party modules which can keep track of the
download status, that's one of the solutions (imo the easy way).
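
For the Perl route, a rough, untested sketch with the embedded
ngx_http_perl_module, mimicking lighttpd's mod_secdownload URL layout
/dl/<md5(secret + path + hex-timestamp)>/<hex-timestamp>/<path> (the secret,
prefix and lifetime here are assumptions):

# http{} level; requires nginx built with ngx_http_perl_module
perl_set $secdl_path 'sub {
    use Digest::MD5 qw(md5_hex);
    my $r = shift;

    # expected layout: /dl/<32 hex md5>/<8 hex timestamp>/<path>
    my ($hash, $ts, $path) =
        $r->uri =~ m{^/dl/([0-9a-f]{32})/([0-9a-f]{8})(/.+)$};
    return "" unless defined $path;

    my $secret = "changeme";                    # shared with the URL generator
    return "" if hex($ts) + 3600 < time();      # link older than one hour
    return "" unless $hash eq md5_hex($secret . $path . $ts);

    return $path;                               # normalized URL on the backend
}';

server {
    location /dl/ {
        if ($secdl_path = "") {
            return 403;
        }
        # hand the normalized URI over to the caching / proxy location
        rewrite ^ $secdl_path last;
    }
}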

On the other hand, if you plan to exploit all of nginx's possibilities/features,
you could use the memc + echo modules:
http://wiki.nginx.org/NginxHttpMemcModule

In a way, memcached would hold a unique key (inserted by a third-party app or
some nginx subrequest) which contains the file path or a true/false flag, and
after a request is made the key would be deleted - that's just theory though
and requires some voodoo :slight_smile:
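
To make that a bit more concrete - a fragment (untested, names are
placeholders) of the memc side of such a setup; the glue that checks the token
before the download and removes it afterwards (e.g. via echo_location
subrequests or a small handler) is left out:

# the application stores a one-time token in memcached, e.g.
#   key = "dl:<token>", value = "/real/path/to/file"
location = /_token_get {
    internal;
    set $memc_cmd  get;
    set $memc_key  "dl:$arg_token";
    memc_pass      127.0.0.1:11211;
}

location = /_token_delete {
    internal;
    set $memc_cmd  delete;
    set $memc_key  "dl:$arg_token";
    memc_pass      127.0.0.1:11211;
}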

rr