Hey!
At my place, we have a pretty nice image API written in Python. The
generated images are then cached using proxy_cache (which is awesome)
and optionally purged by either a) calculating the hash and deleting the
file from the file system, or b) using the excellent (self-proclamation
intended) http://labs.frickle.com/nginx_cache_purge/
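For reference, option b) looks roughly like the module's documented
usage (zone name, upstream address and paths below are just
placeholders, not our actual setup):

    proxy_cache_path /tmp/cache keys_zone=tmpcache:10m;

    server {
        location / {
            proxy_pass       http://127.0.0.1:8000;
            proxy_cache      tmpcache;
            proxy_cache_key  $uri$is_args$args;
        }
        # purge endpoint provided by ngx_cache_purge; the purge key
        # must expand to the same string as proxy_cache_key
        location ~ /purge(/.*) {
            allow             127.0.0.1;
            deny              all;
            proxy_cache_purge tmpcache $1$is_args$args;
        }
    }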
So, onto the issue: our image API accepts a wide range of
customizations, filters and whatnot, so we chose to cache each unique
URI:

    proxy_cache_key mysecrethash$host$request_uri;
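For completeness, option a) boils down to recomputing the file name
nginx derives from that key. A minimal sketch, assuming levels=1:2 and
/var/cache/nginx as the cache root (both placeholders):

    import hashlib
    from pathlib import Path

    CACHE_ROOT = Path("/var/cache/nginx")  # assumed proxy_cache_path root
    SECRET = "mysecrethash"                # prefix from proxy_cache_key

    def cache_file_for(host, request_uri):
        # nginx names cache files after the md5 of the expanded key;
        # with levels=1:2 the level dirs come from the end of the hash
        key = "%s%s%s" % (SECRET, host, request_uri)
        digest = hashlib.md5(key.encode()).hexdigest()
        return CACHE_ROOT / digest[-1] / digest[-3:-1] / digest

    # e.g. cache_file_for("example.com", "/images/abc.jpg?zoom=3").unlink()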
This creates a dilemma when all representations of an image need to be
removed (e.g. a user wants to delete the image): we now have an unknown
number of cached representations of this image.
My first approach was to patch proxy_cache even further by storing the
cache key and hash in a table, but after speaking a bit with mdounin on
IRC, I prefer the approach of storing to disk using a plain cache key
instead of a calculated one. This would allow me to do something like
"rm -rf a/b/c/abc.jpg*" to remove all representations (e.g.
abc.jpg?zoom=3 and abc.jpg?bw=true) [just an example, I know nginx uses
inverted catalogue naming].
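To illustrate why the plain key is attractive, purging would then
reduce to a glob over the key prefix. A sketch, hypothetical since it
assumes a patched nginx that stores files under the plain key rather
than its md5 (cache root is a placeholder):

    from pathlib import Path

    CACHE_ROOT = Path("/var/cache/nginx")  # placeholder cache root

    def purge_all_representations(relpath):
        # hypothetical: only works if cache files are named by the plain
        # key, so "abc.jpg*" catches abc.jpg?zoom=3, abc.jpg?bw=true, ...
        for f in CACHE_ROOT.glob(relpath + "*"):
            f.unlink()

    # purge_all_representations("a/b/c/abc.jpg")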
So, what I’m asking is whether there’s a better idea than using a plain
proxy_cache_key for cache creation to solve my issue. I should note
that scanning access logs to find “used” filenames isn’t an option for
me.
Thanks for reading,
Johan Bergström