I’m working on an image system that does rescaling and effects on
demand, rather than on upload. The uploads are stored as master files,
and resized images are generated, cached and returned by a controller,
using a request similar to this:
/images/21/150x100/filename.jpg (where 21 is the image id)
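To make the setup concrete, the geometry segment of such a URL might be parsed roughly like this. This is only an illustrative sketch, not my actual code; the method name and the four-digit cap on dimensions are invented:

```ruby
# Parse a geometry segment like "150x100" into [width, height].
# Rejects anything that isn't two plain decimal numbers joined by "x".
def parse_geometry(segment)
  m = segment.match(/\A(\d{1,4})x(\d{1,4})\z/)
  raise ArgumentError, "bad geometry: #{segment}" unless m
  [Integer(m[1]), Integer(m[2])]
end
```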
I also have various filters implemented, plus the concept of filter
sets (a named series of filters). For example, blog-thumbnails is the
name of a filter set that might apply an unsharp mask to the resized
images, tint the colours to match the design, etc.
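In spirit, a filter set is just an ordered list of filters looked up by name. The sketch below uses string concatenation as a stand-in for real image operations, and the set contents are invented for illustration:

```ruby
# A filter set maps a name to an ordered list of filters.
# The lambdas here are placeholders for real image transforms.
FILTER_SETS = {
  "blog-thumbnails" => [
    ->(img) { img + "+unsharp" },  # stand-in for an unsharp-mask step
    ->(img) { img + "+tint" }      # stand-in for a colour-tint step
  ]
}

# Apply every filter in the named set, in order; unknown sets are a no-op.
def apply_filter_set(image, set_name)
  FILTER_SETS.fetch(set_name, []).inject(image) { |img, f| f.call(img) }
end
```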
This works incredibly well. It lets us play around with effects and
image resizing at whim in views and stylesheets, through an external
API to Flash, and so on.
There is one catch though: this is begging for DoS attacks. Results
are cached, but processing images is CPU-intensive. Armed with nothing
more than a poor connection and a small script, I could easily cause a
server meltdown by sending loads of requests with unique image sizes.
I can think of a few solutions off the top of my head:
I could limit the allowed image sizes to a set defined somewhere (in
the config perhaps). Kind of defeats the purpose, though, and leads to
more configuration. Maybe a constraint for production mode, while
allowing any image size in development?
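The production/development split above could look something like this. The size list and the environment check are invented for the sake of the example:

```ruby
# Hypothetical whitelist of image sizes, e.g. loaded from app config.
ALLOWED_SIZES = ["150x100", "300x200", "640x480"]

# In development any size goes; in production only whitelisted sizes.
def size_allowed?(size, env)
  env == "development" || ALLOWED_SIZES.include?(size)
end
```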
I could make a random hash token and put it in a table along with the
image id, image size, filters etc., and use this in the request.
Drawbacks: Rails would have to generate the URL for me every time I
need a new image variation. And it destroys the pretty, user-friendly
URLs.
I could detect the attack somehow, and drop the connection or return
error codes instead of processing the image if there’s a possible siege
going on. This sounds hacky to me; I'd have to keep a record of the
requests for the last few minutes, and it might result in occasionally
broken images for some users.
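For what it's worth, the "keep a record of recent requests" part could be as simple as a per-client sliding window. This is a rough single-process sketch with invented limits; a real deployment would want the counters in shared storage, and middleware like Rack::Attack covers the same ground:

```ruby
# Per-client sliding-window throttle, in-memory only.
class Throttle
  def initialize(limit, window_seconds)
    @limit, @window = limit, window_seconds
    @hits = Hash.new { |h, k| h[k] = [] }
  end

  # Returns true if the request is allowed, false if over the limit.
  def allow?(client_ip, now = Time.now.to_f)
    recent = @hits[client_ip].select { |t| t > now - @window }
    @hits[client_ip] = recent
    return false if recent.size >= @limit
    recent << now
    true
  end
end
```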
I could check the Referer header, but that can be faked.
Any tips? Is there anything in Apache or lighttpd that might save me
a lot of work?