Disabling sessions for web crawlers

I’m upgrading our Rails app from 2.1 to 2.3.11. Yes, this is
difficult. :)

Our application controller contains the following line:

session :off, :if => Proc.new {|req| req.user_agent =~ BOT_REGEX}

The purpose of this line is to prevent the creation of sessions for
the Googlebot and other crawlers. Since the majority of our traffic
comes from crawlers, this is a significant performance savings for us.

Now, with Rails 2.3, sessions are lazy-loaded, and I’m getting the
“Disabling sessions for a single controller has been deprecated”
warning.

It appears that the way to avoid creating sessions is now to simply
never access them. However, our application references the session all
over the place, and it seems easiest to turn them off completely up
front, guaranteeing that we won’t accidentally create one.

Is such a thing still possible? Can I disable sessions completely for
a request, such that a lazy load cannot occur?
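
For instance, would an untested override like this in our
ApplicationController be enough to guarantee that the lazy load never
fires for a bot request? (BOT_REGEX here is the same constant we already
use above.)

class ApplicationController < ActionController::Base
  # Rough, untested idea: hand crawlers a throwaway hash so nothing in
  # the app ever touches the real, lazily-loaded session.
  def session
    if request.user_agent =~ BOT_REGEX
      @bot_session ||= {}
    else
      super
    end
  end
end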

Thanks,
Robin

On Tue, May 17, 2011 at 2:32 PM, [email protected] <[email protected]> wrote:

> Can I disable sessions completely for
> a request, such that a lazy load cannot occur?

That’s interesting, and I don’t know the answer for sure. One idea, which
I’m sure might entail some work if nothing else, would be to wrap your
access to the session and run your check before reading or assigning
anything. Or you could reach into the Rails internals and override the
code that handles assigning the session, and have it run such a check
first. Who knows how difficult that might be.
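
As a rough, untested sketch of the wrapper idea (reusing your existing
BOT_REGEX constant and a hypothetical safe_session helper that the rest
of the app would call instead of session):

def bot_request?
  request.user_agent =~ BOT_REGEX
end

def safe_session
  # Hand crawlers a throwaway hash so the real session is never lazily
  # loaded; everyone else gets the normal session.
  bot_request? ? (@bot_session ||= {}) : session
end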

Another idea would be to add an after filter to your application
controller and, if you find that the request came from a bot, clear the
session there. That of course won’t help if the cost is in creating the
session, but it would avoid the need to keep it around.
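
Something like this, roughly (untested, again reusing your BOT_REGEX):

after_filter :discard_bot_session

private

def discard_bot_session
  # Throw away whatever was written to the session on a crawler request.
  # This doesn't avoid building the session, it just keeps it from
  # sticking around.
  reset_session if request.user_agent =~ BOT_REGEX
end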

Also, one consideration: if your session store is such a drain on
resources, maybe going forward you could work toward storing less data in
it and thereby reduce the ongoing cost.
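
For example, with hypothetical names, keeping an id in the session
instead of a whole record:

# Instead of stashing a full object...
session[:current_user] = @user

# ...store just the id and look the record up when you need it.
session[:user_id] = @user.id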