Faking page requests

I’ve got a site built in RoR. I deliver the pages statically - I upload
the contents of the public folder to my site. The page creation is done
by caching, and all of that works very neatly.

I’d like to streamline the page creation. At the moment, I launch
WEBrick and use my browser to visit new pages, or pages with new
content, to create the static pages (by caching), which is not neat.

Conceptually, I can see a few ways of going about this:

  • Scrape the information out of Rails using irb, and write something
    for the browser that visits all the pages. Disadvantage - this is
    ugly and hard to automate.
  • The solution I found on the web - use wget. But that needs an extra
    program, and is very heavy duty.
  • The test routines contain methods to simulate web requests (‘get’).
    This seems more like it to me, as I could write all sorts of
    automation, and even a front end around it, but…
    Can the use of ‘get’ trigger page caching?
    Can ‘get’ operate outside of the test environment? How? (I’m not good
    at Ruby or Rails, so example code would help.)
    Is ‘get’ surrounded by any code that’s going to, say, refresh my
    development or production environments (uh oh)?
  • Avoid the above by calling methods directly. This is very tidy, but I
    need a way of coaxing Rails, after yielding a page under these special
    conditions, to throw execution back to my meta static-page generation
    program. (If it’s any help, all site pages are generated by just two
    methods, so writing special code into the templates, say, is feasible.)
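
For the last option, it may help that page caching ultimately just
writes the rendered body to a file under public/, so a meta-generator
could skip the request cycle and write the files itself. A minimal
sketch of that idea in plain Ruby - the cache_page name, path and
content here are my own assumptions for illustration, not Rails API:

```ruby
require 'fileutils'

# Sketch of what page caching does under the hood: write the rendered
# body to public/<path>.html. Path and body are made-up examples.
def cache_page(body, path, root = 'public')
  file = File.join(root, path)
  file += '.html' unless file =~ /\.\w+\z/
  FileUtils.mkdir_p(File.dirname(file))
  File.write(file, body)
  file
end

html = '<html><body>Hello</body></html>'
puts cache_page(html, '/articles/1')   # => public/articles/1.html
```

In a real generator the body would come from your two page-generating
methods rather than a literal string.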

Any progress on the last two, especially, would be welcome.
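
On the wget option, a plain-Ruby stand-in may be enough: Net::HTTP can
request each path from the running WEBrick so that caching fires, with
no external program. A self-contained sketch - the toy TCPServer below
only stands in for the dev server so the example runs on its own, and
the path, port and body are assumptions:

```ruby
require 'socket'
require 'net/http'

# Toy one-shot HTTP server standing in for WEBrick in this sketch.
server = TCPServer.new('127.0.0.1', 0)
port   = server.addr[1]

thread = Thread.new do
  client = server.accept
  while (line = client.gets) && line != "\r\n"; end  # drain request headers
  body = '<html>cached page</html>'
  client.write("HTTP/1.1 200 OK\r\nContent-Length: #{body.bytesize}\r\n\r\n#{body}")
  client.close
end

# The wget replacement: visit each path so the server caches the page.
results = %w(/articles/1).map do |path|
  res = Net::HTTP.get_response('127.0.0.1', path, port)
  puts "#{path} #{res.code}"   # prints "/articles/1 200"
  [path, res.code, res.body]
end

thread.join
server.close
```

Against the real app, drop the toy server, point Net::HTTP at the
WEBrick host and port, and list the site’s actual paths.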


Take a look at the spider test plugin for Rails. It will be useful in this