On Fri, Dec 5, 2008 at 4:47 AM, Matt W. [email protected] wrote:
> I’ll have to give autospec another shot then I guess. I have given up on it
> as it takes over 7 minutes to ‘boot up’ with our test suite, and I find it
> misses too many things it should have run, such that I have to hit CTRL-C
> and do a complete (7 minute) re-run far more often than I’d like. I also
> don’t trust it before a check-in - it seems to miss things that should have
> been re-run.
It certainly can take a while to boot up, but that’s necessary: it
really has to run the whole suite the first time to be sure everything
is passing. It might be nice to have an option to skip that if you’re
sure everything passes when you start.
You shouldn’t need to restart autospec with Ctrl-C very often; at
least, I almost never do. I find that the work I’m doing very rarely
regresses other scenarios, and it’s fine not to notice a regression
until the current feature is finished and the whole suite runs again.
Typically, in my workflow, I have a branch which adds a new feature or
scenario. In one commit, I fill out a scenario and the step
definitions so they’re no longer pending. Then I commit spec examples
and the code that makes them pass, in logical chunks, until the
scenario passes. Once it passes, the whole suite runs again
automatically, and I see if I’ve broken anything. If I have, I fix it
then. When I get the “all green”, I can fill out a new scenario and
start the cycle again.
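A rough sketch of that branch-per-scenario flow as shell commands (the
repository, branch, and file names here are all invented for
illustration; your own feature files, specs, and test runner stand in
for the placeholder files):

```shell
set -e
# Throwaway demo repository; in practice this is your project checkout.
git init -q -b master micro-demo && cd micro-demo
git config user.name "Demo" && git config user.email "[email protected]"
git commit -q --allow-empty -m "Initial commit"

git checkout -qb login-scenario          # one branch per new scenario

# Commit 1: the scenario and step definitions, no longer pending.
echo "Given a registered user" > login.feature
git add login.feature
git commit -qm "Fill out login scenario and step definitions"

# Commits 2..n: spec examples plus the code that makes them pass.
echo "def authenticate; end" > user.rb
git add user.rb
git commit -qm "Spec and implement User#authenticate"

# Scenario green, whole suite green: merge and start the next one.
git checkout -q master
git merge -q login-scenario
```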
I think of these as “micro-iterations”, since I reevaluate the
direction I’m going in between these iterations. A bunch of these
micro-iterations together form an “iteration” in the Agile/XP sense,
where I’ve built something I can show to the client/customer/boss
again for feedback. At this point I can merge my branch into master
or another longer-running branch.
Because of this workflow, I’m ok with committing code with broken
features, and even broken specs if they’re not related to what I’ve
been doing. At the end of a micro-iteration, though, everything needs
to pass.
> I guess this may more be due to our codebase being a bit badly organised so
> that the conventions autospec relies on aren’t always adhered to…
Yeah, that can be a killer. If you have your own conventions, though
(and you should), you can modify the mappings in your .autotest file
to match. See the Autotest#add_mapping rdoc for more info.
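For example, a `.autotest` file in the project root might look
something like this (the `lib/myapp` and `spec/myapp` paths are purely
hypothetical; adjust the regexp to wherever your code and specs
actually live):

```ruby
# ./.autotest -- assumes code lives in lib/myapp and specs in spec/myapp
Autotest.add_hook :initialize do |at|
  # When lib/myapp/foo/bar.rb changes, re-run spec/myapp/foo/bar_spec.rb.
  at.add_mapping(%r{^lib/myapp/(.*)\.rb$}) do |_filename, matches|
    ["spec/myapp/#{matches[1]}_spec.rb"]
  end
end
```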
Peter