Cucumber and autotest - running all features?

Given I have cucumber (0.1.10)
And I have ZenTest (3.11.0)
And I have features in sub-directories under a directory called features
And each feature sub-directory has a sub-directory called step_definitions
And I have .feature files in the features sub-directories
And I have .rb files in the step_definitions sub-directories
And I have a cucumber.yml file that contains <<CUCUMBER
autotest: features -r features --format pretty
CUCUMBER
And all the steps pass

When I run autotest
And I touch one step_definitions file in one sub-directory

Then ONLY that feature branch should be rerun

script/cucumber --profile autotest --format autotest --out /tmp/autotest-cucumber.8574.0

7 steps passed
1 step failed

I understood that the purpose of autotest was that it ONLY ran a test
for the changed file. However, with this setup, if I touch any file
anywhere in the project then the full suite of feature tests apparently
gets run. For UnitTest, autotest only runs the tests associated with the modified files.

Is this intended behaviour?

James B. wrote:

I understood that the purpose of autotest was that it ONLY ran a test
for the changed file. However, with this setup, if I touch any file
anywhere in the project then the full suite of feature tests apparently
gets run. For UnitTest, autotest only runs the tests associated with the modified files.

Except if I update a step_definitions file. Then only scenarios that use that step_definitions file are run. However, if I change anything else, including the .feature file that contains the scenario, then the COMPLETE feature suite is run.

On Thu, Nov 27, 2008 at 4:12 PM, James B. [email protected] wrote:

Except if I update a step_definitions file. Then only scenarios that use that step_definitions file are run. However, if I change anything else, including the .feature file that contains the scenario, then the COMPLETE feature suite is run.

The Autotest support in Cucumber doesn’t currently associate any files
with specific scenarios. It only keeps track of failed scenarios. If
any are failing, changing any file will run those scenarios. If
everything has been passing, changing any file will run the whole
feature suite.

Features should be orthogonal to your classes, so there’s no good way
to associate a scenario with the classes it tests. They can’t be
associated with steps files because they can use steps from any steps
file. The only file they can be associated with in theory is the
feature file itself. That might be worthwhile, but since scenarios
are run by name, the Autotest plugin would have to find the names of
the scenarios in the file, which is non-trivial.
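
Just to illustrate the extra work: something like the following (an untested sketch, regexping the plain-text .feature file rather than using a real parser, with a made-up example path) is what the plugin would need in order to map a feature file to the scenario names inside it:

    # Hypothetical sketch, not part of Cucumber's Autotest support: pull the
    # scenario names out of a .feature file so they could be rerun by name.
    def scenario_names(feature_file)
      File.readlines(feature_file).map do |line|
        # Match "Scenario:" lines and capture the scenario name.
        line[/^\s*Scenario:\s*(.+?)\s*$/, 1]
      end.compact
    end

    # scenario_names("features/accounts/login.feature")
    #   #=> ["Successful login", "Wrong password"]   (made-up output)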

Are you saying that modifying a steps file runs the scenarios which
use those steps? I didn’t write it to do that. If it does, that’s
quite magical! :-)

Peter

On Mon, Dec 1, 2008 at 2:50 AM, Matt W. [email protected] wrote:

I’ve been thinking about a more sophisticated mechanism for this, using code
coverage. If autotest / rspactor was able to record and remember the LOC
covered by each scenario / example, it would be possible to do more focussed
regression testing when a source file was changed.

It’s a clever thought, but you don’t know which code a scenario will come to depend on in the future. You’d have to manually restart autotest and have it recalculate all of the mappings.

Peter

On 1 Dec 2008, at 02:45, Peter J. wrote:

The Autotest support in Cucumber doesn’t currently associate any files
with specific scenarios. It only keeps track of failed scenarios. If
any are failing, changing any file will run those scenarios. If
everything has been passing, changing any file will run the whole
feature suite.

Features should be orthogonal to your classes, so there’s no good way
to associate a scenario with the classes it tests.

I’ve been thinking about a more sophisticated mechanism for this,
using code coverage. If autotest / rspactor was able to record and
remember the LOC covered by each scenario / example, it would be
possible to do more focussed regression testing when a source file was
changed.

That’s as far as I’ve got: an idea. I just thought I’d mention it in case anyone has time on their hands… ;-)
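
Roughly, the bookkeeping I have in mind looks something like this. It is only a sketch: the helper names and the coverage_map.yml file are made up, it assumes Ruby 1.9's built-in Coverage API (rcov would be today's equivalent), and it assumes a runner that executes one scenario per process so Coverage.result can be read at the end:

    require 'coverage'  # Ruby 1.9 stdlib; illustrative only
    require 'yaml'

    MAP_FILE = 'coverage_map.yml'  # hypothetical: source file => [scenario names]

    # Run a single scenario under coverage and record which source files it
    # touched. NB: Coverage only sees files loaded after Coverage.start, so
    # the app code would have to be required lazily inside the block.
    def record(scenario_name)
      Coverage.start
      yield  # run the one scenario here
      touched = Coverage.result.select { |_, lines|
        lines.compact.any? { |hits| hits > 0 }
      }.keys

      map = File.exist?(MAP_FILE) ? YAML.load_file(MAP_FILE) : {}
      touched.each do |file|
        map[file] ||= []
        map[file] |= [scenario_name]
      end
      File.open(MAP_FILE, 'w') { |f| f.write(map.to_yaml) }
    end

    # When a source file changes, look up which scenarios to rerun.
    def scenarios_for(changed_file)
      map = File.exist?(MAP_FILE) ? YAML.load_file(MAP_FILE) : {}
      map[changed_file] || []
    end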

Matt W.
[email protected]

On 2 Dec 2008, at 14:11, Peter J. wrote:

It’s a clever thought, but you don’t know which code a scenario will come to depend on in the future. You’d have to manually restart autotest and have it recalculate all of the mappings.

Well. I reckon I would be adding new code under two circumstances:
(1) I am refactoring existing code, sprouting out classes etc.
(2) I am cutting new code to pass a new, failing, scenario / spec

In case (1 - refactoring), I expect that as I move code out of BigFatClass and create ShinyNewClass, I would have to save big_fat_class.rb, which would trigger the tests that cover it. That would then update the coverage mappings to also cover shiny_new_class.rb.

In case (2 - adding new code, guided by tests), there would be failing
tests to re-run which, when run, would hopefully spin up the new code
as I write it. The re-run of failing tests could be triggered either
manually, or by a trigger that just watched the folders where new
source files were likely to crop up.
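
Carrying on the hypothetical sketch from my earlier mail (the lib/ path is made up, and run_scenario stands in for however the runner would actually invoke a single scenario in-process), saving big_fat_class.rb would boil down to:

    # Rerun only the scenarios that previously covered big_fat_class.rb;
    # re-recording their coverage picks up shiny_new_class.rb automatically.
    scenarios_for('lib/big_fat_class.rb').each do |name|
      record(name) { run_scenario(name) }  # run_scenario is a hypothetical hook
    end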

WDYT? Possible?

Matt W.
http://blog.mattwynne.net

On Thu, Dec 4, 2008 at 6:17 PM, Matt W. [email protected] wrote:

WDYT? Possible?

Possibly. :-)

That does sound like it might be possible. On the other hand, in
practice, I’ve found that the current implementation works the way I’d
want it to at least 95% of the time.

Agree. Sounds like a micro optimisation. A little too complex/clever
for my liking.

On 5 Dec 2008, at 07:21, Aslak Hellesøy wrote:

I’d want it to at least 95% of the time.

Agree. Sounds like a micro optimisation. A little too complex/clever
for my liking.

I’ll have to give autospec another shot then I guess. I have given up
on it as it takes over 7 minutes to ‘boot up’ with our test suite, and
I find it misses too many things it should have run, such that I have
to hit CTRL-C and do a complete (7 minute) re-run far more often than
I’d like. I also don’t trust it before a check-in - it seems to miss
things that should have been re-run.

I guess this may be more due to our codebase being a bit badly organised, so that the conventions autospec relies on aren’t always adhered to…

Personally, I’m not terribly inclined to do all the work to make it more intelligent. But if you’d like to give it a shot, I’m certainly curious to see if it can work.

:-) My barrier is reading the coverage.data files, although thinking about it, it should be possible to parse the HTML report quite easily… hmmmm…

Matt W.
http://blog.mattwynne.net

On Thu, Dec 4, 2008 at 6:17 PM, Matt W. [email protected] wrote:

WDYT? Possible?

Possibly. :-)

That does sound like it might be possible. On the other hand, in
practice, I’ve found that the current implementation works the way I’d
want it to at least 95% of the time. Personally, I’m not terribly
inclined to do all the work to make it more intelligent. But if you’d
like to give it a shot, I’m certainly curious to see if it can work.

Peter

On Fri, Dec 5, 2008 at 4:47 AM, Matt W. [email protected] wrote:

I’ll have to give autospec another shot then I guess. I have given up on it
as it takes over 7 minutes to ‘boot up’ with our test suite, and I find it
misses too many things it should have run, such that I have to hit CTRL-C
and do a complete (7 minute) re-run far more often than I’d like. I also
don’t trust it before a check-in - it seems to miss things that should have
been re-run.

It certainly can take a while to boot up, but that’s necessary: it
really has to run the whole suite the first time to be sure everything
is passing. It might be nice to have an option to skip that if you’re
sure everything passes when you start.

You shouldn’t be using Ctrl-C to restart autospec very often; at
least, I almost never do. I find that the work I’m doing very rarely
regresses other scenarios, and it’s not a problem not to notice until
the current features are finished and the whole suite runs.
Typically, in my workflow, I have a branch which adds a new feature or scenario. In one commit I fill out a scenario and the step definitions so that they are no longer pending. Then I commit spec examples and the code which makes them pass in logical chunks until the scenario passes. Once it passes, the whole suite runs again
automatically, and I see if I’ve broken anything. If I have, I fix it
then. When I get the “all green”, I can fill out a new scenario and
start the cycle again.

I think of these as “micro-iterations”, since I reevaluate the
direction I’m going in between these iterations. A bunch of these
micro-iterations together form an “iteration” in the Agile/XP sense,
where I’ve built something I can show to the client/customer/boss
again for feedback. At this point I can merge my branch into master
or another longer-running branch.

Because of this workflow, I’m ok with committing code with broken
features, and even broken specs if they’re not related to what I’ve
been doing. At the end of a micro-iteration, though, everything needs
to pass.

I guess this may be more due to our codebase being a bit badly organised, so that the conventions autospec relies on aren’t always adhered to…

Yeah, that can be a killer. If you have your own conventions, though
(and you should), you can modify the mappings in your .autotest file
to match. See the Autotest#add_mapping rdoc for more info.
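
For example, something along these lines in your .autotest (the paths are made up for illustration; check the rdoc for the exact block signature before relying on it):

    # .autotest -- illustrative only; adjust the regexps to your own layout.
    Autotest.add_hook :initialize do |at|
      # Map files under lib/billing/ to their specs under spec/billing/.
      at.add_mapping(%r%^lib/billing/(.*)\.rb$%) do |_filename, matches|
        at.files_matching %r%^spec/billing/#{matches[1]}_spec\.rb$%
      end
      nil
    end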

Peter