On 2/4/07, firstname.lastname@example.org email@example.com wrote:
I don’t pretend to be a TDD or BDD guru at all, but in TDD I have
often seen the suggestion that a newly written test (i.e., a test for
a feature or behavior that hasn’t been implemented yet) should fail.
You then implement the feature and know you have done so when the
test passes.
Two questions for any BDD or RSpec gurus:
This should really go to the RSpec users list:
But since you posted it here, I’ll answer…
- Why does the tutorial suggest a two-step process of first
writing an empty spec and running it, then going back to add the
expectations, and finally running it again? Perhaps this is just
because it is a tutorial.
An important part of TDD is running the tests (specs) often. Same for
BDD. When you’re first learning, it is better to run them more often
than less, so the tutorial stops to run them quite often.
In practice, everybody that I’ve ever worked with begins to expand the
distance between test runs as a matter of course. Eventually one
pushes that envelope too far and finds themselves in an ugly
headache-producing debugging session. At that point, it is helpful to
be able to call up the discipline of baby-steps.
- Why aren’t specifications with no expectations flagged as failures
by the RSpec test runner ‘spec’?
I don’t know of any testing tool that flags empty test methods. Not
saying there isn’t one, but I’ve not come across it. They’ll flag test
classes (i.e. TestCase) with no test methods, but not empty methods.
That said, we are looking at allowing you to explicitly identify
unimplemented specs by omitting the block. So you would do this:
  context "today" do
    specify "should be a holiday"
  end
Without the block, the output of a ‘spec’ run might be:
- should be a holiday (NOT IMPLEMENTED)
This is not definite at this point, but something we are considering.
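To make the idea concrete, here is a small sketch of how a runner could treat a `specify` call with no block as "not implemented." This is a hypothetical toy (`TinyRunner` is not RSpec's real internals), just an illustration of the proposed behavior:

```ruby
# Hypothetical sketch (NOT RSpec's actual implementation): a tiny runner
# where `specify` with no block records the example as unimplemented,
# and `specify` with a block runs it and records pass/fail.
class TinyRunner
  def initialize
    @results = []
  end

  def specify(description, &block)
    if block.nil?
      @results << "- #{description} (NOT IMPLEMENTED)"
    else
      begin
        block.call
        @results << "- #{description}"
      rescue => e
        @results << "- #{description} (FAILED: #{e.message})"
      end
    end
  end

  def report
    @results
  end
end

runner = TinyRunner.new
runner.specify "should be a holiday"                             # no block
runner.specify("should add") { raise "boom" unless 1 + 1 == 2 }  # passes
puts runner.report
# Prints:
# - should be a holiday (NOT IMPLEMENTED)
# - should add
```

The design point is that omitting the block is unambiguous: there is no body at all, so there is nothing to fake, unlike an empty block which looks just like a passing spec.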
I’ve already been burned by the second situation, where I put code in the
specification but didn’t actually call one of the expectation methods. I
thought my implementation was matching the spec, but in fact my spec had
no expectations. It was impossible for the implementation to not meet it.
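That failure mode can be sketched without RSpec at all: under any runner that passes a block that raises no exception, a spec that exercises code but never asserts anything passes vacuously. The `passes?` helper below is hypothetical, standing in for the runner:

```ruby
# A naive "pass if no exception" check, mimicking how a spec with no
# expectations slips through a test runner.
def passes?(&block)
  block.call
  true
rescue StandardError
  false
end

# Intended spec: compute a sum and compare it to an expected value...
result = passes? do
  total = [1, 2, 3].sum
  total == 7   # ...but a bare `==` just returns false; nothing raises,
end            # so no expectation was ever actually checked

puts result    # true: the spec "passes" even though the comparison failed
```

An expectation method (like RSpec's `should`) raises on mismatch, which is exactly what turns the comparison above into a real check; forgetting to call it leaves a spec that can never fail.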
I think this boils down to discipline. Let’s say we did enforce a rule
that specs with no expectations would fail. What’s to stop you from
doing this?

  specify "something important" do
    1.should == 1
  end
Not only would that spec pass the rule, but if you did a quick glance
through 100s of specs, you wouldn’t even notice that it was doing
something just to pass the rule.
This may seem silly to you, but I’ve seen worse. As tools impose
rules, developers under schedule crunches will come up w/ some pretty
clever ways to fool the rules-checker.
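To see how easily such a rule is gamed, here is a hypothetical lint sketch (not a real tool) that enforces "every spec must contain an expectation" by counting `.should` calls in the source. The dummy `1.should == 1` satisfies it exactly as well as a real expectation does:

```ruby
# Hypothetical rules-checker: count textual `.should` occurrences in a
# spec's source. A meaningless expectation like `1.should == 1` passes
# the rule while proving nothing about the code under test.
def expectation_count(spec_source)
  spec_source.scan(/\.should\b/).length
end

empty_spec = <<~RUBY
  specify "something important" do
  end
RUBY

dummy_spec = <<~RUBY
  specify "something important" do
    1.should == 1
  end
RUBY

puts expectation_count(empty_spec)  # 0 -> flagged by the rule
puts expectation_count(dummy_spec)  # 1 -> passes the rule, proves nothing
```

Any mechanical rule checks for the *form* of an expectation, not its meaning, which is why discipline rather than tooling is the real safeguard here.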