BDD and RSpec

In a traditional development method like the waterfall approach, we create
the requirement and design documents first. The code is written after the
requirements and design are done, and unit tests are written afterwards to
test the existing code.

I understand that in BDD the RSpec specs act as the requirement and design
document. We can also write test cases using RSpec.

I believe we can create the plain-text stories in RSpec first, and after
coding we can write the test cases for the expected behaviour. How are the
text stories and test cases connected? How do we make sure all the expected
behaviours in the text stories are met?

Ayyanar Aswathaman wrote:

In a traditional development method like the waterfall approach, we create
the requirement and design documents first. The code is written after the
requirements and design are done, and unit tests are written afterwards to
test the existing code.

If at all!

I understand that in BDD the RSpec specs act as the requirement and design
document. We can also write test cases using RSpec.

I believe we can create the plain-text stories in RSpec first, and after
coding we can write the test cases for the expected behaviour. How are the
text stories and test cases connected? How do we make sure all the expected
behaviours in the text stories are met?

You are asking how “Agile” development works. Unfortunately, if you google
for that, you will find a lot of noise, enthusiastic discussion, and
advocacy. Agile is so popular that it’s hard to get an idea of what Agile
“is” these days!

At my day job, we have two or three “pair stations”. That’s a workstation
with two monitors, two keyboards, two mice, and two chairs. We do our
important work - typically cutting new code - in pairs. Whoever thinks of
something to type describes it to the other, and they type it in.

We alternate between writing tests and writing code, in very small cycles.
We do this:

http://c2.com/cgi/wiki?TestDrivenDevelopment

That means if you think of a new line of code to write, you first write a
test case that fails because it’s not there. Only when the test case is
failing for the correct reason do you write a little code to pass the test.
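
Here is a minimal sketch of one such micro-cycle, in current RSpec syntax
(the Calculator class and the example are made up, not from our codebase):

  # calculator_spec.rb - run it with: rspec calculator_spec.rb
  #
  # Step 1: write the example first. With no Calculator class defined,
  # the run fails with "uninitialized constant Calculator" - failing
  # for the correct reason.
  #
  # Step 2: only then write just enough code to pass, and run again.

  class Calculator
    def add(a, b)
      a + b
    end
  end

  RSpec.describe Calculator do
    it "adds two numbers" do
      expect(Calculator.new.add(2, 3)).to eq(5)
    end
  end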

To help us rapidly determine if a case is failing for the correct reason,
I invented a new kind of assertion:

http://assert2.rubyforge.org/
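
The flavour of it, roughly (the test and its data are made up): you hand
assert a block instead of separate expected and actual arguments, and on
failure it reflects the block and reports the value of each sub-expression,
so you can see why the case failed.

  require 'test/unit'
  require 'assert2'   # gem install assert2

  class GreetingTest < Test::Unit::TestCase
    def test_greeting_mentions_the_user
      greeting = "Hello, Phlip"
      # On failure this prints the block source plus the values of
      # greeting and greeting.include?('Phlip').
      assert{ greeting.include?('Phlip') }
    end
  end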

Our tests also cover details in Views, so we use XPath to validate our
XHTML, and to find details in it:

http://assertxpath.rubyforge.org/

(Get that one with ‘gem install assert_xpath’.)

Both work with RSpec.
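
The idea, sketched here with the standard library's REXML rather than
assert_xpath's own helpers (the markup and the path are made up):

  require 'rexml/document'

  xhtml = '<div id="toolbar"><a href="/logout">Log out</a></div>'
  doc   = REXML::Document.new(xhtml)

  # Find a detail in the rendered markup by structure, not by brittle
  # string matching.
  link = REXML::XPath.first(doc, '//div[@id="toolbar"]/a[@href="/logout"]')
  raise 'missing logout link' unless link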

When we edit, we hit one button in our editor (F5), and it saves all our
code. Then we have a script that detects any changes, and runs every test
suite that has been changed since the last integration. This means we can
test very frequently, after each edit. We predict, out loud, what the test
run will do. If it fails unexpectedly, we have the option to Undo, or even
‘svn revert’, to get back to passing tests.
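
That change-detecting runner is just a small script. A bare-bones sketch of
the idea (our real one does more, and the paths here are guesses):

  # watch_tests.rb - rerun any test file that changed since the last run
  last_run = Time.at(0)

  loop do
    changed = Dir['test/**/*_test.rb'].select { |f| File.mtime(f) > last_run }

    unless changed.empty?
      last_run = Time.now
      changed.each { |file| system('ruby', '-Itest', file) }
    end

    sleep 1
  end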

Undoing is much /much/ MUCH more productive than debugging!

As soon as we have improved our code, in any way, we integrate. That means
we check out all changes, pass all our tests, and only ‘svn checkin’ if all
the tests pass. These practices rigorously keep bugs out of our codebase.
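
As a hypothetical Rakefile task, that integration step looks like this;
each 'sh' aborts the task on failure, so nothing gets checked in unless
every test passes:

  task :integrate do
    sh 'svn update'      # pull in everyone else's changes
    sh 'rake test'       # a single failing test stops us here
    sh 'svn commit -m "integrate: all tests passing"'
  end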

We also deploy our code to live servers each time we add a new feature.

We have no “text stories” - only test cases. Our “onsite customer” keeps
text lists of features by name, but they are only detailed enough for him.

Our managers have a weekly meeting where they ask our onsite customer for
more features. These features are almost always tiny increments over our
current code state. If we added a new page last month, for example, our
workers might report they need a simplification on it, or more data in a
corner, or something. Their manager petitions our manager, who writes the
name of the feature on a card, then (within a week) tells us to do it.

A simple feature like a new button should take only a few hours, with our
combination of Rails, clean code, and Test Driven Development. The workers
will get the new feature that day.

We can absorb feature requests in any order, without scrambling our code,
because we always take time after each feature request to refactor the
code. That means improving its design in tiny steps, while passing all
tests between each step. To improve the design, we find code that is
similar, and try to make it look exactly the same. Pass all tests. When it
looks the same, we can delete one copy of the code, and call into the
remaining copy from the other call site. This process makes the code
“DRY” - Don’t Repeat Yourself.
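
A tiny, made-up example of that step - two methods that are merely similar
get massaged until they are identical, then one copy goes away and its
callers call into the survivor:

  # Before: two near-duplicates
  def invoice_total(invoice)
    invoice.line_items.inject(0) { |sum, item| sum + item.price }
  end

  def order_total(order)
    order.line_items.inject(0) { |sum, item| sum + item.price }
  end

  # After: one copy survives; every old call site now calls this
  def total(record)
    record.line_items.inject(0) { |sum, item| sum + item.price }
  end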

The tests then thoroughly document what our code does. To answer a
question about behavior, we read all the names of the tests, and their
contents. We try to make each case as “literate” as possible.
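
A made-up example of what “literate” means here - the case names, read
together, describe the behavior of the class:

  class Account
    Overdraft = Class.new(StandardError)
    attr_reader :balance

    def initialize
      @balance = 0
    end

    def withdraw(amount)
      raise Overdraft if amount > @balance
      @balance -= amount
    end
  end

  RSpec.describe Account do
    it "starts with a zero balance" do
      expect(Account.new.balance).to eq(0)
    end

    it "rejects a withdrawal larger than the balance" do
      expect { Account.new.withdraw(10) }.to raise_error(Account::Overdraft)
    end
  end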

We can also analyze behavior by putting a ‘raise "yo"’ inside a suspect
method. Then we run all the tests. The ones that fail will document that
method!

These practices reduce bugs, and make our projects very easy for
managers to steer.


Phlip