Bad specs better than none?

Hi - I’ve got a bunch of people using specs at a company. Everybody is
writing specs, but people are not really practicing BDD. As in, the
specs are there, but it doesn’t go, write spec, write code, repeat. I
recently came across 8 failing specs checked into svn; I think the
plan was, I’ll write the design as specs, and then implement the
entire design to match. Obviously that’s not really the way it should
be.

I also had to go into specs on a project I’m not working on, and found
an unholy hive of database-accessing specs. It’s disheartening.
Basically, it’s cargo cult development practices - using the “best
practice” without actually understanding it.

Should I tell these people to throw away their specs? Should I train
them in the BDD “spec first” cycle? What do you do when you have specs
that are not really that useful? This is mostly Rails stuff; there’s a
lot of controller specs that duplicate model specs instead of stubbing
out the behavior. It’s driving me nuts but I have no idea what the
solution is yet.


Giles B.

Blog: http://gilesbowkett.blogspot.com
Portfolio: http://www.gilesgoatboy.org
Tumblelog: http://giles.tumblr.com
Podcast: http://hollywoodgrit.blogspot.com

A better question would be: do you have the authority to retrain them?
That sounds awful, to say the least.

It could be worse: you could be the guy who is the test-driven
development library, manually going in to see what breaks and what
doesn’t. When I say you “could be the guy”, I’m actually stating that I
am that guy.

I wish my company understood the concept of, or the need for, BDD or at
least TDD. I’m trapped and it’s so painful.

On Mon, Feb 25, 2008 at 8:18 PM, Giles B. [email protected] wrote:

Hi - I’ve got a bunch of people using specs at a company. Everybody is
writing specs, but people are not really practicing BDD. As in, the
specs are there, but it doesn’t go, write spec, write code, repeat. I
recently came across 8 failing specs checked into svn; I think the
plan was, I’ll write the design as specs, and then implement the
entire design to match. Obviously that’s not really the way it should
be.

This is a fairly common misconception about what BDD is all about. You
can move in the right direction by making those failing specs pending
so the suite still passes. Then remove pending from one example, watch
it fail, get it to pass, then the next pending example, etc.
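
For example (a minimal sketch; the Order class and its examples are made
up, and this assumes RSpec-1.x-era behaviour, where calling pending
without a block skips the rest of the example):

  describe "an order" do
    it "calculates the total of an empty order" do
      pending("waiting for Order#total")  # reported as pending, suite stays green
      Order.new.total.should == 0
    end

    # an example with no block at all is automatically reported as pending
    it "applies the discount before tax"
  end

Un-pend one example at a time, watch it fail, make it pass, and move on
to the next.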

I also had to go into specs on a project I’m not working on, and found
an unholy hive of database-accessing specs. It’s disheartening.
Basically, it’s cargo cult development practices - using the “best
practice” without actually understanding it.

This is a really tough problem. The whole motivation for BDD was
“people don’t get TDD, so let’s come up with some new ways to frame it
so people get it.” Now people don’t get the new frame. In that respect
we’ve made things twice as bad.

Should I tell these people to throw away their specs? Should I train
them in the BDD “spec first” cycle? What do you do when you have specs
that are not really that useful? This is mostly Rails stuff; there’s a
lot of controller specs that duplicate model specs instead of stubbing
out the behavior. It’s driving me nuts but I have no idea what the
solution is yet.

If you’re pair programming with these folks, then I’d recommend simply
pushing the pairing sessions in the right direction. Focus on the task
at hand, do it with good discipline, and change anything that’s not
right as you go.

If you’re working from afar, then I’m not sure what to recommend.
There are some people who are going to respond favorably to your
advice and others who will think you’re nuts.

The subject of this thread is “bad specs better than none?” If “bad”
means “wrong” and the examples don’t test what they say they’re
testing, then I’d point that out and ditch them - but probably one at
a time.

If “bad” means that there is poor balance, too much focus on
structure, duplication, etc, then as long as they are really testing
things, I’d leave them and gradually refactor them to a better place.
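
For the Rails case in the original post (controller specs that re-test
the models), “a better place” might look roughly like this. A sketch
only: the PeopleController and Person names are made up, and the syntax
assumes the rspec-rails of that era (mock_model, stub!):

  describe PeopleController, "GET show" do
    it "assigns the requested person without re-testing Person" do
      person = mock_model(Person, :name => "Anna")
      # stub the model boundary so this example only covers controller behaviour
      Person.stub!(:find).and_return(person)

      get :show, :id => "1"

      assigns[:person].should == person
      response.should be_success
    end
  end

The model’s own behaviour then lives in one place, the model spec,
instead of being exercised again through every controller action that
touches it.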

Or maybe I’d just crack open a beer, grab my guitar and call it a day :)

Cheers,
David

On Tue, Feb 26, 2008 at 3:18 AM, Giles B. [email protected] wrote:

Hi - I’ve got a bunch of people using specs at a company. Everybody is
writing specs, but people are not really practicing BDD. As in, the
specs are there, but it doesn’t go, write spec, write code, repeat. I
recently came across 8 failing specs checked into svn; I think the
plan was, I’ll write the design as specs, and then implement the
entire design to match. Obviously that’s not really the way it should
be.

Here is how I would deal with this (and have dealt with it in the past):

  1. In your standup meeting (if you have one) say you have an obstacle,
    and that it’s failing specs in svn. Don’t explain why it’s an
    obstacle, just say you’d like to discuss it with the other devs after
    the standup.

  2. Explain that you can’t check in more code because you can’t know
    whether your new code is any good. When something fails you can’t know
    for sure whether it’s due to “old” or new failing specs.

  3. Try to get everyone to understand the benefits of a green SCM and
    the dangers of a red one. This can take days and weeks. Only then can
    you address the failing specs.

  4. Set up Continuous Integration (a minimal spec task for CI is
    sketched just after this list)

  5. Invest in some lamps and hook them up to your CI
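
For points 4 and 5, the CI server only needs a task that goes red when
any example fails. A minimal Rakefile sketch, assuming RSpec 1.x’s
Spec::Rake::SpecTask (the task name and options are only suggestions):

  # Rakefile -- what the CI server (CruiseControl.rb or similar) runs on every commit
  require 'spec/rake/spectask'

  Spec::Rake::SpecTask.new(:spec) do |t|
    t.spec_files = FileList['spec/**/*_spec.rb']
    t.spec_opts  = ['--format', 'progress', '--colour']
  end

  # a failing example makes rake exit non-zero, which turns the build (and the lamp) red
  task :default => :spec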

I also had to go into specs on a project I’m not working on, and found
an unholy hive of database-accessing specs. It’s disheartening.
Basically, it’s cargo cult development practices - using the “best
practice” without actually understanding it.

What “best practice” are you referring to?

Should I tell these people to throw away their specs? Should I train
them in the BDD “spec first” cycle? What do you do when you have specs
that are not really that useful? This is mostly Rails stuff; there’s a
lot of controller specs that duplicate model specs instead of stubbing
out the behavior. It’s driving me nuts but I have no idea what the
solution is yet.

I don’t know if your developers chose BDD/RSpec themselves (and got it
wrong) or if they were “assigned”.
In any case, it sounds like there has been a lack of attention to
collective code ownership and staffing with appropriate skills.

Patience and pair programming would be my best suggestion. Have
someone who gets BDD (you?) pair with the whole team till they get it
too. And till they also get that they should fix bad code when they
see it.

Aslak

Are you on site or is this remote work? What role are you supposed to
be fulfilling?

Pat

I also had to go into specs on a project I’m not working on, and found
an unholy hive of database-accessing specs. It’s disheartening.
Basically, it’s cargo cult development practices - using the “best
practice” without actually understanding it.

This is a really tough problem. The whole motivation for BDD was
“people don’t get TDD, so let’s come up with some new ways to frame it
so people get it.” Now people don’t get the new frame. In that respect
we’ve made things twice as bad.

What did you expect? Honestly?
You need to show people the Right way, otherwise they’re unlikely to
figure it out by themselves. But as the fortune cookies go:
“To make the right decision, one needs experience.
To gain experience, one needs to make the wrong decision.”

It is easier for me to explain this from the point of view of aikido:
I’ve been shown the right moves thousands of times. I cannot even see
what sensei does, let alone reproduce it. I cannot perceive the balance,
the timing, the acceptance of an attacker and the reflection of his/her
energy to, ultimately, unbalance them. How could I learn by trying even
tens or hundreds of thousands of times?
After seven years, I’m still a puny beginner. And I need other people
to show me my mistakes. Again, again and again.

To the original poster: yes, teach BDD.
Make sure they accept you as a teacher,
then teach, small steps at a time, by showing what is wrong.
When they figure out a solution by themselves, encourage them, accept
that solution (use it yourself). When they don’t figure it out by
themselves (likely enough), show how you would do it.
And be prepared to repeat yourself.

HtH,
Kero.

On 02008:02:27, at 10:01CST, Maurício Linhares wrote:

[…] that your app works without a bunch of integration tests.
I’m reminded of one of my heroes, Matthias Felleisen. If you don’t
know exactly what I’m talking about, go here: http://htdp.org/. His
mission in life is to teach how to design programs. His approach draws
more from the Design by Contract paradigm than from TDD, but it’s the
same goal.

If you can ever stop in at Northeastern University in Boston, drop in
on a class. Especially the freshman intro to programming. If we can’t
teach the method as well as he does, we’re screwed, because he’s been
working on it for twenty years.

Of course, my hope is that hackety.org + bdd = htdp++. I can dream, no?

http:// Joseph Holsten .com

On Tue, Feb 26, 2008 at 2:59 AM, aslak hellesoy
[email protected] wrote:

I also had to go into specs on a project I’m not working on, and found
an unholy hive of database-accessing specs. It’s disheartening.
Basically, it’s cargo cult development practices - using the “best
practice” without actually understanding it.

What “best practice” are you referring to?

I’m also interested in discovering what this “best practice” is.

I can’t see any problem with specs running against a database; that’s
exactly what integration testing is about. Should we skip integration
testing just because we’re using BDD? I really don’t think so.

The big problem with specs running against a database is not knowing
that it’s integration testing, and that specs which access the
database run slower than “pure” unit tests. But you can’t be sure
that your app works without a bunch of integration tests.
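
To make the distinction concrete (a sketch; the Person model and its
attributes are made up):

  describe "Person, searched by name" do
    # this deliberately goes through ActiveRecord and the database:
    # slower than a stubbed example, but it proves the pieces work together
    it "finds a previously saved person" do
      Person.create!(:name => "Maria")
      Person.find_by_name("Maria").should_not be_nil
    end
  end

Keeping a slice of specs like this alongside the fast, stubbed ones is
the “bunch of integration tests” that tells you the app actually works.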


Maurício Linhares
http://alinhavado.wordpress.com/ (pt-br) |
http://codeshooter.wordpress.com/ (en)
João Pessoa, PB, +55 83 8867-7208

On Wed, Feb 27, 2008 at 9:09 PM, Joseph Anthony Pasquale Holsten
[email protected] wrote:

If you can ever stop in at Northeastern University in Boston, drop in
on a class. Especially the freshman intro to programming. If we can’t
teach the method as well as him, we’re screwed. Because he’s been
working on it for twenty years.

Pick up one/several of his books, too. The Little/Seasoned/Reasoned
Schemer, The Little MLer, A Little Java, A Few Patterns.

HTDP is a good book as well, it’s kind of like SICP’s little brother.
The ones I listed above are quite different - written in a Socratic
dialog format - and very enjoyable. Plus you can get them all off of
ebay for like 10 bucks.

Pat

I replied before in this topic saying that I was doing manual testing.
I should state that I am now using Selenium + Ruby + RSpec.

ASP 0 - Ruby 1

On Thu, Feb 28, 2008 at 12:09 AM, Joseph Anthony Pasquale Holsten wrote:

On Thu, Feb 28, 2008 at 12:49 AM, Korny S. [email protected] wrote:

rspec-land. “unit specs”? “stories” in story-runner?
I’ve been using “Stories” and “Object Specs”.

David

Totally agree with this - I’m happy to work with specs that just define
a single bit of the system’s behaviour (i.e. “unit tests”) and specs
that define behaviour across several different parts of the system
(i.e. “integration tests”) - but it drives me mad when they are all
mixed in together, rather than in different directory trees.
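
One way to keep them in different trees, with separate runs, is simply
separate rake tasks. A sketch, assuming RSpec 1.x’s Spec::Rake::SpecTask;
the directory names are only a convention, not something rspec mandates:

  require 'spec/rake/spectask'

  desc "Fast, isolated object specs"
  Spec::Rake::SpecTask.new(:spec) do |t|
    t.spec_files = FileList['spec/models/**/*_spec.rb',
                            'spec/controllers/**/*_spec.rb']
  end

  namespace :spec do
    desc "Slower specs that exercise several parts of the system together"
    Spec::Rake::SpecTask.new(:integration) do |t|
      t.spec_files = FileList['spec/integration/**/*_spec.rb']
    end
  end

Then the fast suite can run on every change and the slow one before
check-in or on CI.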

Incidentally, is there a good naming scheme for this distinction in the
BDD world? I’m used to saying “unit tests”, “integration tests”,
“acceptance tests” (though the precise meaning of the last two seems to
vary widely between different organisations!) but I’m not sure what to
call them in rspec-land. “unit specs”? “stories” in story-runner?

  • Korny

On Thu, Feb 28, 2008 at 3:01 PM, Maurício Linhares wrote:

On Mon, Feb 25, 2008 at 6:18 PM, Giles B. [email protected] wrote:

Hi - I’ve got a bunch of people using specs at a company. Everybody is
writing specs, but people are not really practicing BDD. As in, the
specs are there, but it doesn’t go, write spec, write code, repeat. I
recently came across 8 failing specs checked into svn; I think the
plan was, I’ll write the design as specs, and then implement the
entire design to match. Obviously that’s not really the way it should
be.

Here’s my theory.

Ranked, in ascending order of desirability:
No tests
Green suite, poor tests
Red suite, poor tests
Red suite, quality tests
Green suite, quality tests

A passing suite of high-quality tests gives you confidence that your
system works well. If your tests are of low quality, then you will be
confident, but your confidence will be misguided, and so you disrupt
the balance of what the team knows vs what the team thinks it knows.
The team has good values but does not understand the principles.

A broken suite of low-quality tests sounds discouraging, but I believe
it is a positive situation, because the team’s understanding of the
values and principles can grow at the same rate, rather than one
dominating the other like a weed. Some teams and team members will be
cynical, some will be eager but struggle. Most importantly though,
they will be realistic.

A broken suite of high-quality tests signifies a breakdown in process.
Unlike before, the problem at this stage is no longer a matter of
growth and understanding within the developers, it is managerial or
political. They may be under tight deadlines and so have to let
practices slip slightly, urged on by misinformed project managers.
Removing political obstacles is difficult and requires skill in
itself, but I think it’s tedious work compared to the personal
growth that developers must experience.

Willfully choosing not to write tests means you are stupid.

Pat

On Thu, Feb 28, 2008 at 7:22 AM, Glenn F. [email protected] wrote:

A lot of times if I’m writing some code for a challenging piece, it’s
[…] can’t figure out a better way, I have to write something that
still works!

So while I have a lot of the knowledge behind the theory of good BDD
practice, I can’t always implement it even when I want to. My Ruby /
RoR inexperience is what holds me back the most in that department.
It’s just something I have to cultivate really. Until then I’m happy
with my green specs with excellent coverage that slam the database
like crazy and take a long time and have few mocks/stubs/
should_receives.

And you should be happy with that! The beauty of your situation is
that even though you are admittedly new at this you are able to
deliver code in which you have confidence. Clearly you recognize that
you have some growth ahead, but you’re posing much less risk for your
clients.

Additionally, as you do learn, because you have good coverage, you’ll
be in a good position to address design decisions that you later
decide are poor based on new understanding.

Naturally, since you are not 100% clear about what’s going on you may
be missing a step here and there. Test quality is every bit as
important as test coverage, but good coverage is a better place to
start than poor coverage.

Keep up the good work!

Cheers,
David

I have a similar perspective from my own personal experience. I am
still quite the novice, but I’m as much of a novice in RSpec as I am
in Ruby / RoR. Honestly, a lot of my specs in new sections end up
having great coverage, but are full of real models and few of the
“purist” BDD practices. Before I started with BDD I did a lot of
reading so I feel that I have a good understanding of the goal, and I
do have some specs with little database access and great
implementation of the MVC “goodness” that RoR supports, but I simply
can’t always keep this up even when I want to.

A lot of times if I’m writing some code for a challenging piece, it’s
challenging to me because I don’t already know how to do it. I can
write the basic “here’s the setup, result.should eql(this_thing)” but
I can’t write any mocks/stubs/should_receives in the middle because at
every step, I just honestly have no idea how it should work!! So I
throw in real models and try to make it as real as possible, rather
than as “pure” as possible. It’s not until after I get things working
that I even know what the solution should remotely look like. Because
of my inexperience I have to hack around a lot before I figure out how
to make things work. Unfortunately, I know this means
I write code that is more complicated than it should be, but if I
can’t figure out a better way, I have to write something that still
works!
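
To make the two styles concrete (a sketch with made-up Order and
tax-calculator names, in the should/should_receive syntax of the time):

  describe "an order" do
    # state-based: set things up for real and check the result
    it "totals its line items" do
      order = Order.new
      order.add_item(:price => 10)
      order.add_item(:price => 5)
      order.total.should eql(15)
    end

    # interaction-based: specify the conversation between the objects instead
    it "asks the tax calculator for tax on the subtotal" do
      calculator = mock("tax calculator")
      calculator.should_receive(:tax_for).with(15).and_return(3)

      order = Order.new(:tax_calculator => calculator)
      order.add_item(:price => 10)
      order.add_item(:price => 5)
      order.total_with_tax.should eql(18)
    end
  end

The first style only needs you to know the answer; the second needs you
to already know how the objects should talk to each other, which is
exactly what is hard while you are still feeling the design out.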

So while I have a lot of the knowledge behind the theory of good BDD
practice, I can’t always implement it even when I want to. My Ruby /
RoR inexperience is what holds me back the most in that department.
It’s just something I have to cultivate really. Until then I’m happy
with my green specs with excellent coverage that slam the database
like crazy and take a long time and have few mocks/stubs/
should_receives.

Glenn

On Thu, Feb 28, 2008 at 2:22 PM, Glenn F. [email protected] wrote:

I agree with David - you are certainly on the right track. I also think
you are doing the right thing when you write specs even if they don’t
seem perfect to you at the first attempt - once you have written the
code you can evaluate why they could be better and then make your
adjustments as you learn more.

A lot of times if I’m writing some code for a challenging piece, it’s
[…] can’t figure out a better way, I have to write something that
still works!

I also often find it hard to implement a new solution from scratch
using the BDD / TDD test-first approach. Sometimes it works for me to
make a very crude spike solution to figure out how this part of the
program actually should work (often writing no automated tests at
all). Once I have an idea of the outline I put the spike away and
start implementing the real solution in the real codebase, writing
tests first etc.

Interestingly, it seems to me that the whole task is actually
accomplished more quickly this way, because I don’t have to keep the
production code clean & tested while experimenting, and because I
don’t have to sit and figure out the design while writing the
production code.

As I gain experience I think there will be more solutions that I can
implement without having to spike them first.

– Siemen

Here’s my theory.
You remind me of the four stages of learning:

Ranked, in ascending order of desirability:
No tests
Green suite, poor tests
Unconsciously Incompetent

Red suite, poor tests
Consciously Incompetent

Red suite, quality tests
Consciously Competent

Green suite, quality tests
Unconsciously Competent

where the fourth stage flows back into the first stage, either
because you stop learning, get stuck (“vastgeroest”, roughly “rusted
stuck”, in Dutch), and cease to be competent, or because you pick up
new, additional things, and therefore have to start at the first
stage again.

Bye,
Kero.