Spec'ing via features

On Nov 24, 2008, at 3:26 PM, Luis L. wrote:

It’s weird that you have autospec installed in two different places,
unless there was some forced GEM_PATH and GEM_HOME for it.

Yes, exactly - that’s why I’m trying to figure out where among all
this code and gems and install procedures that autospec gets created -
because I suspect that’s a clue.

Both installations are using the bundled ruby version that came with
OSX?

Both are using the same MacPorts Ruby, but one of them might have
been using the bundled Ruby when ZenTest was installed, if that matters.

If you post the errors it will be much more helpful; it’s hard to
figure things out and assist you without the proper details.

I don’t really expect anyone else to slog through that - I would
really just like the one little tidbit “where does autospec come from?”

Thanks,
SR

On Mon, Nov 24, 2008 at 2:40 PM, Pat M. [email protected] wrote:

Wow, if that’s it in a nutshell… :)

Nice nut.

///ark

On Nov 24, 2008, at 4:44 PM, Pat M. wrote:

Steven R. [email protected] writes:

I don’t really expect anyone else to slog through that - I would
really just like the one little tidbit “where does autospec come
from?”

autospec is installed as part of the rspec gem.

Cool - thanks

SR
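For anyone chasing the same question later: one way to see where an executable like autospec comes from is to ask RubyGems directly. A rough sketch, not verified against this particular setup (rspec is the gem Pat names as shipping the autospec wrapper; ZenTest ships the autotest command it drives):

```shell
# Show every autospec on the PATH (duplicate installs show up here):
which -a autospec 2>/dev/null || true

# Ask RubyGems which installed gem bundles the file in question:
gem contents rspec   2>/dev/null | grep autospec || true
gem contents ZenTest 2>/dev/null | grep autotest || true
```

`which -a` lists every match on the PATH rather than just the first, which is handy when two gem installations are fighting over the same command name.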

Wow, if that’s it in a nutshell… :)

Pat

Thanks Pat, great summary.

I have to admit that I’m as crazy as Yehuda,
and believe that all we need are just acceptance tests,
at different layers of abstraction, for clients and developers.

I also see the benefits of speccing out single object’s behaviors, with
the aim of a good design.
However, the drawbacks of doing this outweigh the benefits, in my
opinion.

Testing how every method of an object is going to behave
implies that after refactoring, that spec will no longer be useful,
even though the business and application logic stay the same.

I believe that being able to come up with a good design,
is not only dependent on writing tests before your implementation,
but also on knowing how to write a good implementation.

This can be gained through experience,
reading books and blogs, pair-programming,
using tools that tell you about the complexity of your code,
and a constant process of refactoring as we complete requirements
and come to fully understand what the best design could be.

Therefore in my opinion, by writing tests that guarantee
the correct functioning of the system, we have a robust suite of tests.
Let the refactoring come storming in and change the whole
implementation,
but the tests should not be affected at all,
as I’m not testing my implementation or design,
only the correct functioning of the system,
and I rely on other tools on top of the tests to keep my code
nice, clean, and understandable by anyone who comes along.

Kind Regards,

Rai

Pat M. wrote:

Here’s my latest Theory of Testing, in a nutshell:

I really understand what you are getting at. However, as a less
experienced developer (my degree is actually in business) I have found
that having more unit tests (for models and controllers) helps ensure
that I write better code. I can’t think of a single case in which the
code I write where every public method is tested is not better than the
code I write where I don’t do that.

Am I unique in this? Or is strict TDD for every public method a good
practice for someone who is still learning how to design code well?

Thanks,
Paul

Pau C. [email protected] writes:

Am I unique in this? Or is strict TDD for every public method a good
practice for someone who is still learning how to design code well?

No, I don’t think you’re unique in that. And yes, I think it’s good
practice. In my opinion, you have to extensively practice disciplined
TDD before you get a feel for when you can ease up. This is also why
I’m more strict with my own code when I’m working with newer people
(although the ideal situation would be pairing!)

Pat

I came across this idea of dropping unit tests for acceptance tests in
the Java world. I didn’t like it there and I don’t like it here, but
maybe that’s because I’m an old fuddy-duddy or something :). I do think
that every public method of an object should be specifically unit
tested, and yes, that means that if you refactor your object you should
refactor your unit tests. This isn’t really that much of a burden if
you design your objects to have simple and minimal public APIs in the
first place.

What is it that makes you think you can refactor code, run the
acceptance tests, and be safe without unit tests? Writing tests “that
guarantee the correct functioning of the system” isn’t something you
can just do. The best you can hope for with acceptance tests is that
part of the system functions correctly most of the time in some
circumstances.

Perhaps it’s the BDD ideal that you’re only writing the code you need
to make your acceptance tests pass that makes you think your acceptance
tests cover all your code. However, just because you’ve written minimal
code to make an acceptance test pass doesn’t mean that you can’t use
this code in a multitude of different ways.

Do you really think that BDD-created code is just some black box that
you can tinker around with, restructure, and still be sure it works
just because your black-box tests still work?

I just don’t believe you can get the coverage you need for an
application using acceptance testing / features alone. If you do
actually write enough features to do this you’ll end up doing much
more work than writing unit tests combined with features.

All best

Andrew

2008/11/25 Raimond G. [email protected]:

On Mon, Nov 24, 2008 at 9:41 PM, Pau C. [email protected] wrote:

I really understand what you are getting at. However, as a less
experienced developer (my degree is actually in business) I have found
that having more unit tests (for models and controllers) helps ensure
that I write better code. I can’t think of a single case in which the
code I write where every public method is tested is not better than the
code I write where I don’t do that.

Am I unique in this? Or is strict TDD for every public method a good
practice for someone who is still learning how to design code well?

I feel the same way. Unit testing for me is as much about writing
better code as about avoiding errors. As a rule, if I can write clear
specs, I’m writing clean code.

For instance, the Law of Demeter can seem silly at times: why can’t I
just call foo.bar.baz.bax? But try spec’ing a method which calls
foo.bar.baz.bax and you’ll see. You have to tie yourself in knots of
mock objects to “decouple” from other objects’ implementations. Now
you’ve tightly coupled your spec to the implementation of the method,
as well as of some other methods. That coupling is actually a
reflection of a hidden coupling in your code. The complexity of the
tests reveals a hidden complexity in that train wreck you put in your
method.
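A rough, self-contained Ruby sketch of the knot Peter describes. All class names here are invented, and hand-rolled stubs stand in for RSpec mocks, but the shape of the problem is the same:

```ruby
require "ostruct"

class Order
  def initialize(customer, total)
    @customer, @total = customer, total
  end

  # Train wreck: reaches through customer into its wallet.
  def settle
    @customer.wallet.charge(@total)
  end

  # Demeter-friendly: the customer hides its own structure.
  def settle_politely
    @customer.pay(@total)
  end
end

# To test #settle we must stub the whole chain of objects:
charged = []
wallet  = OpenStruct.new
wallet.define_singleton_method(:charge) { |amt| charged << amt }
customer = OpenStruct.new(wallet: wallet)
Order.new(customer, 100).settle
puts charged.inspect              # => [100]

# To test #settle_politely a single stub suffices:
paid = []
customer2 = Object.new
customer2.define_singleton_method(:pay) { |amt| paid << amt }
Order.new(customer2, 100).settle_politely
puts paid.inspect                 # => [100]
```

The train-wreck version needs a stub for every car in the train; the Demeter-friendly version needs exactly one, and its spec no longer knows anything about the customer’s internals.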

It’s easy to write bad code without noticing. It’s harder to write
bad specs. So I write the specs, and I write them first.

Peter

On Mon, Nov 24, 2008 at 1:18 PM, Mark W. [email protected] wrote:

Testing that is a means of detecting errors - it’s not a specification.

What happens when an ATM user tries to withdraw $100 more than available is
not an edge case, and should be shown to business.

I realize it’s a fine point - I’m just responding to whether the business
needs to see what we call “edge cases.”

///ark

I think this is going to vary from customer to customer, but in the
end, I think it’s up to the business to make this decision, not the
developers.

2 more cents. Don’t spend 'em all in one place :)

David

Andrew P. wrote:

I came across this idea of dropping unit tests for acceptance tests in
the Java world. I didn’t like it there and I don’t like it here, but
maybe that’s because I’m an old fuddy-duddy or something :). I do think
that every public method of an object should be specifically unit
tested, and yes, that means that if you refactor your object you should
refactor your unit tests. This isn’t really that much of a burden if
you design your objects to have simple and minimal public APIs in the
first place.

+1

Do you really think that BDD created code is just some black box that
you can tinker around with restructure and still be sure it works just
because your black box tests still work?

I just don’t believe you can get the coverage you need for an
application using acceptance testing / features alone. If you do
actually write enough features to do this you’ll end up doing much
more work than writing unit tests combined with features.

+1 again.

All best

Andrew

Here is how I look at the two sets of tests…

Features at the application level (acceptance tests) instill more
confidence in me about the correctness of the system’s behavior.
Object level code examples (unit tests) instill more confidence in me
about the design of the system.

With acceptance tests passing we have no guarantee about the state of
the design. Remember, TDD/BDD naturally produces easy-to-test objects
and by skipping object-level examples you run the risk of creating
dependency-laden, highly coupled objects that are hard to test. (Just
think, you can make all of your features, for a web app, pass by
writing the app in PHP4 with no objects at all :P.)

I also think that acceptance tests are too slow to be used in all
refactorings and they are not fine grained enough so you’ll end up doing
more debugging than you would otherwise with good object level
coverage. I generally try to keep each individual unit test faster than
a tenth of a second, as suggested in ‘Working Effectively With Legacy
Code’. What results is an extremely fast suite that can be used to
quickly do refactorings. I have experienced the pain of using just
Cucumber features first-hand: finding bugs at this level is just not as
fast as with object-level examples. If you skip object-level examples
you are incurring a technical debt that you will feel down the road, IMO.
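A small sketch of the speed difference Ben is pointing at, with invented names. The stub keeps the example entirely in memory instead of touching a database, which is what keeps it under the tenth-of-a-second budget:

```ruby
require "benchmark"

# Stand-in for a collaborator that hits a real database:
class SlowRepository
  def find(id)
    sleep 0.5                 # simulated network/database latency
    { id: id }
  end
end

# The hand-rolled stub a unit example would use instead:
class StubRepository
  def find(id)
    { id: id }                # instant, in-memory
  end
end

class AccountPresenter
  def initialize(repo)
    @repo = repo
  end

  def title(id)
    "Account ##{@repo.find(id)[:id]}"
  end
end

elapsed = Benchmark.realtime do
  presenter = AccountPresenter.new(StubRepository.new)
  raise "wrong title" unless presenter.title(7) == "Account #7"
end
puts format("stubbed example ran in %.4fs", elapsed)
```

A full-stack feature would pay the SlowRepository price (and more) on every scenario; the isolated example pays essentially nothing.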

Someone at the start of this thread had wondered what people had learned
when they went through this process of balancing FIT tests with unit
tests. While I know some people on this list could provide some first
hand experience, I think this post by Bob Martin should provide some
good insight:

http://blog.objectmentor.com/articles/2007/10/17/tdd-with-acceptance-tests-and-unit-tests

  • Ben M.

James B. wrote:

As I work with Rails TestUnit tests I am reconsidering how to use

I discover that in Ruby 1.9 TestUnit is out and minitest is in. I
wonder what effect, if any, this will have on future releases of Rails.

http://www.ruby-forum.com/topic/171625

On Tue, Nov 25, 2008 at 12:52 AM, Ben M. [email protected] wrote:

Here is how I look at the two sets of tests…

Features at the application level (acceptance tests) instill more confidence

CONFIDENCE!

That and, as Kent Beck describes today, responsible software, are why
we do testing at all.

in me about the correctness of the system’s behavior. Object level code
examples (unit tests) instill more confidence in me about the design of the
system.
With acceptance tests passing we have no guarantee about the state of the
design. Remember, TDD/BDD naturally produces easy-to-test objects and by
skipping object-level examples you run the risk of creating
dependency-laden, highly coupled objects that are hard to test. (Just
think, you can make all of your features, for a web app, pass by writing
the app in PHP4 with no objects at all :P.)

Which is not an inherently bad deal, if that’s your comfort zone, and
if that’s the comfort zone of everybody on your team.

Someone at the start of this thread had wondered what people had learned
when they went through this process of balancing FIT tests with unit tests.

I can speak to this a bit. Maybe more than a bit.

When I was working with .NET FitNesse and NUnit, we had very high
levels of coverage in NUnit. Early on one project I told Micah M.
(who co-created FitNesse with Bob Martin) that I was concerned about
the duplication between our FitNesse tests and NUnit tests and
questioned the value of keeping it.

Micah pointed out reasons that made absolute 100% perfect sense in the
context of the project we were working on. The customers were
encouraged to own the FitNesse tests. They were stored on a file
system, backed up in zip files, while the NUnit tests were stored in
subversion with the code. The FitNesse fixtures were stored with the
application code, distant from the FitNesse tests.

In order to foster confidence in the code amongst the developers,
having a high level of coverage in NUnit made sense, in spite of the
duplication with some of the FitNesse tests.

That duplication, by the way, was only in terms of method calls at the
highest levels of the system. When a FitNesse test made an API call,
that message went all the way to the database and back.

When an NUnit test made the same call, that message typically got no
further than the object in the test, using stubs and mocks to keep it
isolated.

Now fast forward to our current discussion about Cucumber and RSpec.
As things stand today, we tend to store .feature files right in the
app alongside the step_definitions and the application code.

The implications here are different from having a completely decoupled
acceptance testing system. I’m not saying that abandoning RSpec or
Test::Unit or whatever is the right thing to do. But I certainly feel
less concerned about removing granular code examples, especially on
rails/merb controllers and views, when I’ve got excellent coverage of
them from Cucumber with Webrat. Thus far I have yet to see a case where
I couldn’t quickly understand a failure in a view or controller based
on the feedback I get from Cucumber with Webrat.

But this is mostly because that combination of tools does a very good
job of pointing me to the right place. This is not always the case
with high level examples. If you’re considering relaxing a requirement
for granular examples, you should really consider each case separately
and include the level of granularity of feedback you’re going to get
from your toolset when you make that decision.

Now this is how I see things.

For anybody who is brand new to all this, my feeling is that whatever
pain there is from duplication between the two levels of examples and
having to change granular examples to refactor is eclipsed by the pain
of debugging from high level examples.

Also, as I alluded to earlier, every team is different. If you are
working solo, the implications of taking risks by working
predominantly at higher levels are different from when you are on a
team. The point of testing is not to follow a specific process. The
point is to instill confidence so you can continue to work without
migraines, and deliver quality software.

Cheers,
David

On Mon, Nov 24, 2008 at 8:34 PM, Raimond G. [email protected]
wrote:

Wow, if that’s it in a nutshell… :)

Pat

Thanks Pat, great summary.

I have to admit that I’m as crazy as Yehuda,
and believe that all we need are just acceptance tests,
at different layers of abstraction, for clients and developers.

There are different ways to ensure code works at different layers of
abstraction. One might even call these ATs for clients, and specs for
developers. ;)

That’s not to say that you couldn’t simply write Cucumber scenarios
for both of these levels. But it’s important to point out that while
the end goal is the same, the means and tools used to get there may
take different approaches.

I am glad that in this community there are such diverse opinions on
how to approach delivering quality software. It pushes the envelope,
challenges what we know now, and makes us each better at our craft.

Zach




rspec-users mailing list
[email protected]
http://rubyforge.org/mailman/listinfo/rspec-users


Zach D.
http://www.continuousthinking.com

David C. wrote:

The point of testing is not to follow a specific process. The
point is to instill confidence so you can continue to work without
migraines, and deliver quality software.

Cheers,
David

Thanks for sharing your experience and insight! Having never used
FitNesse I didn’t see that distinction at all. What you said makes a lot
of sense.
-Ben

On 25 Nov 2008, at 17:26, Ben M. wrote:

Thanks for sharing your experience and insight! Having never used
FitNesse I didn’t see that distinction at all. What you said makes a
lot of sense.
-Ben

Amen to that. Thanks guys, it’s been a fascinating and enlightening
discussion.

I am looking forward to the next chance I get to talk about this with
someone (who’s interested!) over a beer.

I don’t suppose any of you are going to XP Day, London, this year?

cheers,
Matt

Question: In Cucumber, when you’re writing code to satisfy steps and
accessing the model objects directly, what support for asserts,
responses, etc. do people use? (The equivalent of
ActionController::TestCase and ActiveSupport::TestCase, Fixtures, etc.)

Thanks,

T

Tim W. wrote:

Question: In Cucumber when you’re writing code to satisfy steps and
accessing the model objects directly, what support for asserts, responses, etc.
do people use. (the equivalent of ActionController::TestCase and
ActiveSupport::TestCase), Fixtures, etc.

Cucumber depends upon RSpec. Try here:

http://rspec.info/documentation/rails/writing/

Tim W. wrote:

Question: In Cucumber when you’re writing code to satisfy steps and
accessing the model objects directly, what support for asserts,
responses, etc.
do people use. (the equivalent of ActionController::TestCase and
ActiveSupport::TestCase), Fixtures, etc.

Cucumber depends upon RSpec.

No it doesn’t

Aslak
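For what it’s worth, one RSpec-free option is to mix a plain assertion module into the World object your steps run in. The sketch below is not the real Cucumber API, just a self-contained illustration of the idea using minitest’s assertion module (the model data is invented):

```ruby
require "minitest/assertions"

# A stand-in for Cucumber's World: mixing in Minitest::Assertions
# gives step bodies assert_* methods with no RSpec dependency.
class StepWorld
  include Minitest::Assertions
  attr_accessor :assertions     # counter Minitest::Assertions requires

  def initialize
    @assertions = 0
  end
end

world = StepWorld.new

# What a `Then` step body might do (article data is invented):
article = { title: "Hello", published: true }
world.assert article[:published], "expected the article to be published"
world.assert_equal "Hello", article[:title]
puts "#{world.assertions} assertions passed"
```

The same shape works with Test::Unit’s assertions module; either way the steps get familiar assert_* calls without pulling in a second expectation framework.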

Aslak Hellesøy wrote:

Cucumber depends upon RSpec.

No it doesn’t

Aslak

Forgive my misapprehension.

James B. wrote:

Aslak Hellesøy wrote:

Cucumber depends upon RSpec.

No it doesn’t

Aslak

Forgive my misapprehension.

So, where does one find a comprehensive list of expectations for
cucumber step matchers? Things like:

response.body.should =~ /pattern/

In my ignorance I have been using RSpec as a guide. I am looking in the
Cucumber rdocs but I do not recognize anything as an expectation.
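In RSpec that operator form comes from its expectations module rather than from Cucumber itself. Purely as an illustration, and emphatically not RSpec’s actual implementation, here is a tiny plain-Ruby sketch of how a should-style regex expectation of that shape can work: `should` wraps the receiver in an object whose `=~` raises when the match fails.

```ruby
# Wrapper returned by #should; its =~ turns a failed match into an error.
class ShouldWrapper
  def initialize(target)
    @target = target
  end

  def =~(pattern)
    unless @target =~ pattern
      raise "expected #{@target.inspect} to match #{pattern.inspect}"
    end
    true
  end
end

# Illustration only: a real library would add this far more carefully.
class String
  def should
    ShouldWrapper.new(self)
  end
end

body = "<h1>Welcome aboard</h1>"
puts(body.should =~ /Welcome/)    # prints true: the match succeeds

begin
  body.should =~ /Goodbye/
rescue RuntimeError => e
  puts e.message                  # the expectation failure message
end
```

In a real project you would get this behavior by requiring RSpec’s expectations (or another matcher library) rather than rolling your own.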