Mocks? Really?

On Dec 17, 2007 2:08 PM, Scott T. [email protected]
wrote:

the entire process.
I didn’t isolate what about fixture instantiation was slow. It reads
the yaml files and converts the hash into objects.
All I know is when I did the optimization, I got around a 30%
performance increase when loading all fixtures in all Examples.

I assume you were using instantiated fixtures, and not transactional
fixtures?
I was using transactional fixtures.
This was before Rails 2.0 fixture optimizations, so I’m not sure if
the same applies today.

On Dec 17, 2007, at 4:42 PM, Brian T. wrote:

I’ve had around 20-30% increases using in-memory SQLite.
I assume you were using instantiated fixtures, and not transactional
fixtures?

Scott
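
For reference, running specs against an in-memory SQLite database is
usually just a config/database.yml entry along these lines (a sketch,
assuming the sqlite3 adapter):

test:
  adapter: sqlite3
  database: ":memory:"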

I know the questions are directed towards Dan, so I hope you don’t mind
me chiming in. My comments are inline.

On Dec 15, 2007 2:17 AM, Pat M. [email protected] wrote:

objects (where “never” means you deviate only in extraordinary
circumstances)? I’m mostly curious in the interaction between
controller and model…if you use real models, then changes to the
model code could very well lead to failing controller specs, even
though the controller’s logic is still correct.

In Java you don’t mock domain objects because you want to program
toward an interface rather than a single concrete implementation.
Conceptually this still applies in Ruby, but because of differences
between the languages it isn’t a 1 to 1 mapping in practice.

With regard to controllers and models in Rails, I don’t want my
controller spec to use real model objects. The requirement of a model
existing can be found as part of the discovery process for what a
controller needs to do its job. If the implementation of a model is
wrong it isn’t the job of the controller spec to report the failure.
It’s the job of the model spec or an integration test (if it’s an
integration related issue) to report the failure.

When you make it the job of the controller spec to ensure that the real
model objects work correctly within a controller, it is usually because
there is a lack of integration tests and controller specs are being
used to fill the void.
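
To make that concrete, an isolated controller example might look
something like this (the controller, model and data here are invented;
mock_model and the partial mock on the class are plain Spec::Rails):

describe ProjectsController, "GET index" do
  it "asks the model layer for projects without touching the database" do
    project = mock_model(Project)
    Project.should_receive(:find).with(:all).and_return([project])

    get :index

    assigns[:projects].should == [project]
  end
end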

Also, controllers can achieve a better level of programming toward the
“interface” rather than a concrete class by using dependency
injection. For example, consider lightweight DI using the injection
plugin (http://atomicobjectrb.rubyforge.org/injection/):

class PhotosController < ApplicationController
  inject :project_repository

  def index
    @projects = @project_repository.find_projects
  end
end

There is a config/objects.yml file which exists to define what
project_repository is:

project_repository:
  use_class_directly: true
  class: Project

This removes any unneeded coupling between the controller and the
model. The most common thing I’ve seen in Rails, though, is to
partially mock the Project class in your spec. Although this works,
there is unnecessary coupling between your controller and a concrete
model class.
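
To show the difference without depending on the plugin’s internals,
here is a hand-rolled sketch of the same idea (names invented): the
spec can hand the controller any object that responds to
find_projects and never has to mention Project at all.

class PhotosController < ApplicationController
  attr_writer :project_repository

  def index
    @projects = project_repository.find_projects
  end

  private

  # defaults to the Project class, but a spec (or config) can swap it out
  def project_repository
    @project_repository ||= Project
  end
end

describe PhotosController, "GET index" do
  it "asks whatever repository it was given for the projects" do
    repository = mock("project repository")
    repository.should_receive(:find_projects).and_return([])
    controller.project_repository = repository

    get :index

    assigns[:projects].should == []
  end
end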

What is your opinion on isolating tests?

Tests should be responsible for ensuring an object works as expected.
So it’s usually a good thing to isolate objects under test to ensure
that they are working as expected. If you don’t isolate then you end
up with a lot of little integration tests. Now when one implementation
is wrong you get 100 test failures rather than 1 or 2, which can be a
headache when you’re trying to find out why something failed.

Do you try to test each
class in complete isolation, mocking its collaborators?

Yes. The pattern I find I follow in testing is that objects whose job
it is to coordinate or manage other objects (like controllers,
presenters, managers, etc) are always tested in isolation.
Interaction-based testing is the key here. These objects can be
considered branch objects. They connect to other branches or to leaf
node objects.

Leaf node objects are the end of the line and they do the actual work.
Here is where I use state based testing. I consider ActiveRecord
models leaf nodes.
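
A rough sketch of that split, with invented names: the branch object
gets an interaction-based spec, the leaf gets a state-based one.

describe ReportManager, "#publish" do
  it "coordinates its collaborators (interaction-based)" do
    report = mock("report")
    Report.should_receive(:find).with(42).and_return(report)
    report.should_receive(:publish!)

    ReportManager.new.publish(42)
  end
end

describe Invoice, "#total" do
  it "does the actual work (state-based)" do
    invoice = Invoice.new
    invoice.add_line_item(:price => 10, :quantity => 3)
    invoice.total.should == 30
  end
end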

A practice that I’ve been following, inspired by a coworker, has been
that an object should be a branch or a leaf, but not both. Most Rails
applications don’t follow anything like this and it’s common to find
bloated controllers and bloated models (most people, IMO, do not
understand the Skinny Controller, Fat Model post; bloated models are
unfortunately becoming a trend).

Objects and methods built into the language, standard library or
framework are exempt from my above statements. If a manager
coordinates return values from methods called on other objects and
pushes them onto an array, I don’t mock the call to Array.new.

When you use
interaction-based tests, do you always leave those in place, or do you
substitute real objects as you implement them (and if so, do your
changes move to a more state-based style)?

Leave the mocked out collaborators in place. An interaction based test
verifies that the correct interaction takes place. As soon as you
remove the mock and substitute it with a real object your test has
become compromised. It’s no longer verifying the correct interaction
occurs, it now only makes sure your test doesn’t die with a real
object.

If you do substitute in a real object, the only way you would be able
to maintain the integrity of the test is to partial mock your real
object to expect the right methods to be called. This will ensure that
the interaction continues to take place. But what happens is that the
test gets muddied up with things that don’t need to be there.
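
For example (Organizer and Project here are invented, and the create!
line assumes whatever attributes the real model would need):

# interaction stays verified with the mock collaborator:
project = mock("project")
project.should_receive(:archive!)
Organizer.new(project).close

# substituting a real object means partially mocking it to keep the
# interaction verified, which is where the muddying starts:
project = Project.create!(:name => "example")
project.should_receive(:archive!)
Organizer.new(project).close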

How do you approach
specifying different layers within an app?

One way to think about this is in terms of composition and
inheritance. When layers interact using composition you treat it and
test it differently than if you use inheritance. For example, a
ProjectsController using a @project_repository (see injection example
above) or a Project model subclassing ActiveRecord::Base.

I need to think about them some more though.

Do you use real
implementations if there are lightweight ones available, or do you
mock everything out?

For me it depends. With most Rails projects I’ve worked on there has
been one suite of integration tests against the application as a whole
and then a bunch of unit tests. The times this has differed are when
the application relied on third-party services. These services would
be replaced with dummy or lightweight implementations for my
integration tests (for example geocoding). Although there would be
another set of integration tests to specifically test our app against
the actual service.

An integration test should test that real objects are working together
correctly to produce the intended system behavior. You should never mock
objects out at this level, but you may need to provide stub
implementations for third party services.
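
As an example of what I mean by a dummy implementation (the geocoder
names here are invented), it is just an object that honours the real
client’s interface:

# used by the main integration suite instead of the real service
class FakeGeocoder
  # same interface as the real client, but never hits the network
  def locate(address)
    { :lat => 40.0, :lng => -75.0 }
  end
end

# wired in wherever the app looks up its geocoder, e.g. in the test
# environment: Geocoding.client = FakeGeocoder.new (a hypothetical hook)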

I realize that’s a ton of questions…I’d be very grateful (and
impressed!) if you took the time to answer any of them. Also I’d love
to hear input from other people as well.

It’s too bad we can’t just stand at a whiteboard and talk this out.
The answers to these questions could fill a book and email hardly does
it justice to provide clear, coherent and complete answers. Not that
my responses are “answers” to your questions, but it’s how I think
about testing and TDD.


Zach D.
http://www.continuousthinking.com

On 26 dec 2007, at 07:26, Jay D. wrote:

In all honesty I’m trying to


Be a better friend, newshound, and
know-it-all with Yahoo! Mobile. Try it now. http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ

You’re trying what? :)

gr,
bartz

If the
implementation of a model is
wrong it isn’t the job of the controller spec to
report the failure.
It’s the job of the model spec or an integration
test (if it’s an
integration related issue) to report the failure.

It seems that it would be very easy to change a model,
thereby breaking the controller, and not realize it.
Let’s say that we decide to change the implementation
of a model, how do you then go about finding the
controllers that need to be updated? I know this is
the classic argument between classicists and mockists,
but I don’t see the benefit of this type of strict
mocking. If the integration test is required then what
benefit are we getting from the mock and is it worth
the cost?

Also, controllers can achieve a better level of
programming toward the
“interface” rather than a concrete class by using
dependency
injection.

I don’t see any reason to use DI in a dynamic language
like ruby. I also see no reason in this specific case.
Let’s assume we’re working on a rails social
networking site. If we have a Blog controller and a
Blog model class there is no reason to use DI to
inject the blog model in the blog controller. It isn’t
removing unneeded coupling, it’s adding unneeded
complexity. In Java this injection is necessary to
make things like testing easier, but it is wholly
unnecessary in a language like ruby.

If you don’t
isolate then you end
up with a lot of little integration tests. Now when
one implementation
is wrong you get 100 test failures rather than 1 or
2, which can be a
headache when you’re trying to find out why
something failed.

This has never been a headache for me. If you run your
tests often you’ll know what was changed recently and
it’s trivial to find the problem. Also, if you run
localized tests frequently you’ll see the error
without seeing the failures that it causes throughout
the test suite and you still get the benefit of mini
integration tests ;)

In all honesty I’m trying to


On Dec 26, 2007 1:26 AM, Jay D. [email protected] wrote:

Let’s say that we decide to change the implementation
of a model, how do you then go about finding the
controllers that need to be updated?

The integration test will die if you broke functionality.

I know this is
the classic argument between classicists and mockists,
but I don’t see the benefit of this type of strict
mocking. If the integration test is required then what
benefit are we getting from the mock and is it worth
the cost?

At either level an integration test is required. I prefer extracting it
out into its own test so I can simplify (by which I mean isolate) my
controller. The other option is to give the controller spec the dual
responsibility of testing that the controller works correctly by
itself and also that it works correctly with real models.

By isolating the controller and doing interaction-based testing I find
that I end up with simpler controllers and more small, simple objects.
I think this is because my tests become increasingly painful to write
the more crap I try to shove into my controller. I have learned to
listen to them and start extracting out other objects when my tests
become painful, because it’s usually a sign.

I also prefer acceptance test driven development, which is TDD on top
of top-down development, so interaction-based testing is important
since the model is usually one of the last things I create.

Let’s assume we’re working on a rails social
networking site. If we have a Blog controller and a
Blog model class there is no reason to use DI to
inject the blog model in the blog controller. It isn’t
removing unneeded coupling, it’s adding unneeded
complexity. In java this injection is necessary to
make things like testing easier, but it is wholly
unnecessary in a language like ruby.

This is the Jamis B. argument: DI as it is implemented in Java is
unneeded in Ruby. Needle and Copland are Java-style implementations in
Ruby and they should be avoided. I do not agree that DI is wholly
unneeded. In my experience the Injection library has been very
lightweight and it has worked well in my controllers for Rails apps.
The only way to get around DI is to have every class/module know about
every other class/module it deals with, OR to reopen classes and
override methods which would supply an object. Both of these have
their shortcomings.

I am not advocating using DI for the sake of DI, but it can be useful.
For example, I often extract out date, authentication, etc. helpers
and managers. So in my BlogsController there may be a reference to the
Blog model because as you say it is not unneeded coupling, however my
BlogsController requires authentication and rather than dealing with a
LoginManager directly it deals with a @login_manager. Having my
BlogsController know about the LoginManager implementation is unneeded
coupling. It needs to be able to authenticate, it doesn’t need to know
which implementation it uses to authenticate.
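
Sketched out with the same injection plugin (LoginManager, the method
names and the route are all invented here):

class BlogsController < ApplicationController
  inject :login_manager

  before_filter :authenticate

  private

  # the controller only knows it has *something* that can authenticate;
  # which implementation that is lives in config/objects.yml
  def authenticate
    unless @login_manager.authenticated?(session)
      redirect_to :controller => "sessions", :action => "new"
    end
  end
end

and in config/objects.yml:

login_manager:
  use_class_directly: true
  class: LoginManager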

From a development perspective you end up with a declarative list of
objects your implementation will rely on. It’s highly readable what
your controller depends on to do its job. This is a supporting +1 in my
opinion.

This has never been a headache for me. If you run your
tests often you’ll know what was changed recently and
it’s trivial to find the problem. Also, if you run
localized tests frequently you’ll see the error
without seeing the failures that it causes throughout
the test suite and you still get the benefit of mini
integration tests ;)

I agree we should be running tests frequently.

One of the things you didn’t hit on is how you test objects which
coordinate interactions vs those that do the work, i.e. those
branch/leaf object scenarios. How do you see testing those? Do you see
the separation of testing concerns as non-existent because only doing
state-based testing will cause every failure (even when an object is
working correctly, but the objects it’s coordinating are broken)?

I guess if the LoginManager is working correctly it seems wrong in
principle and practice to have it be red when the User object it is
working with is broken.

On Dec 23, 2007 2:27 PM, Zach D. [email protected] wrote:

I know the questions are directed towards Dan, so I hope you don’t mind
me chiming in. My comments are inline.

Thanks a lot for your comments, I really appreciate them. I’ve been
dying to respond to this for the past several days, but haven’t had
internet access.

objects, but not hit the database, would you “never” mock domain
objects (where “never” means you deviate only in extraordinary
circumstances)? I’m mostly curious in the interaction between
controller and model…if you use real models, then changes to the
model code could very well lead to failing controller specs, even
though the controller’s logic is still correct.

In Java you don’t mock domain objects because you want to program
toward an interface rather than a single concrete implementation.
Conceptually this still applies in Ruby, but because of differences
between the languages it isn’t a 1 to 1 mapping in practice.

I must be misunderstanding you here, because you say you “don’t mock
domain objects,” and the rest of your email suggests that you mock
basically everything.

Leaf node objects are the end of the line and they do the actual work.
Here is where I use state based testing. I consider ActiveRecord
models leaf nodes.

What about interactions between ActiveRecord objects? If a User
has_many Subscriptions, do you mock out those interactions? Would you
still mock them out if User and Subscription were PROs (plain Ruby
objects) and persistence were handled separately?

object.
This leads to perhaps a more subtle case of my previous
question…ActiveRecord relies pretty heavily on the real classes of
objects. To me, this means that it would make more sense to use mocks
if you didn’t use AR, but use the real objects when you are using AR.
Again, this is only between model objects. I agree that controller
specs should mock them all out.

I realize that’s a ton of questions…I’d be very grateful (and
impressed!) if you took the time to answer any of them. Also I’d love
to hear input from other people as well.

It’s too bad we can’t just stand at a whiteboard and talk this out.
The answers to these questions could fill a book and email hardly does
it justice to provide clear, coherent and complete answers. Not that
my responses are “answers” to your questions, but it’s how I think
about testing and TDD.

Thanks again for your thoughtful reply. Looking forward to hearing a
little bit more.

Pat

On Dec 26, 2007 3:23 PM, Pat M. [email protected] wrote:

circumstances)? I’m mostly curious in the interaction between
domain objects," and the rest of your email suggests that you mock
Interaction-based testing is the key here. These objects can be
considered branch objects. They connect to other branches or to leaf
node objects.

Leaf node objects are the end of the line and they do the actual work.
Here is where I use state based testing. I consider ActiveRecord
models leaf nodes.

What about interactions between ActiveRecord objects? If a User
has_many Subscriptions, do you mock out those interactions?

For me it depends. If I am testing my User object and it has a custom
method called
find_subscriptions_which_have_not_expired_but_which_has_not_been_read_in_over_n_days,
I will not mock out any interactions with the subscriptions at that
point. This is for two reasons. One, when I first get this to work I
may do it in pure Ruby code (no SQL help) just to get it working. At
some later date/time this is going to move to SQL. I want my test to
not have to change in order to do this. If I were interaction-based
testing this custom find method then it wouldn’t really help me ensure
I didn’t break something. Secondly, I view my model as a leaf node
object. For most of what my model does I want to state-based test the
thing to ensure the results are what I want (and not the
interactions).
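
The spec for something like that stays state-based, so it survives the
move from pure Ruby to SQL untouched (attribute names and data setup
here are invented):

describe User, "finding unexpired, unread subscriptions" do
  it "returns unexpired subscriptions that have gone unread too long" do
    user  = User.create!
    stale = user.subscriptions.create!(:expires_on => 1.month.from_now,
                                       :last_read_at => 2.weeks.ago)
    fresh = user.subscriptions.create!(:expires_on => 1.month.from_now,
                                       :last_read_at => 1.day.ago)

    result = user.find_subscriptions_which_have_not_expired_but_which_has_not_been_read_in_over_n_days(7)

    result.should include(stale)
    result.should_not include(fresh)
  end
end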

Sometimes I find there is a method where I will mock an association
because I truly don’t care about the result, and I really only care
about the interaction. For example if User delegates something to the
Subscription class either via a delegate declaration or a simple
method which delegates. For example:

delegate :zip_code, :to => :address

OR

def zip_code
  address.zip_code
end
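
and the spec for that only cares about the interaction, along these
lines:

describe User, "#zip_code" do
  it "asks its address for the zip code" do
    user = User.new
    address = mock("address")
    user.stub!(:address).and_return(address)
    address.should_receive(:zip_code).and_return("19106")

    user.zip_code.should == "19106"
  end
end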

Would you
still mock them out if User and Subscription were PROs (plain Ruby
objects) and persistence were handled separately?

Possibly. I think it depends on how the objects use each other, what
kind of mini-frameworks or modules were in place to give
functionality, etc. Since models get most of their functionality
through inheritance from ActiveRecord::Base it would be difficult to
compare without knowing how my PROs were hooked up. Composition or
inheritance makes a difference in my head right now. Do you have any
specific concrete examples in mind?

become compromised. It’s no longer verifying the correct interaction
occurs, it now only makes sure your test doesn’t die with a real
object.

This leads to perhaps a more subtle case of my previous
question…ActiveRecord relies pretty heavily on the real classes of
objects. To me, this means that it would make more sense to use mocks
if you didn’t use AR, but use the real objects when you are using AR.
Again, this is only between model objects. I agree that controller
specs should mock them all out.

I agree with this. This is largely how I work now as described above.

my responses are “answers” to your questions, but it’s how I think
about testing and TDD.

Thanks again for your thoughtful reply. Looking forward to hearing a
little bit more.

ditto,

On Dec 26, 2007 3:23 PM, Pat M. [email protected] wrote:

circumstances)? I’m mostly curious in the interaction between
domain objects," and the rest of your email suggests that you mock
basically everything.

I’ll respond more later, but wanted to point out that that should be
“In Java you mock domain objects because you want to …”. I don’t
know how the “n’t” got in there. ;)

Zach D.
http://www.continuousthinking.com

I don’t know if anyone else will find this thought useful, but:

I think different programmers have different situations, and they
often force different sorts of priorities. I feel like a lot of the
talk about mocking – particularly as it hedges into discussions of
modeling, design as part of the spec-writing process, LoD, etc –
implicitly assumes you want to spend a certain percentage of your
work-week delineating a sensible class design for your application,
and embedding those design ideas into your specs. At the risk of
sounding like a cowboy coder I’d like to suggest that some situations
actually call for more tolerance of chaos than others.

I can think of a few forces that might imply this:

  • Team size. A bigger team means the code’s design has to be more
    explicit, because of the limits of implicit knowledge team members
    can get from one another through everyday conversation, etc.
  • How quickly the business needs change. Designs for medical imaging
    software are likely to change less quickly than those of a consumer-
    facing website, which means you might have more or less time to tease
    out the forces that would lead you to an optimal design.

In my case: I work in a small team (4 Rails programmers) making
consumer-facing websites, so the team is small and the business needs
can turn on a dime. From having been in such an environment for
years, I feel like I’ve learned to write code that is just chaotic
enough and yet still works. When I say “just chaotic enough”, I mean
not prematurely modeling problems I don’t have the time to fully
understand, but still giving the code enough structure and tests that

  1. stupid bugs don’t happen, and
  2. I can easily restructure the code when the time seems right.

In such an environment, mocking simply gets in my way. If I’m writing,
say, a complex sequence of steps involving the posting of a form,
various validations, an email getting sent, a link getting clicked,
and changes being made in the database, I really don’t want to also
have to write a series of mocks delineating every underlying call
those various controllers are making. At the time I’m writing the
spec, I simply don’t understand the problem well enough to write good
lines about what should be mocked where. In a matter of hours or days
I’ll probably end up rewriting all of that stuff, and I’d rather not
have it in my way. We talk about production code having a maintenance
cost: Spec code has a maintenance cost as well. If I can get the same
level of logical testing with specs and half the code, by leaving out
mocking definitions, then that’s what I’m going to do.

As an analogy: I live in New York, and I’ve learned to have semi-
compulsive cleaning habits from living in such small places. When you
have a tiny room, you notice clutter much more. Then, a few years
ago, I moved to a much bigger apartment (though “much bigger” is
relative to NYC, of course). At first, I was cleaning just as much,
but then I realized that I simply didn’t need to. Now sometimes I
just leave clutter around, on my bedside table or my kitchen counter.
I don’t need to spend all my life neatening up. And if I do lose
something, I may not find it instantly, but I can spend a little
while and look for it. It’s got to be somewhere in my apartment, and
the whole thing’s not even that big.

Francis H.

Hi all - I’ve been keeping an eye on this thread and I’ve just been
too busy with holiday travel and book writing to participate as I
would like.

I’m just going to lay out some thoughts in one fell swoop rather than
going back through and finding all the quotes. Hope that works for
you.

First - “mock roles, not objects” - that comes from a paper of the
same name written by Steve Freeman, Nat Pryce, Tim Mackinnon, Joe
Walnes who I believe were all working for ThoughtWorks, London in
2004. They describe using mocks as part of a process to stay focused
on one object at a time and let mock objects help you to discover the
interfaces of the current object’s collaborators. My read is that they
do not make a distinction between domain objects and service objects,
though they do make a distinction between “your” objects (which you
should mock) and “everyone else’s” (which you should not).

My own approach is largely derived from this document, and I’d
recommend that everyone participating in this thread give it a read:
http://www.jmock.org/oopsla2004.pdf.

I think one place that we tend to get stuck on, and this is true of
TDD in general, not just mocks, is that mocks need not be a permanent
part of any example. Before I encountered Rails it was common for me
to use mocks in a test and then replace them with the mocked object
later. This decision would depend on many factors, and I can’t say
that I sought to eliminate mocks when I could, but there were times
when it just made more sense to use a real object once it came to be.

Rails is a different beast because we don’t really have a sense of 3
layers with lots of little objects in each. Instead we have what
amount to 3 giant objects with lots of behavior in each and even
shared state across layers. For me, this rationalizes isolating things
with mocks and stubs (which is counter to the recommendation in the
oopsla paper referenced above). Because the framework itself provides
virtually no isolation, the spec suite must if you want isolation.

Zach’s idea of branch nodes and leaf nodes really speaks to me. I
don’t remember where I read this, but I long ago learned that an ideal
OO operation consists of a chain of messages over any number of
objects, culminating at a boundary object (what Zach is calling a leaf
node). It should also favor commands over queries (Tell Don’t Ask), so
while all of the getters we get for free on our AR model objects are
convenient, from an OO perspective it’s a giant encapsulation-sieve
(again, more reason to isolate things w/ stubs/mocks in tests).

You might find
http://www.holub.com/publications/notes_and_slides/Everything.You.Know.is.Wrong.pdf
interesting in regards to this. In this paper, Holub suggests that getters are
evil and we should use importers and exporters instead of exposing
getters/setters. If we were to re-engineer rails to satisfy this,
instead of this in a controller:

def index
  @model = Model.find(params[:id])
  render :template => "some_template"
end

you might see something more like this:

def index
  Model.export(params[:id]).to(some_template)
end

Here Model would still do a query, but it becomes internal to the
Model (class) object. Then it passes some_template to the model and
says “export yourself”, at which point the model starts calling
methods like name=self.name. The fact that the recipient of the export
is a view is unknown to the model, so there is no conceptual binding
between model and view.
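
A rough sketch of what that export/import shape could look like in
Ruby (entirely invented, just to make the idea concrete):

class Model < ActiveRecord::Base
  # Model.export(id) does the query internally and hands back an exporter
  def self.export(id)
    Exporter.new(find(id))
  end

  class Exporter
    def initialize(model)
      @model = model
    end

    # pushes the model's data into whatever importer it is given;
    # the model never knows the importer happens to be a view/template
    def to(importer)
      importer.name       = @model.name
      importer.created_at = @model.created_at
      importer
    end
  end
end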

Ah, just think of how easy THAT would be to mock, and when things are
easy to mock, it means that it is easy to swap out components in the
chain of events, thus using run-time conditions to alter the path of a
given operation through different objects.

There is much, much more to say, but this is all I have time to
contribute right now.

Cheers, and Happy New Year to all!

David

On Dec 29, 2007 5:46 PM, Francis H. [email protected] wrote:

I don’t know if anyone else will find this thought useful, but:

I think different programmers have different situations, and they
often force different sorts of priorities. I feel like a lot of the
talk about mocking – particularly as it hedges into discussions of
modeling, design as part of the spec-writing process, LoD, etc –
implicitly assumes you want to spend a certain percentage of your
work-week delineating a sensible class design for your application,
and embedding those design ideas into your specs.

The fact is that you are going to spend time on designing, testing and
implementing anyways. It is a natural part of software
development. You cannot develop software without doing these
things. The challenge is to do it in a way that better supports the
initial development of a project as well as maintenance and continued
development.

can get from one another through everyday conversation, etc.
This argument doesn’t pan out. First, it’s highly unlikely that
the same developers are on a project for the full lifetime of the
project. Second, this fails to account for the negative impact of bad
code and design. The negative impact includes the time it takes to
understand the bad design, find and fix obscure bugs, and extend the
code with new features or change existing ones.

  • How quickly the business needs change. Designs for medical imaging
    software are likely to change less quickly than those of a consumer-
    facing website, which means you might have more or less time to tease
    out the forces that would lead you to an optimal design.

This doesn’t pan out either. Business needs also change at infrequent
intervals. Company mergers, new or updated policies, new or updated
laws, the new CEO wanting something, etc are things that don’t happen
every day, but when they do happen it can have a big impact. The goal
of good program design isn’t to add unnecessary complexity which
accounts for these.

The goal of good program design is to develop a system that is simple,
coherent and able to change to support the initial development of a
project as well as maintenance and continued development.

The ability to “change” is relative – every program design can be
changed. There are certain practices and disciplines that can allow
for easier change though, change that reinforces the goal of good
program design. The Law of Demeter is one of them. Simple objects with
a single responsibility are another, reinforcing the separation of
concerns concept. Testing is another.

The concept of an “optimal” design implies there is one magical design
that will solve all potential issues. This puts people in the “design,
then build” mindset – the idea that if the design is perfect then all
you have to do is build it. We know this is not correct.

In my case: I work in a small team (4 Rails programmers) making
consumer-facing websites, so the team is small and the business needs
can turn on a dime. From having been in such an environment for
years, I feel like I’ve learned to write code that is just chaotic
enough and yet still works. When I say “just chaotic enough”, I mean
not prematurely modeling problems I don’t have the time to fully
understand, but still giving the code enough structure and tests that

  1. stupid bugs don’t happen, and
  2. I can easily restructure the code when the time seems right.

The challenge is to write code that is not chaotic, and to learn to do
it in a way that allows the code to be more meaningful and that enhances
your ability to develop software rather than hinders it.

cost: Spec code has a maintenance cost as well. If I can get the same
level of logical testing with specs and half the code, by leaving out
mocking definitions, then that’s what I’m going to do.

I think we should make a distinction. In my head, when you need to
write code and explore so you can understand what is needed in order
to solve a problem, I call that a “spike”.

I don’t test spikes. They are an exploratory task which help me
understand what I need to do. When I understand what I need to do I
test drive my development. Now different rules apply for when you use
mocks. In previous posts in this thread I pointed out that I tend to
use a branch/leaf node object guideline to determine where I use mocks
and when I don’t.

the whole thing’s not even that big.
Two things about this bother me. One, this implies that from the
get-go it is ok to leave crap around an application code base. Two,
this builds on the concept of an “optimal” design, by way of
spending your life neatening up.

I am going to rewrite your analogy in a way that changes the meaning
as I read it, but hopefully conveys what you wanted to get across:
"
I do not want to spend the life of a project refactoring a code base
to perfection for the sake of ideological views on what code should
be. I want to develop a running
program for my customer. And where I find the ideals clashing with
that goal I will abandon the ideals. Knowing this, parts of my
application may be clutter or imperfect, but I am ok with this and so
is my customer – he/she has a running application.
"

If this is what you meant then I agree with you. The question is, are
there things you can learn or discover which better support the goal
of developing software for your customer, for the initial launch as
well as maintenance and ongoing development? If so, what are the ones
that can be learned and how do they apply? And for the ones you
discover, be sure to share with the rest of us. =)

Finally, IMO mocking and interaction-based testing have a place in
software development, and when used properly they add value to the
software development process.


Zach D.
http://www.continuousthinking.com

On 12/29/2007 5:46 PM, Francis H. wrote:

  • How quickly the business needs change. Designs for medical imaging
    software are likely to change less quickly than those of a consumer-
    facing website, which means you might have more or less time to tease
    out the forces that would lead you to an optimal design.

A few weeks ago, I ran across the following comment, explaining away 200
lines of copied-and-pasted internal structures in lieu of encapsulation,
in what was once the world’s largest consumer-facing web site:

/* Yes, normally this would be              */
/* incredibly dangerous -- but MainLoop is  */
/* very unlikely to change now (spring '00) */

Careful about those assumptions.

Jay L.

On Dec 30, 2007, at 1:42 AM, Zach D. wrote:

and embedding those design ideas into your specs.

The fact is that you are going to spend time on designing, testing and
implementing anyways. It is a natural part of software
development. You cannot develop software without doing these
things. The challenge is to do it in a way that better supports the
initial development of a project as well as maintenance and continued
development.

I certainly didn’t mean to imply that you shouldn’t do any design or
testing. If I had to guess at my coding style versus the average
RSpec user, based on what’s been said in this thread, I’d guess that
I do about as much writing of tests/specs, and probably spend less
time designing. But there is certainly such a thing as overdesigning,
as well, right? I’m always trying to find the right amount, and I
suspect that “the right amount” can vary somewhat in context.

explicit, because of the limits of implicit knowledge team members
can get from one another through everyday conversation, etc.

This argument doesn’t pan out. First, it’s highly unlikely that
the same developers are on a project for the full lifetime of the
project. Second, this fails to account for the negative impact of bad
code and design. The negative impact includes the time it takes to
understand the bad design, find/fix obscure bugs and to extend with
new features or changing to existing ones.

Again, I did not say “if you have a small team you don’t have to do
any design at all.” I said that perhaps if you have a much smaller
team you can spend a little less time on design, because implicit
knowledge is much more effectively communicated.

Are you disagreeing with this point? Are you saying that two software
projects, one with four developers and the other with forty, will
ideally spend the exact same percentage of time thinking about
modeling, designing, etc.?

accounts for these.
I wasn’t saying that some businesses’ needs never change. The point I
was trying to make is that in some sorts of businesses and companies,
change happens more often, and can be expected to happen more often
based on past experience.

The challenge is to write code that is not chaotic, and to learn to do
it in a way that allows the code to be more meaningful and that
enhances
your ability to develop software rather than hinders it.

I wonder if part of the disconnect here depends on terminology. Some
might see “chaos” as a negative term; I don’t. There are plenty of
highly chaotic, functional systems, both man-made and natural.
Ecosystems, for example, are chaotic: They have an order that is
implicit through the collective actions of all their agents. But that
order is difficult to understand, since it’s not really written down.

I guess that’s what I’m trying to express when applying the word
“chaos” to code: It functions for now, but perhaps the way it works
isn’t as expressive as it could be for a newcomer coming to the code.

Another thing I’d express is that I find a codebase to be asymmetrical
in terms of how much specification each individual piece needs. I
find it surprising, for example, when people want to test their Rails
views in isolation. I write plenty of tests when I’m working, but I
try to have a sense of which pieces of code require a more full
treatment. I’ll extensively test code when the cost/benefit ratio
makes sense to me, trying to think about factors such as:

  • how hard is it to write the test?
  • how hard is the code, and how many varied edge cases are there that
    I should write down?
  • are there unusual cases that I can think of now, that should be
    embodied in a test?

cost: Spec code has a maintenance cost as well. If I can get the same
understand what I need to do. When I understand what I need to do I
test drive my development. Now different rules apply for when you use
mocks. In previous posts in this thread I pointed out that I tend to
use a branch/leaf node object guideline to determine where I use mocks
and when I don’t.

My understanding of a spike is to write code that explores a problem
that you aren’t certain is solvable at all, given a certain set of
constraints. That’s not the lack of understanding I’m talking about:
I’m more addressing code that I know is easily writeable, but there
are a number of issues regarding application design that I haven’t
worked out yet. I’d rather write a test that encapsulates only the
external touchpoints – submit a form, receive an email, click on the
link in the email – and leave any deeper design decisions to a few
minutes later, when I actually begin implementing that interaction.

There’s another kind of “not understanding” that’s also relevant
here: A “not understanding” due to the fact that you don’t have all
the relevant information, and you can’t get it all now. For example:
You release the very first iteration of a website feature on Monday,
knowing full well that the feature’s not completed. But the reason
you release it is because on Wednesday you want to collect user data
regarding this feature, which will help you and the company make
business decisions about where the feature should go next.

while and look for it. It’s got to be somewhere in my apartment, and
the whole thing’s not even that big.

Two things about this bother me. One, this implies that from the
get-go it is ok to leave crap around an application code base.

Well, not to belabor the analogy, but: It’s not “crap”. If it’s in my
apartment, I own it for a reason. I may not use it all the time, it
may not be the most important thing in my life, but apparently I need
it once in a while or else I’d throw it away. I may not spend all my
time trying to find the optimal place to put it, but that doesn’t
mean I don’t value it. I just might value it less than other things
in my apartment.

program for my customer. And where I find the ideals clashing with
that goal I will abandon the ideals. Knowing this, parts of my
application may be clutter or imperfect, but I am ok with this and so
is my customer – he/she has a running application.
"

That’s probably close to what I’m trying to say. But in a broader,
philosophical sense, I’m okay with the fact that my code is never
going to be perfect. Not at this job, not at any other job. In fact I
don’t know if I’ve ever met anybody who gets to write perfect code.
We write code in the real world, and the real world’s far from
perfect. I suppose Wabi Sabi comes into play here.

To bring it back to mocks: It seems to me that mocks might play a role
in your specs if you were highly focused on the design and
interaction of classes in isolation from all other classes, but
understanding that isolation involves having done a decent amount of
design work – though more in some cases than in others. But if you
were living with code that was more chaotic/amorphous/what-have-you,
prematurely embedding such design assumptions into your specs might do
more harm than good.

I do, incidentally, use mocks extensively in a lot of code, but only
in highly focused cases where simulating state of an external
resource (filesystem, external login service) seems extremely
important. Of course, that usage of mocks is very different from
what’s recommended as the default w/ RSpec.

Francis H.

On 12/30/2007 3:29 PM, Francis H. wrote:

lines of copied-and-pasted internal structures in lieu of
code. Regardless of what company I was working at, pretty much any
code that would probably break a few years out would be a problem.

Of course, you can’t predict with 100% accuracy which parts of the
code are likely to change and which are likely to go untouched for
years. I think you can make educated guesses, though.

You can, and we did… turns out they weren’t. (OK, I’m exaggerating.
Most of them were, and the only guesses I’m finding now are the wrong
guesses, by definition. And the world’s changed a lot, and we know more
and can do more.)

Incidentally, how well-tested was that code base? 200 lines of copy-
and-paste smells like untested code to me.

15-20 years ago, unit tests were not a widespread industry practice :)
This code’s in a procedural language that really, really doesn’t do
unit tests well. I’ve been trying, too. Almost wrote a pre-processor,
till I thought about the maintenance nightmare that’d cause.

Jay L.

On Dec 30, 2007, at 1:52 PM, Jay L. wrote:

encapsulation,
in what was once the world’s largest consumer-facing web site:

/* Yes, normally this would be              */
/* incredibly dangerous -- but MainLoop is  */
/* very unlikely to change now (spring '00) */

Careful about those assumptions.

Yeah, well, there’s a difference between chaotic code and foolish
code. Regardless of what company I was working at, pretty much any
code that would probably break a few years out would be a problem.

Of course, you can’t predict with 100% accuracy which parts of the
code are likely to change and which are likely to go untouched for
years. I think you can make educated guesses, though.

Incidentally, how well-tested was that code base? 200 lines of copy-
and-paste smells like untested code to me.

Francis H.

On Dec 30, 2007, at 9:38 PM, Jay L. wrote:

Incidentally, how well-tested was that code base? 200 lines of copy-
and-paste smells like untested code to me.

15-20 years ago, unit tests were not a widespread industry practice :)
This code’s in a procedural language that really, really doesn’t do
unit tests well. I’ve been trying, too. Almost wrote a pre-processor,
till I thought about the maintenance nightmare that’d cause.

Right, that’s why I ask. I think working with languages, tools, and
frameworks that are easier to test is a great advantage compared to how
we all worked 10 or more years ago … I suspect part of that luxury
translates into being able to actually design less, since the cost of
fixing our design mistakes in the future goes down significantly.

Francis H.

On Dec 29, 2007, at 5:46 PM, Francis H. wrote:

actually call for more tolerance of chaos than others.

I can think of a few forces that might imply this:

  • Team size. A bigger team means the code’s design has to be more
    explicit, because of the limits of implicit knowledge team members
    can get from one another through everyday conversation, etc.
  • How quickly the business needs change. Designs for medical imaging
    software are likely to change less quickly than those of a consumer-
    facing website, which means you might have more or less time to tease
    out the forces that would lead you to an optimal design.

+1 - This helps my thinking out a lot. Thanks for the contributions,
as always (this has been a great thread - from everyone involved).

Scott

On 12/30/2007 1:42 AM, Zach D. wrote:

I think we should make a distinction. In my head when you need to write code
and explore so you can understand what is needed in order to solve a problem I
call that a “spike”.

That’s great; I’ve been needing a term for exactly that and never saw
this word used.

On Dec 29, 2007 5:46 PM, Francis H. [email protected] wrote:

At first, I was cleaning just as much,
but then I realized that I simply didn’t need to. Now sometimes I
just leave clutter around, on my bedside table or my kitchen counter.

I’m having the opposite problem. I moved from a huge house (that I
specifically designed to always look clean) to a good-sized apartment,
and discovered I’m a disgusting, unsanitary slob. (I have no idea how
this relates to RSpec. I just wanted to share.)

Jay L.

On Dec 31, 2007 12:53 PM, Rick DeNatale [email protected] wrote:

till I thought about the maintenance nightmare that’d cause.
‘ahead of time’ and inventing a design, you can discover the design as
you go.

I don’t think it is “designing less” either. It’s designing better and
doing it smarter, knowing that you’ll never fully comprehend the domain
of your problem upfront, so you discover it iteratively. As you discover
more about the domain, the design of your program changes (during
refactoring) to support the domain model it is representing.

This is a concept from Domain Driven Design.

If Francis is referring to doing less upfront design to try to master
it all from the outset, then I agree that less of that is better. But
that is entirely different than just doing less design.