Assumption tests

Hi all,

I’ve been thinking about the whole validator/relationship speccing
issue, and I came up with a suggestion, which I’d love to get some
feedback on.

The full article is available at
http://www.inter-sections.net/2007/10/19/what-to-test-and-specify-and-where-to-do-it/ ,
with the relevant bit being about halfway down, but here’s the gist of it:

  1. “@user.new(some attr) … @user.should be_valid” is not behaviour
    specification, it’s outcome specification, and as such should not be
    in any spec. It also happens to be testing someone else’s code (the
    Rails validation code), which shouldn’t need to be specified since we
    didn’t write it.

  2. The reason why people (myself included) feel compelled to include
    stuff like that is, in great part, because it helps codify our
    assumptions about the way ActiveRecord (or any other external
    components) work, which are sometimes not clear (as with non-trivial
    validations) and liable to change as Rails evolves.

Therefore, these are a new kind of beast - not a system integration
test (not system-wide or anything to do with users), not a behaviour
specification (it doesn’t specify our own code, and it’s outcome
driven) - but instead what I’m currently calling an “assumption test”.

I feel that these should be formalised, because writing something like:

it "should validate_presence_of digits" do
  PhoneNumber.expects(:validates_presence_of).with(:digits)
  load "#{RAILS_ROOT}/app/models/phone_number.rb"
end

is only meaningful as a specification if you assume that
“validates_presence_of :digits” is the right syntax to use. It is
therefore based on an assumption about ActiveRecord that should be
explicitly tested for at the unit level, so that if Rails behaves
in a different way, you’ll know about it at the unit level.
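
For concreteness, here is a minimal sketch of what one of these assumption tests might look like. The AssumedRecord class, its table and the example wording are hypothetical; the point is that it exercises real ActiveRecord rather than mocking it, so it fails if Rails changes the behaviour we assumed:

class AssumedRecord < ActiveRecord::Base
  # assumes a throwaway "assumed_records" table with a `name` column exists
  set_table_name "assumed_records"
  validates_presence_of :name
end

describe "assumption: ActiveRecord validates_presence_of" do
  it "should mark a record with a nil attribute as invalid, with the expected message" do
    record = AssumedRecord.new(:name => nil)
    record.should_not be_valid
    record.errors.on(:name).should == "can't be blank"
  end
end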

So my suggestion would be that we create an “assumptions” folder
somewhere in the Rails folder hierarchy, so that we have three beasts:
assumptions, specs, and stories.

Obviously this could have dire consequences that I haven’t thought
of, which is why I’d like to hear other people’s opinions about
this. I’ve discussed some aspects of this briefly on #rspec with
David (chelimsky) (I’m swombat), but would love more opinions about
it, and it seems that all the fun stuff happens on the mailing list :)

Thanks for any feedback,

Daniel
http://www.inter-sections.net/
(swombat on freenode#rspec)

But don’t you really just want to test the behavior of the class?
(whereas the validator call is an implementation)
such as

it "should require digits" do
  p = PhoneNumber.new(:digits => nil)
  p.should_not be_valid
  p.errors.on(:digits).should == "can't be blank"
end

On 10/19/07, Jonathan L. [email protected] wrote:

But don’t you really just want to test the behavior of the class?

The object, not the class.

(whereas the validator call is an implementation)
such as

it "should require digits" do
  p = PhoneNumber.new(:digits => nil)
  p.should_not be_valid
  p.errors.on(:digits).should == "can't be blank"
end

If I read correctly, Daniel is suggesting that this is not behaviour
because he’s equating behaviour with interaction. This example checks
an outcome, not an interaction.

Personally, I don’t draw boundaries around what is behaviour at
interaction. For me it’s what does it look like from the outside as
opposed to what it looks like on the inside. State testing is
perfectly fine, as long as you’re testing externally observable state
as opposed to internal state. For example, imagine that p.valid?
relies on an internal flag named @valid. Doing this would be fine:

p.should be_valid

But doing this would be evil:

p.instance_eval { @valid }.should be_true

The idea is to decouple specs from the things that change so they are
less brittle. Internal structure tends to change more than an object’s
API. Make sense?

So with that, I really don’t think there is a need for a new grouping
of tests. That’s my opinion. I look forward to everyone else’s.

Cheers,
David

If I read correctly, Daniel is suggesting that this is not behaviour
because he’s equating behaviour with interaction. This example checks
an outcome, not an interaction.

That’s right, one of my axioms is that “specifying” involves
behaviour/interaction, not state/outcome.

The reason for this is that I believe that this is the only way to
gain the full benefit from the fact that behaviours can be laid on
top of each other without interfering with each other (the benefit
being robustness of the specifications). This is all explained in
another post on my blog, but I’m not trying to advertise it, only to
have a discussion :)

Daniel

On 10/19/07, Daniel T. [email protected] wrote:

If I read correctly, Daniel is suggesting that this is not behaviour
because he’s equating behaviour with interaction. This example checks
an outcome, not an interaction.

That’s right, one of my axioms is that “specifying” involves
behaviour/interaction, not state/outcome.

To be honest, I think you’re way off the mark. Objects behave in two
ways: they manipulate their state, or they interact with other
objects. Both are valid types of behavior and can be used for
specification.

There’s a more subtle problem with your argument though, and that’s
that as far as specs are concerned, there is no state!

Example:

given
  @account = Account.new 500
when
  @account.withdraw 300
then
  @account.balance.should == 200

The behavior of this object is quite simple. However, do we know
anything about the implementation? Do we care to know?

You might reasonably think Account is implemented as

class Account
  attr_reader :balance

  def initialize(balance = 0)
    @balance = balance
  end

  def withdraw(amount)
    @balance -= amount
  end
end

You’d be wrong though. It’s actually implemented as

class Account
  def initialize(balance = 0)
    Transaction.new self, balance
  end

  def withdraw(amount)
    Transaction.new self, -amount
  end

  def balance
    Transaction.for(self).inject(0) { |sum, t| sum += t.amount }
  end
end

(that’s the beauty of mailing list examples - I’m always right! :)

Calling withdraw doesn’t reduce the account balance. It reduces the
balance reported by the account. It’s a subtle distinction, and one
that’s not important to think about 99% of the time. Hopefully though
you see why it’s a fallacy to discount state-based testing as a valid
specification technique. As long as you’re using an object’s API, and
not digging into its internal state as in David’s evil example,
you’re dealing with behavior.
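
(If you want to actually run the second version, a minimal in-memory Transaction that would support it might look like the sketch below. It is my own guess at a supporting class, not part of Pat’s example.)

class Transaction
  attr_reader :amount

  # keeps every transaction, grouped by the account that created it
  @@all = Hash.new { |transactions, account| transactions[account] = [] }

  def initialize(account, amount)
    @amount = amount
    @@all[account] << self
  end

  def self.for(account)
    @@all[account]
  end
end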

Pat

On Oct 20, 2007, at 5:34 am, Scott T. wrote:

As I see it, the problem is that ActiveRecord is not like a typical
library, which is completely external. With rails,
ActiveRecord::Base (and co) become your code.

Surely the implication is that the only way to thoroughly spec an
ActiveRecord::Base subclass would be to have shared specs available
for the full behaviour of Base and it_should_behave_like them into
your own specs? As it happens, I use very little inheritance so
possibly there is a better way to handle this.

Of course, if you wanted to split up your specs into specs/, stories/
and assumptions/, you would certainly be free to.

I have used assumption specs before, to test things in Rails views
that were not mockable, but I put them immediately before the specs
on my modifications. They are there to flag the thing that needs
fixing. If I felt the need to have an assumption spec folder, I’d
start feeling like there was something fundamentally wrong that
needed a more decisive solution, like switching to a new framework…

Following up on the last idea: One thing that I don’t think is yet
widely understood is that there is no such thing as a “unit” or
“integration” test - tests happen on a continuum (the classification
of a test is not a black and white sort of thing).

I’m glad someone said that. For a couple of months now I’ve been
struggling to define “unit test” and “integration test”. I realised
not long ago that thoroughly testing an app with a reasonably complex
object model could easily involve 5-10 layers of testing. Each one
would extend the black box from individual objects and messages
right up to firing the app off on the command line / web server / whatever
and testing the output. Most of these would be brittle and semi-
redundant, but they would tell you exactly where your app was going
wrong.

Ashley


blog @ http://aviewfromafar.net/
linked-in @ http://www.linkedin.com/in/ashleymoran
currently @ home

On 10/20/07, Ashley M. [email protected] wrote:

Following up on the last idea: One thing that I don’t think is yet
widely understood is that there is no such thing as a “unit” or
“integration” test - tests happen on a continuum (the classification
of a test is not a black and white sort of thing).

I’m glad someone said that.

Dave A. said that 2 years ago.

http://blog.daveastels.com/2005/07/05/a-new-look-at-test-driven-development

On Oct 20, 2007, at 4:50 pm, David C. wrote:

Dave A. said that 2 years ago.

Yes but I’m new to this :)

Hang on while I catch up!


blog @ http://aviewfromafar.net/
linked-in @ http://www.linkedin.com/in/ashleymoran
currently @ home

On 10/19/07, Scott T. [email protected] wrote:

Following up on the last idea: One thing that I don’t think is yet
widely understood is that there is no such thing as a “unit” or
“integration” test - tests happen on a continuum (the classification
of a test is not a black and white sort of thing). Your “assumption
tests” are basically model-level integration tests, which are not as
fine grained as the normal spec (which would mock/stub out AR::Base,
associations, validations (with load…), etc).

While vocabulary is important, particularly shared vocab, it’s even
more important not to get too hung up on it. I’ve read plenty of
sources that say a unit test only touches one class, and if you’re
interacting with more than one class then it’s an integration test.
That’s a silly distinction to make - as you point out, there’s no
clear definition of what a unit test is. You should be comfortable
working at the appropriate level of granularity, whether it’s writing
a spec for one small method in a given context, or for a couple
collaborating objects.

Also, I probably wouldn’t go up to Kent Beck and say, “your book was
nice and all, but I wish you would have admitted that you were writing
INTEGRATION tests instead of unit tests.” :)

I think it becomes too easy to use process or arbitrary constraints as
a crutch, instead of simple careful thought. If you find yourself
doing strange or painful things in order to make them fit some
definition, then you’re doing yourself a disservice and need to step
back and evaluate your goals.

Pat

On 10/20/07, Pat M. [email protected] wrote:

I think it becomes too easy to use process or arbitrary constraints as
a crutch, instead of simple careful thought. If you find yourself
doing strange or painful things in order to make them fit some
definition, then you’re doing yourself a disservice and need to step
back and evaluate your goals.

Agreed. This is exactly why we talk about stories and specs instead of
integration and units. I realize that I’ve slung the term integration
tests around when talking about stories, so I apologize if I’ve
added to the confusion.

The distinction we make between stories and specs is that stories
describe how a system behaves in terms of a user’s experience, whereas
specs describe how an object behaves in terms of another object’s
experience.

Cheers,
David

On Oct 20, 2007, at 5:09 pm, Pat M. wrote:

I think it becomes too easy to use process or arbitrary constraints as
a crutch, instead of simple careful thought. If you find yourself
doing strange or painful things in order to make them fit some
definition, then you’re doing yourself a disservice and need to step
back and evaluate your goals.

That all hinges on developers wanting to apply simple careful thought
though. [must… not… be… bitter…]

Is there any way to get a team using BDD when the most they want to
apply IS process or arbitrary constraints? Or is that a lost cause?

Ashley


blog @ http://aviewfromafar.net/
linked-in @ http://www.linkedin.com/in/ashleymoran
currently @ home

On Oct 20, 2007, at 11:50 AM, David C. wrote:

On 10/20/07, Ashley M. [email protected] wrote:

Following up on the last idea: One thing that I don’t think is yet
widely understood is that there is no such thing as a “unit” or
“integration” test - tests happen on a continuum (the classification
of a test is not a black and white sort of thing).

I’m glad someone said that.

I was thinking of him all the way (I never claimed I originated the
idea!). The reason that I brought it up was that it doesn’t seem to
be repeated much on this mailing list, the way some other things are,
like “spec the behaviour, not the implementation”.

Another idea of Dave A.’s which I think has been lost is that each
spec should not map one-to-one onto each implementation file (if you
rename the file, do you rename the spec? If you create tiny inner
classes, or start delegating to other classes, do you create other
spec files, or include them in the current one?). Honestly, this is
another one of those ideas which seems like it should be some sort of
mantra, but I’ve never seen it on this mailing list. Or maybe it’s
just the state of Autotest.

Scott

On 10/20/07, Scott T. [email protected] wrote:

Another idea of Dave A.’s which I think has been lost is that each
spec should not map one-to-one onto each implementation file (if you
rename the file, do you rename the spec? If you create tiny inner
classes, or start delegating to other classes, do you create other
spec files, or include them in the current one?). Honestly, this is
another one of those ideas which seems like it should be some sort of
mantra, but I’ve never seen it on this mailing list. Or maybe it’s
just the state of Autotest.

I’ve actually come round on this one. I still believe firmly that
there should never be a 1-1 mapping of examples to methods, but I’ve
come to appreciate the ease of navigation afforded by mapping a single
spec file (with potentially many specs) to a single production code
file.

I’m not recommending 1-1 spec file/code file as an absolute guideline.
I’m certainly not there 100% myself. In another thread going on today
I described how I sometimes use shared behaviours to deal with
partials (instead of mocking the partial calls). There is, in a sense,
a 1-1 mapping there, but the shared behaviour is only indirectly
mapped to the partial.

And mapping files 1-1 doesn’t just aid human navigation. It supports
Autotest, as you point out, and it enables tools like TextMate to
support single command navigation between a spec and the code it is
describing.

All of that said, I must reiterate my very strong belief that mapping
one example to one method is the kiss of death.

Cheers,
David

On Oct 19, 2007, at 12:29 PM, Pat M. wrote:

To be honest, I think you’re way off the mark. Objects behave in two
ways: they manipulate their state, or they interact with other
objects. Both are valid types of behavior and can be used for
specification.

[snip]

Calling withdraw doesn’t reduce the account balance. It reduces the
balance reported by the account. It’s a subtle distinction, and one
that’s not important to think about 99% of the time. Hopefully though
you see why it’s a fallacy to discount state-based testing as a valid
specification technique. As long as you’re using an object’s API, and
not digging into its internal state as in David’s evil example,
you’re dealing with behavior.

Couldn’t agree with Pat more.

As I see it, the problem is that ActiveRecord is not like a typical
library, which is completely external. With rails,
ActiveRecord::Base (and co) become your code. This should be
obvious from the fact that you are descending from a base class which
can never be instantiated. Maybe this is how you got on to the whole
assumption test in the first place?

Of course, if you wanted to split up your specs into specs/, stories/
and assumptions/, you would certainly be free to.

Following up on the last idea: One thing that I don’t think is yet
widely understood is that there is no such thing as a “unit” or
“integration” test - tests happen on a continuum (the classification
of a test is not a black and white sort of thing). Your “assumption
tests” are basically model-level integration tests, which are not as
fine grained as the normal spec (which would mock/stub out AR::Base,
associations, validations (with load…), etc).

Scott

On 20 Oct 2007, at 17:34, David C. wrote:

Agreed. This is exactly why we talk about stories and specs instead of
integration and units. I realize that I’ve slung the term integration
tests around when talking about stories, so I apologize if I’ve
added to the confusion.

The distinction we make between stories and specs is that stories
describe how a system behaves in terms of a user’s experience, whereas
specs describe how an object behaves in terms of another object’s
experience.

If a spec describes how an object behaves in terms of another
object’s experience, doesn’t that mean that a spec describes an
object’s interactions with other objects?

I know I keep coming back to this. I promise I’m not doing it to be a
pain, but because I’m curious about whether there is something in
pushing this thing to the extreme and saying “it’s not a spec unless it
describes what the object does” (as opposed to an outcome once the
object does something).

Here’s where I’m coming from. Take
http://behaviour-driven.org/TDDAdoptionProfile as the compass of
where we’re going. Then, if we’re only using BDD as “TDD done right”,
we’re actually still stuck at step 5, testing what the object looks
like after we prod it in a certain way. Instead, we should be
defining behaviour, i.e. what the object actually does, both
internally and externally, in response to a prod.

Fundamentally, I think the focus shift is from “what just happened”
to “what happens next”. i.e.:

TDD: Foo. Is Bar set to the right value? What about Baz?

BDD: This should happen; then this should happen; then this. Tell me
if that doesn’t happen when I say “foo”. Foo.

This is the way I’m seeing it at the moment and the reason why I’m
resisting writing things like @something.should be_somestate in my
specs, which I think is a test of what just happened rather than a
specification of what should happen next.

Also, interestingly, I’m finding I don’t have that resistance when
writing Stories, which I regard as just “better written integration
tests” (which are critical to making sure that all those behaviour
specs actually amount to a working system).

I guess the question someone could ask here is “if your specs are not
proving that your system works, what do they prove?” - my answer to
that is that they prove that my code is still doing what I specified
it should do - e.g. if I specified that when I call a certain method
the current user should receive a :delete call, the spec ensures that
this continues to happen no matter what other things I specify that
that method should do.

Daniel

On 10/20/07, Daniel T. [email protected] wrote:

If a spec describes how an object behaves in terms of another
object’s experience, doesn’t that mean that a spec describes an
object’s interactions with other objects?

Sometimes yes, sometimes no. Not every method of every object needs
to interact with another object. Eventually you want to have several
objects that are able to each do one thing and do it well, without
needing to ask other objects for help.

I know I keep coming back to this. I promise I’m not doing it to be a
pain, but because I’m curious about whether there is something in
pushing this thing to the extreme and saying “it’s not a spec unless it
describes what the object does” (as opposed to an outcome once the
object does something).

It’s important to think about and explore different ideas, but you’re
taking this too far. Taking the example you gave in IRC yesterday:

@account.instance_variable_set(:@balance, mock_balance)
mock_balance.should_receive(:-=).with(300)
@account.withdraw 300

That is just an insane level of granularity. You’re not specifying
what the object does, you’re specifying how it does it. What it
does is withdraw - we express that through good naming. Now you need
some way of verifying that it actually does that properly. You can
explicitly do it by setting the instance variable to a mock and
setting an expectation on it, or you can do it implicitly by
exercising another behavior of the object - in the case of account,
expecting an ending balance.

In this case, you’re specifying the implementation rather than the
behavior. If we were writing some app that made use of an Account
class, we would have gotten to a point where the Account is
sufficiently granular that you can safely use state-based
verification. Account has a specific job, and the details of how it
does that job are its own responsibility, and for nobody else to know
anything about. That’s what we call encapsulation.
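
For contrast, the implicit, state-based version of that spec - a sketch only - verifies the withdrawal through another behaviour of the object (the balance it reports) rather than through its instance variables:

describe Account, "#withdraw" do
  it "should be reflected in the balance the account reports" do
    account = Account.new(500)
    account.withdraw(300)
    account.balance.should == 200
  end
end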

Here’s where I’m coming from. Take
http://behaviour-driven.org/TDDAdoptionProfile as the compass of
where we’re going. Then, if we’re only using BDD as “TDD done right”,
we’re actually still stuck at step 5, testing what the object looks
like after we prod it in a certain way. Instead, we should be
defining behaviour, i.e. what the object actually does, both
internally and externally, in response to a prod.

You seem to believe that the only way to define behavior is in terms
of interactions with other objects. That is flat-out wrong. Please
read http://martinfowler.com/articles/mocksArentStubs.html.

I guess the question someone could ask here is “if your specs are not
proving that your system works, what do they prove?” - my answer to
that is that they prove that my code is still doing what I specified
it should do - e.g. if I specified that when I call a certain method
the current user should receive a :delete call, the spec ensures that
this continues to happen no matter what other things I specify that
that method should do.

And that’s fine, when you’re specifying interactions between objects
of different levels of abstraction. Objects in one level should think
of objects in a lower level only in terms of interface. But you’ll
also find that those sorts of specs tend to be more brittle, and
rightly so - interfaces are quite simply more difficult to change,
because there are other objects that depend on that interface being
consistent. What you’re doing is applying this at an extreme level,
which leads to brittle specs that don’t add any value.
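
To make that concrete, here is a sketch of the kind of cross-layer interaction spec that does earn its keep - the AccountTerminator class and its close_account method are hypothetical names, and the lower-level user object appears only as a mock:

describe AccountTerminator do
  it "should tell the user to destroy itself when the account is closed" do
    user = mock("user")
    user.should_receive(:destroy)   # the interaction being specified
    AccountTerminator.new(user).close_account
  end
end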

Honestly, your Account#withdraw example from IRC yesterday still has
me bewildered. It’s an example of particularly gross
overspecification, and maintaining that sort of codebase would be a
nightmare! That’s why I say there’s no room for dogmatically trying
to apply terms like “unit” or “integration.” The whole point of agile
development is to deliver software that satisfies the customer.
Nothing else matters. There are only tools that help us achieve that
goal, but they can most certainly hurt us if used improperly.

Pat

On Oct 20, 2007, at 7:05 pm, Daniel T. wrote:

I know I keep coming back to this. I promise I’m not doing it to be a
pain, but because I’m curious about whether there is something in
pushing this thing to the extreme and saying “it’s not a spec unless it
describes what the object does” (as opposed to an outcome once the
object does something).

Well just the other day, I wrote two equally small classes that I
specced in different ways. One used open-uri, and the spec stubs out
the call and return value. The other extracted the element
from HTML using Hpricot, but the spec never mentioned Hpricot. Aside
from the obvious need to avoid real network connections, I just had a
gut feeling that one was more suitable to interaction-based speccing
than the other. It seemed that the methods called on the Hpricot
library were irrelevant, as long as it sent the correct string back,
yet I’ve written other code, both simpler and more complex, that I
felt the need to mock out.
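
Roughly, and with hypothetical class names, the two styles looked something like this. The first spec stubs the open-uri call on the instance (assuming the class calls Kernel#open on itself), while the second never mentions Hpricot and only checks the observable result:

require 'open-uri'
require 'stringio'
require 'hpricot'

class PageFetcher
  def fetch(url)
    open(url).read              # Kernel#open, overridden by open-uri
  end
end

class TitleExtractor
  def title_of(html)
    Hpricot(html).at("title").inner_text
  end
end

describe PageFetcher do
  it "should return the body of the page" do
    fetcher = PageFetcher.new
    # interaction-style: stub the call so no real network connection is made
    fetcher.stub!(:open).and_return(StringIO.new("<html/>"))
    fetcher.fetch("http://example.com/").should == "<html/>"
  end
end

describe TitleExtractor do
  it "should pull the title out of an HTML document" do
    # state-style: only the returned string is specified
    TitleExtractor.new.title_of("<title>Hi</title>").should == "Hi"
  end
end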

Here’s where I’m coming from. Take
http://behaviour-driven.org/TDDAdoptionProfile as the compass of
where we’re going. Then, if we’re only using BDD as “TDD done right”,
we’re actually still stuck at step 5, testing what the object looks
like after we prod it in a certain way. Instead, we should be
defining behaviour, i.e. what the object actually does, both
internally and externally, in response to a prod.

I didn’t get the impression from that article that there was anything
magical about steps 6 and 7. If anything, I interpreted it as “if
you’re at step 7, you can say your TDD is ‘done right’ and deserves
to be called BDD”. I’m assuming people talked about the behaviour
discovered doing TDD, before the term BDD was drawn up. I don’t know -
I wasn’t around then.

I guess the question someone could ask here is “if your specs are not
proving that your system works, what do they prove?” - my answer to
that is that they prove that my code is still doing what I specified
it should do - e.g. if I specified that when I call a certain method
the current user should receive a :delete call, the spec ensures that
this continues to happen no matter what other things I specify that
that method should do.

To me, they prove that a subsystem works. They mean you can add a
single model validation without having to retest the entire app - as
long as you know your controller checks that all models are valid,
you don’t really care what those validations are. Specs let you
check these little bits of code in every conceivable way, when the
permutations they make as part of the larger app would be almost
uncountable.

That’s my current opinion at least, although it gets revised on an
almost weekly basis.

Ashley


blog @ http://aviewfromafar.net/
linked-in @ http://www.linkedin.com/in/ashleymoran
currently @ home

On 10/20/07, Daniel T. [email protected] wrote:

On 20 Oct 2007, at 17:34, David C. wrote:

The distinction we make between stories and specs is that stories
describe how a system behaves in terms of a user’s experience, whereas
specs describe how an object behaves in terms of another object’s
experience.

If a spec describes how an object behaves in terms of another
object’s experience, doesn’t that mean that a spec describes an
object’s interactions with other objects?

Not necessarily.

When I say “from another object’s perspective” I mean that the spec is
acting as another object, whereas in a Story, the story is acting as a
human user. Or, at the very least, an external user (a person or
another program).

Again, I think you’re wanting to equate interaction testing with BDD.
While we do espouse using mocks as a discovery tool, that doesn’t mean
they are the only tool.

Also - there is the never ending debate about whether mocks should
stay in place once the real objects do come to life. There is no one
answer to that question. The trick is to balance a few things:

  1. mocks make your specs more brittle (encourages removing mocks)
  2. mocks help you focus on the behaviour of one object rather than
    that object and its collaborators (encourages leaving mocks)
  3. mocks help isolate you from expensive parts of the system
    (encourages leaving mocks)

Keep in mind too (I think I’ve brought this up before) that if
foo_collection#<< appends a foo to its internal Array, you would
likely not want to do this:

describe FooCollection do
  it "should store a foo in its internal Array" do
    internal_array = mock("internal array")
    mock_foo = mock("foo")
    foo_collection = FooCollection.new(internal_array)
    internal_array.should_receive(:<<).with(mock_foo)
    foo_collection << mock_foo
  end
end

If, however, adding to a collection meant making a call to an external
service (database, web service, etc) you might do this:

describe FooCollection do
  it "should message the service when it gets a new foo" do
    foo_service = mock("foo service")
    mock_foo = mock("foo")
    foo_collection = FooCollection.new(foo_service)
    foo_service.should_receive(:add_foo).with(mock_foo)
    foo_collection << mock_foo
  end
end

These two specs are basically the same, and I can tell you that I
would likely NOT write the first one, but I would very likely write
the second one. This means that my decision is based on the
implementation, which might bug our purist BDD sensibilities.

Which brings us to an important point. As with software in general, BDD
is all about balance. It requires thought. It requires weighing
opposing forces and making a practical decision.

Also keep in mind that all of the principles that we espouse can be
traced to a single source: productivity. It boils down to trying to
come up with practices that save our employers money. So no matter how
pure or beautiful something appears, if it ends up increasing project
costs (and I mean long term, from conception to obsolescence) then
it’s not a good candidate for adoption. Similarly, if something
decreases long term project costs, then it’s worth looking at, even if
it violates some other sensibilities that we’ve already developed.

Cheers,
David

These two specs are basically the same, and I can tell you that I
would likely NOT write the first one, but I would very likely write
the second one. This means that my decision is based on the
implementation, which might bug our purist BDD sensibilities.

Which brings us to an important point. As with software in general, BDD
is all about balance. It requires thought. It requires weighing
opposing forces and making a practical decision.

Doesn’t bug me, at all. What bugs me is that this sort of mocking is
not in some other mock library or plugin, which would do this pattern
for me. These common patterns (for Rails) should be abstracted away
(as some have been doing with Rails’ association proxy). There is no
reason that all of those implementation details of another library
need to be crowding the intention of my specs. Until this happens, I
need to know all of these details about how another library works,
which, for me, is too far off the scale (no balance there)…

Scott

On 10/20/07, Scott T. [email protected] wrote:

Doesn’t bug me, at all. What bugs me is that this sort of mocking is
not in some other mock library or plugin, which would do this pattern
for me.

How would such a library know whether you want to store your
FooCollection’s Foos in an Array or submit it to a FooService? How
would it know the APIs of all potential FooServices?

Perhaps there are solutions for specific libraries - e.g.
ActiveRecord, but even then you’re going to have to specify structure
in advance (has_many vs has_many :through) and consequently change your
specs when you decide to change your model.

These common patterns (for Rails) should be abstracted away
(as some have been doing with Rails’ association proxy). There is no
reason that all of those implementation details of another library
need to be crowding the intention of my specs. Until this happens, I
need to know all of these details about how another library works,
which, for me, is too far off the scale (no balance there)…
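
As a rough illustration of the kind of matcher being asked for, here is a sketch built on the matches?/failure_message protocol that RSpec custom matchers follow - the matcher itself is hypothetical, not an existing plugin:

class ValidatePresenceOf
  def initialize(attribute)
    @attribute = attribute
  end

  def matches?(model_class)
    @model_class = model_class
    record = model_class.new(@attribute => nil)
    record.valid?
    !record.errors.on(@attribute).nil?
  end

  def failure_message
    "expected #{@model_class} to validate the presence of #{@attribute}"
  end

  def negative_failure_message
    "expected #{@model_class} not to validate the presence of #{@attribute}"
  end
end

def validate_presence_of(attribute)
  ValidatePresenceOf.new(attribute)
end

# usage in a spec:
#   PhoneNumber.should validate_presence_of(:digits)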

We’re all looking forward to your matcher libraries :)

Cheers,
David
