Forum: RSpec validate_presence_of

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Yi W. (Guest)
on 2009-02-18 02:24
(Received via mailing list)
Hello,

according to this post:
http://blog.davidchelimsky.net/2009/1/13/rspec-1-1...

I should be able to write:

describe User do
  it {should valdate_presence_of(:login)}
end

with rspec 1.1.12

But I got:

NO NAME
undefined method `valdate_presence_of' for
#<Spec::Rails::Example::ModelExampleGroup::Subclass_1:0x2513298>

What did I do wrong? Thanks for helping.

Yi
Yi W. (Guest)
on 2009-02-18 02:50
(Received via mailing list)
Sorry for the spam, I realized there was a typo. It should be
       it {should validate_presence_of(:login)}
It still didn't work
David C. (Guest)
on 2009-02-18 03:19
(Received via mailing list)
On Tue, Feb 17, 2009 at 6:25 PM, Yi Wen <removed_email_address@domain.invalid> 
wrote:
> Sorry for the spam, I realized there was a typo. It should be
>        it {should validate_presence_of(:login)}
> It still didn't work

Scrolling up a bit ...

  "There are a few matcher libraries out there like
rspec-on-rails-matchers that provide matchers like this
validate_presence_of(:email)"

validate_presence_of() is not part of rspec-rails. You can find
libraries that offer comparable matchers at:

http://github.com/thoughtbot/shoulda/tree/master
http://github.com/joshknowles/rspec-on-rails-match...
http://github.com/technoweenie/rspec_on_rails_on_c...
http://github.com/carlosbrando/remarkable/tree/master

HTH,
David
Tim G. (Guest)
on 2009-02-18 04:36
(Received via mailing list)
> But I got:
>
> NO NAME
> undefined method `valdate_presence_of' for
> #<Spec::Rails::Example::ModelExampleGroup::Subclass_1:0x2513298>


Hi Yi,

I believe you're looking for validate_presence_of - you missed an "i"
in validate.
Yi W. (Guest)
on 2009-02-18 05:06
(Received via mailing list)
ah! sorry, my bad. Thanks!
David C. (Guest)
on 2009-02-18 05:16
(Received via mailing list)
On Tue, Feb 17, 2009 at 8:42 PM, Yi Wen <removed_email_address@domain.invalid> 
wrote:
> ah! sorry, my bad. Thanks!

No worries - I always just read the code first too :)
Fernando P. (Guest)
on 2009-02-19 02:39
Yi Wen wrote:
> Hello,
>
> according to this post:
> http://blog.davidchelimsky.net/2009/1/13/rspec-1-1...
>
> I should be able to write:
>
> describe User do
>   it {should valdate_presence_of(:login)}
> end

What's the point in testing validates_presence_of for a model? It's
already tested in the framework, and so readable that a quick glance at
the model says it all. I would only test it if I added some bizarre
behavior with procs and so on.

What's the community's position about that?
Pat M. (Guest)
on 2009-02-19 03:32
(Received via mailing list)
On Wed, Feb 18, 2009 at 4:39 PM, Fernando P. 
<removed_email_address@domain.invalid>
wrote:

I'm with you, but there's nothing even *close* to consensus on this.
A lot of people liken it to double-entry bookkeeping.

Pat
Zach D. (Guest)
on 2009-02-19 03:55
(Received via mailing list)
On Wed, Feb 18, 2009 at 7:39 PM, Fernando P. 
<removed_email_address@domain.invalid>
wrote:
> What's the point in testing validates_presence_of for a model?

IMO...

It definitely doesn't provide any design benefits since that decision
has already been made for you, but it does provide a developer-level
example to communicate the behaviour of a model, and it covers
regression. I prefer both of these.

The flip-side would be that if you and your customer wrote scenarios
which communicated the same intent and were executable, then you'd
also achieve these benefits. While this works, it often doesn't fit
the bill for a scenario very well (I prefer less verbose, higher-level
declarative scenarios), and it misses out on providing developer
documentation for how a particular object should behave.

For me, a non-option is to have neither of these in place.


>  It's
> already tested in the framework, and so readable that a quick glance on
> the model says it all. I would only test it if I added some bizarre
> behavior with procs and so on.

What's tested in the framework is that validates_presence_of works.
What isn't tested is that your Item model needs to ensure it always
belongs to an :order. I'm more interested in the fact that Item always
requires an order. I expect the Rails developers to do their due
diligence to ensure validates_presence_of works in all of its
intricacies.

>
> What's the community's position about that?

What Pat said.




--
Zach D.
http://www.continuousthinking.com
http://www.mutuallyhuman.com
Alex S. (Guest)
on 2009-02-19 03:56
(Received via mailing list)
On 19/02/2009, at 11:39 , Fernando P. wrote:

> What's the point in testing validates_presence_of for a model? It's
> already tested in the framework, and so readable that a quick glance
> on
> the model says it all.

Some people want the spec to stand as a contract, so you can then hand
the spec over to the proverbial trained monkeys and have them write
all the necessary code from scratch exactly the way you want it written.

These are not people I enjoy working with, so I play loose with the
specs and only spec stuff that matters to me at the time, code that
little bit, and get on with the next terribly pressing task.

Alex
Zach D. (Guest)
on 2009-02-19 04:04
(Received via mailing list)
On Wed, Feb 18, 2009 at 8:47 PM, Alex S. <removed_email_address@domain.invalid>
wrote:
> On 19/02/2009, at 11:39 , Fernando P. wrote:
>
>> What's the point in testing validates_presence_of for a model? It's
>> already tested in the framework, and so readable that a quick glance on
>> the model says it all.
>
> Some people want the spec to stand as a contract, so you can then hand the
> spec over to the proverbial trained monkeys and have them write all the
> necessary code from scratch exactly the way you want it written.

I have never seen or heard of anyone who writes a spec (developer-level
RSpec spec) but not the code, and then hands it over to someone
else and demands that that person implement it. If you do, or have,
could you share? I'm interested in hearing about that experience.




--
Zach D.
http://www.continuousthinking.com
http://www.mutuallyhuman.com
Zach D. (Guest)
on 2009-02-19 04:12
(Received via mailing list)
On Wed, Feb 18, 2009 at 7:39 PM, Fernando P. 
<removed_email_address@domain.invalid>
wrote:
> What's the point in testing validates_presence_of for a model? It's
> already tested in the framework, and so readable that a quick glance on
> the model says it all. I would only test it if I added some bizarre
> behavior with procs and so on.
>

Question for folks who don't like writing any examples for this kind
of thing (including scenarios/steps). If I go tuck away some behaviour
behind a nice declarative interface, will you not care about having
examples showing that your objects utilize that behaviour?

Not testing things that have no logic makes sense. However, validation
methods have logic; it's just wrapped up behind a nice interface.
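That hidden logic can be sketched in a few lines of plain Ruby. This is a toy stand-in, not ActiveRecord's actual implementation (TinyModel and its method bodies are invented here): the declarative macro is one line in the class body, but it registers real checks that run on every call to valid?.

```ruby
# Toy sketch of the logic behind a validates_presence_of-style macro.
# Not ActiveRecord's real code -- just the shape of it.
class TinyModel
  def self.required_attrs
    @required_attrs ||= []
  end

  # The "nice interface": a one-line declaration in the class body...
  def self.validates_presence_of(*attrs)
    required_attrs.concat(attrs)
  end

  # ...wrapping real logic that runs on every call to valid?
  def valid?
    self.class.required_attrs.all? do |attr|
      value = send(attr)
      !(value.nil? || value.to_s.strip.empty?)
    end
  end
end

class Item < TinyModel
  attr_accessor :order
  validates_presence_of :order
end

item = Item.new
item.valid?          # false: no order set
item.order = "order-42"
item.valid?          # true
```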

> What's the community's position about that?



--
Zach D.
http://www.continuousthinking.com
http://www.mutuallyhuman.com
Jim G. (Guest)
on 2009-02-19 04:28
(Received via mailing list)
On Feb 18, 2009, at 7:39 PM, Fernando P. wrote:

> What's the point in testing validates_presence_of for a model? It's
> already tested in the framework, and so readable that a quick glance
> on
> the model says it all. I would only test it if I added some bizarre
> behavior with procs and so on.
>
> What's the community's position about that?

It shouldn't be a test, it's a spec. So it's done *before* you write
the code. The specs describe the behavior of the application.
Alex S. (Guest)
on 2009-02-19 05:04
(Received via mailing list)
On 19/02/2009, at 13:02 , Zach D. wrote:

> I have never seen or heard of anyone who writes a spec (developer
> level RSpec spec), but not the code and then hands it over to someone
> else and demands that that person implements it.

The fun begins when you can point out two or three conflicting
requirements on the first page, such as "end date should not be null"
right next to "a version with no end date is current for all dates
after the start date."

Then I sat the guy down and introduced him to autotest, committed the
specification to version control, removed all but the first three
entries, and showed how to build from small pieces.

So truthfully speaking, I've not yet worked in an environment where
RSpec was used to specify a design up front. And I certainly won't be
introducing new managers to RSpec before introducing them to unit
testing, then test-driven development, and then behaviour-driven
development. I prefer my feet intact and still attached to my legs!

Alex
David C. (Guest)
on 2009-02-19 05:09
(Received via mailing list)
On Wed, Feb 18, 2009 at 9:02 PM, Alex S. <removed_email_address@domain.invalid>
wrote:
> Then I sat the guy down and introduced him to autotest, committed the
> specification to version control, removed all but the first three entries
> and showed how to build from small pieces.
>
> So truthfully speaking, I've not yet worked in an environment where RSpec
> was used to specify a design up front. And I certainly won't be introducing
> new managers to RSpec before introducing them to unit testing, then
> test-driven development, and then behaviour-driven development. I prefer my
> feet intact and still attached to my legs!

Why not start w/ RSpec but do it right?
Mark W. (Guest)
on 2009-02-19 05:44
(Received via mailing list)
On Wed, Feb 18, 2009 at 4:39 PM, Fernando P. 
<removed_email_address@domain.invalid>
wrote:
>>
>> I should be able to write:
>>
>> describe User do
>>   it {should valdate_presence_of(:login)}
>> end
>
> What's the point in testing validates_presence_of for a model?

To make sure you wrote that line of code.

///ark
David C. (Guest)
on 2009-02-19 05:45
(Received via mailing list)
On Wed, Feb 18, 2009 at 9:42 PM, Mark W. <removed_email_address@domain.invalid> 
wrote:
> To make sure you wrote that line of code.
Close.

To make sure you "will" write that line of code.
Pat M. (Guest)
on 2009-02-19 05:47
(Received via mailing list)
On Wed, Feb 18, 2009 at 7:42 PM, Mark W. <removed_email_address@domain.invalid> 
wrote:
> To make sure you wrote that line of code.
And how do you make sure you wrote this one?
it {should valdate_presence_of(:login)}

:)
Alex S. (Guest)
on 2009-02-19 06:05
(Received via mailing list)
On 19/02/2009, at 14:05 , David C. wrote:

> Why not start w/ RSpec but do it right?

I made the mistake of showing the guy a spec from a previous project
and narrating (not showing) how the code was built from the spec. So
the manager didn't realise that the spec was built one line at a time.

My fault entirely, the guy is now "doing it right" :)

Alex
Pat M. (Guest)
on 2009-02-19 06:05
(Received via mailing list)
On Wed, Feb 18, 2009 at 6:06 PM, Zach D. <removed_email_address@domain.invalid>
wrote:
>>>   it {should valdate_presence_of(:login)}
> behind a nice declarative interface, will you not care about having
> examples showing that your objects utilize that behaviour?

That's a huge "depends" but yeah, basically.  I don't really test code
that can't possibly break.  Declarative code like Rails validations or
associations can't possibly break*, it can only be removed.  Don't
remove it unless you need to then, right?

I came to this conclusion re: validations/assocations by observing the
evolution of how people write specs for them.  You start off doing
something like:

describe User do
  it "should require a name" do
    User.new(:name => '').should have_at_least(1).error_on(:name)
  end
end

and after you write a bunch of those you look for a way to DRY up your
specs a bit so you write some kind of custom matcher.  Make it nice
and concise and you end up with shoulda macros:

describe User do
  should_require_attributes :name
end

You could literally write a couple lines of adapter code that would
take this specification and generate the production class!

def describe(klass, &block)
  (class << klass; self; end).class_eval do
    alias_method :should_require_attributes, :validates_presence_of
  end
  klass.class_eval &block
end
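Pat's adapter sketch can actually be exercised outside Rails with a stub in place of ActiveRecord. Everything below (the Model class and its recording validates_presence_of) is an invented stand-in; the point being demonstrated is that when the spec vocabulary maps one-to-one onto the declarative vocabulary, two lines of aliasing turn the "specification" into the implementation itself.

```ruby
# Stand-in for ActiveRecord: validates_presence_of just records its
# arguments so we can observe the effect of running the "spec".
class Model
  def self.validates_presence_of(*attrs)
    (@required ||= []).concat(attrs)
  end

  def self.required
    @required || []
  end
end

# Pat's adapter: alias the spec vocabulary onto the declarative one.
def describe(klass, &block)
  (class << klass; self; end).class_eval do
    alias_method :should_require_attributes, :validates_presence_of
  end
  klass.class_eval(&block)
end

class User < Model; end

# The "specification"...
describe User do
  should_require_attributes :name
end

# ...has become the declaration itself.
User.required   # => [:name]
```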

What does it give you?

I'm looking at the shoulda examples and chuckling at how ridiculous
the AR ones are (controller ones are nice, they use macros for stuff
that you can't program declaratively).

class PostTest < Test::Unit::TestCase
  should_belong_to :user
  should_have_many :tags, :through => :taggings

  should_require_unique_attributes :title
  should_require_attributes :body, :message => /wtf/
  should_require_attributes :title
  should_only_allow_numeric_values_for :user_id
end

and in AR (not 100% sure this makes it pass, I'm just writing, you get
the idea)

class Post < ActiveRecord::Base
  belongs_to :user
  has_many :tags, :through => :taggings

  validates_uniqueness_of :title
  validates_presence_of :body, :title
  validates_format_of :message, :with => /wtf/
  validates_numericality_of :user_id
end

There are two types of specification that I've found useful:
declaration and example.  Rails association and validation macros are
specification by declaration.  RSpec gives us specification by
example.  Effectively this means that a class's specification is split
between its implementation (declarative parts) and RSpec examples.

If your code is specified declaratively, you don't need to write
examples.

> Not testing things that have no logic makes sense. However, validation
> methods have logic, it's just wrapped up behind a nice interface.

Sure but can that logic break?

Pat

* Associations can break via changes to the db, but that will get
caught by other specs or acceptance tests that make use of the
associations
David C. (Guest)
on 2009-02-19 06:21
(Received via mailing list)
On Wed, Feb 18, 2009 at 9:45 PM, Pat M. <removed_email_address@domain.invalid>
wrote:
>>
>> To make sure you wrote that line of code.
>
> and how do you make sure you wrote this one?
> it {should valdate_presence_of(:login)}

By removing the line from the model, running the specs, and looking
for a failure :)
Yi W. (Guest)
on 2009-02-19 07:11
(Received via mailing list)
We should write a test/spec, whatever you call it, *first*, before we
write the code. But it doesn't mean that whoever writes the spec/test
will have a monkey write the code to pass it. To be realistic, a
programmer will write this test, and implement it right away. Just
like how TDD should be done.

Without this syntax sugar, we still have to test validates_presence_of
to make sure it's there and won't break, right? So this simple syntax
is nice because it's less code to type in. I really don't see how
trained monkeys come into play in this scenario. :)

I am not a huge fan of "spec contract" for unit testing. Unit testing
is a tool for developers to write better, DRY-er and more
loosely-coupled code. At most it is a communication tool among
developers. It's never meant to be for non-technical people / clients /
business people. Cucumber might serve that purpose.

Yi
Stephen E. (Guest)
on 2009-02-19 07:48
(Received via mailing list)
On Wed, Feb 18, 2009 at 10:42 PM, Mark W. <removed_email_address@domain.invalid> 
wrote:
> On Wed, Feb 18, 2009 at 4:39 PM, Fernando P. <removed_email_address@domain.invalid> 
wrote:
>>
>> What's the point in testing validates_presence_of for a model?
>
> To make sure you wrote that line of code.

And the circle spins round and round...

Specs that mirror the code that closely are a bad idea, I think.  The
problem with that example is that the syntax of the code is driving
the syntax of the spec, even if the spec was written first.  You're no
longer thinking about high-level behavior, you're thinking about the
presence of a certain line in Rails.

I write those sorts of model specs a little differently.  I just poke
at things and set expectations on whether they break.  I'd write this
example like:

describe User do
  before(:each) do
    @this = User.make_unsaved   # I use machinist for my factory methods
  end

  it "is valid" do
    @this.should be_valid
  end

  it "can save" do
    @this.save.should be_true
  end

  it "requires a login" do
    @this.login = nil
    @this.should_not be_valid
  end

  it "may have a password reminder" do
    @this.password_reminder = nil
    @this.should be_valid
  end

  it "does not allow duplicate logins" do
    @that = User.make(:login => "EvilTwin")
    @this.login = "EvilTwin"
    @this.should_not be_valid
  end
end

...And so forth.  It's wordier, but very readable, and it doesn't rely
on the validation being done with a specific Rails method.  In fact,
when I shifted to using Merb and Datamapper, I didn't have to change
these sorts of tests at all.

Also, while I used to be very anal and write "should
have(1).error_on(:login)" and such, I eventually realized that there's
no point.  Checking on 'valid?' is entirely sufficient.  The first
example proves that the default factory case is valid, so as long as
we're only changing one thing at a time, we know that that's the thing
that breaks validity.  (Or, in the case of "may have," *doesn't* break
validity.)
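The one-change-at-a-time argument can be sketched without Rails or machinist. Everything below is an invented stand-in that only mirrors the names from the spec above (make_unsaved, login, password_reminder): the baseline object is known-good, so each single mutation pins any validity change on exactly one attribute.

```ruby
class User
  attr_accessor :login, :password_reminder

  # Factory stand-in (machinist's make_unsaved): a known-good baseline.
  def self.make_unsaved
    user = new
    user.login = "someone"
    user.password_reminder = "a hint"
    user
  end

  def valid?
    !(login.nil? || login.strip.empty?)
  end
end

# "is valid": proves the baseline passes, so later failures are
# attributable to the single attribute we changed.
User.make_unsaved.valid?              # true

# "requires a login": change exactly one thing, expect invalid.
required = User.make_unsaved
required.login = nil
required.valid?                       # false

# "may have a password reminder": change one thing, still valid.
optional = User.make_unsaved
optional.password_reminder = nil
optional.valid?                       # true
```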


--
Have Fun,
   Steve E. (removed_email_address@domain.invalid)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
Mike G. (Guest)
on 2009-02-19 07:59
(Received via mailing list)
Pat, not nitpicking, just using your example, which was close, but you
missed one of the reasons we like shoulda-type tests:

should_require_attributes :body, :message => /wtf/

makes you put

validates_presence_of :body, :message => "hey dude, wtf, you need a
body!"

because we have a bunch of custom error messages.

Another reason for this is that while you may trust the Rails guys to
keep validates_presence_of working, they may change HOW it works and
forget a deprecation warning. Ruby isn't a compiled language, so
sometimes tests like this do help. We had an eye-opener on this a month
ago when we went to edge Rails.


Finally, the shoulda tests are nice for things like column lengths and
maximums when you are using multiple database backends, because you
often just plain forget about things like default column size
differences between Oracle and MySQL. For instance, pretend you're an
Oracle head:

Migration.up:
  t.column :name, :string

Model:
  validates_length_of :name, :maximum => 4000, :message => "We gave you
4000 characters, what more could you type?"

Shoulda
  should_ensure_length_in_range :name, (0..4000), :long_message => "We
gave you 4000 characters, what more could you type?"

This will pass in Oracle and fail in MySQL because the default size is
255 in MySQL and 4000 in Oracle. We had a ton of these creep up on us
over the last few years because we just plain forgot, but the shoulda
macro exercises it and all of the assumptions, so it doesn't happen any
more.
Stephen E. (Guest)
on 2009-02-19 08:18
(Received via mailing list)
On Wed, Feb 18, 2009 at 11:42 PM, Yi Wen <removed_email_address@domain.invalid> 
wrote:
>
> Without this syntax sugar, we still have to test validates_presence_of to
> make sure it's there and won't break, right?

Wrong.  You don't have to test validates_presence_of.  What matters,
and therefore what you should test, is whether the model will complain
at you if a particular value is left empty.

validates_presence_of happens to be the name of the method in
ActiveRecord that does that.  But if you decide to write your own
check_to_see_if_this_thingy_is_in_my_whatsis() method that does the
same thing, a good *behavior* spec will not break.  Because the
behavior remains the same.

If your spec breaks because you changed a method call, you're not
testing behavior any more.  You're testing syntax.
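The distinction can be made concrete in plain Ruby. Both classes below are toy stand-ins (condensed to what the validation ultimately checks): a spec written only against valid? passes unchanged whether presence is enforced by a Rails-style macro or by a hand-rolled method with a very different name.

```ruby
# Presence enforced "the Rails way" (condensed to the check the
# macro ultimately performs).
class MacroUser
  attr_accessor :login
  def valid?
    !login.nil? && !login.empty?
  end
end

# The same behaviour behind a home-grown method name.
class HomegrownUser
  attr_accessor :login
  def check_to_see_if_this_thingy_is_in_my_whatsis
    !login.nil? && !login.empty?
  end
  alias_method :valid?, :check_to_see_if_this_thingy_is_in_my_whatsis
end

# A behaviour-level check is indifferent to which implementation
# sits underneath:
[MacroUser, HomegrownUser].each do |klass|
  user = klass.new
  user.valid?         # false: a blank login is rejected either way
  user.login = "yi"
  user.valid?         # true
end
```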



--
Have Fun,
   Steve E. (removed_email_address@domain.invalid)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
David C. (Guest)
on 2009-02-19 08:28
(Received via mailing list)
On Wed, Feb 18, 2009 at 11:31 PM, Stephen E. 
<removed_email_address@domain.invalid> wrote:
> ...And so forth.  It's wordier, but very readable, and it doesn't rely
> on the validation being done with a specific Rails method.  In fact,
> when I shifted to using Merb and Datamapper, I didn't have to change
> these sorts of tests at all.

That's huge!

> Also, while I used to be very anal and write "should
> have(1).error_on(:login)" and such, I eventually realized that there's
> no point.  Checking on 'valid?' is entire and sufficient.  The first
> example proves that the default factory case is valid, so as long as
> we're only changing one thing at a time, we know that that's the thing
> that breaks validity.  (Or, in the case of "may have," *doesn't* break
> validity.)

I think this depends on whether or not error messages are part of the
conversation w/ the customer. If not, that seems fine.

I find that I'll spec validations directly, but not associations.
There's no need to say that a team has_many players when you have
examples like team.should have(9).players_on_the_field.

But my validation specs do tend to be closely tied to AR methods like
valid?(), which, as your example suggests, is impeding my ability to
choose a different ORM lib. Time for some re-thinking!

Cheers,
David
Matt W. (Guest)
on 2009-02-19 10:31
(Received via mailing list)
On 19 Feb 2009, at 05:40, Stephen E. wrote:

> validates_presence_of happens to be the name of the method in
> ActiveRecord that does that.  But if you decide to write your own
> check_to_see_if_this_thingy_is_in_my_whatsis() method that does the
> same thing, a good *behavior* spec will not break.  Because the
> behavior remains the same.

I agree with you, which is why I've avoided using things like this:
http://github.com/redinger/validation_reflection/tree/master

As I understand it, this just checks that you wrote the correct line
of code in the AR model class. As Pat said, there is so little
value in doing this it seems pointless to me.

I've not looked at the shoulda macros. Would they still pass if I
decided to replace my call to a rails validation helper with
check_to_see_if_this_thingy_is_in_my_whatsis()? Or are they just
reflecting on the model's calls to the rails framework?

Matt W.
http://blog.mattwynne.net
http://www.songkick.com
Fernando P. (Guest)
on 2009-02-19 12:31
> Wrong.  You don't have to test validates_presence_of.  What matters,
> and therefore what you should test, is whether the model will complain
> at you if a particular value is left empty.
> ...
> If your spec breaks because you changed a method call, you're not
> testing behavior any more.  You're testing syntax.

I totally agree with your point of view.


> Also, while I used to be very anal and write "should
> have(1).error_on(:login)" and such, I eventually realized that there's
> no point.  Checking on 'valid?' is entirely sufficient.

I also came to the same conclusion. That's why I am very cautious with
"rake stats" and rcov, it entices people to write dumb tests / specs
just to get the figures up.
Mike G. (Guest)
on 2009-02-19 17:02
(Received via mailing list)
Dave, you make a good point. In our system, where we are converting a
legacy database/application, we typically have no user stories, and we
have the technical (or, you could argue, user) requirement that the
database logic / constraints get converted. This is where we are
typically just encoding all of the should_have_manys, etc. At first
glance they do seem like fragile and redundant tests, but when you
consider that the schema isn't in Rails' standard format, simple
has_manys are not always going to work, so we actually need to test
our configuration of the associations.

-Mike


--
-Mike G. (http://rdocul.us)
Stephen E. (Guest)
on 2009-02-19 17:07
(Received via mailing list)
On Thu, Feb 19, 2009 at 12:58 AM, David C. 
<removed_email_address@domain.invalid>
wrote:
>
>> Also, while I used to be very anal and write "should
>> have(1).error_on(:login)" and such, I eventually realized that there's
>> no point.  Checking on 'valid?' is entirely sufficient.
>
> I think this depends on whether or not error messages are part of the
> conversation w/ the customer. If not, that seems fine.

But "should have(1).error_on(:login)" isn't a test on error messages.
It's a test on a key called :login.  The conversation with the
customer has no bearing on that; the customer's never asked about the
errors data structure.

I do check for error messages making it to the user, but not in my
model specs.  Those get checked in my request specs.  (Or my Cucumber
features, whichever I'm doing that day.)  So again, it's covered; just
not twice.


> But my validation specs do tend to be closely tied to AR methods like
> valid?(), which, as your example suggests, is impeding my ability to
> choose a different ORM lib. Time for some re-thinking!

To be fair, the only reason the tests I quoted worked when I switched
to Datamapper is that DM coincidentally (or not) uses the same
"valid?" method that AR does. Eventually you do have to hit your API.
I just like to hit it at the highest level that proves the behavior I
care about.


--
Have Fun,
   Steve E. (removed_email_address@domain.invalid)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
David C. (Guest)
on 2009-02-19 17:15
(Received via mailing list)
On Wed, Feb 18, 2009 at 11:40 PM, Stephen E. 
<removed_email_address@domain.invalid> wrote:

<deliberately_out_of_context_to_make_a_point>
> If your spec breaks because you changed a method call, you're not
> testing behavior any more.  You're testing syntax.
</deliberately_out_of_context_to_make_a_point>

We've got to stop making laws out of guidelines. This is a very
general statement about what is really a very specific situation, and
it is not in any way as black and white as this statement sounds. But
*somebody* is going to read this, not understand the context, and
think it's international law.

Code examples are clients of the subject code, just like the other
objects in your app. You don't expect all of the other objects in your
app to work correctly when you change a method name in only one place,
do you? You need to change all the clients, including the code
examples.

In Chicago we don't have any jaywalking laws (at least that I know of -
I've yet to be arrested for it). The guideline we operate under is
that you should wait for the light, but we don't always follow that
guideline. When I'm at an intersection and don't have the light, I
look both ways, like I learned back in kindergarten, and cross if it's
safe. If there are no cars coming, I'm very likely to survive the
incident. If there are cars coming, I can still navigate my way across
the street and, if I do so carefully, correctly, and with precise
timing, I might well survive.

Guidelines are great tools, but if we followed guidelines like laws
we'd never get where we're going.

FWIW,
David
Zach D. (Guest)
on 2009-02-19 17:37
(Received via mailing list)
On Thu, Feb 19, 2009 at 12:31 AM, Stephen E. 
<removed_email_address@domain.invalid> wrote:
> problem with that example is that the syntax of the code is driving
> the syntax of the spec, even if the spec was written first.  You're no
> longer thinking about high-level behavior, you're thinking about the
> presence of a certain line in Rails.

A highly expressive declarative phrase has been pushed down to nothing
more than "a certain line". :)

While I agree with you in general, I think the wrong approach is to
immediately disallow ourselves from using words or phrases in the
specs that are found in the implementation. Yes, validates_presence_of
can be used in the implementation, but it also serves as great,
readable, behaviour-expressing documentation. I'm not going to fault
anyone or any spec where it is used, since the phrase itself is highly
communicative. I'd be more concerned with its implementation than with
the fact that someone found it a clear way to write
attribute-requiring examples.


>  it "is valid" do
>  end
>  end
> example proves that the default factory case is valid, so as long as
> we're only changing one thing at a time, we know that that's the thing
> that breaks validity.  (Or, in the case of "may have," *doesn't* break
> validity.)

I like the idea of having "may have" examples for optional attributes.

--
Zach D.
http://www.continuousthinking.com
http://www.mutuallyhuman.com
Yi W. (Guest)
on 2009-02-19 17:53
(Received via mailing list)
Good point, that's actually what I am debating with myself every day
and haven't got a clear answer to. This is the classic "classic unit
tester" vs. mockist war. :)

Talking about this case:

1. I haven't checked how the should validate_presence_of matcher is
implemented, but it could pretty much be checking if the value is left
blank. So it is a behavior test.

2. I couldn't see any reason why I would want to write my own version
of check_to_see_if_this_thingy_is_in_my_whatsis. So this is not a very
realistic assumption.

3. By checking if validation fails when a value is left blank, I am
actually kind of testing Rails, and here's why: what if they introduce
a bug in validates_presence_of that makes my test break? What if they
have a bug in valid? that makes my test break? To strictly test just
*my* own code, the test should be something like
      Person.should_receive(:validates_presence_of).with(:email)

I am not really advocating the view of mockists. Just throwing a
question out here. :)

Yi
David C. (Guest)
on 2009-02-19 18:02
(Received via mailing list)
On Thu, Feb 19, 2009 at 8:20 AM, Stephen E. 
<removed_email_address@domain.invalid> wrote:
> It's a test on a key called :login.  The conversation with the
> customer has no bearing on that; the customer's never asked about the
> errors data structure.

The code in the examples is for developers. The docstrings are for
customers. In this very specific case, the matcher doesn't support the
specific error message, but if it did, the example would be:

describe User do
  context "with punctuation in the login" do
    it "raises an error saying Login can't have punctuation" do
      user = User.generate(:login => "my.login!name")
      user.should have(1).error_on(:login).with("can't have punctuation")
    end
  end
end

Even without that ability, this would be fairly expressive to both
customer and developer:

describe User do
  context "with punctuation in the login" do
    it "raises an error saying Login can't have punctuation" do
      user = User.generate(:login => "my.login!name")
      user.should have(1).error_on(:login)
    end
  end
end


> I do check for error messages making it to the user, but not in my
> model specs.  Those get checked in my request specs.  (Or my Cucumber
> features, whichever I'm doing that day.)  So again, it's covered; just
> not twice.

This is where this all gets tricky.

TDD (remember? that's where this all started) says you don't write any
subject code without a failing *unit test*. This is not about the end
result - it's about a process. What you're talking about here is the
end result: post-code testing.

If you're true to the process, then you'd have material in both
places. The cost of this is something that looks like duplication, but
it's not really, because at the high level we're specifying the
behaviour of the system, and at the low level we're specifying the
behaviour of a single object - fulfilling its role in that system.

The cost of *not* doing this is different in rails than it is in home
grown systems. In home grown systems, since we are in charge of
defining what objects have what responsibilities, the cost of only
spec'ing from 10k feet is more time tracking down bugs. In rails, this
is somewhat mitigated by the conventions we've established of keeping
types of behaviour (like error message generation) in commonly
accepted locations. If a merb request spec or cucumber scenario fails
on an error message, we can be pretty certain the source is a model
object.

But even that is subject to the level of complexity of the model. If a
view is dealing with a complex object graph, then there are multiple
potential sources for the failure, in which case there is some benefit
to having things specified at the object level.

>> But my validation specs do tend to be closely tied to AR methods like
>> valid?(), which, as your example suggests, is impeding my ability to
>> choose a different ORM lib. Time for some re-thinking!
>
> To be fair, the only reason the tests I quoted work when I switched to
> Datamapper is because DM coincidentally (or not) uses the same
> "valid?" method that AR does.  Eventually you do have to hit your API.
>  I just like to hit it at the highest level that proves the behavior I
> care about.

Agreed in general. Just keep in mind that behaviour exists at more
than one level. At the object level, behaviour == responsibility. If
I'm a controller and my responsibility is to take a message from you,
re-package it and hand it off to the appropriate model, then *that* is
my behaviour.

Cheers,
David
Zach D. (Guest)
on 2009-02-19 18:19
(Received via mailing list)
On Thu, Feb 19, 2009 at 10:41 AM, Yi Wen <removed_email_address@domain.invalid> 
wrote:
> 2. I couldn't see any reason why I would want to write my own version of
> check_to_see_if_this_thingy_is_in_my_whatsis.
> [...]
> I am not really advocating the view of mockists. Just throw a question here.
This is a good example of strictly testing *your* code. But, to the
last statement--it is not a very good example of when to use mock
expectations. I don't think it advocates an accurate view of
*mockists*.





--
Zach D.
http://www.continuousthinking.com
http://www.mutuallyhuman.com
Stephen E. (Guest)
on 2009-02-19 19:02
(Received via mailing list)
On Thu, Feb 19, 2009 at 9:55 AM, David C. <removed_email_address@domain.invalid>
wrote:
> *somebody* is going to read this, not understand the context, and
> think it's international law.

Doesn't it increase the probability that someone will read it and not
understand the context when you deliberately take it out of context to
make a point?  >8->

Anyway, I wasn't declaring any laws.  I didn't say "specs must never
break when method calls change."  That would be an impossible
standard, since at some point *everything* comes down to a method
call.  I actually didn't express any imperatives at all.

I will agree that "You're not testing behavior any more" is a bit of
an overblown statement, since the line between 'behavior' and 'syntax'
is highly subjective.  Every test is really a test on both.  I was
expressing my own opinion on where I feel the line is drawn, but it
was mostly in response to "You have to do *this*, right?"

There's a lot of testing dogma out there.  I'm starting to think
everyone who gets vocal on the subject lapses into sounding dogmatic
eventually...including, apparently, myself.  To the extent that I
sounded like I was trying to hand down the One True Way, I apologize
and withdraw my fervor.




--
Have Fun,
   Steve E. (removed_email_address@domain.invalid)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
Mark W. (Guest)
on 2009-02-19 19:56
(Received via mailing list)
On Wed, Feb 18, 2009 at 9:40 PM, Stephen E. 
<removed_email_address@domain.invalid> wrote:
> On Wed, Feb 18, 2009 at 11:42 PM, Yi Wen <removed_email_address@domain.invalid> wrote:
>
> validates_presence_of happens to be the name of the method in
> ActiveRecord that does that.  But if you decide to write your own
> check_to_see_if_this_thingy_is_in_my_whatsis() method that does the
> same thing, a good *behavior* spec will not break.  Because the
> behavior remains the same.

I think you're talking about state-based, blackbox testing, rather
than behavior-based whitebox testing. RSpec unit tests are all about
speccing that one object calls another object's method at the right
time. The idea being that if that behavior occurs, and that the other
object's method has been similarly tested, that you're OK.

///ark
Stephen E. (Guest)
on 2009-02-19 20:32
(Received via mailing list)
On Thu, Feb 19, 2009 at 10:55 AM, David C. 
<removed_email_address@domain.invalid>
wrote:
>
> This is where this all gets tricky.

Yep.  >8->


> TDD (remember? that's where this all started) says you don't write any
> subject code without a failing *unit test*. This is not about the end
> result - it's about a process. What you're talking about here is the
> end result: post-code testing.

Yes.  And I didn't.  The test "it 'requires a login'" fails until I
write a validation for the login field.  I don't write the validation
until I have that test.  Once that test is written, any way of
validating login's presence -- with validates_presence_of in AR, or a
:nullable => false on the property in DataMapper, or a callback before
saving, or whatever -- will pass the test.  I have written the code to
pass the test, and I have followed TDD principles.  I can now move on
to the next problem.
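As a sketch of that point (plain Ruby, no AR or DataMapper loaded;
both toy classes are hypothetical), the same behavior check passes for
two different validation mechanisms:

```ruby
# Two toy models enforcing "login must be present" in different ways.
# A behavior spec only asks: blank login => exactly one error on :login.

# Mechanism 1: a declared check inside valid?.
class DeclaredUser
  attr_accessor :login
  def errors; @errors ||= Hash.new { |h, k| h[k] = [] }; end

  def valid?
    errors.clear
    errors[:login] << "can't be blank" if login.to_s.empty?
    errors.empty?
  end
end

# Mechanism 2: a guard clause that raises, rescued into the errors hash.
class GuardedUser
  attr_accessor :login
  def errors; @errors ||= Hash.new { |h, k| h[k] = [] }; end

  def valid?
    errors.clear
    begin
      raise ArgumentError, "can't be blank" if login.to_s.empty?
    rescue ArgumentError => e
      errors[:login] << e.message
    end
    errors.empty?
  end
end

# The same behavior check is satisfied by both implementations.
[DeclaredUser, GuardedUser].each do |klass|
  user = klass.new
  user.valid?
  puts "#{klass}: #{user.errors[:login].size} error(s) on :login"
end
```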

But I did not write any code yet setting the message.  Because I
haven't written any tests for the message.  At this point I don't care
what the message is, just that I have the right data.  I care about
the message when I start focusing on presentation.  When I write specs
for the exchange with the user, I will write a test.  I might reopen
the model's spec and add it there (maintaining 'unit test' purity), or
I might put it in the request spec, but either way a test will break
before the code is written.

I think that keeps the *spirit* of TDD, whether or not it follows its
shelving rules.  And yes, I know it all comes down to "it depends."
On a larger project that would have a lot of people on it, I'd
probably insist on more formalism for the sake of keeping things
organized.  But if it's a small app with a focus on shipping fast and
frequently, having one test that fails is enough.


> If you're true to the process, then you'd have material in both
> places. The cost of this is something that looks like duplication, but
> it's not really, because at the high level we're specifying the
> behaviour of the system, and at the low level we're specifying the
> behaviour of a single object - fulfilling its role in that system.

And again: the extent to which I'd do that is the extent to which I
care how the system is organized.  Sometimes it really does matter.
More often, to me, it doesn't.  If an integration spec breaks, there's
*usually* no mystery to me because I can just look at the backtrace to
see what broke and fix it in a few seconds.  Writing low-level specs
to help isolate what's obvious and quickly fixed without them doesn't
save time.  Sometimes it is more complicated and confusing, and if it
takes me too long to understand why the high level is broken, I'll
sometimes write more unit specs to figure it out.

That's not backwards.  A test still broke.  If I always have at least
one test that fails on any incorrect behavior that matters, and I
never ship with failing tests, then my testing has satisfactory
coverage, whether it's an integration test or a unit test or a highly
trained hamster reviewing my log files.  Having more tests and finer
detail only matters if it saves me time.  (Which, sometimes, it does.)

That's just my opinion.  Not the law.

--
Have Fun,
   Steve E. (removed_email_address@domain.invalid)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
David C. (Guest)
on 2009-02-19 20:41
(Received via mailing list)
On Thu, Feb 19, 2009 at 11:38 AM, Mark W. <removed_email_address@domain.invalid> 
wrote:
> than behavior-based whitebox testing. RSpec unit tests are all about
> speccing that one object calls another object's method at the right
> time.

This is true in cases where the object delegates responsibility and
that delegation is significant. If a collaborator is polymorphic, and
the correct collaborator is chosen based on conditions external to the
subject, then it makes sense to spec interactions.

If the collaborator connects to an external resource like a database
or a network, then stubbing the collaborator makes good sense.

Conversely, if the collaborator is created internally, is always the
same object, and does not require any setup outside of the subject,
then spec'ing interactions doesn't make sense.

Make sense?
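The external-resource case can be sketched without any mocking
library; Notifier and StubGateway here are hypothetical stand-ins, not
rspec doubles:

```ruby
# The collaborator (a gateway that would hit the network) is injected,
# so a spec can substitute a stub and assert on the interaction:
# the Notifier's responsibility is just to hand the message off.
class Notifier
  def initialize(gateway)
    @gateway = gateway
  end

  def notify(user, text)
    @gateway.deliver(:to => user, :body => text)
  end
end

# Hand-rolled stub standing in for the real gateway; records each call
# instead of touching the network.
class StubGateway
  attr_reader :sent
  def initialize; @sent = []; end

  def deliver(payload)
    @sent << payload
  end
end

gateway = StubGateway.new
Notifier.new(gateway).notify("yi", "hello")
gateway.sent  # => [{:to => "yi", :body => "hello"}]
```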

> The idea being that if that behavior occurs, and that the other
> object's method has been similarly tested, that you're OK.

With the caveat that somewhere there is some level of integration
testing going on. Although it looks like J.B. Rainsberger disagrees:
http://agile2009.agilealliance.org/node/708 (you have to have an
account to view this - but the title of his proposed talk is
"Integration Tests are a Scam"). I don't know enough of the detail of
his arguments to argue them here, but it seems like an interesting
discussion.

FWIW,
David
Stephen E. (Guest)
on 2009-02-19 20:56
(Received via mailing list)
On Thu, Feb 19, 2009 at 12:15 PM, Stephen E. 
<removed_email_address@domain.invalid> wrote:
>
> But I did not write any code yet setting the message.  Because I
> haven't written any tests for the message.  At this point I don't care
> what the message is, just that I have the right data.  I care about
> the message when I start focusing on presentation.

By the way, this spun off a whole line of thought in my head that
maybe the way Rails handles validation messages in general is wrong.
It's certainly a violation of separation of concerns: models aren't
supposed to care about presentation, and yet we're putting plain
English (or other language, or internationalized, or whatever) text in
them that isn't relevant to the data, just for the purpose of
presenting it to the user.

*This* is backwards, and maybe that's why I felt some conflict about
where the spec on that message should go.  The responsibility of the
model is to report a problem, not to declare the exact wording of that
report.  In an ideal MVC world models wouldn't be filling up hashes
with message text at all.  They'd return exceptions on save, and the
standard create/update boilerplate in the controller would contain a
rescue instead of an if-then, and responsibility of turning the
properties of that exception into English would happen somewhere at
the view level.
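A rough sketch of that flow in plain Ruby (all names hypothetical;
this is the proposed design, not how ActiveRecord works):

```ruby
# The model raises a structured exception instead of stuffing English
# text into an errors hash; the controller rescues; the view layer
# (here just a lambda) owns the wording.
class ValidationError < StandardError
  attr_reader :attribute, :code
  def initialize(attribute, code)
    @attribute, @code = attribute, code
    super("#{attribute} #{code}")
  end
end

class Person
  attr_accessor :login
  def save!
    raise ValidationError.new(:login, :blank) if login.to_s.empty?
    true
  end
end

# View-level wording; this is where i18n would plug in.
PRESENTER = lambda do |e|
  "#{e.attribute.to_s.capitalize} can't be #{e.code}"
end

# Standard controller boilerplate: a rescue instead of an if-then.
def create(person)
  person.save!
  :created
rescue ValidationError => e
  PRESENTER.call(e)
end

create(Person.new)  # => "Login can't be blank"
```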

Am I onto something here?

(Heh.  Maybe I agree with Pat after all: I just went from a very minor
"I can't figure out how to test this" to the arrogance of suggesting
that pretty much every Ruby ORM should be rewritten.)  >8->



--
Have Fun,
   Steve E. (removed_email_address@domain.invalid)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org