validates_presence_of

On Wed, Feb 18, 2009 at 9:45 PM, Pat M. [email protected]
wrote:

To make sure you wrote that line of code.

and how do you make sure you wrote this one?
it { should validate_presence_of(:login) }

By removing the line from the model, running the specs, and looking
for a failure :slight_smile:

We should write a test/spec, whatever you call it, first, before we
write the code. But that doesn’t mean whoever writes the spec/test will
hand the coding off to a monkey to make the test pass. Realistically, a
programmer will write this test and implement it right away, just like
TDD should be done.

Without this syntax sugar, we still have to test validates_presence_of
to make sure it’s there and won’t break, right? So this simple syntax
is nice because it’s less code to type. I really don’t see how trained
monkeys come into play in this scenario. :slight_smile:

I am not a huge fan of “spec contract” for unit testing. Unit testing
is a tool for developers to write better, DRY-er and more
loosely-coupled code. At most it is a communication tool among
developers. It’s never meant to be for non-technical / client /
business people. Cucumber might serve that purpose.

Yi

Pat, not nitpicking, just using your example, which was close, but you
missed one of the reasons we like shoulda-type tests:

should_require_attributes :body, :message => /wtf/

makes you put

validates_presence_of :body, :message => "hey dude, wtf, you need a body!"

because we have a bunch of custom error messages.
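For comparison, a sketch of the hand-rolled spec the macro saves us from
writing (Post is just an assumed model name, not from this thread):

it "requires a body with our custom message" do
  post = Post.new(:body => nil)          # other attributes assumed valid
  post.valid?
  post.errors.on(:body).should =~ /wtf/  # Rails 2.x errors API
end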

Another reason for this is that while you may trust the Rails guys to
keep validates_presence_of working, they may change HOW it works and
forget a deprecation warning. Ruby isn’t a compiled language, so
sometimes tests like this do help. We had an eye opener on this a month
ago when we went to edge Rails.

Finally, the shoulda tests are nice for things like column lengths and
maximums when you are using multiple database backends, because you
often just plain forget about things like default column size
differences between Oracle and MySQL. For instance, pretend you’re an
Oracle head:

Migration.up:
t.column :name, :string

Model:
validates_length_of :name, :maximum => 4000, :message => "We gave you 4000 characters, what more could you type?"

Shoulda:
should_ensure_length_in_range :name, (0..4000), :long_message => "We gave you 4000 characters, what more could you type?"

This will pass in Oracle and fail in MySQL, because the default string
size is 255 in MySQL and 4000 in Oracle. We had a ton of these creep up
on us over the last few years because we just plain forgot, but the
shoulda macro exercises it and all of the assumptions, so it doesn’t
happen any more.
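A sketch of the kind of fix that failure pushes you toward: make the
limit explicit so both backends agree with the validation (the table
and migration names here are made up):

class WidenName < ActiveRecord::Migration
  def self.up
    # Explicit limit instead of relying on the adapter's default string size
    change_column :users, :name, :string, :limit => 4000
  end

  def self.down
    change_column :users, :name, :string
  end
end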

On Wed, Feb 18, 2009 at 11:42 PM, Yi Wen [email protected] wrote:

Without this syntax sugar, we still have to test validates_presence_of
to make sure it’s there and won’t break, right?

Wrong. You don’t have to test validates_presence_of. What matters,
and therefore what you should test, is whether the model will complain
at you if a particular value is left empty.

validates_presence_of happens to be the name of the method in
ActiveRecord that does that. But if you decide to write your own
check_to_see_if_this_thingy_is_in_my_whatsis() method that does the
same thing, a good behavior spec will not break. Because the
behavior remains the same.
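To make that concrete, a minimal sketch of such a behavior-level
example (Person and its attribute are just placeholders):

describe Person do
  it "is not valid when email is left empty" do
    person = Person.new(:email => nil)   # other attributes assumed valid
    person.should_not be_valid
  end
end

It passes no matter which mechanism enforces the presence check.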

If your spec breaks because you changed a method call, you’re not
testing behavior any more. You’re testing syntax.

–
Have Fun,
Steve E. ([email protected])
ESCAPE POD - The Science Fiction Podcast Magazine
http://www.escapepod.org

On Wed, Feb 18, 2009 at 11:31 PM, Stephen E. [email protected] wrote:

problem with that example is that the syntax of the code is driving
the syntax of the spec, even if the spec was written first.

@this = User.make_unsaved # I use machinist for my factory methods

it "does not allow duplicate logins" do
  @that = User.make(:login => "EvilTwin")
  @this.login = "EvilTwin"
  @this.should_not be_valid
end

…And so forth. It’s wordier, but very readable, and it doesn’t rely
on the validation being done with a specific Rails method. In fact,
when I shifted to using Merb and Datamapper, I didn’t have to change
these sorts of tests at all.

That’s huge!

Also, while I used to be very anal and write “should
have(1).error_on(:login)” and such, I eventually realized that there’s
no point. Checking on ‘valid?’ is entire and sufficient. The first
example proves that the default factory case is valid, so as long as
we’re only changing one thing at a time, we know that that’s the thing
that breaks validity. (Or, in the case of “may have,” doesn’t break
validity.)

I think this depends on whether or not error messages are part of the
conversation w/ the customer. If not, that seems fine.

I find that I’ll spec validations directly, but not associations.
There’s no need to say that a team has_many players when you have
examples like team.should have(9).players_on_the_field.

But my validation specs do tend to be closely tied to AR methods like
valid?(), which, as your example suggests, is impeding my ability to
choose a different ORM lib. Time for some re-thinking!

Cheers,
David

On Wed, Feb 18, 2009 at 10:42 PM, Mark W. [email protected] wrote:

On Wed, Feb 18, 2009 at 4:39 PM, Fernando P. [email protected] wrote:

What’s the point in testing validates_presence_of for a model?

To make sure you wrote that line of code.

And the circle spins round and round…

Specs that mirror the code that closely are a bad idea, I think. The
problem with that example is that the syntax of the code is driving
the syntax of the spec, even if the spec was written first. You’re no
longer thinking about high-level behavior, you’re thinking about the
presence of a certain line in Rails.

I write those sorts of model specs a little differently. I just poke
at things and set expectations on whether they break. I’d write this
example like:

describe User do
  before(:each) do
    @this = User.make_unsaved # I use machinist for my factory methods
  end

  it "is valid" do
    @this.should be_valid
  end

  it "can save" do
    @this.save.should be_true
  end

  it "requires a login" do
    @this.login = nil
    @this.should_not be_valid
  end

  it "may have a password reminder" do
    @this.password_reminder = nil
    @this.should be_valid
  end

  it "does not allow duplicate logins" do
    @that = User.make(:login => "EvilTwin")
    @this.login = "EvilTwin"
    @this.should_not be_valid
  end
end

…And so forth. It’s wordier, but very readable, and it doesn’t rely
on the validation being done with a specific Rails method. In fact,
when I shifted to using Merb and Datamapper, I didn’t have to change
these sorts of tests at all.

Also, while I used to be very anal and write “should
have(1).error_on(:login)” and such, I eventually realized that there’s
no point. Checking on ‘valid?’ is entire and sufficient. The first
example proves that the default factory case is valid, so as long as
we’re only changing one thing at a time, we know that that’s the thing
that breaks validity. (Or, in the case of “may have,” doesn’t break
validity.)

–
Have Fun,
Steve E. ([email protected])
ESCAPE POD - The Science Fiction Podcast Magazine
http://www.escapepod.org

Wrong. You don’t have to test validates_presence_of. What matters,
and therefore what you should test, is whether the model will complain
at you if a particular value is left empty.
…
If your spec breaks because you changed a method call, you’re not
testing behavior any more. You’re testing syntax.

I totally agree with your point of view.

Also, while I used to be very anal and write “should
have(1).error_on(:login)” and such, I eventually realized that there’s
no point. Checking on ‘valid?’ is entire and sufficient.

I also came to the same conclusion. That’s why I am very cautious with
“rake stats” and rcov: they entice people to write dumb tests / specs
just to get the figures up.

On 19 Feb 2009, at 05:40, Stephen E. wrote:

validates_presence_of happens to be the name of the method in
ActiveRecord that does that. But if you decide to write your own
check_to_see_if_this_thingy_is_in_my_whatsis() method that does the
same thing, a good behavior spec will not break. Because the
behavior remains the same.

I agree with you, which is why I’ve avoided using things like this:

As I understand it, this just checks that you wrote the correct line
of code in the AR model class. As Pat said, there is so little
value in doing this it seems pointless to me.

I’ve not looked at the shoulda macros. Would they still pass if I
decided to replace my call to a rails validation helper with
check_to_see_if_this_thingy_is_in_my_whatsis()? Or are they just
reflecting on the model’s calls to the rails framework?

Matt W.
http://blog.mattwynne.net

On Thu, Feb 19, 2009 at 12:58 AM, David C. [email protected]
wrote:

Also, while I used to be very anal and write “should
have(1).error_on(:login)” and such, I eventually realized that there’s
no point. Checking on ‘valid?’ is entire and sufficient.

I think this depends on whether or not error messages are part of the
conversation w/ the customer. If not, that seems fine.

But “should have(1).error_on(:login)” isn’t a test on error messages.
It’s a test on a key called :login. The conversation with the
customer has no bearing on that; the customer’s never asked about the
errors data structure.

I do check for error messages making it to the user, but not in my
model specs. Those get checked in my request specs. (Or my Cucumber
features, whichever I’m doing that day.) So again, it’s covered; just
not twice.

But my validation specs do tend to be closely tied to AR methods like
valid?(), which, as your example suggests, is impeding my ability to
choose a different ORM lib. Time for some re-thinking!

To be fair, the only reason the tests I quoted work when I switched to
Datamapper is because DM coincidentally (or not) uses the same
“valid?” method that AR does. Eventually you do have to hit your API.
I just like to hit it at the highest level that proves the behavior I
care about.

–
Have Fun,
Steve E. ([email protected])
ESCAPE POD - The Science Fiction Podcast Magazine
http://www.escapepod.org

On Wed, Feb 18, 2009 at 11:40 PM, Stephen E. [email protected] wrote:

<deliberately_out_of_context_to_make_a_point>

If your spec breaks because you changed a method call, you’re not
testing behavior any more. You’re testing syntax.
</deliberately_out_of_context_to_make_a_point>

We’ve got to stop making laws out of guidelines. This is a very
general statement about what is really a very specific situation, and
it is not in any way as black and white as this statement sounds. But
somebody is going to read this, not understand the context, and
think it’s international law.

Code examples are clients of the subject code, just like any of the
other clients in your app. You don’t expect all of the other objects in
your app to work correctly when you change a method name in only one
place, do you? You need to change all the clients, including the code
examples.

In Chicago we don’t have any jaywalking laws (at least that I know of -
I’ve yet to be arrested for it). The guideline we operate under is
that you should wait for the light, but we don’t always follow that
guideline. When I’m at an intersection and don’t have the light, I
look both ways, like I learned back in kindergarten, and cross if it’s
safe. If there are no cars coming, I’m very likely to survive the
incident. If there are cars coming, I can still navigate my way across
the street and, if I do so carefully, correctly, and with precise
timing, I might well survive.

Guidelines are great tools, but if we followed guidelines like laws
we’d never get where we’re going.

FWIW,
David

Dave, you make a good point. In our system, where we are converting a
legacy database/application, we typically have no user stories and have
the technical (or, you could argue, user) requirement that the database
logic / constraints get converted. This is where we are typically just
encoding all of the should_have_manys, etc. At first glance they do
seem like fragile and redundant tests, but when you consider that the
schema isn’t in Rails’ standard format, simple has_manys are not always
going to work, so we actually need to test our configuration of the
associations.
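A sketch of what that looks like (the table and key names here are
invented, not our real schema):

# Legacy table that doesn't follow Rails conventions, so the association
# needs explicit configuration, and that configuration is worth testing.
class Team < ActiveRecord::Base
  set_table_name "TEAM_MASTER"
  has_many :players, :class_name => "Player", :foreign_key => "TEAM_MASTER_ID"
end

class TeamTest < Test::Unit::TestCase
  should_have_many :players   # fails if the association isn't configured
end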

-Mike


–
-Mike G. (http://rdocul.us)

On Thu, Feb 19, 2009 at 12:31 AM, Stephen E. [email protected] wrote:

problem with that example is that the syntax of the code is driving
the syntax of the spec, even if the spec was written first. You’re no
longer thinking about high-level behavior, you’re thinking about the
presence of a certain line in Rails.

A highly expressive declarative phrase has been pushed down to nothing
more than “a certain line”. :slight_smile:

While I agree with you in general, I think the wrong approach is to
immediately disallow ourselves from using words or phrases in the specs
that are found in the implementation. Yes, validates_presence_of can be
used in the implementation, but it also serves as great, readable,
behaviour-expressing documentation. I’m not going to fault anyone or
any spec where it is used, since the phrase itself is highly
communicative. I’d be more concerned with its implementation than with
the fact that someone found it a clear way to write attribute-requiring
examples.

The first example proves that the default factory case is valid, so as
long as we’re only changing one thing at a time, we know that that’s
the thing that breaks validity. (Or, in the case of “may have,” doesn’t
break validity.)

I like the idea of having “may have” examples for optional attributes.

–
Zach D.
http://www.continuousthinking.com

Good point, that’s actually something I am debating with myself every
day and haven’t got a clear answer to. This is the classic
“classical unit tester” vs. mockist war. :slight_smile:

Talking about this case:

1. I haven’t checked how should validate_presence_of is implemented,
but it could pretty much be checking if the value is left blank. So it
is a behavior test.

2. I couldn’t see any reason why I would want to write my own version
of check_to_see_if_this_thingy_is_in_my_whatsis. So this is not a very
realistic assumption.

3. By checking if validation fails when a value is left blank, I am
actually kind of testing Rails, and here’s why: what if they introduce
a bug in validates_presence_of that makes my test break? What if they
have a bug in valid? that makes my test break? To strictly test just my
own code, the test should be something like
Person.should_receive(:validates_presence_of).with(:email)

I am not really advocating the mockist view. Just throwing a question
out here. :slight_smile:

Yi

On Thu, Feb 19, 2009 at 10:41 AM, Yi Wen [email protected] wrote:

I couldn’t see any reason why I would want to write my own version of
check_to_see_if_this_thingy_is_in_my_whatsis. …

I am not really advocating the mockist view. Just throwing a question
out here.

This is a good example of strictly testing your code. But, to the last
statement: it is not a very good example of when to use mock
expectations. I don’t think it advocates an accurate view of mockists.


–
Zach D.
http://www.continuousthinking.com

On Thu, Feb 19, 2009 at 9:55 AM, David C. [email protected]
wrote:

somebody is going to read this, not understand the context, and
think it’s international law.

Doesn’t it increase the probability that someone will read it and not
understand the context when you deliberately take it out of context to
make a point? >8->

Anyway, I wasn’t declaring any laws. I didn’t say “specs must never
break when method calls change.” That would be an impossible
standard, since at some point everything comes down to a method
call. I actually didn’t express any imperatives at all.

I will agree that “You’re not testing behavior any more” is a bit of
an overblown statement, since the line between ‘behavior’ and ‘syntax’
is highly subjective. Every test is really a test on both. I was
expressing my own opinion on where I feel the line is drawn, but it
was mostly in response to “You have to do this, right?”

There’s a lot of testing dogma out there. I’m starting to think
everyone who gets vocal on the subject lapses into sounding dogmatic
eventually…including, apparently, myself. To the extent that I
sounded like I was trying to hand down the One True Way, I apologize
and withdraw my fervor.

–
Have Fun,
Steve E. ([email protected])
ESCAPE POD - The Science Fiction Podcast Magazine
http://www.escapepod.org

On Thu, Feb 19, 2009 at 8:20 AM, Stephen E. [email protected] wrote:

It’s a test on a key called :login. The conversation with the
customer has no bearing on that; the customer’s never asked about the
errors data structure.

The code in the examples is for developers. The docstrings are for
customers. In this very specific case, the matcher doesn’t support the
specific error message, but if it did, the example would be:

describe User do
  context "with punctuation in the login" do
    it "raises an error saying Login can't have punctuation" do
      user = User.generate(:login => "my.login!name")
      user.should have(1).error_on(:login).with("can't have punctuation")
    end
  end
end

Even without that ability, this would be fairly expressive to both
customer and developer:

describe User do
  context "with punctuation in the login" do
    it "raises an error saying Login can't have punctuation" do
      user = User.generate(:login => "my.login!name")
      user.should have(1).error_on(:login)
    end
  end
end

I do check for error messages making it to the user, but not in my
model specs. Those get checked in my request specs. (Or my Cucumber
features, whichever I’m doing that day.) So again, it’s covered; just
not twice.

This is where this all gets tricky.

TDD (remember? that’s where this all started) says you don’t write any
subject code without a failing unit test. This is not about the end
result - it’s about a process. What you’re talking about here is the
end result: post-code testing.

If you’re true to the process, then you’d have material in both
places. The cost of this is something that looks like duplication, but
it’s not really, because at the high level we’re specifying the
behaviour of the system, and at the low level we’re specifying the
behaviour of a single object - fulfilling its role in that system.

The cost of not doing this is different in rails than it is in home
grown systems. In home grown systems, since we are in charge of
defining what objects have what responsibilities, the cost of only
spec’ing from 10k feet is more time tracking down bugs. In rails, this
is somewhat mitigated by the conventions we’ve established of keeping
types of behaviour (like error message generation) in commonly
accepted locations. If a merb request spec or cucumber scenario fails
on an error message, we can be pretty certain the source is a model
object.

But even that is subject to the level of complexity of the model. If a
view is dealing with a complex object graph, then there are multiple
potential sources for the failure, in which case there is some benefit
to having things specified at the object level.

But my validation specs do tend to be closely tied to AR methods like
valid?(), which, as your example suggests, is impeding my ability to
choose a different ORM lib. Time for some re-thinking!

To be fair, the only reason the tests I quoted work when I switched to
Datamapper is because DM coincidentally (or not) uses the same
“valid?” method that AR does. Eventually you do have to hit your API.
I just like to hit it at the highest level that proves the behavior I
care about.

Agreed in general. Just keep in mind that behaviour exists at more
than one level. At the object level, behaviour == responsibility. If
I’m a controller and my responsibility is to take a message from you,
re-package it and hand it off to the appropriate model, then that is
my behaviour.

Cheers,
David

On Thu, Feb 19, 2009 at 10:55 AM, David C. [email protected]
wrote:

This is where this all gets tricky.

Yep. >8->

TDD (remember? that’s where this all started) says you don’t write any
subject code without a failing unit test. This is not about the end
result - it’s about a process. What you’re talking about here is the
end result: post-code testing.

Yes. And I didn’t. The test “it ‘requires a login’” fails until I
write a validation for the login field. I don’t write the validation
until I have that test. Once that test is written, any way of
validating login’s presence – with validates_presence_of in AR, or a
:nullable => false on the property in DataMapper, or a callback before
saving, or whatever – will pass the test. I have written the code to
pass the test, and I have followed TDD principles. I can now move on
to the next problem.
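For illustration, rough sketches of those interchangeable
implementations (class and attribute names assumed; the DataMapper bit
is the pre-1.0 option syntax and assumes dm-validations is loaded;
these are three alternatives, not meant to live in the same app):

# ActiveRecord validation macro
class User < ActiveRecord::Base
  validates_presence_of :login
end

# DataMapper property option (pre-1.0 style)
class User
  include DataMapper::Resource
  property :id,    Serial
  property :login, String, :nullable => false
end

# Hand-rolled check via the Rails 2.x validate hook
class User < ActiveRecord::Base
  def validate
    errors.add(:login, "can't be blank") if login.blank?
  end
end

Any one of these makes the "requires a login" example pass; the spec
doesn't care which.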

But I did not write any code yet setting the message. Because I
haven’t written any tests for the message. At this point I don’t care
what the message is, just that I have the right data. I care about
the message when I start focusing on presentation. When I write specs
for the exchange with the user, I will write a test. I might reopen
the model’s spec and add it there (maintaining ‘unit test’ purity), or
I might put it in the request spec, but either way a test will break
before the code is written.

I think that keeps the spirit of TDD, whether or not it follows its
shelving rules. And yes, I know it all comes down to “it depends.”
On a larger project that would have a lot of people on it, I’d
probably insist on more formalism for the sake of keeping things
organized. But if it’s a small app with a focus on shipping fast and
frequently, having one test that fails is enough.

If you’re true to the process, then you’d have material in both
places. The cost of this is something that looks like duplication, but
it’s not really, because at the high level we’re specifying the
behaviour of the system, and at the low level we’re specifying the
behaviour of a single object - fulfilling its role in that system.

And again: the extent to which I’d do that is the extent to which I
care how the system is organized. Sometimes it really does matter.
More often, to me, it doesn’t. If an integration spec breaks, there’s
usually no mystery to me because I can just look at the backtrace to
see what broke and fix it in a few seconds. Writing low-level specs
to help isolate what’s obvious and quickly fixed without them doesn’t
save time. Sometimes it is more complicated and confusing, and if it
takes me too long to understand why the high level is broken, I’ll
sometimes write more unit specs to figure it out.

That’s not backwards. A test still broke. If I always have at least
one test that fails on any incorrect behavior that matters, and I
never ship with failing tests, then my testing has satisfactory
coverage, whether it’s an integration test or a unit test or a highly
trained hamster reviewing my log files. Having more tests and finer
detail only matters if it saves me time. (Which, sometimes, it does.)

That’s just my opinion. Not the law.

–
Have Fun,
Steve E. ([email protected])
ESCAPE POD - The Science Fiction Podcast Magazine
http://www.escapepod.org

On Wed, Feb 18, 2009 at 9:40 PM, Stephen E. [email protected] wrote:

On Wed, Feb 18, 2009 at 11:42 PM, Yi Wen [email protected] wrote:

validates_presence_of happens to be the name of the method in
ActiveRecord that does that. But if you decide to write your own
check_to_see_if_this_thingy_is_in_my_whatsis() method that does the
same thing, a good behavior spec will not break. Because the
behavior remains the same.

I think you’re talking about state-based, blackbox testing, rather
than behavior-based whitebox testing. RSpec unit tests are all about
speccing that one object calls another object’s method at the right
time. The idea being that if that behavior occurs, and the other
object’s method has been similarly tested, then you’re OK.

///ark

On Thu, Feb 19, 2009 at 11:38 AM, Mark W. [email protected] wrote:

than behavior-based whitebox testing. RSpec unit tests are all about
speccing that one object calls another object’s method at the right
time.

This is true in cases where the object delegates responsibility and
that delegation is significant. If a collaborator is polymorphic, and
the correct collaborator is chosen based on conditions external to the
subject, then it makes sense to spec interactions.

If the collaborator connects to an external resource like a database
or a network, then stubbing the collaborator makes good sense.

On the contrary, if the collaborator is created internally and is
always the same object and does not require any setup outside of the
subject, then spec’ing interactions doesn’t make sense.

Make sense?
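As a small sketch of the external-resource case (Order and the gateway
here are invented names, not from this thread):

# Made-up domain object whose collaborator wraps an external service.
class Order
  def initialize(amount, gateway)
    @amount, @gateway, @paid = amount, gateway, false
  end

  def pay!
    @paid = @gateway.charge(@amount)
  end

  def paid?
    @paid
  end
end

describe Order, "#pay!" do
  it "charges the amount through the gateway" do
    gateway = mock("payment gateway")   # stands in for the real service
    gateway.should_receive(:charge).with(100).and_return(true)

    order = Order.new(100, gateway)
    order.pay!
    order.should be_paid
  end
end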

The idea being that if that behavior occurs, and that the other
object’s method has been similarly tested, that you’re OK.

With the caveat that somewhere there is some level of integration
testing going on. Although it looks like J.B. Rainsberger disagrees:
http://agile2009.agilealliance.org/node/708 (you have to have an
account to view this - but the title of his proposed talk is
“Integration Tests are a Scam”). I don’t know enough of the detail of
his arguments to argue them here, but it seems like an interesting
discussion.

FWIW,
David

On Thu, Feb 19, 2009 at 12:15 PM, Stephen E. [email protected] wrote:

But I did not write any code yet setting the message. Because I
haven’t written any tests for the message. At this point I don’t care
what the message is, just that I have the right data. I care about
the message when I start focusing on presentation.

By the way, this spun off a whole line of thought in my head that
maybe the way Rails handles validation messages in general is wrong.
It’s certainly a violation of separation of concerns: models aren’t
supposed to care about presentation, and yet we’re putting plain
English (or other language, or internationalized, or whatever) text in
them that isn’t relevant to the data, just for the purpose of
presenting it to the user.

This is backwards, and maybe that’s why I felt some conflict about
where the spec on that message should go. The responsibility of the
model is to report a problem, not to declare the exact wording of that
report. In an ideal MVC world models wouldn’t be filling up hashes
with message text at all. They’d return exceptions on save, and the
standard create/update boilerplate in the controller would contain a
rescue instead of an if-then, and responsibility of turning the
properties of that exception into English would happen somewhere at
the view level.
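A rough sketch of the idea (every name below is invented for
illustration; this is not how Rails or any current ORM actually works):

class MissingFieldError < StandardError
  attr_reader :field
  def initialize(field)
    @field = field
    super("#{field} is missing")
  end
end

class Widget < ActiveRecord::Base
  # The model's job ends at reporting *that* something is wrong.
  def save!
    raise MissingFieldError.new(:name) if name.blank?
    super
  end
end

class WidgetsController < ApplicationController
  def create
    @widget = Widget.new(params[:widget])
    @widget.save!
    redirect_to @widget
  rescue MissingFieldError => e
    # Turning e.field into user-facing English belongs at the view/helper
    # level, not in the model.
    flash[:error] = "#{e.field.to_s.humanize} can't be blank"
    render :action => "new"
  end
end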

Am I onto something here?

(Heh. Maybe I agree with Pat after all: I just went from a very minor
“I can’t figure out how to test this” to the arrogance of suggesting
that pretty much every Ruby ORM should be rewritten.) >8->

–
Have Fun,
Steve E. ([email protected])
ESCAPE POD - The Science Fiction Podcast Magazine
http://www.escapepod.org