I know it’s, like, SO uncool to use fixtures. However, I find that there
are cases where the correctness of your code relies on the correct
functioning of your finders. Since finders translate to SQL, and you
can’t test SQL without a database backend, I find that a totally
fixture-less approach leads to some ugly hacking and empty tests,
especially when shooting for full coverage.
As an example, I have the following code in a work application:
def self.for(link)
  Score.find(:all, :conditions => { :user_id => link.user.id },
                   :order      => 'ABS(score) desc')
end
Without fixtures the only thing I can do there is stub out "find" on
Score to return an array of mock Score objects, then check that the
value I get back is the value I told it to return. Sure, the line is
marked as covered, but it didn’t actually test a damn thing. Aside from
manually testing the calling code through the web interface, I just have
to take it on faith that the finder is correctly written.
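To make that concrete, here is a runnable sketch (plain Ruby, no Rails and no mocking library; the class, the canned data, and the "link" object are all invented stand-ins) of why the stub-only test is empty: once "find" is stubbed to return a canned array, the assertion can only echo back what the stub was told to say.

```ruby
# Invented stand-in for the ActiveRecord model.
class Score
  def self.find(*args)
    raise "would need a real database"
  end

  def self.for(link)
    find(:all, :conditions => { :user_id => link.user.id },
               :order      => 'ABS(score) desc')
  end
end

canned = [:fake_score_a, :fake_score_b]

# Stub out find, the way a mocking framework would.
Score.define_singleton_method(:find) { |*_args| canned }

link = Struct.new(:user).new(Struct.new(:id).new(42))
result = Score.for(link)

# The "assertion": true by construction. The :conditions and :order
# arguments were never exercised, so a typo in the SQL goes unnoticed
# while the line still shows up as covered.
raise "unexpected" unless result.equal?(canned)
puts "line covered, nothing tested"
```

Note that you could mangle the `:order` string into gibberish and this "test" would still pass, which is exactly the problem.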
With a small set of fixtures for the Score model, on the other hand, I
can easily create a mock for "link", pass it in, and check that the
fixtures I get back really are the ones I expect, in the order I expect.
In this way, I can write a meaningful test that ensures that the
parameters I send to "find" actually work. There is literally no way to
do this without a database backend for your tests.
Actually, I take that back. There is one way, and that’s to rewrite the
code like this:
Score.find(:all).
  select { |sc| sc.user_id == link.user.id }.
  sort   { |a, b| b.score.abs <=> a.score.abs }
Now, I can test the code without fixtures. And in so doing, I’ve also
bloated my result set and offloaded trivial database work into Ruby
code, thereby pissing off my DBA, increasing the overall load on the
database, and slowing down my Ruby code. Is that type of deoptimization
really worth it, just so I don’t have to write a fixture?
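For comparison, here is roughly what the fixture-backed test asserts. This is a plain-Ruby simulation (no Rails, no database; the fixture rows, names, and the stand-in for the finder are all invented for illustration), just to make the shape of the assertions visible: given a handful of Score rows, the test pins down both membership (only link's user) and ordering (ABS(score) DESC) against hand-computed expected values.

```ruby
# Invented stand-ins for the fixtures and models.
ScoreRow = Struct.new(:user_id, :score)
User     = Struct.new(:id)
Link     = Struct.new(:user)

# The "fixtures": three scores for user 1, one for user 2.
fixtures = [
  ScoreRow.new(1, -3),
  ScoreRow.new(1, 10),
  ScoreRow.new(2, 99),  # another user's score; must not appear
  ScoreRow.new(1, 5),
]

# Stand-in for what WHERE user_id = ? ORDER BY ABS(score) DESC
# would hand back from the database.
def scores_for(link, rows)
  rows.select { |sc| sc.user_id == link.user.id }
      .sort_by { |sc| -sc.score.abs }
end

link   = Link.new(User.new(1))
result = scores_for(link, fixtures)

# The meaningful assertions: exact membership AND exact order,
# checked against values computed by hand from the fixture data.
raise "bad result" unless result.map(&:score) == [10, 5, -3]
puts "fixture-style assertions pass"
```

In the real test, `scores_for` would be the actual `Score.for` hitting a test database loaded with fixtures, so a broken `:conditions` hash or a typo in the `:order` string fails loudly instead of slipping through.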