Richard C. wrote:
> [...] these scenarios anyway.
The benefits disappear as you reimplement them in test cases anyway?
Insert comment about wheel reinvention here.
Also, compiler checks fail fast. That’s a Good Thing ™. Bounds checks
also fail faster than a NoMethodError somewhere else up the stack. Unit
test checks that no one has thought to write yet don’t fail fast at all
- covering your whole codebase with checks for, let’s say, public object
contracts isn’t gratifying work, and I wouldn’t be surprised if a lot of
people “didn’t get around to it”. And believe it or not, knowing I
-have- these checks in a language compiler / runtime does mean I
wouldn’t even dream of unit-testing for them. DRY.
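For a concrete Ruby flavour of that bounds-check point: Array#[] just
returns nil past the end and defers the failure, while Array#fetch
raises right at the origin.

  arr = [1, 2, 3]

  value = arr[10]  # => nil, no failure here; value.succ blows up
                   # with a NoMethodError far from the actual bug
  arr.fetch(10)    # IndexError, raised immediately at the origin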
The point is that to achieve robustness, amongst other things, trivial
errors must cause a failure, the failure must manifest as close to the
origin of the error as possible, and it must be detected immediately.
Whether you use design-by-contract checks, IDE templates of common
defensive programming checks on method arguments / return values, or
exhaustive testing, special cases have to be detected and handled
specially.
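The defensive-check flavour of the same idea, sketched in Ruby (the
method and its arguments are made up for illustration): validate at the
boundary so a bad call fails where it’s made, not three frames deeper.

  def schedule(job, delay_seconds)
    raise TypeError, "delay must be Numeric" unless delay_seconds.is_a?(Numeric)
    raise ArgumentError, "delay must be >= 0" if delay_seconds < 0
    # ... enqueue the job here. Without the checks above, a bad
    # argument would only surface later, far from the caller.
  end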
So the claim goes: dynamic languages with unit testing can do away with
these features and get productivity bonuses and more flexible coding
styles at no cost.
Except you end up with a larger mass of unit test code. Which, make no
mistake, is code, can be buggy, and needs to be developed and
maintained, and if you want to avoid redundancy, also well-designed.
(Pop Quiz: how often do you test that array indices are in bounds? How
reusable is that test code? A Test::Unit extension library of reusable
pre- / postcondition checks, scoped to method parameters and object
state and wrapped around method calls, would be interesting if we’re
swapping compile-time contract checks for unit-test-time ones.)
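Something along the lines of this hypothetical sketch (all names
invented, not an existing library) - a module that wraps a method with
precondition / postcondition blocks and raises as soon as either fails:

  module Contracts
    # Redefine +name+ so the pre block checks the arguments and the
    # post block checks the result, each in the instance's scope.
    def contract(name, pre: nil, post: nil)
      original = instance_method(name)
      define_method(name) do |*args|
        if pre && !instance_exec(*args, &pre)
          raise ArgumentError, "precondition failed for ##{name}"
        end
        result = original.bind(self).call(*args)
        if post && !instance_exec(result, &post)
          raise "postcondition failed for ##{name}"
        end
        result
      end
    end
  end

  class Account
    extend Contracts
    attr_reader :balance

    def initialize(balance)
      @balance = balance
    end

    def withdraw(amount)
      @balance -= amount
    end
    contract :withdraw,
             pre:  ->(amount) { amount > 0 && amount <= balance },
             post: ->(_result) { balance >= 0 }
  end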
So, at the risk of reinforcing my troll position, I’ll postulate that
the productivity increase from dynamic typing is a delusion. From what I
have seen, a decently designed type hierarchy and some degree of type
inference would do away with 90% of the trivial anecdotes demonstrating
time savings. In Ruby, I see bigger productivity gains in the
metaprogramming facilities and the powerful object model - these aren’t
at all exclusive with type / contract checking. As yet I remain
unconvinced that extensive use of duck typing and eval’ing public
methods into objects does more good than harm in the long term. I blame
years of doing code maintenance, which taught me that most programmers
(myself included) think they’re much smarter than they are and are way
too impressionable by “clever ideas” (= horrid hacks) to think of edge
cases.
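To be concrete about the pattern I mean (a toy example, nobody’s real
code): eval’ing accessors into a class at runtime is convenient, but
grep and a maintainer’s eyes can no longer find where a method came
from.

  class Record
  end

  %w[name email].each do |attr|
    Record.class_eval <<-RUBY, __FILE__, __LINE__ + 1
      def #{attr}
        @#{attr}
      end

      def #{attr}=(value)
        @#{attr} = value
      end
    RUBY
  end

  r = Record.new
  r.name = "Ada"  # works - now try finding where #name= is defined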
(To wit: my last work project wrapped the ORM in domain-specific
facades, complete with some rudimentary transaction management. It was
quick and easy to mechanically churn out new facade code, there was no
dependency on a declarative transaction-management solution, and the
result read very nicely and was easy to use. Fast forward to several
junior developers, including me, being put to work on a new feature,
mostly guessing around from existing code. I was the first to ask a
senior to review some non-trivial code working with the model, and got
blinked at for performing what was supposed to be an atomic change via
multiple facade calls, i.e. multiple transactions. Of course, by that
time the others had committed several show-stopping bugs related to
optimistic locking that created unmodifiable zombie records in the
database, which took manual edits of the testing server’s database to
bring things back on track - and trashed the development DB of everyone
else working on the module. However, had the optimistic locking checks
not been there, what was essentially an invalid database change would
have passed silently, and once out of testing, concurrent users would
have sent data integrity down the proverbial creek with no paddle,
because the transaction-wrapping facades would hide the problem.)
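To make the facade trap concrete, a hypothetical sketch with invented
names, not the actual project code: each facade call opens its own
transaction, so a change that reads as one atomic operation really
commits twice.

  module DB
    def self.transaction
      puts "BEGIN"
      yield
      puts "COMMIT"
    end
  end

  class OrderFacade
    def reserve_stock(order)
      DB.transaction { puts "reserving stock for #{order}" }
    end

    def charge_customer(order)
      DB.transaction { puts "charging customer for #{order}" }
    end
  end

  facade = OrderFacade.new
  facade.reserve_stock(:order42)    # transaction 1 commits...
  facade.charge_customer(:order42)  # ...before transaction 2 begins

A failure between those two calls leaves the first commit in place -
exactly the kind of half-done change the optimistic locking checks were
catching for us.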
David V.