Too many error checks in your code may result in a bad coverage grade

Something I realized is that if you try to write more bulletproof code, it may end up with a lot of extra checks in it. That can make it harder to test, because there are more conditions. In theory the code is covering cases that may or may not ever occur, but it is trying to be more robust … However, when you then run something like the simplecov gem (GitHub - simplecov-ruby/simplecov: Code coverage for Ruby with a powerful configuration library and automatic merging of coverage across test suites), any checks that are not tested because they are not typical will give your code module a bad percentage mark for not being fully tested.
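
Here is a rough sketch of the kind of thing I mean (a hypothetical class, not code from my actual project): the happy path is exercised by the specs, but the defensive branches never run, so SimpleCov marks those lines as uncovered and the file’s percentage drops.

```ruby
# spec/spec_helper.rb (standard SimpleCov setup, loaded before the app code):
#   require 'simplecov'
#   SimpleCov.start

# lib/order_parser.rb -- "bullet proof" version with extra defensive checks
class OrderParser
  def parse(input)
    unless input.is_a?(String)
      return nil               # defensive branch: wrong input type
    end

    quantity, price = input.split(',')
    if quantity.nil? || price.nil?
      return nil               # defensive branch: missing field
    end

    Integer(quantity) * Float(price)
  rescue ArgumentError
    nil                        # defensive branch: malformed number
  end
end
```

If the suite only ever calls `parse('3,2.5')`, the three defensive lines stay red in the report even though the method “works”.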

I hate to say it, but that grade might be warranted.

If you are anticipating a lot of error conditions in a function, you should probably unit test all of those error cases to make sure each one is handled well. Otherwise, how do you know you are handling them appropriately?
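
For example (a hypothetical RSpec sketch against the kind of parser sketched in the first post), each anticipated error case gets its own example, which both verifies the handling and brings those lines under coverage:

```ruby
# spec/order_parser_spec.rb
require 'spec_helper'
require 'order_parser'

RSpec.describe OrderParser do
  subject(:parser) { OrderParser.new }

  it 'parses a well-formed line' do
    expect(parser.parse('3,2.5')).to eq(7.5)
  end

  # One example per anticipated error case, so each defensive branch
  # is both verified and counted as covered.
  it 'returns nil for non-string input' do
    expect(parser.parse(42)).to be_nil
  end

  it 'returns nil when a field is missing' do
    expect(parser.parse('3')).to be_nil
  end

  it 'returns nil for a malformed number' do
    expect(parser.parse('three,2.5')).to be_nil
  end
end
```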

It’s truly test-driven development … There is no way to guarantee that a program is bug free (Turing). Even if all of your code is executed by tests, it is still not executed under all possible conditions. It seems I should want to write code that satisfies the testing requirement, but I am not always confident that it covers all cases, and I can’t always test all cases …

On 22 June 2016 at 21:02, Jedrin [email protected] wrote:

> It’s truly test-driven development … There is no way to guarantee that a
> program is bug free (Turing). Even if all of your code is executed by tests,
> it is still not executed under all possible conditions. It seems I should
> want to write code that satisfies the testing requirement, but I am not
> always confident that it covers all cases, and I can’t always test all
> cases …

I think the point is that any extra checks that you add for robustness
should have associated tests. Otherwise you won’t know that your
check does what you intended should that condition ever arise.

Colin

Generally when tests are written, it seems they never cover all possible cases. When you are writing code, you might sometimes think of a case where an unlikely type of input could occur, so you convert it from one type to another. Even if your code has just one or two lines that don’t get covered, that is enough to knock your coverage grade down considerably. If those particular cases seem unlikely, then you are most probably going to leave them out, because you may not have time to try to cover every last unlikely user error that could happen … That seems to be the actual reality, even if it is not ideal, but I don’t see any real remedy to this.

It’s really easy: the remedy is to write a test every time you find a new error condition, and then have your method satisfy that test.

If you are not writing a test every time you attempt to handle an identifiable user error, what are you doing exactly? How are you handling the error without making sure that the error handler works?
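
As a sketch of that cycle, continuing the hypothetical parser from earlier in the thread: you notice a new error case, write the spec first, and only then touch the method.

```ruby
# spec/order_parser_spec.rb -- newly discovered error case, spec written first
it 'returns nil for a negative quantity' do
  expect(parser.parse('-3,2.5')).to be_nil
end
```

```ruby
# lib/order_parser.rb -- then the method is changed to satisfy that spec
def parse(input)
  return nil unless input.is_a?(String)
  quantity, price = input.split(',')
  return nil if quantity.nil? || price.nil?

  quantity = Integer(quantity)
  return nil if quantity.negative?   # new handling, now exercised by a test

  quantity * Float(price)
rescue ArgumentError
  nil
end
```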

If you can’t test the error conditions that you are anticipating, you might not want to rescue from them.
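
(If you do decide to keep a defensive branch that you are deliberately not going to test, SimpleCov’s `:nocov:` markers at least let you exclude it from the report explicitly, so the grade reflects a decision rather than an oversight. Rough sketch with the same hypothetical method:)

```ruby
def parse(input)
  # :nocov:
  # Deliberately untested defensive branch, excluded from the coverage report.
  unless input.is_a?(String)
    return nil
  end
  # :nocov:

  quantity, price = input.split(',')
  Integer(quantity) * Float(price)
end
```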

Additionally, it sounds like you are being far too accepting with your API. Have some rules about which input types you accept, and leave it at that. If there’s a need to process another input type, you can always provide another function that decorates your original one after converting the type appropriately. There’s a difference between a “user” who is a developer (I would argue that is not a user) and an actual “user” of your website or program.
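
A rough sketch of that split (hypothetical names): the core method accepts exactly one input type and raises otherwise, while a separate wrapper does the conversion for the callers that need it.

```ruby
class OrderParser
  # Strict core API: accepts only a String; anything else is a caller bug.
  def parse(line)
    raise TypeError, 'line must be a String' unless line.is_a?(String)

    quantity, price = line.split(',')
    Integer(quantity) * Float(price)
  end

  # Convenience wrapper that decorates the strict method: it converts an
  # array such as [3, 2.5] into the expected String, then delegates.
  def parse_row(row)
    parse(row.map(&:to_s).join(','))
  end
end
```

Each method then has a narrow contract that is easy to test on its own.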