How should I interpret RCov code coverage?

First of all, I’m new to RCov.

I have a Rails application with around 20 models and 20 controllers,
plus helpers and other code.

And I’ve got unusually good RCov test coverage: 39% (total) and 31%
(code coverage), and this with only 12 RSpec examples.

I’m running RCov with the following options:

t.rcov_opts = "--callsites --xrefs --no-comments --rails --exclude
test/,spec/,features/,factories/,gems/*"
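
For context, the whole rake task looks roughly like this (a minimal
sketch assuming RSpec 2.x’s RSpec::Core::RakeTask, which still
supported rcov at the time; the task name and pattern are
illustrative):

  # Rakefile
  require 'rspec/core/rake_task'

  RSpec::Core::RakeTask.new(:rcov) do |t|
    t.pattern   = 'spec/**/*_spec.rb'
    t.rcov      = true    # run the specs under rcov instead of plain ruby
    t.rcov_opts = %w[--callsites --xrefs --no-comments --rails
                     --exclude test/,spec/,features/,factories/,gems/*]
  end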

What kind of heuristic is RCov using? I’m reading its man page
(rcov(1), via Linux Certif) and it says:

rcov is a code coverage tool for Ruby. It creates code coverage
reports showing the unit test coverage of the target code.

rcov does “statement coverage”, also referred to as “C0 coverage
analysis”. It tests whether each line of the source code has been
executed.

rcov is typically used to find the areas of a program that have not
been sufficiently tested. It reports what code has not been run by
any test cases.

That being said, it means that:

(in my case) 31% of the application logic’s lines of code have been
executed by the test files. E.g. if a method has been executed by one
test case, then that method (the LOC that make up that method) counts
as 100% covered. The bad side is that if a test file loads a class A
(just by referencing its name in its logic) and doesn’t execute any of
its methods, the method signatures of that class will count as executed
LOC, 100% covered, so the number can grow quite easily. Right?
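
To illustrate what I mean (hypothetical class and spec, names made up):
merely loading a file executes its class and def lines, so they show up
as covered even if a method body never runs.

  # a.rb
  class A
    def greet(name)        # this `def` line runs when the file is loaded
      "Hello, #{name}"     # this body line only counts once greet is called
    end

    def farewell(name)     # also marked as executed just by loading the file
      "Bye, #{name}"       # never called, so reported as uncovered
    end
  end

  # a_spec.rb - a single trivial example already covers the class body
  # and one method, even though farewell is never exercised
  require_relative 'a'

  describe A do
    it "greets" do
      A.new.greet("world").should == "Hello, world"
    end
  end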

Is an RCov code coverage of 100% really good? Because in my opinion a
method should be tested with more than one case, but RCov doesn’t care
about this :(.

Is there another tool that does a better job than RCov on test coverage
for Rails projects?

> Is an RCov code coverage of 100% really good? Because in my opinion a
> method should be tested with more than one case, but RCov doesn’t care
> about this :(.

RCov is C0 coverage[1]. It’s trivial to hit >95% coverage; in fact you
can very quickly achieve >60% coverage by writing a handful of Cukes
on most (simple/new) Rails apps.
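
To make concrete why C0 numbers come so cheaply: line coverage marks a
whole line as covered as soon as any part of it runs, so branches that
share a line are invisible to it. A tiny illustrative snippet (not tied
to any particular tool):

  # one test that hits either branch marks the whole line as covered,
  # even though the other branch was never exercised
  def discount(amount)
    amount > 100 ? amount * 0.9 : amount
  end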

> And I’ve got unusually good RCov test coverage: 39% (total) and 31%
> (code coverage), and this with only 12 RSpec examples.

With C0, this isn’t unusual. To mitigate this effect, on Rails projects
we typically have two coverage report builds: just the model specs, and
the model + controller specs (roughly as sketched below). I typically
expect my codebases to have ~95% coverage for the model build, and
somewhat higher for the model + controller build. I don’t usually look
at coverage from Cukes as it doesn’t really say very much.
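
Something along these lines (a minimal sketch, again assuming RSpec
2.x’s RSpec::Core::RakeTask; task names, patterns and excludes are
illustrative):

  require 'rspec/core/rake_task'

  RSpec::Core::RakeTask.new('rcov:models') do |t|
    t.pattern   = 'spec/models/**/*_spec.rb'
    t.rcov      = true
    t.rcov_opts = %w[--rails --exclude spec/,gems/*]
  end

  RSpec::Core::RakeTask.new('rcov:models_and_controllers') do |t|
    t.pattern   = 'spec/{models,controllers}/**/*_spec.rb'
    t.rcov      = true
    t.rcov_opts = %w[--rails --exclude spec/,gems/*]
  end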

> the method signatures of that class will count as executed LOC,
> 100% covered, so the number can grow quite easily. Right?

It’s not unusual for us to see codebases where the customer mandated a
certain coverage number, but the contractor was unfamiliar with TDD
and so simply wrote specs with no assertions. The coverage numbers are
met, but the specs are useless.
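
For instance (model name hypothetical), a spec like this executes the
code, so rcov counts every line it touches as covered, yet it verifies
nothing:

  # invoice_spec.rb - no assertions, but the lines it runs still show
  # up as covered in the rcov report
  describe Invoice do
    it "calculates the total" do
      Invoice.new.total   # exercises the code; asserts nothing
    end
  end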

You can mitigate this to some extent with a library like heckle (a
mutation testing tool), but YMMV.

> Is there another tool that does a better job than RCov on test
> coverage for Rails projects?

AFAIK, Ruby tools only provide C0 coverage metrics.

Best,
Sidu.
http://c42.in

[1]


You can get lots of interesting metrics for up-to-date Rails projects
using the metrical gem. Integrating such a tool into continuous
integration is a really good idea. Viewing metrics over a period of
time and seeing how they change can be very informative.

All metrics need to be interpreted relative to the context of the
project and the metric. So the answer to your question “Is an RCov code
coverage of 100% really good?” is - it depends! What you really need to
do here is ask better questions :) - which is not easy!

The way to use metrics (IMO) is as indicators. With your current C0
coverage of only 39% you can find lots of code that is not tested. You
can then find out who wrote the code, and start dealing with the issue
of why untested code was added to the project. The reason will vary
wildly between projects. Some example reasons will illustrate this:

  • There is only one developer on the project and they are new to TDD.
  • There is only one developer on the project contributing code with no
    tests, and he refuses to write tests for his code.
  • All the developers want to do TDD, but as they come up to the end of
    each sprint they feel under too much time pressure to get things
    done, and so stop doing TDD.

… ad infinitum

The metric is an indicator that there is a problem, but it does not
tell you what the problem is.

Using multiple metrics and viewing them over time will give you many
indicators of possible problems with a project. Properly identifying
a problem, with enough precision to have a chance of implementing
solutions for it, requires the investigation of the source, the
developers and the process of development.

HTH

Andrew

Thanks for the info and suggestions, Sidu & Andrew!
I’ve got this Rails project in order to do some bug fixing and to add
new features, and I wanted to see some metrics before touching the
code… anyway, I will dig more into it - as I am new to Rails and Ruby,
but I like TDD and BDD :D.

Any opinions on this topic are welcome!

Thanks,
Andrei.