Test::Unit assertion pass scenario

When using a Test::Unit assertion such as assert_equal, the script
reports a failure if the test condition (assertion) is not met. If it
passes, no output is displayed.

Is there a way to force the results of the test to display both passes
and failures?

Thanks in advance!

On Dec 30, 2009, at 12:08, John S. wrote:

When using a Test::Unit assertion such as assert_equal, the script
reports a failure if the test condition (assertion) is not met. If it
passes, no output is displayed.

Is there a way to force the results of the test to display both passes
and failures?

How are you using it? Normally it displays something like:

/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -w -I…/…/minitest/dev/lib:lib:ext:bin:test -e 'require "rubygems"; require "minitest/autorun"; require "test/test_autotest.rb"; require "test/test_focus.rb"; require "test/test_unit_diff.rb"; require "test/test_zentest.rb"; require "test/test_zentest_mapping.rb"'
Loaded suite -e
Started

Finished in 0.214105 seconds.

103 tests, 259 assertions, 0 failures, 0 errors, 0 skips

(this is for minitest, not test/unit, but the output is very similar)

Your test should be set up like:

test_blah.rb:

require 'test/unit'

class TestThingy < Test::Unit::TestCase
  def test_thingy
    assert_equal 2, 1 + 1
  end
end

Here is a run:
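
(Roughly, assuming Ruby 1.8’s bundled test/unit; timing will vary:)

Loaded suite test_blah
Started
.
Finished in 0.001 seconds.

1 tests, 1 assertions, 0 failures, 0 errors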

Yep, the example above is exactly the way I’m using it.
However, as demonstrated in your example, the 259 assertions that were
run (and passed) do not display any kind of passing checkpoint, the way
they would have if any of those assertions failed.

Basically, I am looking for a way to display info for both passed
and failed assertions, similar to what is done when an assertion fails.

Thanks again!
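
(Worth noting: the classic test/unit console runner does have a verbose
mode that reports each test, though not each individual assertion, as it
runs. A minimal sketch, assuming Ruby 1.8’s bundled Test::Unit; check
ruby test_blah.rb --help for the exact option spelling:

ruby test_blah.rb -v

prints one line per test method, something like:

Loaded suite test_blah
Started
test_thingy(TestThingy): .
Finished in 0.001 seconds.

1 tests, 1 assertions, 0 failures, 0 errors

Per-assertion pass reporting doesn’t appear to be built into the stock
runner; the wrapper sketch further down shows one way to fake it.)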

On 31.12.2009 02:41, John S. wrote:

Yep, the example above is exactly the way I’m using it.
However, as demonstrated in your example, the 259 assertions that were
run (and passed) do not display any kind of passing checkpoint, the way
they would have if any of those assertions failed.

Basically, I am looking for a way to display info for both passed
and failed assertions, similar to what is done when an assertion fails.

Why?

If an assertion passes, everything is well (within the test parameters,
anyway ;) ), and no action is required.

In fact, you’d degrade the signal-to-noise ratio and drown out actual
failures.

If you are looking for a way to see what code gets exercised (and
whether all code gets tested), RCov used to be a good solution for code
coverage, too (alas, it hasn’t been updated since 2007).
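
(For reference, the basic rcov invocation just takes the test files; a
minimal sketch, assuming the rcov gem is installed and its default
output location:

rcov test/test_blah.rb

That runs the tests under coverage analysis and writes an HTML report
to ./coverage/ showing which lines were executed.)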

On Dec 30, 2009, at 17:41, John S. wrote:

Yep, the example above is exactly the way I’m using it.
However, as demonstrated in your example, the 259 assertions that were
run (and passed) do not display any kind of passing checkpoint, the way
they would have if any of those assertions failed.

Basically, I am looking for a way to display info for both passed
and failed assertions, similar to what is done when an assertion fails.

Your use of “info” is pretty nebulous.


Finished in 0.214105 seconds.

103 tests, 259 assertions, 0 failures, 0 errors, 0 skips

All of that is “info”.

What do you want it to do differently, and (more importantly) WHY?

On Dec 30, 2009, at 17:47, Phillip G. wrote:

If you are looking for a way to see what code gets exercised (and whether all code gets tested), RCov used to be a good solution for code coverage, too (alas, it hasn’t been updated since 2007).

not true:

rcov (0.9.7.1)
Platform: ruby, java
Authors: Relevance, Chad H. (spicycode), Aaron Bedra
(abedra), Jay McGaffigan, Mauricio F.
Homepage: http://github.com/relevance/rcov

Code coverage analysis tool for Ruby

see http://rubygems.org/gems/rcov

Versions
• 0.9.7.1 December 29, 2009
• 0.9.7.1 December 29, 2009 java
• 0.9.7 December 27, 2009
• 0.9.7 December 27, 2009 java
• 0.9.6 May 12, 2009

But I feel I should point out: rcov doesn’t tell you that your tests are
any good… it is only good for “what code gets exercised” but not
“[all] code gets tested”.

Why is a good question. First, the extra info is not for myself, nor
would it be for any of the devs who may run it. The theory is that
anyone who writes the tests or uses them regularly should be familiar
with what is being tested anyway, and hence, only the failures really
need further investigation.

It’s more of a CYA item for those who are, shall we say, not in the
know.
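
(If the goal really is an explicit pass marker for each check, one
low-tech option is to wrap the assertions yourself. A minimal sketch;
noisy_assert is a made-up helper, not part of test/unit:

require 'test/unit'

class TestThingy < Test::Unit::TestCase
  # Run any assertion block and print an explicit pass marker.
  # A failing assertion raises before the puts runs, so failures
  # still show up exactly as they normally would.
  def noisy_assert(label)
    yield
    puts "PASS: #{label}"
  end

  def test_thingy
    noisy_assert("1+1 == 2") { assert_equal 2, 1 + 1 }
  end
end

Each passing check then prints a PASS line alongside the usual summary,
which may be paper trail enough for the not-in-the-know audience.)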

On Dec 30, 2009, at 20:35, Jörg W Mittag wrote:

Ryan D. wrote:

But I feel I should point out: rcov doesn’t tell you that your tests
are any good… it is only good for “what code gets exercised” but
not “[all] code gets tested”.

Simple proof: take a hypothetical “perfect” test suite with 100%
coverage. Remove all assertions. Still 100% coverage, but nothing
gets tested anymore.

Exactly.
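
(A toy version of that proof; Thingy and frobnicate are made-up names:

require 'test/unit'

class Thingy
  def frobnicate
    2 + 2
  end
end

class TestThingy < Test::Unit::TestCase
  def test_frobnicate
    Thingy.new.frobnicate # every line of Thingy gets executed...
    # ...but with no assert_* call, a broken frobnicate still "passes"
  end
end

A coverage tool reports 100% of Thingy exercised, yet the test verifies
nothing about its behavior.)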

On Dec 30, 2009, at 20:05, John S. wrote:

Why is a good question. First, the extra info is not for myself, nor
would it be for any of the devs who may run it. The theory is that
anyone who writes the tests or uses them regularly should be familiar
with what is being tested anyway, and hence, only the failures really
need further investigation.

It’s more of a CYA item for those who are, shall we say, not in the
know.

Some sort of detailed report of exactly what assertions you’re running
isn’t a very good CYA. You might be better off with:

  • # of tests
  • # of assertions (or better: assertions / test)
  • % of coverage (possibly add heckle #'s, but that’s a serious PITA)
  • loc test / loc impl (but please for gods’ sake refactor both sides)
  • test time

and then graph that over time.
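
(A minimal sketch of the logging half in plain Ruby; log_metrics.rb and
metrics.csv are made-up names, and the script just scrapes the summary
line from the runner’s output:

# log_metrics.rb: pipe a test run through this to append the summary
# numbers to a CSV you can graph over time.
require 'time'

summary = nil
$stdin.each_line do |line|
  print line # pass the runner's output through untouched
  summary = line if line =~ /^\d+ tests, \d+ assertions/
end

if summary
  tests, assertions, failures, errors = summary.scan(/\d+/).first(4)
  File.open('metrics.csv', 'a') do |f|
    f.puts [Time.now.iso8601, tests, assertions, failures, errors].join(',')
  end
end

Usage: ruby test_blah.rb | ruby log_metrics.rb

Coverage %, loc ratios, and heckle scores would need extra tooling on
top, but the same append-to-a-file-and-graph idea applies.)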

On Dec 30, 2009, at 22:19, Phillip G. wrote:

On 31.12.2009 04:35, Ryan D. wrote:

On Dec 30, 2009, at 17:47, Phillip G. wrote:

If you are looking for a way to see what code gets exercised (and whether all code gets tested), RCov used to be a good solution for code coverage, too (alas, it hasn’t been updated since 2007).

not true:

Someone needs to update eigenclass.org’s RCov page, then.

He’s not responding to anyone, which is why rcov has new parents.

On 31.12.2009 09:42, Ryan D. wrote:

He’s not responding to anyone, which is why rcov has new parents.

Aha! Thanks. :)