Testy.rb - ruby testing that's mad at the world

NAME
testy.rb

DESCRIPTION
a BDD testing framework for ruby that's mad at the world and plans
to kick its ass in 80 freakin lines of code

SYNOPSIS
Testy.testing 'your code' do
  test 'some behaviour' do |result|
    ultimate = Ultimate.new
    result.check :name, :expect => 42, :actual => ultimate.answer
  end
end

PRINCIPLES AND GOALS
. it.should.not.merely.be.a.unit.testing.with.a.clever.dsl

. testing should not require learning a framework. ruby is a great
framework so testy uses it instead, requiring programmers learn
exactly 2 new method calls
. testing loc should not dwarf those of the application

. testing framework loc should not dwarf those of the application

. testing frameworks should never alter ruby built-ins nor add
methods to Object, Kernel, et al.

. the output of tests should be machine parsable for reporting and
ci tools to easily integrate with

. the output of tests should be beautiful so that humans can read it

. the shape of the test file should not insult the programmer so
that tests can double as sample code

. the testing framework should never alter exception semantics

. hi-jacking at_exit sucks ass

. the exit status of running a test suite should indicate the
degree of its failure state: the more failures the higher the exit status

. sample code should easily be able to double as a test suite,
including its output

. testing should improve your code and help your users, not make
you want to kill yourself

. using a format that aligns in terminal is sanity saving when
comparing output

. testing frameworks should provide as few shortcuts for making
brittle, tightly coupled tests as possible

. test suites should be able to be created, manipulated, have their
output streamed to different ports, and even tested themselves -
they should be plain ol' objects under the hood

SAMPLES

<========< samples/a.rb >========>

~ > cat samples/a.rb

   # simple use of testy involves simply writing code, and recording the result
   # you expect against the actual result
   #
   # notice that the output is very pretty and that the exitstatus is 0 when all
   # tests pass
   #
   require 'testy'

   Testy.testing 'the kick-ass-ed-ness of testy' do

     test 'the ultimate answer to life' do |result|
       list = []

       list << 42
       result.check :a, :expect => 42, :actual => list.first

       list << 42.0
       result.check :b, 42.0, list.last
     end

   end

~ > ruby samples/a.rb #=> exitstatus=0

 ---
 the kick-ass-ed-ness of testy:
   the ultimate answer to life:
     success:
       a: 42
       b: 42.0

<========< samples/b.rb >========>

~ > cat samples/b.rb

   # testy will handle unexpected results and exceptions thrown in your code in
   # exactly the same way - by reporting on them in a beautiful fashion and
   # continuing to run other tests. notice, however, that an unexpected result
   # or raised exception will cause a non-zero exitstatus (equalling the number
   # of failed tests) for the suite as a whole. also note that previously
   # accumulated expect/actual pairs are still reported on in the error report.
   #
   require 'testy'

   Testy.testing 'the exception handling of testy' do

     test 'raising an exception' do |result|
       list = []

       list << 42
       result.check :a, :expect => 42, :actual => list.first

       list.method_that_does_not_exist
     end

     test 'returning unexpected results' do |result|
       result.check 'a', 42, 42
       result.check :b, :expect => 'forty-two', :actual => 42.0
     end

   end

~ > ruby samples/b.rb #=> exitstatus=2

 ---
 the exception handling of testy:
   raising an exception:
     failure:
       error:
         class: NoMethodError
         message: undefined method `method_that_does_not_exist' for [42]:Array
         backtrace:
         - samples/b.rb:18
         - ./lib/testy.rb:65:in `call'
         - ./lib/testy.rb:65:in `run'
         - /opt/local/lib/ruby/site_ruby/1.8/orderedhash.rb:65:in `each'
         - /opt/local/lib/ruby/site_ruby/1.8/orderedhash.rb:65:in `each'
         - ./lib/testy.rb:61:in `run'
         - ./lib/testy.rb:89:in `testing'
         - samples/b.rb:10
       expect:
         a: 42
       actual:
         a: 42
   returning unexpected results:
     failure:
       expect:
         a: 42
         b: forty-two
       actual:
         a: 42
         b: 42.0

a @ http://codeforpeople.com/

On Mar 28, 6:01 pm, “ara.t.howard” [email protected] wrote:

   returning unexpected results:
     failure:
       expect:
         a: 42
         b: forty-two
       actual:
         a: 42
         b: 42.0

You call this beautiful, but I don’t understand it. This says that ‘a’
is okay and ‘b’ isn’t, right? Maybe it’s not so much that I don’t
understand it as I don’t really like it.

Frankly, I find it rather ironic that you’re writing a testing
framework and seemingly advocating BDD. Maybe things have changed
mightily in these heady recent times.

I like some of what you have as points, like the output should be
readable (“beautiful” is a little subjective), and of course that
tests should improve your code. The framework points, about the
framework not being huge and not contributing to brittle tests are
good, and the exit status is interesting. Personally, I live with a
couple of methods (as few as possible) on Object and Kernel so writing
the tests doesn’t make me want to kill myself.

I used RSpec for a long time, and still do with some projects. I've
switched to bacon for my personal projects, and I love it. As for
mocking, which is necessary in some cases if you want to test without
turning into a serial killer: mocha with RSpec, facon with bacon.

Yossef M. wrote:

You call this beautiful, but I don’t understand it. This says that ‘a’
is okay and ‘b’ isn’t, right? Maybe it’s not so much that I don’t
understand it as I don’t really like it.

Actually, it seems to be YAML format. It’s readable and can be parsed.

Regards,
Ian

ara.t.howard wrote:

. it.should.not.merely.be.a.unit.testing.with.a.clever.dsl

How about you simply let the programmer write anything they want, and
then if it returns false or nil you rip their entire expression and
report the name and value of every variable within it?

assert{ foo > 41.8 and foo < 42.1 } => false

  foo => 40.9

Notice we didn't need to tell the assertion the variable's name was
'foo'. This rewards programmers for writing readable code. You get
the maximum reflection for the leanest statements.

Honestly I think it's the lack of a nauseatingly kewt DSL that
inhibits adoption of assert{ 2.0 }…
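Phlip's idea can be sketched in plain ruby using the block's binding. This is a hypothetical simplification (the real assert{ 2.0 } goes further and re-parses the expression's source); the method and variable names here are mine:

```ruby
# sketch of a reflective assert: if the block is falsy, walk the
# block's binding and report every local variable's name and value,
# so "foo => 40.9" comes out without the programmer naming 'foo'.
def assert(&block)
  return true if block.call
  b = block.binding
  puts "assert{ ... } => false"
  b.local_variables.each do |name|
    puts "  #{name} => #{b.local_variable_get(name).inspect}"
  end
  false
end

foo = 40.9
assert { foo > 41.8 && foo < 42.1 }
```

A full implementation would also need the expression text itself, which requires reading the block's source; the binding alone only recovers names and values.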

ara.t.howard wrote:

NAME
testy.rb

DESCRIPTION
a BDD testing framework for ruby that's mad at the world and plans
to kick its ass in 80 freakin lines of code

It’s nice to see you finally riffing on testing.

Later,

On Mar 28, 2009, at 5:45 PM, Ian T. wrote:

Actually, it seems to be YAML format. It’s readable and can be parsed.

bingo. emphasis on the latter. think unix pipes.

a @ http://codeforpeople.com/

On Mar 28, 2009, at 5:33 PM, Yossef M. wrote:

You call this beautiful, but I don’t understand it. This says that ‘a’
is okay and ‘b’ isn’t, right? Maybe it’s not so much that I don’t
understand it as I don’t really like it.

it's a valid complaint. but compare it to what you'll get in most
frameworks and consider that, by beautiful, i mean that a YAML.load
can slurp the entire set of expect vs actual. i considered a
delta-style format:

  diff
    a:
      expect: 42
      actual: 42.0
    b:
      expect: 43
      actual: 43.0

but it seems very hard to reconstruct for downstream filters. i’m
open to suggestion on format though. requirements are

. readable by humans
. easily parsed by computers

basically that means some yaml format. honestly open to suggestion
here…
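To make the "easily parsed by computers" point concrete, here is a sketch of a downstream filter slurping a testy report with YAML.load - the report text is copied from the samples/a.rb run earlier in the thread:

```ruby
require 'yaml'

# the report emitted by samples/a.rb, verbatim
report = <<-YAML
---
the kick-ass-ed-ness of testy:
  the ultimate answer to life:
    success:
      a: 42
      b: 42.0
YAML

# one YAML.load gives nested hashes: suite -> test -> outcome -> pairs
suites = YAML.load(report)
suites.each do |suite, tests|
  tests.each do |name, outcome|
    status = outcome.keys.first   # "success" or "failure"
    puts "#{suite} / #{name}: #{status}"
  end
end
```

Because the whole report is one document, a ci tool at the end of a unix pipe needs no custom parser - just a YAML library.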

Frankly, I find it rather ironic that you’re writing a testing
framework and seemingly advocating BDD. Maybe things have changed
mightily in these heady recent times.

i personally don't think so, i think the community took a wrong turn.
from wikipedia (Behavior_Driven_Development):

"
The practices of BDD include:

 * Involving stakeholders in the process through outside-in
   software development
 * Using examples to describe the behavior of the application, or
   of units of code
 * Automating those examples to provide quick feedback and
   regression testing
 * In software tests, using 'should' to help clarify
   responsibility and allow the software's functionality to be questioned
 * Tests use 'ensure' to differentiate outcomes in the scope of the
   code in question from side-effects of other elements of code.
 * Using mocks to stand-in for modules of code which have not yet
   been written
"

i have major issues with points two and three wrt most ruby
testing frameworks. one of the main points of testy is to combine
examples with testing. rspec and all the others do not serve as
examples unless you are a ruby master. that is to say they introduce
too many additions to the code that's supposed to be an example to
really preserve its 'exampleness'. and of course the output is
utterly useless to normal humans. if a framework provides 1000
assert_xxxxxxxx methods ad nauseam then the point of the code - its
level of example-good-ness - is lost to mere mortals

mocking, which is necessary in some cases if you want to test without
turning into a serial killer, mocha with RSpec, facon with bacon.

this will summarize where my thoughts are on that

cfp:~/redfission > find vendor/gems/{faker,mocha,thoughtbot}* -type f|xargs -n1 cat|wc -l
24255

cfp:~/redfission > find app -type f|xargs -n1 cat|wc -l
1828

rspec and co might be fine but seriously, the above is insane right?

kind regards.

a @ http://codeforpeople.com/

On Mar 28, 2009, at 6:00 PM, Phlip wrote:

foo => 40.9

Notice we didn't need to tell the assertion the variable's name was
'foo'. This rewards programmers for writing readable code. You get
the maximum reflection for the leanest statements.

Honestly I think it's the lack of a nauseatingly kewt DSL that
inhibits adoption of assert{ 2.0 }…

that's interesting indeed. one of my goals with testy is that output
is meaningful both for computers and humans and, for that, yaml is tops.

still - reporting on the context the error was raised from is
quite interesting. you are basically saying report binding not
stacktrace right?

a @ http://codeforpeople.com/

On Sun, Mar 29, 2009 at 08:01:18AM +0900, ara.t.howard wrote:

. the exit status of running a test suite should indicate the degree of
its failure state: the more failures the higher the exit status

Up to a limit of course. how about exiting with the percentage? Exit
status is limited to 256 values, so you can make it exit 0 with lots
of failures:

http://gist.github.com/87480

enjoy,

-jeremy
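The idea in Jeremy's gist can be sketched like this (the method name is mine, not taken from the gist): a raw failure count wraps at the 8-bit limit, so 256 failures would exit 0, while a percentage always stays in range:

```ruby
# map a failure count to an exit status that never wraps: exit status
# is truncated to 0..255, so report failures as a percentage of the
# suite, rounding up so that even one failure yields a non-zero status.
def exit_status(failures, total)
  return 0 if total.zero? || failures.zero?
  ((failures * 100.0) / total).ceil
end

# a suite would then finish with something like:
#   exit exit_status(failed_count, tests.size)
```

One failure out of a thousand still exits 1, and a fully failing suite exits 100, well inside the 0..255 range.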

Jeremy H. wrote:

On Sun, Mar 29, 2009 at 08:01:18AM +0900, ara.t.howard wrote:

. the exit status of running a test suite should indicate the degree of
its failure state: the more failures the higher the exit status

Up to a limit of course. how about exiting with the percentage? Exit status
is limited to 256 values, so you can make it exit 0 with lots of failures:

Again: If you have any reason to count the errors, you are already
absolutely screwed anyway…

On Mar 29, 2009, at 1:39 PM, Phlip wrote:

Again: If you have any reason to count the errors, you are already
absolutely screwed anyway…

I really feel counting the errors and reading the output are both
things better handled by defining a good interface for the results
writer. If I could just define some trivial class with methods like
test_passed(), test_failed(), test_errored_out(), and tests_finished()
then just plug that in, I could easily do anything I want.

James Edward G. II

James G. wrote:

I really feel counting the errors and reading the output are both
things better handled by defining a good interface for the results
writer. If I could just define some trivial class with methods like
test_passed(), test_failed(), test_errored_out(), and tests_finished()
then just plug that in, I could easily do anything I want.

What, besides instantly fix it (or revert) do you want to do with an
error message from a broken test?

On Mar 28, 2009, at 9:30 PM, Bil K. wrote:

It’s nice to see you finally riffing on testing.

:wink: more to come

a @ http://codeforpeople.com/

On Mar 29, 2009, at 2:29 PM, Phlip wrote:

What, besides instantly fix it (or revert) do you want to do with an
error message from a broken test?

The default writer wants to write them out to the console for the user
to see.

In TextMate, I would override that behavior to color the error
messages red in our command output window and hyperlink the stack
trace back into TextMate documents.

James Edward G. II

Brian C. wrote:

Ara Howard wrote:

   result.check :name, :expect => 42, :actual => ultimate.answer

I’m afraid I’m missing something. Why is this better than

assert_equal 42, ultimate.answer, "name"
?

Or even less:

name = 42
assert{ name == ultimate.answer }

Ara Howard wrote:

   result.check :name, :expect => 42, :actual => ultimate.answer

I’m afraid I’m missing something. Why is this better than

assert_equal 42, ultimate.answer, "name"
?

. testing should improve your code and help your users, not make
you want to kill yourself

Hear hear to that!

requiring programmers learn exactly 2 new method calls

Well, it would be nice to know what those 2 method calls were, and
their semantics, without reverse-engineering the code. Are these
Testy.testing and Result#check?

I did find the gem, and installed it, but the generated rdoc is entirely
free of comments or explanation.

The two samples don't really help me understand why testy is good or how
to use it effectively, since they are both more verbose than what I
would have written in test/unit to get the same result.

How would I do something equivalent to these?

assert_raises(RuntimeError) { somecode }

assert_match /error/i, response.body

I think I’d also miss the ability to have setup and teardown before each
test (something which ‘shoulda’ makes very simple and effective).
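One way to get the effect of assert_raises with only result.check is to rescue by hand and turn the exception into an ordinary value. This is my own workaround sketch, not a documented testy idiom; the helper name is hypothetical:

```ruby
# capture the class of whatever a block raises (nil if nothing), so
# the exception becomes a plain value for an expect/actual comparison
def raised_by
  yield
  nil
rescue => e
  e.class.name
end

actual = raised_by { Integer('forty-two') }  # Integer() raises ArgumentError here
# inside a testy test block one would then presumably write:
#   result.check :raises, :expect => 'ArgumentError', :actual => actual
puts actual
```

The assert_match case could be handled the same way, since a regexp match reduces to a plain expect/actual pair: `result.check :body, :expect => true, :actual => !!(response.body =~ /error/i)`.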

Please don’t get me wrong - I’m absolutely interested in something which
will make testing simpler and easier, if I can understand how to use it
effectively.

Regards,

Brian.

Ara Howard wrote:

NAME
testy.rb

DESCRIPTION
a BDD testing framework for ruby that's mad at the world and plans
to kick its ass in 80 freakin lines of code

SYNOPSIS
Testy.testing 'your code' do
  test 'some behaviour' do |result|
    ultimate = Ultimate.new
    result.check :name, :expect => 42, :actual => ultimate.answer
  end
end

Hey Ara,

Interesting post and project. Nice work! I just tried to test testy.rb
out and, maybe i’m overlooking something obvious, but when i run:

ruby testy_test.rb (http://pastie.org/430735)

Desktop$: ruby testy_testing.rb

naming a student:
  retrieving a student's name:
    failure:
      expect:
        name:
      actual:
        name: Lake
  giving a student a name:
    failure:
      expect:
        name:
      actual:
        name: Jake

You see, the output does not contain the second Testy.testing()
results…

What gives?

Thanks,

Lake

On Mar 29, 2009, at 12:16 PM, Jeremy H. wrote:

Up to a limit of course. how about exiting with the percentage? Exit
status is limited to 256 values, so you can make it exit 0 with lots
of failures:

gist:87480 · GitHub

enjoy,

-jeremy

good catch - that was, uh, tired of me :wink:

i'll add percent now.

a @ http://codeforpeople.com/

On Mar 29, 2009, at 12:39 PM, Phlip wrote:

Again: If you have any reason to count the errors, you are already
absolutely screwed anyway…

indeed. i was vaguely thinking of a status report with failure
reported by severity. really just an idea at this point.

a @ http://codeforpeople.com/

On Mar 29, 2009, at 1:29 PM, Phlip wrote:

What, besides instantly fix it (or revert) do you want to do with an
error message from a broken test?

report it in your ci tool

a @ http://codeforpeople.com/