Best practices for library development?

Hi all.

I’d like somebody to share their experience in organizing library
development, including:

  1. test-driven development
  2. code coverage analysis (through rcov?), which would be automatically
    performed after each test
  3. version control (through SVN?)
  4. optional code speed analysis (like benchmarking “how long it runs”,
    profiling “what ran so long”) after each test
  5. optional packaging (through rake? rant?) and uploading to (rubyforge?
    sourceforge?)

All experiences are welcome.

Big thanks!

Victor.

Victor S. wrote:

  1. optional code speed analysis (like benchmarking “how long it runs”,
    profiling “what ran so long”) after each test

See the ‘profile’ library provided in Ruby core.

ruby -r profile
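
For the “how long it runs” side, the standard ‘benchmark’ library is enough; a
tiny sketch (the workloads being timed below are just placeholders):

# Profile a whole script (report is printed to stderr when it exits):
#   ruby -r profile my_script.rb
#
# Benchmark specific pieces with the standard 'benchmark' library;
# the two blocks below are placeholder workloads.
require 'benchmark'

data = (1..10_000).map { rand }

Benchmark.bm(10) do |bm|
  bm.report('sort')    { data.sort }
  bm.report('sort_by') { data.sort_by { |x| -x } }
end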

From: Suraj N. Kurapati [mailto:[email protected]]
Sent: Tuesday, May 30, 2006 5:22 PM

Victor S. wrote:

  1. optional code speed analysis (like benchmarking “how long it runs”,
    profiling “what ran so long”) after each test

See the ‘profile’ library provided in Ruby core.

ruby -r profile

No-no :slight_smile:
I know how to do it technically.
What I want to know is how to organize all the tasks.
For now I only write unit tests, but when should I run benchmarking /
profiling / coverage analysis / dependency analysis and so on? Must those
tasks be run automatically? When (after any code change, like unit tests)?
This is the question.

V.

On 5/30/06, Victor S. [email protected] wrote:

Hi all.

I’d like somebody to share their experience in organizing library
development, including:

  1. test-driven development

Yes.

Test::Unit I believe. “require ‘test/unit’”.
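
Something along these lines, for instance (the object under test here is just
a stand-in, of course):

require 'test/unit'

# Hypothetical example: an Array standing in for the real class under test.
class TestMyStack < Test::Unit::TestCase
  def setup
    @stack = []
  end

  def test_push_then_pop
    @stack.push 42
    assert_equal 42, @stack.pop
    assert @stack.empty?
  end
end

# Run with:  ruby test/test_my_stack.rb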

  1. code coverage analysis (through rcov?), which would be automatically
    performed after each test

Never used rcov, but it sounds like something you’d run via a rakefile.
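
For example, a plain test task could be wired up like this (the file layout is
an assumption; an rcov task would hang off the same Rakefile):

# Rakefile -- minimal sketch, assuming tests live in test/test*.rb
require 'rake/testtask'

Rake::TestTask.new(:test) do |t|
  t.libs << 'lib'
  t.test_files = FileList['test/test*.rb']
  t.verbose = true
end

task :default => :test   # a bare `rake` then runs the suite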

  1. version control (through SVN?)

Besides CVS, that’s probably the most common these days. Working
without version control is like swinging on the trapeze without a net.

  1. optional code speed analysis (like benchmarking “how long it runs”,
    profiling “what ran so long”) after each test
  2. optional packaging (through rake? rant?) and uploading to (rubyforge?
    sourceforge?)

Dunno about the benchmarking, but as far as optional packaging goes, my
guess is that you should indeed be making a gem. FWIU, when you create
a RubyForge project and upload a gem, it automagically becomes
available to the world via “gem install --remote”.
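
Something like this in the Rakefile would cover the packaging part (the
library name and metadata below are placeholders):

# Rakefile packaging sketch -- 'mylib' and its metadata are made up.
require 'rubygems'
require 'rake/gempackagetask'

spec = Gem::Specification.new do |s|
  s.name    = 'mylib'
  s.version = '0.1.0'
  s.summary = 'An example library'
  s.author  = 'Your Name'
  s.files   = FileList['lib/**/*.rb', 'test/**/*.rb'].to_a
end

# `rake package` then builds pkg/mylib-0.1.0.gem, ready to upload to RubyForge.
Rake::GemPackageTask.new(spec) do |pkg|
  pkg.need_tar = true
end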

On May 30, 2006, at 10:27 AM, Victor S. wrote:

No-no :slight_smile:
I know how to do it technically.
What I want to know is how to organize all the tasks.
For now I only write unit tests, but when should I run benchmarking /
profiling / coverage analysis / dependency analysis and so on? Must those
tasks be run automatically? When (after any code change, like unit tests)?
This is the question.

Personally, I don’t think I’d want my tests run after every
change. But you might want to look into subversion’s hook scripts
(post-commit, pre-commit, etc.). They would allow the repository to
run all the tests/profiling/coverage via rake and email the results
to interested parties upon every commit.
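
A post-commit hook could be as simple as this sketch (the paths, the
recipient address and the use of `mail` are all just assumptions):

#!/usr/bin/env ruby
# hooks/post-commit sketch -- Subversion passes the repository path and revision.
repos, rev = ARGV
workdir = "/tmp/ci-r#{rev}"    # hypothetical scratch directory

system("svn export -q -r #{rev} file://#{repos}/trunk #{workdir}")
output = `cd #{workdir} && rake test 2>&1`
status = $?.success? ? 'PASSED' : 'FAILED'

# Mail the result to whoever cares; 'devs@example.org' is a placeholder.
IO.popen("mail -s 'rake test #{status} (r#{rev})' devs@example.org", 'w') do |mail|
  mail.puts output
end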

BUT, I wouldn’t focus too much on getting a lot of testing and
profiling set up if you’re just getting started. Although ruby makes
most of this stuff really easy, you’ll still burn time and energy
that might be better focused on the real task. Don’t optimize too
early.
-Mat

On Tue, May 30, 2006 at 11:27:51PM +0900, Victor S. wrote:

No-no :slight_smile:
I know how to do it technically.
What I want to know is how to organize all the tasks.
For now I only write unit tests, but when should I run benchmarking /
profiling / coverage analysis / dependency analysis and so on? Must those
tasks be run automatically? When (after any code change, like unit tests)?
This is the question.

As for coverage analysis: I run rcov before committing to make sure I’m not
checking in (lots of) untested code. This is how the task can be defined in
Rake:

require 'rcov/rcovtask'
desc "Create a cross-referenced code coverage report."
Rcov::RcovTask.new do |t|
  t.libs << "ext/rcovrt"
  t.test_files = FileList['test/test*.rb']
  t.rcov_opts << "--callsites"  # comment to disable cross-references
  t.verbose = true
end

and in Rant it’d be:

require 'rcov/rant'
desc "Create a cross-referenced code coverage report."
gen Rcov do |g|
  g.libs << "ext/rcovrt"
  g.test_files = sys['test/test*.rb']
  g.rcov_opts << "--callsites"  # comment to disable cross-references
end

This way {rake,rant} rcov will generate an XHTML report and show another
on stdout.

If your commits are small enough (or should I say “atomic”?), running the
tests just before committing (say with the pre-commit hook of your VCS) might
suffice. Otherwise (larger commits, or tests that don’t want to pass),
autotest would be the way to go, I guess.

Regarding profiling, I don’t think it makes any sense to run that
automatically in general.

From: Mauricio Julio Fernandez Pradier [mailto:[email protected]]
On Behalf Of Mauricio F.
Sent: Tuesday, May 30, 2006 7:53 PM

ruby -r profile

t.test_files = FileList['test/test*.rb']
g.test_files = sys['test/test*.rb']
autotest would be the way to go, I guess.

Regarding profiling, I don’t think it makes any sense to run that
automatically in general.

Thanks Mauricio. It’s just what I wanted to hear.

Mauricio F. - http://eigenclass.org - singular Ruby

V.