I ran across a problem today in which some code ran fine in regular operation but failed in a test case. I scratched my head and thought: how would running from within the test harness change the behavior of my code? Clearly it was the Heisenberg uncertainty principle in action!
I discovered the root cause was that the rspec runner is setting the ruby global, $KCODE, to ‘u’. However, my application (which is not a rails app) had never specified $KCODE. I was relying on the default behavior of regular expressions under ruby 1.8 (byte-wise character matching), which changes when $KCODE is set to “u”. Specifically, all regular expressions change their default behavior to utf-8 character-wise matching.
A simple example of this phenomenon: http://gist.github.com/592990
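For reference right in this message, here's a minimal sketch of the behavior as well (my own illustration, not the gist's code; it assumes ruby 1.8, since 1.9 ignores $KCODE):

  str = "\xc3\xa9"   # the two utf-8 bytes of an e-acute character

  $KCODE = 'n'       # ruby 1.8's out-of-the-box default: byte-wise matching
  puts(str =~ /\A.\z/ ? "one character" : "not one character")
  # prints "not one character": the regexp sees two separate bytes

  $KCODE = 'u'       # what the rspec runner sets
  puts(str =~ /\A.\z/ ? "one character" : "not one character")
  # prints "one character": the regexp now matches utf-8 character-wise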
I’ve corrected the issue on my end by adopting the $KCODE=‘u’ semantics in my application, but this led me to a couple of comments/questions I thought would be relevant to raise with other rspec-minded folks:
Is it necessary for rspec to set $KCODE, or is this a bug? Wouldn’t it be better if it didn’t twiddle any magical globals that change runtime behaviors? It reminds me of the bad days of perl, when some distant code would unexpectedly change out your line terminator character on you. (For anyone bitten by this today, a defensive sketch follows these questions.)
It seems like a good idea to call out all the things the rspec runner changes that affect the natural runtime behavior of code: twiddling of globals, class monkey patches, etc. It’d be great to get these into a list of publicly documented pitfalls for people to watch out for.
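In the meantime, here's a sketch of the kind of defensive hook I have in mind. It assumes rspec 2's around hook, and an app that wants ruby 1.8's byte-wise default of ‘n’, so adjust for your setup:

  RSpec.configure do |config|
    config.around(:each) do |example|
      saved = $KCODE   # whatever the runner set (e.g. 'u')
      $KCODE = 'n'     # assumption: the app relies on 1.8's byte-wise default
      begin
        example.run    # run the example under the app's expected semantics
      ensure
        $KCODE = saved # hand the runner's value back afterwards
      end
    end
  end

The ensure clause restores the runner's value after each example, so anything inside rspec itself that depends on ‘u’ still sees it.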
Thanks for listening. Rspec has brought huge value to my engineering work, and I really appreciate the tool!