Elegant way to determine if something is defined

Something like:

1.9.3-head :065 > b = a + 1

raises an exception if a is not defined:

NameError: undefined local variable or method `a' for main:Object
	from (irb):65
	from /home/tamara/.rvm/rubies/ruby-1.9.3-head/bin/irb:16:in `<main>'

Is there a more elegant way to check if a is defined besides wrapping
it in begin/rescue/end block? Something along the order of:

Object.defined?('a')

(I’m probably thinking about this wrong…)

Object.defined?('a')

defined?(a)  #=> nil
b = a + 1    #=> NameError
a = 1
defined?(a)  #=> "local-variable" (strings are truthy)

However, relying on defined?(a) to check for local variables is
probably a code smell: they’re local variables, why isn’t the code
just written properly? What’s the problem you’re trying to solve?
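For reference, `defined?` works on more than local variables; a quick sketch of its return values (variable names here are illustrative):

```ruby
# defined? returns a description string (truthy) or nil.
p defined?(no_such_thing)   # => nil
x = 1
p defined?(x)               # => "local-variable"
@ivar = 2
p defined?(@ivar)           # => "instance-variable"
$gvar = 3
p defined?($gvar)           # => "global-variable"
p defined?(String)          # => "constant"
p defined?(puts)            # => "method"
```

Because it is a keyword, not a method, its argument is never evaluated, so it can safely name things that do not exist.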

Subject: elegant way to determine if something is defined
Date: Sun 10 Feb 13 12:09:02 +0900

Quoting tamouse mailing lists ([email protected]):

Is there a more elegant way to check if a is defined besides wrapping
it in begin/rescue/end block? Something along the order of:

Object.defined?('a')

Your question has already been answered. I just wanted to add that,
coming from compiled languages, I was used to having the class of
problems you describe caught at compilation. With Ruby, you have the
advantage of not needing to declare variables. This great advantage
comes with the downside of having to wait until runtime to catch this
family of problems.

I got used to this. While I do not use unit tests (which constitute
more code, and thus give you more occasions to drop in the occasional
bug or two), I make sure my code can be tested in its functionality
very often during development. Problems like the one you write about
are tripped into, and the exception message exactly pinpoints where
the problem is located, so that it can be quickly fixed.

What I mean is that I don’t see when you could need an undefined
variable exception to be caught - more or less elegantly. An undefined
variable should shout as loudly as it can, so that the code may be
quickly corrected.

Carlo

On Sun, Feb 10, 2013 at 4:09 AM, tamouse mailing lists
[email protected] wrote:

Is there a more elegant way to check if a is defined besides wrapping
it in begin/rescue/end block? Something along the order of:

You can use a rescue modifier:

$ ruby -e 'b = a + 1 rescue 99; p b'
99
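One caveat worth making explicit (the variable names below are illustrative): the modifier binds to the whole right-hand expression and rescues StandardError and all of its subclasses, so it hides more than just NameError:

```ruby
# b = a + 1 rescue 99 parses as b = ((a + 1) rescue 99);
# the modifier rescues StandardError and its subclasses.
b = undefined_var + 1 rescue 99   # NameError rescued
p b  # => 99

a = []
c = a + 1 rescue 99               # TypeError rescued too
p c  # => 99
```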

Kind regards

robert

On Sun, Feb 10, 2013 at 7:39 AM, Carlo E. Prelz [email protected]
wrote:

While I do not use unit tests (which constitute
more code, and thus give you more occasions to drop in the occasional
bug or two), I make sure my code can be tested in its functionality
very often during development.

That does not seem to make sense to me: why would you advocate testing
but avoid providing automation?

What I mean is that I don’t see when you could need an undefined
variable exception to be caught - more or less elegantly. An undefined
variable should shout as loudly as it can, so that the code may be
quickly corrected.

That reminds me of a paper which made this distinction between
biological and technical systems: biological systems try to limit the
impact of an error to allow the whole system to keep going on while in
technical systems we want to make errors prominent so they are easily
caught and can be remedied. Because in technical systems errors which
go unnoticed can have catastrophic effects. See here for example:
http://baselinescenario.com/2013/02/09/the-importance-of-excel/

Kind regards

robert

Subject: Re: elegant way to determine if something is defined
Date: Mon 11 Feb 13 05:30:30 +0900

Quoting Robert K. ([email protected]):

While I do not use unit tests (which constitute
more code, and thus give you more occasions to drop in the occasional
bug or two), I make sure my code can be tested in its functionality
very often during development.

That does not seem to make sense to me: why would you advocate testing
but avoid providing automation?

I did not advocate anything. I only described how my development
process unrolls. When I wrote “my code can be tested,” I meant that,
along the development phase, my code has to be executable all the
time: I verify every new feature as soon as I have added it, when its
function is fresh in my mind, then I move on, removing, or often just
commenting out, whatever code I had added for verification purposes.

From what I have read about automated tests, the goal the authors of
the various systems have is not so widely different from what I do. It
is the pretense to make these processes automatic that, according to
my very personal opinion plus experience, makes them hit far from the
center of the target.

I recently came across a prospective employer who drafted a list of
so-called 'software commandments.' One of them reads:

“Quality code WORKS! All the time.”

Anybody who sincerely believes in this statement has not had enough
experience. I would redraft it as follows:

“After proper weaning, quality code WORKS almost all the time, the
interval between discovered bugs growing exponentially.”

That’s because we humans make errors, in all of our endeavours.
Writing code, writing test code, drafting ‘best practices.’ There is
no escape. Gödel speaks clearly.

Automatic tests are great for catching regressions, and are particularly
useful if your project is large and has many moving parts. Ideally I
write tests for key boundary conditions (usually after I've written and
hand-tested the module in question, so I can test the tests, as it
were), and, where relevant, for every bug that is discovered and fixed.
In practice, not so much; but then I don't write buggy code, so why
bother?*

It's not test-driven design, it's plain old regression catching. When
your codebase gets enough modules and enough lines, you don't want to
have to hand-test everything that might be touched by a change you make.
Run it through the tests, catch the obvious gaffes, and focus on the
task at hand. If a new bug crops up, add a new test so it doesn't happen
again.

* yep, really, truly, my code is all 100% absolutely perfect. It's
  meant to do that.

Matthew K., B.Sc (CompSci) (Hons)
http://matthew.kerwin.net.au/
ABN: 59-013-727-651

"You'll never find a programming language that frees
you from the burden of clarifying your ideas." - xkcd
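A minimal regression-catching sketch in this spirit, using minitest from Ruby's standard library (the module, class, and bug described are hypothetical, purely for illustration):

```ruby
require "minitest/autorun"

# Hypothetical module under test -- names are illustrative,
# not from the thread.
module Counter
  def self.increment(n)
    n + 1
  end
end

class TestCounter < Minitest::Test
  # Boundary condition, written after hand-testing the module.
  def test_increments_positive
    assert_equal 2, Counter.increment(1)
  end

  # Regression test: added for a (hypothetical) bug that once broke
  # negative inputs, so the same gaffe can't silently come back.
  def test_increments_negative
    assert_equal 0, Counter.increment(-1)
  end
end
```

Running the file executes every test; after a change elsewhere in the codebase, one run re-checks all recorded behaviour without hand-testing.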

On Mon, Feb 11, 2013 at 8:05 AM, Carlo E. Prelz [email protected]
wrote:

That does not seem to make sense to me: why would you advocate testing
but avoid providing automation?

I did not advocate anything. I only described how my development
process unrolls.

Well, yes. But since you do test you obviously think that is a good
thing to do.

From what I have read about automated tests, the goal the authors of
the various systems have is not so widely different from what I do. It
is the pretense to make these processes automatic that, according to
my very personal opinion plus experience, makes them hit far from the
center of the target.

Can you elaborate that? In what ways do they miss the target?

I recently came across a prospective employer who drafted a list of
so-called ‘software commandments.’ One of them reads:

“Quality code WORKS! All the time.”

Anybody who sincerely believes in this statement has not had enough
experience.

What’s wrong with making that a goal? I know, according to the
wording this is not a goal but written as a fact. But, as you say,
everybody who is in the business longer than a few days must know that
there are always bugs. (btw. “works” and “has zero bugs” are not the
same.)

I would redraft it as follows:

“After proper weaning, quality code WORKS almost all the time, the
interval between discovered bugs growing exponentially.”

That would be the case only if no new requirements came up, no new
features were added and only bugs fixed.

That’s because we humans make errors, in all of our endeavours.
Writing code, writing test code, drafting ‘best practices.’ There is
no escape. Gödel speaks clearly. The advocates of automated testing
demand a sizeable increase in programmers’ workload (a double codebase
to maintain, after all), and then transfer the authority to judge on
the health of the code to the set of tests being passed. This is an
illusion.

I’d call it a trade off: you trade off developer time during initial
phase for bug fixing, customer support and SLA violation penalties.

All these systems, far from making code perfect, only push the bugs
further away in time.

How so?

The further away the bug is discovered, the more
catastrophic its consequences may be: first of all, the knowledge
needed to fix it may not be there anymore.

Which to me sounds like an argument for automated unit tests. With
those you can run the whole suite at any time, quickly, after adding
new features or changing code. That helps discover issues early.

In a nutshell, I believe that the tranquillity that is promised by any
of these automatic systems is false money. The PEOPLE who work
together have to engage their good will, and be willing to clean up
after their own mess. And to cultivate harmony. At that point, the
common practices grow spontaneously and quality ensues.

Of course that helps enormously. But automated tests give you the
confidence that the quality is at least as it needs to be.

I am afraid this cannot be bought by money, and cannot be certified by
certifications.

Yes, that’s a management task.

That reminds me of a paper which made this distinction between
biological and technical systems: biological systems try to limit the
impact of an error to allow the whole system to keep going on while in
technical systems we want to make errors prominent so they are easily
caught and can be remedied. Because in technical systems errors which
go unnoticed can have catastrophic effects. See here for example:
http://baselinescenario.com/2013/02/09/the-importance-of-excel/

You are right about the catastrophic effects. But forget your hopes if
you dream of an error-free world - at any level.

What makes you think I might dream of an error free world? I certainly
don’t.

Cheers

robert

Subject: Re: elegant way to determine if something is defined
Date: Mon 11 Feb 13 08:40:34 +0900

I hope you don’t mind if I reply only to the points I think are most
important - the mail would grow to unmanageable dimensions if I
replied to everything, and the other readers would get bored.

Quoting Robert K. ([email protected]):

Well, yes. But since you do test you obviously think that is a good
thing to do.

I obviously do think it is good to shake my code a lot so that the
flaky points manifest themselves. And I developed a good sense for how
they manifest. Nothing that could be codified/automatized, I am
afraid.

Anybody who sincerely believes in this statement has not had enough
experience.

What’s wrong with making that a goal?

Adopting impossible goals is very romantic. Sometimes it is
impractical, but of course you are free to adopt any goal you
want. The problem arises when people think that, by adopting that
specific goal, they will actually obtain bug-free code.

But, as you say,
everybody who is in the business longer than a few days must know that
there are always bugs. (btw. “works” and “has zero bugs” are not the
same.)

“works always” and “has zero bugs” are equivalent statements. What I
state is that any non-trivial software package can be rendered
virtually bug-free by maintaining it long enough without changing it
too much. By that time, it will have become brittle (either unsuitable
to the hardware or unsuitable to the task, and very hard to change).

I further state that anybody who guarantees that a non-trivial new
software being delivered to the public will “work always” (or “has
zero bugs”) is a shameless liar. There is no way you can guarantee
that.

Software has to be gently led through its first steps, and then
affectionately helped in growing and adapting to new needs. A software
package is a living creature, not a commodity.

In a nutshell, I believe that the tranquillity that is promised by any
of these automatic systems is false money. The PEOPLE who work
together have to engage their good will, and be willing to clean up
after their own mess. And to cultivate harmony. At that point, the
common practices grow spontaneously and quality ensues.

Of course that helps enormously. But automated tests give you the
confidence that the quality is at least as it needs to be.

Here is the point. It is a false confidence. You may have made errors
in the tests, for example. You’d need tests for your tests,
etc. etc.

Or it may even be that you will face a situation that the creator of
the automated test system you use did not think about.

Have you read Douglas Hofstadter's book about Gödel?

sto.mar wrote in post #1096319:

Hmmm… very dangerous.

That would catch all kinds of exceptions, not only the NameError:

a = []
b = a + 1

=> TypeError: can't convert Fixnum into Array

vs.

a = []
b = a + 1 rescue 99
b # => 99

Back in the olden days, MOO code had the following:

`x + 1 ! E_TYPE';

… returns (x+1) or E_TYPE, or raises E_VARNF.

`x.y ! E_PROPNF, E_PERM => 17'

… returns x.y if x is an object and x.y is a readable property;
returns 17 if x is an object but x.y is not defined or
not readable; or raises something else (like E_VARNF or E_INVIND)

`1 / 0 ! ANY';

… returns E_DIV

I liked being able to explicitly trap a certain class of exception, and
either return the exception itself or a default value.

Additionally it had try/catch blocks.

[1] http://files.moo.ca/1/1/7/ProgrammersManual_26.html#SEC26
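For comparison, the closest Ruby analogue to MOO's `` `expr ! codes => default' `` form is a begin/rescue expression naming the error classes; a sketch, not a one-to-one translation (the receiver and method names are made up):

```ruby
# MOO `x + 1 ! E_TYPE' -- return the value, or the trapped error itself:
x = "not a number"
v = begin
      x + 1
    rescue TypeError => e
      e   # like MOO handing back E_TYPE
    end
p v.class  # => TypeError

# MOO `x.y ! E_PROPNF, E_PERM => 17' -- the value, or a default, but
# only for the listed error classes; anything else still raises:
w = begin
      x.no_such_property
    rescue NoMethodError, NameError
      17
    end
p w  # => 17
```

Since begin/rescue is an expression in Ruby, the result assigns cleanly, much like MOO's inline form, just more verbosely.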

Am 11.02.2013 22:28, schrieb Matthew K.:

… returns x.y if x is an object and x.y is a readable property;

[1] http://files.moo.ca/1/1/7/ProgrammersManual_26.html#SEC26

b = a + 1 rescue $!
b # => #<NameError: undefined local variable or method `a' for main:Object>

Am 10.02.2013 21:17, schrieb Robert K.:

     from /home/tamara/.rvm/rubies/ruby-1.9.3-head/bin/irb:16:in `<main>'

$ ruby -e 'b = a + 1 rescue 99; p b'
99

Hmmm… very dangerous.

That would catch all kinds of exceptions, not only the NameError:

a = []
b = a + 1

=> TypeError: can't convert Fixnum into Array

vs.

a = []
b = a + 1 rescue 99
b # => 99

On Feb 12, 2013 7:57 PM, [email protected] wrote:

b = a + 1

I liked being able to explicitly trap a certain class of exception, and
either return the exception itself or a default value.

Additionally it had try/catch blocks.

[1] http://files.moo.ca/1/1/7/ProgrammersManual_26.html#SEC26

b = a + 1 rescue $!
b # => #<NameError: undefined local variable or method `a' for main:Object>

+1 I’ve been writing so much Perl lately I should have remembered the
magic globals…

There’s still no way to trap a class (or classes) of exception without
writing a begin-rescue block, is there? I’m just asking, not
complaining.
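Not with the bare modifier, as far as I know; but a method body gives you an implicit begin/end, so rescuing a specific class can still stay terse (a sketch with a made-up method name):

```ruby
# The rescue modifier cannot name an exception class, but a method
# body's implicit begin/end lets you rescue one without extra nesting:
def increment(x)
  x + 1
rescue TypeError
  99
end

p increment(1)    # => 2
p increment([])   # => 99 (TypeError rescued)
# A NameError inside the body would still propagate,
# unlike with the blanket `rescue 99` modifier.
```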

On Sat, Feb 9, 2013 at 9:20 PM, Adam P. [email protected]
wrote:

You’ve pretty much hit the nail on the head there – legacy code, much
of it not very well thought out, etc. Some of this is debugging
(de-smelling? SANITIZING!)

On Mon, Feb 11, 2013 at 8:38 AM, Carlo E. Prelz [email protected]
wrote:


I am afraid this cannot be bought by money, and cannot be certified by
certifications.

Yes, that’s a management task.

It’s the typical thing that cannot be imposed. Good luck,
management…

Ay! Ay! Ay! You all have launched off into one my favourite rabbit
holes. Alas, I am in the midst of trying to get a release out and not
quite free to join in.

Can I just say, you’re both right? Carlo’s methodology is how I
learned functional programming: building elements that always work,
and are constantly under test as the system grows and progresses.

Robert’s methodology is forward-loading the specification based on the
third of what I refer to as the 3 golden questions in determining
outcomes:

  1. what do you want?
  2. what will having that do for you?
  3. how will you know when you have it?

The first two are what one asks up front for figuring out what problem
you’ll solve, or what further ability you’ll enable. The last one,
though, is a bit different and how it is applied to software
development is precisely TDD/BDD. The modern versions of TDD/BDD can
be invaluable in fleshing out the requirements and design for your
product when you don’t necessarily understand it completely, and need
to build it with others who will also use your software and tests to
build their parts.

The defining factor, for me, was the amount of moving parts and
communication among the developers. In the case of many a modern
software project, sadly, many developers don’t realize that what they
are writing are communication instruments: the software must speak to
those who come along later and need to understand it to fix it,
enhance it and interact with it. Unfortunately, sometimes (how often
I’m not sure) the expectation is simply that one does what one can,
without understanding that fantastic Gilgamesh you wrote; and so, in
the interests of expediency, and trusting that management will sort
things out when it can’t, stuff breaks.

It’s never going to be entirely one or the other; a set of components,
constantly under scrutiny while the project progresses can work well
when the project is at least well-known enough to be able to provide
well-contained components with well-defined and unchanging interfaces;
while a set of specification-driven-by-test can provide a means of
distributing development in parallel where the highest fidelity
communications are possible, or aren’t happening.

If given a team of unknown and varying skill, and/or broadly
distributed (and maybe especially if there are also culture, language,
and proficiency gulfs), I’d opt for the latter, even with the
understanding that it is imperfect and prone to error in itself.

Arggh, I cannot say what I want to in this short time. Please keep
this discussion going!
