Boolean annoyance

Hello,

There is one thing in Ruby that annoys me most (at least for now).

if 0
  puts "true"
end

Yes, I know everything except nil and false is true, but that’s probably
the most illogical part of Ruby. Because of this, stuff like

if flags & 0x01
  # do some stuff if the flag is set
end

will execute in any case. Perhaps I’m biased because I’m a crazy C
hacker, but I cannot believe that others do not fall into this trap. I
really like the clarity of the Ruby syntax, but this “everything but nil
and false is true” logic is totally non-obvious and annoying.
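
To make the trap concrete, a quick irb check (flags = 0x02 is just an
example value where the low bit is not set):

flags = 0x02
flags & 0x01                  # => 0
puts "oops" if flags & 0x01   # prints "oops", because 0 is truthy in Ruby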

Why can’t there be a to_bool converter for all numerical classes?
This converter could be used in boolean expressions: 0.to_bool would
return false, and all other numbers would return true.
Probably the best way is to extend the Object class, where to_bool would
return true. Subclasses may then overload to_bool with a more complex
version. This makes it possible to use .to_bool everywhere a boolean
expression is expected.
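
A minimal sketch of the proposed converter (purely hypothetical; Ruby
has no #to_bool today):

class Object
  def to_bool
    true               # default: every object would count as true
  end
end

class NilClass
  def to_bool; false; end
end

class FalseClass
  def to_bool; false; end
end

class Numeric
  def to_bool
    !zero?             # 0 and 0.0 would count as false
  end
end

0.to_bool     # => false
1.to_bool     # => true
nil.to_bool   # => false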

IMO if it looks like a boolean expression it should act like a boolean
expression.

There is one thing in Ruby that annoys me most (at least for now).

if 0
  puts "true"
end

Yes, I know everything except nil and false is true, but that’s probably
the most illogical part of Ruby.

It’s actually the most illogical part of languages like C that treat 0
as false. Believe me, having spent most of my programming life using
C/C++ and being used to that, I make mistakes regularly whilst writing
Ruby code.

0 is an integer and quite often a valid value. C’s treatment of 0 as
false is convenient in some situations but horribly inconvenient in
others. Because if I’m expecting an integer and zero is a legal
value, then you have to start playing around in your conditionals…
“Okay, today any -1 is false”… or even at different levels than
that. Part of the problem is that C doesn’t have an actual NULL…
NULL is just defined as zero. Overlap, explosions, crash and burn…

Because of this, stuff like

if flags & 0x01
  # do some stuff if the flag is set
end

There are alternatives… In C/C++, this is what I would do because
it is convenient. In Ruby, I might do:

if (flags & 0x01).nonzero?

Of course, I probably wouldn’t hardcode 0x01 in any language, so:

if (flags & kSomethingEnabled).nonzero?

But I’d even go a bit further (and often do this in C++ for clarity):

def something_enabled?
  (flags & kSomethingEnabled).nonzero?
end

if something_enabled?
  # do something
end

Or even (in Ruby w/ symbols):

MASK = { :something => 0x01 }   # a constant: a local variable would not be visible inside the method

def enabled?(feature)
  (flags & MASK[feature]).nonzero?
end

if enabled?(:something)
  # do something
end

You could even go so far as to package this into a bitfield class (if
there isn’t one already).
The point is, there are lots of better ways to get what you need done
without sacrificing the value 0 to the old-skool gods. (Who I get
along with just fine usually, but not with Ruby’s zero.)
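
For illustration, a minimal sketch of such a class (the name BitField
and its methods are made up here, not an existing library):

class BitField
  def initialize(value = 0)
    @value = value
  end

  # true if any bit of the mask is set
  def set?(mask)
    (@value & mask) != 0
  end

  def set(mask)
    @value |= mask
    self
  end

  def clear(mask)
    @value &= ~mask
    self
  end
end

flags = BitField.new(0x05)
flags.set?(0x01)   # => true
flags.set?(0x02)   # => false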

Quoting Claudio J. [email protected]:

IMO if it looks like a boolean expression it should act like a
boolean expression.

That’s the point, isn’t it? 0 isn’t a boolean value.

You typically only see 0 → false equivalences in languages that
don’t have a distinct boolean type (C, Perl, etc…). Ruby’s not
one of those.

Why can’t there be a to_bool converter for all numerical classes?

There is one; it’s just called #nonzero? instead of #to_bool:

if ( flags & 0x01 ).nonzero?

end

-mental

Claudio J. wrote:

Hello,

There is one thing in Ruby that annoys me most (at least for now).

if 0
  puts "true"
end

class Person
  def number_of_children
    @children.length
  end

  def has_children?
    @children.length > 0
  end
end

In this context, like most others from a high-level language
perspective, 0 is a valid, meaningful value. If you don’t know how many
children a Person has got, return nil. If you know a Person hasn’t got
any, return 0. Make a boolean-esque method if you want boolean values.
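
A sketch of that distinction (assuming @children is nil until the data
is loaded):

class Person
  def number_of_children
    return nil if @children.nil?   # unknown: the list was never loaded
    @children.length               # known: may legitimately be 0
  end
end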

As said in other replies and in the innumerable previous threads on this
topic, 0’s falsity is an implementation oddity of some languages.

It used to create subtle bugs in my Perl code more often than it was a
useful shortcut. Understanding why you need ‘if ($value)’ sometimes and
‘if (defined $value)’ other times is much more complicated than Ruby’s
solution.

alex

Quoting Alex F. [email protected]:

def has_children?
  @children.length > 0
end

Of course, this could also be written more simply as:

def has_children?
  not @children.empty?
end

-mental

Claudio J. wrote:

Hello,

There is one thing in Ruby that annoys me most (at least for now).

if 0
  puts "true"
end

0 is a proper instance of the Fixnum class; how can it be considered
false? And remember, only a few languages treat 0 as false (C and
Python, notably); Java, Smalltalk, etc. do not!

lopex

On Wednesday, 08 February 2006 at 16:25, Claudio J. wrote:

will execute in any case. Perhaps I’m biased because I’m a crazy C
hacker, but I cannot believe that others do not fall into this trap. I
really like the clarity of the Ruby syntax, but this “everything but nil
and false is true” logic is totally non-obvious and annoying.

This is in my opinion a matter of convention.

The predominant C convention is to return logical values in predicates,
and then either return NULL on failure and a valid result on success,
or return 0 on success, a nonzero error code on failure, and pass a
valid result via an output parameter. Frankly, I find remembering which
function uses which slightly annoying.

The predominant Ruby convention is to return true / false in predicates,
and in other functions to return the result of the computation on
success, nil on “mild” failure, and throw an exception on severe
failure. Because of exceptions, using error codes is unnecessary.

This means that:

if (foo = some_method)
  puts "#some_method succeeded"
end

is -always- a valid idiom if this convention is followed, and a return
value of 0 always indicates success, as opposed to C, where it can mean
either depending on the call.
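
To make the convention concrete, a hypothetical lookup following it
(find_config and its sample data are made up here):

def find_config(name, configs = { "timeout" => 30 })
  raise IOError, "config store unreadable" if configs.nil?  # severe failure
  configs[name]   # the value on success, nil on a "mild" miss
end

if (cfg = find_config("timeout"))
  puts "timeout is #{cfg}"   # prints "timeout is 30"
end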

David V.

On Thu, Feb 09, 2006 at 01:16:26AM +0900, Matthew M. wrote:

as false. Believe me, having spent most of my programming life using

If you are inspecting an integer where anything and everything is a
legal value, why are you inspecting it at all?

The difference between NULL and nil is not that big. Did you ever try to
do stuff like nil.split or nil.capitalize?
In Ruby you have two ways of living with nil: either you catch the
exceptions created by nil access all over the place, or you check before
every access. This is similar to C; the only difference is that in C you
get a SIGSEGV.

Sure, 0 is an integer, but it is a special one. If you do just

if foo.to_i
  puts "true"
end

I expect a language as cool as Ruby to actually use some #to_bool
converter to duck type the integer into the boolean domain. And from
the math point of view I expect 0 to be treated as false in a boolean
context. In Boolean algebra, 0 is always used to indicate false when
numbers are used.

if (flags & 0x01).nonzero?

Wow. I was told that the cool thing about Ruby is to write less code.
Side note: a & b is a boolean expression, a bitwise boolean expression,
but still boolean.

Of course, I probably wouldn’t hardcode 0x01 in any language, so:

This is only a simple example. My actual code does not use magic
constants, and it has nothing to do with the actual problem.

When I started using Ruby I had exactly the same problem as you, porting
some C code to Ruby that was checking some flags. I didn’t get to the
solution myself, as I never ever thought of 0 being true.
But once you see the advantages of it you’ll really appreciate that
convention.
In C you do: (flags & 0x01)
In Ruby you do: (flags & 0x01) != 0 or (flags & 0x01).nonzero?
Or even nicer: flags[0x01]

Recently there was a discussion about the same topic on this list (“nil
!= []”). I’ll cite two convincing postings:

matthew smillie said:

0 the integer is only false by convention, and it’s a convention
confined to programming, originating (unless I’m mistaken) from
languages which didn’t define specific ‘true’ and ‘false’ logical
values separate from integer math. 0 is used in some logical notations
as a symbol for ‘false’, but it’s unlikely that anyone familiar with
formal logic will tell you those 0’s are the same 0’s you get from
“2 - 2”.

There’s no doubt that the convention’s been made very useful, but
there’s really no logical basis for equating any particular symbol to
true or false truth values over any other.

amrangaye said:

One example of this that tripped me up today is the regular expression
match operator (=~), which returns the (0-based) index of a match, or
nil otherwise. If 0 were false, you couldn’t do:

if str =~ /^Hello/
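
A quick irb check makes the point:

"Hello world" =~ /^Hello/   # => 0   (a match at index 0, which is truthy)
"Goodbye"     =~ /^Hello/   # => nil (no match, which is falsy)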

Claudio J. wrote:

if flags & 0x01
  # do some stuff if the flag is set
end

[snip]

On Thu, Feb 09, 2006 at 01:45:22AM +0900, [email protected] wrote:

Quoting Claudio J. [email protected]:

IMO if it looks like a boolean expression it should act like a
boolean expression.

That’s the point, isn’t it? 0 isn’t a boolean value.

Have you ever looked at Boolean algebra? The use of 0 as false is just
standard usage in boolean expressions.

You typically only see 0 → false equivalences in languages that
don’t have a distinct boolean type (C, Perl, etc…). Ruby’s not
one of those.

… and in bitwise boolean logic, 0 means what? Even Ruby has bitwise
logic operators. If I use 0 in a boolean domain, it should not return
true.

Why can’t there be a to_bool converter for all numerical classes?

There is one; it’s just called #nonzero? instead of #to_bool:

if ( flags & 0x01 ).nonzero?

end

But I have to explicitly use it. Ruby should use duck typing for boolean
expressions like it does in many other cases.

Because if I’m expecting an integer and zero is a legal
value, then you have to start playing around in your conditionals…

If you are inspecting an integer where anything and everything is a
legal value, why are you inspecting it at all?

I didn’t say everything was legal; I said zero could be legal. That
does not imply everything is legal.

I typically see this in C/C++ where someone has written a function
where 0 is a legal value; they then return a negative number for
illegal status. But, IMO, now you start building maintenance
headaches, since some function calls look like:

if (foo(…))

and others look like:

if (foo(…) < 0)

Ruby just takes the stance (as do many other languages) that there
shall be no automatic determination of which integers are valid and
which are invalid, and that this shall be left to the coder.

The difference between NULL and nil is not that big. Did you ever try to
do stuff like nil.split or nil.capitalize?

Yes, I have. Not those particular methods, but others. Just type
‘nil.methods’ in irb and you can see what’s defined. And nil could be
expanded if needed (although that’s generally not a good idea, I
suspect, but it’s possible). Hell, you could do this if you really
wanted to have some excitement:

class NilClass
  def method_missing(m, *args)   # *args so that calls with arguments are swallowed too
    nil
  end
end
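
With that (deliberately reckless) patch loaded:

nil.split        # => nil instead of NoMethodError
nil.capitalize   # => nil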

NULL, on the other hand, is zero. Not an object.

In Ruby you have two ways of living with nil: either you catch the
exceptions created by nil access all over the place, or you check before
every access. This is similar to C; the only difference is that in C you
get a SIGSEGV.

I contend that exceptions are a better solution than SIGSEGV. As far
as checking for nil goes, I don’t have as much experience here with
Ruby as some others, but I’ve never had to put nil checks all over the
place. Only about as often as I might branch otherwise.

Sure, 0 is an integer, but it is a special one.

It is only special by convention.

Side note: a & b is a boolean expression, a bitwise boolean expression,
but still boolean.

a & b is an integer expression, implicitly typecast in C/C++ to
boolean because zero is treated as false. Implicit typecasting can be
a dangerous thing. The way to write a boolean expression is (a & b)
!= 0.

Wow. I was told that the cool thing about Ruby is to write less code.

There is a limit, of course. You could remove constant names, change
function identifiers to single letters, remove whitespace, etc. All
that is writing less code, but I’d hesitate to do it.

I prefer to think that Ruby lets me write clearer code, quickly,
easily, and that often amounts to less code, in part because it is a
dynamic language (as compared to how much code I often have to write
in C++ because of its static nature).

Still being a fairly new Ruby user, I would never write:

if (a & b)

Yes, of course, because it doesn’t work, but I also wouldn’t write:

if (a & b).nonzero?

Now I did mention that in my post, but it was to lead you to better
ways. Something like:

class State
  def visible?
    @flags[0x01].nonzero?
  end
end

state = State.new

# … do stuff …

draw_scene if state.visible?

“if state.visible?” is much more readable than “if flags & 0x01” or the
like.

On Thu, Feb 09, 2006 at 07:35:08AM +0900, Alexis R. wrote:

When I started using Ruby I had exactly the same problem as you, porting
some C code to Ruby that was checking some flags. I didn’t get to the
solution myself, as I never ever thought of 0 being true.
But once you see the advantages of it you’ll really appreciate that
convention.
In C you do: (flags & 0x01)
In Ruby you do: (flags & 0x01) != 0 or (flags & 0x01).nonzero?
Or even nicer: flags[0x01]

Does flags[0x01] work with any kind of mask? I thought that flags[3]
would return the value of bit 3, and not the value of bits 0 and 1…
And from the Ruby reference, Bignum#[] returns the nth bit as 0 or 1,
and so your flags[0x01] would suffer the same way as (flags & 0x01).
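
Indeed, a quick irb check confirms the position-versus-mask confusion:

flags = 0b0110   # bits 1 and 2 set
flags[1]         # => 1, the bit at *position* 1
flags[0x01]      # the same call: position 1, not a test against mask 0x01
flags & 0x01     # => 0, an actual mask test of bit 0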

languages which didn’t define specific ‘true’ and ‘false’ logical
values separate from integer math. 0 is used in some logical notations
as a symbol for ‘false’, but it’s unlikely that anyone familiar with
formal logic will tell you those 0’s are the same 0’s you get from “2 - 2”.

There’s no doubt that the convention’s been made very useful, but
there’s really no logical basis for equating any particular symbol to
true or false truth values over any other.

I don’t believe that 0 as false originated in programming languages; I
think it actually came from Boolean algebra itself. In a binary domain
it is natural to make 0 false (a && !a == 0; or, translated into set
theory, the intersection of a set A and its complementary set !A is the
empty set, which is normally written as some kind of 0).
OK, doing set algebra in ASCII sucks, but I think it is parsable.

OK, this is a valid argument against changing the behaviour, because it
would cause a major regression in existing scripts. I think I have to
swallow this pill and change my coding style for Ruby. Doing binary
protocols in Ruby is a bit more challenging than expected.

But I have to explicitly use it. Ruby should use duck typing for boolean
expressions like it does in many other cases.

The problem is, if you do that in one place (i.e. “oh, this is
bitwise-and, each bit represents a bool”), then you have to do it
everywhere, since bitwise-and is NOT limited to single-bit flag checks
(e.g. it works great for masking as well). Doing it everywhere is a
very bad thing.

Which means you either need to propose an extension to Ruby for a new
“bitwise-and-as-boolean” operator (i.e. flags ?& 0x01 or similar), or
you need to abstract out a bit more and turn (flags & 0x01) into
something valid. With a little wrapper func, it can look quite nice:

class Something
  def enabled?
    @flags[0x01].nonzero?
  end
end

And that’s assuming someone hasn’t already made a bitfield type class
that makes all of this even prettier.

On Thu, Feb 09, 2006 at 08:35:51AM +0900, Matthew M. wrote:

“bitwise-and-as-boolean” operator (i.e. flags ?& 0x01 or similar), or
you need to abstract out a bit more and turn (flags & 0x01) into
something valid. With a little wrapper func, it can look quite nice:

Up to here I agree with you.

class Something
  def enabled?
    @flags[0x01].nonzero?
  end
end

I think this is a bad example, as it does not do what you think it does.
While flags & 0x01 != 0 tests the least significant bit, flags[0x01]
does not. You’re off by one. It will also not work for multibit checks
like flags & 0x1C.
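
A corrected sketch would use a mask test instead of bit indexing (the
class and its flags value are still made up here):

class Something
  def initialize(flags)
    @flags = flags
  end

  def enabled?
    (@flags & 0x01) != 0   # true when the low bit is set; with a multibit
                           # mask like 0x1C this tests "any of those bits set"
  end
end

Something.new(0x05).enabled?   # => true
Something.new(0x04).enabled?   # => false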

And that’s assuming someone hasn’t already made a bitfield type class
that makes all of this even prettier.

We will see.

flags & 0x1C.

Oops… Yup, that was wrong given the context.

On 08/02/06, Claudio J. [email protected] wrote:

if flags & 0x01
  # do some stuff if the flag is set
end

This is your coding problem. It should be – even in C/C++:

if ((flags & 0x01) == 0x01)

end

will execute in any case. Perhaps I’m biased because I’m a crazy C
hacker, but I cannot believe that others do not fall into this trap. I
really like the clarity of the Ruby syntax, but this “everything but nil
and false is true” logic is totally non-obvious and annoying.

Why? What is it about zero that makes it non-true? After all, in bash:

foo && bar # runs bar if and only if foo returned 0!

In shell scripting, 0 is the true value and everything else is false.

Just as there’s no meaningful sort order for “true”, “false”, there’s
no meaningful interpretation of “0” as “false”. It’s merely a C-style
convention that should be abandoned with relish.

IMO if it looks like a boolean expression it should act like a boolean
expression.

Neither expression (“if 0” or “if flags & 0x01”) looks like a valid
boolean expression to me.

-austin

On 08/02/06, Claudio J. [email protected] wrote:

that. Part of the problem is that C doesn’t have an actual NULL…
NULL is just defined as zero. Overlap, explosions, crash and burn…
If you are inspecting an integer where anything and everything is a
legal value, why are you inspecting it at all?

Sorry, but that doesn’t work. If you have something that returns an
integer value – consider strtol(3). This can return any valid integer
value, but if you get a 0, LONG_MAX, or LONG_MIN, you have to then
check errno to see if the conversion was, in fact, successful. (And the
conversion could be unsuccessful for any number of reasons.)

Ruby’s general approach is much better: either throw an exception or
return +nil+ if you’ve got a non-useful value. I shouldn’t have to
second-guess what may be a valid value.

I expect a language as cool as Ruby to actually use some #to_bool
converter to duck type the integer into the boolean domain. And from
the math point of view I expect 0 to be treated as false in a boolean
context. In Boolean algebra, 0 is always used to indicate false when
numbers are used.

But Ruby is not Boolean algebra. And simply using “0” to represent
false is just a representation – one could use “f” just as easily. This
is what Ruby has done.

Zero isn’t a special number. It’s just zero.

Side note: a & b is a boolean expression, a bitwise boolean expression,
but still boolean.

It is not, in fact, a boolean expression. It is a bitwise expression.
The only thing that makes it even remotely close to being boolean is
its presence in an if statement. Otherwise, you wouldn’t get the desired
result when you really do want to do bitwise OR or AND operations.

-austin

On Wednesday, 08 February 2006 at 23:54, Claudio J. wrote:

Have you ever looked at Boolean algebra? The use of 0 as false is just
standard usage in boolean expressions.

Recalling the earlier "Why Ruby isn’t " thread, someone noted that what
the 0 symbol represents in Boolean algebra has absolutely nothing to do
with what the 0 symbol represents in arithmetic.

Just like a capital letter O isn’t a numerical 0, an arithmetical 0
isn’t the same as a boolean 0. A boolean false value is in Ruby
represented by the keyword “false”, an arithmetical zero by the literal
“0” - that’s all there is to it. C’s lack of distinction between these
can arguably be considered a fault of C while we’re nitpicking, as can
Ruby’s rather broad range of values that are true in a boolean context.
Neither is, IMO, more “right” than the other; it depends on how often
you find one practical in your code, and both are a lot more often
practical than Java’s strictness on the issue.

David V.

On Feb 8, 2006, at 23:35, Claudio J. wrote:

I don’t believe that 0 as false originated in programming languages; I
think it actually came from Boolean algebra itself.

Right, I’ll put it simply:

The symbols used in Boolean algebra to indicate true and false values
are the same symbols as those used to indicate the arithmetic integer
values 1 and 0. This does not mean that those symbols represent the
same things, or are equivalent in any way apart from their shape.

Any two-element Boolean algebra (and there are many of them) is
equally valid using {a, b}, {t, f}, or even {larry, bob} as the
elements. Various algebras also use symbols like + and * for things
other than addition and multiplication.

If you still don’t think that integer 0-is-false is a peculiarity of
the C-heritage languages, check out the following Lisp fragments:

(eql NIL 0)
NIL

(if 0 'true 'false)
TRUE

matthew smillie.


Matthew S. [email protected]
Institute for Communicating and Collaborative Systems
University of Edinburgh

On Thu, Feb 09, 2006 at 09:15:54AM +0900, Austin Z. wrote:

“Okay, today any -1 is false”… or even at different levels than
that. Part of the problem is that C doesn’t have an actual NULL…
NULL is just defined as zero. Overlap, explosions, crash and burn…
If you are inspecting an integer where anything and everything is a
legal value, why are you inspecting it at all?

Sorry, but that doesn’t work. If you have something that returns an
integer value – consider strtol(3). This can return any valid integer
value, but if you get a 0, LONG_MAX, or LONG_MIN, you have to then
check errno to see if the conversion was, in fact, successful. (And the
conversion could be unsuccessful for any number of reasons.)

This is not quite correct: strtol(3) does not return 0 on an
out-of-range error; it sets errno to ERANGE and returns LONG_MAX or
LONG_MIN. (It returns 0 only when no conversion could be performed at
all.)
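
For comparison, Ruby’s own string-to-integer conversions follow the
convention described earlier in this thread:

Integer("42")    # => 42
Integer("abc")   # raises ArgumentError: a hard failure, no sentinel value
"abc".to_i       # => 0, the atoi-style behaviour, ambiguous just like C's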