The duck's backside

Mark W. wrote:
On 30.05.2008 15:58

I could refer you to any number of the sources I learned OOP from - except that
I’ve given them all away by now. :slight_smile:

Just a hint at two OOP sources I’ve bookmarked:

The three pillars of Object Orientation

http://www.kirit.com/The%20three%20pillars%20of%20Object%20Orientation

Ducktyping vs Interface

http://www.kirit.com/_fslib/_content/thread.asp?id=1720512

Cheers,

jk

On Fri, May 30, 2008 at 3:12 PM, Mark W. [email protected] wrote:

If classes are not categories, what are they?

A convenient tool for specifying behavior and using inheritance and
mixins for code reuse.
At least that is what Ruby classes give us. You are free to apply
paradigms that can be implemented with that tool, but claiming that Ruby
classes *are* that kind of paradigm does not meet with my approval. Maybe
classes can easily be used as categories, but I do not know that.

I’ve done a lot of C++, and I remember when RTTI (runtime type information)
was introduced. I was pretty strongly against its use, believing that
polymorphism through virtual functions was the way to “branch” on type. I
would have strongly avoided code like v === Numeric.
For a very good reason, because it is Numeric === v. :wink:
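
A quick irb-style check of the difference, for anyone following along
(nothing here beyond core Ruby):

Numeric === 42     # Module#=== asks "is 42 an instance of Numeric?"
# => true
42 === Numeric     # Integer#=== is just equality here, so the reversed form fails
# => false
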
But I feel a need to justify my original reply to the OP: he just
really needed that kind of check to make something work quickly, so I
showed him how to do it.
I do, however, completely agree with you that code like that rings an alarm
bell.

As I would avoid it today, in Ruby. We’re actually talking about something
that I think should be rare, which is branching on type.
I boldly (my turn now :wink:) state that there is no type in Ruby, but I
know what you mean and still agree!
We’re only doing it
to validate parameters. In good Ruby, we would not do this, relying on the
runtime to tell us when we’re using the wrong type. To me, duck typing
doesn’t mean “ask me if I quack” as much as “quack, you!”.
I would not like to ban early failure checks like that entirely. Again,
it just rings an alarm bell, but in some contexts TDD is difficult, and
then extremely defensive programming might be in order.
So I think this
whole discussion is about an edge case.
Right indeed, this was forgotten; the OP even stated so. Good point.

Now, for the simple case of a Numeric, we’re probably over-analyzing it.

Yes, I don’t think I would have made the same case for a less basic class.

For me there is a substantial difference though, and my bad that I did
not mention it in my original post. Checking things like Integer === or
String === is surely less error-prone than MyOwnClass ===, as I strongly
feel that the randomly chosen Ruby programmer will mess around with the
former less, especially when it comes to overriding or revoking standard
behavior.

However, I still think using respond_to? to identify an object’s type is
risky.
It is even plain wrong: it specifically asks whether behavior carrying a
given name is implemented or not. Well, you put that quite nicely in
the next sentence anyway :slight_smile:
As I said, it relies upon a much larger namespace than class
constants, hence much more chance for accidental positives (e.g., #draw
meaning different things to different objects).

But I’m finding these remarks about “attitude” kind of amusing. It’s
like Ruby isn’t a tool, it’s a “mindset.” We don’t forgo something
like Numeric === v because it’s wrong (the OP in fact said it was
slightly more readable), but because Ruby has duck-typing,
It really feels wrong because each tool does indeed create its own
mindset, and people are quite tolerant of that, but in Ruby you just
have to remember all the time (not as a mindset but as a simple
consequence of how Ruby works) that

Class != Behavior

There’s nothing wrong with that. But I hope most of us forgo it in favor
of:

data.each { |datum|
do_something_with datum
}
Amen.

Which ends up being more readable, among other things.

Exactly. It has absolutely nothing to do with Ruby’s “mindset,” or “the way
we do things in Ruby.” We choose the latter because it’s better, not because
it’s the Ruby way.
It’s the other way around: it’s the Ruby way because it’s
better; not that it’s better because it’s the Ruby way. The former attitude
is pragmatic, the latter is religious.
Strangely enough I see it the other way round:
“It is better” (a very dangerous statement indeed) seems dogmatic to me.
Using the mindset implied by the semantics, OTOH, feels very pragmatic.

Firstly, #class doesn’t provide any way to be in multiple categories. Ruby
doesn’t support multiple inheritance.
You have just come to the end of your own argument, I am afraid. Not
being an expert I might be wrong of course, but do you mean that mixins
are not powerful enough to satisfy that model?
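
What I have in mind is something like this minimal sketch (the module
names are just illustrative): a class’s ancestry can carry several
mixed-in “categories” at once.

module Quackable; end       # illustrative stand-in modules
module Serializable; end

class Duck
  include Quackable
  include Serializable
end

Duck.ancestors
# => [Duck, Serializable, Quackable, Object, Kernel]
Duck.new.is_a?(Quackable)
# => true -- one object, several "categories"
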
Robert

On 30 May 2008, at 14:23, Mark W. wrote:

in the long term you’ll find yourself doing much more work with very
dispatch a given message to the right method.
That’s why I said it was a static typing mindset - because it’s
impossible to do static typing in Ruby. However putting this kind of
type-checking boilerplate into code is attempting to do the same
thing: allow only objects of a very limited type to be used in a given
context. This isn’t pushing against the Ruby Way because we’re putting
philosophy ahead of good design, but because the very shape of the
language makes it ugly and cumbersome to do so.
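
To make the “shape of the language” point concrete, the kind of
boilerplate being described looks something like this (an illustrative
method, not taken from the original post):

def area(width, height)
  raise TypeError, "width must be a Numeric"  unless width.is_a?(Numeric)
  raise TypeError, "height must be a Numeric" unless height.is_a?(Numeric)
  width * height
end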

I’m often minded of the following extract from Lewis Carroll when
discussing this topic:

‘When I use a word,’ Humpty Dumpty said, in a rather scornful tone, ‘it means just what I choose it to mean - neither more nor less.’

‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’

‘The question is,’ said Humpty Dumpty, ‘which is to be master - that’s
all.’

Alice was too much puzzled to say anything; so after a minute Humpty
Dumpty began again. ‘They’ve a temper, some of them - particularly
verbs: they’re the proudest - adjectives you can do anything with, but
not verbs - however, I can manage the whole lot of them!
Impenetrability! That’s what I say!’

‘Would you tell me, please,’ said Alice, ‘what that means?’

‘Now you talk like a reasonable child,’ said Humpty Dumpty, looking
very much pleased. ‘I meant by “impenetrability” that we’ve had enough
of that subject, and it would be just as well if you’d mention what
you mean to do next, as I suppose you don’t mean to stop here all the
rest of your life.’

‘That’s a great deal to make one word mean,’ Alice said in a
thoughtful tone.

‘When I make a word do a lot of work like that,’ said Humpty Dumpty,
‘I always pay it extra.’

In Ruby methods have a terrible temper and will soon tell you if
you’re misapplying them, so instead of trying to restrict the
allowable types they can work on, it makes much more sense to handle
their temper tantrums when they occasionally throw them. The only
justification I see for not doing that in the original poster’s code is
that the method was talking to a remote web service, which could add an
appreciable delay to finding out whether or not an error had occurred.
But to be honest I’m not sure that in practice I’d be that concerned
about it: instead I’d focus my energies on finding the places in the
application where the context could become muddled and redesigning them
so that they didn’t occur in the first place.
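
A minimal sketch of that “handle the tantrum” approach (the method and
message names are illustrative):

def total(values)
  values.inject { |sum, x| sum + x }    # just tell the objects what to do
rescue NoMethodError, TypeError => e
  # the objects themselves have told us they can't do the job
  raise ArgumentError, "values must be summable: #{e.message}"
end

total([1, 2, 3])     # => 6
total([1, "two"])    # raises ArgumentError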

Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net

raise ArgumentError unless @reality.responds_to? :reason

On May 30, 2:18 pm, Mark W. [email protected] wrote:

On May 29, 2008, at 3:37 AM, David A. Black wrote:

You can ask it what its class is, what its ancestors are,
what methods it responds to, and so forth… but in the end, you hit
the singularity: the sending of the message.

However, an object’s class and ancestors determine what messages an
object responds to. That’s what they’re there for - to hold message
maps.

But in Ruby knowing an object’s class and ancestors is not
sufficient, as the object can have methods added to its eigenclass as
well. More importantly, it can have methods removed with undef_method
in its eigenclass, which means that if you want to call #foo,
checking an object’s class is not sufficient - the object might be the
only one of its class not to answer to #foo.
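
A minimal sketch of that point (class and method names are illustrative):

class Gadget
  def foo; "foo"; end
end

a = Gadget.new
b = Gadget.new

class << b               # open b's eigenclass (singleton class)
  undef_method :foo      # remove #foo for this one object only
end

a.respond_to?(:foo)      # => true
b.respond_to?(:foo)      # => false -- same class, but #foo is gone for b
b.class == a.class       # => true, so checking the class alone would mislead you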

Furthermore, the object may answer to a message by implementing
#method_missing.

The only guaranteed way of knowing whether or not an object answers to
a message in Ruby is to try to send it.

Vidar

On 30 May 2008, at 14:18, Mark W. wrote:

method for the message.
There are many other kinds of runtime error than that, and it will be
a highly unusual (and I suspect carefully designed) method which
doesn’t raise any of them when provided with garbage inputs. Hence
Duck Typing is the preferred approach in Ruby.

You can ask it what its class is, what its ancestors are,
what methods it responds to, and so forth… but in the end, you hit
the singularity: the sending of the message.

However, an object’s class and ancestors determine what messages
an object responds to. That’s what they’re there for - to hold
message maps.

To some extent. But as an individual object’s message maps are
alterable at runtime, knowing its class and ancestors is a very
incomplete picture of what that object is.

Hmmm. Is it that objects change their type, or is it that variables
do?

Unless the definition of type is “a mutable correlation of behaviour
and state, integrated over time” it’s clear objects can and do change
their type at runtime.

Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net

raise ArgumentError unless @reality.responds_to? :reason

Hi –

On Fri, 30 May 2008, Mark W. wrote:

On May 29, 2008, at 3:37 AM, David A. Black wrote:

Duck typing, as a way of thinking, meshes nicely with Ruby, as a tool,
because of how Ruby objects are engineered: the only thing that you
can measure with certainty is what an object does when you send it a
message.

No, you cannot possibly measure with certainty what an object does when you
send it a message. The only thing you can measure is whether it will throw a
runtime error because it doesn’t implement a method for the message.

I mean you can measure what it’s done after it’s done it. I don’t mean
you can measure what it will do; that’s precisely my point (you
can’t).

You can ask it what its class is, what its ancestors are,
what methods it responds to, and so forth… but in the end, you hit
the singularity: the sending of the message.

However, an object’s class and ancestors determine what messages an object
responds to. That’s what they’re there for - to hold message maps.

No; they determine what messages a freshly-minted object responds to.
Objects can change, and classes are objects and can also change.
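
For example (nothing exotic, just core Ruby): reopening a class changes
what every existing instance responds to, long after those objects were
minted.

s = "hello"
s.respond_to?(:shout)    # => false

class String             # classes are open objects; this affects existing instances
  def shout; upcase + "!"; end
end

s.respond_to?(:shout)    # => true
s.shout                  # => "HELLO!"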

it stands
to reason that a language with objects whose types can change during
runtime would be well-served by a programming style that deals as
directly and economically as possible with precisely that condition.

Hmmm. Is it that objects change their type, or is it that variables do?

Objects. Variables are untyped.

I always end up feeling in these discussions like it sounds like I’m
advocating some kind of wild, chaotic, pseudo-non-deterministic
programming style for Ruby. Far from it. I only want to point out that
Ruby imposes certain conditions on us (the changeability of objects),
and that I’d rather embrace that and try to find ways to leverage it
than interpret it as a language design flaw or a test of my resistance
to temptation.

David

On Friday 30 May 2008 08:12:35 Mark W. wrote:

If classes are not categories, what are they?

Classes are categories. But not all categories are classes.

Which ends up being more readable, among other things.

Exactly. It has absolutely nothing to do with Ruby’s “mindset,” or
“the way we do things in Ruby.” We choose the latter because it’s
better, not because it’s the Ruby way. It’s the other way around: it’s
the Ruby way because it’s better; not that it’s better because it’s
the Ruby way. The former attitude is pragmatic, the latter is religious.

There are fundamentally different ways to do it – I’m guessing
functional
languages would do a recursive function. Lisp would iterate, but with…
whatever the opposite of a callback is. (Long weekend, I’m slipping…)

We do it because we believe it’s better. We call it “the way we do
things”
when a majority of us believe it’s better. But obviously, not everyone
agrees
on what’s better – insisting that one particular way is better than
every
other way is also religious.

There is something wrong with it. It’s harder to read and more prone
to error.

No, there’s nothing wrong with it. Just because there’s a better way
doesn’t
mean the old way is wrong.

Can you outline a few real-world examples, where it’s OK to get a
descendant
of Numeric, but not something which merely implements to_i or to_f?

The obvious case is an object that implements to_i in a noncongruent
way. For example, let’s say the number passed in is to be used as a
count. An IP address may respond to to_i, but it can’t possibly be
correct in this context.
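
Ruby’s standard library gives a concrete version of this (IPAddr and its
#to_i are stdlib, not hypothetical):

require 'ipaddr'

addr = IPAddr.new("10.0.0.1")
addr.to_i     # => 167772161 -- a perfectly good Integer,
              #    but nonsense if treated as, say, a retry count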

I think that’s the disagreement. I prefer being optimistic about type
checking – assume everything’s alright until something breaks a test.
Others
prefer being pessimistic – assume a Numeric is a Numeric until you need
to
do something else.

Also: It depends on the context. It might well be that an IP address is
fine.

Firstly, #class doesn’t provide any way to be in multiple
categories. Ruby
doesn’t support multiple inheritance.

We’re not talking about multiple categories - just one. In all my
years of using MI in C++, I’ve very rarely, if ever, seen a class that
didn’t have a “dominant” base class. Most of the time MI is used for
mixing in. You might have a class, for example, that descends both
from Numeric and Observable, but obviously, its main category is
Numeric.

I’m not convinced that there’s always a “dominant” base class. Simple
example:
IP addresses are both numeric (in a sense) and byte strings (and by
extension
arrays/enumerables), and their own thing. There might not be a class
that
supports all of these things, but the thing itself does.

From: “Mark W.” [email protected]

Hmmm. Is it that objects change their type, or is it that variables do?

I don’t see variables in ruby as having any type at all. They merely
hold a reference to some object (any object.)

On a related note:

If you think about it, the class of an object
is continually being interrogated at runtime, in order to dispatch a
given message to the right method.

Ruby is more dynamic than that.

class Widget
  def initialize(x)
    puts "i'm turning #{x.inspect} into a widget!"
  end
end
=> nil
x = "ordinary"
=> "ordinary"
y = "fancy"
=> "fancy"
def y.to_widget
  Widget.new(self)
end
=> nil
x.respond_to? :to_widget
=> false
y.respond_to? :to_widget
=> true
y.to_widget
i'm turning "fancy" into a widget!
=> #<Widget:0x2c978b8>
x
=> "ordinary"

In my view, neither the variable ‘x’ nor ‘y’ changed in the
above example. The object referenced by ‘y’ definitely
changed. If an object’s type is defined by what methods
it responds to, then the type of the object referenced by
‘y’ changed. But its class hierarchy did NOT change:

y.class
=> String
x.class
=> String
y.class.ancestors
=> [String, Enumerable, Comparable, Object, Kernel]
x.class.ancestors
=> [String, Enumerable, Comparable, Object, Kernel]

The only way to tell that the object referenced by ‘y’
responded to :to_widget was to ask it. However, in ruby,
even asking doesn’t always prove anything:

z = "fancier"
=> "fancier"
def z.method_missing(id, *args)
  if id == :to_widget
    Widget.new(self)
  else
    super
  end
end
=> nil
z.respond_to? :to_widget
=> false
z.to_widget
i'm turning "fancier" into a widget!
=> #<Widget:0x2c95b18>

The use of method_missing is a perfectly valid programming
approach in ruby, used often to forward method calls on to
some delegate, or also used to manufacture previously
nonexisting methods on-the-fly.

An example of the latter is the Og (Object Graph) ORM
library.[1] One can invoke methods on a class representing
a database table, without said methods previously existing:

class IPHost
  property :ip, String, :unique => true
  property :hostname, String
end

rec = IPHost.find_or_create_by_ip_and_hostname("1.2.3.4",
  "example.com")

Og is smart enough, using method_missing, to notice that
the method name begins with find_or_create_by_… and as
I recall it does a fairly simple split(/_and_/) on the
remainder of the method name to determine the column names
being referenced.
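
A hedged sketch of how such a dispatcher might look (this is not Og’s
actual code; find_by and create stand in for hypothetical persistence
helpers):

class IPHost
  def self.method_missing(name, *args)
    if name.to_s =~ /\Afind_or_create_by_(.+)\z/
      columns = $1.split(/_and_/)                 # e.g. ["ip", "hostname"]
      conditions = Hash[columns.zip(args)]        # {"ip" => "1.2.3.4", ...}
      find_by(conditions) || create(conditions)   # hypothetical helpers
    else
      super
    end
  end
end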

For optimization purposes, Og may also then define that
method dynamically, so that it will now exist to speed up
further invocations. (But, semantically, of course, there’s
no need for the method to literally be defined, as it could
continue to handle it via method_missing.)

One further small point of interest. Notice that IPHost
doesn’t inherit from anything. (Except Object, by default.)

When the Og library is initialized, it reflects through
ObjectSpace looking for classes which have used its :property
method to define database columns. And it includes an
appropriate module on such classes to imbue them with the
smarts they need to function.

All of the above being reasons why in Ruby, duck typing is
preferentially approached as “tell, don’t ask.”

There is no infallible way to ask objects whether they
respond to a particular method, so just Trust The Programmer
and call the method you expect to be there. In the more
rare cases (like the OP in this thread) where one finds a
need to ask, then ask the particular object if it responds
to the method in question. Because querying its class
inheritance hierarchy, as we have seen above, is about the
least reliable and most overly restrictive approach.
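
To put the two styles side by side (a sketch with illustrative names,
not anyone’s production code):

# "tell, don't ask": just send the message; the object will complain if it can't comply
def render(shape)
  shape.draw
end

# the rarer "ask first" case: query the object itself, never its class
def render_checked(shape)
  raise ArgumentError, "expected something that can draw" unless shape.respond_to?(:draw)
  shape.draw
end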

[1] Og: http://oxywtf.de/tutorial/1.html

Regards,

Bill

Hi –

On Mon, 2 Jun 2008, David M. wrote:

message.

As long as we’re nitpicking, you can’t necessarily measure what’s happened
after the fact, either. The object may well swallow everything with
method_missing and do nothing. It may be possible to play tricks with
respond_to?, send, and so on, to achieve the same effect.

I’m thinking of the effect, though. When you send a message to an
object, something comes back, or an exception is raised. My point is
just that none of that can be known with certainty (I can’t absolutely
know that a.b will return c, or an object of class D, or whatever)
before it happens.

No; they determine what messages a freshly-minted object responds to.

Unless I override #new, or #method_missing, or a number of other tricks.

It depends how you define “responds to”. But mainly my point is that,
at most, the class tells you about the state of things at the birth of
the object – the nature part, that is, but not the nurture part.

David

On Friday 30 May 2008 18:21:51 David A. Black wrote:

No, you cannot possibly measure with certainty what an object does when
you

send it a message. The only thing you can measure is whether it will throw
a

runtime error because it doesn’t implement a method for the message.

I mean you can measure what it’s done after it’s done it. I don’t mean
you can measure what it will do; that’s precisely my point (you
can’t).

As long as we’re nitpicking, you can’t necessarily measure what’s
happened
after the fact, either. The object may well swallow everything with
method_missing and do nothing. It may be possible to play tricks with
respond_to?, send, and so on, to achieve the same effect.

After all, your only way to check what goes on inside the class is to
ask it,
by sending a message.

Of course, at this point, it doesn’t really matter. Your definition of
success
is probably based on what your program actually does – what it actually
inputs and outputs – and not on the internal state of some object.

However, an object’s class and ancestors determine what messages an
object

responds to. That’s what they’re there for - to hold message maps.

No; they determine what messages a freshly-minted object responds to.

Unless I override #new, or #method_missing, or a number of other tricks.

On Sunday 01 June 2008 21:24:30 Mark W. wrote:

On Sun, Jun 1, 2008 at 7:04 PM, David M. [email protected] wrote:

There is something wrong with it. It’s harder to read and more prone
to error.

No, there’s nothing wrong with it. Just because there’s a better way
doesn’t
mean the old way is wrong.

So I’ll just say that we’re not going to agree on a lot if you don’t think it’s
wrong to do something in the old C style when an easier, more readable, and
less bug-prone way is available.

Depends very much on context, and on the definition of “wrong”. It’s a
semantic argument – we both agree that #each is better.

On Sun, Jun 1, 2008 at 7:04 PM, David M. [email protected]
wrote:

There is something wrong with it. It’s harder to read and more prone
to error.

No, there’s nothing wrong with it. Just because there’s a better way
doesn’t
mean the old way is wrong.

Sorry I haven’t replied to a lot of very interesting posts in this
thread
lately. I changed my host and things have been up in the air lately.

So I’ll just say that we’re not going to agree on a lot if you don’t think
it’s wrong to do something in the old C style when an easier, more
readable, and less bug-prone way is available.

///ark

On 2 Jun 2008, at 03:24, David A. Black wrote:

know that a.b will return c, or an object of class D, or whatever)
before it happens.

Exactly.

This is the same argument that split physics a century ago. The
classical view relied on the precision of mathematics to provide a
clockwork understanding of the universe whilst the modern view used
real-world experiments to show that below a certain level of
granularity such certainty was an illusion. At the time many critics
of the new physics claimed that it couldn’t possibly be right for very
similar reasons to those given by advocates of immutable and static
typing: that runtime uncertainty makes a nonsense of provability/
causality and hence must be something other than it appears. That
argument still rages in some corners of physics (cf Bohm’s implicate
order) but for all intents and purposes uncertainty is the dominant
view and the bedrock of our digital electronic technology.

How does this apply to Ruby? Because of method_missing and the open
nature of classes and objects the only way to know anything about an
individual object is to make an observation, sending it a message that
queries its internal state. The very act of observation may or may not
change that internal state, and the observer will never be entirely
certain that the latter is not the case. That’s just the nature of the
language, much as in C programs there is no way to know in advance
what type of memory structure a void pointer will actually reference
or the damage that operating on it may cause to a program’s integrity -
but there are still cases where a void pointer is an appropriate
solution.

If certainty is important you can apply all kinds of design
conventions to support it. Unit testing performs a battery of
experiments to ensure requirements are met. Behaviour driven
development encourages a minimal implementation closely mirroring
expressed requirements. Tight coding standards might forbid use of
certain ‘dangerous’ language features such as method_missing or
dynamic module inclusion, or perhaps even mandate specific design
techniques such as exception-driven goal direction, runtime contracts,
design patterns or whatever else happens works for the developers
concerned.

The point with all these approaches is that they are focused on
reducing the number of ways in which an object will act at runtime so
that the underlying uncertainty is managed. Effectively they move the
granularity of the system so that it obeys classical expectations.

But the uncertainty is still there under the covers, and when applied
appropriately it can be used to provide elegant solutions to problems
that would otherwise be tedious and/or impossible to tackle with
static approaches. And once embraced for what it is, it opens a range
of new possibilities for writing reliable and robust applications.

Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net

raise ArgumentError unless @reality.responds_to? :reason

On 2 Jun 2008, at 12:10, Robert K. wrote:

You could even say that static typing conveys a false sense of safety
(which could lead you to neglect testing) whereas this effect does not
happen with “uncertain” (aka “dynamic”) languages.

As is often the case on big C++ and Java projects where complexity (in
the form of uncertainty over requirement correctness) is a dominant
factor.

I am not sure about DbC languages such as Eiffel. These go much
further in defining semantics and you cannot easily violate assertions
that they provide, i.e. you get more safety than just static types. I
have always wanted to work with Eiffel but unfortunately never found
the time. Also, from what I read it would feel a bit like a
straitjacket - and given the option I much more prefer Ruby to get
things done. :slight_smile:

I’ve played with DbC by convention on embedded projects where the
overhead was much less than the reward, but my attempts to get into
Eiffel always fall foul of a low boredom threshold (as is the case
with Ada). I guess like most hackers I’m lazy, and languages like Ruby
allow that laziness to be productive :wink:

Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net

raise ArgumentError unless @reality.responds_to? :reason

Hi –

On Mon, 2 Jun 2008, Eleanor McHugh wrote:

that they provide, i.e. you get more safety than just static types. I
have always wanted to work with Eiffel but unfortunately never found
the time. Also, from what I read it would feel a bit like a
straitjacket - and given the option I much more prefer Ruby to get
things done. :slight_smile:

I’ve played with DbC by convention on embedded projects where the overhead
was much less than the reward, but my attempts to get into Eiffel always fall
foul of a low boredom threshold (as is the case with Ada). I guess like most
hackers I’m lazy, and languages like Ruby allow that laziness to be
productive :wink:

I took an extended look at Eiffel 10 or 12 years ago, and thought it
was very cool, in ways that are almost diametrically opposed to Ruby’s
coolness, of course. I’ve always thought Eiffel would be a good
alternate name for Ruby, though, because apparently the Eiffel tower
is lighter than the cylinder of air that contains it (!) and I think
of Ruby as having that quality of more power than can be accounted for
by what you actually see. (Or something.) But it’s also a good name
for Eiffel, for other reasons.

David

On Mon, Jun 2, 2008 at 4:04 AM, David M. [email protected]
wrote:

On Friday 30 May 2008 08:12:35 Mark W. wrote:

If classes are not categories, what are they?

Classes are categories. But not all categories are classes.

IMHO this is a gross generalization!

I guess that you might indeed use Ruby classes as categories in your
designs. I do not know what others do, but the simple fact that I use
Ruby classes for other things falsifies your statement. BTW,
performance apart, I can live perfectly well without classes in Ruby.

Cheers
Robert

On Mon, Jun 2, 2008 at 9:05 AM, David A. Black [email protected]
wrote:

I took an extended look at Eiffel 10 or 12 years ago, and thought it
was very cool, in ways that are almost diametrically opposed to Ruby’s
coolness, of course. I’ve always thought Eiffel would be a good
alternate name for Ruby, though, because apparently the Eiffel tower
is lighter than the cylinder of air that contains it (!) and I think
of Ruby as having that quality of more power than can be accounted for
by what you actually see. (Or something.) But it’s also a good name
for Eiffel, for other reasons.

A little less than 18 years ago, I chaired an OOPSLA panel called “OOP
in
the Real World” http://portal.acm.org/citation.cfm?id=97946.97981

One of the panelists was Burton Leathers from Cognos. He gave a
characterization of the popular OO languages at that time based on their
“origin.” As I remember some of the highlights were:

Objective-C reflects its Yankee origins (it was originally developed by
Brad Cox and Tom Love at an ITT lab in Connecticut); it does just what
it has to do and nothing more.

C++ is like Häagen-Dazs ice cream: you think it’s from Scandinavia, but
it’s really an industrial product from New Jersey.

As I recall he said he hadn’t really used Smalltalk (which comes from
California) but he got the impression that if he did it would be like
surfing in baggie shorts.

And his description of Eiffel, “Quintessentially French!”

I think that there were a few more, but I can’t recall them. Neither
Java
nor Ruby were on the radar in 1990. Java (actually Oak) wasn’t planted
until
1991, and Ruby didn’t appear until 1993.

I wonder how Burton would have characterized them.


Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

2008/6/2 Eleanor McHugh [email protected]:

Absolutely.

The point with all these approaches is that they are focused on reducing the
number of ways in which an object will act at runtime so that the underlying
uncertainty is managed. Effectively they move the granularity of the system
so that it obeys classical expectations.

But the uncertainty is still there under the covers, and when applied
appropriately it can be used to provide elegant solutions to problems that
would otherwise be tedious and/or impossible to tackle with static
approaches. And once embraced for what it is, it opens a range of new
possibilities for writing reliable and robust applications.

You could even say that static typing conveys a false sense of safety
(which could lead you to neglect testing) whereas this effect does not
happen with “uncertain” (aka “dynamic”) languages.

I am not sure about DbC languages such as Eiffel. These go much
further in defining semantics and you cannot easily violate assertions
that they provide, i.e. you get more safety than just static types. I
have always wanted to work with Eiffel but unfortunately never found
the time. Also, from what I read it would feel a bit like a
straitjacket - and given the option I much more prefer Ruby to get
things done. :slight_smile:

Cheers

robert

David A. Black [email protected] wrote:

Dave T., who coined the term “duck typing”, has described it as
“a way of thinking about programming in Ruby.”

Do you know when he actually started speaking about “duck typing”?

David A. Black [email protected] wrote:

No, I don’t know exactly when it was.

That is because the first time I read about duck typing was in a post
by Alex Martelli in early 2000 and he wrote as if the term was already
established (which could be meaningless).

I was just curious whether someone could date the first uses from
Thomas. Of course searching Google for “Thomas duck typing” gives too many results to
check.

Nothing important, just curious. :slight_smile: