Why Ruby?

Twitter just needed to spin up a few more dynos :wink: <3 Heroku

So I started programming in C a couple of years ago, and started in Ruby
at about the same time, but I didn’t know anything about programming, or
anything about Ruby. My class taught me C, and I figured out how to
translate that into Ruby. As a consequence, my Ruby programs were directly
mappable (pretty much line by line) into C programs, lol.

I think that C has some important things to offer, but as a first language,
its offerings seem limited to weeding out students in a 100-student “first
programming” course. C, being directly translatable into assembly, which is
just a software representation of actual hardware-implemented commands, is
very “close to the road”. And there are a lot of potholes and steep
embankments. I think you should know about pointers, and you should have
some idea of how what you are learning is represented in hardware. C seems
good for this purpose, and it does seem like a good language for really
learning about data structures: because it is so close to the road, very
little implementation is hidden away in magic. But that said, while it has
some educational benefits, students will not come out of it feeling prepared
(or being prepared) to do anything actually useful. It will feel too much
like academic masturbation, and it will be frustrating as well, because
there are so many nuances and subtleties in C that will keep you up all
night banging your head until you finally find some blog post or article
about it. So I don’t recommend C as a first language; it is too
discouraging, with too little reward.

Java was the next thing we learned, and my C-styled Ruby code suddenly
began to look like my Java code, because I finally figured out what a class,
an instance, a method, etc. were. I absolutely loved this class (to be fair,
I mostly loved the C class as well, because of what we were learning, even
though I didn’t like the language). Java is able to show you a lot of the
same things as C, without all the headache. We went through a thousand pages
of Walter Savitch’s book Absolute Java (honestly, the best textbook I’ve
ever had), and I decided that I wish I had learned Java first: you can get
all the same knowledge (and more), except for pointers, which probably need
to be understood before references make sense (meaning underneath the hood).
And it has a nice API with lots of functionality. But it does have a lot of
boilerplate code: you make your method, then you make your setter, then you
make your getter. Think about the wall of code necessary to implement a
Swing application. So all in all, I would consider Java a decent first
language.
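
To show what I mean by boilerplate, here is a quick sketch of my own (the
Person class is made up): the field-plus-getter-plus-setter ritual that
takes a dozen lines of Java is a single declaration in Ruby.

class Person
  attr_accessor :name   # generates both #name and #name= for us

  def initialize(name)
    @name = name
  end
end

person = Person.new("Ada")
person.name = "Grace"   # the generated setter
puts person.name        # => Grace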

Then I took Assembly, which was where I really began to figure out what
goes on underneath the hood of C. Earlier I said you should be able to map
what you learn to what the computer is actually doing; well, I think most of
that happened when I took Assembly, which is even more of a pain to write in
than C. But it implies something interesting: it contradicts most of the
academic reasons for choosing C as the first language. It seems that to
someone starting out, programming is programming, it is all magic regardless
of its abstraction, and only later does it get grounded in anything
realistic. With that thought, it makes sense to choose a language that
allows for high levels of productivity, with low levels of configuration,
boilerplate, headache, hoop-jumping, special cases, etc. They say if you
throw a frog into a boiling pot, he’ll jump right out, but slowly increase
the temperature and he’ll stay until cooked. If you are trying to keep the
interest of beginning programmers, take out pointless hurdles: give them a
language with a nice learning curve, where they can quickly see the fruits
of their efforts, and let them enjoy it long enough to want to take
languages like C and Assembly for the knowledge those will give them.

So I was able to do Ruby at least as well as my C and Java; then I got
excited about it, started reading books, began interning at a design shop
that used Rails, and somewhere along the way realized that I quite like Ruby
:slight_smile: My ability to do anything meaningful with Ruby exceeds my
ability to do anything meaningful with C or Java a thousand times over,
because of things like ruby-toolbox.com and gemcutter.org.

You can still teach recursion and stacks and queues and hashes with Ruby.
It might feel a little like the language is mocking your efforts, since
those are all implemented in the language by default, but you’re reinventing
the wheel regardless of whether you code one in C or Java or Ruby. The point
is the abstract knowledge, which you can get with any of these languages.
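
For example, here is a minimal sketch of my own (not from any course) of
the kind of exercise I mean: a hand-rolled stack plus a recursive factorial,
in plain Ruby.

class Stack
  def initialize
    @items = []          # back the stack with a plain Array
  end

  def push(item)
    @items.push(item)
  end

  def pop
    @items.pop
  end

  def empty?
    @items.empty?
  end
end

def factorial(n)
  n <= 1 ? 1 : n * factorial(n - 1)   # the classic recursion exercise
end

s = Stack.new
s.push(1)
s.push(2)
s.pop          # => 2
factorial(5)   # => 120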

So, why Ruby as opposed to some other high-level language? I don’t know;
probably any high-level language would be appropriate, and I don’t know
enough about others to compare them. But I will say that Ruby has a certain
ability to express ideas in ways that feel natural, as opposed to syntactic.
It is also very easy to get started with, yet there is a large amount you
can go on to learn, which will give you a better survey of other languages
out there.

As far as Rails goes, I tried going through AWDWR (Agile Web Development
with Rails), and it took forever and was frustrating. Probably this is due
to the book being aimed at web devs from other languages, which I was not,
but there is a lot to know with Rails before you can get started. In this
regard, I think that Sinatra makes a lot of sense as a first web framework.
Like Ruby, it has a very low barrier to entry, and you can get off the
ground and start going with just a little bit of knowledge. It also
translates quite nicely into Rails-relevant knowledge as your programs grow
in complexity. Rails, I think, is worth learning; it is a truly amazing
framework written in Ruby. It will help you see effective ways to create,
use, and design your code, and it teaches a lot of good coding practices. If
you are interested in Rails, I think that after you get the really basic
basics down, guides.rubyonrails.org is probably the best resource out there
(better than the API, and most books).

So, yes, I think that Ruby would be a good first language, and it
progresses nicely into fun/interesting/useful code quickly. But I wouldn’t
say it should be the first language, just that it would probably be a
rewarding choice, allowing you to focus on programming at an abstract enough
level to easily touch on important concepts without the overhead of syntax
vomit, contorted workflows, and the tedious grinding that some languages
turn into. And I think that Rails is a fantastic framework, but for a first
framework I’d suggest Sinatra.

I know of three Ruby books aimed at newcomers to Ruby, but I haven’t read
them.

The Well-Grounded Rubyist (Manning), http://tinyurl.com/yba72rn, by David
A. Black, which is supposed to present Ruby in a clear way that makes it
easy to understand what is happening and why. I’d expect to come away with a
really solid understanding of the language itself, and of how to program in
general. In other words, I’d expect to “get it”.

Beginning Ruby (Apress), http://tinyurl.com/ybssb2x, by Peter C., which is
aimed at taking people from beginner status, teaching them Ruby, and
introducing them to lots of useful/cool/rewarding gems. The Amazon page says
it covers Sinatra, so it might make a good choice if you are interested in
that.

Learn to Program (PragProg), http://tinyurl.com/y9a9g4x, by Chris P. I
don’t know much about him or the book, but it seems geared towards people
brand new to programming.

Hope that helps :)

Ruby is my language of choice for many applications because of the
following:

(1) the lack of boilerplate. It encourages you to partition your problem
into small digestible bits because you only need to type

class Foo

end

to get a class - virtually zero effort. Compare the line noise required
in Perl to do the same thing.

(2) the supremely sane data model - all values are references to objects.
(In Perl you have Scalars, Arrays, and References to Arrays, the latter
being held in a scalar variable, plus a load of special-case nonsense like
filehandles and typeglobs. C++ is worse: you have integers, pointers to
integers, and references to integers.)

(3) pure personal preference, e.g. I don’t like python’s run-off-a-cliff
indentation syntax. I had to use it in Occam years ago, and I didn’t
like it then either.

(4) the ruby 1.8 C interpreter (“MRI”) is portable and runs on lots of
things, even tiny embedded systems with 4MB of flash (e.g. OpenWrt)

(5) it’s easy to work in different programming styles. Functional
languages made a lot more sense once I was familiar with things like blocks
and enumerables in ruby (see the sketch just after this list).

(6) it’s not Java.
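
To illustrate (5), a minimal sketch of my own of the block-and-enumerable
style that tends to make the functional mindset click:

(1..10).select { |n| n.even? }                # => [2, 4, 6, 8, 10]
(1..5).map { |n| n * n }                      # => [1, 4, 9, 16, 25]
[1, 2, 3, 4].inject(0) { |sum, n| sum + n }   # => 10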

The downsides for me are a lot of accumulated warts and special cases in
the language, generally aimed at “doing the right thing” but sometimes
catching you out. Examples: auto-splat, lambda-vs-proc-vs-block, and the
different behaviour of ^ and $ in regular expressions. I also detest ruby
1.9’s String handling, which means I’m staying on 1.8; I know I’ll
ultimately have to move to a different language entirely.

The documentation varies from poor to bad. The language is not formally
defined, neither its syntax nor its semantics, and sometimes you just have
to treat it like a black box and experiment to find out how things actually
behave.

Sometimes it can be hard to find your way around other people’s code,
because ruby doesn’t enforce that class Foo::Bar is defined in file
foo/bar.rb (or that it’s even defined in source code at all; it might be
defined dynamically at run time). You’re reliant on the good sense of
the person who wrote the code to organise it sensibly, and you can write
bad code in Ruby just as in any other language.

Finally, in some spheres there are simply better tools for the job. If
you want to handle ten thousand concurrent client connections then
erlang is probably a better fit (yeah, there are event-driven libraries
in Ruby which with effort can achieve the same, but this is an area
where erlang excels). Ditto if you want to build systems with huge
uptime and zero-downtime live code upgrades. But try deciphering an
erlang backtrace and you’ll wish you were back with ruby.

HTH,

Brian.

I realize I’m late to the party, but…

On Tuesday 02 February 2010 09:19:32 am Jim M. wrote:

I’ve asked several friends and associates (application developers) what
programming language they recommend for new development. The most
prevalent answer was Ruby (with Ruby-On-Rails a close second). This was
surprising to me, since my understanding is that Java and C (et al) are
most prevalent.

Quite possible, but it depends entirely what you’re doing.

Is Ruby a good programming language for general purpose usage?

That depends what you’re trying to do.

I don’t want to skew responses by specifying a particular application or
usage. However, please DO respond with qualified answers if you feel
that is appropriate.

A quick analysis of Ruby’s weaknesses:

  • Even once you hack in support for Lisp-like macros, it’s likely not
    going to feel as natural as Lisp.
  • Slow. Not as slow as people suppose it is, but it’s not C, or even
    Lisp.
  • Can be difficult to bundle into one exe, so it may be difficult for
    Windows desktop applications.

Now, I never got far enough into Lisp to get really good at macros, and
Ruby’s syntax is still flexible enough to do interesting things with – in
fact, at least a few of the great examples I’ve seen of Lisp macros can be
done in Ruby, though they obviously aren’t macros in Ruby.
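
Here is a sketch of my own of the kind of thing I mean (the Event class is
made up): where Lisp would use a macro to generate definitions, Ruby can
generate methods while the class body is being executed.

class Event
  # define a query method for each state, roughly what a Lisp macro
  # might expand to
  [:pending, :active, :done].each do |state|
    define_method("#{state}?") { @state == state }
  end

  def initialize(state)
    @state = state
  end
end

Event.new(:active).active?   # => true
Event.new(:active).done?     # => false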

So my answers are mostly going to be qualified by the other two concerns.
Ruby is my favorite language in every other respect, so I’m going to say:
choose Ruby for everything except places where you actually need vertical
performance (performance on a single machine) – but actually measure it,
don’t just assume! – and places where your target output is a single .exe,
unless you can figure out a better way to bundle a Ruby app for Windows.

This does mean, by the way, that Ruby is ideal for web development. You
control the installation (so you just make sure to get a web host which
supports Ruby, or which gives you enough control to use it), and you can
always throw more hardware at it, which is cheaper than developer time.
There are exceptions to this rule, but when you actually get to the point
where you’re so big that it’s worth a few months of developer time to shave
10% off in performance, that’s a nice problem to have, and it’s worth
getting there before your competitors do.

That is, is it worth the time and effort to become proficient?

That’s a different question.

I never use Lisp, and I still consider it worth the time and effort to at
least learn the language. Ruby is very easy to pick up, and you should be
able to see very quickly whether or not it’s worth the time and effort to
become more proficient.

If you already know Java, many concepts will translate right over, but the
beauty of Ruby’s syntax will make it hard for you to look at a Java program
again.

The biggest reason you should learn Ruby is to understand what it means for
code to look pretty, and why you might want your code to look pretty. Look
at some Ruby on Rails examples, and try to keep in mind that Rails is
written in pure Ruby – that is, Rails is a Ruby library that adds this kind
of thing:

30.seconds.from_now
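
(As a rough sketch of how plain Ruby makes that possible – this is my own
simplification, not ActiveSupport’s actual implementation, which returns
Duration objects and handles calendar arithmetic – a library can simply
reopen the built-in Integer class:)

class Integer
  def seconds
    self              # treat the number as a count of seconds
  end

  def from_now
    Time.now + self   # Time + Integer advances by that many seconds
  end
end

30.seconds.from_now   # => a Time 30 seconds ahead of now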

However…

Again, I don’t want to sway responses by
specifying a background for the learner. Might be a relatively new
student of programming, might be an old-timer with decades of
development experience.

This is also not something you can remove from the question. Again, if you
already know Java, some of the object model will be easier to understand,
like the concept of object references. If you know C++, it may take a bit
for you to understand why it’s weird to ask about “passing by value” in
Ruby, versus “passing by reference”.

Similarly, if you’re just starting out, it depends who you ask – I would
say you should learn Ruby, so you get excited about programming, and so you
actually start programming faster, without having to learn about nasty
low-level things like pointers and memory allocation. Others would say just
the opposite – you should start low-level, so that by the time you get to
Ruby, you understand exactly what the language is doing under the covers.
Either way, you should eventually learn both high-level and low-level
languages, for the same reason – you want to understand just what you’re
asking the language to do for you.

On the other hand, if you are already incredibly proficient in something
like assembly language or COBOL, you might find that you’ve already found
your niche, and your job will likely not become obsolete – so you might want
to learn Ruby as a curiosity, but it’s questionable how useful it will be to
the actual work you do. If you’re already incredibly proficient in Lisp,
Ruby might be a hard sell, because there are specific, measurable ways that
Ruby is less powerful than Lisp – the biggest reason I prefer Ruby is
syntax, and most Lisp people like s-expressions.

On 2010-02-06, Brian C. [email protected] wrote:

I also detest
ruby 1.9’s String handling which means I’m staying on 1.8; I know I’ll
ultimately have to move to a different language entirely.

I’m stumped on this one. Overall, I rather prefer 1.9’s string model, and
wish it had been that way all along. It makes more sense to me that
“foo”[1] == “o” than that “foo”[1] == 111. It is a little surprising if I
think of it from a C perspective, which is certainly my native perspective,
but overall it’s a cleaner answer and more consistent with string handling.
In particular, it’s cleaner because it’s consistent with slices of more than
one character. :slight_smile:
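
Concretely:

"foo"[1]       # 1.8: 111, the byte value of "o"
"foo"[1]       # 1.9: "o", a one-character String
"foo"[1, 2]    # 1.8 and 1.9 alike: "oo" – slices were always Strings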

Or is there something else in the new String that you don’t like?

-s

Seebs wrote:

Or is there something else in the new String that you don’t like?

It’s as complex as hell. I took the trouble to document about 200
behaviours of String in 1.9, and I still haven’t really scratched the
surface: https://github.com/candlerb/string19/blob/master/string19.rb

The scariest bit for me is that a simple expression like

a = b + c

(where b and c are both Strings) can raise exceptions. Writing your
program so that you can be sure it won’t raise an exception is hard.
Even the same program running on two different computers with the same
version of ruby 1.9 and the same input data may crash on one but not on
the other.
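
To make that concrete, a minimal example of my own:

b = "caf\xC3\xA9".force_encoding("UTF-8")   # "café" as UTF-8
c = "\xE9".force_encoding("ISO-8859-1")     # "é" as Latin-1
b + c   # raises Encoding::CompatibilityError in 1.9

If c happened to hold only 7-bit ASCII bytes, the same expression would
succeed – which is exactly why the failure can depend on the input data.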

I don’t want to have to expend effort working around artefacts of the
language, especially when dealing with binary data.

Brian C. wrote:

Seebs wrote:

Or is there something else in the new String that you don’t like?

It’s as complex as hell. I took the trouble to document about 200
behaviours of String in 1.9, and I still haven’t really scratched the
surface: https://github.com/candlerb/string19/blob/master/string19.rb

The scariest bit for me is that a simple expression like

a = b + c

(where b and c are both Strings) can raise exceptions.

So what?

Writing your
program so that you can be sure it won’t raise an exception is hard.

Not at all. That’s what rescue is for.

Even the same program running on two different computers with the same
version of ruby 1.9 and the same input data may crash on one but not on
the other.

I don’t want to have to expend effort working around artefacts of the
language, especially when dealing with binary data.

Binary data doesn’t belong in Strings. Period. The only reason you
have it in there in the first place is that 1.8’s piss-poor String
handling allows you to treat strings as byte arrays.

I haven’t used 1.9 yet, so take this with a grain of salt, but my
impression is that encoding-aware Strings that aren’t byte arrays is
exactly the right thing for Ruby to have.

Best,
--
Marnen Laibow-Koser
http://www.marnen.org
[email protected]

On 8 February 2010 21:12, Brian C. [email protected] wrote:

(where b and c are both Strings) can raise exceptions. Writing your
program so that you can be sure it won’t raise an exception is hard.
Even the same program running on two different computers with the same
version of ruby 1.9 and the same input data may crash on one but not on
the other.

I don’t want to have to expend effort working around artefacts of the
language, especially when dealing with binary data.

The complexity stems from the inherent issues of handling strings in
multiple encodings. In 1.8 the support was nearly non-existent. In 1.9
the support is improved at the cost of increased complexity.

I think it would be nice if somebody wrote a library that adds
autoconversion to strings. While it’s not hard to hack in support for a
particular piece of code, doing it as a general library would probably
require a bit of thinking, especially since we still don’t have namespaces.

You can’t do much better than what 1.9 has. In 1.8, a = b + c was
guaranteed not to throw an exception, but it could easily produce complete
nonsense as a result in exactly the cases where 1.9 would throw an
exception. Obviously, you can override the 1.9 + method to do a conversion
automatically instead, and face the consequences if the encoding
information in the string was wrong.
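
A minimal sketch of such an override (my own illustration – and globally
monkey-patching String like this is exactly the kind of thing that is risky
without namespaces):

class String
  alias_method :strict_concat, :+

  def +(other)
    # transcode the right-hand side to our encoding, then concatenate;
    # garbage in, garbage out if the encoding tags were wrong
    strict_concat(other.encode(encoding))
  end
end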

I agree that having to deal with this for binary data as well is the
somewhat unfortunate result of sharing the string class between text
strings and binary data. The upside of such sharing, especially in 1.8,
which lacked the subtyping of String by encoding, was the ability to
interpret binary data as text when looking for textual magic such as
GIF89a.

You can’t have everything at once. A simple solution fails for some more
complex problems; a more complete solution has to be set up even for the
simple cases.

Thanks

Michal

Marnen Laibow-Koser wrote:

Binary data doesn’t belong in Strings. Period.

And Ruby doesn’t provide any other suitable data type. At least, IO#read
and #write only operate with Strings.

Python 3 is going down the route of two different data types: one for
binary data, one for character data. Erlang similarly has “binaries” but
also lists of integers (if you want a list of codepoints).

On Mon, Feb 8, 2010 at 2:00 PM, Seebs [email protected] wrote:

I’d rather get an exception than silently get incoherent output, though.

Amen to that; nothing worse than PHP’s “3 dog night” + 2 evaluating to 5.
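
Ruby, for what it’s worth, refuses to guess. Two lines of my own to
illustrate:

"3 dog night" + 2        # raises TypeError rather than guessing
"3 dog night".to_i + 2   # => 5, but only because you asked for it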

On 2010-02-08, Brian C. [email protected] wrote:

It’s as complex as hell. I took the trouble to document about 200
behaviours of String in 1.9, and I still haven’t really scratched the
surface: https://github.com/candlerb/string19/blob/master/string19.rb

Ahh.

The scariest bit for me is that a simple expression like

a = b + c

(where b and c are both Strings) can raise exceptions. Writing your
program so that you can be sure it won’t raise an exception is hard.

I’d rather get an exception than silently get incoherent output, though.

I don’t want to have to expend effort working around artefacts of the
language, especially when dealing with binary data.

To some extent, I agree, but I was under the impression that you could
address this by specifying a desired encoding.

-s

On 8 February 2010 22:00, Seebs [email protected] wrote:

(where b and c are both Strings) can raise exceptions. Writing your
program so that you can be sure it won’t raise an exception is hard.

I’d rather get an exception than silently get incoherent output, though.

I don’t want to have to expend effort working around artefacts of the
language, especially when dealing with binary data.

To some extent, I agree, but I was under the impression that you could
address this by specifying a desired encoding.

Unless you forget :wink:

Thanks

Michal

On 2010-02-08, Marnen Laibow-Koser [email protected] wrote:

I haven’t used 1.9 yet, so take this with a grain of salt, but my
impression is that encoding-aware Strings that aren’t byte arrays is
exactly the right thing for Ruby to have.

It is certainly a useful thing to have, but I’m not sure that it’s a good
idea to do away with byte arrays.

I have a program which listens for UDP packets containing a hunk of data,
which is a string of binary bits and pieces, such as 3-byte integer values,
flag bits, and so on. I can’t change the format of the packets. I have some
Ruby code which is doing the obvious thing – taking the byte arrays that
are returned as string objects by the underlying syscall, and managing them
using unpack(), etcetera.

If strings are not the right tool for holding hunks of binary data, such as
those you’d get from performing a raw binary read(2) on a data file, what
is? The array type seems INCREDIBLY expensive for this – do I really want
to allocate over two thousand objects to read in a 2KB chunk of data?
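
For reference, the obvious thing looks roughly like this – a made-up packet
layout, not my actual format:

require 'socket'

sock = UDPSocket.new
sock.bind("0.0.0.0", 9999)

data, _sender = sock.recvfrom(2048)   # the payload arrives as a String
seq, flags = data.unpack("Cn")        # 1-byte counter, 16-bit big-endian flags
b1, b2, b3 = data.unpack("@3C3")      # a 3-byte integer at offset 3
value = (((b1 << 8) | b2) << 8) | b3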

-s

On 08.02.2010 20:55, Seebs wrote:

It makes more sense to me that
“foo”[1] == “o” than that “foo”[1] == 111. It is a little surprising if I
think of it from a C perspective, which is certainly my native perspective,
but overall it’s a cleaner answer and more consistent with string handling.
In particular, it’s cleaner because it’s consistent with slices of more than
one character. :slight_smile:

By that logic array[index] should return a single-element array
instead of the element itself to be more consistent with array
slicing.

Strings are just fine for binary data, IMNSHO. That’s what ‘BINARY’
encoding is there for.

Is this a poll? laugh It’s starting to sound like one.

Aaron out.

Hi!

Ruby is nice when you want a straightforward, easy-to-use language. You can
write stuff very quickly, there are few restrictions, and it is very
consistent (OOP). There are also some nice specials like blocks and mixins.
Ruby has good support for additional libraries (e.g. Qt+KDE for GUI). And
when you simply want to calculate some things, you can easily write 100
lines of Ruby. You should also consider that Ruby is terribly slow. E.g. if
you want to try a few billion cases in an algorithm, that is often possible
in realistic time in C++ or even Java, but not in Ruby. It is even slower
than scripting languages like Python or Falcon. Currently C++ and Ruby are
my favourite languages: C++ does not know a lot of limits, is very fast,
and provides very cool things with templates etc. (compile-time
metaprogramming), but it is also a bit complicated and inconsistent; Ruby
is a really nice toy I use whenever it is easily possible.
A lot of people say that C++ is horrible, very inconsistent and
complicated. But that is not true. Most stuff is very consistent, but it
takes more time to learn it; it is easier to think like Ruby than to think
like C++.
And please do not learn Java: it is simply a stupid language, inconsistent
like C++, not as dynamic as Ruby, not as fast as C++, and without as many
compile-time capabilities.

Jonathan


Automatically inserted signature:
Long live freedom!
Stop the use of proprietary software!
Operating System: GNU/Linux
Kernel: Linux 2.6.31.8-0.1-default
Distribution: openSuSE 11.2
Qt: 4.6.2
KDE: 4.4.62 (KDE 4.4.62 (KDE 4.5 >= 20100203)) “release 2”
KMail: 1.13.0

http://windows7sins.org/

Seebs wrote:

On 2010-02-08, Brian C. [email protected] wrote:

The scariest bit for me is that a simple expression like

a = b + c

(where b and c are both Strings) can raise exceptions. Writing your
program so that you can be sure it won’t raise an exception is hard.

I’d rather get an exception than silently get incoherent output, though.

Likewise.

I don’t want to have to expend effort working around artefacts of the
language, especially when dealing with binary data.

It seems to me encodings are less artifacts of the language and more
artifacts of language.

To some extent, I agree, but I was under the impression that you could
address this by specifying a desired encoding.

Indeed, one can force_encoding ASCII-8BIT, if one wants “a = b + c” to
simply concatenate bytes without complaining that one may be jamming two
incompatible encodings together.

Also, reading a file opened in “rb” mode returns strings with encoding
already set to ASCII-8BIT.

So it’s still possible to treat strings as binary in 1.9.
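
For instance, a minimal illustration (the file names are made up):

raw = File.open("data.bin", "rb") { |f| f.read }
raw.encoding                      # => #<Encoding:ASCII-8BIT>

s = File.read("page.html")
s.force_encoding("ASCII-8BIT")    # reinterpret in place as plain bytes
(s + raw).encoding                # => ASCII-8BIT, and no exception raised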

If it were really true that at any given point in my program, I can’t
be sure that string ‘b’ doesn’t have some random, incompatible encoding
from string ‘c’, then I think I’d agree with Brian that string handling
in 1.9 has become unreasonably complex.

But in practice, so far it has worked well for me to transcode to UTF-8
at I/O boundaries. (Or, to use “rb” or force ASCII-8BIT if I know I’m
specifically dealing with binary data.)
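
(A minimal sketch of that boundary transcoding; legacy.txt is a
hypothetical Latin-1 file:)

File.open("legacy.txt", "r:ISO-8859-1:UTF-8") do |f|
  f.each_line do |line|
    line.encoding   # => #<Encoding:UTF-8>, transcoded on the way in
  end
end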

So far, I’m just not experiencing much pain in dealing with encodings in
1.9. And the places I have encountered exceptions, have been occasions
when I really would have been jamming incompatible encodings together,
and I was glad to know about it rather than be producing bogus data.

(In this case I was reading lines via popen() from a program ostensibly
outputting ISO_8859_1, but which under some circumstances, for some fields,
could output UTF-8 or MACROMAN. So yes, I had to do some extra work at the
I/O boundary to try to handle such cases as well as possible; but that is
hardly Ruby’s fault.)

Regards,

Bill

Jonathan Schmidt-Dominé - Developer wrote:

Hi!

Ruby is nice when you want a straightforward, easy-to-use language. You can
write stuff very quickly, there are few restrictions, and it is very
consistent (OOP).

Yup.

There are also some nice specials like blocks and
mixins.

Those aren’t specials; they’re core language features.

Ruby has good support for additional libraries (e.g. Qt+KDE for GUI). And
when you simply want to calculate some things, you can easily write 100
lines of Ruby. You should also consider that Ruby is terribly slow. E.g. if
you want to try a few billion cases in an algorithm, that is often possible
in realistic time in C++ or even Java, but not in Ruby.

Depends on the implementation. MRI is slow (I wouldn’t say “terribly”
slow). Ruby EE and YARV are faster. JRuby is probably faster yet. All
are plenty fast enough for most general-purpose applications.

It is even slower than scripting languages like Python or Falcon. Currently
C++ and Ruby are my favourite languages: C++ does not know a lot of limits,
is very fast, and provides very cool things with templates etc.
(compile-time metaprogramming), but it is also a bit complicated and
inconsistent; Ruby is a really nice toy I use whenever it is easily
possible.
A lot of people say that C++ is horrible, very inconsistent and
complicated. But that is not true. Most stuff is very consistent, but it
takes more time to learn it; it is easier to think like Ruby than to think
like C++.

Sorry, no. C++ looks inconsistent because it is: it’s C with some
object orientation bolted on.

And please do not learn Java: it is simply a stupid language, inconsistent
like C++, not as dynamic as Ruby, not as fast as C++, and without as many
compile-time capabilities.

Java’s more consistent than C++, and more portable. I don’t much like
Java, but I’ll use it over C++ any day. And of course the JVM is
fabulous when coupled with a decent language like JRuby.

Jonathan



Best,
--
Marnen Laibow-Koser
http://www.marnen.org
[email protected]

On 09.02.2010 00:20, Seebs wrote:

I guess, to me, “foo”[1] should be an o. If printing it yields a number,
instead of the letter o, something has gone wrong.

We agree on that. I’ve always thought ruby should have a Char class, so
“foo” could behave basically like a collection of Char. At least as far as
[] is concerned.

On 2010-02-08, Sebastian H. [email protected] wrote:

On 08.02.2010 20:55, Seebs wrote:

It makes more sense to me that
“foo”[1] == “o” than that “foo”[1] == 111. It is a little surprising if I
think of it from a C perspective, which is certainly my native perspective,
but overall it’s a cleaner answer and more consistent with string handling.
In particular, it’s cleaner because it’s consistent with slices of more than
one character. :slight_smile:

By that logic array[index] should return a single-element array
instead of the element itself to be more consistent with array
slicing.

Hmm. You have a point.

I guess, to me, “foo”[1] should be an o. If printing it yields a number
instead of the letter o, something has gone wrong.

-s

On 2010-02-08, Sebastian H. [email protected] wrote:

On 09.02.2010 00:20, Seebs wrote:

I guess, to me, “foo”[1] should be an o. If printing it yields a number,
instead of the letter o, something has gone wrong.

We agree on that. I’ve always thought ruby should have a Char class, so
“foo” could behave basically like a collection of Char. At least as far as
[] is concerned.

That might work.

I think the reason you need a single-character string now is that things
like UTF-8 may make it ambiguous what the next “character” is, and not all
characters are a single byte.

So there are really two separate semantic changes.

  1. Subscripting gives textual data rather than raw numbers.
  2. Sometimes that textual data isn’t a single byte.

These are related, but not quite the same. The issue, I think, is that the
first implies the second, because some encodings have single bytes which
are not a character, but rather the preamble to a character.
-s