Hmm, replying to two in one message now
I wouldn’t want your strip! added to Ruby.
See, exactly that is the dilemma, and it is one reason why facets
exists outside of core Ruby. The same could be argued for 1000 different
use cases as well, and the net result will be that everyone has their
own share of coding style.
This is no problem if everyone works alone, but when you do extend
core classes a lot and rely on solutions which you think are shorter and
more elegant, people are quick to shout “monkey patching” or refuse to
adopt another style. The more projects involved, the more different
styles involved, the higher the chances of conflict. For instance, I do
not like code parts involving @@, $ or lambdas.
Let me put it another way. A ruby project I know of - which is somewhat
successful - happily accepts patches contributed by other people. This
leads to different styles; for instance, one author used .kind_of?
whereas pretty much everyone else preferred .is_a?
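Incidentally, this particular pair is a purely stylistic choice, because
the two methods are aliases in core Ruby:

```ruby
s = "hello"
# is_a? and kind_of? are aliases: both walk the ancestor chain
# (classes and included modules), so picking one is pure style.
puts s.is_a?(String)       # true
puts s.kind_of?(String)    # true
puts s.is_a?(Comparable)   # true - included modules count too
```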
I can think of at least four very different things to expect an
Array#strip! to do. Why add foggy methods to Ruby?
Quite obviously, .strip! on an Array simply applies what .strip! does
on Strings to each element.
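A minimal sketch of that reading (the in-place, element-wise semantics
are my assumption; non-String elements are left alone):

```ruby
class Array
  # Run String#strip! on every String element, in place.
  # Returns self so calls can be chained; the nil that String#strip!
  # returns when nothing changed is deliberately ignored.
  def strip!
    each { |e| e.strip! if e.respond_to?(:strip!) }
    self
  end
end

a = ["  foo ", "bar", 42]
a.strip!   # a is now ["foo", "bar", 42]
```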
It isn’t wasteful to have classes with very few methods.
The point is that I regard the String class as more central to ruby
than the Array class as far as everyday usage is concerned. This is not
about being wasteful or not.
Why didn’t you choose to add this to module Enumerable? Restricting
this to Arrays does not seem to make much sense.
Quite frankly, I haven’t needed it outside of Strings or Arrays so far.
Maybe it should be part of Enumerable, but I really haven’t needed it
outside there yet. It seems easier to add it only where it is needed. As
written above, the only problem arises when I release projects that use
my own style of extending ruby classes. How do other people solve this?
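Should the need ever arise beyond Arrays, the same idea could live in
Enumerable instead; a non-destructive sketch (the name strip_all is
made up here, and it returns a new Array rather than mutating):

```ruby
module Enumerable
  # Return an Array with every String element stripped;
  # other elements pass through unchanged.
  def strip_all
    map { |e| e.respond_to?(:strip) ? e.strip : e }
  end
end

require 'set'
["  a ", "b\n", 3].strip_all   # => ["a", "b", 3]
Set["  a "].strip_all          # => ["a"] - works for any Enumerable
```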
Not extend ruby core classes?
This would be the logical solution for me, but it makes the flexibility
of ruby much less usable if everyone restricts themselves to only what
is inside the language, because otherwise they would have to make sure
that their code works with all their “personal extensions” as well (and
ship those extensions with their projects).
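For what it is worth, Ruby later grew a mechanism for exactly this
conflict: refinements (Ruby 2.0+) scope a core-class change to the
files that explicitly opt in, so two projects with clashing styles
cannot step on each other. A sketch, reusing the Array#strip! idea:

```ruby
module ArrayStrip
  refine Array do
    # Visible only to code that says `using ArrayStrip`.
    def strip!
      each { |e| e.strip! if e.respond_to?(:strip!) }
      self
    end
  end
end

using ArrayStrip           # opt in for the rest of this file
p ["  a "].strip!          # => ["a"]
# Files without `using ArrayStrip` still see a plain, unmodified Array.
```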
The reason to include something in standard Ruby is not the ease of
implementation but rather the usefulness for a wide audience and many
uses of the class.
But how is this usefulness measured? Take the facets library for
example: how many people will use it? In theory it sounds like a great
idea, but on the other hand, if no one used it, that would make the
project quite … useless. http://facets.rubyforge.org/
Do you mean by the core team or ad hoc by application programmers?
Core team of course. The application programmers created facets, after
all …
Modification of core classes is done cautiously (seldom) while
application programmers seem to often augment core classes with
additional methods.
Does anyone know how Rails is doing this? Do they extend core classes
heavily?
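As far as I know, yes: Rails ships its core extensions in ActiveSupport,
which reopens String, Integer, Hash and friends (String#blank?,
Integer#days, and so on). A rough re-implementation of one such
extension, just to show the pattern (ActiveSupport's real version
differs in detail):

```ruby
class Object
  # nil, false, [] and {} all count as blank.
  def blank?
    respond_to?(:empty?) ? !!empty? : !self
  end
end

class String
  # Whitespace-only strings are blank too, as in ActiveSupport.
  def blank?
    strip.empty?
  end
end

p "".blank?      # => true
p " \t ".blank?  # => true
p nil.blank?     # => true
p "hi".blank?    # => false
```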
This totally depends on the application case.
True. But in the perhaps 5000 .rb files I have written so far, most
objects dealt with strings, even if they were stored inside hashes or
arrays.
I can’t really say how it is for other projects, but I dare claim now
that strings are the core everywhere. Bold statements catch attention.
my impression is that people tend to create too few classes
(common example: methods are added to Hash instead of
creating a class whose instances contain a Hash instance
and use it appropriately).
I would instantly believe you. I myself used to create huge classes
years ago, until I realized that classes in themselves are quite good
at simplifying problems by divide and conquer. The smaller something
becomes, the easier it seems to manage.
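The composition alternative from the quote, sketched with a made-up
example class:

```ruby
# Instead of adding count-keeping methods to Hash itself, wrap the Hash
# in a small class and expose only what the domain actually needs.
class WordCounter
  def initialize
    @counts = Hash.new(0)   # the Hash stays an implementation detail
  end

  def add(word)
    @counts[word.downcase] += 1
    self                    # return self so calls chain
  end

  def count(word)
    @counts[word.downcase]
  end
end

wc = WordCounter.new
wc.add("Ruby").add("ruby").add("Rails")
wc.count("ruby")   # => 2
```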
The sole fact that there are more instances of class X vs. class Y
does not tell me anything about what methods should be added to X or Y.
I guess I was trying to make the point that some classes in ruby are
more useful than others. Does this influence decisions to add or remove
methods at all?
I still think the String class is really the core of ruby as far as
everyday usage is concerned, and thus more important than, let’s say,
Array.
But on the other hand, how many people subclass String, and how many
people subclass Array or Hash?
I am actually interested in the complete usage patterns for all kinds
of ruby objects, and in whether people rather subclass and extend, or
directly extend core classes to solve a given problem at hand.
As I said, if I never want to work on projects which other people can
use too, I have no problem at all, because my code is fine already no
matter what style I use (after all, I know my own code). But if 1000
people have 1000 projects, it seems as if the net result is a huge
spaghetti design, which others have called monkey patching or worse.
Regards,
Marc
PS: To be honest, I cannot think of any project that did
require ‘facets’
so far. It seems to me that people are happier not to have a dependency
on something if they can avoid it. If anyone knows projects that use or
require facets, let me know; the bigger the project, the better for
having a look.