Beginning programmer questions

Francis C. wrote:

Well, I have a somewhat different theory. Because remember five to ten
years ago, when all the young people were coming out having learned
Scheme and nothing else, so they were useless? I think someone told
the professors they need to train people for the real world, but they
responded by picking Java as the default choice. Seems like it would
be nice to find a happy medium between solid training in CS
fundamentals and practical knowledge. The Java training they’re
getting now seems to be little more than recipes.

This is exactly correct. A lot of CS programs (my own included) are
struggling with a balance between marketable skills (ugh) and a solid
theoretical base. My school is currently considering switching from C++
to Java for intro CS classes. I was lucky enough to have classes with
both C++ and Java, as well as a programming languages class that used
Scheme. Most of the CS students younger than myself will be lucky if
they are exposed to Python.

I think the main problem is that some students just want to learn to be
software engineers, while others want to be computer scientists. The best
proposal I’ve heard so far is to split the CS program in two: one
track for the practical side and the other for the theoretical. Of course,
there would be quite a bit of overlap.

In my opinion, it should be straight computer science; drop the software
engineering or make it a different major :-) Too bad the department is
heading in the exact opposite direction.

-Justin

In your example:

" In C++ you can do this:

MyClass * m_class = NULL;
m_class->do_something();

And the results are undefined (by the language standard).[1] If you are
lucky, you might get a segmentation fault.
"
This would also generate an error, just like it would in Ruby.

In your other example:

" MyClass * m_class = (MyClass) m_otherclass;
m_class->do_something();

Obviously the hammer cast breaks the type safety, and produces similar
undefined behavior. It’s all about having the wrong data in the m_class
pointer variable. In one case it’s a null pointer, in the other case
it’s an inappropriate pointer. I would call them both type errors."

This is something that YOU are forcing the compiler to do.
You are FORCING the compiler to accept this assignment even if it’s
wrong.
That is not the fault of the language but the fault of the developer.

But in Ruby you are not forcing it, it just allows it to happen with no
warning.

Jim W. wrote:

Regg wrote:

But when I hear statements about Ruby being a strongly typed language, I
would expect that to be applied to variables.

One must be clear on exactly what “strong typing” means. Often people
confuse strong typing with the static declaration of variable types.
There are actually (at least) three dimensions to the type question:

static vs. dynamic
strong vs. weak
manifest vs. implicit

C++ is statically typed (variable types are known statically at compile
time), manifestly typed (type declarations must be explicitly made for
all variables), and mildly strongly typed (some common type errors are
caught at compile time, but there is no runtime checking whatsoever,
leaving type holes big enough to drive a truck through).

Ruby, on the other hand, is dynamically typed (the types of the objects
associated with variable names are determined at run time), implicitly
typed (no need to declare the types of variables) and strongly typed (type
errors are always caught).
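
For example, here is that combination in action (a minimal sketch):

x = "5"   # no declaration; the name x is bound to a String at run time
x = 5     # rebinding the same name to a Fixnum is perfectly legal
x + "5"   # TypeError at run time; the type error is still caught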

Most assignments happen between variables.

In C, C++, Java and many other languages, an assignment statement means
“copy this data from that location to this location”. In Ruby, Python,
Lisp and most other dynamic languages, an assignment means “bind this
name to that object”.

In languages that have “copy” semantics, it’s important that the
copied data end up in a location where it can be properly interpreted.
This is especially important because the interpretation of that data
depends on the declared type of that memory location.

In languages that “bind names”, that issue is not nearly as important.
Since the object itself (not the declared type of the location)
determines its interpretation, there is never any confusion.
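
A small sketch of “bind this name to that object” versus copying:

a = "hello"
b = a             # b is bound to the very same object; nothing is copied
b << " world"     # mutating the object through one name...
puts a            # => "hello world"; ...is visible through the other name
puts a.equal?(b)  # => true; one object, two names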

But if what you are saying is that variables are “typeless”, then I have
to go with the belief that Ruby is a “weakly” typed language and not a
strongly typed one.

In C++, a variable is a location in memory that contains the data in
question. Variables in Ruby are not locations at all, i.e. they have
no L-value. Variables in Ruby are truly just names used to look up
objects. It is the objects that have a “type”.

In C++ (a strongly typed language) you can’t do this:

MyOtherClass *m_otherclass = new MyOtherClass();
MyClass *m_class = new MyClass();

The purpose of strong typing is to prevent performing type-inappropriate
actions on objects. Since C++ carries almost no runtime type
information about its objects, the only way it can prevent inappropriate
type actions is to do all the checking at compile time.

In C++ you can do this:

MyClass * m_class = NULL;
m_class->do_something();

And the results are undefined (by the language standard).[1] If you are
lucky, you might get a segmentation fault. However, the equivalent in
Ruby:

m_class = nil
m_class.do_something

is a predictable runtime error that can be handled like all the other
runtime errors that are possible in a program.

m_class = m_otherclass; <-- ERROR (can’t convert MyOtherClass* to
MyClass*)

but this seems to be possible in Ruby.

In summary:

Unlike in C++, variable names are not associated with a particular type.
Unlike in C++, it is not possible to perform type-unsafe operations.

– Jim W.

[1] Some might quibble that attempting to dereference a null pointer is
not really a type violation. Perhaps. But consider the following code:

MyClass * m_class = (MyClass *) m_otherclass;
m_class->do_something();

Obviously the hammer cast breaks the type safety, and produces similar
undefined behavior. It’s all about having the wrong data in the m_class
pointer variable. In one case it’s a null pointer, in the other case
it’s an inappropriate pointer. I would call them both type errors.

Elliot T. wrote:

Which half of the split would you say that learning how to implement
an AVL tree falls under?

– Elliot T.
Curiosity Blog – Elliot Temple

At my school, tree structures are introduced in the third CS class,
‘Data Structures’. It’s likely that particular class would be required
for both (in my hypothetical curriculum :-))

-Justin

On May 15, 2006, at 1:54 PM, Justin C. wrote:

The best proposal I’ve heard so far is to split the CS program in
two: one track for the practical side and the other for the
theoretical. Of course, there would be quite a bit of overlap.

In my opinion, it should be straight computer science; drop the
software engineering or make it a different major :-) Too bad the
department is heading in the exact opposite direction.

Which half of the split would you say that learning how to implement
an AVL tree falls under?

– Elliot T.

Francis C. wrote:

Well, I have a somewhat different theory. Because remember five to ten
years ago, when all the young people were coming out having learned
Scheme and nothing else, so they were useless? I think someone told
the professors they need to train people for the real world, but they
responded by picking Java as the default choice. Seems like it would
be nice to find a happy medium between solid training in CS
fundamentals and practical knowledge. The Java training they’re
getting now seems to be little more than recipes.

Boy, the thread I started has turned into a very large conversation.

Yeah, I have actually been in a computer information systems program for
2 years; I’ll be done in August, so I am actually not a complete
beginner. I have had Java 1 and 2, VB.NET 1 and 2, programming logic,
systems analysis, etc. Yet I have serious problems understanding
programming for some reason, even though I get good grades. Part of the
problem is that the education I am getting isn’t very good. All the
classes are only 7 weeks long, so we don’t have a lot of time to get
involved in things; also, all they are teaching is the language syntax.
There is absolutely no problem solving or algorithm development. It’s
just making toy programs while the instructor basically holds our hands,
and it’s been that way for the past 2 years. So now in August I will
have a crappy education and be $30,000 in debt. Pretty frustrating. I
can’t get hired anywhere at any programming job because I just don’t
have the skills, and I am almost done with this degree. Networking is a
different story, though; it’s a bit more straightforward, plugging
things in and configuring them, etc. So I think I’ll just complement my
crappy education with some certifications like a CCNA or something to
make myself marketable. If I thought I would win, I’d take this school
to court for screwing me over with a crappy education.

Regg wrote:

In your example:

" In C++ you can do this:

MyClass * m_class = NULL;
m_class->do_something();

And the results are undefined (by the language standard).[1] If you are
lucky, you might get a segmentation fault.
"
This would also generate an error, just like it would in Ruby.

Ummm … it’s undefined. That means it is, ummm, undefined. As in every
system might handle it differently.

Most modern systems will give you a Segmentation Fault and dump the
program. Is there a language-defined way to handle that? (I don’t
recall one … but my C++ is a bit rusty).

In Ruby however, it is a standard exception. You can handle it as
easily as handling end of file exceptions or file not found exceptions.
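
For instance (a minimal sketch):

begin
  nil.do_something
rescue NoMethodError => e
  puts "recovered: #{e.message}"  # e.g. undefined method `do_something' for nil
end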

This is something that YOU are forcing the compiler to do.
You are FORCING the compiler to accept this assignment even if it’s
wrong.

It IS wrong (well, most of the time), and yes, I’m able to force the
compiler to accept it anyways (because occasionally it is not wrong).

But in Ruby you are not forcing it, it just allows it to happen with no
warning.

That’s because it is NOT wrong in Ruby.

It’s not wrong until you use the object inappropriately, and at that
point you get a well-defined exception that you can deal with inside the
language.

C++ attempts to prevent object misuse by making the assignment an error,
but letting you override that error and make any misuse your problem.

Ruby prevents object misuse by preventing the misuse when it happens,
not by objecting to assignments that may or may not be correct.

Is this helping? A lot of people coming from statically typed languages
deal with this same issue (I was one at one time). We put a lot of
faith in the hope that static typing will catch a lot of our errors for
us. And it seems that a dynamic language is too “loosey-goosey” to be
safe. It turns out (at least for me) that the errors that are caught by a
static type system and aren’t caught by a dynamic system are not all
that common.

But me telling you these things probably isn’t going to convince you one
way or another. You need to use the language a while and see if your
fears are borne out.

At any rate, I hope you enjoy some Ruby programming while you are
learning!

– Jim W.

On May 15, 2006, at 3:04 PM, Regg wrote:

do something that the compiler otherwise would not allow.
from one type to another? (This is an honest question, maybe I’m
missing something)

It seems like a recipe for disaster.

Thanks

I like to explain the differences like this:

  1. With static, strong typing the variables are typed
  2. With weak typing the operations are typed
  3. With dynamic, strong typing the objects are typed.

So with 1) we have C++:

string a = 7;
compile-time error

With 2) we have Perl:

$a = 1 + 7;
$b = 1 . 7;

$a is 8, $b is “17”.

With 3) we have Ruby (and Python and Smalltalk and some others):

a = 7
b = "string"
c = a + b
Runtime error: you can’t add a string to a number

The fourth case, which I didn’t mention, is static weak typing, which is
C. You’re right that you have to explicitly cast the type, but the
problem is that unless it’s one of those “magic” casts like from double
to int, what you are literally doing is changing the type (without
changing the representation). In C I can actually write this:

double q = 2.0;
printf("%s\n", *(char **)&q);

*(char **)&q does not make me a string “2.0”; it just says take the
memory where this double lives and treat its contents as a pointer to
char. This will hopefully segfault, but maybe it won’t and it will print
something very odd. You can see the weak typing, I hope. Of course the
upside is that it is exactly this capability that lets us have our
Fixnums, nil, true, false, and Symbols as immediate values.

Yeah, I just need to do it for some reason, so I guess that makes sense;
if it weren’t natural then I would have quit a long time ago. It’s just
discouraging sometimes because I am going on 30 here and I meet
18-year-olds that can just do amazing things with programming. When I
talked about natural, I meant people that have been doing programming
since they were kids; I have been into computers for about 3 years now,
that’s about it. I didn’t even know what the Internet was when I was 18,
and it didn’t exist when I was a kid; all I had was my Commodore 64 and a
copy of Ultima 4, lol.

Pat M. wrote:

On 5/14/06, Bill G. [email protected] wrote:

On 5/14/06, corey konrad [email protected] wrote:

Yup, I’ll get it; need a break now though. You know you have been
studying programming and math all day when you read “tacos” as a
misspelling of “to cos”.

Yeah, definitely a natural. It won’t be long before you wake up
dreaming of code ;-)

Oh man, code comes to you at the WEIRDEST times. I can’t tell you how
many times I’ve half-woken up at 3am, and in that mid-dreamy state
somehow figured out the answer to a problem. Then I hop out of bed
and make it happen. Sometimes it doesn’t work, and then you’re like,
“It’ll only take 10 minutes or so” and you find yourself still
pounding at the keyboard at 7am. “hrm, better shower and go to work.”
I like it better when my dream code works perfectly though :-)

On 5/15/06, Jake McArthur [email protected] wrote:

…you can only make the variable point to a different object entirely,
which can be of a different type.

Perhaps this is nitpicking, perhaps not, but it’s incorrect to say
that you cannot change an object’s type (to paraphrase).

class Dog
def woof; "woof"; end
end

class Duck
def quack; "quack"; end
end

dog = Dog.new

Okay so dog points to a Dog object, and is obviously not a Duck. Can
we change its type to Duck though? Instead of full on changing it,
can we ADD the Duck type? Yeah, it’s pretty simple:

def dog.quack
"identity crisis"
end

You can now call dog.quack, and Duck is one of its types.

You might say “Well no, all you did is add a quack method to that
particular object, it’s still a Dog.” You’re right…but for the most
part we don’t care about an object’s class - that’s only interesting
at creation time. An object’s type is simply its interface, and as
long as my Dog object responds to each method that a Duck does, then
the Dog is also of type Duck. That may seem kind of confusing, and
that’s really because you shouldn’t be thinking of it in these
partitioned terms anyway. All we really care about when playing with
an object is whether we can make method calls on it without our
program blowing up.
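
To make that concrete, here is a small sketch (the try_quack helper is
just my own illustration, building on the classes above):

def try_quack(animal)
  # ask about the interface, not the class
  if animal.respond_to?(:quack)
    puts animal.quack
  else
    puts "#{animal.class} stays silent"
  end
end

try_quack(Duck.new)  # => "quack"
try_quack(dog)       # => "identity crisis"; the Dog quacks like a Duck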

Pat

Logan C. wrote:

I like to explain the differences like this:

  1. With static, strong typing the variables are typed
  2. With weak typing the operations are typed
  3. With dynamic, strong typing the objects are typed.

I hadn’t seen it called out like this before. I like it.

– Jim W.

I prefer to just think of Ruby as not having any types at all. All I
care about is whether the object responds to the correct messages,
and if it doesn’t, all you have to do is make it so that it does. Add
methods, mix in modules, whatever suits the case.
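
For instance (a quick sketch; the Quacker module here is made up purely
for illustration):

module Quacker
  def quack; "quack"; end
end

cat = Object.new
cat.extend(Quacker)  # mix the module into just this one object
puts cat.quack       # => "quack"; it responds, so it is duck enough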

– Jake McArthur

Exactly.

“That may seem kind of confusing, and that’s really because you
shouldn’t be thinking of it in these partitioned terms anyway. All we
really care about when playing with an object is whether we can make
method calls on it without our program blowing up.”

In my opinion, thinking in terms of types serves merely as an obstacle
to taking advantage of Ruby’s power.

Pat

Elliot T. wrote:

In my opinion, it should be straight computer science; drop the
software engineering or make it a different major :-) Too bad the
department is heading in the exact opposite direction.

Which half of the split would you say that learning how to implement an
AVL tree falls under?

Heh heh heh… the “AVL tree” part is computer science,
and the “implementing” part is software engineering… ;-)

Hal

On May 15, 2006, at 2:16 PM, Justin C. wrote:

At my school, tree structures are introduced in the third CS class,
‘Data Structures’. It’s likely that particular class would be
required for both (in my hypothetical curriculum :-))

This is totally off topic (so I’ve changed the subject), but learning
implementation details, like AVL trees – which I can easily look up
myself if I ever want to – is one of the main sorts of things that
made me quit university. (Also, there was the Java.)

This was aggravated by things like being asked, on tests, to
implement data structures with pen and paper, and then being graded
on syntax. Glare. But anyway…

Under my ideal curriculum, students wouldn’t be told “learn this” or
“learn that” for specific details. They would instead be given
interesting problems to work on, and teaching would focus more on
general methods of solving problems. So, for example, instead of being
taught AVL trees and given a practice problem where the solution is
an AVL tree, they’d be given a problem with a non-obvious solution
and taught how to figure out what data structure to use (and how to
look up or otherwise learn an implementation).

– Elliot T.

Jim W. wrote:

Logan C. wrote:

I like to explain the differences like this:

  1. With static, strong typing the variables are typed
  2. With weak typing the operations are typed
  3. With dynamic, strong typing the objects are typed.

I hadn’t seen it called out like this before. I like it.

– Jim W.

Yes, it’s such a good summary. I like it too; it’s a kind of wisdom.