Guys, I’m not sure if this is the right place to ask questions like
this.
I’m a software tester and have been working for 4.5 years.
Most of the work is manual testing; for 1-2 months per year I need to
write my own scripts for automation or tools.
For the first 2 years, all my scripts were pure steps: do step 1, do
step 2, and exit on failure.
Then I came across this book: Clean Code: A Handbook of Agile Software
Craftsmanship.
I read it through once, several chapters twice or even more.
My code started to evolve, with more structured functions, meaningful
names, etc.
Now I want to try OO, to move from the level of procedural functions to
the level of classes.
But I just don’t know how to evaluate my OO designs, or how to start
refactoring functions into classes.
Are there any books, tutorials, or heuristic lists that I can start
reading or practicing with?
Hi JaordZZ,
I began software development using OOP, so I can’t recall any books to
recommend. I would encourage you to think critically about when to use
OOP, rather than doing it simply because of Agile dogma. When
reviewing my code, I’ve often found that I tend to use classes for
everything – just out of habit (and due to a heavy Java background).
When coding, I now try to ask myself, “Does this code represent
something that benefits from OOP, or should it exist as a function (or
a module, or a static method)?”
Here are some of the most common reasons to consider OOP:
Making state information handy to the functions that need it.
If you find yourself passing the same data in as parameters to related
functions, then consider wrapping the data and functions up into a
class (see the sketch after this list).
Organizing large amounts of code
OOP allows you to present public interfaces that simplify the usage of
code. This is very helpful for large projects, but not usually
necessary for smaller scripts and projects.
Use of inheritance or polymorphism
If you want to benefit from subclassing objects.
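To make the first point concrete, here is a rough sketch (in C purely
for illustration; the TestSession name and its fields are made up).
Instead of passing the same connection details to every related
function, the data and the functions that need it move in together:

#include <stdio.h>

/* Before: every related function took the same parameters, e.g.
 *   void log_in(const char *host, int port, const char *user);
 *   void run_test(const char *host, int port, const char *user, const char *name);
 */

/* After: the shared data lives in one place... */
typedef struct {
    const char *host;
    int         port;
    const char *user;
} TestSession;

/* ...and the related functions take that one thing as their first argument. */
static void session_log_in(const TestSession *s)
{
    printf("logging in to %s:%d as %s\n", s->host, s->port, s->user);
}

static void session_run_test(const TestSession *s, const char *test_name)
{
    printf("running %s on %s\n", test_name, s->host);
}

int main(void)
{
    TestSession s = { "test-host", 22, "tester" };
    session_log_in(&s);
    session_run_test(&s, "smoke_test");
    return 0;
}

In a class-based language the struct becomes a class and the two
functions become its methods; the refactoring step itself is the same.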
And some common reasons to consider your current approach:
You are already using it and it works (or does it?)
There is a small performance penalty with each object that is
instantiated
I know this is very oversimplified, but my lunch break is ending
soon.
Hope that helps a little.
Here are some of the most common reasons to consider OOP:
Making state information handy to the functions that need it.
If you find yourself passing the same data in as parameters to related
functions, then consider wrapping the data and functions up into a
class.
The key word for this is “encapsulation”. I believe it is the most
important aspect of OO.
Organizing large amounts of code
OOP allows you to present public interfaces that simplify the usage of
code. This is very helpful for large projects, but not usually
necessary for smaller scripts and projects.
I think this is normally called “information hiding”. I use it even for
smaller scripts.
Use of inheritance or polymorphism
If you want to benefit from subclassing objects.
IMHO inheritance is overrated (or maybe overused). Often people turn to
inheritance where composition would be a better choice. But I agree,
this is another important aspect of OO.
And some common reasons to consider your current approach:
You are already using it and it works (or does it?)
There is a small performance penalty with each object that is
instantiated
I know this is very oversimplified, but my lunch break is ending
soon.
… and I have to go to bed now.
Cheers
robert
PS: JaordZZ, a book or tutorial about patterns might help. See here for
a start:
I started programming with C, so I’m not a natural OO programmer.
And yes, the procedure-oriented way, or function way, works fine for
me almost all the time.
The funny thing is, you can code OO even in a language like C. Of
course it is much easier in a truly OO language but if you look at the C
standard library you can look at parts of it in an OO way. For example,
look at syscalls open(), write(), read() and close(). open() returns a
file descriptor (aka object id) and this is what you pass to the other
three methods - pardon: functions - along with more parameters. You can
see read() and write() as ordinary methods and close() as the
destructor. Of course you do not have polymorphism in C, but these
functions are definitely a case of encapsulation: you do not really
know (nor do you need to know) what your operating system’s kernel
stores along with the file descriptor and how it performs all those
operations.
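In code the pattern looks roughly like this (a minimal sketch; the file
name is made up and error handling is trimmed):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* open() plays the constructor: it hands back the object id (the fd). */
    int fd = open("notes.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    /* write() and read() play the methods: the fd is always the first
     * argument, i.e. the hidden self/this parameter. */
    ssize_t written = write(fd, "hello\n", 6);
    (void)written;

    /* close() plays the destructor: after this the fd is no longer valid. */
    close(fd);
    return 0;
}

Nothing about the kernel’s bookkeeping behind that fd is visible here,
which is exactly the encapsulation point.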
So here’s what I’ll try:
I have a copy of Design Patterns by the “Gang of Four”, seldom read; I’ll start now
I think if unit testing is what I need, I should go through unit
testing patterns, see how OO works for a real problem
I’ll definitely follow your suggestions and see if there are signs of OO
in my scripts (encapsulation, etc.).
Sounds like a plan. I am not sure, though, about the connection you are
making between unit testing and OO. Although most testing frameworks in
Ruby are in fact object oriented, the whole concept of unit testing also
works for non-OO code - and a testing framework does not necessarily
need to be OO.
A strategy for finding classes that I find pretty slick is CRC Cards.
This does not have too much overhead and leaves out a lot of detail in
the first step. IMHO that helps you concentrate on the abstract entities.
Another book I usually recommend for in-depth coverage of OO is OOSC
(Object-Oriented Software Construction):
Although it covers a completely different programming language
(Eiffel), it covers all the OO concepts I am aware of in minute
detail.
I started programming with C, so I’m not a natural OO programmer.
And yes, the procedure-oriented way, or function way, works fine for
me almost all the time.
Until I started considering adding unit tests to my own scripts.
With the mock pattern, an interface like Cat with a method
“catchRats” is a must, and then a CatImpl and a CatMock.
That’s how mocking works.
That’s where I started thinking that maybe it’s time for OO.
But as you say, it doesn’t have to be OO all the time,
unless there is a specific problem that is better solved that way.
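A rough sketch of that setup, in C to match the other examples in this
thread (a struct of function pointers stands in for the interface; Cat,
catchRats, CatImpl and CatMock are just the made-up names from this
post, translated to C):

#include <assert.h>
#include <string.h>

typedef struct Cat Cat;
struct Cat {
    /* the "interface": one method, catch_rats */
    int (*catch_rats)(Cat *self, int rats_available);
};

/* "CatImpl": the real implementation, possibly slow or tied to a real system. */
static int real_catch_rats(Cat *self, int rats_available)
{
    (void)self;
    return rats_available > 0;   /* pretend real work happens here */
}

/* "CatMock": a canned stand-in used only by the unit test. */
static int mock_catch_rats(Cat *self, int rats_available)
{
    (void)self;
    (void)rats_available;
    return 1;                    /* always "catches", so the caller's logic is testable */
}

/* Code under test sees only the interface, never the concrete cat. */
static const char *feed_report(Cat *cat)
{
    return cat->catch_rats(cat, 3) ? "fed" : "hungry";
}

int main(void)
{
    Cat mock = { mock_catch_rats };
    Cat impl = { real_catch_rats };
    assert(strcmp(feed_report(&mock), "fed") == 0);   /* the unit test */
    (void)impl;   /* production code would wire in this one instead */
    return 0;
}

In a language with real interfaces this boils down to Cat as an
interface plus a CatImpl and a CatMock, which is exactly the situation
you describe.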
So here’s what I’ll try:
I have a copy of Design Patterns by the “Gang of Four”, seldom read;
I’ll start now
I think if unit testing is what I need, I should go through unit
testing patterns, see how OO works for a real problem
I’ll definitely follow your suggestions and see if there are signs of OO
in my scripts (encapsulation, etc.).
The key word for this is “encapsulation”. I believe it is the most
important aspect of OO.
IMHO inheritance is overrated (or maybe overused). Often people turn to
inheritance where composition would be a better choice. But I agree, this
is another important aspect of OO.
There are (at least) two interesting posts on this on Rick Denatale’s
Talk Like A Duck, in which (amongst other things) Rick quotes Alan Kay:
http://talklikeaduck.denhaven2.com/2008/01/01/alan-kay-on-the-meaning-of-oop
I’ve written before in this blog about how the meaning of the term
“object-oriented programming” got hijacked from its original meaning.
For example, I go into this at some length in my mini-memoirs. [see link
below]
I recently ran into an interesting site with links to “Classical
Computer Science Texts”, which in turn led me to this e-mail exchange
with Alan Kay on the meaning of OOP from July of 2003. This exchange
gives support, with details, for my description of Kay’s concept of
what Object-Oriented Programming was supposed to mean.
http://www.purl.org/stefan_ram/pub/doc_kay_oop_en
http://talklikeaduck.denhaven2.com/articles/2006/07/29/about-me
…
One of the things which always attracted me to Smalltalk was that it
placed encapsulation above all else. As Alan Kay noted in his memoir
about the origins of Smalltalk, his original conception of
object-oriented programming was that software should be composed of
objects which were, in effect, little computers themselves, which
encapsulated both data and behavior, and hid the implementation of both
from other objects, with objects interacting via sending messages to
each other and replying. This uniform object model separates languages
like Smalltalk and Ruby from other “object-oriented” languages. The
various versions of Smalltalk all shared this model, although they
varied as to some of the semantics of message sending and reception.
The idea of classes and inheritance as a way of factoring
implementation was actually a rather late addition to Smalltalk.
Although Kay acknowledges the Simula language, which also lacked
classes and inheritance, as one of the influences on his thinking
leading up to Smalltalk, it’s been a popular misconception that the
better known Simula-67 was his real influence, when Smalltalk and
Simula actually evolved independently.
Kay’s term “object-oriented” got hijacked when Peter Wegner published a
paper entitled “Dimensions of Object Based Language Design” at the
second OOPSLA conference, in which he defined “object-oriented” as
“objects + classes + inheritance.”
http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html
…
Kay’s term “object-oriented” got hijacked when Peter Wegner published a paper
entitled “Dimensions of Object Based Language Design” at the second OOPSLA
conference in which he defined “object-oriented” as “objects + classes +
inheritance.” http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html
Thanks for the mention Colin.
Recently, one of Wegner’s students, William Cook, drafted a paper http://userweb.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf which points
out what was missed back in the 1980s, that there is a vast difference
between Abstract Data Types and Objects, something which I’d argue
caused a lot of confusion over the past 25 or so years.