Object/Relational Mapping is the Vietnam of Computer Science

Ok. I can stand the SQL love-in no longer. :)

Anyone who’s actually used db4o:

  • Knows it’s a perfectly viable solution for the majority of
    applications out there today, since they don’t approach its
    performance or storage limits
  • Knows it’s a simpler database to develop for than generating reams
    of mapping files or accepting the limitations of a system like
    ActiveRecord
  • Knows the data is safe because the database is open-source,
    exports very easily, and no one is about to timebomb the frameworks
  • Knows that for many common scenarios the performance will wipe the
    floor with many popular RDBMS’s
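db4o’s API is Java/.NET, so it can’t be shown directly here, but the “no mapping layer” point can be sketched in Ruby with the stdlib PStore, which persists plain objects with no schema declaration and no mapping files. This is a rough analogy only, not db4o itself:

```ruby
require 'pstore'

# A plain Ruby object -- no mapping file, no schema declaration.
Person = Struct.new(:name, :email)

store = PStore.new('people.pstore')

# Store the object directly; PStore marshals it for us.
store.transaction do
  store[:alice] = Person.new('Alice', 'alice@example.com')
end

# Read it back later; the object comes out as it went in.
store.transaction(true) do
  p store[:alice].name  # => "Alice"
end
```

The appeal is the same as db4o’s: you change the class, not a mapping file.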

Oh, and “toy” comments are tired. Most developers would probably still
call Ruby a “toy” language. That doesn’t mean they know something you
don’t. More than likely they’re just uninformed and biased. I’d hope
we could do better.

On 21.03.2007 23:22, Sam S. wrote:

  • Knows the data is safe because the database is open-source,
    exports very easily, and no one is about to timebomb the frameworks
  • Knows that for many common scenarios the performance will wipe the
    floor with many popular RDBMS’s

How does it do schema migration? Do you have experience with that?

Kind regards

robert

On Thu, Mar 22, 2007 at 06:53:54AM +0900, [email protected] wrote:

fact that the data has been stored and replayed from a crappy magnetic
tape which then relayed the stuff to a downlink and then bounced a few
hops around the world to get to us. in reality the ‘raw’ data is only
as good as the weakest link in all those applications and hardware
bits.

The software that collects the data in the first place is only
collecting data. Data exists everywhere around us, waiting to be
collected, collated, stored, and transformed into information. We care
about the information that comes out, and that depends on the data that
goes in. We’d rather not have to do everything between the two with
rulers, pencils, and our gray matter, so we use applications – but it’s
still about the data, ultimately.

Sam S. wrote:

  • Knows the data is safe because the database is open-source,

How does that help ensure no committed transaction is lost when
someone trips over the power cord?

On 3/21/07, Ryan D. [email protected] wrote:

relational databases are evil."
And what about those of us who don’t speak out of ignorance and
STILL don’t like relational DBs??? Or would you just assume we’re
ignorant too???

I don’t particularly like SQL databases. I just won’t pretend – like
some people, which may or may not include yourself; I don’t know since
you’ve never made a statement on it, certainly not as stupid as
“relational databases are evil” – that object databases or xml
databases or anything else like that is a good thing.

And I thought I wouldn’t touch this topic with a 10 foot pole… I
generally won’t touch a thread that is one of your hot topics because
it just isn’t worth it (see your comment about Pascal above). You
entered this thread as abusively as you could, pretty much on par
with all your other hot topic threads. I think you do a lot of good
work, but this regrettably makes pretty much most of it unapproachable.

Ryan, that’s really rich. I strongly suggest you look over your own
contributions and harsh stances before you try to pull this particular
stunt. I choose my battles carefully, and I don’t tend to talk about
stuff in harsh terms that I don’t have extensive experience with. I
know you do the same, but you’d really best look at how you’re
perceived before trying to tell me that my work is unapproachable for
doing exactly what you do.

-austin

How does any of that help you when you want to query your data in a way
that your object model doesn’t make easy? Honest question. I may not
have a lot of experience with object oriented DBs, but it doesn’t take
much imagination to draw parallels between the hierarchical databases
that were around in the 70’s (or so) and the current crop of OO
databases. How does the OO model solve the problems that hierarchical
mainframe databases failed to?

Bottom line is that (other people’s) hard-won experience has shown that
it is generally unwise to store your data in a format that’s easy to
get at in one way, and one way only. Relational databases, shortcomings
of SQL aside, are still the best way to get at data in a fairly general
and speedy fashion. It doesn’t mean they always will be, but as far as
OO databases go, well, using a hierarchy as a general data store has
been tried. It didn’t work then. There’s no reason to think it will
now.

MBL

On Mar 21, 2007, at 15:43 , Austin Z. wrote:

I don’t particularly like SQL databases. I just won’t pretend – like
some people, which may or may not include yourself; I don’t know since
you’ve never made a statement on it, certainly not as stupid as
“relational databases are evil” – that object databases or xml
databases or anything else like that is a good thing.

I actually love OODBs, and having worked for a major vendor in that
space, would consider myself knowledgeable in that arena.

Ryan, that’s really rich. I strongly suggest you look over your own
contributions and harsh stances before you try to pull this particular
stunt. I choose my battles carefully, and I don’t tend to talk about
stuff in harsh terms that I don’t have extensive experience with. I
know you do the same, but you’d really best look at how you’re
perceived before trying to tell me that my work is unapproachable for
doing exactly what you do.

I hear you and for the most part agree. We’re polarizers.

Checking my last 200 mails going to ruby-talk (back to Sept 2006), I
don’t remember (I’m not about to re-read all of them) a single one of
them having a directed personal attack. The harshest I’ve gotten on the
list in the past year or two centered around one of my favorite gems
being poisoned, and it still didn’t have a personal attack in it. I
think the second harshest I’ve gotten centers around writing
image_science because rmagick/imagemagick sucks (and well, it does).
I try to keep my polarized opinions on my blog, where they belong.

I think the main difference is that I know when to stop. I doubt
there is a single thread in the past 4 years I’ve participated in
with more than, say, 5 emails from me (that’s a total guess). I see
at least 10 such threads for you in your last 200 (going back to Oct
2006, so pretty close to the same posting rate). There’d probably be
some value in aggregating posts by author and then subject.

So this isn’t entirely a pot calling a kettle black.

Ahhhh… the combination of multi-values & Ruby… (aroma of
caramel…).

With such a strong connection with Pick, there is an inherent binding
of the language in use (Pick/nameHere/Basic).
I have tried to interest the Caché cadre in some Ruby adventures here
in Sydney.
Any talk of another language going anywhere near it is resisted.

From http://www.intersystems.com/mv/RobertNagelReprint.pdf
“What we have added is the ability to have MultiValue style
access—opens, reads, and writes to files. So you can access your data,
which is stored in our globals, using reads and writes from MV
BASIC, just as you would in any PICK format. You’ve got an amazingly
reliable storage engine underneath you that has 25 years of
development underneath it, so it’s got high performance, extreme
reliability,…”

Intersystems’ “Ensemble” is a “hyper-4GL” visual system building tool.
Not too shabby. Very high customer satisfaction.
High degree of xml use & integration.
Very little recourse to the language level is necessary to produce a
(basic) system.
I cannot say what level of access/development is required to deliver:
http://www.intersystems.com/ensemble/enterprise_benefits.html
The very high level of dynamic ability of Ruby may be a disincentive
to some of these.

Markt

Robert K. schrieb:
[…]

How does it do schema migration? Do you have experience with that?

look at
http://developer.db4o.com/ProjectSpaces/view.aspx/Db4o-Out-Of-The-Box_Presentation
the part named “Refactoring and Schema Evolution”
and at http://developer.db4o.com/forums/thread/26997.aspx

I think that covers most cases. There are other ways too, like
translators and reflectors, but I’m not well versed in that part.

bye blackdrag

On Mar 21, 6:00 pm, Clifford H. [email protected] wrote:

Sam Smoot wrote:

  • Knows the data is safe because the database is open-source,

How does that help ensure no committed transaction is lost when
someone trips over the power cord?

Eh? I’m not talking about ACIDity. I’m talking about the horse people
love to beat about loosing your data due to vendor lock-in, completely
ignoring that there’s no rational difference with your average RDBMS.

On the subject of ACIDity though, here’s a developer press-release on
an older version, along with benchmarks and tests for crash
simulations:
http://developer.db4o.com/blogs/product_news/archive/2006/06/02/25420.aspx

Now, could be that you just don’t trust them. Ok, but since it’s
unlikely you’ve vetted your RDBMS of choice in the same manner, is
that really fair? (Just an idea, not accusing you of anything; you
may very well have written a suite of tests to verify your RDBMS’s
acidity. :)

This is the droid you’re looking for:

F-logic - Wikipedia

T.

On Mar 21, 5:34 pm, Robert K. [email protected] wrote:

How does it do schema migration? Do you have experience with that?

robert
My experience with it was in the 4.x line, but I managed it then much
like I do now. If I want to export some data from MySQL, I don’t
typically use a sql dump. I drop into irb, and write 3 or 4 lines
using ActiveRecord and FasterCSV. :)
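Those “3 or 4 lines” would look something like the sketch below. FasterCSV’s API lives on as the stdlib `csv` library from Ruby 1.9 onward; the `Person` records here are a hypothetical stand-in for the ActiveRecord model you’d actually iterate over in irb (e.g. with `Person.all`):

```ruby
require 'csv'  # FasterCSV's API, bundled as stdlib CSV since Ruby 1.9

# Stand-in for rows you'd get from an ActiveRecord model in irb.
Person = Struct.new(:name, :email)
people = [Person.new('Alice', 'alice@example.com'),
          Person.new('Bob',   'bob@example.com')]

csv = CSV.generate do |out|
  out << %w[name email]                          # header row
  people.each { |p| out << [p.name, p.email] }   # one row per record
end

File.write('people.csv', csv)
```

The point is that the export logic lives in a few lines of your own language rather than in a vendor-specific dump format.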

You can replicate with db4o, or I imagine now that they have an
administrative GUI you could probably dump the data through that.

Or do you mean class migrations? To the best of my recollection, that
was one of the advantages of db4o, automatic versioning without any of
the hoops some of the other vendors made you jump through. Don’t quote
me on that, but I’m fairly certain there was no assembly/module
registration.

James M. wrote:

On 3/20/07, Austin Z. [email protected] wrote:

Data is king. Applications are pawns.

Data is a dead fish. Applications are knowing how to fish.

Data is king crab! Applications are prawns! Don’t you get it???
Crustaceans rulez!

Austin Z. wrote:

isn’t the programs, it’s the amount of DATA that Google contains about
people.

What is Google’s most valuable asset? Not data. They recreate their data
constantly.

Sam S. wrote:

On Mar 21, 6:00 pm, Clifford H. [email protected] wrote:

Sam Smoot wrote:

  • Knows the data is safe because the database is open-source,

How does that help ensure no committed transaction is lost when
someone trips over the power cord?

Eh? I’m not talking about ACIDity.

I’m sorry, but the possibility of a power failure is an infinitely
greater risk than the risk that I won’t be able to use my existing
software and hardware to extract data from the proprietary (or not)
storage format that software might use. You’re the one that said “the
data is safe”. I beg to differ.

I’m talking about the horse people
love to beat about loosing your data due to vendor lock-in

Ok. Perhaps you can explain just how vendor lockin would cause me
to loose (sic) data? I still have the files, and the software, and
the hardware, and backups or redundancy for all. Where’s the chance
of loss that’s mitigated by having the source code as well?

On the subject of ACIDity though, here’s a developer press-release on
an older version, along with benchmarks and tests for crash
simulations: http://developer.db4o.com/blogs/product_news/archive/2006/06/02/25420.aspx

Now, could be that you just don’t trust them.

No, I trust them. I don’t, however, trust them as much as tests
that I know have been conducted, using thread-scheduling hooks
to explore very many of the infinite combinatoric paths of such
things, and in the process, do the same “stop the world” recovery
tests. Such exploration takes years, thousands of clients, and
trillions of transactions, before real trust is deserved.

But in any case, it’s not my data that’s at risk, and it’s not me
who needs to be convinced. It’s my dozens of customers who are
backing up tens of gigabytes of transaction log every day, from
machines costing hundreds of $K, and who are using software that’s
doing the same for tens of thousands of other customers for years,
without the vendor being sued out of existence - as happened to
inferior players during the 80’s - who need to be convinced.

For better or for worse, and even though they now seem to have
risen above their ignorance, the authors of MySQL, who apparently
didn’t know what a transaction is, have unfortunately tarred most
of the open source database world with the same brush. Unfair, but
life is.

Clifford H.

On 21.03.2007 17:01, John J. wrote:

Ok, if you say so. Let’s call it a describing language, but operations
like AUTO INCREMENT seem an awful lot like programming. I guess we have
to say Ruby is not a programming language either. It is a scripting
language.
hmm…
many sources do describe (no pun intended) SQL as a declarative
programming language. It isn’t ‘Turing complete’ because it can’t create
an infinite loop. Big deal.
That’s academic nitpicking.

Actually it’s not, because this fact has real practical consequences.
For example, try to retrieve a tree structure without a defined depth
limit from a table in standard SQL.
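Concretely: with an adjacency-list table of (id, parent_id) pairs and no recursive query support, the application has to loop, issuing one query per tree level until no children come back. A sketch of that loop, with an in-memory array standing in for the table (the data and names are illustrative only):

```ruby
# rows stands in for a table with (id, parent_id) columns.
rows = [[1, nil], [2, 1], [3, 1], [4, 2], [5, 4]]

# Simulates "SELECT id FROM t WHERE parent_id IN (...)"; against a real
# database, each call below would be another round trip.
def children_of(rows, ids)
  rows.select { |_id, parent| ids.include?(parent) }.map(&:first)
end

tree = frontier = [1]
until frontier.empty?
  frontier = children_of(rows, frontier)  # one "query" per level
  tree += frontier
end

p tree  # => [1, 2, 3, 4, 5]
```

With an unbounded depth, there is no fixed number of joins (or queries) that covers every tree, which is exactly the consequence of SQL not being Turing complete.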

Regards

robert

On 22.03.2007 02:01, Jochen T. wrote:

I think that covers most cases. There are other ways too, like
translators and reflectors, but I’m not well versed in that part.

Thanks for the pointer! It seems at least not too big a pain to do
although the “simple use the following code to resave all objects with
UUIDs and VersionNumbers enabled” made me a little nervous. :) But if
UUIDs and VersionNumbers are switched on then that should not be a big
issue. Still “ALTER TABLE ADD ( foo VARCHAR2(100) DEFAULT ‘-’ )” feels
a bit simpler…

Kind regards

robert

Robert K. schrieb:
[…]

Thanks for the pointer! It seems at least not too big a pain to do
although the “simple use the following code to resave all objects with
UUIDs and VersionNumbers enabled” made me a little nervous. :) But if
UUIDs and VersionNumbers are switched on then that should not be a big
issue. Still “ALTER TABLE ADD ( foo VARCHAR2(100) DEFAULT ‘-’ )” feels
a bit simpler…

db4o allows you to add fields to objects without UUIDs and
VersionNumbers. You just change the class and it works. The only two
things that work better in an rdbms are updating a large amount of
data with a single sql command and removing a large number of rows
with a single sql command.

That is because db4o does not have a query mechanism that allows
updating or removing without creating the objects first. And creating a
huge number of needless objects means wasting processing power. But I
also think that if enough customers say they want to have this, then it
can be put into db4o. It is not a big problem to design it, and I think
adding it to the database is also not too much of a pain. So, I won’t
say it is a general disadvantage of oodbms, it is only one for db4o
that could be overcome. Ah well, maybe there is already a mechanism for
this and I just missed it.
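The cost difference can be sketched in Ruby (with a hypothetical `Item` class): where SQL runs a single set-oriented statement like `UPDATE items SET price = price * 2 WHERE category = 'book'` entirely inside the engine, an object database without such a mechanism has to materialize every matching object just to touch one field on each:

```ruby
Item = Struct.new(:category, :price)

# Stand-in for stored objects; with a real object database, each of
# these would have to be loaded from disk and instantiated first.
items = [Item.new('book', 10.0),
         Item.new('cd',   15.0),
         Item.new('book', 20.0)]

# Object-at-a-time update: one full object per matching row, even
# though only a single field changes.
items.each do |item|
  item.price *= 2 if item.category == 'book'
end
```

For a handful of objects this is fine; for millions of rows, the instantiation overhead is the whole story.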

bye blackdrag

On 3/22/07, Joel VanderWerf [email protected] wrote:

  • What’s the biggest worry intelligent people have about Google? It
    isn’t the programs, it’s the amount of DATA that Google contains
    about people.

What is Google’s most valuable asset? Not data. They recreate their
data constantly.

I would (mostly) disagree. Obviously, Google provides value to its
customers/users because of the algorithms it applies to the data it
collects. However, the data Google has is of immense intrinsic value.
Part of the value is that Google continually refreshes the data, but
saying that it’s “recreated” constantly isn’t quite true; it’s
partially refreshed constantly. If they lost 20% of their data, it
would take them a long time to recreate that 20% because of the
sheer volume – and some portion of that 20% would be forever lost.

Data matters immensely to Google.

-austin

On Thu, Mar 22, 2007 at 07:25:09PM +0900, Clifford H. wrote:

Ok. Perhaps you can explain just how vendor lockin would cause me
to loose (sic) data? I still have the files, and the software, and
the hardware, and backups or redundancy for all. Where’s the chance
of loss that’s mitigated by having the source code as well?

One doesn’t lose data because of vendor lock-in. One loses (easy)
access to data because of vendor lock-in (coupled with some form of
vendor lock-out, of course – data locked into a given format, user
locked out of the software one uses to access it).