Why should I use Ruby on Rails?

[email protected] wrote the following on 15.02.2007 11:01:

You got the breakdown right.

Pizza boxes make great front-end processors, but they do not handle
database transaction processing on the scale of the big iron. When you have
large-scale, real-time process management problems with dynamic data,
it is time to bring in the big iron for the back-end processing.

Agreed. But isn’t it an argument against your previous statement? Rails
doesn’t sit on the backend server and is in itself inherently scalable
on pizza boxes; only the DB and its related needs can call for big iron
in some cases. Whatever application server you use, if your DB is the
bottleneck for your kind of workload, you won’t get around it: it is an
inherent cost that you can’t hope to reduce by changing the
application server technology. So going with Rails in this kind of
situation is still the smart choice.

There is an impact of the application layer technology choice on the
DB load, but when you hit the kind of loads that mandate big iron, you
should already have the resources (smart people and money) needed to
remove the bottlenecks (or you are doomed anyway). In ActiveRecord I
identified the following problems, with some existing or projected
solutions:

  • no integrated read cache: plugins using MemCache exist (although I
    avoided them and preferred to implement caching at a higher level myself
    until now; a sketch of that kind of higher-level cache follows this list),
  • high number of DB connections when handling a large site (one for each
    Rails process): you can use connection poolers (I know that there is at
    least one project for PostgreSQL designed for this), and this is not a huge
    problem as modern RDBMSs can handle thousands of simultaneous connections
    without trouble,
  • basic Ruby drivers and no prepared statements: AFAICT the
    ActiveRecord::Base API should allow its code to be modified to handle
    prepared statements transparently, so it will probably come as a plugin
    (isn’t there already one?), but the drivers should support them too.
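
To make the first point concrete, here is a minimal sketch of the kind of
higher-level read cache I mean, assuming the memcache-client gem and a
hypothetical Article model (the names are illustrative, not from any plugin):

require 'memcache'

CACHE = MemCache.new('localhost:11211')  # assumes memcached on the default port

# Read-through cache: serve the marshalled record from memcached when we
# can, fall back to the DB and populate the cache on a miss.
def cached_article(id, ttl = 300)
  key = "article:#{id}"
  cached = CACHE.get(key)
  return cached if cached
  article = Article.find(id)    # hits the database
  CACHE.set(key, article, ttl)  # memcache-client marshals the object
  article
end

# Don't forget to expire the entry when the record changes, e.g. with
# CACHE.delete("article:#{id}") in an after_save callback.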

Does this help?

Yes, I can imagine the kind of data workflows at Walmart that make a
mainframe more suitable.

A related thought: the size of the data/problem doesn’t always mean
moving to bigger iron; sometimes the DB can be distributed.
Myspace/Youtube/Google/DailyMotion all have a huge DB to handle and
don’t use mainframes but simple x86/x86_64 boxes.

Lionel

On Feb 14, 1:56 pm, “Richard C.” [email protected] wrote:

On 2/14/07, r [email protected] wrote:

But I don’t know the word or description to counter the
minimization. Mostly I’m in shock that a programmer could be exposed
to Ruby and still be spewing such distortions.

Career-protection denial. Java absorbs a monstrous amount of your
career investment in technology. It’s not easy to take in all at once,
seeing years of specialisation erode like that.

I tried to counter the protectionism with Chad’s blog entry, didn’t
work.
http://www.chadfowler.com/2007/1/10/supply-and-demand-in-technology-skills

Let me try to think back to when Java was good:

  • you had connectors to all enterprise data sources so you could
    stitch stuff together in the middleware

  • you had an event-based middleware that could be used to unify and
    rationalize and make manageable and scalable all the kludgey hacks
    that were holding everything together at that point.

  • you had JHTML and servlets (oooh, extending the server itself), which
    was pretty wild

Today:

  • nobody used the connector architecture; they still just use JDBC.
    Using the connector architecture is scary: there is very little support,
    and you end up paying a million bucks for it. Nothing is connected.

  • nobody used the event-based middleware. It is scary to get dependent on
    that; there is no support or industry to back getting tied to the
    million-dollar middleware. Today, in 2007, we still have all the
    kludgey hacks, even more of them, holding together the various
    enterprise initiatives. Nothing is unified.

  • JHTML is old hat, servlets old hat

  • we’re stuck with the clunky web app in the clunky language that was
    intended to work seamlessly with the above two technologies, which
    never panned out.

  • and, if you managed to be standardized and connected and unified in
    what you did, your job would be offshored.

ways to break the ice…ways to break the ice…it’s inertia. The
problem is inertia. What to do?

-r

johnso… wrote:

Phlip wrote:

But I thought hardware was cheaper than
programmers.

Have you priced a maxed out IBM 900 series application server with 90%
average utilization recently?

from $150,000 per year per CPU per title

My bad. The current formula is:

Web developer salary < hot server TCO < Software engineer salary

BTW I develop Rails on an old 750MHz Pentium III notebook running
Kubuntu, so I don’t think I’ll need a hot server…

As per an earlier note, it is rare that Rails programs have to deal
with this scale of system. Most Rails programmers don’t have to deal
with anything but commodity systems.

? you mean they are naive, or blessed?

I would not hesitate to get a computer from a swap-meet, put Linux on
it, and put it on the internet. If it becomes popular, then we have
the problems we wanted to have!


Phlip

On 2/15/07, [email protected] [email protected] wrote:

like English – and that’s OK too :)
:lol

Had a debate about this recently with a colleague, where we are doing
some pretty funky stuff at the WATIR/Ruby Testing end of the spectrum.
He has got some pretty serious turnkey testing done and he quizzed me
about what made for ‘good’ or ‘elegant’ Ruby syntax. After a really long
response by me, I basically concluded:

Write it, DRY it, get the tester to read it/understand it. Natural
language
coding looks lovely, but I will leave it for the DSLs. I don’t want him
distracted from what he is doing, which could be the start of a testing
breakthrough at our company.

David

regards,
Richard.

Lionel B. wrote:

Agreed. But isn’t it an argument against your previous statement? Rails
doesn’t sit on the backend server and is in itself inherently scalable
on pizza boxes; only the DB and its related needs can call for big iron
in some cases.

That’s what I meant by the LAMP scalability model. You develop in a
RAD language that takes forever to execute, you offload the
CPU-heating stuff into the database and web server, and you build a
stack of cheap boxes to run sessions, mostly in that RAD language…


Phlip
http://c2.com/cgi/wiki?ZeekLand ← NOT a blog!!

On Feb 15, 10:21 am, “Phlip” [email protected] wrote:

? you mean they are naive, or blessed?

Neither. WIntel hardware is the majority of what’s out there, so it’s
what most people work with and to. Most applications are small enough
that WIntel hardware will support them without optimization, so there’s
no need to go to specialty systems.

Ironically, thinking like a large-systems designer allows you to move
the boundary up before you have to go to the specialty hardware, that
is, to get more work done on the same hardware. It’s about playing
nice with the system resources and squeezing as much bang for the buck
from them before you move up a level in the hardware department.

Does this help?

On Feb 15, 10:11 am, Lionel B. [email protected]
wrote:

Agreed. But isn’t it an argument against your previous statement? Rails
doesn’t sit on the backend server and is in itself inherently scalable
on pizza boxes; only the DB and its related needs can call for big iron
in some cases. Whatever application server you use, if your DB is the
bottleneck for your kind of workload, you won’t get around it: it is an
inherent cost that you can’t hope to reduce by changing the
application server technology. So going with Rails in this kind of
situation is still the smart choice.

No - the prepare happens on the database server. It is 50 to 80
percent of the processing required when running dynamic SQL. By
running this overhead once, you almost triple the DB server’s ability
to handle transactions on average: if the prepare is roughly two thirds
of the cost of each statement, caching it leaves only the remaining
third to pay per execution.
A prepared statement is not just a way to hide string substitution. It
can be expressed like this, with the understanding that the equivalent
of this is being run on the server platform:

class DatabaseServer
  def initialize
    @cached_statements = {}  # statement handle => precomputed access path
  end

  def prepare(stmt)
    access_path = compute_access_path(stmt)  # parse, optimize, build the plan
    handle = new_handle
    @cached_statements[handle] = access_path
    handle
  end

  def unprepare(handle)
    access_path = @cached_statements.delete(handle)
    access_path.cleanup_and_dispose
  end

  def execute_dynamic_sql(stmt)
    handle = prepare(stmt)    # 50 to 85% of processing occurs here
    cursor = execute(handle)  # 15 to 50% of processing occurs here
    unprepare(handle)
    cursor
  end

  def execute_prepared_stmt(handle, params)
    apply(handle, params)
    # only the 15 to 50% of the work of dynamic SQL remains on the
    # typical execution
    execute(handle)
  end
end
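
Seen from the client side, the same idea with the plain Ruby PostgreSQL
driver (pg), bypassing ActiveRecord; the connection parameters, statement
name and table are assumptions for illustration:

require 'pg'

conn = PG.connect(dbname: 'myapp_development')  # assumed database

# The expensive access-path computation happens once, at prepare time.
conn.prepare('user_by_id', 'SELECT * FROM users WHERE id = $1')

# Each execution only binds parameters and runs the cached plan.
[1, 2, 3].each do |id|
  res = conn.exec_prepared('user_by_id', [id])
  puts res.first.inspect
end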

  • high number of DB connections when handling a large site (one for each
    Rails process): you can use connection poolers (I know that there is at
    least one project for PostgreSQL designed for this), and this is not a huge
    problem as modern RDBMSs can handle thousands of simultaneous connections
    without trouble.

Not true - DB2 only supports 500 concurrent distributed connections.
As one of the big three RDBMS platforms, and in some respects being
more advanced than its competitors, it cannot be discounted. It is
also true that it lags behind its competitors in other areas.
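
To illustrate the pooling idea: the real PostgreSQL poolers sit as a
separate process between the Rails instances and the server, but the
concept is the same as this minimal in-process sketch (the pool size and
connection parameters are assumptions):

require 'thread'
require 'pg'

# A tiny fixed-size pool: a handful of shared connections instead of one
# DB connection per Rails process.
class TinyPool
  def initialize(size)
    @pool = Queue.new
    size.times { @pool << PG.connect(dbname: 'myapp_development') }  # assumed DB
  end

  def with_connection
    conn = @pool.pop  # blocks until a connection is free
    yield conn
  ensure
    @pool << conn if conn
  end
end

POOL = TinyPool.new(5)
POOL.with_connection { |c| c.exec('SELECT 1') }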

A related thought: the size of the data/problem doesn’t always mean
moving to bigger iron; sometimes the DB can be distributed.
Myspace/Youtube/Google/DailyMotion all have a huge DB to handle and
don’t use mainframes but simple x86/x86_64 boxes.

Correct - the examples you post are all presenting data that is
primarily static, which does not need to pass audit, and for which an
hour’s delay is not generally an issue.

On the other hand, real-time, mission-critical business control
processes with highly dynamic data, which must pass audit (or the
programmers and company execs face jail time) and must always be in
a valid state even when transactions fail, demand more responsive
systems.

FYI … a single CPU board on a current 900 series is equivalent to 10
Opteron-class processors. Up to 32 boards can be enabled, for a total
of 320 Opteron-class CPUs. However, the real strength of the big iron
is in the bus and I/O processing capability. These systems are built
as database transaction servers, and they perform very poorly for most
other tasks.