Enterprise Rails app with about 100 tables

I’ve just started creating a web application with approx. 100 tables,
possibly more. Oracle is the DB. There is an existing legacy schema that
I will use parts of.

So far I have created a small part of the app, approx. 10 tables.

I’m using WEBrick on my local machine. The Oracle DB is on its own
server.

My boss has noticed that it’s not really that zippy; there is a small
delay between each screen. There is no reason it should be slow. There
aren’t any images and the HTML is clean.

Looking at the development.log file tells me that the DB requests are
taking very little time (< 0.1 sec).

Has anyone got any ideas about what I can do, or what is making it slow?
It’s not very slow; things just take a second or so to appear.

Ultimately the DB will have millions of rows.

Thanks,
Chris

For one, you are in a development environment. This means that on
every request you fire a new instance of Ruby and go through all the
startup/loading required… for every request. See how this might slow
things down?

If I were you, I’d set up a simple test server with lighttpd if you’re
running *nix, otherwise with Apache and FastCGI. And of course, be
sure to set your environment to ‘production’. There are plenty of
examples of how to do that. I think you’ll notice quite a difference in
performance.
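
To give a concrete (if rough) sketch, assuming a stock Rails 1.x app: the generated config/environments/production.rb already flips the important switches, so it’s mostly a matter of starting the server against that environment.

  # config/environments/production.rb (approximately as generated by Rails)
  config.cache_classes = true                                   # code is not reloaded between requests
  config.action_controller.consider_all_requests_local = false  # full error reports disabled
  config.action_controller.perform_caching             = true   # page/action/fragment caching on

  # Then, for a quick test even with WEBrick:
  #   ruby script/server -e production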

-Nick

Chris wrote:

I’m using WEBrick on my local machine. The Oracle DB is on its own
server. […] My boss has noticed that it’s not really that zippy; there
is a small delay between each screen.

Use one of:

  • Apache 2.x + mod_fcgid (my preference for now)
  • Lighttpd-1.4.11

I’ve been pretty happy with Apache 2.0.55 (MPM Worker) + mod_fcgid. I
changed a few lines in mod_fcgid-1.08 so that one process ignores the
idletimeout. Super fast and very reliable compared to the other
solutions I’ve tested on Linux.

If the next release of Lighttpd fixes the SSL and htdigest issues, then
it will become a very attractive choice as well.
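
For reference, the Rails side of either setup is just the FastCGI dispatcher that Rails generates in public/, roughly the following (a sketch; the real generated file carries more comments and options):

  #!/usr/bin/env ruby
  # public/dispatch.fcgi -- approximately as generated by Rails 1.x
  require File.dirname(__FILE__) + "/../config/environment"
  require 'fcgi_handler'

  # Loads the Rails environment once, then answers requests in a loop
  # for the life of the FastCGI process.
  RailsFCGIHandler.process!

mod_fcgid or lighttpd’s FastCGI module keeps a pool of these processes alive between requests, which is where the win over per-request CGI comes from.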

On 3/30/06, Chris wrote:

My boss has noticed that it’s not really that zippy; there is a small
delay between each screen. […]

Looking at the development.log file tells me that the DB requests are
taking very little time (< 0.1 sec).

Make sure you are running Rails 1.1; the Oracle ‘describe’ query in
1.0 is insanely slow. Michael Schoen fixed it for 1.1, and it makes
development mode much faster. Previously, production mode was nice
and fast, and development was painfully slow on Oracle.
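
If you want to check whether that describe is what you’re paying for, something like this in script/console gives a rough number (Employee here is just a stand-in for one of your own models):

  require 'benchmark'

  # The first call triggers the column introspection (the Oracle "describe");
  # the second is answered from the column information cached on the class.
  puts Benchmark.realtime { Employee.columns }
  puts Benchmark.realtime { Employee.columns }

In development mode that cached column information is discarded as classes reload, so a slow describe gets paid over and over; in production it is paid once per process.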

–Wilson.

Good point, but that still slows it down (with WEBrick not being the
fastest server on the planet).

Nick S. wrote:

For one, you are in a development environment. This means that on
every request you fire a new instance of Ruby and go through all the
startup/loading required… for every request. See how this might slow
things down?

You are describing how the application would work under CGI. Running
under WEBrick in development doesn’t reload Ruby or Rails for each
request, but it does reload changed application classes.
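
For what it’s worth, that behaviour is controlled by one setting in the environment files (sketched here from a stock Rails 1.x app; production.rb sets the same flag to true):

  # config/environments/development.rb (approximately as generated)
  config.cache_classes = false   # changed application classes are reloaded between requests;
                                 # Ruby and the Rails framework stay loaded in the WEBrick process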

Justin


Chris wrote:

Has anyone got any ideas about what I can do, or what is making it slow?

I saw the same slight delay.

Development mode and WEBrick are the main culprits. Use Mongrel
(http://mongrel.rubyforge.org/); it runs much faster, and those delays
went away for me.

Once you go into production mode, it will be even faster.


David M.
Maia Mailguard - http://www.maiamailguard.com
Morton Software Design and Consulting - http://www.dgrmm.net

Nick S. wrote:

Good point, but that still slows it down (with WEBrick not being the
fastest server on the planet).

I don’t think WEBrick itself is much of a problem. About a month ago, in
the thread “Webrick in production?”, Eric H. wrote:

On Feb 27, 2006, at 9:55 PM, Ben M. wrote:

Eric H. wrote:

I serve over two million image hits a day via WEBrick, and startup
and shutdown are handled by FreeBSD’s rc.subr, just like Apache.

So what about the mythical “mutex on each request” that WEBrick has?

I don’t know about Rails, but no such thing exists in WEBrick. (Note
that I’m not using WEBrick to serve Rails requests, I’m using it for
“image hits”.)

You don’t ever have long page waits?

Do you consider ~ 50-60ms long? A typical request is handled in the
10ms-25ms range.

There still may be a few optimizations I can make in the image lookup
code that will reduce the standard deviation.

It never crashes?

USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
www 6036 2.0 0.5 25268 10040 ?? S Sun11PM 23:00.12
/usr/local/bin/rub

No.

Is this one instance of WEBrick handling all those hits?

I fork 8 processes per machine on three machines to distribute the load
a little better since static image requests are very bursty and I have
machines with multiple CPUs. I may cut it back to 6 or 4 processes now
that I correctly handle conditional HTTP requests to save on memory.
(2% CPU is high for these processes.)

I ask because I’ve wished for a web container to run my Rails apps
in, and WEBrick could be that, were it not for all the stern warnings
about it not being suitable for production.

I can’t tell you whether Rails + WEBrick is suitable for serving
high-volume traffic or not; I haven’t looked. WEBrick itself is certainly suitable
for high-volume traffic when you write your servlets correctly (and,
possibly, fork()). Rails may simply be missing something that allows it
to run well multi-threaded.

(Been meaning to check out Mongrel too… but I’m still on 1.8.2.)

If I can serve static images with a lookup in MogileFS in 12ms, and
Apache can serve static content without any lookups straight off disk in
6ms, I don’t see what benefit Mongrel would give me.

My speed boost comes from using sendfile(). WEBrick makes it easy to
match a URL with a File so I can pass it right in.

But it is the case that Rails itself is single-threaded, and this means
that Rails on WEBrick can only handle one request at a time.
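
(As an aside, for anyone who hasn’t used WEBrick outside of Rails: the kind of standalone servlet Eric is describing is only a few lines. Here is a minimal sketch, not his actual image server, which presumably does its MogileFS lookup inside do_GET:)

  require 'webrick'

  # A bare WEBrick servlet: no Rails dispatcher involved, each GET is
  # answered directly by the servlet instance.
  class PlainServlet < WEBrick::HTTPServlet::AbstractServlet
    def do_GET(request, response)
      response.status = 200
      response['Content-Type'] = 'text/plain'
      response.body = "hello from #{request.path}\n"
    end
  end

  server = WEBrick::HTTPServer.new(:Port => 3000)
  server.mount('/hello', PlainServlet)
  trap('INT') { server.shutdown }
  server.start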

regards

Justin