Ruby Forum: Mongrel 0.1.1 -- A Fast Ruby Web Server (It Works Now, Maybe)

Zed Shaw (Guest)
on 2006-01-20 14:09
(Received via mailing list)
Hi All,

I previously announced Mongrel 0.1.0, but since I released that late
at night it of course had errors.  This is just a small announcement
for the fixed source:

http://www.zedshaw.com/downloads/mongrel-0.1.1.tar.bz2

Please grab this one and give it a try.


INSTALL

You can grab the above source and then do:

   rake
   ruby -Ilib examples/simpletest.rb &
   curl http://localhost:3000/test

It should print out "hello!".  Check the source of
examples/simpletest.rb to see how it's used.

*** It requires Ruby 1.8.4 to work and a C compiler to compile a
small portable extension.  ***


WHAT IT IS

Mongrel is a web server I wrote this week that performs *much* better
than WEBrick (1350 vs 175 req/sec) and only has one small C
extension.  I'm looking to make Mongrel the answer to Java's Tomcat
as a means of hosting Ruby web applications.   Feel free to send me
your dreams about a (sort of) Ruby hosting library.


Special thanks to Ezra and Sascha for testing and finding this.

Zed A. Shaw
http://www.zedshaw.com/
Robert Feldt (Guest)
on 2006-01-20 14:42
(Received via mailing list)
In future perf comparisons it would be great if you also compare it to
apache and/or lighttpd in a similar setup/situation just to factor out
the specifics of the machine you test it on.

Thanks,

Robert Feldt
Alciato (Guest)
on 2006-01-20 15:20
Compiles and runs without problems under Fedora Core 4 (on an AMD 800
MHz with 500 MB RAM).

siege -u http://localhost:3000/test -d1 -r10 -c25

mongrel
_______

Elapsed time:                   8.22 secs
Data transferred:               0.00 MB
Response time:                  0.01 secs
Transaction rate:              30.41 trans/sec
Concurrency:                    0.20
Longest transaction:            0.12
Shortest transaction:           0.00

WEBrick
_______

Elapsed time:                  10.99 secs
Data transferred:               0.00 MB
Response time:                  0.20 secs
Transaction rate:              22.75 trans/sec
Concurrency:                    4.53
Longest transaction:            3.17
Shortest transaction:           0.01
Sascha Ebach (Guest)
on 2006-01-20 18:53
(Received via mailing list)
this is my output under Cygwin now:

$ ruby -v
ruby 1.8.3 (2005-09-21) [i386-cygwin]

$ rake
(in /cygdrive/h/Download/Browser/mongrel-0.1.1)
/usr/bin/ruby extconf.rb
checking for main() in -lc... yes
creating Makefile
make
gcc -g -O2   -I. -I/usr/lib/ruby/1.8/i386-cygwin
-I/usr/lib/ruby/1.8/i386-cygwin -I.   -c http11.c
gcc -g -O2   -I. -I/usr/lib/ruby/1.8/i386-cygwin
-I/usr/lib/ruby/1.8/i386-cygwin -I.   -c http11_parser.c
gcc -shared -s -Wl,--enable-auto-import,--export-all  -L"/usr/lib" -o
http11.so http11.o http11_parser.o  -lruby -lc  -lcrypt
cp ext/http11/http11.so lib
/usr/bin/ruby -Ilib:test
"/usr/lib/ruby/gems/1.8/gems/rake-0.6.2/lib/rake/rake_test_loader.rb"
"test/test_http11.rb" "test/test_trie.rb" "te
st/test_ws.rb"
Loaded suite
/usr/lib/ruby/gems/1.8/gems/rake-0.6.2/lib/rake/rake_test_loader
Started
Error result after 6 bytes of 15
.Read 18 string was 18
...Hitting server
.
Finished in 2.048 seconds.

5 tests, 10 assertions, 0 failures, 0 errors

$ ruby -Ilib examples/simpletest.rb &
[1] 4692

$ curl http://localhost:3000/test
hello!


$ ./ab.exe -S -n 3000 http://localhost:3000/test
Concurrency Level:      1
Time taken for tests:   5.656250 seconds
Complete requests:      3000
Failed requests:        0
Write errors:           0
Total transferred:      156000 bytes
HTML transferred:       21000 bytes
Requests per second:    530.39 [#/sec] (mean)
Time per request:       1.885 [ms] (mean)
Time per request:       1.885 [ms] (mean, across all concurrent
requests)
Transfer rate:          26.87 [Kbytes/sec] received

WOW, that is really something for Cygwin (Ruby/Cygwin is _really_ slow).
It would be nice if there were at least a WEBrick benchmark included, so
one could make a direct comparison. If you could get Mongrel to be at
least half as fast as lighttpd that would be really something. Then it
would be an alternative for production.

-Sascha
why the lucky stiff (Guest)
on 2006-01-20 21:10
(Received via mailing list)
Zed Shaw wrote:

> I previously announce Mongrel 0.1.0, but since I released that late
> at night it of course had errors.  This is just a small announcement
> for the fixed source:
>
> http://www.zedshaw.com/downloads/mongrel-0.1.1.tar.bz2
>
> Please grab this one and give it a try.

This is a fantasy come true.  Works nicely on Linux.  Sensational!  An
era has opened.

I got Camping working with `register', but it needs both SCRIPT_NAME and
PATH_INFO filled properly to work right.  In the case of a script
mounted at /blog, a /blog/view request should end up as (following the
traditional CGI ways):

  SCRIPT_NAME = /blog
  PATH_INFO = /view

Anyway, here's a crappy postamble for any Camping scripts out there
(none):

  if __FILE__ == $0
    Camping::Models::Base.establish_connection :adapter => 'sqlite3',
      :database => 'blog3.db'
    Camping::Models::Base.logger = Logger.new('camping.log')
    require 'mongrel'

    class CampingHandler < Mongrel::HttpHandler
      def process(request, response)
        # Swap the real ENV out for the Mongrel request so Camping's
        # CGI code reads its variables from there.
        Object.instance_eval do
          remove_const :ENV
          const_set :ENV, request
        end
        ENV['PATH_INFO'] = '/'
        s = response.socket
        # Tack a status line onto whatever Camping pushes at the socket.
        def s.<<(str)
          write("HTTP/1.1 200 OK\r\n#{str}")
        end
        Camping.run('', response.socket)
      end
    end

    h = Mongrel::HttpServer.new("0.0.0.0", "3000")
    h.register("/blog", CampingHandler.new)
    h.run.join
  end

_why
Michael Schoen (Guest)
on 2006-01-20 21:13
(Received via mailing list)
Zed,

Cool stuff. Is the intent that this enables an even more straightforward
approach for running apps (such as a RoR app) than scgi under lighttpd?
Ie., rather than scgi under lighttpd, just use mod_proxy?

Did you read this post, where Mark Mayo wonders why the newer frameworks
haven't just been using an http interface (and instead have struggled
with fcgi)?

http://www.vmunix.com/mark/blog/archives/2006/01/0...

thanks,
Michael
Kirk Haines (Guest)
on 2006-01-20 22:58
(Received via mailing list)
On Friday 20 January 2006 12:33 pm, Michael Schoen wrote:

> Cool stuff. Is the intent that this enables an even more straightforward
> approach for running apps (such as a RoR app) than scgi under lighttpd?
> Ie., rather than scgi under lighttpd, just use mod_proxy?
>
> Did you read this post, where Mark Mayo wonders why the newer frameworks
> haven't just been using an http interface (and instead have struggled
> with fcgi)?

When running IOWA apps under webrick, one can do exactly this.  I don't
do that for production apps, however, because of the performance
penalty.  Even with the socket overhead, Apache or lighttpd with fcgi,
or Apache with mod_ruby, are both substantially faster.

If mongrel proves fast enough, though, using a proxy approach would
become a very viable alternative for production apps.


Kirk Haines
Zed Shaw (Guest)
on 2006-01-21 02:54
(Received via mailing list)
I'm planning a better performance comparison in the future; those
metrics were just quick and dirty ones to prove the concept.

Zed A. Shaw
http://www.zedshaw.com/
Zed Shaw (Guest)
on 2006-01-21 02:57
(Received via mailing list)
Thanks Alciato, it seems to be pretty portable so far, so the
inclusion of the small C extension isn't such a bad idea.

One thing though--and I'm just picking nits--Siege is a really
bad tool for performance testing.  Try out httperf or even
apachebench since they give much more accurate statistics.  Also,
you'll want to run many more requests.   It takes 10000 on my machine
to get stable performance characteristics.

Anyway, thanks for the feedback and watch for more.

Zed A. Shaw
http://www.zedshaw.com/
Zed Shaw (Guest)
on 2006-01-21 03:00
(Received via mailing list)
That's sweet.  Didn't even think it would go that fast on Cygwin.
Great news, as this means there's an exit strategy for IIS and Win32
people other than SCGI.  Now if I can just get the win32 version of
fork working I'd be gold.

One thing: could you do the same test against WEBrick using the
examples/webrick_compare.rb script?  Just for my info.  You'll need
to > /dev/null the output to make the test more fair.

Zed A. Shaw
http://www.zedshaw.com/
Zed Shaw (Guest)
on 2006-01-21 03:03
(Received via mailing list)
On Jan 20, 2006, at 2:32 PM, why the lucky stiff wrote:

>
> This is a fantasy come true.  Works nicely on Linux.  Sensational!
> An era has opened.
>
Thanks!  Yeah, I'm totally excited about it.  I may finally be able
to shut those "performance" whiners up. :-)

> I got Camping working with `register', but it needs both
> SCRIPT_NAME and PATH_INFO filled properly to work right.  In the
> case of a script mounted at /blog, a /blog/view request should end
> up as (following the traditional CGI ways):

Right, I have to work out the interplay of the register function and
how script/path info would work.  Also have to CGI convert the
request parameters and a few other niceties.  I'm contemplating where
you would register handlers, and you'd be able to say whether it has
to be an exact match, or a partial match.  If it's partial then the
matched part would be the SCRIPT_NAME, the rest PATH_INFO.  If it's
exact then I'm not sure what.

Any suggestions on this would be great.  Basically, what would be
your dream URI response scheme?

Zed A. Shaw
http://www.zedshaw.com/
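
A minimal sketch of the partial/exact match split being discussed above,
in plain Ruby; the split_uri helper is purely illustrative and is not
part of Mongrel's register API:

  # Illustrative only: a handler mounted at a prefix sees the matched
  # prefix as SCRIPT_NAME and the remainder as PATH_INFO; an exact match
  # leaves PATH_INFO empty.
  def split_uri(mount, request_uri)
    unless request_uri == mount || request_uri.index(mount + '/') == 0
      raise ArgumentError, "#{request_uri} is not mounted under #{mount}"
    end
    [mount, request_uri[mount.length..-1]]   # => [SCRIPT_NAME, PATH_INFO]
  end

  p split_uri('/blog', '/blog/view')   # => ["/blog", "/view"]
  p split_uri('/blog', '/blog')        # => ["/blog", ""]
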
Zed Shaw (Guest)
on 2006-01-21 03:12
(Received via mailing list)
On Jan 20, 2006, at 2:33 PM, Michael Schoen wrote:

> Zed,
>
> Cool stuff. Is the intent that this enables an even more
> straightforward approach for running apps (such as a RoR app) than
> scgi under lighttpd? Ie., rather than scgi under lighttpd, just use
> mod_proxy?
>

Yes, the intention is to produce the optimal deployment scenario for
Ruby web applications.   A secondary purpose is to prove my
unofficial "Zed's STFU Performance Razor":

"All languages are as fast as the fastest language they can access so
STFU."

:-)

I'm imagining that if I can get the various web frameworks to run
well and fast under Mongrel, and sprinkle in a few sexy features,
then it'd be the best way to deploy the applications.  I'm thinking a
primary concern is performance, followed by ease of deployment, then
management, and finally API simplicity.  What's nice with just plain
HTTP is that there's already a mountain of support for production
HTTP hosting and deployment technology.

> Did you read this post, where Mark Mayo wonders why the newer
> frameworks haven't just been using an http interface (and instead
> have struggled with fcgi)?
>
> http://www.vmunix.com/mark/blog/archives/2006/01/0...
> and-apache-background-and-future/
>

Yeah, I read that post literally as I was building the precursor to
SCGI, and then decided to take the plunge and just do it.  The story
goes like this:

1)  Win32 IIS dudes beg me to get SCGI working under IIS.
2)  I like Win32 dudes, hell I love everybody the same (which is very
little, but it's equal at least).
3)  I start working on an HTTP->SCGI proxy.  I use Ragel to make a
clean, fast, and correct parser.
4)  madrobby schools me on my parser with his mountain of browsers
until I get it working better.
5)  I keep asking myself, "Why don't I just take the parser and make
a Ruby web server?"
6)  I keep asking this but it doesn't sink in.  I decide to toy with
the idea.  Someone shows me the above article; I read it and just do it.
7)  What you have is about 3-4 days later.

So, that's the semi-official story.  What I also have though is a
HTTP parser that I can break out and make into a library.  This means
that other languages could possibly pick it up and write their own
similarly fast web servers with minimal effort.  Keep your fingers
crossed.

Zed A. Shaw
http://www.zedshaw.com/
Zed Shaw (Guest)
on 2006-01-21 03:12
(Received via mailing list)
On Jan 20, 2006, at 4:21 PM, Kirk Haines wrote:

> If mongrel proves fast enough, though, using a proxy approach would
> become a very viable alternative for production apps.

I'd love to make it flexible enough to work with any framework, but
I'm starting with Ruby on Rails.  If you've got some sample code
showing how IOWA runs under WEBrick, or suggestions on what you'd need
to get IOWA running, I'd love to see it.

Zed A. Shaw
http://www.zedshaw.com/
Ezra Zygmuntowicz (Guest)
on 2006-01-21 03:30
(Received via mailing list)
Zed-

	I'm still getting a timeout error even with the new version. Am I
doing something wrong here?

root@grunt:~/mongrel-0.1.1# rake
(in /home/ez/mongrel-0.1.1)
make
make: Nothing to be done for `all'.
cp ext/http11/http11.so lib
/usr/local/bin/ruby -Ilib:test "/usr/local/lib/ruby/gems/1.8/gems/
rake-0.6.2/lib/rake/rake_test_loader.rb" "test/test_http11.rb" "test/
test_trie.rb" "test/test_ws.rb"
Loaded suite /usr/local/lib/ruby/gems/1.8/gems/rake-0.6.2/lib/rake/
rake_test_loader
Started
Error result after 6 bytes of 15
.Read 18 string was 18
...Hitting server
E
Finished in 190.974664 seconds.

   1) Error:
test_simple_server(WSTest):
Errno::ETIMEDOUT: Connection timed out - connect(2)
     /usr/local/lib/ruby/1.8/net/http.rb:562:in `initialize'
     /usr/local/lib/ruby/1.8/net/http.rb:562:in `connect'
     /usr/local/lib/ruby/1.8/timeout.rb:48:in `timeout'
     /usr/local/lib/ruby/1.8/timeout.rb:76:in `timeout'
     /usr/local/lib/ruby/1.8/net/http.rb:562:in `connect'
     /usr/local/lib/ruby/1.8/net/http.rb:555:in `do_start'
     /usr/local/lib/ruby/1.8/net/http.rb:544:in `start'
     /usr/local/lib/ruby/1.8/net/http.rb:379:in `get_response'
     /usr/local/lib/ruby/1.8/net/http.rb:356:in `get'
     ./test/test_ws.rb:28:in `test_simple_server'

5 tests, 8 assertions, 0 failures, 1 errors
rake aborted!
Command failed with status (1): [/usr/local/bin/ruby -Ilib:test "/usr/
local...]


root@grunt:~/mongrel-0.1.1# ruby -v
ruby 1.8.4 (2005-12-24) [i686-linux]


Cheers-
-Ezra Zygmuntowicz
WebMaster
Yakima Herald-Republic Newspaper
http://yakimaherald.com
ezra@yakima-herald.com
blog: http://brainspl.at
Zed Shaw (Guest)
on 2006-01-21 04:12
(Received via mailing list)
You didn't happen to install the previous version, did you?
Otherwise, try examples/simpletest.rb and if that works then it
might be the test bombing for some reason.  Others got it working
under Linux, so maybe I'll try to catch you on IRC (#rubyonrails) and
troubleshoot with you.

Zed A. Shaw
http://www.zedshaw.com/
Sascha Ebach (Guest)
on 2006-01-21 14:13
(Received via mailing list)
> One thing: could you do the same test against WEBrick using the
> examples/webrick_compare.rb script?  Just for my info.  You'll need
> to > /dev/null the output to make the test more fair.

Alright, here you go.

first Mongrel:

ab.exe -S -n 10000 http://localhost:3000/test
Concurrency Level:      1
Time taken for tests:   19.437500 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      520000 bytes
HTML transferred:       70000 bytes
Requests per second:    514.47 [#/sec] (mean)
Time per request:       1.944 [ms] (mean)
Time per request:       1.944 [ms] (mean, across all concurrent
requests)
Transfer rate:          26.08 [Kbytes/sec] received


and WEBrick started with

$ ruby -Ilib examples/webrick_compare.rb > /dev/null 2>&1 &

ab.exe -S -n 10000 http://localhost:4000/test
Concurrency Level:      1
Time taken for tests:   118.62500 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      1730000 bytes
HTML transferred:       60000 bytes
Requests per second:    84.70 [#/sec] (mean)
Time per request:       11.806 [ms] (mean)
Time per request:       11.806 [ms] (mean, across all concurrent
requests)
Transfer rate:          14.31 [Kbytes/sec] received

These results are very promising. I bet if you can manage to make a
native win32 version the results will double or triple. A 5 times boost
will even make Cygwin a viable development environment. I suspect that
the win32 version will even be fast enough for production.
-Sascha


PS: With output on, the Cygwin/WEBrick combo will drop below 10 req/s on
my box.
Zed Shaw (Guest)
on 2006-01-21 17:55
(Received via mailing list)
On Jan 21, 2006, at 7:37 AM, Sascha Ebach wrote:

> and WEBrick started with
> ...
> Requests per second:    84.70 [#/sec] (mean)
>

That's great.  The performance is consistently better.

> These results are very promising. I bet if you can manage to make a
> native win32 version the results will double or triple. A 5 times
> boost will even make Cygwin a viable development environment. I
> suspect that the win32 version will even be fast enough for
> production.

Yes, I'm gonna have to really investigate a native win32 one.  If
it's this fast, and the features are there, it'll be the easiest way
possible.  Only thing in win32 that would be missing is basic server
stuff like daemonize and so on.

Thanks for the testing.

Zed A. Shaw
http://www.zedshaw.com/
Christian Neukirchen (Guest)
on 2006-01-21 19:15
(Received via mailing list)
Zed Shaw <zedshaw@zedshaw.com> writes:

> Yes, I'm gonna have to really investigate a native win32 one.  If
> it's this fast, and the features are there, it'll be the easiest way
> possible.  Only thing in win32 that would be missing is basic server
> stuff like daemonize and so on.

I think there are tools to run any program as a service, so that
shouldn't be an issue.
Sascha Ebach (Guest)
on 2006-01-21 21:09
(Received via mailing list)
Christian Neukirchen wrote:
> Zed Shaw <zedshaw@zedshaw.com> writes:
>
>> Yes, I'm gonna have to really investigate a native win32 one.  If
>> it's this fast, and the features are there, it'll be the easiest way
>> possible.  Only thing in win32 that would be missing is basic server
>> stuff like daemonize and so on.
>
> I think there are tools to run any program as service, so that
> shouldn't be an issue.

Namely the win32-service package by Daniel Berger

http://rubyforge.org/projects/win32utils/

-Sascha
PA (Guest)
on 2006-01-21 22:08
(Received via mailing list)
On Jan 20, 2006, at 13:31, Zed Shaw wrote:

> Mongrel is a web server I wrote this week that performs *much* better
> than WEBrick (1350 vs 175 req/sec) and only has one small C extension.

Being a sucker for meaningless benchmarks I had to run this as well :))

[Mongrel]
% ruby -v
ruby 1.8.4 (2005-12-24) [powerpc-darwin7.9.0]
% ruby simpletest.rb
% ab -n 10000 http://localhost:3000/test
Requests per second:    660.20 [#/sec] (mean)

(I get a terse "ERROR: Object" from time to time)

[Webrick]
% ruby -v
ruby 1.8.4 (2005-12-24) [powerpc-darwin7.9.0]
% ruby webrick_compare.rb >& /dev/null
% ab -n 10000 http://localhost:4000/test
Requests per second:    37.90 [#/sec] (mean)

Here is something in Python:

[Cherrypy][1]
% python -V
Python 2.4.2
% python tut01_helloworld.py
% ab -n 10000 http://localhost:8080/
Requests per second:    164.92 [#/sec] (mean)

And a bit of Lua [2] to round it up:

[LuaWeb][3]
% lua -v
Lua 5.1  Copyright (C) 1994-2006 Lua.org, PUC-Rio
% lua Test.lua
% ab -n 10000 http://localhost:1080/hello
Requests per second:    948.32 [#/sec] (mean)

Cheers

--
PA, Onnay Equitursay
http://alt.textdrive.com/


[1] http://www.cherrypy.org/
[2] http://www.lua.org/about.html
[3] http://dev.alt.textdrive.com/browser/LW/
Jim Freeze (Guest)
on 2006-01-22 16:30
(Received via mailing list)
On Jan 21, 2006, at 3:07 PM, PA wrote:

> Requests per second:    164.92 [#/sec] (mean)
> [LuaWeb][3]
> Requests per second:    948.32 [#/sec] (mean)

This is great.
Can you add Apache to your list of benchmarks?



Jim Freeze
Jeff Pritchard (jeffpritchard)
on 2006-01-22 18:42
Noob question here.

No intent to impugn Zed's mad skilz or the need for something like
Mongrel.  I'm just confused by why it would be common to develop or
deploy a ruby on rails app with something other than production servers
like Apache.

So far, all of the rails demos I have seen are using WEBrick.  This has
been true even for setups like Mac OS X that come with Apache already
set up and running.

Does apache not come standard with everything needed to serve a rails
app?  If not, is there an add-on module for apache that makes it
rails-savvy?

Or is it the case that all rails apps have to be served by a special
rails server like mongrel or webrick?

thanks,
jp
Jonathan Leighton (Guest)
on 2006-01-22 19:08
(Received via mailing list)
On Mon, 2006-01-23 at 02:42 +0900, Jeff Pritchard wrote:
>
> Does apache not come standard with everything needed to serve a rails
> app?  If not, is there an add-on module for apache that makes it
> rails-savvy?
>
> Or is it the case that all rails apps have to be served by a special
> rails server like mongrel or webrick?

Rails can be run through flat CGI with Apache, but that's really slow
because *every single time* you make a request, the code has to be
reloaded into memory from disk. The step up from this is something like
FastCGI or SCGI, which will keep the code in memory between requests.
The performance is then WAY better, but in development changes you make
to the code won't work until you reload the server. Obviously that's no
good, so WEBrick is a lightweight server intended for development use,
which will just reload the parts of the code you change between
requests. It's not recommended for deployment though because it isn't
fast enough.

This is a very good article to read:
http://duncandavidson.com/essay/2005/12/railsdeployment

Hope that helps

Jon
PA (Guest)
on 2006-01-22 19:14
(Received via mailing list)
On Jan 22, 2006, at 16:28, Jim Freeze wrote:

> Can you add Apache to your list of benchmarks?

[httpd]
% httpd -v
Server version: Apache/1.3.33 (Darwin)
% ab -n 10000 http://localhost/test.txt
Requests per second:    1218.47 [#/sec] (mean)

[lighttpd]
% lighttpd -v
lighttpd-1.4.9 - a light and fast webserver
% ab -n 10000 http://localhost:8888/test.txt
Requests per second:    3652.30 [#/sec] (mean)

Cheers
Zed Shaw (Guest)
on 2006-01-22 19:48
(Received via mailing list)
On Jan 22, 2006, at 12:42 PM, Jeff Pritchard wrote:

> Noob question here.
>
I like noobs.  Especially with BBQ sauce.  :-)

> No intent to impugn Zed's mad skilz or the need for something like
> Mongrel.  I'm just confused by why it would be common to develop or
> deploy a ruby on rails app with something other than production
> servers
> like Apache.
>
Good question.  It really comes down to nothing more than the fastest
simplest way to serve up a Rails (or Nitro, Camping, IOWA, etc.)
application.  You've currently got various options:

* CGI -- slow, resource hogging, but works everywhere.
* FastCGI -- Fast, current best practice, a pain in the ass to
install and real painful for win32 people.
* SCGI -- Fast, pure ruby (runs everywhere Ruby does), works with a
few servers, very simple to install, use, and cluster, good
monitoring (warning, I wrote this).
* mod_ruby -- Works but haven't heard of a lot of success with it,
couples your app to your web server making upgrades difficult.
* WEBrick -- Runs in pure ruby, easy to deploy, you can put it behind
any web server supporting something like mod_proxy.  Fairly slow.

Now, the sweet spot would be something that was kind of at the
optimal axis of FastCGI, SCGI, and WEBrick:

* Runs everywhere Ruby does and is easy to install and use.
* Fast as hell with very little overhead above the web app framework.
* Uses plain HTTP so that it can sit behind anything that can proxy
HTTP.  That's apache, lighttpd, IIS, squid, a huge amount of
deployment options open up.

This would be where I'm trying to place Mongrel.  It's not intended
as a replacement for a full web server like Apache, but rather just
enough web server to run the app frameworks efficiently as backend
processes.  Based on my work with SCGI (which will inherit some stuff
from Mongrel soon), it will hopefully meet a niche that's not being
met right now with the current options.

> So far, all of the rails demos I have seen are using webrick.  This
> has
> been true even for setups like macosx that come with apache already
> set
> up and running.
>
> Does apache not come standard with everything needed to serve a rails
> app?  If not, is there an add-on module for apache that makes it
> rails-savvy?
>
Apache or lighttpd are the big ones on Unix systems.  When you get
over to the win32 camp though, lighttpd just doesn't work, and many
people insist on using IIS.  In my own experience, if you can't hook
it into a portal or Apache without installing any software then
you're dead.  Sure this is probably an attempt to stop a disruptive
technology, but if there's a solid fast way to deploy using HTTP then
that's one more chink in the armor sealed up.

Go talk to someone who's forced to use IIS and you'll see why something
other than WEBrick is really needed.  Actually, WEBrick would be fine
if it weren't so damn slow.

> Or is it the case that all rails apps have to be served by a special
> rails server like mongrel or webrick?
>
Well, they have to be served by something running Ruby.  I know
there's people who have tried with mod_ruby, but I haven't heard of a
lot of success.  I could be wrong on that.  Also, many people don't
like tightly coupling their applications into their web server.
Amr Malik (Guest)
on 2006-01-22 22:06
Zed Shaw wrote:
> On Jan 22, 2006, at 12:42 PM, Jeff Pritchard wrote:
>
snip..
> Go talk to someone who's forced to IIS and you'll see why something
> other than WEBrick is really needed.  Actually, WEBrick would be fine
> if it weren't so damn slow.
>
snip..

Thanks for your work on this. Can you elaborate on what makes Mongrel so
much faster than WEBrick? What kind of optimization techniques did you
use to make it faster? Are you using C extensions etc. in part to speed
things up? (I guess I'm looking for a bit of an architectural overview,
with a WEBrick arch comparison to boot, if you used that as inspiration.)

Just curious! :)

-Amr
Sascha Ebach (Guest)
on 2006-01-22 23:13
(Received via mailing list)
> Thanks for your work on this. Can you elaborate on what makes Mongrel so
> much faster than WEBrick? What kind of optimization techniques did you
> use to make it faster? Are you using C extensions etc. in part to speed
> things up? (I guess I'm looking for a bit of an architectural overview,
> with a WEBrick arch comparison to boot, if you used that as inspiration.)
>
> Just curious! :)

Yeah, me too. What I wonder about specifically is why not just rewrite
the performance-critical parts of webrick in C. That way you would
already have the massive amount of features webrick offers without
having to duplicate all of this. I wonder if you have been thinking
about that and the reason you might have decided against doing it this
way.

Just curious, too! :)

-Sascha
Zed Shaw (Guest)
on 2006-01-23 02:51
(Received via mailing list)
On Jan 22, 2006, at 4:07 PM, Amr Malik wrote:

>
> Thanks for your work on this. Can you elaborate on what makes
> Mongrel so
> much faster than Webrick? What kind of optimization techniques did you
> use to make this faster. Are you using C extensions etc in part to
> speed
> things up. (I guess I'm looking for a bit of a architectural overview
> with a webrick arch. comparison to boot if you used that as
> inspiration)
>

You're going to laugh but right now it's down to a bit of Ruby and a
nifty C extension.  Seriously.  No need yet for much more than some
threads that crank on output, a parser (in C) that makes a hash, and
a way to quickly look up URI mappings.  The rest is done with handlers
that process the result of this.  It may get a bit larger than this,
but this core will probably be more than enough to at least service
basic requests.  I'm currently testing out a way to drop the threads
in favor of IO.select, but it looks like that messes with threads in
some weird ways.

Once I figure out all the nooks and crannies of the thing then I'll
do a more formal design, but even then it's going to be ruthlessly
simplistic.

Zed A. Shaw
http://www.zedshaw.com/
Zed Shaw (Guest)
on 2006-01-23 03:09
(Received via mailing list)
On Jan 22, 2006, at 5:10 PM, Sascha Ebach wrote:

> you would already have the massive amount of features webrick
> offers without having to duplicate all of this. I wonder if you
> have been thinking about that and the reason you might have decided
> against doing it this way.
>

Well, this may be mean, but have you ever considered that "the
massive amount of features webrick offers" is part of the problem?
It's difficult to go into a large (or even medium) code base and
profile it and then add bolt-on performance improvements.  It can be
done, but it usually ends up as a wart on the system.

So, rather than try to "fix" WEBrick I'm just considering it a
different solution to a different set of problems.  Mongrel may pick
up all the features WEBrick has, but right now it's targeted at just
serving Ruby web apps as fast as possible.

Zed A. Shaw
http://www.zedshaw.com/
Sascha Ebach (Guest)
on 2006-01-23 13:18
(Received via mailing list)
> Well, this may be mean, but have you ever considered that "the massive
> amount of features webrick offers" is part of the problem?  It's
> difficult to go into a large (or even medium) code base and profile it
> and then add bolt-on performance improvements.  It can be done, but it
> usually ends up as a wart on the system.
>
> So, rather than try to "fix" WEBrick I'm just considering it a different
> solution to a different set of problems.  Mongrel may pick up all the
> features WEBrick has, but right now it's targeted at just serving Ruby
> web apps as fast as possible.

I suspected something along those lines :) I would probably do the same
because starting from the beginning is always more fun than trying to
understand a large code base. Although I personally think that the
latter doesn't have to be slower. How long could it take to find a
dozen slow spots in webrick? Maybe 2-3 days? Another 2-3 days to tune
them?

Anyway, I was just curious, and I am looking forward to following along
and learning from the C code. I personally never had the need for
anything to be faster than Ruby *except* the http stuff. But since I
have never actually written more than a couple of lines of C I shied
away from starting such a thing.

Another tip: Maybe you want to look at Will Glozer's Cerise.

http://rubyforge.org/projects/cerise/

It has a minimal bare-bones http server entirely written in Ruby. Maybe
it is of help. Just a thought.

-Sascha
Toby DiPasquale (Guest)
on 2006-01-24 03:27
Zed Shaw wrote:
> You're going to laugh but right now it's down to a bit of Ruby and a
> nifty C extension.  Seriously.  No need yet of much more than some
> threads that crank on output, a parser (in C) that makes a hash, and
> a way to quickly lookup URI mappings.  The rest is done with handlers
> that process the result of this.

That's a PATRICIA trie for URL lookup, a finite state machine compiled
Ragel->C->binary for HTTP protocol parsing and an implicit use of
select(2) (via Thread), for the even-more-curious out there ;) (first
hit on Google for "Ragel" will tell you what you need to know about
that)

> It may get a bit larger than this,
> but this core will probably be more than enough to at least service
> basic requests.  I'm currently testing out a way to drop the threads
> in favor of IO.select, but it looks like that messes with threads in
> some weird ways.

Ok, so here's where I fell off your train. On your Ruby/Event page, you
said that you killed the project b/c Ruby's Thread class multiplexes via
the use of select(2), which undermines libevent's ability to effectively
manage events (which I had discovered while writing some extensions a
while back and thought "how unfortunate"). But I have some questions
about the above:

1. As above, the Thread class uses select(2) (or poll(2)) internally;
what would be the difference in using IO::select explicitly besides more
code to write to manage it all?

2. What are these "weird ways" you keep referring to? I got the
select-hogging-the-event-party thing, but what else?

I am interested b/c I am currently trying to write a microthreading
library for Ruby based on some of the more performant event
multiplexing techniques (kqueue, port_create, epoll, etc.) so I can use
it for other stuff I want to write (^_^)

> Once I figure out all the nooks and crannies of the thing then I'll
> do a more formal design, but even then it's going to be ruthlessly
> simplistic.

Simple is good, m'kay? ;-) Great show in any case! I know I'll be using
this for my next internal Rails app.
Zed Shaw (Guest)
on 2006-01-24 04:52
(Received via mailing list)
On Jan 23, 2006, at 9:27 PM, Toby DiPasquale wrote:

> hit on Google for "Ragel" will tell you what you need to know about
> that)
>

Ooohh, *that's* what people want to know.  You're right.  Here's the
main gear involved in the process:

1)  Basic Ruby TCPServer is used to create the server socket.  No
magic here.  A thread then just runs in a loop accepting connections.
2)  When a client is accepted it's passed to a "client processor".
This processor is a single function that runs in a loop doing a
readpartial on the socket to get a chunk of data.
3)  That chunk's passed to a HTTP parser which makes a Ruby Hash with
the CGI vars in it.  The parser is written with Ragel 5.2 (which has
problems compiling on some systems).  This parser is the first key to
Mongrel's speed.
4)  With a completed HTTP parse, and the body of the request waiting
to be processed, Mongrel tries to find the handler for the URI.  It
does this with a modified trie that returns the handler as well as
breaking the prefix and postfix of the URI into SCRIPT_NAME and
PATH_INFO components.
5)  Once I've got the handler, the request hash variables, and a
request object I just call the "process" method and it does its work.
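
To make the flow above concrete, here is a stripped-down, self-contained
toy in plain Ruby; the naive regexp "parser" and hash-based prefix lookup
are illustrative stand-ins only, not Mongrel's Ragel parser or its trie:

  require 'socket'

  # Toy server: accept loop, chunk read, crude parse, prefix lookup,
  # handler dispatch.
  HANDLERS = {
    '/test' => lambda do |params, sock|
      sock.write("HTTP/1.1 200 OK\r\nContent-Length: 6\r\n\r\nhello!")
    end
  }

  # Longest registered prefix wins; the prefix becomes SCRIPT_NAME,
  # the remainder PATH_INFO.
  def resolve(uri)
    candidates = HANDLERS.keys.select { |p| uri == p || uri.index(p + '/') == 0 }
    prefix = candidates.sort_by { |p| p.length }.last
    prefix && [prefix, uri[prefix.length..-1], HANDLERS[prefix]]
  end

  server = TCPServer.new('0.0.0.0', 3000)
  loop do
    Thread.new(server.accept) do |sock|
      begin
        data = sock.readpartial(2048)              # one 2k chunk, as described above
        uri  = data[/\A\w+ (\S+)/, 1].to_s         # stand-in for the C parser's CGI hash
        script_name, path_info, handler = resolve(uri)
        if handler
          handler.call({'SCRIPT_NAME' => script_name, 'PATH_INFO' => path_info}, sock)
        else
          sock.write("HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n")
        end
      rescue EOFError
        # client went away before sending a full chunk
      ensure
        sock.close
      end
    end
  end
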

Unhandled issues are:

* The trie was written in ruby and isn't all that fast.  A trie might
also be overkill for what will typically be a few URIs.  I was
thinking though that the trie would be great for storing cached
results and looking them up really fast.
* The thread handling has limitations that make it not quite as
efficient as I'd like.  For example, I read 2k chunks off the wire
and parse them.  If the request doesn't fit in the 2k then I have to
reset the parser, keep the data, and parse it again.  I'd really much
rather use a nice ring buffer for this.
* The threads create a ton of objects which can make the GC cause
large pauses.  I've tried a group of threads waiting on a queue of
requests, but that's not much faster or better.  So far the fastest
is using IO.select (see below).


> the use of select(2), which undermines libevent's ability to
> effectively
> manage events (which I had discovered while writing some extensions a
> while back and thought "how unfortunate"). But I have some questions
> about the above:
>

Yes, that's still true since Ruby and libevent don't know about each
other.  They fight like twenty rabid cats in a pillow case.  The main
difference is that IO.select knows about Ruby's threads, so it's
supposed to be safe to use.

> 1. As above, the Thread class uses select(2) (or poll(2)) internally;
> what would be the difference in using IO::select explicitly besides
> more
> code to write to manage it all?
>
It does use select transparently, but it seems to add a bunch of
overhead to the select processing it uses.  I'm sorting out the
IO.select and thread relationship.
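
For illustration, this is roughly what a single-threaded IO.select loop
looks like in plain Ruby (fixed response, no keep-alive); it is a generic
sketch, not Mongrel's implementation:

  require 'socket'

  server  = TCPServer.new('0.0.0.0', 3000)
  clients = []

  loop do
    # Block until the listening socket or any accepted client is readable.
    readable, = IO.select([server] + clients)
    readable.each do |io|
      if io == server
        clients << server.accept          # new connection joins the select set
      else
        begin
          io.readpartial(2048)            # a real server would feed this to the parser
          io.write("HTTP/1.1 200 OK\r\nContent-Length: 6\r\n\r\nhello!")
        rescue EOFError
          # client closed its end
        ensure
          io.close
          clients.delete(io)
        end
      end
    end
  end
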

> 2. What are these "weird ways" you keep referring to? I got the
> select-hogging-the-event-party thing, but what else?
>
Basically select hogs the party, threads just kind of stop for no
reason, select just stops, etc.  I really wish they'd just use pth
so I could get on with my life. :-)   I've been playing with it, and
I think I have something that might work.

> I am interested b/c I am currently trying to write a microthreading
> library for Ruby based on some of the more performing event
> multiplexing
> techniques (kqueue, port_create, epoll, etc) so I can use it for other
> stuff I want to write (^_^)
>
You know, having tried this, I have to say you'll be fighting a
losing battle.  Ruby's thread implementation just isn't able to work
with external multiplexing methods.  I couldn't figure it out, so if
you do then let me know.

>> Once I figure out all the nooks and crannies of the thing then I'll
>> do a more formal design, but even then it's going to be ruthlessly
>> simplistic.
>
> Simple is good, m'kay? ;-) Great show in any case! I know I'll be
> using
> this for my next internal Rails app.
>

Thanks!

Zed A. Shaw
http://www.zedshaw.com/
32edd0717b3144d5c58a352d613abdc9?d=identicon&s=25 gabriele renzi (Guest)
on 2006-01-24 17:22
(Received via mailing list)
PA ha scritto:
>
> On Jan 20, 2006, at 13:31, Zed Shaw wrote:
>
>> Mongrel is a web server I wrote this week that performs *much* better
>> than WEBrick (1350 vs 175 req/sec) and only has one small C extension.
>
>
> Being a sucker for meaningless benchmarks I had to run this as well :))

<snip>
Hey, that was cool. Any chance you could see how they would run with -c 10?
(and I wonder how fast twisted.web would be :)
Toby DiPasquale (Guest)
on 2006-01-24 18:53
Zed Shaw wrote:
> * The threads create a ton of objects which can make the GC cause
> large pauses.  I've tried a group of threads waiting on a queue of
> requests, but that's not much faster or better.  So far the fastest
> is using IO.select (see below).

Have you checked to see if your C extension is "leaking" memory by
virtue of Ruby not correctly handling it? This happened to me recently
with a similarly purposed C extension, so much so that I had to do it in
pure C and simply fork/pipe in Ruby to use it. The problem was that my
extension was using ALLOC() and friends for allocation, but Ruby didn't
understand that it could release that memory, even after the process's
memory usage was 3GB+. I moved on, but I will eventually get back there
to find out why that was happening...

> Yes, that's still true since Ruby and libevent don't know about the
> other.  They fight like twenty rabid cats in a pillow case.  The main
> difference is that IO.select knows about Ruby's threads, so it's
> supposed to be safe to use.

As far as I understand, at the base of it, IO::select's C handler,
rb_f_select() calls rb_thread_select() to do the actual select'ing. It
appears that there are more functions on top of the rb_thread_select()
when coming at it from the co-op thread scheduling callchain, however.
This would be in line with what you were saying.

>> 2. What are these "weird ways" you keep referring to? I got the
>> select-hogging-the-event-party thing, but what else?
>>
> Basically select hogs the party, threads just kind of stop for no
> reason, select just stops, etc.  I really which they'd just use pth
> so I could get on with my life. :-)   I've been playing with it, and
> I think I have something that might work.

Does Ruby spawn Thread objects even when not requested by the
programmer? I seem to remember that GC was in a Thread? Is that right?

If not, can you just not spawn any and avoid these issues altogether
(perhaps alias Thread's new method to raise an exception to make sure it
doesn't happen?)

> You know, having tried this, I have to say you'll be fighting a
> losing battle.  Ruby's thread implementation just isn't able to work
> with external multiplexing methods.  I couldn't figure it out, so if
> you do then let me know.

I'm not at all put off by simply replacing select(2) in the Ruby core
with something else, just so I can get what I need,
[porta|releasa]bility be damned. I know this is not the best solution,
but it might be the fastest. I would really like something I could
gem-ify, though, if at all possible. I thought about trying to work this
into YARV and just use that, but that's nigh-on-unusable at the moment
for other reasons.
PA (Guest)
on 2006-01-24 20:40
(Received via mailing list)
On Jan 24, 2006, at 16:43, gabriele renzi wrote:

> Hey, that was cool. Any chance yo see how would they run with -c 10?

[Mongrel]
% ruby -v
ruby 1.8.4 (2005-12-24) [powerpc-darwin7.9.0]
% ruby simpletest.rb
% ab -n 10000 -c 10 http://localhost:3000/test
Requests per second:    386.31 [#/sec] (mean)

[Webrick]
% ruby -v
ruby 1.8.4 (2005-12-24) [powerpc-darwin7.9.0]
% ruby webrick_compare.rb >& /dev/null
% ab -n 10000 -c 10 http://localhost:3000/test
Requests per second:    27.58 [#/sec] (mean)

[Cherrypy]
% python -V
Python 2.4.2
% python tut01_helloworld.py
% ab -n 10000 -c 10 http://localhost:8080/
Requests per second:    164.77 [#/sec] (mean)

[LuaWeb]
% lua -v
Lua 5.1  Copyright (C) 1994-2006 Lua.org, PUC-Rio
% lua Test.lua
% ab -n 10000 -c 10 http://localhost:1080/hello
Requests per second:    927.04 [#/sec] (mean)

[httpd]
% httpd -v
Server version: Apache/1.3.33 (Darwin)
% ab -n 10000 -c 10 http://localhost/test.txt
Requests per second:    1186.10 [#/sec] (mean)

[lighttpd]
% lighttpd -v
lighttpd-1.4.9 - a light and fast webserver
% ab -n 10000 -c 10 http://localhost:8888/test.txt
Called sick today (fdevent.c.170: aborted)


Cheers
PA (Guest)
on 2006-01-24 20:43
(Received via mailing list)
On Jan 24, 2006, at 04:51, Zed Shaw wrote:

> Ooohh, *that's* what people want to know.  You're right.  Here's the
> main gear involved in the process:

Have you tried something like LibHTTPD perhaps?

http://www.hughes.com.au/products/libhttpd/

Cheers
Zed Shaw (Guest)
on 2006-01-24 21:38
(Received via mailing list)
Yep, libhttpd is pretty cool.  I've used it before.  It's also GPL so
it might not work for most folks.  It also uses a select loop I
believe so it would fight with Ruby's threads the same way as other
external select methods.

Zed A. Shaw
http://www.zedshaw.com/
32edd0717b3144d5c58a352d613abdc9?d=identicon&s=25 gabriele renzi (Guest)
on 2006-01-24 21:59
(Received via mailing list)
PA ha scritto:
>
> On Jan 24, 2006, at 16:43, gabriele renzi wrote:
>
>> Hey, that was cool. Any chance yo see how would they run with -c 10?
>
<snip again>

great, thanks for realizing my wish :)
kellan (Guest)
on 2006-01-25 00:09
(Received via mailing list)
Zed Shaw wrote:
> On Jan 23, 2006, at 9:27 PM, Toby DiPasquale wrote:

> > I am interested b/c I am currently trying to write a microthreading
> > library for Ruby based on some of the more performing event
> > multiplexing...
>
> You know, having tried this, I have to say you'll be fighting a
> losing battle.  Ruby's thread implementation just isn't able to work
> with external multiplexing methods.  I couldn't figure it out, so if
> you do then let me know.
>

I've been meaning to ask about this as well ever since I saw you killed
Ruby/Event.   In your experience is it only a Bad Idea(tm) to use
poll/libevent in your Ruby app if you'll also be using Threads, or is
it always a bad idea, even if you can guarantee that "require 'thread'"
is never issued?

Also, did you ever get a chance to write a post mortem discussing your
findings and the problems you ran into?

This seems like a fairly serious problem with Ruby that should
be addressed.  Event driven programming really enables the whole
"pieces loosely joined" paradigm.  I mean it's kind of embarrassing that
the best way to do async programming with Ruby is to use Rails'
javascript libraries. (okay, I'm enough of a web geek to think that
that is actually kind of cool, but we'll ignore that)

Thanks,
kellan
Booker C. Bense (Guest)
on 2006-01-25 01:15
(Received via mailing list)

In article <43D4C8E8.7000606@digitale-wertschoepfung.de>,
Sascha Ebach  <se@digitale-wertschoepfung.de> wrote:
>
>I suspected something along those lines :) I would probably do the same
>because starting from the beginning is always more fun than trying to
>understand a large code base. Although I personally think that the latter
>doesn't have to be slower. How long could it take to find a dozen slow
>spots in webrick? Maybe 2-3 days? Another 2-3 days to tune them?

If it were that easy somebody would have already done it. Profiling
and optimizing languages like Ruby can be quite difficult; if you do it
at the C level you often get results that are very difficult to
interpret or even do anything useful with, i.e. the profiler shows you
spending 80% of your time in some basic underlying routine of Ruby. If
you do it at a higher level, the overhead of benchmarking can often
skew the results badly.

So it's hard to get the data, and optimizing w/o real profiling data
is one of the great evils of programming. With simpler apps you can
often make a good guess, but in my experience guessing where the time
is spent in a more complex application is almost always wrong.

_ Booker C. Bense


Christian Neukirchen (Guest)
on 2006-01-25 14:19
(Received via mailing list)
PA <petite.abeille@gmail.com> writes:

> On Jan 24, 2006, at 16:43, gabriele renzi wrote:
>
>> Hey, that was cool. Any chance yo see how would they run with -c 10?
>
> [lighttpd]
> % lighttpd -v
> lighttpd-1.4.9 - a light and fast webserver
> % ab -n 10000 -c 10 http://localhost:8888/test.txt
> Called sick today (fdevent.c.170: aborted)

Try adding this to lighttpd.conf:

  server.event-handler = "freebsd-kqueue"

The default handler "poll" is extremely unstable on OS X 10.3.