Speed curiosity

As a note: using the Mongrel example from
http://mongrel.rubyforge.org/web/mongrel/files/README.html and Mongrel
1.1.5, it yielded (for me) ~800 req/s
[running ab -n 1000 -c 1 http://localhost:3000/test]

and if I changed
out.write("hello!\n")

to
out.write("hello!\n"*10_000)

it yielded ~300 req/s.
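
For reference, the handler from that README is roughly the following
(reproduced from memory, so details may differ slightly):

  require 'mongrel'

  class SimpleHandler < Mongrel::HttpHandler
    def process(request, response)
      response.start(200) do |head, out|
        head["Content-Type"] = "text/plain"
        out.write("hello!\n")          # <- the line changed above
      end
    end
  end

  h = Mongrel::HttpServer.new("0.0.0.0", "3000")
  h.register("/test", SimpleHandler.new)
  h.run.join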

I was unable to get evented mongrel to run, so I wasn’t able to
compare the two.

Doing a little bit of investigating, kcachegrind + ruby-prof points the
latency to http_response.rb line 137:

@socket.write(data)

Experimenting by changing this line haphazardly to

  while data and data.length > 0
    wrote = @socket.write_nonblock(data)
    data = data[wrote..-1]
  end

yielded ~938 req/s [AFAICT] (note: write_nonblock can raise
Errno::EAGAIN, which this quick hack doesn’t handle).

Thoughts?

-=R

On 30 Aug 2008, at 21:57, Roger P. wrote:

> to
> out.write("hello!\n"*10_000)

AFAIK that’s not the fastest of operations.

> latency to http_response.rb line 137
> yielded ~938 req/s [AFAICT]

Thin and ebb both write more like this.

   out.write("hello!\n"*10_000)

AFAIK that’s not the fastest of operations.

String creation itself turns out to not take too long:

Benchmark.measure { "hello!\n"*10_000 }
=> #<Benchmark… @real=0.000353097915649414, @utime=0.0, @cstime=0.0>

So it’s not a huge bottleneck. Replacing it with a static string (which
I tried thanks to your suggestion) yields approx. the same results.
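
Concretely, the “static string” change is roughly this (a sketch only;
STATIC_PAYLOAD is just an illustrative name, not the exact code I ran):

  # build the payload once at load time instead of on every request
  STATIC_PAYLOAD = ("hello!\n" * 10_000).freeze

  # ...and inside the handler's process method:
  out.write(STATIC_PAYLOAD)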

> latency to http_response.rb line 137
> yielded ~938 req/s [AFAICT]
> Thin and ebb both write more like this.

Here are some results [ruby 1.8.6p287, OS X], running ab -n 300:

7B response: old: 1595 req/s new: 1690 req/s (thin: 1901)
7K response: old: 1168 req/s new: 1559 req/s (thin: 1849)
70K response: old: 366 req/s new: 1140 req/s (thin: 1160)
700K response: old: 46 req/s new: 286 [or 48] req/s (thin: 295)*

So overall better results, most noticeable at the 70K level. It seems
roughly on par with Thin.
IO#write is [I think] Ruby-thread friendly, so I’m not sure where the
difference comes from.

Thanks!
-=R

  * or 48: with some Mongrel tests there would be a single long request,
    out of 300, that took 4s while the others all took ~20ms. Not sure
    why. Excluding that, it ran at 286 req/s.

patch:
Index: lib/mongrel/http_response.rb
===================================================================
--- lib/mongrel/http_response.rb   (revision 1036)
+++ lib/mongrel/http_response.rb   (working copy)
@@ -137,7 +137,15 @@
     end
 
     def write(data)
-      @socket.write(data)
+      while data and data.length > 0
+        begin
+          amount_wrote = @socket.write_nonblock(data)
+          data = data[amount_wrote..-1]
+        rescue Errno::EAGAIN
+          # wait for it to become writable again
+          select nil, [@socket], nil, nil
+        end
+      end
     rescue => details
       socket_error(details)
     end

Wayne Seguin wrote:

> Using write_nonblock is an absolutely great suggestion; the only real
> issue with write_nonblock is that it doesn’t work in all environments.
> While Ruby is supposed to fall back on blocking IO when async is
> unavailable in the underlying system, the reality is sketchy at best.

Yeah I guess the best thing’d be to fix IO#write in the core, instead of
a hack to work around it :) I have no idea what #write does but it
appears that it is suboptimal, at least for this one distro on this one
machine.

Maybe I should just file a ruby bug report that says “IO#write seems
slow!” :)

Unfortunately it seems that on 1.9 it has the same speed pattern, so no
help there.

Thanks for your help.

-=R

> approx. the same results, which I did thanks to your suggestion.
> 70K response: old: 366 req/s new: 1140 req/s (thin: 1160)
> -=R

Roger,

Using write_nonblock is an absolutely great suggestion; the only real
issue with write_nonblock is that it doesn’t work in all environments.
While Ruby is supposed to fall back on blocking IO when async is
unavailable in the underlying system, the reality is sketchy at best.
Throw in alternative Ruby implementations and you end up with code
that seemingly arbitrarily breaks. I’m all for write_nonblock; however,
if this route is pursued then there has to be a whole chain of
capability and environment detection around this chunk, not a simple
rescue (yes, I have done this before).
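
For illustration only, a guarded write along these lines
(write_with_fallback is a hypothetical helper; a real version would need
far more thorough capability and environment detection than shown here):

  def write_with_fallback(socket, data)
    # fall back to a plain blocking write where non-blocking IO isn't available
    return socket.write(data) unless socket.respond_to?(:write_nonblock)

    while data and data.length > 0
      begin
        wrote = socket.write_nonblock(data)
        data = data[wrote..-1]
      rescue Errno::EAGAIN, Errno::EWOULDBLOCK
        # socket buffer full: wait until it is writable again, then retry
        IO.select(nil, [socket])
      end
    end
  end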

So, +1 from me as long as we modify it as I suggested.

~Wayne

On Sep 1, 2008, at 13:08, Roger P. wrote:

> approx. the same results, which I did thanks to your suggestion.
> 70K response: old: 366 req/s new: 1140 req/s (thin: 1160)
> -=R

> Yeah I guess the best thing’d be to fix IO#write in the core, instead
> of a hack to work around it :) I have no idea what #write does but
> it appears that it is suboptimal, at least for this one distro on
> this one machine.

I do not think so; IO#write writes “exactly” n bytes even if the
operating system’s socket buffer infrastructure (and Ruby’s) cannot
handle “exactly” n bytes with optimal efficiency. It may force the
sending of fragmented packets, do costly buffer expansion, or perform
other time-consuming operations.

IO#write_nonblock, on the other hand, lets the OS (and Ruby?) write the
“optimal” amount of data for the current buffer and socket environment.

Maybe Kernel#select (Ruby’s wrapper around the select system call) can
be used to write to the socket “only if it would not block and/or do
other costly operations” - see above.
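
Something like this sketch, for example (sock and chunk are placeholder
names, not code from this thread):

  # only attempt the write once select reports the socket writable;
  # write_nonblock then takes however much the OS buffer will accept
  if IO.select(nil, [sock], nil, 5)        # wait up to 5 s for writability
    wrote = sock.write_nonblock(chunk)     # may write less than chunk.length
    chunk = chunk[wrote..-1]
  end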

Ruby 1.8 (maybe 1.9 too?) internally uses the select system call to
manage threading.

Years ago I tried to improve throughput using select; unfortunately,
Ruby 1.8 did not merge “my select call” into its internal select calls.

strace showed various “internal selects” and “my IO selects” in
unpredictable order, interrupting each other (Ruby’s thread scheduler
timer?)…

It was not the perfect solution ;)
Maybe the situation has changed?

Regards,

Markus