Forum: Ruby dRuby file transfer performance issue

Announcement (2017-05-07): This forum is now read-only, since I unfortunately no longer have the time to support and maintain it. Please see other Rails- and Ruby-related community platforms.
Eivind (Guest)
on 2007-02-15 16:25
(Received via mailing list)

I'm a Ruby newbie from Norway (say that many times fast :)

Currently I'm trying to send files from one application to another
using distributed Ruby (dRuby).

The files are sent, but it takes "forever".
I tried to send a Word-document (about 600 kB), and it took more than
two minutes when both applications ran locally on the same machine.

Do I have to do something special if I'm working with files other than
ordinary text?

This is the code I'm using:

     def fetch(fname), 'r') do |fp|
         while buf =
           yield(buf)
         end
       end
       return nil
     end

     def store_from(fname, there)
       size = there.size(fname)
       wrote = 0

       File.rename(fname, fname + '.bak') if File.exists? fname, 'w') do |fp|
         yield([wrote, size]) if block_given?
         there.fetch(fname) do |buf|
           wrote += fp.write(buf)
           yield([wrote, size]) if block_given?
         end
       end
       return wrote
     end
Eleanor McHugh (Guest)
on 2007-02-16 12:08
(Received via mailing list)
On 15 Feb 2007, at 15:25, Eivind wrote:
>      def fetch(fname)
>        [...]
>      end
>
>      def store_from(fname, there)
>        [...]
>        return wrote
>      end

Your slowdown is an artefact of breaking the file read and transmit
operations down into chunks of 4096 bytes. This causes your 600 kB
Word document to be sent as roughly 150 discrete messages across the
network, each time incurring the cost of a disk seek and probably the
cost of network congestion. The fact that you're running both pieces
of code on the same machine also adds 150 additional disk seeks to
the equation for the write process. These all incur non-deterministic
costs based on the actual layout of the file system, task switching
by the OS between disk operations, the particular OS's disk caching
mechanisms, etc.

If you read the entire file into memory in one chunk that will reduce
the cost at one end, then by buffering the whole thing in memory at
the other end until the transfer is complete you'll reduce the other
cost. As you are probably transmitting over TCP, I also wouldn't
bother to break the file up into discrete chunks as the underlying
transport will take care of that for you (and 4096 is very rarely an
optimal block size: for ethernet traffic try somewhere around 1536,
and for disk access it'll depend on the settings for the file-system
and the physical geometry of the disk).

As a general rule of thumb, always seek to minimise the number of I/O
operations that your code is performing if you want to avoid these
kinds of problems. I/O is orders of magnitude slower than anything else.


Eleanor McHugh
Games With Brains
raise ArgumentError unless @reality.responds_to? :reason
Ezra Zygmuntowicz (Guest)
on 2007-02-17 00:02
(Received via mailing list)

On Feb 16, 2007, at 3:08 AM, Eleanor McHugh wrote:

> [...]
> Ellie

  Sending a file across drb like that also incurs the cost of
marshalling and unmarshalling the file contents. I would think you
would be better off having one of the drb processes use net/sftp to
transfer the file to the other node, and then send a drb message with
the file's location once the transfer completes.
-- Ezra Zygmuntowicz
-- Lead Rails Evangelist
-- Engine Yard, Serious Rails Hosting
-- (866) 518-YARD (9273)