Mirroring large files over HTTP

I’m working on a script where I want to download large files off a remote web server and store them on a local filesystem.

At the moment I’m using code like this:

require 'open-uri'

open(filename, 'w') do |file|
  file.write(open(remote_url).read)
end

I assume this will read the complete content of the remote file into
memory before writing it to the local file. If that assumption is
correct, what is the best/easiest way to do a buffered piecemeal
fetch/store? I’ve looked at the net/http library but haven’t found
anything in there that looks relevant to this.


Lars H.

“If anyone disagrees with anything I say, I am quite prepared not only to retract it, but also to deny under oath that I ever said it.” -Tom Lehrer



Turns out the OpenURI module is indeed fetching the remote resource
in segments and storing to a temporary file. However, my code above
will read the complete contents of that file into memory before
writing it back out to another file.
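In the meantime, the slurp can be avoided while sticking with open-uri by copying the yielded IO in fixed-size pieces. A sketch (the 16 KB buffer size is arbitrary, and the whole body still gets spooled through the tempfile first):

require 'open-uri'

open(remote_url) do |remote|            # open-uri yields a Tempfile for large bodies
  open(filename, 'wb') do |file|
    while segment = remote.read(16384)  # up to 16 KB per read; nil at EOF
      file.write(segment)
    end
  end
end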

By inspecting the OpenURI source code I’ve learned that this is how
it’s done (sans proxy handling, error handling etc.):

require 'net/http'
require 'uri'

uri = URI.parse(url)
http = Net::HTTP.new(uri.host, uri.port)
http.request_get(uri.path) do |response|
  open(filename, 'wb') do |file|    # 'wb' keeps the data binary-safe
    response.read_body do |segment| # yields the body in chunks as it arrives
      file.write(segment)
    end
  end
end

I’m a little surprised not to find any convenience method in the standard libraries doing all this for me, though.


Lars H.


On Oct 2, 2008, at 01:03 AM, Lars H. wrote:

I’m a little surprised not to find any convenience method in the standard libraries doing all this for me, though.

Why? It’s all of one line:

output.write input.read(16384) until input.eof?


Nice enough, but one will need a bit more than that single line to do the whole operation from start to finish.

I was thinking more of something like SomeClass.mirror(url, filename).
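Something along these lines, say, wrapping the net/http snippet from my earlier post (Mirror is a made-up name for illustration, not an existing class):

require 'net/http'
require 'uri'

module Mirror
  # Stream url straight to filename without holding the body in memory.
  def self.mirror(url, filename)
    uri = URI.parse(url)
    Net::HTTP.start(uri.host, uri.port) do |http|
      http.request_get(uri.path) do |response|
        File.open(filename, 'wb') do |file|
          response.read_body { |segment| file.write(segment) }
        end
      end
    end
  end
end

Mirror.mirror('http://example.com/big.iso', 'big.iso')

(Error handling, redirects and query strings are omitted, just as in the snippet above.)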

Today I came across the curb¹ gem (Ruby bindings for libcurl) while reading a blog posting² about net/http performance, and this gem provides a convenient class method that does exactly what I want:

require 'curb'

Curl::Easy.download(url, filename)

It also provides lots of other nice stuff, so I will definitely look
into using this one for my future HTTP client needs.
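For comparison, the same operation written by hand against curb’s lower-level API looks roughly like this; a sketch, not curb’s actual implementation of download:

require 'curb'

curl = Curl::Easy.new(url)
File.open(filename, 'wb') do |file|
  # on_body fires once per received segment and must return the number of
  # bytes consumed, which File#write conveniently does.
  curl.on_body { |segment| file.write(segment) }
  curl.perform
end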

[1] http://curb.rubyforge.org/
[2] http://apocryph.org/analysis_ruby_18x_http_client_performance


Lars H.

