I'm using open-uri to download files through a buffer. It seems very
inefficient in terms of resource usage (CPU sits at roughly 10-20%).
If possible, I'd like some suggestions for downloading a file so that
the output file is named after the last segment of the URL, and
nothing is written if the request comes back as a 404 (or some other
exception is raised).
Current code:
require 'open-uri'

BUFFER_SIZE = 4096

def download(url)
  from = open(url)
  # Copy the body in BUFFER_SIZE chunks, naming the file after the
  # last segment of the URL.
  if (buffer = from.read(BUFFER_SIZE))
    puts "Downloading #{url}"
    File.open(url.split('/').last, 'wb') do |file|
      begin
        file.write(buffer)
      end while (buffer = from.read(BUFFER_SIZE))
    end
  end
end
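A minimal sketch of the two requested behaviours (name the output
after the last URL segment, write nothing on a 404), using Net::HTTP
so the body streams straight to disk instead of going through
open-uri; the helper name and chunk handling here are assumptions,
not a drop-in fix:

require 'net/http'
require 'uri'

# Hypothetical helper: only opens the output file once the server has
# answered with a success status, so a 404 never touches the disk.
def download_if_ok(url)
  uri = URI.parse(url)
  Net::HTTP.start(uri.host, uri.port) do |http|
    http.request_get(uri.request_uri) do |response|
      unless response.is_a?(Net::HTTPSuccess)
        puts "Skipping #{url}: got #{response.code}"
        return
      end
      puts "Downloading #{url}"
      File.open(File.basename(uri.path), 'wb') do |file|
        # read_body yields the body in chunks as it arrives, so
        # nothing is spooled to a temp file first.
        response.read_body { |chunk| file.write(chunk) }
      end
    end
  end
end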
Ta dah! There's a lot of magic behind it right now, and torrents
don't work yet (fixed on my machine, I just need to release it). It
does segmented downloading, which is ideal for large files; smaller
ones still work fine.
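For anyone curious how the segmented part works in general: each
segment is just an HTTP Range request for a slice of the file. A
rough sketch (the helper name, URL, and byte offsets are
placeholders, not snoopy's actual code):

require 'net/http'
require 'uri'

# Fetch one byte range of a remote file; returns nil if the server
# does not honour Range requests.
def fetch_segment(url, first_byte, last_byte)
  uri = URI.parse(url)
  Net::HTTP.start(uri.host, uri.port) do |http|
    request = Net::HTTP::Get.new(uri.request_uri)
    request['Range'] = "bytes=#{first_byte}-#{last_byte}"
    response = http.request(request)
    # 206 Partial Content means the server returned just that slice.
    response.body if response.is_a?(Net::HTTPPartialContent)
  end
end

# e.g. the first two 4 KB segments, fetched (or threaded) separately:
# part_one = fetch_segment('http://example.com/big.iso', 0,    4095)
# part_two = fetch_segment('http://example.com/big.iso', 4096, 8191)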
The problem with open-uri is this: it downloads the whole thing to
your tmp directory first, so reading in BUFFER_SIZE chunks won't
actually help.
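You can see that buffering for yourself: once the body is bigger than
about 10 KB, open-uri hands you a Tempfile that has already been
written out in full before your first read. A quick check (the URL is
a placeholder):

require 'open-uri'

io = open('http://example.com/some-large-file')  # placeholder URL
puts io.class  # => Tempfile for large bodies, StringIO for small ones
# By the time read(BUFFER_SIZE) runs, the download is already complete.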
snoopy won't write the file if there's an error.
-------------------------------------------------------|
~ Ari
Some people want love
Others want money
Me… Well…
I just want this code to compile