Best way to download >1GB files

On 1/2/08, Casimir [email protected] wrote:

Giles B. wrote:

Wouldn’t it be cool if we could keep Zed S. in a cage and feed him newbies?

You mean AFTER you have sniped at the newbies, right? The kettle, the
pot, et cetera.

What are you talking about? I don’t get it. Yes, after I snipe at
newbies, I say wouldn’t it be great if we could just let Zed handle
it, because he’s better at it. Where do a pot and a kettle enter the
equation?


Giles B.

Podcast: http://hollywoodgrit.blogspot.com
Blog: http://gilesbowkett.blogspot.com
Portfolio: http://www.gilesgoatboy.org
Tumblelog: http://giles.tumblr.com

[email protected] wrote:

same time. This code downloads the whole file, but 8 KB at a time.

Fascinating. Learning every day…

Where did it download the file to? Did it write it to disk or just
keep it all in memory?
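
One quick way to check is to look at what open-uri hands to the block:
small bodies typically come back as an in-memory StringIO, while larger
ones get spooled to a Tempfile on disk first. A minimal sketch (the URL
is hypothetical, and the exact size cutoff is an internal open-uri
default, so treat it as an assumption):

require 'open-uri'

# Hypothetical URL -- substitute the real one.
open('http://example.com/big.iso') do |f|
  puts f.class                          # StringIO (memory) or Tempfile (disk)
  puts f.path if f.respond_to?(:path)   # temp-file location when spooled to disk
end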

On Jan 1, 1:56 pm, Tim H. [email protected] wrote:

how to download it in small chunks so it’s not all in memory at the
same time. This code downloads the whole file, but 8 KB at a time.

No, I thought when you use Kernel#open with open-uri, it FIRST downloads
the entire 1GB file to your temp folder, and THEN runs your block on
that file in temp.
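
(For context, the kind of loop being discussed presumably looks
something like the sketch below; the URL and output filename are made
up. The question is whether open-uri has already pulled the whole
response down to a temp file before the block ever runs.)

require 'open-uri'

open('http://example.com/big.iso') do |remote|   # hypothetical URL
  File.open('big.iso', 'wb') do |local|
    while chunk = remote.read(8192)              # read 8 KB at a time
      local.write(chunk)
    end
  end
end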

Interesting. I just tried downloading a 6.1MB file with open-uri and
didn’t see that behavior. I’m using Ruby 1.8.6 on OS X 10.5.

Hmmm … fetching a ~5 MB file over HTTP, the entire file was
downloaded before the 8192-byte chunk reads began. Ruby 1.8.6 p111 on
Solaris 2.11; same behavior with JRuby. FWIW, I’m watching interface
stats to make that determination.
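
If open-uri really does buffer the whole response first, one
alternative for multi-gigabyte files is to stream with Net::HTTP, which
can yield the body in chunks as they come off the socket and write each
chunk straight to disk. A rough sketch, with a hypothetical URL and
filename:

require 'net/http'
require 'uri'

uri = URI.parse('http://example.com/big.iso')

Net::HTTP.start(uri.host, uri.port) do |http|
  http.request_get(uri.path) do |response|
    File.open('big.iso', 'wb') do |file|
      response.read_body do |chunk|   # chunks arrive as they are read off the socket
        file.write(chunk)
      end
    end
  end
end

This way nothing close to the full 1 GB has to sit in memory or in a
temp file at any point.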