Hi all, does anyone have experience using Mechanize to scrape web pages? I ran into a problem using Mechanize to parse an enormous number of pages: each time I call agent.get to fetch a page, the agent keeps that page in its history, so over time the agent object grows bigger and bigger and eventually consumes all my memory (4 GB). Is there any solution to this problem?
on 2008-11-15 20:42