Get the full site tree with Ruby


I need to grab all the site data with its full tree structure. Every page
has child pages of its own. How do I build the site tree with Nokogiri?
It has to visit pages recursively and scrape all the directory links, but
I can't work out the full algorithm. How do I do that?
P.S. And I don't need to "save the whole site to disk with HTTrack". The
data will be processed and copied into the new, redesigned version of the
original site.

At which point do you get stuck?

Simply GET the index page, parse it via Nokogiri, select the tags you are
interested in, extract the URLs from their href attributes, and do a
recursive GET on each of them. Each page type should have its own
function that performs the GET and the parsing.

If you have to fetch a pretty huge number of pages, then you need to
store your crawling state somewhere in a database. For example, keep a
separate table for the URLs to be parsed (url is a unique key), and mark
rows as "to be parsed" or "already parsed". Of course you need to
normalize all URLs to avoid duplicates in the table.
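A minimal sketch of that bookkeeping, with an in-memory Hash standing in for the database table (the normalization rules here are assumptions: lowercase host, drop the fragment, strip a trailing slash):

```ruby
require 'uri'

# url => :to_be_parsed | :already_parsed; in production this would be a
# database table with url as a unique key.
state = {}

# Normalize so the same page never appears twice in the table.
def normalize(url)
  u = URI.parse(url)
  u.host = u.host.downcase if u.host
  u.fragment = nil
  u.path = u.path.chomp('/') unless u.path == '/'
  u.to_s
end

def enqueue(state, url)
  key = normalize(url)
  state[key] = :to_be_parsed unless state.key?(key)   # unique-key insert
end

enqueue(state, 'http://Example.com/a/')
enqueue(state, 'http://example.com/a#top')   # duplicate after normalization
next_url = state.key(:to_be_parsed)          # pick a row to parse
state[next_url] = :already_parsed            # mark it done
```

With a real table, `enqueue` becomes an `INSERT ... ON CONFLICT DO NOTHING` and the status column drives which rows a worker picks up next.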

You could also have asked in ror2ru.

Some time ago I solved a similar problem (but I needed continuous
crawling), organizing several workers:
(in Russian)
Probably you do not need such a complex thing, but you may get some ideas
from it.

On May 12, 2015, at 5:21 PM, Роман Ярыгин [email protected] wrote:

I'm stuck exactly on the recursive algorithm. I can't figure out how to
build that recursive function.

It’s recursion, you call it again…

require 'open-uri'
require 'nokogiri'
require 'set'

$visited = Set.new

def start
  get_subtree("http://example.com/")      # root URL of the site goes here
end

def get_subtree(url)
  page = Nokogiri::HTML(URI.open(url))    # fetch the page and parse it
  page.css('a[href]').each do |a|         # for each link
    link = URI.join(url, a['href']).to_s  # normalize the link
    next if $visited.include?(link)       # if link not already visited
    $visited << link                      # add link to table of visited links
    get_subtree(link)                     # ...and recurse into it
  end
end
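The same traversal can also be written without recursion, with an explicit queue playing the role of the call stack (and of the "to be parsed" table mentioned earlier), so a very deep site cannot overflow the stack. `fetch_links` and the tiny fake site are assumptions so the sketch runs without network access:

```ruby
require 'set'

# A tiny fake site (assumption for the demo); a real fetch_links would
# GET the url and return its normalized links.
SITE = {
  'http://s/'  => ['http://s/a', 'http://s/b'],
  'http://s/a' => ['http://s/b', 'http://s/'],   # cycle back to the root
  'http://s/b' => []
}

def fetch_links(url)
  SITE.fetch(url, [])
end

# Iterative variant of get_subtree: the explicit queue replaces the
# call stack.
def crawl(start_url)
  visited = Set.new([start_url])
  queue   = [start_url]
  until queue.empty?
    url = queue.shift
    fetch_links(url).each do |link|
      queue << link if visited.add?(link)   # add? returns nil on duplicates
    end
  end
  visited
end

crawl('http://s/')   # visits each of the three pages exactly once
```

The visited set doubles as duplicate protection, so cycles like the `a -> root` link above terminate cleanly.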

Scott R.
[email protected]
(303) 722-0567 voice


On Tuesday, May 12, 2015 at 20:09:51 UTC+10, Vladimir Gordeev wrote:

Yeah, thanks. I figured it out. Now I'm stuck on a million other
problems, but that's another story =)

On Wednesday, May 13, 2015 at 10:36:34 UTC+10, Scott R. wrote: