Forum: Ruby Anyone scraping dynamic AJAX sites?

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Becca Girl (newrubygirl)
on 2008-11-30 01:29
Hello.

Is there anyone who has successfully found a way to scrape a dynamically
generated AJAX web site?  If I view the source, it gives me the
variables.  If I use Firebug to view the DOM, it gives me the actual
values.  Any ideas?

Thanks.
gf (Guest)
on 2008-11-30 02:08
(Received via mailing list)
The problem is you need a DOM-aware Javascript interpreter in your
code to execute the Javascript, manipulate the DOM in the HTML, and
then allow you to extract the data you need.

There are projects like Rhino, a Javascript engine you can
embed in other apps, but you still won't have the DOM of the page,
nor will you be able to manipulate it and then extract the values,
at least as far as I understand.

You could use something like Ruby driving some sort of WebKit
interface on Mac OS or Linux, though I have no idea where to start.
That, to me, seems like the best answer. Maybe even a Ruby-based
Cocoa app would do the trick.
unknown (Guest)
on 2008-11-30 02:37
(Received via mailing list)
On Sat, Nov 29, 2008 at 7:25 PM, Becca Girl <cschall@yahoo.com> wrote:
> Hello.
>
> Is there anyone who has successfully found a way to scrape a dynamically
> generated AJAX web site?  If I view the source, it gives me the
> variables.  If I use Firebug to view the DOM, it gives me the actual
> values.  Any ideas?

http://code.google.com/p/firewatir/
Peter Szinek (Guest)
on 2008-11-30 14:34
(Received via mailing list)
scRUBYt! - http://scrubyt.org

e.g. scraping your linkedin contacts:

require 'rubygems'
require 'scrubyt'


property_data = Scrubyt::Extractor.define :agent => :firefox do

   fetch          'https://www.linkedin.com/secure/login'
   fill_textfield 'session_key', '****'
   fill_textfield 'session_password', '****'
   submit

   click_link_and_wait 'Connections', 5

   vcard "//li[@class='vcard']" do
     first_name  "//span[@class='given-name']"
     second_name "//span[@class='family-name']"
     email       "//a[@class='email']"
   end

end

puts property_data.to_xml

Cheers,
Peter
___
http://www.rubyrailways.com
http://scrubyt.org
Kyle Schmitt (Guest)
on 2008-12-01 03:45
(Received via mailing list)
On Sat, Nov 29, 2008 at 6:25 PM, Becca Girl <cschall@yahoo.com> wrote:
> Hello.
>
> Is there anyone who has successfully found a way to scrape a dynamically
> generated AJAX web site?  If I view the source, it gives me the
> variables.  If I use Firebug to view the DOM, it gives me the actual
> values.  Any ideas?
>
> Thanks.
> --
> Posted via http://www.ruby-forum.com/.

As gf pointed out, the problem is that you need a full DOM and working
Javascript for this, sometimes even working CSS. To really do it
properly, you need a full-blown, fully supported web browser.

Short story: use the WATIR library to interact with your browser's DOM
to do this.

http://wtr.rubyforge.org/

I used to do this all the time for work, in a testing capacity.  I
tried a number of different solutions, and found WATIR far superior to
anything else out there, including the very pricey pay packages.  If
you cut through all the marketing BS, half the pay packages are
functionally the same as WATIR, and the other half are more primitive.

--Kyle
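For the record, a minimal Watir session looks roughly like this. This is a sketch against the classic Watir API of that era (which drove Internet Explorer on Windows); the URL and the element id are made-up placeholders:

```ruby
require 'watir'  # gem install watir

# Drive a real browser so the page's Javascript runs and the DOM
# is fully built before we read values out of it.
browser = Watir::IE.new
browser.goto('http://example.com/ajax-page')   # hypothetical URL

# Let the page (and its AJAX calls) settle, then read the rendered
# DOM rather than the raw source.
browser.wait
puts browser.div(:id, 'results').text          # hypothetical element id

browser.close
```

The key point is that you query the browser's live DOM, so the values Firebug shows you are the values you get.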
Peter Szinek (Guest)
on 2008-12-01 10:53
(Received via mailing list)
Just for completeness' sake: scRUBYt! (since 0.4.05) uses FireWatir
as the agent (or mechanize - you can choose whether you want to scrape
AJAX or not), so you can do full-blown AJAX scraping - but with a
scraping DSL which usually speeds up scraper creation, especially
in the case of complicated scrapers.

Cheers,
Peter
___
http://www.rubyrailways.com
http://scrubyt.org
Kyle Schmitt (Guest)
on 2008-12-01 16:16
(Received via mailing list)
On Mon, Dec 1, 2008 at 3:48 AM, Peter Szinek <peter@rubyrailways.com>
wrote:
> Just for completeness sake: scRUBYt! (since 0.4.05) is using FireWatir as
> the agent (or mechanize - you can choose whether you want scrape AJAX or
> not) so you can do full blown AJAX scraping - but with a scraping DSL which
> usually speeds up the scraper creation, especially in the case of
> complicated scrapers.
>
> Cheers,
> Peter

Peter
Neat.  I'll have to give that a try next time I need to revisit
scraping.
Florian Gilcher (skade)
on 2008-12-01 16:29
(Received via mailing list)

Actually, FireWatir and scRUBYt! are nice.

But is there a way to start Firefox with a second profile (so
that it circumvents the "one instance" rule) and render to a hidden
display? [1][2]

Otherwise, this really hurts testability (as the browser might retain
your personal session) and usability on a deployment server.

Regards,
Florian Gilcher

[1]: Preferably a virtual one on a console-only machine.
[2]: Sadly, AFAIK, Firefox has no hidden mode.

On Dec 1, 2008, at 10:48 AM, Peter Szinek wrote:

> http://scrubyt.org
>

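One partial answer to the profile question: Firefox's own command-line flags can create and select a separate profile, and -no-remote lets a second instance run alongside your normal one. A sketch; the profile name "scraper" is arbitrary:

```shell
# Create a dedicated scraping profile once; it keeps its own
# cookies and session, away from your personal browsing.
firefox -CreateProfile scraper

# Launch an independent instance against that profile. -no-remote
# stops it from being swallowed by an already-running Firefox.
firefox -no-remote -P scraper &
```

This only solves the "one instance" half; hiding the window still needs something like the virtual display discussed below.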
unknown (Guest)
on 2008-12-02 23:23
(Received via mailing list)
On Mon, Dec 1, 2008 at 10:23 AM, Florian Gilcher <flo@andersground.net>
wrote:
> personal session) and usability on a deployment server.
>
> Regards,
> Florian Gilcher
>
> [1]: Preferably a virtal one on a console-only machine.
> [2]: Sadly, afaik, firefox has no hidden-mode.

http://coderrr.wordpress.com/2007/10/15/patch-to-f...
Will Simpson (wjrsimpson)
on 2008-12-20 13:35
>>
>> [1]: Preferably a virtal one on a console-only machine.
>> [2]: Sadly, afaik, firefox has no hidden-mode.
>

You could try using a virtual frame buffer if you are using Linux or
similar.

Xvfb :99 -ac &
export DISPLAY=:99

Will
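On distributions that ship the xvfb-run wrapper, the same idea fits in one line (assuming a hypothetical scraper.rb that drives Firefox):

```shell
# xvfb-run starts a throwaway X virtual framebuffer, points DISPLAY
# at it, runs the command, and tears the server down afterwards.
xvfb-run --auto-servernum ruby scraper.rb
```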
Daniel Finnie (Guest)
on 2008-12-21 00:23
(Received via mailing list)
If the site is truly AJAX, i.e. the data is loaded by an HTTP call
made from Javascript, you could monitor the HTTP requests made by the
browser.  On Firefox, I use the LiveHTTPHeaders extension.  Just go to
View --> Sidebar --> HTTP Headers, load the page with whatever data,
and look through the requests for anything interesting.

I used this method to get Facebook contact info and it worked fairly
well.  As a bonus, any data found with this method is usually in a very
machine-understandable format like JSON or RSS.  There are Ruby
libraries for both.

Dan
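Once the extension reveals the background request, you can skip the browser entirely: fetch that URL with Net::HTTP and parse the body. A sketch; the contact-list endpoint and the "name"/"email" field names are assumptions about what the site returns:

```ruby
require 'net/http'
require 'uri'
require 'json'

# Fetch the JSON the page's Javascript would have fetched itself.
# Pass the URL that LiveHTTPHeaders showed you.
def fetch_json(url)
  JSON.parse(Net::HTTP.get_response(URI.parse(url)).body)
end

# Pull out the fields we care about; "name" and "email" are assumed
# keys in the endpoint's schema.
def extract_contacts(json_body)
  JSON.parse(json_body).map { |c| [c['name'], c['email']] }
end
```

No DOM, no Javascript engine: once you know the endpoint, the scraper is just an HTTP client.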
unknown (Guest)
on 2008-12-21 01:27
(Received via mailing list)
On Sat, Dec 20, 2008 at 7:27 AM, Will Simpson <will1@wjrs.co.uk> wrote:
>>>
>>> [1]: Preferably a virtal one on a console-only machine.
>>> [2]: Sadly, afaik, firefox has no hidden-mode.


I've never used it, but Celerity appears to have Javascript support:

http://celerity.rubyforge.org/
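A Celerity sketch for comparison: it wraps HtmlUnit, a headless Java browser with Javascript support, so it needs JRuby but no display at all. The URL and element id are placeholders:

```ruby
require 'rubygems'
require 'celerity'   # JRuby only: jruby -S gem install celerity

browser = Celerity::Browser.new
browser.goto('http://example.com/ajax-page')   # hypothetical URL

# HtmlUnit executes the page's Javascript headlessly, so the DOM
# queried here is the post-AJAX one - no Xvfb or profile tricks needed.
puts browser.div(:id, 'results').text          # hypothetical element id
browser.close
```

Since Celerity mirrors the Watir API, scripts written for Watir mostly carry over.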

> You could try using a virtual frame buffer if you are using Linux or
> similar.
>
> Xvfb :99 -ac &
> export DISPLAY=:99

Or, start a vncserver with xstartup set to launch the scraper script.