Forum: Ruby Gathering Ruby Quiz 2 Data (#189)

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately no longer have the time to support and maintain the forum. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Daniel X Moore (yahivin)
on 2009-01-23 07:44
(Received via mailing list)
Greetings!

Welcome to the inaugural Ruby Quiz 3!

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

The three rules of Ruby Quiz:

1.  Please do not post any solutions or spoiler discussion for this
quiz until 48 hours have elapsed from the time this message was
sent.

2.  Support Ruby Quiz by submitting ideas and responses
as often as you can! Visit: <http://rubyquiz.strd6.com>

3.  Enjoy!

Suggestion:  A [QUIZ] in the subject of emails about the problem
helps everyone on Ruby Talk follow the discussion.  Please reply to
the original quiz message, if you can.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

## Gathering Ruby Quiz 2 Data

I'm building the new Ruby Quiz website and I need your help...

This week's quiz involves gathering the existing Ruby Quiz 2 data from
the Ruby Quiz website: <http://splatbang.com/rubyquiz/>

Each quiz entry contains the following information:

 * id
 * title
 * description
 * summary

There are also many quiz solutions that belong to each quiz. The quiz
solutions have the following:

 * quiz_id
 * author
 * ruby_talk_reference
 * text

Matthew has some advice for getting at the data:
>
>  * quiz.txt -- the quiz description
>  * sols.txt -- a list of author names and the ruby-talk message # of the submission
>  * summ.txt -- the quiz summary
>
> Examples:
>  * <http://splatbang.com/rubyquiz/184_Befunge/quiz.txt>
>  * <http://splatbang.com/rubyquiz/184_Befunge/sols.txt>
>  * <http://splatbang.com/rubyquiz/184_Befunge/summ.txt>
>

Your program will collect and output this data as YAML (or your favorite
data serialization format: XML, JSON, etc.).
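For reference, here is a minimal sketch (in Ruby) of what one serialized quiz entry might look like. Only the field names come from the quiz description above; all of the values below are placeholders.

```ruby
require 'yaml'

# Hypothetical example entry -- the field names match the quiz
# description, but every value here is made up for illustration.
quiz = {
  "id"          => 184,
  "title"       => "Befunge",
  "description" => "Write a Befunge interpreter...",
  "summary"     => "A recap of the submitted interpreters...",
  "solutions"   => [
    { "quiz_id"             => 184,
      "author"              => "A. Contributor",
      "ruby_talk_reference" => 123456,
      "text"                => "..." }
  ]
}

puts quiz.to_yaml
```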
Robert Dober (Guest)
on 2009-01-23 19:33
(Received via mailing list)
Daniel in which time zone are you? What do you and the others think if
we give our friends in GMT-x some more time? My suggestion would be to
extend the spoiler period to something like Sunday 13h or 14h GMT.
Actually I do not care about the Americans ;) I just sleep that long on
WEs.
Just 0.02€.
Robert
Matthew Moss (Guest)
on 2009-01-23 20:06
(Received via mailing list)
> Daniel in which time zone are you? What do you and the others think if
> we give our friends in GMT-x some more time? My suggestion would be to
> extend the spoiler period to something like Sunday 13h or 14h GMT.
> Actually I do not care about the Americans ;) I just sleep that long
> on WEs.


Are you suggesting that a duration of 48 hours varies in duration from
time zone to time zone?

:D

*wink wink*
Andy Cooper (Guest)
on 2009-01-23 20:09
(Received via mailing list)
> Are you suggesting that a duration of 48 hours varies in
> duration from
> time zone to time zone?


American dollars are not worth as much as the Euro, so I would guess
that is exactly what he is saying.  I mean, time IS money after all.

Andy Cooper.
Gregory Brown (Guest)
on 2009-01-23 23:26
(Received via mailing list)
On Fri, Jan 23, 2009 at 2:05 PM, Andy Cooper <acooper@pritchettnet.com>
wrote:
>>
>> Are you suggesting that a duration of 48 hours varies in
>> duration from
>> time zone to time zone?
>
>
> American dollars are not worth as much as the Euro, so I would guess
> that is exactly what he is saying.  I mean time IS money afterall.

Damn you Daniel!  First day on the job and you've got your hand in my
pocket! :)

-greg
Daniel X Moore (yahivin)
on 2009-01-24 02:48
(Received via mailing list)
I'm not opposed to extending the no spoiler period to give everyone
more of the weekend to contemplate. **So everyone, please no spoilers
until Sun 14:00 GMT**. As always feel free to ask questions and post
non-spoiler discussion any time.

My local time is UTC-8, so I posted the quiz Thursday night right
before going to bed, which works out well for my schedule.

Open question to everyone: What day and time would you prefer to have
the new quizzes posted and how long of a no-spoiler period do you
prefer?
unknown (Guest)
on 2009-01-27 01:21
(Received via mailing list)
What's the deadline, btw? I've been almost ready with my solution since
the weekend, but have too much on my plate to finish it right now :)

Cheers,
Peter
__
http://www.rubyrailways.com
Gregory Brown (Guest)
on 2009-01-27 01:35
(Received via mailing list)
On Mon, Jan 26, 2009 at 7:18 PM,  <peter@rubyrailways.com> wrote:
> What's the deadline btw? I am almost ready with the solution since the
> weekend, but have too much on my plate to finish it right now :)

Historically there have been no deadlines that I know of, just that if
you aren't reasonably timely, you won't have a shot at being mentioned
in the summary.  But at least when James ran it, you could certainly
submit late solutions for the archives.  I hope this tradition is
continued, but of course you can always post here at any rate.

-greg
Daniel X Moore (yahivin)
on 2009-01-27 01:57
(Received via mailing list)

Gregory is correct, there aren't any hard deadlines. However, if you
post your solution by early Thursday it stands a better chance of
making it into the quiz summary.
unknown (Guest)
on 2009-01-29 22:13
(Received via mailing list)
> Greetings!
>
> Welcome to the inaugural Ruby Quiz 3!

Here is my scRUBYt! and Nokogiri based solution:

http://pastie.org/374542

As far as I can tell (the script generates a single XML file several MB
in size, so it's not trivial to determine), it is working well and the
output is complete.
If you need the XML file, drop me a msg.

A writeup will follow on my blog soon; I'll post a message here when it's up.

Cheers,
Peter
___
http://www.rubyrailways.com
Daniel X Moore (yahivin)
on 2009-01-31 20:37
(Received via mailing list)
This quiz was an exercise in Web Scraping
[http://en.wikipedia.org/wiki/Web_scraping]. As more and more
information becomes available on the internet, it is useful to have a
programmatic way to access it. This can be done through web APIs, but
not all websites have such APIs available, and not all information is
exposed through them. Scraping may be against the terms of use for
some sites, and smaller sites may suffer if large amounts of data are
being pulled, so be sure to ask permission and be prudent!

The one solution to this week's quiz comes from Peter Szinek, using
scRUBYt [http://scrubyt.org/]. Despite being just over fifty lines
long, there is a lot packed in here, so let's dive in.

We begin by setting up a scRUBYt Extractor and pointing it at the
main Ruby Quiz 2 page.

  # scrape the data with scRUBYt!
  data = Scrubyt::Extractor.define do
    fetch 'http://splatbang.com/rubyquiz/'

The 'quiz' call sets up a node in the XML document, retrieving the
elements that match the given XPath. This yields all the links in the
side area, that is, links to all the quizzes.

    quiz "//div[@id='side']/ol/li/a[1]" do
      link_url do
        quiz_id /id=(\d+)/
        quiz_link /id=(.+)/ do

These next two sections download the description and summary for each
quiz. They are saved into temporary files to be loaded into the XML
document at the end. Note the use of lambda: it takes in the match
from /id=(.+)/ in the quiz_link. So, for example, when the link is
'quiz.rhtml?id=157_The_Smallest_Circle' it matches
'157_The_Smallest_Circle' and passes it into the lambda, which returns
it as "http://splatbang.com/rubyquiz/157_The_Smallest_Cir...
the URL of the text for the quiz. The summary is gathered in a similar
fashion.

          quiz_desc_url(lambda { |quiz_dir|
            "http://splatbang.com/rubyquiz/#{quiz_dir}/quiz.txt"
          }, :type => :script) do
            quiz_dl 'descriptions', :type => :download
          end
          quiz_summary_url(lambda { |quiz_dir|
            "http://splatbang.com/rubyquiz/#{quiz_dir}/summ.txt"
          }, :type => :script) do
            quiz_dl 'summaries', :type => :download
          end
        end
      end
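Outside of the scRUBYt DSL, the two regular-expression captures from quiz_id and quiz_link can be sketched in plain Ruby. The sample link below is the one from the explanation above:

```ruby
link = "quiz.rhtml?id=157_The_Smallest_Circle"

quiz_id  = link[/id=(\d+)/, 1]  # numeric prefix only: "157"
quiz_dir = link[/id=(.+)/, 1]   # full id: "157_The_Smallest_Circle"

# The lambdas interpolate the directory into the download URLs:
desc_url = "http://splatbang.com/rubyquiz/#{quiz_dir}/quiz.txt"
summ_url = "http://splatbang.com/rubyquiz/#{quiz_dir}/summ.txt"
```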

This next part gets all the solutions for each quiz. It follows the
link_url from the side area. Once on the new page it creates a node
for each solution, again using XPath to get all the links in the
list on the side. It populates each solution's author with the text
from the HTML anchor tag, and the ruby_talk_reference with the href
attribute of that tag. To get the solution text it follows (resolves)
the link and returns the text within the "//pre[1]" element, again
specified via XPath. The text node is added as a child of the
solution.

      quiz_detail :resolve => "http://splatbang.com/rubyquiz" do
        solution "/html/body/div/div[2]/ol/li/a" do
          author lambda { |solution_link_text| solution_link_text },
                 :type => :script
          ruby_talk_reference "href", :type => :attribute
          solution_detail :resolve => :full do
            text "//pre[1]"
          end
        end
      end
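The author/reference extraction step can be approximated with the standard library's REXML. This is only a sketch on a made-up HTML fragment; the real page layout is the XPath "/html/body/div/div[2]/ol/li/a" used above.

```ruby
require 'rexml/document'

# Made-up fragment mimicking the solutions list on the real page;
# the actual hrefs point at ruby-talk archive messages.
html = <<~HTML
  <ol>
    <li><a href="http://ruby-talk.example/123">Alice</a></li>
    <li><a href="http://ruby-talk.example/456">Bob</a></li>
  </ol>
HTML

doc = REXML::Document.new(html)

# author comes from the anchor text, ruby_talk_reference from href
solutions = doc.get_elements("//a").map do |a|
  { "author" => a.text, "ruby_talk_reference" => a.attributes["href"] }
end
```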

This select_indices call limits the scope of the quiz gathering to just
the first three quizzes, which is useful for testing since we don't
want to traverse the entire site just to see whether the code works. I
removed it when gathering the full dataset.

    end.select_indices(0..2)
  end

This next part, using Nokogiri, loads the files that were saved
temporarily and inserts them into the XML document. It also removes
the link_url nodes, cleaning up the final output so it matches the
format specified in the quiz.

  result = Nokogiri::XML(data.to_xml)

  (result/"//quiz").each do |quiz|
    quiz_id = quiz.text[/\s(\d+)\s/,1].to_i
    file_index = quiz_id > 157 ? "_#{(quiz_id - 157)}" : ""
    (quiz/"//link_url").first.unlink

    desc = Nokogiri::XML::Element.new("description", quiz.document)
    desc.content = open("descriptions/quiz#{file_index}.txt").read
    quiz.add_child(desc)

    summary = Nokogiri::XML::Element.new("summary", quiz.document)
    summary.content = open("summaries/summ#{file_index}.txt").read
    quiz.add_child(summary)
  end
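One subtlety worth calling out is the file_index expression: the downloaded files are apparently numbered sequentially by scRUBYt, so the quiz after the first (157) gets a "_1" suffix, the next a "_2", and so on. In isolation:

```ruby
# Reconstructs the suffix scRUBYt appends to downloaded file names,
# assuming quiz 157 was the first quiz downloaded.
def file_index(quiz_id)
  quiz_id > 157 ? "_#{quiz_id - 157}" : ""
end

file_index(157)  # => "" , i.e. "descriptions/quiz.txt"
file_index(160)  # => "_3", i.e. "descriptions/quiz_3.txt"
```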

And finally, save the result to an XML file on the filesystem:

  open("ruby_quiz_archive.xml", "w") {|f| f.write result}

This was my first experience with scRUBYt, and it took me a little
while to "get it". It packs a lot of power into a concise syntax and
is definitely worth considering for your next web scraping project.
James Gray (bbazzarrakk)
on 2009-01-31 21:37
(Received via mailing list)
On Jan 31, 2009, at 1:36 PM, Daniel Moore wrote:

> This quiz was an exercise in Web Scraping
> [http://en.wikipedia.org/wiki/Web_scraping].

Great summary Daniel.  You've got the new quiz off to a great start.

James Edward Gray II