Hi all.
Sorry if the question seems dumb.
My task is: I have some HTML fragment; there are no limitations on its correctness, except that no tag may be cut off in the middle:
This is possible: […] (the fragment starts with a closing tag)
This is not: [tr>…] (the fragment starts with a truncated tag)
I need to do these tasks:
- cut some tags together with their contents, e.g. all tables:
[before<table>…</table>after] => [before after]
- cut some tags, leaving their content:
[before<font>text</font>after] => [before text after]
- make other tags “consistent” (close what was left open):
[before<b>after] => [before<b>after</b>]
[before<p>
after] => [before<p>
after</p>]
…
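For what it’s worth, the three operations can be sketched with plain Ruby regexps; this is a minimal sketch, and the tag names (table, font, p) are only stand-ins for whatever tags the real templates would name:

```ruby
# Sketch of the three operations with plain regexps (illustrative only;
# naive about nesting and attribute quoting).
html = 'before<table><tr><td>x</td></tr></table>after'

# 1. Cut a tag together with its contents (e.g. tables):
cut = html.gsub(%r{<table\b.*?</table>}mi, ' ')

# 2. Cut a tag but keep its content (e.g. font):
kept = 'before<font color="red">text</font>after'.gsub(%r{</?font\b[^>]*>}i, '')

# 3. Make an unclosed tag "consistent" by appending the missing close:
open_p = 'before<p>after'
fixed  = open_p =~ %r{<p\b[^>]*>(?!.*</p>)}m ? open_p + '</p>' : open_p

puts cut    # => before after
puts kept   # => beforetextafter
puts fixed  # => before<p>after</p>
```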
Can it be done with Hpricot? Or any other options?
Thanks.
V.
On 11/30/06, Victor Zverok S. [email protected] wrote:
My task is: I have some HTML fragment; there are no limitations on its correctness, except that no tag may be cut off in the middle:
(…)
Can it be done with Hpricot? Or any other options?
Have you tried HTMLTidy[0]? Sometimes it tries to be too smart, but it has a lot of options. The way I do it[1] probably won’t suit you, but it might give you some ideas.
[0] http://rubyforge.org/projects/tidy/
[1] http://cvs.savannah.gnu.org/viewcvs/samizdat/samizdat/lib/samizdat/sanitize.rb?rev=1.99
From: Dmitry B. [mailto:[email protected]]
Sent: Thursday, November 30, 2006 4:21 PM
On 11/30/06, Victor Zverok S. [email protected] wrote:
My task is: I have some HTML fragment; there are no limitations on its correctness, except that no tag may be cut off in the middle:
(…)
Can it be done with Hpricot? Or any other options?
Tried HTMLTidy[0]?
I haven’t really tried it, but I had thought about it.
The problem is I need something really “small, smart and simple”, not “huge and almighty” (as Tidy seems to be).
But thanks for the advice.
Dmitry B.
V.
Victor “Zverok” Shepelev wrote:
I haven’t really tried it, but I had thought about it.
The problem is I need something really “small, smart and simple”, not “huge and almighty” (as Tidy seems to be).
Not “huge and almighty” but “small, smart and simple” … I believe that’s my cue.
Have you considered writing your own miniature library? Maybe a library consisting of 20 lines of Ruby instructions (regulars: note the absence of a certain trigger word)?
Why not express the problem to be solved more explicitly and clearly?
And … were the HTML pages written by humans or a machine? I ask because machine-generated HTML tends to be more syntactically reliable.
If I can have a sufficiently clear statement of the problem to be solved, I can suggest a solution – or post one.
On re-reading your first post in this thread, I venture to say that the pages are sufficiently disorganized that an ad hoc solution is the best approach overall, one in which various regular expression filters are used to extract essential page data, and the pages can then be reconstructed using stricter HTML or XHTML syntax.
So, let’s write some cod … oops, I mean let’s write a small library.
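Following that suggestion, here is one possible shape for such a miniature library: a whitelist filter that keeps only “content” tags. The helper name and the tag list are my own illustration, not something from the thread:

```ruby
# Minimal whitelist filter: drop every tag except those listed, keeping
# the text content. A sketch only; real HTML needs a real parser.
ALLOWED = %w[p ul ol li b i]

def strip_tags(html)
  html.gsub(%r{</?([a-z][a-z0-9]*)\b[^>]*>}i) do |tag|
    ALLOWED.include?($1.downcase) ? tag : ''
  end
end

puts strip_tags('<div><p>Hello <b>world</b><img src="x"></p></div>')
# => <p>Hello <b>world</b></p>
```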
Victor “Zverok” Shepelev wrote:
/ …
Now we have some part of the page; we need to delete all tables, images and so on, and strip all “non-content” tags (everything but p, ul, ol, li, b, i…), and I need “consistent” HTML to show.
Easy to say in one word, but that one word cannot be turned into code.
It is a task definition.
The task may vary for different dictionaries. For example, with some dictionaries tables must not be deleted, but “normalized”:
“<table><tr><td>text1<td>text2</table>” => “<table><tr><td>text1</td><td>text2</td></tr></table>”
Both the before and after forms show big syntax errors. I hope you understand HTML syntax; if not, this may be more difficult than I thought.
Or even XHTMLish “…”
Well, your description of the problem is way too general for any progress toward a solution.
Perhaps you could post what you consider to be the desired end result for a particular entry from the “dictionary” site of your choice.
By the way (my boilerplate remark about page scraping): if this is for any purpose other than your own personal use, it represents a copyright problem.
I want to emphasize this is not difficult at all, once there is a clear statement of purpose. It can be done in a few (maybe a few dozen) lines of Ruby code.
From: Paul L. [mailto:[email protected]]
Sent: Thursday, November 30, 2006 8:20 PM
Have you considered writing your own miniature library? Maybe, a library
can suggest a solution – or post one.
On re-reading your first post in this thread, I venture to say that the
pages are sufficiently disorganized that an ad hoc solution is the best
approach overall, one in which various regular expression filters are used
to extract essential page data, and the pages can then be reconstructed
using stricter HTML or XHTML syntax.
So, let’s write some cod … oops, I mean let’s write a small library.
OK, here’s the model of what I’m doing: a small app which interacts with dictionaries like Wikipedia:
- the user inputs something like “w matz”
- the software downloads the first lines of the Wikipedia article on Matz (the first one or two meaningful paragraphs) and displays them.
What to download and what to show is set by simple templates (regexps for now, but maybe something XPath-like later).
Now we have some part of the page; we need to delete all tables, images and so on, and strip all “non-content” tags (everything but p, ul, ol, li, b, i…), and I need “consistent” HTML to show.
It is a task definition.
The task may vary for different dictionaries. For example, with some dictionaries tables must not be deleted, but “normalized”:
“<table><tr><td>text1<td>text2</table>” => “<table><tr><td>text1</td><td>text2</td></tr></table>”
Or even XHTMLish “…”
–
Paul L.
http://www.arachnoid.com
V.
Victor “Zverok” Shepelev wrote:
/ …
That’s all.
Your project is extremely ambitious, and will outstrip all but the most ambitious, dedicated effort. Every location – indeed, every page – you visit will require different filtering.
Good luck with your project.
From: Paul L. [mailto:[email protected]]
Sent: Thursday, November 30, 2006 11:00 PM
Victor “Zverok” Shepelev wrote:
It is a task definition.
The task may vary for different dictionaries. For example, with some dictionaries tables must not be deleted, but “normalized”:
“<table><tr><td>text1<td>text2</table>” => “<table><tr><td>text1</td><td>text2</td></tr></table>”
Both the before and after forms show big syntax errors. I hope you
understand HTML syntax, if not, this may be more difficult than I thought.
I understand HTML syntax, and I see no problem in the above.
Closing tags for <tr> and <td> are both optional in the HTML 4.01 W3C spec.
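Under that reading, a naive normalizer could re-insert those optional close tags. This is a sketch only: the helper name is made up, and the regexps assume simple, tagless cell contents:

```ruby
# Naive "normalizer": insert the optional </td> and </tr> close tags
# before the next <td>, <tr> or </table>. Not a real parser.
def close_cells(html)
  html = html.gsub(%r{(<td[^>]*>[^<]*)(?=<td|<tr|</table>)}i) { "#{$1}</td>" }
  html.gsub(%r{(<tr[^>]*>(?:(?!</tr>|<tr).)*)(?=<tr|</table>)}mi) { "#{$1}</tr>" }
end

puts close_cells('<table><tr><td>text1<td>text2</table>')
# => <table><tr><td>text1</td><td>text2</td></tr></table>
```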
Perhaps you could post what you consider to be the desired end result for a
particular entry from the “dictionary” site of your choice.
OK. Here it is:
Source page: the “Ukraine” article on Wikipedia
Start pattern:
End pattern:
Elements to exclude: tables, images.
Desired output (with the text in the middle of each paragraph skipped):
Ukraine (Ukrainian: Україна, Ukraina, /ukraˈjina/) is a country in Eastern Europe.
....
It became independent again after the Soviet Union's collapse in 1991.
---------------------------
That’s all.
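The slicing step of that workflow can be sketched as follows; the page text and both patterns below are made-up stand-ins, since the actual templates are not shown above:

```ruby
# Sketch: slice a page between a start and an end pattern, then drop
# excluded elements (tables here). Patterns and text are illustrative.
page = "<html>...junk...<p>Ukraine is a country in Eastern Europe.</p>" \
       "<table>stats</table><p>It became independent in 1991.</p>" \
       "<div id=\"footer\">...</div></html>"

start_pat = /<p>/
end_pat   = /<div id="footer">/

from = page.index(start_pat)
to   = page.index(end_pat)
body = page[from...to]
body = body.gsub(%r{<table\b.*?</table>}mi, '')  # exclude tables
puts body
```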
By the way (my boilerplate remark about page scraping): if this is for any purpose other than your own personal use, it represents a copyright problem.
My application would be a kind of browser (a nano-browser); I don’t want to “grab” dictionaries.
I want to emphasize this is not difficult at all, once there is a clear statement of purpose. It can be done in a few (maybe a few dozen) lines of Ruby code.
I know. I’m not a nuby (my poor language in these mails is due to English not being my native language, not a lack of knowledge).
I’ve just asked about existing libraries.
–
Paul L.
http://www.arachnoid.com
V.