I’m writing a web crawler, and in that crawler I want to remove all
scripts in the pages I crawl.
I should be able to do a simple gsub!(/<script.*?<\/script>/, "") right?
Well, I do that and unfortunately it doesn’t remove some scripts. Take
google for instance. It removes the first script, but not the second.
I’m really confused. Since google has two scripts, the full regexp
should never fail to be triggered.
gsub(/<script.*?<\/script>/m, "")
If there are newlines inside the string you need the m modifier to make
the dot (.) match newlines as well.
And the ? makes the match non-greedy. Without it, it would match from
the start of the first script to the end of the last script.
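A quick irb session shows all three behaviours side by side (a minimal sketch; the sample HTML is made up):

```ruby
html = "<p>keep</p><script>\nvar a = 1;\n</script>" \
       "<p>also keep</p><script>\nvar b = 2;\n</script>"

# Without /m, "." never matches the newlines inside the scripts,
# so neither multi-line script is matched at all:
html.gsub(/<script.*?<\/script>/, "")    # => html, unchanged

# /m lets "." match newlines; "?" keeps each match non-greedy:
html.gsub(/<script.*?<\/script>/m, "")   # => "<p>keep</p><p>also keep</p>"

# A greedy "*" under /m swallows everything from the first <script
# to the last </script>, including the markup in between:
html.gsub(/<script.*<\/script>/m, "")    # => "<p>keep</p>"
```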
Thanks a lot, I didn’t know that regexps only applied to one line by
default, HRM!
Thanks,
Kyle H.
Actually, it is the . expression that doesn’t match a newline without
the ‘m’ option. That option just changes ‘.’ from matching “any
character except a newline” to matching “any character”. The Regular
Expressions section of chapter 22 in the Pickaxe covers all this
(pp. 324-328).
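The distinction is easy to see against a bare newline:

```ruby
# "." alone refuses to match a newline; /m widens it to "any character".
"\n" =~ /./    # => nil
"\n" =~ /./m   # => 0
```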
I’m not sure what you are after actually, but apart from the
tags Rob mentioned, you might need to remove the onClick, onMouseOver
and other handlers. And since the handlers can appear in almost any tag,
it would be very hard to find and remove them all correctly with just a
few regexps. You should use a real HTML parser (the preferred Ruby one
seems to be called hpricot … I guess the author wanted to be funny). If
this is meant to make the display of the pages secure, you should also
rather “keep only the tags and attributes that are safe” than “remove
the stuff that’s not safe”. You might easily overlook something.
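Hpricot’s API aside, the whitelist idea can be sketched with Ruby’s bundled REXML parser. This only copes with well-formed markup, so a real crawler would still want a forgiving HTML parser; the tag and attribute lists below are invented for illustration:

```ruby
require 'rexml/document'

# Invented whitelists for illustration; a real sanitizer needs more care.
SAFE_TAGS  = %w[div p a b i em strong ul ol li]
SAFE_ATTRS = %w[href title]

# Keep only whitelisted tags and attributes; everything else
# (script tags, onClick/onMouseOver handlers, ...) is dropped.
def sanitize(element)
  element.elements.to_a.each do |child|  # to_a: don't mutate mid-iteration
    if SAFE_TAGS.include?(child.name)
      child.attributes.keys.each do |name|
        child.delete_attribute(name) unless SAFE_ATTRS.include?(name)
      end
      sanitize(child)
    else
      element.delete_element(child)      # removes the tag and its contents
    end
  end
end

doc = REXML::Document.new(
  '<div><p onclick="evil()">hi <script>bad()</script>' \
  '<a href="/x" onmouseover="evil()">link</a></p></div>'
)
sanitize(doc.root)
puts doc.to_s   # scripts and handlers are gone, href survives
```

Working from a whitelist means an attribute you have never heard of is dropped by default, which is exactly the failure mode a blacklist gets wrong.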