Preventing crawlers on link_to's

My understanding was that using the :post=>true on a link_to() was supposed to prevent search engine crawlers from triggering the link. However, this does not seem to be working for me. Is there something else that I should be/can be doing to accomplish this? Thanks.

-Matt

On Sunday 16 Apr 2006 19:23, Belorion wrote:

My understanding was that using the :post=>true on a link_to() was supposed to prevent search engine crawlers from triggering the link. However, this does not seem to be working for me. Is there something else that I should be/can be doing to accomplish this? Thanks.

Adding :post doesn’t do anything other than insert dynamically generated (Javascript) tags, so it won’t do anything for non-Javascript clients such as web crawlers.

From the Rails API:

“And a third for making the link do a POST request (instead of the regular GET) through a dynamically added form element that is instantly submitted. Note that if the user has turned off Javascript, the request will fall back on the GET. So its your responsibility to determine what the action should be once it arrives at the controller. The POST form is turned on by passing :post as true.”
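For instance (just a sketch; the action name and id are placeholders that match the controller example below), a link in your view like

<%= link_to "Delete", { :action => "destructive_action", :id => item }, :post => true %>

renders an ordinary anchor tag plus a small onclick handler that builds a hidden form and submits it as a POST. A crawler, or any client with Javascript turned off, never runs that handler and simply follows the href, which arrives at the controller as a GET.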

So basically, you need to check if the request is a GET, and if so most likely fall back to a second action which displays an actual form in order to call the first action again.

In your controller…

def destructive_action
  if request.post?
    # do some destructive action
  else
    redirect_to :action => "confirm_destruct"
  end
end

… where confirm_destruct will be another action that displays an actual POST form, which then goes on to call destructive_action again using the same parameters but requiring an extra click of a form submission button.
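As a rough sketch of that second step (assuming the record id comes through in params[:id]; pass along whatever parameters your destructive action really needs), the confirm_destruct view could be little more than:

<%= start_form_tag :action => "destructive_action", :id => params[:id] %>
  Are you sure you want to do this?
  <%= submit_tag "Yes, go ahead" %>
<%= end_form_tag %>

start_form_tag defaults to method="post", so the button sends the request back to destructive_action as a POST, and it passes the request.post? check this time around.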

HTH.

~Dave

Dave S.
Rent-A-Monkey Website Development

PGP Key: http://www.rentamonkey.com/pgpkey.asc

You can always use robots.txt to prevent search engines from indexing certain areas of your site.
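For example, something like this in public/robots.txt keeps well-behaved crawlers out of a given path (the path is just a placeholder; use whatever your routes actually look like):

User-agent: *
Disallow: /items/destructive_action

Bear in mind this only stops polite crawlers from fetching those URLs; it doesn’t protect the action from other non-Javascript clients.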

Belorion <belorion@…> writes:

My understanding was that using the :post=>true on a link_to() was supposed to prevent search engine crawlers from triggering the link. However, this does not seem to be working for me. Is there something else that I should be/can be doing to accomplish this? Thanks.
-Matt

Google, Yahoo! and MSFT honor the rel="nofollow" attribute on a link.
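In a Rails view that’s just another html option on link_to, and it can sit alongside :post => true (a sketch reusing the placeholder names from earlier in the thread):

<%= link_to "Delete", { :action => "destructive_action", :id => item }, :post => true, :rel => "nofollow" %>

Note that nofollow is only a hint to the engines that honour it, not any kind of access control.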

Also note that some crawlers are starting to parse and execute Javascript. We noticed that the Google crawler started executing code sitting behind Ajax.Updaters earlier this year.

I was wondering the opposite: if you have a ‘single page’ site model, are the crawlers smart enough to sift through the Javascript? And what about contextual ads; is there a way to trigger updates to these when substantial parts of the page content change?