Automated Content Access Protocol
|publisher=iTWire
|access-date=March 11, 2018
}}</ref> No progress has been announced since the remarks in March 2008, and Google,<ref>[https://googlewebmastercentral.blogspot.com/2008/06/improving-on-robots-exclusion-protocol.html Improving on Robots Exclusion Protocol: Official Google Webmaster Central Blog]</ref> Yahoo!, and MSN have since reaffirmed their commitment to the use of [[robots.txt]] and [[sitemaps]].
 
In 2011, management of ACAP was turned over to the [[International Press Telecommunications Council]], which announced that ACAP 2.0 would be based on [[ODRL|Open Digital Rights Language]] 2.0.<ref>[http://www.iptc.org/site/Home/Media_Releases/News_syndication_version_of_ACAP_ready_for_launch_and_management_handed_over_to_the_IPTC IPTC Media Release: News syndication version of ACAP ready for launch and management handed over to the IPTC] {{webarchive |url=https://web.archive.org/web/20110715223737/http://www.iptc.org/site/Home/Media_Releases/News_syndication_version_of_ACAP_ready_for_launch_and_management_handed_over_to_the_IPTC |date=15 July 2011 }}</ref>
ACAP rules can be considered an extension of the [[Robots Exclusion Standard]] (or ''"robots.txt"'') for communicating [[website]] access information to automated [[web crawler]]s.
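The extension mechanism can be illustrated with a short sketch. The ACAP- prefix follows the ACAP 1.0 convention of adding new directives alongside conventional robots.txt rules; the specific paths and rules below are hypothetical, for illustration only, not drawn from any real publisher's file:

```
# Conventional Robots Exclusion Standard rules
User-agent: *
Disallow: /archive/

# ACAP 1.0-style extension fields (illustrative; paths are hypothetical)
ACAP-crawler: *
ACAP-disallow-crawl: /archive/
ACAP-allow-crawl: /news/
```

Because robots.txt parsers ignore fields they do not recognise, a crawler that implements only the Robots Exclusion Standard would simply skip the ACAP- lines, which is how the extension aimed to remain backward compatible.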
 
It has been suggested<ref>[https://googlesystem.blogspot.com/2006/09/news-publishers-want-full-control-of.html News Publishers Want Full Control of the Search Results]</ref> that ACAP is unnecessary, since the ''robots.txt'' protocol already exists for the purpose of managing search engine access to websites. However, others<ref>{{cite web
|url=http://www.yelvington.com/20061016/why_you_should_care_about_automated_content_access_protocol
|title=Why you should care about Automated Content Access Protocol
As an early priority, ACAP is intended to provide a practical and consensual solution to some of the rights-related issues that have in some cases led to litigation<ref>[http://www.out-law.com/page-7427 "Is Google Legal?" OutLaw article about Copiepresse litigation]</ref><ref>[http://media.guardian.co.uk/newmedia/comment/0,,2013051,00.html Guardian article about Google's failed appeal in Copiepresse case]</ref> between publishers and search engines.
 
The Robots Exclusion Standard has always been implemented voluntarily by both content providers and search engines, and ACAP implementation is similarly voluntary for both parties.<ref name="Paul 2008">{{cite magazine |last=Paul |first=Ryan |title=A skeptical look at the Automated Content Access Protocol |magazine=Ars Technica | date=14 January 2008 | url=https://arstechnica.com/information-technology/2008/01/skeptical-look-at-acap/ | access-date=9 January 2018}}</ref> However, Beth Noveck has expressed concern that the emphasis on communicating access permissions in legal terms will lead to lawsuits if search engines do not comply with ACAP permissions.<ref>{{cite web | last=Noveck |first=Beth Simone |title=Automated Content Access Protocol | website=Cairns Blog | date=1 December 2007 | url=http://cairns.typepad.com/blog/2007/12/automated-conte.html | access-date=9 January 2018}}</ref>
 
No public search engine recognises ACAP. Only one, [[Exalead]], ever confirmed that it would adopt the standard,<ref>[http://www.exalead.com/software/news/press-releases/2007/07-01.php Exalead Joins Pilot Project on Automated Content Access]</ref> but it has since ceased functioning as a search portal to focus on the software side of its business.