{{Distinguish|text= [[Application Configuration Access Protocol]] (ACAP)}}
'''Automated Content Access Protocol''' ("ACAP") was proposed in 2006 as a method of providing machine-readable permissions information for content, in the hope that it would allow automated processes (such as search-engine web crawling) to comply with publishers' policies without the need for human interpretation of legal terms. ACAP was developed by organisations that claimed to represent sections of the publishing industry ([[World Association of Newspapers]], [[European Publishers Council]], [[International Publishers Association]]).<ref>[http://www.the-acap.org/FAQs.php#faq15 ACAP FAQ: Where is the driving force behind ACAP?]</ref> It was intended to support more sophisticated online publishing business models, but was criticised as biased towards the fears of publishers who see search and aggregation as a threat<ref name="douglas">{{cite web |url=http://blogs.telegraph.co.uk/technology/iandouglas/3624601/Acap_a_shot_in_the_foot_for_publishing/ |archive-url=https://web.archive.org/web/20091114081002/http://blogs.telegraph.co.uk/technology/iandouglas/3624601/Acap_a_shot_in_the_foot_for_publishing/ |url-status=dead |archive-date=14 November 2009 |title=Acap: a shot in the foot for publishing |first=Ian |last=Douglas |date=3 December 2007 |work=[[The Daily Telegraph]] |accessdate=3 May 2012}}</ref> rather than as a source of traffic and new readers.
 
== Status ==
No progress has been announced since March 2008, and Google,<ref>[https://googlewebmastercentral.blogspot.com/2008/06/improving-on-robots-exclusion-protocol.html Improving on Robots Exclusion Protocol: Official Google Webmaster Central Blog]</ref> Yahoo! and MSN have since reaffirmed their commitment to the use of [[robots.txt]] and [[sitemaps]].
 
In 2011, management of ACAP was handed over to the [[International Press Telecommunications Council]], which announced that ACAP 2.0 would be based on [[ODRL|Open Digital Rights Language]] 2.0.<ref>[http://www.iptc.org/site/Home/Media_Releases/News_syndication_version_of_ACAP_ready_for_launch_and_management_handed_over_to_the_IPTC IPTC Media Release: News syndication version of ACAP ready for launch and management handed over to the IPTC] {{webarchive |url=https://web.archive.org/web/20110715223737/http://www.iptc.org/site/Home/Media_Releases/News_syndication_version_of_ACAP_ready_for_launch_and_management_handed_over_to_the_IPTC |date=15 July 2011 }}</ref>
ACAP rules can be considered an extension of the [[Robots Exclusion Standard]] (''robots.txt'') for communicating [[website]] access information to automated [[web crawler]]s.
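In its first version, ACAP was expressed as additional, ACAP-prefixed fields placed alongside conventional rules in a ''robots.txt'' file. The following sketch illustrates the general shape only; the directive names follow the ACAP 1.0 drafts, and the paths are hypothetical:

<syntaxhighlight lang="text">
# Conventional Robots Exclusion Standard rules, read by all crawlers
User-agent: *
Disallow: /private/

# ACAP extension fields (sketch; exact field names varied between drafts)
ACAP-crawler: *
ACAP-allow-crawl: /
ACAP-disallow-crawl: /private/
ACAP-disallow-index: /archive/
</syntaxhighlight>

A crawler unaware of ACAP ignores the unfamiliar fields and applies only the conventional rules, so such a file could be deployed without disrupting existing ''robots.txt'' processing.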
 
It has been suggested<ref>[https://googlesystem.blogspot.com/2006/09/news-publishers-want-full-control-of.html News Publishers Want Full Control of the Search Results]</ref> that ACAP is unnecessary, since the ''robots.txt'' protocol already exists for the purpose of managing search engine access to websites. However, others<ref>{{cite web
|url=http://www.yelvington.com/20061016/why_you_should_care_about_automated_content_access_protocol
|title=Why you should care about Automated Content Access Protocol
As an early priority, ACAP was intended to provide a practical and consensual solution to some of the rights-related issues that have in some cases led to litigation<ref>[http://www.out-law.com/page-7427 "Is Google Legal?" OutLaw article about Copiepresse litigation]</ref><ref>[http://media.guardian.co.uk/newmedia/comment/0,,2013051,00.html Guardian article about Google's failed appeal in Copiepresse case]</ref> between publishers and search engines.
 
The Robots Exclusion Standard has always been implemented voluntarily by both content providers and search engines, and ACAP implementation is similarly voluntary for both parties.<ref name="Paul 2008">{{cite magazine |last=Paul |first=Ryan |title=A skeptical look at the Automated Content Access Protocol |magazine=Ars Technica | date=14 January 2008 | url=https://arstechnica.com/information-technology/2008/01/skeptical-look-at-acap/ | access-date=9 January 2018}}</ref> However, Beth Noveck has expressed concern that the emphasis on communicating access permissions in legal terms will lead to lawsuits if search engines do not comply with ACAP permissions.<ref>{{cite web | last=Noveck |first=Beth Simone |title=Automated Content Access Protocol | website=Cairns Blog | date=1 December 2007 | url=http://cairns.typepad.com/blog/2007/12/automated-conte.html | access-date=9 January 2018}}</ref>
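Because both sides participate voluntarily, enforcement rests entirely with the crawler: a robot observes ''robots.txt'' (or ACAP) permissions only if it chooses to fetch and consult them before requesting pages. The following minimal Python sketch shows that voluntary check using the standard library's <code>urllib.robotparser</code>; the site and user-agent names are placeholders:

<syntaxhighlight lang="python">
from urllib import robotparser

# Fetch and parse the site's robots.txt; nothing forces a crawler to do this.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()

# A compliant crawler checks each URL against the parsed rules before fetching.
url = "https://example.com/news/story.html"   # placeholder URL
if rp.can_fetch("ExampleBot", url):           # placeholder user agent
    print("robots.txt permits crawling", url)
else:
    print("robots.txt disallows", url, "- a compliant crawler skips it")
</syntaxhighlight>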
 
No public search engine recognises ACAP. Only one, [[Exalead]], ever confirmed that it would adopt the standard,<ref>[http://www.exalead.com/software/news/press-releases/2007/07-01.php Exalead Joins Pilot Project on Automated Content Access]</ref> but it has since ceased functioning as a search portal to focus on the software side of its business.
{{Use dmy dates|date=January 2018}}
 
[[Category:Web technology]]
[[Category:Internet bots]]