{{Notability|date=July 2011}}
'''Automated Content Access Protocol''' ("ACAP") was proposed in 2006 as a method of providing machine-readable permissions information for content, in the hope that it would allow automated processes (such as search-engine [[web crawling]]) to comply with publishers' policies without the need for human interpretation of legal terms. ACAP was developed by organisations that claimed to represent sections of the publishing industry ([[World Association of Newspapers]], [[European Publishers Council]], [[International Publishers Association]]).<ref>[http://www.the-acap.org/FAQs.aspx#FAQ10 ACAP FAQ: Where is the driving force behind ACAP?]</ref> It was intended to provide support for more sophisticated online publishing business models, but was criticised for being biased towards the fears of publishers who see search and aggregation as a threat<ref name="douglas">[http://blogs.telegraph.co.uk/ian_douglas/blog/2007/12/03/acap_a_shot_in_the_foot_for_publishing Acap: a shot in the foot for publishing]</ref> rather than as a source of traffic and new readers.
==ACAP and search engines==
ACAP rules can be considered an extension of the [[Robots Exclusion Standard]] (or ''robots.txt'') for communicating [[website]] access information to automated [[web crawler]]s.
It has been suggested<ref>[http://googlesystem.blogspot.com/2006/09/news-publishers-want-full-control-of.html News Publishers Want Full Control of the Search Results]</ref> that ACAP is unnecessary, since the ''robots.txt'' protocol already exists for the purpose of managing search engine access to websites. However, others<ref>[http://www.yelvington.com/20061016/why_you_should_care_about_automated_content_access_protocol Why you should care about Automated Content Access Protocol]</ref> support ACAP’s view<ref>[http://www.the-acap.org/faqs.php#existing_protocols ACAP FAQ on robots.txt]</ref> that ''robots.txt'' is no longer sufficient. ACAP argues that ''robots.txt'' was devised at a time when both search engines and online publishing were in their infancy and as a result is insufficiently nuanced to support today’s much more sophisticated business models of search and online publishing. ACAP aims to make it possible to express more complex permissions than the simple binary choice of “inclusion” or “exclusion”.
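The contrast can be sketched with an illustrative permissions file. The conventional directives are standard ''robots.txt''; the ACAP field names shown follow the published ACAP 1.0 vocabulary, though the specific paths and rules here are hypothetical examples rather than taken from any real site:

```
# Conventional robots.txt: a single binary choice per path
User-agent: *
Disallow: /archive/

# ACAP extension fields (illustrative example; field names per ACAP 1.0)
ACAP-crawler: *
ACAP-disallow-crawl: /archive/
ACAP-allow-crawl: /news/
```

Because the ACAP fields use a distinct prefix, a crawler that does not implement ACAP simply ignores them and falls back on the ordinary ''robots.txt'' directives, while an ACAP-aware crawler can read the more granular permissions.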
As an early priority, ACAP is intended to provide a practical and consensual solution to some of the rights-related issues which in some cases have led to litigation<ref>[http://www.out-law.com/page-7427 "Is Google Legal?" OutLaw article about Copiepresse litigation]</ref><ref>[http://media.guardian.co.uk/newmedia/comment/0,,2013051,00.html Guardian article about Google's failed appeal in Copiepresse case]</ref> between publishers and search engines.
* [http://www.mediainfo.com/eandp/departments/online/article_display.jsp?vnu_content_id=1003724998 WAN calls on Google to embrace Acap] - ''Editor & Publisher''
* [http://www.journalism.co.uk/2/articles/531181.php Google rejects adoption of Acap standard] - journalism.co.uk
{{Use dmy dates|date=July 2011}}