Automated Content Access Protocol

'''Automated Content Access Protocol''' ("ACAP") was proposed in 2006 as a method of providing machine-readable permissions information for content, in the hope that it would allow automated processes (such as search-engine web crawling) to comply with publishers' policies without the need for human interpretation of legal terms. ACAP was developed by organisations that claimed to represent sections of the publishing industry ([[World Association of Newspapers]], [[European Publishers Council]], [[International Publishers Association]]).<ref>[http://www.the-acap.org/FAQs.aspx#FAQ10 ACAP FAQ: Where is the driving force behind ACAP?]</ref> It was intended to support more sophisticated online publishing business models, but was criticised for being biased towards the fears of publishers who see search and aggregation as a threat,<ref name="douglas">[http://blogs.telegraph.co.uk/ian_douglas/blog/2007/12/03/acap_a_shot_in_the_foot_for_publishing Acap: a shot in the foot for publishing]</ref> rather than as a source of traffic and new readers.
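ACAP version 1.0 was published as a set of extensions to the [[robots.txt]] convention. As an illustrative sketch only (the directive names below follow the general pattern of the published drafts, and the exact syntax should be checked against the ACAP 1.0 specification), an ACAP-extended robots.txt might look like:

```text
# Conventional robots.txt directives, still honoured by all crawlers
User-agent: *
Disallow: /archive/

# ACAP extensions (illustrative; exact directive names per the ACAP 1.0 spec)
ACAP-crawler: *
ACAP-disallow-crawl: /archive/
ACAP-allow-crawl: /news/
```

The design intent was that ACAP-aware crawlers would read the richer permissions while legacy crawlers would fall back to the plain robots.txt directives in the same file.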
 
==Status==
In November 2007 ACAP announced that the first version of the standard was ready. No non-ACAP members, whether publishers or search engines, have adopted it so far. A Google spokesman appeared to have ruled out adoption.<ref>[http://blog.searchenginewatch.com/blog/080313-090443 Search Engine Watch report of Rob Jonas' comments on ACAP]</ref> In March 2008, Google's CEO [[Eric Schmidt]] stated that "At present it does not fit with the way our systems operate".<ref>[http://www.itwire.com/content/view/17206/53/ IT Wire report of Eric Schmidt's comments on ACAP]</ref> No progress has been announced since those remarks, and Google,<ref>[http://googlewebmastercentral.blogspot.com/2008/06/improving-on-robots-exclusion-protocol.html Improving on Robots Exclusion Protocol: Official Google Webmaster Central Blog]</ref> along with Yahoo and MSN, has since reaffirmed its commitment to the use of [[robots.txt]] and [[Sitemaps]].