'''Automated Content Access Protocol''' ("ACAP") was proposed in 2006 as a method of providing machine-readable permissions information for content, in the hope that it would allow automated processes (such as search-engine web crawling) to comply with publishers' policies without the need for human interpretation of legal terms. ACAP was developed by organisations that claimed to represent sections of the publishing industry ([[World Association of Newspapers]], [[European Publishers Council]], [[International Publishers Association]]).<ref>[http://www.the-acap.org/FAQs.php#faq15 ACAP FAQ: Where is the driving force behind ACAP?]</ref> It was intended to support more sophisticated online publishing business models, but was criticised for being biased towards the fears of publishers who see search and aggregation as a threat<ref name="douglas">{{cite web |url=http://blogs.telegraph.co.uk/technology/iandouglas/3624601/Acap_a_shot_in_the_foot_for_publishing/ |title=Acap: a shot in the foot for publishing |first=Ian |last=Douglas |date=2007-12-03 |work=[[The Daily Telegraph]] |publisher= |accessdate=2012-05-03}}</ref> rather than as a source of traffic and new readers.
== Status ==
In November 2007 ACAP announced that the first version of the standard was ready. No non-ACAP members, whether publishers or search engines, have adopted it so far. A Google spokesman appeared to have ruled out adoption.<ref>[http://blog.searchenginewatch.com/blog/080313-090443 Search Engine Watch report of Rob Jonas' comments on ACAP] {{webarchive |url=https://web.archive.org/web/20080318054618/http://blog.searchenginewatch.com/blog/080313-090443 |date=18 March 2008 }}</ref> In March 2008, Google's CEO [[Eric Schmidt]] stated that "At present it does not fit with the way our systems operate".<ref>[http://www.itwire.com/content/view/17206/53/ IT Wire report of Eric Schmidt's comments on ACAP] {{webarchive |url=https://web.archive.org/web/20080318122928/http://www.itwire.com/content/view/17206/53/ |date=18 March 2008 }}</ref> No progress has been announced since those remarks, and Google,<ref>[http://googlewebmastercentral.blogspot.com/2008/06/improving-on-robots-exclusion-protocol.html Improving on Robots Exclusion Protocol: Official Google Webmaster Central Blog]</ref> Yahoo! and MSN have since reaffirmed their commitment to the use of [[robots.txt]] and [[sitemaps]].
In 2011 management of ACAP was turned over to the [[International Press Telecommunications Council]], which announced that ACAP 2.0 would be based on [[ODRL|Open Digital Rights Language]] 2.0.<ref>[http://www.iptc.org/site/Home/Media_Releases/News_syndication_version_of_ACAP_ready_for_launch_and_management_handed_over_to_the_IPTC IPTC Media Release: News syndication version of ACAP ready for launch and management handed over to the IPTC] {{webarchive |url=https://web.archive.org/web/20110715223737/http://www.iptc.org/site/Home/Media_Releases/News_syndication_version_of_ACAP_ready_for_launch_and_management_handed_over_to_the_IPTC |date=15 July 2011 }}</ref>
== Previous milestones ==
In April 2007 ACAP commenced a pilot project in which the participants and technical partners undertook to specify and agree various use cases for ACAP to address. A technical workshop, attended by the participants and invited experts, was held in London to discuss the use cases and agree next steps.
By October 2006, ACAP had completed a feasibility stage and was formally announced<ref>[http://www.the-acap.org/press_releases/Frankfurt_acap_press_release_6_oct_06.pdf Official ACAP press release announcing project launch] {{webarchive |url=https://web.archive.org/web/20070610171119/http://www.the-acap.org/press_releases/Frankfurt_acap_press_release_6_oct_06.pdf |date=10 June 2007 }}</ref> at the [[Frankfurt Book Fair]] on 6 October 2006. A pilot project involving a group of major publishers and media groups, working alongside search engines and other technical partners, commenced in January 2007.
== ACAP and search engines ==
ACAP rules can be considered an extension of the [[Robots Exclusion Standard]] (''robots.txt'') for communicating [[website]] access information to automated [[web crawler]]s.
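For illustration, an ACAP-aware ''robots.txt'' file paired conventional Robots Exclusion Standard directives with ACAP-prefixed crawling permissions. The sketch below is indicative of that style only; the directive names are illustrative rather than a normative excerpt of the ACAP 1.0 specification.

<pre>
# Conventional Robots Exclusion Standard directives
User-agent: *
Disallow: /archive/

# ACAP-style extension block (directive names illustrative, not normative)
ACAP-crawler: *
ACAP-disallow-crawl: /archive/
ACAP-allow-crawl: /news/
</pre>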
No public search engines recognise ACAP. Only one, [[Exalead]], ever confirmed that it would adopt the standard,<ref>[http://www.exalead.com/software/news/press-releases/2007/07-01.php Exalead Joins Pilot Project on Automated Content Access]</ref> and it has since ceased functioning as a search portal to focus on the software side of its business.
== Comment and debate ==
The project has generated considerable online debate in the search,<ref>[http://blog.searchenginewatch.com/blog/060922-104102 Search Engine Watch article] {{webarchive |url=https://web.archive.org/web/20070127201118/http://blog.searchenginewatch.com/blog/060922-104102 |date=27 January 2007 }}</ref> content<ref>[http://shore.com/commentary/newsanal/items/2006/200601002publishdrm.html Shore.com article about ACAP] {{webarchive |url=https://web.archive.org/web/20061021020607/http://shore.com/commentary/newsanal/items/2006/200601002publishdrm.html |date=21 October 2006 }}</ref> and intellectual property<ref>[http://www.ip-watch.org/weblog/index.php?p=408&res=1280_ff&print=0 IP Watch article about ACAP]</ref> communities. The common themes in the commentary are:
# that keeping the specification simple will be critical to its successful implementation, and
# that the aims of the project are focussed on the needs of publishers, rather than readers. Many have seen this as a flaw.<ref name="douglas" /><ref>{{cite web |url=http://blogs.telegraph.co.uk/technology/iandouglas/jan2008/acapshootsback.htm |title=Acap shoots back |first=Ian |last=Douglas |date=2007-12-23 |work=[[The Daily Telegraph]] |dead-url=yes |archive-url=https://web.archive.org/web/20080907233655/http://blogs.telegraph.co.uk/technology/iandouglas/jan2008/acapshootsback.htm |archive-date=7 September 2008 <!--alternate archive-url=https://web.archive.org/web/20160304085425/http://blogs.telegraph.co.uk/technology/iandouglas/3624261/Acap_shoots_back/ -->}}</ref>
== See also ==
* [[Sitemaps]]
== References ==
{{Reflist}}
== External links ==
* [http://www.the-acap.org/ Official website]
* [http://media.guardian.co.uk/columnists/story/0,,1935057,00.html Google's hunger for the news] in ''[[The Guardian]]'' newspaper
* [https://web.archive.org/web/20061111015733/http://www.yelvington.com/20061016/why_you_should_care_about_automated_content_access_protocol Why you should care about Automated Content Access Protocol] (Steve Yelvington)
* [https://web.archive.org/web/20061114082058/http://www.wildlyappropriate.com/article/139/automated-content-access-protocol-why Automated Content Access Protocol: Why?]
* [http://www.currybet.net/cbet_blog/2007/12/acap_flawed_and_broken.php Acap: flawed and broken from the start]
* [http://www.laboratorium.net/archive/2007/12/08/automated_content_access_progress Automated Content Access Progress]
* [http://www.mediainfo.com/eandp/departments/online/article_display.jsp?vnu_content_id=1003724998 WAN calls on Google to embrace Acap]
== Further reading ==