Competitions and prizes in artificial intelligence
The [[Rumelhart Prize|David E. Rumelhart prize]] is an annual award for making a "significant contemporary contribution to the theoretical foundations of human cognition". The prize is $100,000.
 
The Human-Competitive Award<ref>{{Cite web|url=http://www.human-competitive.org/|title=Human Competitive|website=www.human-competitive.org|access-date=2008-02-22|archive-date=2008-10-06|archive-url=https://web.archive.org/web/20081006162048/http://www.human-competitive.org/|url-status=live}}</ref> is an annual challenge started in 2004 to reward results "competitive with the work of creative and inventive humans". The prize is $10,000. Entries are required to use [[evolutionary computing]].
 
The Intel AI Global Impact Festival is an annual international competition on artificial intelligence technology held by Intel Corporation<ref>{{Cite web |title=Intel {{!}} Data Center Solutions, IoT, and PC Innovation |url=https://www.intel.com/content/www/in/en/homepage.html |access-date=2023-06-21 |website=Intel |language=en |archive-date=2013-07-09 |archive-url=https://web.archive.org/web/20130709144017/http://www.intel.com/content/www/in/en/homepage.html |url-status=live }}</ref> for school and college students, with prizes upwards of $15,000. Entrants compete in two age brackets: 13–18, and 18 and above.
 
The [[IJCAI Award for Research Excellence]] is a biennial award given at the [[IJCAI]] conference to a researcher in [[artificial intelligence]] in recognition of the excellence of their career.
 
The 2011 [[Federal Virtual World Challenge]], advertised by The White House<ref name="White House Publication, Challenge.Gov Fact Sheet">{{cite web |url=https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/challenge-gov-fact-sheet.pdf |year=2010 |access-date=June 7, 2013 |via=[[NARA|National Archives]] |work=[[Office of Science and Technology Policy]] |title= White House Publication, Challenge.Gov Fact Sheet |archive-date=January 26, 2017 |archive-url=https://web.archive.org/web/20170126193322/https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/challenge-gov-fact-sheet.pdf |url-status=live }}</ref> and sponsored by the [[U.S. Army Research Laboratory]]'s Simulation and Training Technology Center,<ref name="White House Publication, Challenge.Gov Fact Sheet" /><ref name="Federal Virtual Worlds Challenge Winners Announced">{{cite web|url=http://www.arl.army.mil/www/default.cfm?page=571 |publisher=United States Army Research Laboratory |year=2011|access-date=June 7, 2013|title= Federal Virtual Worlds Challenge Winners Announced|archive-date=March 7, 2013|archive-url=https://web.archive.org/web/20130307090909/https://www.arl.army.mil/www/default.cfm?page=571|url-status=live}}</ref><ref name="Army chooses winners in battle of the virtual worlds">{{cite web|url=http://defensesystems.com/articles/2011/06/02/army-names-winners-of-federal-virtual-worlds-contest.aspx |publisher=DefenseSystems.com |year=2011|access-date=June 7, 2013|title= Army chooses winners in battle of the virtual worlds|archive-date=February 28, 2014|archive-url=https://web.archive.org/web/20140228024132/http://defensesystems.com/articles/2011/06/02/army-names-winners-of-federal-virtual-worlds-contest.aspx|url-status=live}}</ref> offered a total of US$52,000 in cash prizes for general artificial intelligence applications, including "adaptive learning systems, intelligent conversational bots, adaptive behavior (objects or processes)" and more.<ref name="2011 FVWC">{{cite web |url=http://science.dodlive.mil/2010/09/08/announcing-the-2011-federal-virtual-worlds-challenge/ |publisher="Armed with Science", a daily blog site published by the United States Department of Defense |year=2010 |access-date=September 7, 2013 |title= 2011 US DoD Artificial Intelligence Competition |archive-date=April 24, 2013 |archive-url=https://web.archive.org/web/20130424124120/http://science.dodlive.mil/2010/09/08/announcing-the-2011-federal-virtual-worlds-challenge/ |url-status=live }}</ref>
 
The Machine Intelligence Prize is awarded annually by the [[British Computer Society]] for progress towards machine intelligence.<ref>{{Cite web|url=http://www.bcs-sgai.org/micomp/|title=SGAI: BCS Machine Intelligence Competition|website=www.bcs-sgai.org|access-date=2008-02-22|archive-date=2008-02-06|archive-url=https://web.archive.org/web/20080206145311/http://www.bcs-sgai.org/micomp/|url-status=live}}</ref>
 
[[Kaggle]] hosts machine learning competitions in which, per its own description, "the world's largest community of data scientists compete to solve most valuable problems".
The [[DARPA Grand Challenge]] is a series of competitions to promote [[driverless car]] technology, aimed at a congressional mandate stating that by 2015 one-third of the operational ground combat vehicles of the US Armed Forces should be unmanned.<ref>[http://www.darpa.mil/grandchallenge04/sponsor_toolkit/congress_lang.pdf Congressional Mandate] {{Webarchive|url=https://web.archive.org/web/20080216094908/http://www.darpa.mil/grandchallenge04/sponsor_toolkit/congress_lang.pdf |date=2008-02-16 }} DARPA</ref> While the first race had no winner, the second awarded a $2 million prize for the autonomous navigation of a hundred-mile trail, using [[GPS]], computers and a sophisticated array of sensors. In November 2007, DARPA introduced the [[DARPA Urban Challenge]], a sixty-mile urban area race requiring vehicles to navigate through traffic. In November 2010 the US Armed Forces extended the competition with the $1.6 million prize [[Multi Autonomous Ground-robotic International Challenge]] to consider cooperation between multiple vehicles in a simulated-combat situation.
 
[[Roborace]] will be a global motorsport championship with [[Autonomous car|autonomously driving]], [[Electric vehicle|electrically powered]] vehicles. The series will be run as a support series during the [[Formula E]] championship for electric vehicles.<ref name="announcement">{{cite web|url=http://www.fiaformulae.com/en/news/2015/november/formula-e-kinetik-announce-roborace-a-global-driverless-championship.aspx|title=Formula E & Kinetik announce driverless support series|publisher=fiaformulae.com|date=2015-11-27|access-date=2015-12-12|archive-date=2016-02-02|archive-url=https://web.archive.org/web/20160202030131/http://www.fiaformulae.com/en/news/2015/november/formula-e-kinetik-announce-roborace-a-global-driverless-championship.aspx|url-status=dead}}</ref> This will be the first global championship for driverless cars.<ref>{{cite web|url=https://www.engadget.com/2015/11/28/formula-e-roborace/|title=Formula E is planning the first racing series for driverless cars|publisher=engadget.com|date=2015-11-28|access-date=2017-08-26|archive-date=2017-07-29|archive-url=https://web.archive.org/web/20170729010248/https://www.engadget.com/2015/11/28/formula-e-roborace/|url-status=live}}</ref>
 
==Data-mining and prediction==
The Cyc TPTP Challenge is a competition to develop reasoning methods for the [[Cyc]] comprehensive ontology and database of everyday common sense knowledge.<ref>{{cite web|url=http://www.opencyc.org/doc/tptp_challenge_problem_set|archive-url=https://web.archive.org/web/20120319123128/http://www.opencyc.org/doc/tptp_challenge_problem_set|url-status=dead|archive-date=2012-03-19|title=The Cyc TPTP Challenge Problem Set|website=opencyc.org}}</ref> The prize is 100 euros for "each winner of two related challenges".{{citation needed|date=March 2023}}
 
The [[Eternity II]] challenge was a [[constraint satisfaction]] problem very similar to the [[Tetravex]] game. The objective was to lay 256 tiles on a 16×16 grid while satisfying a number of constraints. The problem is known to be [[NP-complete]].<ref>{{cite journal | doi = 10.1016/j.ipl.2006.04.010 | volume=99 | title=Tetravex is NP-complete | journal=Information Processing Letters | year=2006 | pages=171–174| arxiv=0903.1147 | last1=Takenaga | first1=Yasuhiko | last2=Walsh | first2=Toby | issue=5 | s2cid=7228681 }}</ref> The prize was US$2,000,000.<ref>http://uk.eternityii.com/competition-rules-eternity-2/ {{Webarchive|url=https://web.archive.org/web/20090120043104/http://uk.eternityii.com/competition%2Drules%2Deternity%2D2/ |date=2009-01-20 }} {{bare URL inline|date=April 2023}}</ref> The competition ended in December 2010.
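Edge-matching puzzles of this kind can be phrased directly as a constraint satisfaction search. The following is a minimal illustrative sketch, not the real puzzle: it uses an invented 2×2 instance with fixed-orientation tiles (Tetravex-style), whereas Eternity II used 256 rotatable tiles on a 16×16 grid with additional border constraints. Each tile carries four edge values, adjacent tiles must agree on their shared edge, and a backtracking search fills the grid.

```python
N = 2  # grid side length of the toy instance

def fits(grid, pos, tile):
    """Check a candidate tile against its already-placed left and top neighbours."""
    r, c = divmod(pos, N)
    top, right, bottom, left = tile
    if c > 0 and grid[pos - 1][1] != left:   # left neighbour's right edge
        return False
    if r > 0 and grid[pos - N][2] != top:    # top neighbour's bottom edge
        return False
    return True

def solve(tiles):
    """Backtracking search filling positions 0..N*N-1 in row-major order."""
    def backtrack(grid, remaining):
        if not remaining:
            return list(grid)
        pos = len(grid)
        for i, tile in enumerate(remaining):
            if fits(grid, pos, tile):
                result = backtrack(grid + [tile], remaining[:i] + remaining[i + 1:])
                if result is not None:
                    return result
        return None
    return backtrack([], tiles)

# Invented tiles (top, right, bottom, left) admitting a valid 2x2 arrangement.
tiles = [
    (0, 1, 2, 0),  # top-left
    (0, 0, 3, 1),  # top-right
    (2, 4, 0, 0),  # bottom-left
    (3, 0, 0, 4),  # bottom-right
]

solution = solve(tiles)
```

The same formulation scales to the full puzzle in principle, but the NP-completeness result above means the search space explodes: no entrant ever claimed the full prize.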
 
==Games==
The Ing Prize was a substantial money prize attached to the World [[Computer Go]] Congress, starting from 1985 and expiring in 2000. It was a graduated set of handicap challenges against young professional players with increasing prizes as the handicap was lowered. At the time it expired in 2000, the unclaimed prize was 400,000 NT dollars for winning a 9-stone handicap match.
 
The AAAI [[General Game Playing]] Competition is a competition to develop programs that are effective at [[General Game Playing|general game playing]].<ref>{{Cite web|url=http://games.stanford.edu/competition/competition.html|archive-url=https://web.archive.org/web/20080629220940/http://games.stanford.edu/competition/competition.html|url-status=dead|title=General Game Playing<!-- Bot generated title -->|archive-date=June 29, 2008}}</ref><ref>{{Cite web|url=http://www.aaai.org/Conferences/AAAI/2007/aaai07game.php|title=AAAI-07 General Game Playing Competition|website=www.aaai.org|access-date=2008-05-14|archive-date=2008-07-20|archive-url=https://web.archive.org/web/20080720031254/http://www.aaai.org/Conferences/AAAI/2007/aaai07game.php|url-status=live}}</ref> Given a definition of a game, the program must play it effectively without human intervention. Since the game is not known in advance, competitors cannot tailor their programs to a particular game. The prize in 2006 and 2007 was $10,000.
 
The General Video Game AI Competition (GVGAI<ref>{{Cite web|url=http://www.gvgai.net/|title=The GVG-AI Competition|website=www.gvgai.net|access-date=2020-03-09|archive-date=2020-02-28|archive-url=https://web.archive.org/web/20200228091134/http://www.gvgai.net/|url-status=live}}</ref>) poses the problem of creating artificial intelligence that can play a wide, and in principle unlimited, range of games. Concretely, it tackles the problem of devising an algorithm that is able to play any game it is given, even if the game is not known a priori. Additionally, the contest poses the challenge of creating level and rule generators for any game it is given. This area of study can be seen as an approximation of general artificial intelligence, with very little room for game-dependent heuristics. The competition runs yearly in different tracks: single-player planning,<ref>{{Cite web|url=http://www.diego-perez.net/papers/GVGAI2014Competition.pdf|title=Single Player Planning GVGAI|access-date=2018-01-26|archive-date=2018-06-14|archive-url=https://web.archive.org/web/20180614022408/http://www.diego-perez.net/papers/GVGAI2014Competition.pdf|url-status=live}}</ref> two-player planning,<ref>{{Cite web|url=http://www.diego-perez.net/papers/GVGAI20162PCompetition.pdf|title=Two-Player Planning GVGAI|access-date=2018-01-26|archive-date=2018-01-27|archive-url=https://web.archive.org/web/20180127084114/http://www.diego-perez.net/papers/GVGAI20162PCompetition.pdf|url-status=live}}</ref> single-player learning,<ref>{{Cite web|url=http://www.liujialin.tech/publications/GVGAISingleLearning_manual.pdf|title=Single Player Learning GVGAI|access-date=2018-01-26|archive-date=2018-01-27|archive-url=https://web.archive.org/web/20180127143214/http://www.liujialin.tech/publications/GVGAISingleLearning_manual.pdf|url-status=live}}</ref> level<ref>{{Cite web|url=http://www.diego-perez.net/papers/GVGLG.pdf|title=Level Generation GVGAI|access-date=2018-01-26|archive-date=2018-09-27|archive-url=https://web.archive.org/web/20180927065826/http://www.diego-perez.net/papers/GVGLG.pdf|url-status=live}}</ref> and rule<ref>{{Cite web|url=http://www.diego-perez.net/papers/GVGRuleGeneration.pdf|title=Rule Generation GVGAI|access-date=2018-01-26|archive-date=2018-01-27|archive-url=https://web.archive.org/web/20180127084000/http://www.diego-perez.net/papers/GVGRuleGeneration.pdf|url-status=live}}</ref> generation, with prizes in each track ranging from 200 to 500 US dollars for winners and runners-up.
 
The 2007 Ultimate Computer Chess Challenge was a competition organised by [[FIDE|World Chess Federation]] that pitted
[[Deep Fritz]] against [[Deep Junior]]. The prize was $100,000.
 
The annual [[Arimaa#Arimaa Challenge|Arimaa Challenge]] offered a $10,000 prize until the year 2020 to develop a program that plays the board game [[Arimaa]] and defeats a group of selected human opponents. In 2015, David Wu's bot bot_sharp beat the humans, losing only 2 games out of 9.<ref>{{Cite web|url=http://arimaa.com/arimaa/challenge/2015/showGames.cgi|title=2015 Arimaa Challenge Match|website=arimaa.com|access-date=2015-09-26|archive-date=2015-10-19|archive-url=https://web.archive.org/web/20151019162359/http://arimaa.com/arimaa/challenge/2015/showGames.cgi|url-status=live}}</ref> As a result, the Arimaa Challenge was declared over and David Wu received a prize of $12,000 ($2,000 of it offered by third parties for 2015's championship).
 
[[2K Australia]] offered a prize worth A$10,000 for a game-playing bot for a [[first-person shooter]] whose aim was to convince a panel of judges that it was actually a human player. The competition started in 2008 and was won in 2012; a new competition was planned for 2014.<ref>{{Cite web|url=http://www.botprize.org/|title=Bot Prize &#124; Robots, AI, and Media|access-date=2008-11-13|archive-date=2008-12-22|archive-url=https://web.archive.org/web/20081222044949/http://www.botprize.org/|url-status=live}}</ref>
 
The [[Google AI Challenge]]<ref>{{cite web |url=http://www.ai-contest.com/index.php |title=Google AI Challenge |website=www.ai-contest.com |access-date=13 January 2022 |archive-url=https://web.archive.org/web/20100908001350/http://www.ai-contest.com/index.php |archive-date=8 September 2010 |url-status=dead}}</ref> was a bi-annual online contest organized by the [[University of Waterloo]] Computer Science Club and sponsored by [[Google]] that ran from 2009 to 2011. Each year a game was chosen and contestants submitted specialized [[computer game bot|automated bots]] to play against other competing bots.