'''Explainable AI''' ('''XAI'')''') is a neologism that has recently reached the parlance of [[Artificialartificial Intelligenceintelligence]]. Its purpose is to provide accountability when addressing technological innovations ascribed to dynamic and none non-linearly programmed systems, e.g. [[Artificialartificial neural networks]], [[Deepdeep learning]], and [[Geneticgenetic Algorithmsalgorithms]], etc.
 
It is about asking '''how''' algorithms arrive at their decisions. In that sense, it is a technical discipline that provides transparency into the notion of the [[right to explanation]].
 
AI-related algorithmic practices (supervised and unsupervised) work on a model of success oriented towards some form of correct state, with singular focus placed on an expected output. For example, an image-recognition algorithm's success is measured by its ability to recognize certain objects, and failure to do so indicates that the algorithm requires further tuning. Because tuning is dynamic, closely tied to function refinement and the training data set, a granular understanding of the model's underlying decision process is rarely sought. A minimal sketch of this output-only notion of success follows below.
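The sketch below illustrates the point under stated assumptions: the data set and model (scikit-learn's iris data and a random forest) are illustrative stand-ins, not anything prescribed by the field. The only quantity ever examined is the match between output and expected output.

<syntaxhighlight lang="python">
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a toy data set (an illustrative assumption) and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a black-box model. Its many internal decision thresholds are
# never inspected; only the output is scored.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# "Success" is a single number: how often the output matches the
# expected label. A low score triggers further tuning, not an
# explanation of why the model behaves as it does.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
</syntaxhighlight>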
 
XAI aims to address this black-box approach and make the introspection of these dynamic systems tractable, allowing humans to understand how computational machines develop their own models for solving tasks.
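As one concrete illustration of such introspection, the sketch below applies permutation feature importance, a generic model-agnostic explanation technique chosen here only as an example (the article does not prescribe a specific method), to the same kind of black-box classifier:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

# Shuffle one input feature at a time and measure how much accuracy
# drops: a large drop means the black box relied heavily on that
# feature, yielding a crude, human-readable account of its behaviour.
rng = np.random.default_rng(0)
for i, name in enumerate(data.feature_names):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, i])  # destroy the information in one feature
    drop = baseline - accuracy_score(y_test, model.predict(X_perm))
    print(f"{name}: accuracy drop {drop:.3f}")
</syntaxhighlight>

Shuffling a feature severs its relationship with the label, so the size of the resulting accuracy drop indicates how much the model depends on that feature, without requiring access to the model's internals.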
 
== Definition ==
A universal definition of the term has yet to be established; however, the DARPA XAI program defines its aims as follows:
 
* Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
* Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.<ref>{{cite web|title=Explainable Artificial Intelligence (XAI)|url=https://www.darpa.mil/program/explainable-artificial-intelligence|website=DARPA|publisher=DARPA|accessdate=17 July 2017}}</ref>
 
== History ==
Since DARPA's introduction of its program in 2016, a number of initiatives have started to address the issue of algorithmic accountability and to provide transparency concerning how technologies within this ___domain function.
 
* 25 April 2017: Nvidia published the paper "Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car"<ref>{{cite web|title=Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car|url=https://arxiv.org/pdf/1704.07911.pdf|website=arXiv|publisher=arXiv|accessdate=17 July 2017}}</ref>
* 13 July 2017: Accenture recommended "Responsible AI: Why we need Explainable AI"<ref>{{cite web|title=Responsible AI: Why we need Explainable AI|url=https://www.youtube.com/watch?v=A668RoogabM|website=YouTube|publisher=Accenture|accessdate=17 July 2017}}</ref>
 
== Accountability ==
Examples of the effects of such opaque decision-making have already been seen in the following sectors:
* Neural network tank imaging<ref>{{cite web|title=Neural Network Tank image|url=https://neil.fraser.name/writing/tank/|website=Neil Fraser|publisher=Neil Fraser|accessdate=17 July 2017}}</ref>
* Antenna design ([[evolved antenna]])<ref>{{cite web|title=NASA 'Evolutionary' software automatically designs antenna|url=https://www.nasa.gov/mission_pages/st-5/main/04-55AR.html|website=NASA|publisher=NASA|accessdate=17 July 2017}}</ref>
* Algorithmic trading ([[high-frequency trading]])<ref>{{cite web|title=The Flash Crash: The Impact of High Frequency Trading on an Electronic Market|url=http://www.cftc.gov/idc/groups/public/@economicanalysis/documents/file/oce_flashcrash0314.pdf|website=CFTC|publisher=CFTC|accessdate=17 July 2017}}</ref>
* Medical diagnoses<ref>{{cite web|title=Can machine-learning improve cardiovascular risk prediction using routine clinical data?|url=http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0174944|website=PLOS One|publisher=PLOS One|accessdate=17 July 2017}}</ref>
* Autonomous vehicles<ref>{{cite web|title=Tesla says it has 'no way of knowing' if autopilot was used in fatal Chinese crash|url=https://www.theguardian.com/technology/2016/sep/14/tesla-fatal-crash-china-autopilot-gao-yaning|website=Guardian|publisher=Guardian|accessdate=17 July 2017}}</ref><ref>{{cite web|title=Joshua Brown, Who Died in Self-Driving Accident, Tested Limits of His Tesla|url=https://www.nytimes.com/2016/07/02/business/joshua-brown-technology-enthusiast-tested-the-limits-of-his-tesla.html|website=New York Times|publisher=New York Times|accessdate=17 July 2017}}</ref>
 
== Recent developments ==
As regulators, official bodies, and general users come to depend on AI-based dynamic systems, clearer accountability will be required for their decision-making processes to ensure trust and transparency. Evidence of this requirement gaining momentum can be seen with the launch of the first global conference exclusively dedicated to this emerging discipline, the International Joint Conference on Artificial Intelligence: Workshop on Explainable Artificial Intelligence (XAI).<ref>{{cite web|title=IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI)|url=http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai/|website=Earthlink|publisher=IJCAI|accessdate=17 July 2017}}</ref>
 
 
== References ==