'''Explainable AI''' ('''XAI''') is a term that has recently entered the parlance of [[artificial intelligence]]. Its purpose is to provide accountability for technological innovations built on dynamic and non-linearly programmed systems, e.g. [[artificial neural networks]], [[deep learning]], and [[genetic algorithms]].
It concerns the question of how algorithms arrive at their decisions. In this sense, it is a technical discipline that provides operational tools for explaining systems, for example in implementing a [[right to explanation]].<ref name=":0">{{Cite journal|last=Edwards|first=Lilian|last2=Veale|first2=Michael|date=2017|title=Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For|url=https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2972855|journal=Duke Law and Technology Review|volume=|pages=|via=}}</ref>
AI-related algorithmic practices (supervised and unsupervised) work on a model of success oriented towards some form of correct state, with the focus placed on an expected output. For example, an image recognition algorithm's success is judged by its ability to recognise certain objects, and failure to do so indicates that the algorithm requires further tuning. Because this tuning is dynamic and closely tied to the refinement of the model's functions and to the training data set, a granular understanding of the model's internal workings is rarely sought.
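A minimal sketch of this output-focused workflow is shown below. It is illustrative only and not drawn from any system described in this article; the choice of scikit-learn, its digits dataset, and a small <code>MLPClassifier</code> are assumptions made for the example. The point is that success is measured solely against the expected outputs, while the trained model's internal reasoning is never inspected.

<syntaxhighlight lang="python">
# Illustrative sketch: a supervised pipeline judged only by whether its
# outputs match the expected labels. Nothing below examines *why* the
# network assigns a given image to a given class.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Small image-recognition dataset (8x8 handwritten digits).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Tuning" is driven entirely by reproducing the expected outputs.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# The only success criterion: how often the output is correct.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy: {accuracy:.2f}")
</syntaxhighlight>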
== History ==
While the term "Explainable AI" is new, the field of understanding the knowledge embedded in machine learning systems has a long history. Researchers have long been interested in whether it is possible to extract rules from trained neural networks,<ref>{{Cite journal|last=Tickle|first=A. B.|last2=Andrews|first2=R.|last3=Golea|first3=M.|last4=Diederich|first4=J.|date=November 1998|title=The truth will come to light: directions and challenges in extracting the knowledge embedded within trained artificial neural networks|url=http://ieeexplore.ieee.org/document/728352/|journal=IEEE Transactions on Neural Networks|volume=9|issue=6|pages=1057–1068|doi=10.1109/72.728352|issn=1045-9227}}</ref> and researchers in clinical expert systems building neural network-powered decision support for clinicians have sought to develop dynamic explanations that allow these technologies to be more trusted and trustworthy in practice.<ref name=":0" />
Newer, however, is the focus on explaining machine learning and AI to those whom the decisions concern, rather than to the designers or direct users of decision systems. Since DARPA's introduction of its program in 2016, a number of initiatives have started to address the issue of algorithmic accountability and to provide transparency concerning how technologies within this domain function.
* 25 April 2017: Nvidia published the paper "Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car"<ref>{{cite web|title=Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car|url=https://arxiv.org/pdf/1704.07911.pdf|website=arXiv|publisher=arXiv|accessdate=17 July 2017}}</ref>