Explainable artificial intelligence
'''Explainable AI''' ('''XAI''') is a term that has recently entered the parlance of [[artificial intelligence]]. Its purpose is to provide accountability for decisions made by dynamic, non-linearly programmed systems, e.g. [[artificial neural networks]], [[deep learning]] models, and [[genetic algorithms]].
 
It is about asking how algorithms arrive at their decisions. In a sense, it is a technical discipline providing operational tools that might be useful for explaining systems, such as in implementing a [[right to explanation]].<ref>{{Cite journal|last=Edwards|first=Lilian|last2=Veale|first2=Michael|date=2017|title=Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For|url=https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2972855|journal=Duke Law and Technology Review|volume=|pages=|via=}}</ref>
 
AI-related algorithmic practices, both supervised and unsupervised, are optimized toward some notion of a correct state, with the focus placed on an expected output. For example, an image-recognition algorithm's success is judged by its ability to recognize certain objects, and failure to do so indicates that the algorithm requires further tuning. Because tuning is dynamic and closely tied to the objective function and the training data set, the underlying operational factors are rarely inspected at a granular level.
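The contrast above can be sketched in a minimal, hypothetical example (toy data and function names are illustrative, not from any particular XAI system): a simple linear classifier is trained and judged solely by its accuracy on the expected output, while its internal weights, one rudimentary source of explanation, go uninspected unless deliberately examined.

```python
def train_perceptron(data, labels, epochs=20, lr=0.1):
    """Train a simple perceptron; returns learned weights and bias."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def accuracy(data, labels, w, b):
    """The conventional 'success' metric: fraction of expected outputs matched."""
    preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
             for x in data]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Toy data: the label depends only on the first feature; the second is noise.
data   = [[0, 1], [0, 0], [1, 1], [1, 0], [0, 1], [1, 0]]
labels = [0, 0, 1, 1, 0, 1]

w, b = train_perceptron(data, labels)
print("accuracy:", accuracy(data, labels, w, b))  # the usual measure of success
print("weights:", w)  # inspecting these is a first, crude step toward explanation
```

On this separable toy data the perceptron reaches perfect accuracy, and the learned weights reveal that only the first feature drives its decisions. For deep networks the weights are not interpretable in this direct way, which is precisely the gap XAI tools aim to fill.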