Explainable artificial intelligence

{{artificial intelligence}}
 
'''Explainable AI''' ('''XAI'''), often overlapping with '''interpretable AI''', or '''explainable machine learning''' ('''XML'''), either refers to an [[artificial intelligence]] (AI) system over which it is possible for humans to retain ''intellectual oversight'', or refers to the methods to achieve this.<ref>{{Cite journal|last=Longo|first=Luca|display-authors=etal|date=2024 |title=Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions|url=https://www.sciencedirect.com/science/article/pii/S1566253524000794 |journal=Information Fusion|volume=106|doi=10.1016/j.inffus.2024.102301}}</ref><ref>{{Cite journal |last=Mihály |first=Héder |date=2023 |title=Explainable AI: A Brief History of the Concept |url=https://ercim-news.ercim.eu/images/stories/EN134/EN134-web.pdf |journal=ERCIM News |issue=134 |pages=9–10}}</ref> The main focus is usually on the reasoning behind the decisions or predictions made by the AI<ref>{{Cite journal |last1=Phillips |first1=P. Jonathon |last2=Hahn |first2=Carina A. |last3=Fontana |first3=Peter C. |last4=Yates |first4=Amy N. |last5=Greene |first5=Kristen |last6=Broniatowski |first6=David A. |last7=Przybocki |first7=Mark A. 
|date=2021-09-29 |title=Four Principles of Explainable Artificial Intelligence |url=https://doi.org/10.6028/NIST.IR.8312 |journal=NIST |doi=10.6028/nist.ir.8312}}</ref>, making those decisions more understandable and transparent.<ref>{{Cite journal|last1=Vilone|first1=Giulia|last2=Longo|first2=Luca|title=Notions of explainability and evaluation approaches for explainable artificial intelligence|url=https://www.sciencedirect.com/science/article/pii/S1566253521001093|journal=Information Fusion|year=2021|volume=76|pages=89–106|doi=10.1016/j.inffus.2021.05.009}}</ref> XAI counters the "[[black box]]" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.<ref>{{Cite journal |last=Castelvecchi |first=Davide |date=2016-10-06 |title=Can we open the black box of AI? |url=http://www.nature.com/articles/538020a |journal=Nature |language=en |volume=538 |issue=7623 |pages=20–23 |doi=10.1038/538020a |pmid=27708329 |bibcode=2016Natur.538...20C |s2cid=4465871 |issn=0028-0836}}</ref><ref name=guardian>{{cite news|last1=Sample|first1=Ian|title=Computer says no: why making AIs fair, accountable and transparent is crucial|url=https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial|access-date=30 January 2018|work=The Guardian |date=5 November 2017|language=en}}</ref>
 
XAI hopes to help users of AI-powered systems perform more effectively by improving their understanding of how those systems reason.<ref>{{Cite journal|last=Alizadeh|first=Fatemeh|date=2021|title=I Don't Know, Is AI Also Used in Airbags?: An Empirical Study of Folk Concepts and People's Expectations of Current and Future Artificial Intelligence|url=https://www.researchgate.net/publication/352638184|journal=Icom|volume=20 |issue=1 |pages=3–17 |doi=10.1515/icom-2021-0009|s2cid=233328352}}</ref> XAI may be an implementation of the social [[right to explanation]].<ref name=":0">{{Cite journal|last1=Edwards|first1=Lilian|last2=Veale|first2=Michael|date=2017|title=Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For|journal=Duke Law and Technology Review|volume=16|pages=18|ssrn=2972855}}</ref> Even if there is no such legal right or regulatory requirement, XAI can improve the [[user experience]] of a product or service by helping end users trust that the AI is making good decisions.<ref>{{Cite web |last=Do Couto |first=Mark |date=February 22, 2024 |title=Entering the Age of Explainable AI |url=https://tdwi.org/Articles/2024/02/22/ADV-ALL-Entering-the-Age-of-Explainable-AI.aspx |access-date=2024-09-11 |website=TDWI}}</ref> XAI aims to explain what has been done, what is being done, and what will be done next, and to unveil which information these actions are based on.<ref name=":3">{{Cite journal|last1=Gunning|first1=D.|last2=Stefik|first2=M.|last3=Choi|first3=J.|last4=Miller|first4=T.|last5=Stumpf|first5=S.|last6=Yang|first6=G.-Z.|date=2019-12-18|title=XAI-Explainable artificial intelligence|url=https://openaccess.city.ac.uk/id/eprint/23405/|journal=Science Robotics|language=en|volume=4|issue=37|pages=eaay7120|doi=10.1126/scirobotics.aay7120|pmid=33137719|issn=2470-9476|doi-access=free}}</ref> This makes it possible to confirm existing knowledge, challenge existing knowledge, and generate new assumptions.<ref>{{Cite 
journal|last1=Rieg|first1=Thilo|last2=Frick|first2=Janek|last3=Baumgartl|first3=Hermann|last4=Buettner|first4=Ricardo|date=2020-12-17|title=Demonstration of the potential of white-box machine learning approaches to gain insights from cardiovascular disease electrocardiograms|journal=PLOS ONE|language=en|volume=15|issue=12|pages=e0243615|doi=10.1371/journal.pone.0243615|issn=1932-6203|pmc=7746264|pmid=33332440|bibcode=2020PLoSO..1543615R|doi-access=free}}</ref>
By making an AI system more explainable, we also reveal more of its inner workings. For example, the explainability method of feature importance identifies features or variables that are most important in determining the model's output, while the influential samples method identifies the training samples that are most influential in determining the output, given a particular input.<ref name="Explainable Machine Learning in Deployment">{{cite book | last1=Bhatt | first1=Umang | last2=Xiang | first2=Alice | last3=Sharma | first3=Shubham | last4=Weller | first4=Adrian | last5=Taly | first5=Ankur | last6=Jia | first6=Yunhan | last7=Ghosh | first7=Joydeep | last8=Puri | first8=Richir | last9=M.F. Moura | first9=José | last10=Eckersley | first10=Peter | title=Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency | chapter=Explainable Machine Learning in Deployment | date=2022 | pages=648–657 | doi=10.1145/3351095.3375624 | isbn=9781450369367 | s2cid=202572724 | chapter-url=https://dl.acm.org/doi/pdf/10.1145/3351095.3375624 }}</ref> Adversarial parties could take advantage of this knowledge.
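The feature-importance idea above can be sketched with permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. This is a minimal illustration on synthetic data with a toy thresholding "model" (not code from the cited study); the data, model, and scoring are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the label depends only on feature 0, not feature 1.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def accuracy(X_eval):
    # Toy "model" for illustration: predict 1 when feature 0 is positive.
    preds = (X_eval[:, 0] > 0).astype(int)
    return (preds == y).mean()

baseline = accuracy(X)
importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    # Shuffling column j breaks its link to the target;
    # the accuracy drop is that feature's importance score.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - accuracy(X_perm))
```

Here `importances[0]` is large (shuffling the decisive feature roughly halves accuracy) while `importances[1]` is zero, revealing which input the model actually relies on. The same scores also reveal the model's inner workings to an adversary, which is the risk discussed below.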
 
For example, competitor firms could replicate aspects of the original AI system in their own product, thus reducing competitive advantage.<ref name="How the machine 'thinks'">{{cite journal | last1=Burrell | first1=Jenna | date=2016 | title=How the machine 'thinks': Understanding opacity in machine learning algorithms | journal=Big Data & Society | volume=3 | issue=1 | doi=10.1177/2053951715622512 | s2cid=61330970 | url=https://journals.sagepub.com/doi/pdf/10.1177/2053951715622512 }}</ref> An explainable AI system is also susceptible to being "gamed", that is, influenced in a way that undermines its intended purpose. One study gives the example of a predictive policing system; in this case, those who could potentially "game" the system are the criminals subject to its decisions. In this study, developers of the system discussed the issue of criminal gangs looking to illegally obtain passports, and they expressed concerns that, if given an idea of what factors might trigger an alert in the passport application process, those gangs would be able to "send guinea pigs" to test those triggers, eventually finding a loophole that would allow them to "reliably get passports from under the noses of the authorities".<ref>{{cite book | last1=Veale | first1=Michael | last2=Van Kleek | first2=Max | last3=Binns | first3=Reuben | title=Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems | chapter=Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making | date=2018 | volume=40 | pages=1–14 | doi=10.1145/3173574.3174014 | isbn=9781450356206 | s2cid=3639135 | chapter-url=https://dl.acm.org/doi/pdf/10.1145/3173574.3174014 }}</ref>
 
=== Technical complexity ===