'''Explainable AI''' ('''XAI'''), often overlapping with '''interpretable AI''' or '''explainable machine learning''' ('''XML'''), refers either to an [[artificial intelligence]] (AI) system over which humans can retain ''intellectual oversight'', or to the methods used to achieve this.<ref>{{Cite journal|last=Longo|first=Luca|display-authors=etal|date=2024|title=Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions|url=https://www.sciencedirect.com/science/article/pii/S1566253524000794|journal=Information Fusion|volume=106|doi=10.1016/j.inffus.2024.102301}}</ref><ref>{{Cite journal|last=Mihály|first=Héder|date=2023|title=Explainable AI: A Brief History of the Concept|url=https://ercim-news.ercim.eu/images/stories/EN134/EN134-web.pdf|journal=ERCIM News|issue=134|pages=9–10}}</ref> The main focus is usually on the reasoning behind the decisions or predictions made by the AI,<ref>{{Cite journal|last1=Phillips|first1=P. Jonathon|last2=Hahn|first2=Carina A.|last3=Fontana|first3=Peter C.|last4=Yates|first4=Amy N.|last5=Greene|first5=Kristen|last6=Broniatowski|first6=David A.|last7=Przybocki|first7=Mark A.|date=2021-09-29|title=Four Principles of Explainable Artificial Intelligence|url=https://doi.org/10.6028/NIST.IR.8312|journal=NIST|doi=10.6028/nist.ir.8312}}</ref> with the aim of making them more understandable and transparent.<ref>{{Cite journal|last1=Vilone|first1=Giulia|last2=Longo|first2=Luca|title=Notions of explainability and evaluation approaches for explainable artificial intelligence|url=https://www.sciencedirect.com/science/article/pii/S1566253521001093|journal=Information Fusion|date=December 2021|volume=76|pages=89–106|doi=10.1016/j.inffus.2021.05.009}}</ref> The topic has attracted renewed research interest, as users of automated decision-making systems increasingly need to understand how safe those systems are and how their decisions are reached in different applications.<ref>{{Cite journal|last1=Confalonieri|first1=Roberto|last2=Coba|first2=Ludovik|last3=Wagner|first3=Benedikt|last4=Besold|first4=Tarek R.|date=January 2021|title=A historical perspective of explainable Artificial Intelligence|url=https://wires.onlinelibrary.wiley.com/doi/10.1002/widm.1391|journal=WIREs Data Mining and Knowledge Discovery|language=en|volume=11|issue=1|doi=10.1002/widm.1391|issn=1942-4787}}</ref> XAI counters the "[[black box]]" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.<ref>{{Cite journal|last=Castelvecchi|first=Davide|date=2016-10-06|title=Can we open the black box of AI?|url=http://www.nature.com/articles/538020a|journal=Nature|language=en|volume=538|issue=7623|pages=20–23|doi=10.1038/538020a|pmid=27708329|bibcode=2016Natur.538...20C|s2cid=4465871|issn=0028-0836}}</ref><ref name=guardian>{{cite news|last1=Sample|first1=Ian|title=Computer says no: why making AIs fair, accountable and transparent is crucial|url=https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial|access-date=30 January 2018|work=The Guardian|date=5 November 2017|language=en}}</ref>
XAI aims to help users of AI-powered systems perform more effectively by improving their understanding of how those systems reason.<ref>{{Cite journal|last=Alizadeh|first=Fatemeh|date=2021|title=I Don't Know, Is AI Also Used in Airbags?: An Empirical Study of Folk Concepts and People's Expectations of Current and Future Artificial Intelligence|url=https://www.researchgate.net/publication/352638184|journal=Icom|volume=20|issue=1|pages=3–17|doi=10.1515/icom-2021-0009}}</ref>
[[Machine learning]] (ML) algorithms used in AI can be categorized as [[White-box testing|white-box]] or [[Black box|black-box]].<ref>{{Cite journal|last1=Vilone|first1=Giulia|last2=Longo|first2=Luca|title=Classification of Explainable Artificial Intelligence Methods through Their Output Formats|journal=Machine Learning and Knowledge Extraction|year=2021|volume=3|issue=3|pages=615–661|doi=10.3390/make3030032|doi-access=free}}</ref> White-box models provide results that are understandable to experts in the ___domain. Black-box models, on the other hand, are extremely hard to explain and may not be understood even by ___domain experts.<ref>{{Cite journal|last=Loyola-González|first=O.|date=2019|title=Black-Box vs. White-Box: Understanding Their Advantages and Weaknesses From a Practical Point of View|journal=IEEE Access|volume=7|pages=154096–154113|doi=10.1109/ACCESS.2019.2949286|bibcode=2019IEEEA...7o4096L|issn=2169-3536|doi-access=free}}</ref> XAI algorithms follow the three principles of transparency, interpretability, and explainability. A model is transparent "if the processes that extract model parameters from training data and generate labels from testing data can be described and motivated by the approach designer."<ref name=":4">{{Cite journal|last1=Roscher|first1=R.|last2=Bohn|first2=B.|last3=Duarte|first3=M. F.|last4=Garcke|first4=J.|date=2020|title=Explainable Machine Learning for Scientific Insights and Discoveries|journal=IEEE Access|volume=8|pages=42200–42216|doi=10.1109/ACCESS.2020.2976199|arxiv=1905.08883|bibcode=2020IEEEA...842200R|issn=2169-3536|doi-access=free}}</ref> Interpretability describes the possibility of comprehending the ML model and presenting the underlying basis for decision-making in a way that is understandable to humans.<ref name="Interpretable machine learning: def">{{cite journal|last1=Murdoch|first1=W. James|last2=Singh|first2=Chandan|last3=Kumbier|first3=Karl|last4=Abbasi-Asl|first4=Reza|last5=Yu|first5=Bin|date=2019-01-14|title=Interpretable machine learning: definitions, methods, and applications|journal=Proceedings of the National Academy of Sciences of the United States of America|volume=116|issue=44|pages=22071–22080|arxiv=1901.04592|doi=10.1073/pnas.1900654116|pmid=31619572|pmc=6825274|doi-access=free}}</ref><ref name="Lipton 31–57">{{Cite journal|last=Lipton|first=Zachary C.|date=June 2018|title=The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery.|journal=Queue|language=en|volume=16|issue=3|pages=31–57|doi=10.1145/3236386.3241340|issn=1542-7730|doi-access=free}}</ref><ref>{{Cite web|date=2019-10-22|title=Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI|url=https://deepai.org/publication/explainable-artificial-intelligence-xai-concepts-taxonomies-opportunities-and-challenges-toward-responsible-ai|access-date=2021-01-13|website=DeepAI}}</ref> Explainability is a concept that is recognized as important, but a consensus definition is not yet available;<ref name=":4" /> one possibility is "the collection of features of the interpretable ___domain that have contributed, for a given example, to producing a decision (e.g., classification or regression)".<ref>{{Cite journal|date=2018-02-01|title=Methods for interpreting and understanding deep neural networks|journal=Digital Signal Processing|language=en|volume=73|pages=1–15|doi=10.1016/j.dsp.2017.10.011|issn=1051-2004|doi-access=free|last1=Montavon|first1=Grégoire|last2=Samek|first2=Wojciech|last3=Müller|first3=Klaus-Robert|arxiv=1706.07979|bibcode=2018DSP....73....1M|author-link3=Klaus-Robert Müller}}</ref>
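The white-box/black-box distinction can be illustrated with a [[decision tree]], a classic white-box model whose learned decision rules can be printed and inspected directly. The following minimal sketch (not drawn from the cited sources; it assumes the open-source scikit-learn library and uses its bundled Iris dataset for illustration) shows the kind of direct inspection that black-box models such as deep neural networks do not offer:

<syntaxhighlight lang="python">
# Minimal sketch of a white-box model: the fitted decision tree's
# reasoning can be rendered as human-readable if/then rules.
# Assumes scikit-learn is installed; the dataset is illustrative.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)  # a shallow tree stays interpretable
clf.fit(iris.data, iris.target)

# export_text prints the learned feature thresholds as nested rules,
# so a ___domain expert can trace exactly why a sample receives its label.
print(export_text(clf, feature_names=iris.feature_names))
</syntaxhighlight>

A deep neural network trained on the same data could reach similar accuracy, but its millions of weights offer no comparably direct account of any individual prediction, which is the gap that post-hoc XAI methods attempt to fill.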