If algorithms fulfill these principles, they provide a basis for justifying decisions, tracking and thereby verifying them, improving the algorithms, and exploring new facts.<ref>{{Cite journal|last1=Adadi|first1=A.|last2=Berrada|first2=M.|date=2018|title=Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)|journal=IEEE Access|volume=6|pages=52138–52160|doi=10.1109/ACCESS.2018.2870052|bibcode=2018IEEEA...652138A |issn=2169-3536|doi-access=free}}</ref>
Sometimes it is also possible to achieve a high-accuracy result with white-box ML algorithms. These algorithms have an interpretable structure that can be used to explain predictions.<ref name=":6">{{Cite journal|last=Rudin|first=Cynthia|date=2019|title=Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead|journal=Nature Machine Intelligence|language=en|volume=1|issue=5|pages=206–215|doi=10.1038/s42256-019-0048-x|pmid=35603010 |pmc=9122117 |arxiv=1811.10154|issn=2522-5839|doi-access=free}}</ref> Concept Bottleneck Models, which use concept-level abstractions to explain model reasoning, are examples of this and can be applied in both image<ref name="Koh Nguyen Tang Mussmann Pierson Kim Liang 2020">{{Cite conference|last1=Koh|first1=P. W.|last2=Nguyen|first2=T.|last3=Tang|first3=Y. S.|last4=Mussmann|first4=S.|last5=Pierson|first5=E.|last6=Kim|first6=B.|last7=Liang|first7=P.|date=November 2020|title=Concept bottleneck models|book-title=International Conference on Machine Learning|pages=5338–5348|publisher=PMLR}}</ref> and text<ref name="Ludan Lyu Yang Dugan Yatskar Callison-Burch 2023">{{Cite arXiv|last1=Ludan|first1=J. M.|last2=Lyu|first2=Q.|last3=Yang|first3=Y.|last4=Dugan|first4=L.|last5=Yatskar|first5=M.|last6=Callison-Burch|first6=C.|date=2023|title=Interpretable-by-Design Text Classification with Iteratively Generated Concept Bottleneck|class=cs.CL |eprint=2310.19660}}</ref> prediction tasks. This is especially important in domains like [[medicine]], [[Defense industry|defense]], [[finance]], and [[law]], where it is crucial to understand decisions and build trust in the algorithms.<ref name=":3" /> Many researchers argue that, at least for [[supervised machine learning]], the way forward is [[symbolic regression]], where the algorithm searches the space of mathematical expressions to find the model that best fits a given dataset.<ref name="Wenninger Kaymakci Wiethe 2022 p=118300">{{cite journal | last1=Wenninger | first1=Simon | last2=Kaymakci | first2=Can | last3=Wiethe | first3=Christian | title=Explainable long-term building energy consumption prediction using QLattice | journal=Applied Energy | publisher=Elsevier BV | volume=308 | year=2022 | issn=0306-2619 | doi=10.1016/j.apenergy.2021.118300 | page=118300| bibcode=2022ApEn..30818300W | s2cid=245428233 }}</ref><ref name="Christiansen Wilstrup Hedley 2022 p.">{{cite journal | last1=Christiansen | first1=Michael | last2=Wilstrup | first2=Casper | last3=Hedley | first3=Paula L. | title=Explainable "white-box" machine learning is the way forward in preeclampsia screening | journal=American Journal of Obstetrics and Gynecology | publisher=Elsevier BV | year=2022 | volume=227 | issue=5 | issn=0002-9378 | doi=10.1016/j.ajog.2022.06.057 | page=791| pmid=35779588 | s2cid=250160871 }}</ref><ref name="Wilstup Cave p.">{{citation | last1=Wilstup | first1=Casper | last2=Cave | first2=Chris | title=Combining symbolic regression with the Cox proportional hazards model improves prediction of heart failure deaths | publisher=Cold Spring Harbor Laboratory | date=2021-01-15 | doi=10.1101/2021.01.15.21249874 | page=| s2cid=231609904 }}</ref>
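The following is a minimal sketch of the symbolic-regression idea, assuming only NumPy; the synthetic dataset and the small hand-written set of candidate expressions are illustrative assumptions, not part of any cited method. Dedicated symbolic-regression tools search far larger expression spaces (typically with genetic programming), but the resulting model is the same kind of object: a human-readable formula.

<syntaxhighlight lang="python">
# Minimal symbolic-regression sketch: score a small, hand-picked space of
# candidate expressions and keep the one that best fits the data. The
# selected formula itself serves as the explanation of the model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 5.0, size=200)
# Hidden "true" law used to generate illustrative data.
y = 3.0 * np.log(x) + 0.5 + rng.normal(scale=0.05, size=x.shape)

# Candidate basis functions: the "space of mathematical expressions".
candidates = {
    "a*x + b":       lambda v: v,
    "a*x**2 + b":    lambda v: v ** 2,
    "a*log(x) + b":  lambda v: np.log(v),
    "a*sqrt(x) + b": lambda v: np.sqrt(v),
}

best_name, best_coef, best_err = None, None, np.inf
for name, f in candidates.items():
    # Fit the two free coefficients (a, b) by ordinary least squares.
    A = np.column_stack([f(x), np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    err = np.mean((A @ coef - y) ** 2)
    if err < best_err:
        best_name, best_coef, best_err = name, coef, err

print(f"best expression: {best_name}  "
      f"(a={best_coef[0]:.2f}, b={best_coef[1]:.2f}, mse={best_err:.4f})")
</syntaxhighlight>

Here the chosen expression, not a set of opaque weights, is what a domain expert would inspect and validate, which is the property the cited authors argue for.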
AI systems optimize behavior to satisfy a mathematically specified goal system chosen by the system designers, such as the command "maximize the accuracy of [[sentiment analysis|assessing how positive]] film reviews are in the test dataset." The AI may learn useful general rules from the test set, such as "reviews containing the word "horrible" are likely to be negative." However, it may also learn inappropriate rules, such as "reviews containing '[[Daniel Day-Lewis]]' are usually positive"; such rules may be undesirable if they are likely to fail to generalize outside the training set, or if people consider the rule to be "cheating" or "unfair." A human can audit rules in an XAI to get an idea of how likely the system is to generalize to future real-world data outside the test set.<ref name="science">{{cite journal|date=5 July 2017|title=How AI detectives are cracking open the black box of deep learning|url=https://www.science.org/content/article/how-ai-detectives-are-cracking-open-black-box-deep-learning|journal=Science|language=en|access-date=30 January 2018}}.</ref>
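As an illustration of how such word-level rules can be audited, the sketch below (assuming scikit-learn is available; the toy reviews and labels are invented for illustration) trains a bag-of-words logistic regression sentiment classifier and prints the weight each word contributes, so a reviewer can spot rules that look like "cheating".

<syntaxhighlight lang="python">
# Minimal audit sketch: inspect the per-word "rules" a simple sentiment
# classifier has learned. A real audit would use the actual training
# corpus and a domain expert to judge which rules are spurious.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = [
    "a wonderful, moving film",                 # positive
    "horrible acting and a dull plot",          # negative
    "Daniel Day-Lewis is brilliant",            # positive
    "horrible pacing, I walked out",            # negative
    "moving performance by Daniel Day-Lewis",   # positive
    "dull, horrible and forgettable",           # negative
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)
model = LogisticRegression().fit(X, labels)

# Each word's coefficient is an inspectable rule: positive weights push
# the prediction toward "positive", negative weights toward "negative".
for word, weight in sorted(zip(vectorizer.get_feature_names_out(),
                               model.coef_[0]),
                           key=lambda pair: pair[1]):
    print(f"{word:12s} {weight:+.2f}")
</syntaxhighlight>

In this toy setting an auditor might notice a large positive weight on the actor's name rather than on genuinely evaluative words, the kind of rule that is unlikely to generalize beyond the training data.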