As regulators, official bodies, and general users come to depend on AI-based dynamic systems, clearer accountability will be required for [[automated decision-making]] processes to ensure trust and transparency. The first global conference exclusively dedicated to this emerging discipline was the 2017 [[International Joint Conference on Artificial Intelligence]]: Workshop on Explainable Artificial Intelligence (XAI).<ref>{{cite web|title=IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI)|url=http://www.intelligentrobots.org/files/IJCAI2017/IJCAI-17_XAI_WS_Proceedings.pdf|website=Earthlink|publisher=IJCAI|access-date=17 July 2017|archive-date=4 April 2019|archive-url=https://web.archive.org/web/20190404131609/http://www.intelligentrobots.org/files/IJCAI2017/IJCAI-17_XAI_WS_Proceedings.pdf|url-status=dead}}</ref> It has evolved over the years, with various workshops organised and co-located with many other international conferences, and the field now has a dedicated global event, "The World Conference on eXplainable Artificial Intelligence", with its own proceedings.<ref name="XAI-2023">{{cite book |author=<!--Not stated--> |date= 2023| title= Explainable Artificial Intelligence, First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Parts I/II/III |series= Communications in Computer and Information Science|volume= 1903|url= https://link.springer.com/book/10.1007/978-3-031-44070-0 |publisher=Springer |doi= 10.1007/978-3-031-44070-0|isbn=978-3-031-44070-0}}</ref><ref name="XAI-2024">{{cite book |author=<!--Not stated-->|date= 2024| title= Explainable Artificial Intelligence, Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Parts I/II/III/IV |series= Communications in Computer and Information Science|volume= 2153|url=https://link.springer.com/book/10.1007/978-3-031-63787-2 |publisher=Springer |doi= 10.1007/978-3-031-63787-2|isbn=978-3-031-63787-2}}</ref>
The European Union introduced a [[right to explanation]] in the [[General Data Protection Regulation]] (GDPR) to address potential problems stemming from the rising importance of algorithms. The implementation of the regulation began in 2018. However, the right to explanation in the GDPR covers only the local aspect of interpretability. In the United States, insurance companies are required to be able to explain their rate and coverage decisions.<ref>{{cite news |last1=Kahn |first1=Jeremy |title=Artificial Intelligence Has Some Explaining to Do |url=https://www.bloomberg.com/news/articles/2018-12-12/artificial-intelligence-has-some-explaining-to-do |access-date=17 December 2018 |work=[[Bloomberg Businessweek]] |date=12 December 2018}}</ref> In France, the [[Loi pour une République numérique]] (Digital Republic Act) grants subjects the right to request and receive information pertaining to the implementation of algorithms that process data about them.
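In this context, a "local" explanation accounts for one specific automated decision about one data subject, rather than describing the model's behaviour as a whole. A minimal sketch of the idea (using a plain linear scoring model and hypothetical feature names and weights chosen purely for illustration) is to report each feature's contribution to that single decision:

<syntaxhighlight lang="python">
# Minimal sketch of a local (per-decision) explanation for a linear scoring
# model: each feature's contribution to this one score is weight * value.
# The feature names, weights, and applicant data are hypothetical.

weights = {"income": 0.4, "debt_ratio": -0.7, "years_at_address": 0.1}
applicant = {"income": 3.2, "debt_ratio": 1.5, "years_at_address": 4.0}

contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values())

print(f"Decision score for this applicant: {score:.2f}")
# List the features by how strongly they influenced this particular decision.
for name, contribution in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contribution:+.2f}")
</syntaxhighlight>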
== Limitations ==
For example, competitor firms could replicate aspects of the original AI system in their own product, thus reducing competitive advantage.<ref name="How the machine 'thinks'">{{cite journal |last1=Burrell |first1=Jenna |date=2016 |title=How the machine 'thinks': Understanding opacity in machine learning algorithms |url=https://journals.sagepub.com/doi/pdf/10.1177/2053951715622512 |journal=Big Data & Society |volume=3 |issue=1 |doi=10.1177/2053951715622512 |s2cid=61330970}}</ref> An explainable AI system is also susceptible to being “gamed”—influenced in a way that undermines its intended purpose. One study gives the example of a predictive policing system; in this case, those who could potentially “game” the system are the criminals subject to the system's decisions. In this study, developers of the system discussed the issue of criminal gangs looking to illegally obtain passports, and they expressed concerns that, if given an idea of what factors might trigger an alert in the passport application process, those gangs would be able to “send guinea pigs” to test those triggers, eventually finding a loophole that would allow them to “reliably get passports from under the noses of the authorities”.<ref>{{cite book | last1=Veale | first1=Michael | last2=Van Kleek | first2=Max | last3=Binns | first3=Reuben | title=Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems | chapter=Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making | date=2018 | volume=40 | pages=1–14 | doi=10.1145/3173574.3174014 | isbn=9781450356206 | s2cid=3639135 | chapter-url=https://dl.acm.org/doi/pdf/10.1145/3173574.3174014 }}</ref>
=== Adaptive explanations ===
Many XAI approaches provide explanations in a one-size-fits-all manner and do not take into account the diverse backgrounds and knowledge levels of their users. This creates challenges for accurate comprehension: expert users may find the explanations oversimplified and lacking in depth, while novice users may struggle to understand them because they are too complex. This limitation reduces the ability of XAI techniques to serve users with different levels of knowledge, which can undermine user trust and adoption. The quality of explanations can also vary among users, who differ in expertise as well as in situation and context.<ref>{{Cite journal |last1=Yang |first1=Wenli |last2=Wei |first2=Yuchen |last3=Wei |first3=Hanyu |last4=Chen |first4=Yanyu |last5=Huang |first5=Guan |last6=Li |first6=Xiang |last7=Li |first7=Renjie |last8=Yao |first8=Naimeng |last9=Wang |first9=Xinyi |last10=Gu |first10=Xiaotong |last11=Amin |first11=Muhammad Bilal |last12=Kang |first12=Byeong |date=2023-08-10 |title=Survey on Explainable AI: From Approaches, Limitations and Applications Aspects |url=https://link.springer.com/10.1007/s44230-023-00038-y |journal=Human-Centric Intelligent Systems |language=en |volume=3 |issue=3 |pages=161–188 |doi=10.1007/s44230-023-00038-y |issn=2667-1336}}</ref>
=== Payoff allocation ===
Nizri, Azaria and Hazon<ref>{{Cite journal |last1=Nizri |first1=Meir |last2=Hazon |first2=Noam |last3=Azaria |first3=Amos |date=2022-06-28 |title=Explainable Shapley-Based Allocation (Student Abstract) |url=https://ojs.aaai.org/index.php/AAAI/article/view/21648 |journal=Proceedings of the AAAI Conference on Artificial Intelligence |language=en |volume=36 |issue=11 |pages=13023–13024 |doi=10.1609/aaai.v36i11.21648 |s2cid=250296641 |issn=2374-3468}}</ref> present an algorithm for computing explanations for the [[Shapley value]]. Given a coalitional game, their algorithm decomposes it into sub-games, for which it is easy to generate verbal explanations based on the axioms characterizing the Shapley value. The payoff allocation for each sub-game is perceived as fair, so the Shapley-based payoff allocation for the given game should seem fair as well.
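For readers unfamiliar with the underlying concept: the Shapley value assigns each player their marginal contribution to the coalition, averaged over all possible orders in which players could join. The following Python sketch is not the authors' explanation algorithm; it only illustrates the quantity being explained, computing Shapley values by direct enumeration for a small, made-up three-player game.

<syntaxhighlight lang="python">
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Compute each player's Shapley value by enumerating all coalitions.

    players: list of hashable player identifiers
    v: characteristic function mapping a frozenset of players to a payoff
    """
    n = len(players)
    result = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for size in range(n):  # coalitions of the other players, sizes 0..n-1
            for coalition in combinations(others, size):
                s = frozenset(coalition)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (v(s | {i}) - v(s))
        result[i] = total
    return result

# Hypothetical three-player game: each member of a coalition is worth 10,
# and players "a" and "b" earn a bonus of 30 when they cooperate.
def v(coalition):
    bonus = 30 if {"a", "b"} <= coalition else 0
    return 10 * len(coalition) + bonus

for player, payoff in shapley_values(["a", "b", "c"], v).items():
    print(f"{player}: average marginal contribution {payoff:.1f}")
# Expected output: a: 25.0, b: 25.0, c: 10.0 (the cooperation bonus is split
# equally between a and b, while c only ever contributes their own 10).
</syntaxhighlight>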
== See also ==