For example, competitor firms could replicate aspects of the original AI system in their own product, thus reducing competitive advantage.<ref name="How the machine 'thinks'">{{cite journal |last1=Burrell |first1=Jenna |date=2016 |title=How the machine 'thinks': Understanding opacity in machine learning algorithms |url=https://journals.sagepub.com/doi/pdf/10.1177/2053951715622512 |journal=Big Data & Society |volume=3 |issue=1 |doi=10.1177/2053951715622512 |s2cid=61330970}}</ref> An explainable AI system is also susceptible to being “gamed”—influenced in a way that undermines its intended purpose. One study gives the example of a predictive policing system; in this case, those who could potentially “game” the system are the criminals subject to the system's decisions. In this study, developers of the system discussed the issue of criminal gangs looking to illegally obtain passports, and they expressed concerns that, if given an idea of what factors might trigger an alert in the passport application process, those gangs would be able to “send guinea pigs” to test those triggers, eventually finding a loophole that would allow them to “reliably get passports from under the noses of the authorities”.<ref>{{cite book | last1=Veale | first1=Michael | last2=Van Kleek | first2=Max | last3=Binns | first3=Reuben | title=Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems | chapter=Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making | date=2018 | volume=40 | pages=1–14 | doi=10.1145/3173574.3174014 | isbn=9781450356206 | s2cid=3639135 | chapter-url=https://dl.acm.org/doi/pdf/10.1145/3173574.3174014 }}</ref>
Many XAI approaches provide explanations in a generic form and do not account for the diverse backgrounds and knowledge levels of their users, which makes accurate comprehension difficult across the full range of users. Expert users may find the explanations oversimplified and lacking in depth, while novice users may struggle to understand them because they are too complex. This limitation reduces the ability of XAI techniques to serve users with differing levels of knowledge, which in turn can affect users' trust in and adoption of such systems. The quality of an explanation may therefore vary among users, who differ not only in expertise but also in situation and context.<ref>{{Cite journal |last1=Yang |first1=Wenli |last2=Wei |first2=Yuchen |last3=Wei |first3=Hanyu |last4=Chen |first4=Yanyu |last5=Huang |first5=Guan |last6=Li |first6=Xiang |last7=Li |first7=Renjie |last8=Yao |first8=Naimeng |last9=Wang |first9=Xinyi |last10=Gu |first10=Xiaotong |last11=Amin |first11=Muhammad Bilal |last12=Kang |first12=Byeong |date=2023-08-10 |title=Survey on Explainable AI: From Approaches, Limitations and Applications Aspects |url=https://link.springer.com/10.1007/s44230-023-00038-y |journal=Human-Centric Intelligent Systems |language=en |volume=3 |issue=3 |pages=161–188 |doi=10.1007/s44230-023-00038-y |issn=2667-1336}}</ref>