 
=== Adaptive integration and explanation ===
Many XAI approaches produce explanations in a single, generic form that does not account for the diverse backgrounds and knowledge levels of their users, which makes accurate comprehension difficult across the whole audience. Expert users may find the explanations oversimplified and lacking in depth, while novice users may struggle to understand them because of their complexity. This limitation reduces the ability of XAI techniques to serve users with different levels of knowledge, which in turn can undermine user trust. The quality of an explanation can therefore vary from user to user, depending on their expertise as well as their situation and conditions.<ref>{{Cite journal |last1=Yang |first1=Wenli |last2=Wei |first2=Yuchen |last3=Wei |first3=Hanyu |last4=Chen |first4=Yanyu |last5=Huang |first5=Guan |last6=Li |first6=Xiang |last7=Li |first7=Renjie |last8=Yao |first8=Naimeng |last9=Wang |first9=Xinyi |last10=Gu |first10=Xiaotong |last11=Amin |first11=Muhammad Bilal |last12=Kang |first12=Byeong |date=2023-08-10 |title=Survey on Explainable AI: From Approaches, Limitations and Applications Aspects |url=https://link.springer.com/10.1007/s44230-023-00038-y |journal=Human-Centric Intelligent Systems |language=en |volume=3 |issue=3 |pages=161–188 |doi=10.1007/s44230-023-00038-y |issn=2667-1336}}</ref>
 
=== Technical complexity ===