Explainable artificial intelligence

For images, [[Saliency map|saliency maps]] highlight the parts of an image that most influenced the result.<ref>{{Cite web |last=Sharma |first=Abhishek |date=2018-07-11 |title=What Are Saliency Maps In Deep Learning? |url=https://analyticsindiamag.com/what-are-saliency-maps-in-deep-learning/ |access-date=2024-07-10 |website=Analytics India Magazine |language=en-US}}</ref>
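
As an illustration, a simple gradient-based saliency map scores each pixel by the magnitude of the gradient of the predicted class score with respect to that pixel. The sketch below uses [[PyTorch]] with a pretrained ResNet-18; the model choice and the random stand-in image are illustrative assumptions, not part of the cited source.

<syntaxhighlight lang="python">
import torch
import torchvision.models as models

# Load a pretrained classifier (any differentiable image model works).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in input: one 3-channel 224x224 image with gradient tracking.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass; take the score of the top predicted class.
scores = model(image)
top_class = scores.argmax(dim=1).item()
score = scores[0, top_class]

# Backward pass: gradient of the class score w.r.t. input pixels.
score.backward()

# Saliency = per-pixel gradient magnitude, reduced over color
# channels; the result is a 224x224 map that can be overlaid on
# the image to highlight the most influential regions.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
</syntaxhighlight>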
 
Expert systems, or knowledge-based systems, are software systems built by ___domain experts. They consist of a knowledge base that encodes ___domain knowledge, usually modeled as production rules, which the user can query. Such systems can explain their reasoning or problem-solving activity in language the user understands.<ref name="auto"/>
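
The explanation facility of such a system typically replays the production rules that fired. The following minimal sketch forward-chains over a hand-written rule set and records why each conclusion was derived; the rules and facts are invented for illustration.

<syntaxhighlight lang="python">
# Each production rule maps a set of conditions to a conclusion.
rules = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Forward-chain over the rules, recording which rule derived
    each conclusion so the system can explain itself afterwards."""
    explanations = {}
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                explanations[conclusion] = conditions
                changed = True
    return facts, explanations

facts, why = infer({"has_fever", "has_cough", "short_of_breath"})
for conclusion, conditions in why.items():
    # The explanation is the set of facts that triggered the rule.
    print(f"{conclusion} because {sorted(conditions)}")
</syntaxhighlight>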
 
However, these techniques are not very suitable for [[Language model|language models]] like [[Generative pre-trained transformer|generative pretrained transformers]]. Since these models generate language, they can provide an explanation of their own output, but this explanation may not be reliable. Other techniques include attention analysis (examining how the model focuses on different parts of the input), probing methods (testing what information is captured in the model's representations), causal tracing (tracing the flow of information through the model) and circuit discovery (identifying specific subnetworks responsible for certain behaviors). Explainability research in this area overlaps significantly with interpretability and [[AI alignment|alignment]] research.<ref>{{cite arXiv |last1=Luo |first1=Haoyan |title=From Understanding to Utilization: A Survey on Explainability for Large Language Models |date=2024-02-21 |eprint=2401.12874 |last2=Specia |first2=Lucia|class=cs.CL }}</ref>
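
As a concrete example of attention analysis, the sketch below extracts the attention matrices of a small pretrained transformer via the Hugging Face <code>transformers</code> library and reports, for each token, which input token it attends to most strongly. The model choice and example sentence are illustrative assumptions; averaging over heads in the last layer is one common, though lossy, way to summarize attention.

<syntaxhighlight lang="python">
import torch
from transformers import AutoTokenizer, AutoModel

# output_attentions=True makes the model return its attention weights.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained(
    "distilbert-base-uncased", output_attentions=True
)

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions holds one tensor per layer, each of shape
# (batch, heads, seq_len, seq_len). Average the last layer's heads.
avg_attention = outputs.attentions[-1].mean(dim=1)[0]  # (seq_len, seq_len)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for i, tok in enumerate(tokens):
    # The most-attended source position for this token.
    j = avg_attention[i].argmax().item()
    print(f"{tok:>10} -> {tokens[j]}")
</syntaxhighlight>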