Explainable artificial intelligence: Difference between revisions

m typo
cleaned the external links (one was redundant, another one dead)
Line 138:
== External links ==
* {{ cite web | url=http://xaiworldconference.com/ | title=The World Conference on eXplainable Artificial Intelligence}}
* {{ cite web | url=https://fatconference.org/ | title=ACM Conference on Fairness, Accountability, and Transparency (FAccT, formerly FAT*) }}
* {{ Cite journal | title= Random Forest similarity maps: A Scalable Visual Representation for Global and Local Interpretation| year= 2021| doi= 10.3390/electronics10222862| doi-access= free| last1= Mazumdar| first1= Dipankar| last2= Neto| first2= Mário Popolin| last3= Paulovich| first3= Fernando V.| journal= Electronics| volume= 10| issue= 22| page= 2862}}
* {{cite arXiv | last1=Park | first1=Dong Huk | last2=Hendricks | first2=Lisa Anne | last3=Akata | first3=Zeynep | last4=Schiele | first4=Bernt | last5=Darrell | first5=Trevor | last6=Rohrbach | first6=Marcus | title=Attentive Explanations: Justifying Decisions and Pointing to the Evidence | date=2016-12-14 | eprint=1612.04757 | class=cs.CV }}
* {{cite web |title='Explainable Artificial Intelligence': Cracking open the black box of AI |website=Computerworld |date=2017-11-02 |url=https://www.computerworld.com.au/article/617359/explainable-artificial-intelligence-cracking-open-black-box-ai/ |ref={{sfnref | Computerworld | 2017}} |access-date=2017-11-02 |archive-date=2020-10-22 |archive-url=https://web.archive.org/web/20201022062307/https://www2.computerworld.com.au/article/617359/explainable-artificial-intelligence-cracking-open-black-box-ai/ |url-status=dead}}
* {{cite web | title=Explainable AI: Making machines understandable for humans | website=Explainable AI: Making machines understandable for humans | url=https://explainableai.com/ | ref={{sfnref | Explainable AI: Making machines understandable for humans}} | access-date=2017-11-02}}
* {{cite web |title=Explaining How End-to-End Deep Learning Steers a Self-Driving Car |website=Parallel Forall |date=2017-05-23 |url=https://devblogs.nvidia.com/parallelforall/explaining-deep-learning-self-driving-car/ |ref={{sfnref | Parallel Forall | 2017}} |access-date=2017-11-02}}
Line 147 ⟶ 146:
* {{cite arXiv | last1=Alvarez-Melis | first1=David | last2=Jaakkola | first2=Tommi S. |title=A causal framework for explaining the predictions of black-box sequence-to-sequence models | date=2017-07-06 | eprint=1707.01943 | class=cs.LG }}
* {{cite web | title=Similarity Cracks the Code Of Explainable AI | website=simMachines | date=2017-10-12 | url=http://simmachines.com/similarity-cracks-code-explainable-ai/ | ref={{sfnref | simMachines | 2017}} | access-date=2018-02-02}}
* {{cite arXiv | last1=Bojarski | first1=Mariusz | last2=Yeres | first2=Philip | last3=Choromanska | first3=Anna | last4=Choromanski | first4=Krzysztof | last5=Firner | first5=Bernhard | last6=Jackel | first6=Lawrence | last7=Muller | first7=Urs | title=Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car | date=2017-04-25 | eprint=1704.07911 | class=cs.CV }}
{{Differentiable computing}}