{{AFC submission|||u=JonathanSinclair|ns=2|ts=20170717144255}} <!-- Do not remove this line! -->
{{AFC comment|1=A review at [[WP:WikiProject Computing]] is in order. It isn't entirely clear what is meant by XAI; the article mentions that. [[User:Robert McClenon|Robert McClenon]] ([[User talk:Robert McClenon|talk]]) 04:26, 18 July 2017 (UTC)}}
----
<!-- EDIT BELOW THIS LINE -->
Since DARPA's introduction of its program in 2016, a number of initiatives have started to address the issue of algorithmic accountability and to provide transparency into how technologies in this ___domain function.
* 25.04.2017: Nvidia publishes its paper "Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car"
* 13.07.2017: Accenture publishes "Responsible AI: Why we need Explainable AI"
== Accountability ==
Examples of these effects have already been seen in the following sectors:
* Neural network tank imaging
* Antenna design ([[Evolved Antenna]])
* Algorithmic trading ([[High-frequency trading]])
* Medical diagnosis
* Autonomous vehicles
== Recent developments ==
As regulators, official bodies and general users become more dependent on AI-based dynamic systems, clearer accountability will be required for decision-making processes to ensure trust and transparency. Evidence that this requirement is gaining momentum can be seen in the launch of the first global conference dedicated exclusively to this emerging discipline:
* International Joint Conference on Artificial Intelligence: Workshop on Explainable Artificial Intelligence (XAI)
== External links ==
{{DEFAULTSORT:Explainable AI}}
[[:Category:Artificial Intelligence]]
[[:Category:Autonomous Vehicles]]
[[:Category:XAI]]
[[:Category:Accountability]]