'''Algorithmic transparency''' is the principle that the factors that influence the decisions made by [[algorithms]] should be visible, or transparent, to the people who use, regulate, and are affected by systems that employ those algorithms. Although the phrase was coined in 2016 by Nicholas Diakopoulos and Michael Koliska about the role of algorithms in deciding the content of digital journalism services,<ref>Nicholas Diakopoulos & Michael Koliska (2016): Algorithmic Transparency in the News Media, Digital Journalism, {{doi|10.1080/21670811.2016.1208053}}</ref> the underlying principle dates back to the 1970s and the rise of automated systems for scoring consumer credit.
{{AFC submission|t||ts=20170104130205|u=Gwenchlan|ns=118|demo=}}<!-- Important, do not remove this line before article has been created. -->
The phrases "algorithmic transparency" and "algorithmic accountability"<ref>{{cite journal|last1=Diakopoulos|first1=Nicholas|title=Algorithmic Accountability: Journalistic Investigation of Computational Power Structures.|journal=Digital Journalism|date=2015|volume=3|issue=3|pages=398–415|doi=10.1080/21670811.2014.976411|s2cid=42357142|url=https://www.cjr.org/tow_center_reports/algorithmic_accountability_on_the_investigation_of_black_boxes.php|url-access=subscription}}</ref> are sometimes used interchangeably – especially since they were coined by the same people – but they have subtly different meanings. Specifically, "algorithmic transparency" requires that the inputs to the algorithm and the algorithm's use itself be known, but not that they be fair. "[[Algorithmic accountability]]" implies that the organizations that use algorithms must be accountable for the decisions made by those algorithms, even though the decisions are being made by a machine, and not by a human being.<ref name="Dickey">{{cite news|last1=Dickey|first1=Megan Rose|title=Algorithmic Accountability|url=https://techcrunch.com/2017/04/30/algorithmic-accountability/|accessdate=4 September 2017|work=TechCrunch|date=30 April 2017}}</ref>
Algorithmic transparency can also be understood as the capacity of the user of an [[Algorithms|algorithm]] to understand its functioning and its resulting output. It stands in opposition to the functioning of an algorithm as a [[black box]], whose automated decision making lacks explainability.<ref>{{cite journal|last1=Diakopoulos|first1=Nicholas|title=Algorithmic Accountability: Journalistic Investigation of Computational Power Structures.|journal=Digital Journalism|date=2015|volume=3|issue=3|pages=398–415|doi=10.1080/21670811.2014.976411}}</ref>
Current research around algorithmic transparency is interested both in the societal effects of accessing remote services running black box algorithms<ref>{{cite web|title=Workshop on Data and Algorithmic Transparency|url=http://datworkshop.org/|accessdate=4 January 2017|date=2015}}</ref> and in mathematical and computer science approaches that can be used to achieve algorithmic transparency.<ref>{{cite web|title=Fairness, Accountability, and Transparency in Machine Learning|url=http://www.fatml.org/|accessdate=29 May 2017|date=2015}}</ref><ref>{{Cite journal |last1=Ott |first1=Tabea |last2=Dabrock |first2=Peter |date=2022-08-22 |title=Transparent human – (non-) transparent technology? The Janus-faced call for transparency in AI-based health care technologies |journal=Frontiers in Genetics |language=English |volume=13 |doi=10.3389/fgene.2022.902960 |doi-access=free |issn=1664-8021 |pmc=9444183}}</ref> Some approaches propose ways to gain understanding about specific remote black box algorithms by crafting inputs, via service [[APIs]], and observing the resulting output.<ref>{{cite journal |last1=Tramèr |first1=Florian |last2=Zhang |first2=Fan |last3=Juels |first3=Ari |last4=K. Reiter |first4=Michael |last5=Ristenpart |first5=Thomas |title=Stealing Machine Learning Models via Prediction APIs |journal=USENIX Security Symposium |date=2016 |url=https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_tramer.pdf}}</ref><ref>{{cite journal |last1=Le Merrer |first1=Erwan |last2=Trédan |first2=Gilles |title=Uncovering Influence Cookbooks: Reverse Engineering the Topological Impact in Peer Ranking Services |journal=Computer-Supported Cooperative Work and Social Computing |date=2017 |url=https://arxiv.org/abs/1608.07481}}</ref> In the United States, the [[Federal Trade Commission]]'s Bureau of Consumer Protection studies how algorithms are used by consumers by conducting its own research on algorithmic transparency and by funding external research.<ref name="Noyes">{{cite news|last1=Noyes|first1=Katherine|title=The FTC is worried about algorithmic transparency, and you should be too|url=http://www.pcworld.com/article/2908372/the-ftc-is-worried-about-algorithmic-transparency-and-you-should-be-too.html|accessdate=4 September 2017|work=PCWorld|date=9 April 2015|language=en}}</ref> In the [[European Union]], the data protection laws that came into effect in May 2018 include a "right to explanation" of decisions made by algorithms, though it is unclear what this means.<ref>{{cite journal |title=False Testimony |journal=Nature |date=31 May 2018 |volume=557 |issue=7707 |page=612 |url=https://media.nature.com/original/magazine-assets/d41586-018-05285-9/d41586-018-05285-9.pdf}}</ref> Furthermore, the European Union founded the European Centre for Algorithmic Transparency (ECAT).<ref>{{cite web |url=https://algorithmic-transparency.ec.europa.eu/about_en |title=About – European Commission}}</ref>
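The probing idea described above can be illustrated with a minimal sketch. All names here are hypothetical: <code>remote_score</code> stands in for an opaque remote service API (real probing would issue network requests), and the perturbation strategy shown is the simplest possible one, not the method of any particular paper.

```python
# Black-box probing sketch: vary one input feature at a time, re-query the
# opaque scoring function, and observe how the output shifts. This estimates
# each feature's influence without access to the algorithm's internals.

def remote_score(applicant):
    """Stand-in for an opaque remote scoring API (hypothetical weights)."""
    return 0.6 * applicant["income"] / 100_000 + 0.4 * applicant["age"] / 100

def probe_feature_influence(score_fn, baseline, feature, delta):
    """Perturb one feature by `delta` and measure the change in the score."""
    perturbed = dict(baseline)
    perturbed[feature] += delta
    return score_fn(perturbed) - score_fn(baseline)

baseline = {"income": 50_000, "age": 40}
for feature in baseline:
    influence = probe_feature_influence(remote_score, baseline, feature, 1)
    print(f"{feature}: {influence:+.6f} per unit")
```

Repeating such probes over many crafted inputs is what lets an outside observer reconstruct, approximately, which factors drive a black-box decision.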
==See also==
* [[Black box]]
* [[Explainable AI]]
* [[Regulation of algorithms]]
* [[Reverse engineering]]
* [[Right to explanation]]
* [[Algorithmic accountability]]
== References ==
{{reflist}}
[[Category:Accountability]]
[[Category:Algorithms]]
[[Category:Theoretical computer science]]
[[Category:Transparency (behavior)]]