'''Algorithmic transparency''' is the principle that the factors that influence the decisions made by [[algorithms]] should be visible, or transparent, to the people who use, regulate, and are affected by systems that employ those algorithms. Although the phrase was coined in 2016 by Nicholas Diakopoulos and Michael Koliska to describe the role of algorithms in deciding the content of digital journalism services,<ref>Nicholas Diakopoulos & Michael Koliska (2016): Algorithmic Transparency in the News Media, Digital Journalism, DOI: {{doi|10.1080/21670811.2016.1208053}}</ref> the underlying principle dates back to the 1970s and the rise of automated systems for scoring consumer credit.
 
The phrases "'''algorithmic transparency'''" and "'''algorithmic accountability'''"<ref>{{cite journal|last1=Diakopoulos|first1=Nicholas|title=Algorithmic Accountability: Journalistic Investigation of Computational Power Structures|journal=Digital Journalism|date=2015|volume=3|issue=3|pages=398–415|doi=10.1080/21670811.2014.976411|s2cid=42357142|url=https://www.cjr.org/tow_center_reports/algorithmic_accountability_on_the_investigation_of_black_boxes.php|url-access=subscription}}</ref> are sometimes used interchangeably, especially since they were coined by the same people, but they have subtly different meanings. Specifically, algorithmic transparency requires that the inputs to the algorithm and the algorithm's use itself be known, but it does not require that they be fair. "[[Algorithmic accountability]]" implies that the organizations that use algorithms must be accountable for the decisions made by those algorithms, even though the decisions are being made by a machine rather than by a human being.<ref name="Dickey">{{cite news|last1=Dickey|first1=Megan Rose|title=Algorithmic Accountability|url=https://techcrunch.com/2017/04/30/algorithmic-accountability/|accessdate=4 September 2017|work=TechCrunch|date=30 April 2017}}</ref>
Current research around algorithmic transparency is concerned both with the societal effects of accessing remote services running algorithms,<ref>{{cite web|title=Workshop on Data and Algorithmic Transparency|url=http://datworkshop.org/|accessdate=4 January 2017|date=2015}}</ref> and with the mathematical and computer science approaches that can be used to achieve it.<ref>{{cite web|title=Fairness, Accountability, and Transparency in Machine Learning|url=http://www.fatml.org/|accessdate=29 May 2017|date=2015}}</ref><ref>{{Cite journal |last1=Ott |first1=Tabea |last2=Dabrock |first2=Peter |date=2022-08-22 |title=Transparent human – (non-) transparent technology? The Janus-faced call for transparency in AI-based health care technologies |journal=Frontiers in Genetics |language=English |volume=13 |doi=10.3389/fgene.2022.902960 |doi-access=free |issn=1664-8021 |pmc=9444183}}</ref> In the United States, the [[Federal Trade Commission]]'s Bureau of Consumer Protection studies how algorithms are used by consumers by conducting its own research on algorithmic transparency and by funding external research.<ref name="Noyes">{{cite news|last1=Noyes|first1=Katherine|title=The FTC is worried about algorithmic transparency, and you should be too|url=http://www.pcworld.com/article/2908372/the-ftc-is-worried-about-algorithmic-transparency-and-you-should-be-too.html|accessdate=4 September 2017|work=PCWorld|date=9 April 2015|language=en}}</ref> In the [[European Union]], the data protection laws that came into effect in May 2018 include a "right to explanation" of decisions made by algorithms, though it remains unclear what this entails.<ref>{{cite journal |title=False Testimony |journal=Nature |date=31 May 2018 |volume=557 |issue=7707 |page=612 |url=https://media.nature.com/original/magazine-assets/d41586-018-05285-9/d41586-018-05285-9.pdf}}</ref> The European Union has also founded the European Centre for Algorithmic Transparency (ECAT).<ref>{{cite web |url=https://algorithmic-transparency.ec.europa.eu/about_en |title=About - European Commission}}</ref>
 
 
 
In 2017, the [[Association for Computing Machinery]]'s US Public Policy Council (USACM) issued a Statement on Algorithmic Transparency and Accountability listing seven principles intended to support the societal benefits of algorithm use while minimizing its potential for harm:
 
# '''Awareness''': Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
# '''Access and redress''': Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
# '''Accountability''': Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
# '''Explanation''': Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
# '''Data provenance''': A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.
# '''Auditability''': Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
# '''Validation and testing''': Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.<ref>USACM Issues Statement on Algorithmic Transparency and Accountability, January 12, 2017. https://www.acm.org/articles/bulletins/2017/january/usacm-statement-algorithmic-accountability</ref>
 
==See also==
* [[Black box]]
* [[Explainable AI]]
* [[Regulation of algorithms]]
* [[Reverse engineering]]
* [[Right to explanation]]
* [[Algorithmic accountability]]
 
== References ==
{{reflist}}
 
[[Category:Accountability]]
[[Category:Algorithms]]
[[Category:Theoretical computer science]]
[[Category:Transparency (behavior)]]