Current research around algorithmic transparency is concerned with both the societal effects of accessing remote services that run algorithms<ref>{{cite web|title=Workshop on Data and Algorithmic Transparency|url=http://datworkshop.org/|accessdate=4 January 2017|date=2015}}</ref> and the mathematical and computer science approaches that can be used to achieve it.<ref>{{cite web|title=Fairness, Accountability, and Transparency in Machine Learning|url=http://www.fatml.org/|accessdate=29 May 2017|date=2015}}</ref>
In 2017, the [[Association for Computing Machinery]] US Public Policy Council issued a Statement on Algorithmic Transparency and Accountability that listed seven principles intended to support societal benefits of algorithm use while minimizing the negative potential for society. Those points are:
1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.
6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.<ref>{{cite web|title=USACM Issues Statement on Algorithmic Transparency and Accountability|url=https://www.acm.org/articles/bulletins/2017/january/usacm-statement-algorithmic-accountability|date=12 January 2017|publisher=Association for Computing Machinery}}</ref>
==See also==