'''Algorithmic bias''' describes systematic and repeatable [[error]]s in a [[Computer System|computer system]] that create "[[#Defining fairness|unfair]]" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.
Bias can emerge from many factors, including but not limited to the design of the algorithm itself, its unintended or unanticipated use, or decisions relating to the way data is coded, collected, selected, or used to train the algorithm.<ref>{{cite journal |last=Van Eyghen |first=Hans |title=AI Algorithms as (Un)virtuous Knowers |journal=Discover Artificial Intelligence |volume=5 |issue=2 |date=2024 |doi=10.1007/s44163-024-00219-z |url=https://link.springer.com/article/10.1007/s44163-024-00219-z}}</ref> For example, algorithmic bias has been observed in [[Search engine bias|search engine results]] and [[social media bias|social media platforms]]. Its impacts can range from inadvertent privacy violations to the reinforcement of [[Bias|social biases]] of race, gender, sexuality, and ethnicity. The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination.<ref>{{Cite book |last=Marabelli |first=Marco |url=https://link.springer.com/book/10.1007/978-3-031-53919-0 |title=AI, Ethics, and Discrimination in Business |series=Palgrave Studies in Equity, Diversity, Inclusion, and Indigenization in Business |publisher=Springer |year=2024 |isbn=978-3-031-53918-3 |language=en |doi=10.1007/978-3-031-53919-0}}</ref> This bias has only recently been addressed in legal frameworks, such as the European Union's [[General Data Protection Regulation]] (adopted 2016, effective 2018) and the [[Artificial Intelligence Act]] (proposed 2021, approved 2024).
As algorithms expand their ability to organize society, politics, institutions, and behavior, sociologists have become concerned with the ways in which unanticipated output and manipulation of data can impact the physical world. Because algorithms are often considered to be neutral and unbiased, they can inaccurately project greater authority than human expertise (in part due to the psychological phenomenon of [[automation bias]]), and in some cases, reliance on algorithms can displace human responsibility for their outcomes. Bias can enter algorithmic systems as a result of pre-existing cultural, social, or institutional expectations; from the way features and labels are chosen; from technical limitations of their design; or from use in unanticipated contexts or by audiences who were not considered in the software's initial design.<ref>{{Cite book |last1=Suresh |first1=Harini |last2=Guttag |first2=John |title=Equity and Access in Algorithms, Mechanisms, and Optimization |chapter=A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle |date=2021-11-04 |chapter-url=https://dl.acm.org/doi/10.1145/3465416.3483305 |series=EAAMO '21 |___location=New York, NY, USA |publisher=Association for Computing Machinery |pages=1–9 |doi=10.1145/3465416.3483305 |isbn=978-1-4503-8553-4 |s2cid=235436386}}</ref>
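A minimal, hypothetical sketch can illustrate the feature-and-label pathway described above: when the labels in a training set encode a pre-existing disparity, a rule fitted to those labels reproduces it. The groups, scores, and threshold rule below are synthetic assumptions introduced purely for illustration and are not drawn from the cited studies.

<syntaxhighlight lang="python">
# Hypothetical sketch (synthetic data): a pre-existing disparity in
# historical labels is reproduced by a rule fitted to those labels.
import random

random.seed(0)

# Historical decisions: group "a" was approved at a lower score cutoff
# than group "b" for identical scores (pre-existing institutional bias).
def historical_label(group, score):
    threshold = 0.5 if group == "a" else 0.7
    return int(score >= threshold)

train = [(g, random.random()) for g in ("a", "b") for _ in range(5000)]
labeled = [(g, s, historical_label(g, s)) for g, s in train]

# "Training": learn one approval cutoff per group from the labels, as a
# stand-in for a model that uses group membership as a feature.
def learned_threshold(group):
    approved = sorted(s for g, s, y in labeled if g == group and y == 1)
    return approved[0]  # lowest score that was ever approved

for group in ("a", "b"):
    t = learned_threshold(group)
    rate = sum(1 for g, s, _ in labeled if g == group and s >= t) / 5000
    print(f"group {group}: learned cutoff {t:.2f}, approval rate {rate:.2f}")
</syntaxhighlight>

Because the learned cutoffs are read directly off the biased labels, the fitted rule approves group "a" at roughly the original higher rate; in practice the disparity can persist even when the group feature is removed, since other features often correlate with group membership.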