Algorithmic bias
As algorithms expand their ability to organize society, politics, institutions, and behavior, sociologists have become concerned with the ways in which unanticipated output and manipulation of data can impact the physical world. Because algorithms are often considered to be neutral and unbiased, they can inaccurately project greater authority than human expertise (in part due to the psychological phenomenon of [[automation bias]]), and in some cases, reliance on algorithms can displace human responsibility for their outcomes. Bias can enter into algorithmic systems as a result of pre-existing cultural, social, or institutional expectations; by how features and labels are chosen; because of technical limitations of their design; or by being used in unanticipated contexts or by audiences who are not considered in the software's initial design.<ref>{{Cite book |last1=Suresh |first1=Harini |last2=Guttag |first2=John |title=Equity and Access in Algorithms, Mechanisms, and Optimization |chapter=A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle |date=2021-11-04 |chapter-url=https://dl.acm.org/doi/10.1145/3465416.3483305 |series=EAAMO '21 |___location=New York, NY, USA |publisher=Association for Computing Machinery |pages=1–9 |doi=10.1145/3465416.3483305 |isbn=978-1-4503-8553-4|s2cid=235436386 }}</ref>
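The effect of label choice can be made concrete with a small simulation. The following sketch is illustrative only and is not drawn from the cited framework: the two-group setup, the rates, and names such as <code>behavior</code> and <code>observed</code> are invented. It shows how a proxy label that is recorded unevenly across groups imports a pre-existing institutional skew into training data, even when the underlying behavior is identical across groups.

<syntaxhighlight lang="python">
# Illustrative sketch (hypothetical setup, not from the cited source):
# how an unevenly recorded proxy label can import pre-existing
# institutional bias into a model's training data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)            # two demographic groups, 0 and 1

# True underlying behavior is identical across groups.
behavior = rng.random(n) < 0.10

# But the *recorded* label (e.g., arrests used as a proxy for offending)
# is observed far more often for group 1 -- a pre-existing skew in the
# data-collection institution, not in the behavior itself.
observed = behavior & (rng.random(n) < np.where(group == 1, 0.9, 0.3))

# A model fit to `observed` inherits the skew even though `behavior`
# does not differ by group.
for g in (0, 1):
    print(f"group {g}: true rate {behavior[group == g].mean():.3f}, "
          f"labeled rate {observed[group == g].mean():.3f}")
</syntaxhighlight>

In this toy setup both groups have the same true rate (about 0.10), but the labeled rates diverge (roughly 0.03 versus 0.09), so any model trained on the labels learns the recording disparity rather than the behavior.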
 
Algorithmic bias has been cited in cases ranging from election outcomes to the spread of [[online hate speech]]. It has also arisen in criminal justice,<ref>{{Cite journal |last=Krištofík |first=Andrej |date=2025-04-28 |title=Bias in AI (Supported) Decision Making: Old Problems, New Technologies |url=https://www.iacajournal.org/articles/10.36745/ijca.598/ |journal=International Journal for Court Administration |language=en |volume=16 |issue=1 |doi=10.36745/ijca.598 |issn=2156-7964|doi-access=free }}</ref> healthcare, and hiring, compounding existing racial, socioeconomic, and gender biases. The relative inability of facial recognition technology to accurately identify darker-skinned faces has been linked to multiple wrongful arrests of black men, an issue stemming from imbalanced datasets. Problems in understanding, researching, and discovering algorithmic bias persist due to the proprietary nature of algorithms, which are typically treated as trade secrets. Even when full transparency is provided, the complexity of certain algorithms poses a barrier to understanding their functioning. Furthermore, algorithms may change, or may respond to input or output, in ways that cannot be anticipated or easily reproduced for analysis. In many cases, even within a single website or application, there is no single "algorithm" to examine, but a network of many interrelated programs and data inputs, which can differ even between users of the same service.
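Disparities of this kind are typically surfaced by disaggregated evaluation: reporting error rates per demographic group rather than a single aggregate figure. The following is a minimal sketch of such an audit, using synthetic data in which the disparity is hard-coded for illustration; it does not model any real face-recognition system.

<syntaxhighlight lang="python">
# Minimal sketch of a disaggregated evaluation: measuring error rates
# per demographic group rather than in aggregate. The data is synthetic
# and the disparity is hard-coded for illustration.
import numpy as np

rng = np.random.default_rng(1)

def false_positive_rate(y_true, y_pred):
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

n = 50_000
group = rng.choice(["lighter", "darker"], size=n, p=[0.8, 0.2])
y_true = rng.integers(0, 2, n)

# Simulated matcher: more label noise (hence more false matches) for
# the group that was underrepresented in training.
noise = np.where(group == "darker", 0.15, 0.02)
flip = rng.random(n) < noise
y_pred = np.where(flip, 1 - y_true, y_true)

print(f"aggregate FPR: {false_positive_rate(y_true, y_pred):.3f}")
for g in ("lighter", "darker"):
    m = group == g
    print(f"{g:>7} FPR: {false_positive_rate(y_true[m], y_pred[m]):.3f}")
</syntaxhighlight>

Here the aggregate false-positive rate (about 0.05) masks a rate several times higher for the underrepresented group, which is why audits of this kind report metrics per group rather than overall.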
 
A 2021 survey identified multiple forms of algorithmic bias, including historical, representation, and measurement biases, each of which can contribute to unfair outcomes.<ref>{{Cite journal |last1=Mehrabi |first1=N. |last2=Morstatter |first2=F. |last3=Saxena |first3=N. |last4=Lerman |first4=K. |last5=Galstyan |first5=A. |title=A survey on bias and fairness in machine learning |journal=ACM Computing Surveys |volume=54 |issue=6 |pages=1–35 |year=2021 |doi=10.1145/3457607 |arxiv=1908.09635 |url=https://dl.acm.org/doi/10.1145/3457607 |access-date=April 30, 2025}}</ref>
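As one concrete instance of these categories, representation bias arises when the sampling process itself depends on group membership, so the training sample no longer mirrors the population. The sketch below uses invented population figures and is not taken from the survey.

<syntaxhighlight lang="python">
# Hedged sketch of representation bias: data is collected only where it
# is easy to collect (e.g., via an app used mostly in cities), so the
# sampling probability depends on group membership. Figures are invented.
import numpy as np

rng = np.random.default_rng(2)

population = rng.choice(["urban", "rural"], size=1_000_000, p=[0.5, 0.5])

# Sampling probability differs by group: 30% of urban records are
# captured versus 3% of rural ones.
keep = rng.random(population.size) < np.where(population == "urban", 0.30, 0.03)
sample = population[keep]

print("population rural share:", (population == "rural").mean())
print("sample rural share:    ", round((sample == "rural").mean(), 3))
</syntaxhighlight>

A model trained on such a sample sees rural cases at roughly a tenth of their true frequency, so its error rates on that group can be expected to be correspondingly worse.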