→Stereotyping: added ref
In the definition, "error" was too charitable: algorithmic bias can be intentional, not merely an error. "Sociotechnical" was added to indicate that the algorithm and its usage pattern together can produce bias, not the algorithm alone. Finally, a reference was added to an Open Access handbook containing examples.
{{Discrimination sidebar}}
'''Algorithmic bias''' describes systematic and repeatable harmful tendencies in a computerized [[sociotechnical system]] that create "unfair" outcomes, such as privileging one category over another in ways different from the intended function of the algorithm.
Bias can emerge from many factors, including but not limited to the design of the algorithm itself, its unintended or unanticipated use, or decisions about how data is coded, collected, selected, or used to train the algorithm.<ref>{{cite journal|last=Van Eyghen|first= Hans|title=AI Algorithms as (Un)virtuous Knowers|journal=Discover Artificial Intelligence|volume=5|issue=2|date=2025|doi= 10.1007/s44163-024-00219-z|url=https://link.springer.com/article/10.1007/s44163-024-00219-z|doi-access=free}}</ref> For example, algorithmic bias has been observed in [[Search engine bias|search engine results]] and on [[social media bias|social media platforms]]. Its impacts range from inadvertent privacy violations to the reinforcement of [[Bias|social biases]] of race, gender, sexuality, and ethnicity. The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination.<ref>{{Cite book |last=Marabelli |first=Marco |url=https://link.springer.com/book/10.1007/978-3-031-53919-0 |title=AI, Ethics, and Discrimination in Business |series=Palgrave Studies in Equity, Diversity, Inclusion, and Indigenization in Business |publisher=Springer |year=2024 |isbn=978-3-031-53918-3 |language=en |doi=10.1007/978-3-031-53919-0}}</ref> This bias has only recently been addressed in legal frameworks, such as the European Union's [[General Data Protection Regulation]] (in force since 2018) and the [[Artificial Intelligence Act]] (proposed 2021, approved 2024).