Algorithmic bias

 
===== Racial bias =====
Racial bias refers to the tendency of machine learning models to produce outcomes that unfairly discriminate against or stereotype individuals based on race or ethnicity. This bias often stems from training data that reflects historical and systemic inequalities. For example, AI systems used in hiring, law enforcement, or healthcare may disproportionately disadvantage certain racial groups by reinforcing existing stereotypes or underrepresenting them in key areas. Such biases can manifest as facial recognition systems misidentifying individuals of certain racial backgrounds, or as healthcare algorithms underestimating the medical needs of minority patients. Addressing racial bias requires careful examination of data, improved transparency in algorithmic processes, and efforts to ensure fairness throughout the AI development lifecycle.<ref>{{Cite web |last=Lazaro |first=Gina |date=May 17, 2024 |title=Understanding Gender and Racial Bias in AI |url=https://www.sir.advancedleadership.harvard.edu/articles/understanding-gender-and-racial-bias-in-ai |access-date=December 11, 2024 |website=Harvard Advanced Leadership Initiative Social Impact Review}}</ref><ref>{{Cite journal |last=Jindal |first=Atin |date=September 5, 2022 |title=Misguided Artificial Intelligence: How Racial Bias is Built Into Clinical Models |url=https://bhm.scholasticahq.com/article/38021-misguided-artificial-intelligence-how-racial-bias-is-built-into-clinical-models |journal=Journal of Brown Hospital Medicine |volume=2 |issue=1 |doi=10.56305/001c.38021 |access-date=December 11, 2024|doi-access=free |pmc=11878858 }}</ref> The sources of racial bias in artificial intelligence are usually present at various phases of the algorithmic development process. This type of bias can arise when imbalanced and unrepresentative datasets are used for training, and when the systems that collect such data are themselves shaped by human biases and prejudices, causing the algorithms to reproduce these historical biases and inequalities. Because many vulnerable and marginalized groups have historically been overlooked or misrepresented in existing datasets, algorithms trained on such data inherit the same biases and limitations in their predictions. Algorithms are effective at identifying patterns present in the groups on which they were trained and tested, but cannot easily recognize patterns specific to groups of patients who were not represented in the training data.<ref>{{Cite journal |last=Norori |first=Natalia |last2=Hu |first2=Qiyang |last3=Aellen |first3=Florence Marcelle |last4=Faraci |first4=Francesca Dalia |last5=Tzovara |first5=Athina |date=2021-10-08 |title=Addressing bias in big data and AI for health care: A call for open science |url=https://pmc.ncbi.nlm.nih.gov/articles/PMC8515002/ |journal=Patterns (New York, N.Y.) |volume=2 |issue=10 |pages=100347 |doi=10.1016/j.patter.2021.100347 |issn=2666-3899 |pmc=8515002 |pmid=34693373}}</ref>
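
The way under-representation in training data degrades predictions for the affected group can be illustrated with a minimal, hypothetical sketch using synthetic data and scikit-learn (the groups, feature distributions, and decision rules below are invented for illustration and do not come from any cited study):

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift, label_rule):
    """Generate synthetic examples for one group. The feature distribution
    and labeling rule differ between groups, standing in for group-specific
    patterns a model would need to learn from data."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    return X, label_rule(X)

rule_a = lambda X: (X[:, 0] + X[:, 1] > 0).astype(int)
rule_b = lambda X: (X[:, 0] - X[:, 1] > 2).astype(int)

# Group A is well represented in the training data; group B is scarce.
Xa, ya = make_group(5000, shift=0.0, label_rule=rule_a)
Xb, yb = make_group(100, shift=2.0, label_rule=rule_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: accuracy is typically far
# lower for the under-represented group, whose pattern the model never
# had enough examples to learn.
Xa_test, ya_test = make_group(2000, shift=0.0, label_rule=rule_a)
Xb_test, yb_test = make_group(2000, shift=2.0, label_rule=rule_b)
print("group A accuracy:", accuracy_score(ya_test, model.predict(Xa_test)))
print("group B accuracy:", accuracy_score(yb_test, model.predict(Xb_test)))
</syntaxhighlight>

In runs of this kind, the classifier scores well on the majority group and poorly on the scarce one, because the fitted decision boundary is dominated by the majority group's pattern; this is the mechanism, in miniature, by which imbalanced datasets produce biased predictive value.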
 
=== Technical ===
[[File:Three Surveillance cameras.jpg|thumb|upright=1.2|Facial recognition software used in conjunction with surveillance cameras was found to display bias in recognizing Asian and black faces over white faces.<ref name="IntronaWood" />{{rp|191}}]]
Technical bias emerges through limitations of a program, computational power, its design, or other constraints on the system.<ref name="FriedmanNissenbaum" />{{rp|332}} Such bias can also be a constraint of design: for example, a search engine that shows three results per screen can be understood to privilege the top three results slightly more than the next three, as in an airline price display.<ref name="FriedmanNissenbaum" />{{rp|336}} Another case is software that relies on [[randomness]] for fair distributions of results. If the [[random number generation]] mechanism is not truly random, it can introduce bias, for example, by skewing selections toward items at the end or beginning of a list.<ref name="FriedmanNissenbaum" />{{rp|332}}
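
A minimal sketch of the randomness failure described above (a hypothetical selection routine, not code from any cited system) shows how reducing a raw random value modulo the list length can systematically favor items at the beginning of the list whenever the raw range does not divide evenly:

<syntaxhighlight lang="python">
import random
from collections import Counter

def biased_pick(items, rng_max=256):
    """Flawed scheme: a raw value in [0, rng_max) reduced modulo len(items).
    When rng_max is not a multiple of len(items), indices near the start of
    the list are selected slightly more often than the rest."""
    raw = random.randrange(rng_max)  # stands in for a low-quality randomness source
    return items[raw % len(items)]

def fair_pick(items):
    """Uniform selection over the whole list."""
    return items[random.randrange(len(items))]

items = [f"result_{i}" for i in range(10)]
trials = 100_000
biased_counts = Counter(biased_pick(items) for _ in range(trials))
fair_counts = Counter(fair_pick(items) for _ in range(trials))
# With rng_max=256 and 10 items, residues 0-5 each correspond to 26 raw
# values while residues 6-9 correspond to only 25, so the head of the
# list is drawn slightly more often than the tail.
print("biased:", {k: biased_counts[k] for k in sorted(biased_counts)})
print("fair:  ", {k: fair_counts[k] for k in sorted(fair_counts)})
</syntaxhighlight>

Under these assumed parameters, each of the first six items is selected about 10.2% of the time and each of the last four about 9.8%, a small but systematic skew toward the start of the list of exactly the kind a truly uniform generator would avoid.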