Another study, published in August 2024, investigates how [[large language model]]s perpetuate covert racism, particularly through dialect prejudice against speakers of African American English (AAE). It highlights that these models exhibit more negative covert stereotypes about AAE speakers than any experimentally recorded human biases, even though their overt stereotypes are more positive. This discrepancy raises concerns about the potentially harmful consequences of such biases in decision-making processes.<ref>{{Cite journal |last1=Hofmann |first1=V. |last2=Kalluri |first2=P. R. |last3=Jurafsky |first3=D. |display-authors=etal |title=AI generates covertly racist decisions about people based on their dialect |journal=Nature |volume=633 |pages=147–154 |year=2024 |doi=10.1038/s41586-024-07856-5}}</ref>
A study published by the [[Anti-Defamation League]] in 2025 found that several major LLMs, including [[ChatGPT]], [[Llama (language model)|Llama]], [[Claude (language model)|Claude]], and [[Gemini (language model)|Gemini]], showed antisemitic bias.<ref>{{Cite web |last=Stub |first=Zev |title=Study: ChatGPT, Meta’s Llama and all other top AI models show anti-Jewish, anti-Israel bias |url=https://www.timesofisrael.com/study-chatgpt-metas-llama-and-all-other-top-ai-models-show-anti-jewish-anti-israel-bias/ |access-date=2025-03-27 |website=The Times of Israel |language=en-US}}</ref>
==== Law enforcement and legal proceedings ====
Facial recognition technology has been shown to cause problems for transgender individuals. In 2018, Uber drivers who were transgender or transitioning reported difficulty with the facial recognition software that Uber uses as a built-in security measure. Because the software had difficulty recognizing the faces of drivers who were transitioning, some trans drivers had their accounts suspended, costing them fares and potentially their jobs.<ref name="Samuel2019">{{Cite web |url=https://www.vox.com/future-perfect/2019/4/19/18412674/ai-bias-facial-recognition-black-gay-transgender |title=Some AI just shouldn't exist |last=Samuel |first=Sigal |date=2019-04-19 |website=Vox |language=en |access-date=2019-12-12}}</ref> Although including trans individuals in training sets for machine learning models might appear to solve this problem, in one instance YouTube videos of trans individuals were collected for use as training data without the consent of the people who appeared in them, raising privacy concerns.<ref name="Samuel2019" />
A 2017 study conducted at Stanford University tested algorithms in a machine learning system that was said to be able to detect an individual's sexual orientation based on facial images.<ref>{{Cite journal |last1=Wang |first1=Yilun |last2=Kosinski |first2=Michal |date=2017-02-15 |title=Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. |url=https://osf.io/zn79k/ |journal=OSF |doi=10.17605/OSF.IO/ZN79K |language=en}}</ref> The model correctly distinguished between gay and straight men 81% of the time and between gay and straight women 74% of the time. The study drew backlash from the LGBTQIA community, which feared the possible negative repercussions of such a system, including putting individuals at risk of being "[[Outing|outed]]" against their will.<ref>{{Cite news |url=https://www.theguardian.com/world/2017/sep/08/ai-gay-gaydar-algorithm-facial-recognition-criticism-stanford |title=LGBT groups denounce 'dangerous' AI that uses your face to guess sexuality |last=Levin |first=Sam |date=2017-09-09 |work=The Guardian |access-date=2019-12-12 |language=en-GB |issn=0261-3077}}</ref>
=== Disability discrimination ===