* [[Algorithmic bias|algorithms becoming susceptible to bias]],<ref name=":1">{{cite web |url=https://ash.harvard.edu/files/ash/files/artificial_intelligence_for_citizen_services.pdf|title=Artificial Intelligence for Citizen Services and Government|last=Mehr|first=Hila|date=August 2017|website=ash.harvard.edu|access-date=2018-12-31}}</ref>
* a lack of transparency in how an algorithm may make decisions,<ref name=":6">{{cite web|url=https://www.capgemini.com/consulting/wp-content/uploads/sites/30/2017/10/ai-in-public-sector.pdf|title=Unleashing the potential of Artificial Intelligence in the Public Sector|last=Capgemini Consulting|date=2017|website=www.capgemini.com|access-date=2018-12-31}}</ref>
According to the 2016 book [[Weapons of Math Destruction]], algorithms and [[big data]] are suspected of increasing inequality due to their opacity, scale, and capacity for damage.<ref>{{cite journal |last1=Verma |first1=Shikha |title=Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy |journal=Vikalpa: The Journal for Decision Makers |date=June 2019 |volume=44 |issue=2 |pages=97–98 |doi=10.1177/0256090919853933 |s2cid=198779932 |issn=0256-0909|doi-access=free }}</ref>