Algorithm aversion
[[Algorithm]]s, particularly those utilizing [[machine learning]] methods or [[artificial intelligence]] (AI), play a growing role in decision-making across various fields. Examples include recommender systems in [[e-commerce]] for identifying products a customer might like and AI systems in healthcare that assist in diagnoses and treatment decisions. Despite their proven ability to outperform humans in many contexts, algorithmic recommendations are often met with resistance or rejection, which can lead to inefficiencies and suboptimal outcomes.
 
The study of algorithm aversion is critical as algorithms become increasingly embedded in our daily lives. Factors such as perceived accountability, lack of transparency, and skepticism towards machine judgment contribute to this aversion. Conversely, there are scenarios where individuals are more likely to trust and follow algorithmic advice over human recommendations, a phenomenon referred to as algorithm appreciation.<ref name=":1">{{Cite journal |last1=Logg |first1=Jennifer M. |last2=Minson |first2=Julia A. |last3=Moore |first3=Don A. |date=2019-03-01 |title=Algorithm appreciation: People prefer algorithmic to human judgment |url=https://www.sciencedirect.com/science/article/abs/pii/S0749597818303388 |journal=Organizational Behavior and Human Decision Processes |language=en |volume=151 |pages=90–103 |doi=10.1016/j.obhdp.2018.12.005 |issn=0749-5978|url-access=subscription }}</ref> Understanding these dynamics is essential for improving human-algorithm interactions and fostering greater acceptance of AI-driven decision-making.
 
== Examples of algorithm aversion ==
 
== Proposed methods to overcome algorithm aversion ==
Algorithms are often capable of outperforming humans or performing tasks much more cost-effectively.<ref>{{Cite journal |last1=Dietvorst |first1=Berkeley J. |last2=Simmons |first2=Joseph P. |last3=Massey |first3=Cade |date=2015 |title=Algorithm aversion: People erroneously avoid algorithms after seeing them err. |url=https://doi.apa.org/doi/10.1037/xge0000033 |journal=Journal of Experimental Psychology: General |language=en |volume=144 |issue=1 |pages=114–126 |doi=10.1037/xge0000033 |pmid=25401381 |issn=1939-2222}}</ref><ref>{{Cite journal |last1=Yeomans |first1=Michael |last2=Shah |first2=Anuj |last3=Mullainathan |first3=Sendhil |last4=Kleinberg |first4=Jon |date=October 2019 |title=Making sense of recommendations |url=https://onlinelibrary.wiley.com/doi/10.1002/bdm.2118 |journal=Journal of Behavioral Decision Making |language=en |volume=32 |issue=4 |pages=403–414 |doi=10.1002/bdm.2118 |issn=0894-3257|url-access=subscription }}</ref> Despite this, algorithm aversion persists due to a range of psychological, cultural, and design-related factors. To mitigate resistance and build trust, researchers and practitioners have proposed several strategies.
 
=== Human-in-the-loop ===
 
=== User training ===
Familiarizing users with algorithms through training can significantly reduce aversion, especially for those who are unfamiliar or skeptical. Training programs that simulate real-world interactions with algorithms allow users to see their capabilities and limitations firsthand. For instance, healthcare professionals using diagnostic AI systems can benefit from hands-on training that demonstrates how the system arrives at recommendations and how to interpret its outputs. Such training helps bridge knowledge gaps and demystifies algorithms, making users more comfortable with their use. Furthermore, repeated interactions and feedback loops help users build trust in the system over time. Financial incentives, such as rewards for accurate decisions made with the help of algorithms, have also been shown to encourage users to engage more readily with these systems.<ref>{{Cite journal |last1=Filiz |first1=Ibrahim |last2=Judek |first2=Jan René |last3=Lorenz |first3=Marco |last4=Spiwoks |first4=Markus |date=2021-09-01 |title=Reducing algorithm aversion through experience |url=https://linkinghub.elsevier.com/retrieve/pii/S221463502100068X |journal=Journal of Behavioral and Experimental Finance |volume=31 |pages=100524 |doi=10.1016/j.jbef.2021.100524 |issn=2214-6350|url-access=subscription }}</ref>
 
=== Incorporating user control ===
 
== Algorithm appreciation ==
Studies do not consistently show people demonstrating [[bias]] against algorithms; some find the opposite, with people preferring advice from an algorithm over advice from a human. This effect is called ''algorithm appreciation''.<ref>{{Cite journal |last1=Logg |first1=Jennifer M. |last2=Minson |first2=Julia A. |last3=Moore |first3=Don A. |date=2019-03-01 |title=Algorithm appreciation: People prefer algorithmic to human judgment |url=https://linkinghub.elsevier.com/retrieve/pii/S0749597818303388 |journal=Organizational Behavior and Human Decision Processes |volume=151 |pages=90–103 |doi=10.1016/j.obhdp.2018.12.005 |issn=0749-5978|url-access=subscription }}</ref><ref>{{Cite journal |last1=Mahmud |first1=Hasan |last2=Islam |first2=A. K. M. Najmul |last3=Luo |first3=Xin (Robert) |last4=Mikalef |first4=Patrick |date=2024-04-01 |title=Decoding algorithm appreciation: Unveiling the impact of familiarity with algorithms, tasks, and algorithm performance |url=https://linkinghub.elsevier.com/retrieve/pii/S0167923624000010 |journal=Decision Support Systems |volume=179 |pages=114168 |doi=10.1016/j.dss.2024.114168 |issn=0167-9236}}</ref>
 
For example, customers are more likely to indicate initial interest to human sales agents than to automated sales agents, but are less likely to provide their contact information to the human agents. This is attributed to "lower levels of performance expectancy and effort expectancy associated with human sales agents versus automated sales agents".<ref>{{Cite journal |last1=Adam |first1=Martin |last2=Roethke |first2=Konstantin |last3=Benlian |first3=Alexander |date=September 2023 |title=Human vs. Automated Sales Agents: How and Why Customer Responses Shift Across Sales Stages |url=https://pubsonline.informs.org/doi/10.1287/isre.2022.1171 |journal=Information Systems Research |language=en |volume=34 |issue=3 |pages=1148–1168 |doi=10.1287/isre.2022.1171 |issn=1047-7047|url-access=subscription }}</ref>
 
== References ==