Algorithm aversion
[[Algorithm]]s, particularly those utilizing [[machine learning]] methods or [[artificial intelligence]] (AI), play a growing role in decision-making across various fields. Examples include recommender systems in [[e-commerce]] for identifying products a customer might like and AI systems in healthcare that assist in diagnoses and treatment decisions. Despite their proven ability to outperform humans in many contexts, algorithmic recommendations are often met with resistance or rejection, which can lead to inefficiencies and suboptimal outcomes.
 
The study of algorithm aversion is critical as algorithms become increasingly embedded in our daily lives. Factors such as perceived accountability, lack of transparency, and skepticism towards machine judgment contribute to this aversion. Conversely, there are scenarios where individuals are more likely to trust and follow algorithmic advice over human recommendations, a phenomenon referred to as algorithm appreciation.<ref name=":1">{{Cite journal |last1=Logg |first1=Jennifer M. |last2=Minson |first2=Julia A. |last3=Moore |first3=Don A. |date=2019-03-01 |title=Algorithm appreciation: People prefer algorithmic to human judgment |url=https://www.sciencedirect.com/science/article/abs/pii/S0749597818303388 |journal=Organizational Behavior and Human Decision Processes |language=en |volume=151 |pages=90–103 |doi=10.1016/j.obhdp.2018.12.005 |issn=0749-5978|url-access=subscription }}</ref> Understanding these dynamics is essential for improving human-algorithm interactions and fostering greater acceptance of AI-driven decision-making.
 
== Examples of algorithm aversion ==
People may also be averse to using algorithms if doing so conveys negative information about the human's ability.<ref>{{cite journal |last1=Weitzner |first1=Gregory |title=Reputational Algorithm Aversion |journal=Working Paper |url=https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4736843}}</ref> This can occur if humans have private information about their own ability.
 
== Proposed methods to overcome algorithm aversion ==
Algorithms are often capable of outperforming humans or performing tasks much more cost-effectively.<ref>{{Cite journal |last1=Dietvorst |first1=Berkeley J. |last2=Simmons |first2=Joseph P. |last3=Massey |first3=Cade |date=2015 |title=Algorithm aversion: People erroneously avoid algorithms after seeing them err. |url=https://doi.apa.org/doi/10.1037/xge0000033 |journal=Journal of Experimental Psychology: General |language=en |volume=144 |issue=1 |pages=114–126 |doi=10.1037/xge0000033 |pmid=25401381 |issn=1939-2222|url-access=subscription }}</ref><ref>{{Cite journal |last1=Yeomans |first1=Michael |last2=Shah |first2=Anuj |last3=Mullainathan |first3=Sendhil |last4=Kleinberg |first4=Jon |date=October 2019 |title=Making sense of recommendations |url=https://onlinelibrary.wiley.com/doi/10.1002/bdm.2118 |journal=Journal of Behavioral Decision Making |language=en |volume=32 |issue=4 |pages=403–414 |doi=10.1002/bdm.2118 |issn=0894-3257|url-access=subscription }}</ref> Despite this, algorithm aversion persists due to a range of psychological, cultural, and design-related factors. To mitigate resistance and build trust, researchers and practitioners have proposed several strategies.
 
=== Human-in-the-loop ===
One effective way to reduce algorithmic aversion is by incorporating a [[human-in-the-loop]] approach, where the human decision-maker retains control over the final decision. This approach addresses concerns about agency and accountability by positioning algorithms as advisory tools rather than autonomous decision-makers.
 
==== Advisory role ====
Algorithms can provide recommendations while leaving the ultimate decision-making authority with humans. This allows users to view algorithms as supportive rather than threatening. For example, in healthcare, AI systems can suggest diagnoses or treatments, but the human doctor makes the final call.
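The advisory arrangement described above can be sketched in a few lines of code. This is a minimal illustration, not an implementation from any cited system; the function names (<code>recommend</code>, <code>human_review</code>) and the scoring rule are invented for the example.

```python
# Minimal human-in-the-loop sketch: the algorithm only recommends;
# the human decision-maker keeps final authority and can override it.
# All names and the 0.5 scoring rule are illustrative assumptions.

def recommend(case):
    """Hypothetical model: flag cases whose risk score exceeds 0.5."""
    return "flag" if case["score"] > 0.5 else "clear"

def human_review(case, suggestion, override=None):
    """An explicit human override always wins over the algorithm's
    suggestion; otherwise the human accepts the advisory output."""
    return override if override is not None else suggestion

case = {"id": 1, "score": 0.72}
suggestion = recommend(case)                          # advisory output only
decision = human_review(case, suggestion)             # human accepts the advice
overridden = human_review(case, suggestion, "clear")  # or overrides it
```

Structuring the system this way keeps the algorithm in a supportive role: its output is an input to the human's decision, never the decision itself.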
 
==== Collaboration and trust ====
Integrating humans into algorithmic processes fosters a sense of collaboration and encourages users to engage with the system more openly. This method is particularly effective in domains where human intuition and context are critical, such as recruitment, education, and financial planning.
 
=== User training ===
Familiarizing users with algorithms through training can significantly reduce aversion, especially for those who are unfamiliar or skeptical. Training programs that simulate real-world interactions with algorithms allow users to see their capabilities and limitations firsthand. For instance, healthcare professionals using diagnostic AI systems can benefit from hands-on training that demonstrates how the system arrives at recommendations and how to interpret its outputs. Such training helps bridge knowledge gaps and demystifies algorithms, making users more comfortable with their use. Furthermore, repeated interactions and feedback loops help users build trust in the system over time. Financial incentives, such as rewards for accurate decisions made with the help of algorithms, have also been shown to encourage users to engage more readily with these systems.<ref>{{Cite journal |last1=Filiz |first1=Ibrahim |last2=Judek |first2=Jan René |last3=Lorenz |first3=Marco |last4=Spiwoks |first4=Markus |date=2021-09-01 |title=Reducing algorithm aversion through experience |url=https://linkinghub.elsevier.com/retrieve/pii/S221463502100068X |journal=Journal of Behavioral and Experimental Finance |volume=31 |pages=100524 |doi=10.1016/j.jbef.2021.100524 |issn=2214-6350|url-access=subscription }}</ref>
 
=== Incorporating user control ===
Allowing users to interact with and adjust algorithmic outputs can greatly enhance their sense of control, which is a key factor in overcoming aversion. For example, interactive interfaces that let users modify parameters, simulate outcomes, or personalize recommendations make algorithms feel less rigid and more adaptable. Providing confidence thresholds that users can adjust—such as setting stricter criteria for medical diagnoses—further empowers them to feel involved in the decision-making process. Feedback mechanisms are another important feature, as they allow users to provide input or correct errors, fostering a sense of collaboration between the user and the algorithm. These design features not only reduce resistance but also demonstrate that algorithms are flexible tools rather than fixed, inflexible systems.
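The adjustable confidence threshold mentioned above can be illustrated with a short sketch. This is a hypothetical example, not drawn from the cited studies; the <code>triage</code> function and the numeric thresholds are assumptions made for illustration.

```python
# Illustrative sketch of a user-adjustable confidence threshold:
# the algorithm may recommend on its own only when its confidence
# clears the threshold the user has chosen; otherwise the case is
# routed back to the human. Function name and values are invented.

def triage(confidence, threshold):
    """Return who decides, given the model's confidence and the
    user-chosen threshold for automatic recommendation."""
    return "algorithm" if confidence >= threshold else "defer_to_human"

# A cautious user (e.g. in a medical setting) raises the threshold,
# so more borderline cases are deferred to the human reviewer.
print(triage(0.85, threshold=0.80))  # algorithm
print(triage(0.85, threshold=0.90))  # defer_to_human
```

Exposing the threshold as a user-set parameter is one concrete way an interface can give users the sense of control that the research identifies as important.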
 
=== Personalization and customization ===
Personalization is another critical factor in reducing algorithm aversion. Algorithms that adapt to individual preferences or contexts are more likely to gain user acceptance. For instance, recommendation systems in e-commerce that learn a user's shopping habits over time are often trusted more than generic systems. Customization features, such as the ability to prioritize certain factors (e.g., cost or sustainability in product recommendations), further enhance user satisfaction by aligning outputs with their unique needs. In healthcare, personalized AI systems that incorporate a patient's medical history and specific conditions are better received than generalized tools. By tailoring outputs to the user's preferences and circumstances, algorithms can foster greater engagement and trust.
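The customization idea above (letting users prioritize factors such as cost or sustainability) can be sketched as a simple re-weighted ranking. The product data, weights, and function names here are invented for illustration and do not come from any cited recommender system.

```python
# Hypothetical customization sketch: the user re-weights ranking
# factors and the recommender re-scores products accordingly.
# All product data and weight values are invented.

products = [
    {"name": "A", "cost_score": 0.9, "sustainability_score": 0.3},
    {"name": "B", "cost_score": 0.4, "sustainability_score": 0.9},
]

def rank(items, weights):
    """Order products by the user's preferred weighting of factors."""
    def score(p):
        return (weights["cost"] * p["cost_score"]
                + weights["sustainability"] * p["sustainability_score"])
    return sorted(items, key=score, reverse=True)

# A cost-focused user sees A first; a sustainability-focused user sees B first.
cost_first = rank(products, {"cost": 0.8, "sustainability": 0.2})
green_first = rank(products, {"cost": 0.2, "sustainability": 0.8})
```

Because the same catalog yields different orderings under different user-chosen weights, the output visibly reflects the user's stated preferences, which is the mechanism the paragraph above credits with building engagement and trust.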
 
== Algorithm appreciation ==
Studies do not consistently show people demonstrating [[bias]] against algorithms; results are mixed, and people sometimes prefer advice from an algorithm over advice from a human. This effect is called ''algorithm appreciation''.<ref name=":1" /><ref>{{Cite journal |last1=Mahmud |first1=Hasan |last2=Islam |first2=A. K. M. Najmul |last3=Luo |first3=Xin (Robert) |last4=Mikalef |first4=Patrick |date=2024-04-01 |title=Decoding algorithm appreciation: Unveiling the impact of familiarity with algorithms, tasks, and algorithm performance |url=https://linkinghub.elsevier.com/retrieve/pii/S0167923624000010 |journal=Decision Support Systems |volume=179 |pages=114168 |doi=10.1016/j.dss.2024.114168 |issn=0167-9236}}</ref>
 
For example, customers are more likely to express initial interest to human sales agents than to automated sales agents, but less likely to provide them with contact information. This is attributed to "lower levels of performance expectancy and effort expectancy associated with human sales agents versus automated sales agents".<ref>{{Cite journal |last1=Adam |first1=Martin |last2=Roethke |first2=Konstantin |last3=Benlian |first3=Alexander |date=September 2023 |title=Human vs. Automated Sales Agents: How and Why Customer Responses Shift Across Sales Stages |url=https://pubsonline.informs.org/doi/10.1287/isre.2022.1171 |journal=Information Systems Research |language=en |volume=34 |issue=3 |pages=1148–1168 |doi=10.1287/isre.2022.1171 |issn=1047-7047|url-access=subscription }}</ref>
 
== References ==