Algorithm aversion: Difference between revisions

'''Algorithm aversion''' is defined as a "biased assessment of an algorithm which manifests in negative behaviors and attitudes towards the algorithm compared to a human agent."<ref name=":0">{{Cite journal |last1=Jussupow |first1=Ekaterina |last2=Benbasat |first2=Izak |last3=Heinzl |first3=Armin |date=2020 |title=Why Are We Averse Towards Algorithms ? A Comprehensive Literature Review on Algorithm Aversion |url=https://aisel.aisnet.org/ecis2020_rp/168/ |journal=Twenty-Eighth European Conference on Information Systems (ECIS2020) |pages=1–16}}</ref> This phenomenon describes the tendency of humans to reject advice or recommendations from an algorithm in situations where they would accept the same advice if it came from a human.
 
[[Algorithm]]s, particularly those utilizing [[machine learning]] methods or [[artificial intelligence]] (AI), play a growing role in decision-making across various fields. Examples include recommender systems in [[e-commerce]] for identifying products a customer might like and AI systems in healthcare that assist in diagnoses and treatment decisions. Despite their proven ability to outperform humans in many contexts, algorithmic recommendations are often met with resistance or rejection, which can lead to inefficiencies and suboptimal outcomes.
 
The study of algorithm aversion is critical as algorithms become increasingly embedded in our daily lives. Factors such as perceived accountability, lack of transparency, and skepticism towards machine judgment contribute to this aversion. Conversely, there are scenarios where individuals are more likely to trust and follow algorithmic advice over human recommendations, a phenomenon referred to as algorithm appreciation.<ref name=":1">{{Cite journal |last1=Logg |first1=Jennifer M. |last2=Minson |first2=Julia A. |last3=Moore |first3=Don A. |date=2019-03-01 |title=Algorithm appreciation: People prefer algorithmic to human judgment |url=https://www.sciencedirect.com/science/article/abs/pii/S0749597818303388 |journal=Organizational Behavior and Human Decision Processes |language=en |volume=151 |pages=90–103 |doi=10.1016/j.obhdp.2018.12.005 |issn=0749-5978}}</ref> Understanding these dynamics is essential for improving human-algorithm interactions and fostering greater acceptance of AI-driven decision-making.
 
=== Consumer Behavior ===
Consumers generally react less favorably to decisions made by algorithms compared to those made by humans. For example, when a decision results in a positive outcome, consumers find it harder to internalize the result if it comes from an algorithm. Conversely, negative outcomes tend to elicit similar responses regardless of whether the decision was made by an algorithm or a human.<ref name=":26">{{Cite journal |last1=Yalcin |first1=Gizem |last2=Lim |first2=Sarah |last3=Puntoni |first3=Stefano |last4=van Osselaer |first4=Stijn M.J. |date=August 2022 |title=Thumbs Up or Down: Consumer Reactions to Decisions by Algorithms Versus Humans |url=https://journals.sagepub.com/doi/10.1177/00222437211070016 |journal=Journal of Marketing Research |language=en |volume=59 |issue=4 |pages=696–717 |doi=10.1177/00222437211070016 |issn=0022-2437}}</ref>
 
=== Marketing and Content Creation ===
 
=== Cultural Differences ===
Cultural norms play a significant role in algorithm aversion. In individualistic cultures, such as in the United States, there is a higher tendency to reject algorithmic recommendations due to an emphasis on autonomy and personalized decision-making. In contrast, collectivist cultures, such as in India, exhibit lower aversion, particularly when familiarity with algorithms is higher or when decisions align with societal norms.<ref name=":47">{{Cite journal |last1=Liu |first1=Nicole Tsz Yeung |last2=Kirshner |first2=Samuel N. |last3=Lim |first3=Eric T. K. |date=2023-05-01 |title=Is algorithm aversion WEIRD? A cross-country comparison of individual-differences and algorithm aversion |url=https://linkinghub.elsevier.com/retrieve/pii/S0969698923000061 |journal=Journal of Retailing and Consumer Services |volume=72 |pages=103259 |doi=10.1016/j.jretconser.2023.103259 |hdl=1959.4/unsworks_82995 |issn=0969-6989}}</ref>
 
=== Moral and Emotional Decisions ===
Algorithms are less trusted for tasks involving moral or emotional judgment, such as ethical dilemmas or empathetic decision-making. For example, individuals may reject algorithmic decisions in scenarios where they perceive moral stakes to be high, such as autonomous vehicle decisions or medical life-or-death situations.<ref>{{Cite journal |last1=Castelo |first1=Noah |last2=Ward |first2=Adrian F. |date=2021-12-20 |title=Conservatism predicts aversion to consequential Artificial Intelligence |journal=PLOS ONE |language=en |volume=16 |issue=12 |pages=e0261467 |doi=10.1371/journal.pone.0261467 |doi-access=free |issn=1932-6203 |pmc=8687590 |pmid=34928989|bibcode=2021PLoSO..1661467C }}</ref>
 
Algorithm aversion arises from a combination of psychological, task-related, cultural, and design-related factors. These mechanisms interact to shape individuals' negative perceptions and behaviors toward algorithms, even in cases where algorithmic performance is objectively superior to human decision-making.
 
=== Psychological Mechanisms ===
 
==== Perceived Responsibility ====
Individuals often feel a heightened sense of accountability when using algorithmic advice compared to human advice. This stems from the belief that, if a decision goes wrong, they will be solely responsible because an algorithm lacks the capacity to share blame. By contrast, decisions made with human input are perceived as more collaborative, allowing for shared accountability. For example, users are less likely to rely on algorithmic recommendations in high-stakes domains like healthcare or financial advising, where the repercussions of errors are significant.<ref name=":7" />
 
==== Locus of Control ====
 
==== Task Complexity and Risk ====
The nature of the task significantly influences algorithm aversion. For routine and low-risk tasks, such as recommending movies or predicting product preferences, users are generally comfortable relying on algorithms. However, for high-stakes or subjective tasks, such as making medical diagnoses, financial decisions, or moral judgments, algorithm aversion increases. Users perceive these tasks as requiring empathy, ethical reasoning, or nuanced understanding—qualities that they believe algorithms lack. This disparity highlights why algorithms are better received in technical fields (e.g., logistics) but face resistance in human-centric domains.<ref name=":6">{{Cite journal |last1=Yalcin |first1=Gizem |last2=Lim |first2=Sarah |last3=Puntoni |first3=Stefano |last4=van Osselaer |first4=Stijn M.J. |date=August 2022 |title=Thumbs Up or Down: Consumer Reactions to Decisions by Algorithms Versus Humans |url=https://journals.sagepub.com/doi/10.1177/00222437211070016 |journal=Journal of Marketing Research |language=en |volume=59 |issue=4 |pages=696–717 |doi=10.1177/00222437211070016 |issn=0022-2437}}</ref>
 
==== Outcome Valence ====
People's reactions to algorithmic decisions are influenced by the nature of the decision outcome. When algorithms deliver positive results, users are more likely to trust and accept them. However, when outcomes are negative, users are more inclined to reject algorithms and attribute blame to their use. This phenomenon is linked to the perception that algorithms lack accountability, unlike human decision-makers, who can offer justifications or accept responsibility for failures.<ref name=":6" />
 
=== Cultural Mechanisms ===
 
==== Individualism vs. Collectivism ====
Cultural norms significantly shape attitudes toward algorithmic decision-making. In individualistic cultures, such as the United States, people value autonomy and personalization, making them more skeptical of algorithmic systems that they perceive as impersonal or rigid. Conversely, in collectivist cultures like India, individuals are more likely to accept algorithmic recommendations, particularly when these systems align with group norms or social expectations. Familiarity with algorithms in collectivist societies also reduces aversion, as users view algorithms as tools to reinforce societal goals rather than threats to individual autonomy.<ref name=":7">{{Cite journal |last1=Liu |first1=Nicole Tsz Yeung |last2=Kirshner |first2=Samuel N. |last3=Lim |first3=Eric T. K. |date=2023-05-01 |title=Is algorithm aversion WEIRD? A cross-country comparison of individual-differences and algorithm aversion |url=https://linkinghub.elsevier.com/retrieve/pii/S0969698923000061 |journal=Journal of Retailing and Consumer Services |volume=72 |pages=103259 |doi=10.1016/j.jretconser.2023.103259 |hdl=1959.4/unsworks_82995 |issn=0969-6989}}</ref>
 
==== Cultural Influences ====
 
==== Default Skepticism ====
Many individuals harbor an ingrained skepticism toward algorithms, particularly when they lack familiarity with the system or its capabilities. Early negative experiences with algorithms can entrench this distrust, making it difficult to rebuild confidence. Even when algorithms perform better, this bias often persists, leading to outright rejection.<ref name=":47" />
 
==== Favoritism Toward Humans ====
People often display a preference for human decisions over algorithmic ones, particularly for positive outcomes. Yalcin et al. highlighted that individuals are more likely to internalize favorable decisions made by humans, attributing success to human expertise or effort. In contrast, decisions made by algorithms are viewed as impersonal, reducing the sense of achievement or satisfaction. This favoritism contributes to a persistent bias against algorithmic systems, even when their performance matches or exceeds that of humans.<ref name=":26" />
 
== Proposed Methods to Overcome Algorithm Aversion ==
 
=== Personalization and Customization ===
Personalization is another critical factor in reducing algorithm aversion. Algorithms that adapt to individual preferences or contexts are more likely to gain user acceptance. For instance, recommendation systems in e-commerce that learn a user's shopping habits over time are often trusted more than generic systems. Customization features, such as the ability to prioritize certain factors (e.g., cost or sustainability in product recommendations), further enhance user satisfaction by aligning outputs with their unique needs. In healthcare, personalized AI systems that incorporate a patient's medical history and specific conditions are better received than generalized tools. By tailoring outputs to the user's preferences and circumstances, algorithms can foster greater engagement and trust.
 
== Algorithm appreciation ==