=== Recruitment and Employment ===
Algorithmic agents used in recruitment are often perceived as less capable than human recruiters of fulfilling relational roles, such as providing emotional support or career development. While algorithms are trusted for transactional tasks like salary negotiations, human recruiters are favored for relational tasks because of their perceived ability to connect on an emotional level.<ref>{{Cite journal |last1=Tomprou |first1=Maria |last2=Lee |first2=Min Kyung |date=2022-01-01 |title=Employment relationships in algorithmic management: A psychological contract perspective |url=https://linkinghub.elsevier.com/retrieve/pii/S0747563221003204 |journal=Computers in Human Behavior |volume=126 |pages=106997 |doi=10.1016/j.chb.2021.106997 |issn=0747-5632|doi-access=free }}</ref>
 
=== Consumer Behavior ===
 
=== Marketing and Content Creation ===
In the marketing ___domain, AI influencers can be as effective as human influencers in promoting products. However, trust levels remain lower for AI-driven recommendations, as consumers often perceive human influencers as more authentic. Similarly, participants tend to favor content explicitly identified as human-generated over AI-generated content, even when the AI-generated content matches or surpasses human-created content in quality.<ref>{{Cite journal |last1=Sands |first1=Sean |last2=Campbell |first2=Colin L. |last3=Plangger |first3=Kirk |last4=Ferraro |first4=Carla |date=2022-01-01 |title=Unreal influence: leveraging AI in influencer marketing |url=https://www.emerald.com/insight/content/doi/10.1108/ejm-12-2019-0949/full/html |journal=European Journal of Marketing |volume=56 |issue=6 |pages=1721–1747 |doi=10.1108/EJM-12-2019-0949 |issn=0309-0566}}</ref><ref name=":3">{{Cite journal |last1=Zhang |first1=Yunhao |last2=Gosline |first2=Renée |date=January 2023 |title=Human favoritism, not AI aversion: People's perceptions (and bias) toward generative AI, human experts, and human–GAI collaboration in persuasive content generation |url=https://www.cambridge.org/core/journals/judgment-and-decision-making/article/human-favoritism-not-ai-aversion-peoples-perceptions-and-bias-toward-generative-ai-human-experts-and-humangai-collaboration-in-persuasive-content-generation/419C4BD9CE82673EAF1D8F6C350C4FA8 |journal=Judgment and Decision Making |language=en |volume=18 |pages=e41 |doi=10.1017/jdm.2023.37 |issn=1930-2975|doi-access=free }}</ref>
 
=== Cultural Differences ===
Cultural norms play a significant role in algorithm aversion. In individualistic cultures, such as in the United States, there is a higher tendency to reject algorithmic recommendations due to an emphasis on autonomy and personalized decision-making. In contrast, collectivist cultures, such as in India, exhibit lower aversion, particularly when familiarity with algorithms is higher or when decisions align with societal norms.<ref name=":7">{{Cite journal |last1=Liu |first1=Nicole Tsz Yeung |last2=Kirshner |first2=Samuel N. |last3=Lim |first3=Eric T. K. |date=2023-05-01 |title=Is algorithm aversion WEIRD? A cross-country comparison of individual-differences and algorithm aversion |url=https://linkinghub.elsevier.com/retrieve/pii/S0969698923000061 |journal=Journal of Retailing and Consumer Services |volume=72 |pages=103259 |doi=10.1016/j.jretconser.2023.103259 |hdl=1959.4/unsworks_82995 |issn=0969-6989|hdl-access=free }}</ref>
 
=== Moral and Emotional Decisions ===
 
==== Locus of Control ====
People with an internal locus of control, who believe they have direct influence over outcomes, are more reluctant to trust algorithms. They may perceive algorithmic decision-making as undermining their autonomy, preferring human input that feels more modifiable or personal. Conversely, individuals with an external locus of control, who attribute outcomes to external forces, may accept algorithmic decisions more readily, viewing algorithms as neutral and effective tools. This tendency is particularly evident in decision-making contexts where users seek to maintain agency.<ref name=":5">{{Cite journal |last1=Mahmud |first1=Hasan |last2=Islam |first2=A. K. M. Najmul |last3=Ahmed |first3=Syed Ishtiaque |last4=Smolander |first4=Kari |date=2022-02-01 |title=What influences algorithmic decision-making? A systematic literature review on algorithm aversion |url=https://linkinghub.elsevier.com/retrieve/pii/S0040162521008210 |journal=Technological Forecasting and Social Change |volume=175 |pages=121390 |doi=10.1016/j.techfore.2021.121390 |issn=0040-1625|doi-access=free }}</ref>
 
==== Neuroticism ====
 
==== Mode of Delivery ====
The format in which algorithms present their recommendations significantly affects user trust. Systems that use conversational or audio interfaces are generally more trusted than those relying solely on textual outputs, as they create a sense of human-like interaction.<ref>{{Citation |last1=Wischnewski |first1=Magdalena |last2=Krämer |first2=Nicole |title=Can AI Reduce Motivated Reasoning in News Consumption? Investigating the Role of Attitudes Towards AI and Prior-Opinion in Shaping Trust Perceptions of News |date=2022 |work=HHAI2022: Augmenting Human Intellect |pages=184–198 |url=https://ebooks.iospress.nl/doi/10.3233/FAIA220198 |access-date=2024-11-18 |publisher=IOS Press |doi=10.3233/faia220198 |series=Frontiers in Artificial Intelligence and Applications |isbn=978-1-64368-308-9 |doi-access=free }}</ref>
 
==== Presentation Style ====