Patients often resist AI-based medical diagnostics and treatment recommendations, despite evidence of such systems' accuracy. Patients tend to place greater trust in human doctors, perceiving AI systems as lacking empathy and the ability to handle nuanced emotional interactions. Negative emotions become more likely as AI takes on a larger role in healthcare decision-making.<ref>{{Cite journal |last1=Zhou |first1=Yuwei |last2=Shi |first2=Yichuan |last3=Lu |first3=Wei |last4=Wan |first4=Fang |date=2022-05-03 |title=Did Artificial Intelligence Invade Humans? The Study on the Mechanism of Patients' Willingness to Accept Artificial Intelligence Medical Care: From the Perspective of Intergroup Threat Theory |journal=Frontiers in Psychology |language=English |volume=13 |doi=10.3389/fpsyg.2022.866124 |doi-access=free |issn=1664-1078 |pmc=9112914 |pmid=35592172}}</ref>
=== Recruitment and human resources ===
Algorithmic agents used in recruitment are often perceived as less capable of fulfilling relational roles, such as providing emotional support or career development. While algorithms are trusted for transactional tasks like salary negotiations, human recruiters are favored for relational tasks due to their perceived ability to connect on an emotional level.<ref>{{Cite journal |last1=Tomprou |first1=Maria |last2=Lee |first2=Min Kyung |date=2022-01-01 |title=Employment relationships in algorithmic management: A psychological contract perspective |url=https://linkinghub.elsevier.com/retrieve/pii/S0747563221003204 |journal=Computers in Human Behavior |volume=126 |pages=106997 |doi=10.1016/j.chb.2021.106997 |issn=0747-5632|doi-access=free }}</ref>
=== Consumer decisions ===
Consumers generally react less favorably to decisions made by algorithms than to the same decisions made by humans. When a decision leads to a positive outcome, consumers find it harder to internalize (take personal credit for) the result if it comes from an algorithm, whereas negative outcomes tend to elicit similar responses regardless of whether the decision was made by an algorithm or a human.<ref name=":6">{{Cite journal |last1=Yalcin |first1=Gizem |last2=Lim |first2=Sarah |last3=Puntoni |first3=Stefano |last4=van Osselaer |first4=Stijn M.J. |date=August 2022 |title=Thumbs Up or Down: Consumer Reactions to Decisions by Algorithms Versus Humans |url=https://journals.sagepub.com/doi/10.1177/00222437211070016 |journal=Journal of Marketing Research |language=en |volume=59 |issue=4 |pages=696–717 |doi=10.1177/00222437211070016 |issn=0022-2437}}</ref>
=== Marketing and content creation ===
In the marketing ___domain, AI influencers can be as effective as human influencers in promoting products. However, trust levels remain lower for AI-driven recommendations, as consumers often perceive human influencers as more authentic. Similarly, participants tend to favor content explicitly identified as human-generated over AI-generated, even when the quality of AI content matches or surpasses human-created content.<ref>{{Cite journal |last1=Sands |first1=Sean |last2=Campbell |first2=Colin L. |last3=Plangger |first3=Kirk |last4=Ferraro |first4=Carla |date=2022-01-01 |title=Unreal influence: leveraging AI in influencer marketing |url=https://www.emerald.com/insight/content/doi/10.1108/ejm-12-2019-0949/full/html |journal=European Journal of Marketing |volume=56 |issue=6 |pages=1721–1747 |doi=10.1108/EJM-12-2019-0949 |issn=0309-0566}}</ref><ref name=":3">{{Cite journal |last1=Zhang |first1=Yunhao |last2=Gosline |first2=Renée |date=January 2023 |title=Human favoritism, not AI aversion: People's perceptions (and bias) toward generative AI, human experts, and human–GAI collaboration in persuasive content generation |url=https://www.cambridge.org/core/journals/judgment-and-decision-making/article/human-favoritism-not-ai-aversion-peoples-perceptions-and-bias-toward-generative-ai-human-experts-and-humangai-collaboration-in-persuasive-content-generation/419C4BD9CE82673EAF1D8F6C350C4FA8 |journal=Judgment and Decision Making |language=en |volume=18 |pages=e41 |doi=10.1017/jdm.2023.37 |issn=1930-2975|doi-access=free }}</ref>
=== Cultural differences ===
Cultural norms play a significant role in algorithm aversion. In individualistic cultures, such as in the United States, there is a higher tendency to reject algorithmic recommendations due to an emphasis on autonomy and personalized decision-making. In contrast, collectivist cultures, such as in India, exhibit lower aversion, particularly when familiarity with algorithms is higher or when decisions align with societal norms.<ref name=":7">{{Cite journal |last1=Liu |first1=Nicole Tsz Yeung |last2=Kirshner |first2=Samuel N. |last3=Lim |first3=Eric T. K. |date=2023-05-01 |title=Is algorithm aversion WEIRD? A cross-country comparison of individual-differences and algorithm aversion |url=https://linkinghub.elsevier.com/retrieve/pii/S0969698923000061 |journal=Journal of Retailing and Consumer Services |volume=72 |pages=103259 |doi=10.1016/j.jretconser.2023.103259 |hdl=1959.4/unsworks_82995 |issn=0969-6989|hdl-access=free }}</ref>
=== Moral and emotional decisions ===
Algorithms are less trusted for tasks involving moral or emotional judgment, such as ethical dilemmas or empathetic decision-making. For example, individuals may reject algorithmic decisions in scenarios where they perceive moral stakes to be high, such as autonomous vehicle decisions or medical life-or-death situations.<ref>{{Cite journal |last1=Castelo |first1=Noah |last2=Ward |first2=Adrian F. |date=2021-12-20 |title=Conservatism predicts aversion to consequential Artificial Intelligence |journal=PLOS ONE |language=en |volume=16 |issue=12 |pages=e0261467 |doi=10.1371/journal.pone.0261467 |doi-access=free |issn=1932-6203 |pmc=8687590 |pmid=34928989|bibcode=2021PLoSO..1661467C }}</ref>