=== Healthcare ===
Patients often resist AI-based medical diagnostics and treatment recommendations, despite evidence that such systems can be highly accurate. Patients tend to trust human doctors more, perceiving AI systems as lacking empathy and the ability to handle nuanced emotional interactions. Negative emotions become more likely as AI takes a larger role in healthcare decision-making.<ref>{{Cite journal |
=== Recruitment and Employment ===
Algorithmic agents used in recruitment are often perceived as less capable of fulfilling relational roles, such as providing emotional support or career development. While algorithms are trusted for transactional tasks like salary negotiations, human recruiters are favored for relational tasks due to their perceived ability to connect on an emotional level.<ref>{{Cite journal |
=== Consumer Behavior ===
Consumers generally react less favorably to decisions made by algorithms compared to those made by humans. For example, when a decision results in a positive outcome, consumers find it harder to internalize the result if it comes from an algorithm. Conversely, negative outcomes tend to elicit similar responses regardless of whether the decision was made by an algorithm or a human.<ref name=":2">{{Cite journal |
=== Marketing and Content Creation ===
In the marketing ___domain, AI influencers can be as effective as human influencers in promoting products. However, trust levels remain lower for AI-driven recommendations, as consumers often perceive human influencers as more authentic. Similarly, participants tend to favor content explicitly identified as human-generated over AI-generated, even when the quality of AI content matches or surpasses human-created content.<ref>{{Cite journal |
=== Cultural Differences ===
Cultural norms play a significant role in algorithm aversion. In individualistic cultures, such as in the United States, there is a higher tendency to reject algorithmic recommendations due to an emphasis on autonomy and personalized decision-making. In contrast, collectivist cultures, such as in India, exhibit lower aversion, particularly when familiarity with algorithms is higher or when decisions align with societal norms.<ref name=":4">{{Cite journal |
=== Moral and Emotional Decisions ===
Algorithms are less trusted for tasks involving moral or emotional judgment, such as ethical dilemmas or empathetic decision-making. For example, individuals may reject algorithmic decisions in scenarios where they perceive moral stakes to be high, such as autonomous vehicle decisions or medical life-or-death situations.<ref>{{Cite journal |
== Mechanisms Underlying Algorithm Aversion ==
==== Perceived Responsibility ====
Individuals often feel a heightened sense of accountability when using algorithmic advice compared to human advice. This stems from the belief that, if a decision goes wrong, they will be solely responsible because an algorithm lacks the capacity to share blame. By contrast, decisions made with human input are perceived as more collaborative, allowing for shared accountability. For example, users are less likely to rely on algorithmic recommendations in high-stakes domains like healthcare or financial advising, where the repercussions of errors are significant.<ref>{{Cite journal |
==== Locus of Control ====
People with an internal locus of control, who believe they have direct influence over outcomes, are more reluctant to trust algorithms. They may perceive algorithmic decision-making as undermining their autonomy, preferring human input that feels more modifiable or personal. Conversely, individuals with an external locus of control, who attribute outcomes to external forces, may accept algorithmic decisions more readily, viewing algorithms as neutral and effective tools. This tendency is particularly evident in decision-making contexts where users seek to maintain agency.<ref name=":5">{{Cite journal |
==== Neuroticism ====
Neurotic individuals are more prone to anxiety and fear of uncertainty, making them less likely to trust algorithms. This aversion may be fueled by concerns about the perceived "coldness" of algorithms or their inability to account for nuanced emotional factors. For example, in emotionally sensitive tasks like healthcare or recruitment, neurotic individuals may reject algorithmic inputs in favor of human recommendations, even when the algorithm performs equally well or better.<ref>{{Cite journal |
=== Task-Related Mechanisms ===
==== Task Complexity and Risk ====
The nature of the task significantly influences algorithm aversion. For routine and low-risk tasks, such as recommending movies or predicting product preferences, users are generally comfortable relying on algorithms. However, for high-stakes or subjective tasks, such as making medical diagnoses, financial decisions, or moral judgments, algorithm aversion increases. Users perceive these tasks as requiring empathy, ethical reasoning, or nuanced understanding—qualities that they believe algorithms lack. This disparity highlights why algorithms are better received in technical fields (e.g., logistics) but face resistance in human-centric domains.<ref name=":6">{{Cite journal |
==== Outcome Valence ====
The valence of a decision's outcome shapes how people respond to its source. Positive outcomes are harder for people to internalize when they come from an algorithm, whereas negative outcomes tend to elicit similar reactions whether the decision-maker is an algorithm or a human.<ref name=":2" />
==== Individualism vs. Collectivism ====
Cultural norms significantly shape attitudes toward algorithmic decision-making. In individualistic cultures, such as the United States, people value autonomy and personalization, making them more skeptical of algorithmic systems that they perceive as impersonal or rigid. Conversely, in collectivist cultures like India, individuals are more likely to accept algorithmic recommendations, particularly when these systems align with group norms or social expectations. Familiarity with algorithms in collectivist societies also reduces aversion, as users view algorithms as tools to reinforce societal goals rather than threats to individual autonomy.<ref name=":7">{{Cite journal |
==== Cultural Influences ====
==== Mode of Delivery ====
The format in which algorithms present their recommendations significantly affects user trust. Systems that use conversational or audio interfaces are generally more trusted than those relying solely on textual outputs, as they create a sense of human-like interaction.<ref>{{Citation |
==== Presentation Style ====
== Proposed Methods to Overcome Algorithm Aversion ==
Algorithms are often capable of outperforming humans or performing tasks much more cost-effectively.<ref>{{Cite journal |
=== Human-in-the-loop ===
Keeping a human decision-maker in the loop, with the authority to review, adjust, or override an algorithm's output, can reduce aversion: users retain a sense of agency and shared responsibility while still benefiting from the algorithm's accuracy.
=== User training ===
Familiarizing users with algorithms through training can significantly reduce aversion, especially for those who are unfamiliar or skeptical. Training programs that simulate real-world interactions with algorithms allow users to see their capabilities and limitations firsthand. For instance, healthcare professionals using diagnostic AI systems can benefit from hands-on training that demonstrates how the system arrives at recommendations and how to interpret its outputs. Such training helps bridge knowledge gaps and demystifies algorithms, making users more comfortable with their use. Furthermore, repeated interactions and feedback loops help users build trust in the system over time. Financial incentives, such as rewards for accurate decisions made with the help of algorithms, have also been shown to encourage users to engage more readily with these systems.<ref>{{Cite journal |
=== Incorporating User Control ===
Giving users even a limited ability to modify an algorithm's output increases their willingness to rely on it, as the added sense of control offsets the perceived loss of autonomy.
== Algorithm appreciation ==
Studies do not consistently show people demonstrating [[bias]] against algorithms; some find the opposite, with people preferring advice from an algorithm over advice from a human. This effect is called ''algorithm appreciation''.<ref>{{Cite journal |
For example, customers are more likely to express initial interest to human sales agents than to automated sales agents, but less likely to provide contact information to the human agents. This is attributed to "lower levels of performance expectancy and effort expectancy associated with human sales agents versus automated sales agents".<ref>{{Cite journal |
== References ==
{{Reflist}}