 
== Examples of algorithm aversion ==
Algorithm aversion has been studied in a wide variety of contexts. For example, people seem to prefer recommendations for jokes from a human rather than from an algorithm,<ref name=":2">{{Cite journal|last1=Yeomans|first1=Michael|last2=Shah|first2=Anuj|last3=Mullainathan|first3=Sendhil|last4=Kleinberg|first4=Jon|date=2019|title=Making sense of recommendations|url=https://onlinelibrary.wiley.com/doi/abs/10.1002/bdm.2118|journal=Journal of Behavioral Decision Making|language=en|volume=32|issue=4|pages=403–414|doi=10.1002/bdm.2118|issn=1099-0771}}</ref> and would rather rely on a human than on an algorithm to predict the number of airline passengers from each US state.<ref name=":3">{{Cite journal|last1=Dietvorst|first1=Berkeley J.|last2=Simmons|first2=Joseph P.|last3=Massey|first3=Cade|date=2015|title=Algorithm aversion: People erroneously avoid algorithms after seeing them err.|url=http://doi.apa.org/getdoi.cfm?doi=10.1037/xge0000033|journal=Journal of Experimental Psychology: General|language=en|volume=144|issue=1|pages=114–126|doi=10.1037/xge0000033|pmid=25401381|issn=1939-2222}}</ref> People also seem to prefer medical recommendations from human doctors rather than from an algorithm.{{Citation needed|date=September 2021}}
 
== Proposed factors affecting algorithm aversion ==
Various frameworks have been proposed to explain the causes for algorithm aversion and techniques or system features that might help reduce aversion.<ref name=":0" /><ref>{{Cite journal|last1=Burton|first1=Jason W.|last2=Stein|first2=Mari-Klara|last3=Jensen|first3=Tina Blegind|date=2020|title=A systematic review of algorithm aversion in augmented decision making|url=https://onlinelibrary.wiley.com/doi/abs/10.1002/bdm.2155|journal=Journal of Behavioral Decision Making|language=en|volume=33|issue=2|pages=220–239|doi=10.1002/bdm.2155|issn=1099-0771}}</ref>
 
=== Decision control (role of the "human in the loop") ===
Algorithms may be used either in an ''advisory'' role (providing advice to a human who will make the final decision) or in a ''delegatory'' role (where the algorithm makes a decision without human supervision). A movie recommendation system providing a list of suggestions would be in an ''advisory'' role, whereas the human driver ''delegates'' the task of steering the car to [[Tesla Autopilot|Tesla's Autopilot]]. Generally, a lack of decision control tends to increase algorithm aversion.
 
=== Perceptions about algorithm capabilities and performance ===
Overall, people tend to judge machines more critically than they do humans.<ref>{{Cite book|last=Hidalgo|first=Cesar|title=How Humans Judge Machines|publisher=[[MIT Press]]|year=2021|isbn=978-0-262-04552-0|___location=Cambridge, MA}}</ref> Several system characteristics or factors have been shown to influence how people evaluate algorithms.
 
==== Algorithm process and the role of system transparency ====
One reason people display resistance to algorithms is a lack of understanding about how the algorithm is arriving at its recommendation.<ref name=":2" /> People also seem to have a better intuition for how another human would make recommendations. Whereas people assume that other humans will account for unique differences between situations, they sometimes perceive algorithms as incapable of considering individual differences and resist the algorithms accordingly.<ref>{{Cite journal|last1=Longoni|first1=Chiara|last2=Bonezzi|first2=Andrea|last3=Morewedge|first3=Carey K|date=2019-05-03|title=Resistance to Medical Artificial Intelligence|url=https://doi.org/10.1093/jcr/ucz013|journal=Journal of Consumer Research|volume=46|issue=4|pages=629–650|doi=10.1093/jcr/ucz013|issn=0093-5301}}</ref>
 
==== Decision domain ====
Expertise in a particular field has been shown to increase algorithm aversion<ref name=":1" /> and reduce use of algorithmic decision rules.<ref>{{Cite journal|date=1986-02-01|title=Factors influencing the use of a decision rule in a probabilistic task|url=https://www.sciencedirect.com/science/article/abs/pii/0749597886900464|journal=Organizational Behavior and Human Decision Processes|language=en|volume=37|issue=1|pages=93–110|doi=10.1016/0749-5978(86)90046-4|issn=0749-5978|last1=Arkes|first1=Hal R.|last2=Dawes|first2=Robyn M.|last3=Christensen|first3=Caryn}}</ref> Overconfidence may partially explain this effect; experts might feel that an algorithm is not capable of the types of judgments they make. Compared to non-experts, experts also have more knowledge of the field and therefore may be more critical of a recommendation. Where a non-expert might accept a recommendation ("The algorithm must know something I don't."), the expert might find specific fault with the algorithm's recommendation ("This recommendation does not account for a particular factor").
 
[[Decision-making]] research has shown that experts in a given field tend to think about decisions differently than non-experts.<ref>{{Citation|last1=Feltovich|first1=Paul J.|title=Studies of Expertise from Psychological Perspectives|date=2006|url=https://www.cambridge.org/core/books/cambridge-handbook-of-expertise-and-expert-performance/studies-of-expertise-from-psychological-perspectives/3A7FF4C6F3426BE751C71EDF84927741|work=The Cambridge Handbook of Expertise and Expert Performance|pages=41–68|editor-last=Ericsson|editor-first=K. Anders|series=Cambridge Handbooks in Psychology|place=Cambridge|publisher=Cambridge University Press|doi=10.1017/cbo9780511816796.004|isbn=978-1-107-81097-6|access-date=2021-09-08|last2=Prietula|first2=Michael J.|last3=Ericsson|first3=K. Anders|editor2-last=Charness|editor2-first=Neil|editor3-last=Feltovich|editor3-first=Paul J.|editor4-last=Hoffman|editor4-first=Robert R.}}</ref> Experts chunk and group information; for example, [[chess]] [[Grandmaster (chess)|grandmasters]] will see opening positions (e.g., the [[Queen's Gambit]] or the [[Bishop's Opening]]) instead of individual pieces on the board. Experts may see a situation as a functional representation (e.g., a doctor could see a trajectory and predicted outcome for a patient instead of a list of medications and symptoms). These differences may also partly account for the increased algorithm aversion seen in experts.
 
==== Culture ====
==== Age ====
Age is a commonly cited factor hypothesized to affect whether people accept algorithmic recommendations: [[Digital natives]] have grown up with technology, whereas digital immigrants adopted it later in life. For example, one study found that trust in an algorithmic financial advisor was lower among older people compared with younger study participants.<ref>{{Cite journal|date=2020-02-01|title=Whose Algorithm Says So: The Relationships Between Type of Firm, Perceptions of Trust and Expertise, and the Acceptance of Financial Robo-Advice|url=https://www.sciencedirect.com/science/article/pii/S1094996819301112|journal=Journal of Interactive Marketing|language=en|volume=49|pages=107–124|doi=10.1016/j.intmar.2019.10.003|issn=1094-9968}}</ref> However, other research has found that algorithm aversion does not vary with age.<ref name=":1" />
 
== Proposed methods to overcome algorithm aversion ==
Algorithms are often capable of outperforming humans or performing tasks much more cost-effectively.<ref name=":3" /><ref name=":2" />
 
=== Human-in-the-loop ===
One way to reduce algorithm aversion is to provide the human decision maker with control over the final decision.
 
=== System transparency ===
Providing explanations about how algorithms work has been shown to reduce aversion. These explanations can take a variety of forms, including about how the algorithm as a whole works, about why it is making a particular recommendation in a specific case, or how confident it is in its recommendation.<ref name=":0" />
 
=== User training ===
Algorithmic recommendations represent a new type of information in many fields. For example, a medical AI diagnosis of a [[Infection|bacterial infection]] is different from a lab test indicating the presence of bacteria.
 
== Algorithm appreciation ==