 
=== Effective altruism ===
In 2017, ''Slate Star Codex'' ranked fourth in a Rethink Charity survey asking [[effective altruism|effective altruists]] how they first heard about effective altruism, after "personal contact", "''[[LessWrong]]''", and "other books, articles and blog posts", and just above "''[[80,000 Hours]]''".<ref>{{Cite web|last1=Mulcahy|first1=Anna|last2=Barnett|first2=Tee|last3=Hurford|first3=Peter|date=17 November 2017|title=EA Survey 2017 Series Part 8: How do People Get Into EA?|url=https://rtcharity.org/ea-survey-2017-part-8/|url-status=live|archive-url=https://web.archive.org/web/20190429135314/https://rtcharity.org/ea-survey-2017-part-8/|archive-date=29 April 2019|access-date=9 September 2020|website=Rethink Charity}}</ref> The blog discusses moral questions and dilemmas relevant to effective altruism, such as moral offsets (the proposition that bad acts can be cancelled out by good acts), the ethical treatment of animals, and the trade-offs of pursuing systemic change rather than charity.<ref>{{multiref2
| {{Cite book|last1=Chan|first1=Rebecca|url=https://www.worldcat.org/oclc/1126149885|title=Oxford Studies in Philosophy of Religion|last2=Crummett|first2=Dustin|date=29 August 2019|publisher=[[Oxford University Press]]|isbn=978-0-19-188069-8|___location=Oxford|pages=|chapter=Moral Indulgences: When Offsetting is Wrong|doi=10.1093/oso/9780198845492.003.0005|oclc=1126149885|chapter-url=https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198845492.001.0001/oso-9780198845492-chapter-5|archive-url=https://web.archive.org/web/20200909014312/https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198845492.001.0001/oso-9780198845492-chapter-5|archive-date=9 September 2020}}
*| {{Cite journal|last=Syme|first=Timothy|date=7 February 2019|title=Charity vs. Revolution: Effective Altruism and the Systemic Change Objection|url=https://link.springer.com/10.1007/s10677-019-09979-5|journal=Ethical Theory and Moral Practice|language=en|volume=22|issue=1|pages=93–120|doi=10.1007/s10677-019-09979-5|s2cid=150872907|issn=1386-2820|archive-url=https://web.archive.org/web/20200909014311/https://link.springer.com/article/10.1007/s10677-019-09979-5|archive-date=9 September 2020|via=}}
*| {{Cite journal|last=Kissel|first=Joshua|date=2017|title=Effective Altruism and Anti-Capitalism: An Attempt at Reconciliation|url=https://www.pdcnet.org/eip/content/eip_2017_0018_0001_0068_0090|journal=Essays in Philosophy|volume=18|issue=1|pages=68–90|doi=10.7710/1526-0569.1573|archive-url=https://web.archive.org/web/20200909014310/https://www.pdcnet.org/eip/content/eip_2017_0018_0001_0068_0090|archive-date=9 September 2020|via=|doi-access=free}}
*| {{Cite journal|last=Foerster|first=Thomas|date=15 January 2019|title=Moral Offsetting|url=https://academic.oup.com/pq/article/69/276/617/5289640|journal=The Philosophical Quarterly|language=en|volume=69|issue=276|pages=617–635|doi=10.1093/pq/pqy068|issn=0031-8094|archive-url=https://web.archive.org/web/20200909014319/https://academic.oup.com/pq/article-abstract/69/276/617/5289640?redirectedFrom=fulltext|archive-date=9 September 2020|via=}}}}</ref>
 
=== Artificial intelligence ===
Alexander regularly wrote about advances in [[artificial intelligence]] and emphasized the importance of [[AI safety]] research.<ref>{{cite book|last=Miller|first=James D.|chapter=Reflections on the Singularity Journey|date=2017|chapter-url=https://link.springer.com/10.1007/978-3-662-54033-6_13|title=The Technological Singularity|series=The Frontiers Collection|volume=|pages=223–228|editor-last=Callaghan|editor-first=Victor|archive-url=https://web.archive.org/web/20200909014324/https://link.springer.com/chapter/10.1007%2F978-3-662-54033-6_13|place=Berlin, Heidelberg|publisher=Springer Berlin Heidelberg|language=en|doi=10.1007/978-3-662-54033-6_13|isbn=978-3-662-54031-2|archive-date=9 September 2020|editor2-last=Miller|editor2-first=James|editor3-last=Yampolskiy|editor3-first=Roman|editor4-last=Armstrong|editor4-first=Stuart}}</ref>
In the long essay "Meditations On Moloch", he analyzes [[Game theory|game-theoretic]] scenarios of cooperation failure, such as the [[prisoner's dilemma]] and the [[tragedy of the commons]], that underlie many of humanity's problems, and argues that AI risk should be considered in this context.<ref>{{multiref2
| {{Cite journal|last=Sotala|first=Kaj|date=2017|title=Superintelligence as a Cause or Cure for Risks of Astronomical Suffering|url=http://www.informatica.si/index.php/informatica/article/view/1877/1098|journal=Informatica|volume=41|pages=389–400|archive-url=https://web.archive.org/web/20200220215810/http://www.informatica.si/index.php/informatica/article/view/1877/1098|archive-date=20 February 2020|via=}}
*| {{Cite web|last=Foley|first=Walter|date=|title=ESSAY // Killing Moloch: Early Pandemic Reflections on Sobriety and Transcendence|url=https://www.rootquarterly.com/killing-moloch|url-status=live|archive-url=https://web.archive.org/web/20200909014343/https://www.rootquarterly.com/killing-moloch|archive-date=9 September 2020|access-date=9 September 2020|website=RQ|language=en-US|quote=The rationality blog Slate Star Codex uses the brutal Canaanite god Moloch, depicted in Allen Ginsberg's 'Howl,' as a metaphor for humanity's repeated failure to coordinate toward a better future}}
*| {{Cite book|last=Ord|first=Toby|url=https://www.worldcat.org/oclc/1143365836|title=The Precipice: Existential Risk and the Future of Humanity|publisher=Bloomsbury Publishing|year=2020|isbn=978-1-5266-0022-6|___location=London|pages=|oclc=1143365836|quote=A second kind of unrecoverable dystopia is a stable civilization that is desired by few (if any) people. It is easy to see how such an outcome could be dystopian, but not immediately obvious how we could arrive at it, or lock it in, if most (or all) people do not want it... ''Meditations on Moloch'' is a powerful exploration of such possibilities...}}}}</ref>
 
=== Controversies and memes ===
 
=== Shiri's scissor ===
In the short story "Sort By Controversial", Alexander introduces the term "Shiri's scissor" or "scissor statement" to describe a statement that has great destructive power because it generates wildly divergent interpretations that fuel conflict and tear people apart. The term has been used to describe controversial topics widely discussed in social media.<ref>{{multiref2
| {{Cite news|last=Lewis|first=Helen|date=19 August 2020|title=The Mythology of Karen|work=The Atlantic|url=https://www.theatlantic.com/international/archive/2020/08/karen-meme-coronavirus/615355/|url-status=live|access-date=9 September 2020|archive-url=https://web.archive.org/web/20200830034317/https://www.theatlantic.com/international/archive/2020/08/karen-meme-coronavirus/615355/|archive-date=30 August 2020|issn=1072-7825}}
*| {{Cite news|last=Douthat|first=Ross|date=22 January 2019|title=The Covington Scissor|language=en-US|work=The New York Times|url=https://www.nytimes.com/2019/01/22/opinion/covington-catholic-march-for-life.html|url-status=live|access-date=9 September 2020|archive-url=https://web.archive.org/web/20200817143905/https://www.nytimes.com/2019/01/22/opinion/covington-catholic-march-for-life.html|archive-date=17 August 2020|issn=0362-4331}}</ref><ref name="The Week 2021">{{cite web | title=3 ways social media pulls us into dumb and dangerous debates | website=The Week | date=2021-08-19 | url=https://theweek.com/culture/1003863/three-ideas-you-need-to-know-to-make-sense-of-our-social-media-dysfunction | access-date=2021-08-24 | archive-date=August 24, 2021 | archive-url=https://web.archive.org/web/20210824191952/https://theweek.com/culture/1003863/three-ideas-you-need-to-know-to-make-sense-of-our-social-media-dysfunction | url-status=live }}</ref>
 
=== Anti-reactionary FAQ ===