Alexander regularly writes about advances in [[artificial intelligence]] and emphasizes the importance of [[AI safety]] research.<ref>{{cite book|last=Miller|first=James D.|chapter=Reflections on the Singularity Journey|date=2017|chapter-url=https://link.springer.com/10.1007/978-3-662-54033-6_13|title=The Technological Singularity|series=The Frontiers Collection|volume=|pages=223–228|editor-last=Callaghan|editor-first=Victor|archive-url=https://web.archive.org/web/20200909014324/https://link.springer.com/chapter/10.1007%2F978-3-662-54033-6_13|place=Berlin, Heidelberg|publisher=Springer Berlin Heidelberg|language=en|doi=10.1007/978-3-662-54033-6_13|isbn=978-3-662-54031-2|archive-date=9 September 2020|editor2-last=Miller|editor2-first=James|editor3-last=Yampolskiy|editor3-first=Roman|editor4-last=Armstrong|editor4-first=Stuart}}</ref>
In the long essay "Meditations on Moloch", he analyzes [[Game theory|game-theoretic]] scenarios of cooperation failure, such as the [[prisoner's dilemma]] and the [[tragedy of the commons]], that underlie many of humanity's problems, and argues that [[Existential risk from artificial intelligence|AI risks]] should be considered in this context.<ref>{{multiref2