Detox is a project to prevent users from posting unkind comments in Wikimedia community discussions.<ref>{{Cite web |title=Research:Detox - Meta |url=https://meta.wikimedia.org/wiki/Research:Detox |website=meta.wikimedia.org |language=en}}</ref> As one part of the Detox project, the Wikimedia Foundation and [[Jigsaw (company)|Jigsaw]] collaborated to use artificial intelligence for basic research and to develop technical solutions{{examples needed|date=April 2023}} to address the problem. In October 2016, those organizations published "Ex Machina: Personal Attacks Seen at Scale", describing their findings.<ref>{{Cite book |pages=1391–1399 |doi=10.1145/3038912.3052591 |arxiv=1610.08914 |year=2017 |last1=Wulczyn |first1=Ellery |last2=Thain |first2=Nithum |last3=Dixon |first3=Lucas |title=Proceedings of the 26th International Conference on World Wide Web |chapter=Ex Machina: Personal Attacks Seen at Scale |isbn=9781450349130 |s2cid=6060248}}</ref><ref>{{cite web |author1=Jigsaw |title=Algorithms And Insults: Scaling Up Our Understanding Of Harassment On Wikipedia |url=https://medium.com/jigsaw/algorithms-and-insults-scaling-up-our-understanding-of-harassment-on-wikipedia-6cc417b9f7ff |website=Medium |date=7 February 2017}}</ref> Various popular media outlets reported on the publication of this paper and described the social context of the research.<ref>{{cite news |last1=Wakabayashi |first1=Daisuke |title=Google Cousin Develops Technology to Flag Toxic Online Comments |url=https://www.nytimes.com/2017/02/23/technology/google-jigsaw-monitor-toxic-online-comments.html |journal=The New York Times |language=en |date=23 February 2017}}</ref><ref>{{cite web |last1=Smellie |first1=Sarah |title=Inside Wikipedia's Attempt to Use Artificial Intelligence to Combat Harassment |url=https://motherboard.vice.com/en_us/article/aeyvxz/wikipedia-jigsaw-google-artificial-intelligence |website=Motherboard |publisher=[[Vice Media]] |language=en-us |date=17 February 2017}}</ref><ref>{{cite web |last1=Gershgorn |first1=Dave |title=Alphabet's hate-fighting AI doesn't understand hate yet |url=https://qz.com/918640/alphabets-hate-fighting-ai-doesnt-understand-hate-yet/ |website=Quartz |date=27 February 2017}}</ref>
===Bias reduction===
In August 2018, a company called Primer reported attempting to use artificial intelligence to create Wikipedia articles about women as a way to address [[gender bias on Wikipedia]].<ref>{{Cite magazine |last1=Simonite |first1=Tom |title=Using Artificial Intelligence to Fix Wikipedia's Gender Problem |url=https://www.wired.com/story/using-artificial-intelligence-to-fix-wikipedias-gender-problem/ |magazine=Wired |date=3 August 2018}}</ref><ref>{{cite web |last1=Verger |first1=Rob |title=Artificial intelligence can now help write Wikipedia pages for overlooked scientists |url=https://www.popsci.com/artificial-intelligence-scientists-wikipedia |website=Popular Science |language=en |date=7 August 2018}}</ref>
===ChatGPT===
{{Main|Wikipedia:Large language models}}
In 2022, the public release of [[ChatGPT]] inspired further experimentation with using AI to write Wikipedia articles. It sparked debate about whether, and to what extent, such [[large language model]]s (LLMs) are suitable for the purpose, given their tendency to [[Hallucination (artificial intelligence)|generate plausible-sounding misinformation]], including fake references; to produce prose that is not encyclopedic in tone; and to [[Algorithmic bias|reproduce biases]].<ref>{{Cite web |last=Harrison |first=Stephen |date=2023-01-12 |title=Should ChatGPT Be Used to Write Wikipedia Articles? |url=https://slate.com/technology/2023/01/chatgpt-wikipedia-articles.html |access-date=2023-01-13 |website=Slate Magazine |language=en}}</ref><ref name="vice"/> {{As of|2023|05}}, a draft Wikipedia policy on ChatGPT and similar LLMs recommends that users who are unfamiliar with LLMs avoid using them, owing to these risks as well as the potential for [[libel]] or [[copyright infringement]].<ref name="vice"/> [[Amy Bruckman]], an expert on online communities, told [[Vice (magazine)|''Vice'']] that she believes LLM output could be used as a first draft for Wikipedia content, but that such content would have to be scrutinized by a human editor or it could degrade the overall quality of Wikipedia. She compared potential strategies for remediating low-quality AI content to those for fighting [[vandalism on Wikipedia]].<ref name="vice">{{cite news |last1=Woodcock |first1=Claire |title=AI Is Tearing Wikipedia Apart |url=https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikipedia-apart |work=Vice |date=2 May 2023 |language=en}}</ref>