In August 2025, the Wikipedia community created a policy that allows users to nominate suspected AI-generated articles for [[speedy deletion]]. Editors usually recognize AI-generated articles by their citations, which are often unrelated to the subject of the article or fabricated outright. The wording of an article is also used to identify AI writing. For example, if an article uses language that reads like an [[LLM]] response to a user, such as "Here is your Wikipedia article on" or "Up to my last training update", the article is typically tagged for speedy deletion.<ref name=":0"/><ref>{{Cite web |last=Maiberg |first=Emanuel |date=August 5, 2025 |title=Wikipedia Editors Adopt 'Speedy Deletion' Policy for AI Slop Articles |url=https://www.404media.co/wikipedia-editors-adopt-speedy-deletion-policy-for-ai-slop-articles/ |access-date= |website=[[404 Media]] |language=en}}</ref> Other signs of AI use include excessive use of [[em dashes]], overuse of the word "moreover", promotional language describing something as "breathtaking", and formatting issues such as curly [[Quotation mark|quotation marks]] instead of straight ones. During the discussion on implementing the speedy deletion policy, one user, an article reviewer, said that they are "flooded non-stop with horrendous drafts" created using AI.
Other users said that AI-generated articles contain large amounts of "lies and fake references" and that fixing these issues takes a significant amount of time.<ref>{{Cite web |last=Roth |first=Emma |date=August 8, 2025 |title=How Wikipedia is fighting AI slop content |url=https://www.theverge.com/report/756810/wikipedia-ai-slop-policies-community-speedy-deletion |url-access=subscription |archive-url=https://web.archive.org/web/20250810012316/https://www.theverge.com/report/756810/wikipedia-ai-slop-policies-community-speedy-deletion |archive-date=August 10, 2025 |access-date= |website=[[The Verge]] |language=en-US}}</ref><ref>{{Cite web |last=Gills |first=Drew |date=August 8, 2025 |title=Read this: How Wikipedia identifies and removes AI slop |url=https://www.avclub.com/wikipedia-ai-slop-read-this |access-date= |website=[[AV Club]] |language=en-US}}</ref>
Ilyas Lebleu, founder of WikiProject AI Cleanup, said that
=== Hoaxes and malicious AI use ===
In 2023, researchers discovered that ChatGPT frequently fabricates information and invents nonexistent articles for its users. At the time, the community deemed an outright ban on AI "too harsh".<ref>{{Cite web |last=Woodcock |first=Claire |date=May 2, 2023 |title=AI Is Tearing Wikipedia Apart |url=https://www.vice.com/en/article/ai-is-tearing-wikipedia-apart/ |archive-url=https://web.archive.org/web/20241004054831/https://www.vice.com/en/article/ai-is-tearing-wikipedia-apart/ |archive-date=October 4, 2024 |website=[[Vice Magazine]]}}</ref><ref>{{Cite web |last=Harrison |first=Stephen |date=August 24, 2023 |title=Wikipedia Will Survive A.I. |url=https://slate.com/technology/2023/08/wikipedia-artificial-intelligence-threat.html |website=[[Slate Magazine]]}}</ref> AI has been deliberately used to create hoax articles on Wikipedia. For example, an in-depth, 2,000-word article about an Ottoman fortress that never existed was found by Ilyas Lebleu and
AI has been used on Wikipedia to advocate for certain political viewpoints in articles covered by [[Contentious topics on Wikipedia|contentious topic]] guidelines. One instance showed a banned editor using AI to engage in [[edit wars]] and manipulate [[Albanian history]]-related articles. Other instances included users generating articles about political movements or weapons, but dedicating the majority of the content to a different subject, such as by pointedly referencing [[JD Vance]] or [[Volodymyr Zelenskyy|Volodymyr Zelensky]].<ref>{{Cite web |last1=Brooks |first1=Creston |last2=Eggert |first2=Samuel |last3=Peskoff |first3=Dennis |date=October 7, 2024 |title=The Rise of AI-Generated Content in Wikipedia |url=https://arxiv.org/html/2410.08044v1 |access-date= |website=[[ArXiv]] |language=en}}</ref>
In November 2023, Wikipedia co-founder [[Jimmy Wales]] said that AI is not a reliable source and that he would not use ChatGPT to write Wikipedia articles. In July 2025, he proposed using LLMs to provide customized default feedback when drafts are rejected.<ref>{{Cite web |last=Maiberg |first=Emanuel |date=August 21, 2025 |title=Jimmy Wales Says Wikipedia Could Use AI. Editors Call It the 'Antithesis of Wikipedia' |url=https://www.404media.co/jimmy-wales-wikipedia-ai-chatgpt/ |website=404 Media}}</ref>
[[Wikimedia Foundation]] product director Marshall Miller said that WikiProject AI Cleanup helps keep the site's content neutral and reliable, while AI makes it easier to create low-quality content. When interviewed by [[404 Media]], Ilyas Lebleu described speedy deletion as a "band-aid" for the most serious instances of AI use, and said that the broader problem of AI use will continue.
[[File:Models of high-quality language data – (a) Composition of high-quality datasets - The Pile (left), PaLM (top-right), MassiveText (bottom-right).png|thumb|Wikipedia datasets are widely used for training AI models.<ref>{{cite arXiv |eprint=2211.04325 |class=cs.LG |first1=Pablo |last1=Villalobos |first2=Anson |last2=Ho |title=Will we run out of data? Limits of LLM scaling based on human-generated data |date=2022 |last3=Sevilla |first3=Jaime |last4=Besiroglu |first4=Tamay |last5=Heim |first5=Lennart |last6=Hobbhahn |first6=Marius}}</ref>]]
{{Commons category|Wikimedia projects and AI}}