Artificial intelligence in Wikimedia projects

{{Short description|none}}
 
[[Artificial intelligence]] is used in [[Wikipedia]] and other [[Wikimedia projects]] for the purpose of developing those projects.<ref>{{cite web |last1=Marr |first1=Bernard |title=The Amazing Ways How Wikipedia Uses Artificial Intelligence |url=https://www.forbes.com/sites/bernardmarr/2018/08/17/the-amazing-ways-how-wikipedia-uses-artificial-intelligence/#7cbdda802b9d |website=Forbes |language=en |date=17 August 2018}}</ref><ref name="NYT-20230718">{{cite news |last=Gertner |first=Jon |title=Wikipedia's Moment of Truth - Can the online encyclopedia help teach A.I. chatbots to get their facts right — without destroying itself in the process? + comment |url=https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-chatgpt.html |date=18 July 2023 |work=[[The New York Times]] |url-status=bot: unknown |archiveurl=https://web.archive.org/web/20230718233916/https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-chatgpt.html#permid=126389255 |archivedate=18 July 2023 |accessdate=19 July 2023 }}</ref> Human and [[Internet bot|bot]] interaction in Wikimedia projects is routine and iterative.<ref>{{cite arXiv |last1=Piscopo |first1=Alessandro |title=Wikidata: A New Paradigm of Human-Bot Collaboration? |date=1 October 2018 |eprint=1810.00931|class=cs.HC }}</ref>
 
Various articles on [[Wikipedia]] have been created entirely by, or with the help of, [[artificial intelligence]]. AI-generated content can be detrimental to Wikipedia when it is unreliable or contains fake citations.
 
To address the issue of low-quality AI-generated content, the [[Wikipedia community]] created a [[WikiProject]] named [[Wikipedia:WikiProject AI Cleanup|AI Cleanup]] in 2023. In August 2025, Wikipedia adopted a policy that allows editors to nominate suspected AI-generated articles for [[speedy deletion]].
 
== Using artificial intelligence for Wikipedia ==
 
=== Beginnings ===
The use of [[Artificial intelligence|AI]] to generate articles on Wikipedia started with the rise in popularity of chatbots like [[ChatGPT]] in 2022. In 2023, the Wikipedia community noticed the problem and created a special [[WikiProject]] named [[Wikipedia:WikiProject AI Cleanup|AI Cleanup]] to remove AI-generated content from Wikipedia. The project wrote its own guidelines to help users spot AI content. As of 2025, it had compiled a list of over 500 articles pending review for suspected AI writing. Wikipedia has also created a special template for suspected AI-generated articles, which was used in articles like [[Danish nationalism]] and [[Natalie Portman]]. In October 2024, a study by [[Princeton University]] found that about 5% of 3,000 newly created articles on the [[English Wikipedia]] (created in August 2024) had been written using AI. The study said that some of the AI articles were on innocuous topics and that AI had likely only been used to assist in writing. In other articles, AI had been used to promote [[Business|businesses]] or political interests.
 
On December 6, 2022, a Wikipedia contributor named Pharos created the article "[[Artwork title]]" in his sandbox, declaring that he had used ChatGPT to experiment with it and would extensively modify it. He noted that the text needed to be toned down for neutrality. Another editor tagged the article as "[[original research]]", arguing that it was initially unsourced AI-generated content that was sourced afterwards, instead of being based on reliable sources from the outset. Another editor who experimented with this early version of ChatGPT said that ChatGPT's overview of the topic was decent, but that the citations were fabricated. The [[Wiki Education Foundation]] reported that some experienced editors found AI useful for starting drafts or creating new articles. It said that ChatGPT "knows" what Wikipedia articles look like and can easily generate one written in the style of Wikipedia, but warned editors that ChatGPT had a tendency to use promotional language. Miguel García, a Wikipedia editor from Spain, said that the number of AI-generated articles on the site peaked when ChatGPT was originally launched, and that the rate has since stabilized due to the community's efforts to combat it. He said that the majority of articles with no sources are deleted instantly or nominated for deletion.
 
=== Signs of AI use and speedy deletion ===
In August 2025, the Wikipedia community created a policy that allows users to nominate suspected AI-generated articles for [[speedy deletion]]. Editors usually recognize AI-generated articles because they cite sources that are unrelated to the subject of the article, or are entirely fabricated. The wording of articles is also used to recognize AI writing. For example, if an article uses language that reads like an [[LLM]] response to a user, such as "Here is your Wikipedia article on" or "Up to my last training update", the article is typically tagged for speedy deletion. Other signs of AI use include excessive use of [[em dashes]], overuse of the word "moreover", promotional language that describes something as "breathtaking", and formatting issues such as using curly [[Quotation mark|quotation marks]] instead of straight ones. During the discussion on implementing the speedy deletion policy, one user, an article reviewer, said that he is "flooded non-stop with horrendous drafts" created using AI. Other users said that AI articles contain a large amount of "lies and fake references" and that fixing the issues takes a significant amount of time.
 
Ilyas Lebleu, founder of WikiProject AI Cleanup, said that he and his fellow editors noticed a pattern of unnatural writing that they managed to connect to ChatGPT. He added that AI is able to mass-produce content that sounds real while being completely fake, leading to the creation of [[hoax]] articles on Wikipedia that he was tasked with deleting. Wikipedia created a guide, "[[Wikipedia:Signs of AI writing|Signs of AI writing]]", on how to spot AI-generated text. The guide states that AI inserts editorial commentary into its content, listing phrases like "it's important to note", "it is worth", and "no discussion would be complete without" as examples. It also says that AI uses phrases like "In summary", "In conclusion", and "Overall" at the end of articles. Fabricated sources are another major sign of AI use: AI is known to produce hallucinated sources with fake DOIs or ISBNs, or broken [[HTTP 404|404]] links. The guide also notes that ChatGPT is known to insert broken code when adding external links to articles, leaving "turn0search0" in its links.
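Many of the textual tells described above lend themselves to simple pattern matching. The following is a minimal, hypothetical Python sketch — not Wikipedia's actual tooling; the pattern list and function name are illustrative assumptions — showing how a reviewer might flag text containing some of the phrases and artifacts the guide lists:

```python
import re

# Illustrative tells drawn from "Signs of AI writing" (assumed, incomplete list).
AI_TELLS = [
    r"up to my last training update",
    r"here is your Wikipedia article",
    r"\bmoreover\b",
    r"\bin summary\b|\bin conclusion\b",
    r"it['’]s important to note",
    r"turn0search0",   # broken ChatGPT link artifact
    r"[“”]",           # curly quotation marks
]

def flag_ai_tells(text: str) -> list[str]:
    """Return the patterns that match the given text (case-insensitive)."""
    return [p for p in AI_TELLS if re.search(p, text, re.IGNORECASE)]

print(flag_ai_tells("Moreover, it's important to note that the site is breathtaking."))
```

A real detector would weigh many weak signals together rather than treat any single match as proof, since phrases like "moreover" also occur in ordinary human writing.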
 
=== Hoaxes and malicious AI use ===
In 2023, researchers discovered that ChatGPT can unintentionally fabricate information and make up fake articles for its users. At that time, a ban on AI was deemed "too harsh" by the community. AI was deliberately used to create various hoax articles on Wikipedia. For example, Ilyas Lebleu and his team exposed a well-written 2,000-word article about an Ottoman fortress that never existed. The content of the article, while completely wrong, was difficult to debunk without knowledge of 13th-century Ottoman architecture. Another example involved a user adding AI-generated misinformation to [[Estola albosignata|''Estola albosignata'']], a species of beetle. The paragraph and its citation looked normal; however, the cited source was not related to the subject at all and was a French-language article about an unrelated species of crab.
 
AI has been used on Wikipedia to advocate for certain political viewpoints in articles covered by [[Contentious topics on Wikipedia|contentious topic]] guidelines. In one instance, a banned editor used AI to engage in [[edit wars]] and manipulate articles related to [[Albanian history]]. In other instances, users generated articles about political movements or weapons but dedicated the majority of the content to a different subject, such as covering [[JD Vance]] or [[Volodymyr Zelenskyy|Volodymyr Zelensky]] in a non-neutral way. Ilyas Lebleu said that there are many reasons why some users add AI-generated content to Wikipedia, including deliberate vandalism with the intention of creating a hoax, self-promotion, or a mistaken belief that AI-generated content is accurate.
 
=== ORES ===
A 2016 research project called "One Hundred Year Study on Artificial Intelligence" named Wikipedia as a key early project for understanding the interplay between artificial intelligence applications and human engagement.<ref>{{cite web |title=AI Research Trends - One Hundred Year Study on Artificial Intelligence (AI100) |url=https://ai100.stanford.edu/2016-report/section-i-what-artificial-intelligence/ai-research-trends |website=ai100.stanford.edu |language=en}}</ref>
 
There is a concern about the lack of [[Creative Commons license#Attribution|attribution]] to Wikipedia articles in large language models like ChatGPT.<ref name="nyt180724" /><ref>{{cite news |date=28 March 2025 |title=Wikipedia Built the Internet's Brain. Now Its Leaders Want Credit. |url=https://observer.com/2025/03/wikimedia-foundation-execs-speak-on-ai-scraping-attribution-and-wikipedias-future/ |access-date=2 April 2025 |work=Observer |quote=Attributions, however, remain a sticking point. Citations not only give credit but also help Wikipedia attract new editors and donors. “If our content is getting sucked into an LLM without attribution or links, that’s a real problem for us in the short term.”}}</ref> While Wikipedia's licensing policy lets anyone use its texts, including in modified forms, it does have the condition that credit is given, implying that using its contents in answers by AI models without clarifying the sourcing may violate its terms of use.<ref name="nyt180724" />

== Reactions ==
In November 2023, Wikipedia co-founder [[Jimmy Wales]] said that AI is not a reliable source and that he is not going to use ChatGPT to write Wikipedia articles. In July 2025, he proposed the use of LLMs to provide customized default feedback when drafts are rejected.
 
[[Wikimedia Foundation]] product director Marshall Miller said that WikiProject AI Cleanup keeps the site's content neutral and reliable, and that AI enables the creation of low-quality content. When interviewed by [[404 Media]], Ilyas Lebleu described speedy deletion as a "band-aid" for the more serious instances of AI use, and said that the bigger problem of AI use will continue. He also noted that some AI articles are discussed for a week before being deleted.
 
== See also ==
 
* [[AI slop]]
 
[[File:Models of high-quality language data – (a) Composition of high-quality datasets - The Pile (left), PaLM (top-right), MassiveText (bottom-right).png|thumb|Datasets of Wikipedia are widely used for training AI models.<ref>{{cite arXiv |eprint=2211.04325 |class=cs.LG |first1=Pablo |last1=Villalobos |first2=Anson |last2=Ho |title=Will we run out of data? Limits of LLM scaling based on human-generated data |date=2022 |last3=Sevilla |first3=Jaime |last4=Besiroglu |first4=Tamay |last5=Heim |first5=Lennart |last6=Hobbhahn |first6=Marius}}</ref>]]
{{Commons category|Wikimedia projects and AI}}