Wikipedia:Village pump (policy)

The policy section of the village pump is intended for discussions about already-proposed policies and guidelines, as well as changes to existing ones. Discussions often begin on other pages and are subsequently moved or referenced here to ensure greater visibility and broader participation.
  • If you wish to propose something new that is not a policy or guideline, use Village pump (proposals). Alternatively, for drafting with a more focused group, consider starting the discussion on the talk page of a relevant WikiProject, the Manual of Style, or another relevant project page.
  • For questions about how to apply existing policies or guidelines, refer to one of the many Wikipedia:Noticeboards.
  • If you want to inquire about what the policy is on a specific topic, visit the Help desk or the Teahouse.
  • This is not the place to resolve disputes regarding the implementation of policies. For such cases, consult Wikipedia:Dispute resolution.
  • For proposals for new or amended speedy deletion criteria, use Wikipedia talk:Speedy deletion.

Please see this FAQ page for a list of frequently rejected or ignored proposals. Discussions are automatically archived after 7 days of inactivity.

Reword notice at top of WP:Copyright

Reword the notice at the top of WP:Copyright and restyle the box, as it's not as neat as I'd like and it's not easy to read. See it in my sandbox: User:Waddie96/sandbox2.

Compare:

Important note: The Wikimedia Foundation does not own copyright on Wikipedia article texts or illustrations. It is therefore pointless to email our contact addresses asking for permission to reproduce articles or images, even if rules at your company, school, or organization mandate that you ask web site operators before copying their content. The only Wikipedia content you should contact the Wikimedia Foundation about are the trademarked Wikipedia/Wikimedia logos, which are not freely usable without permission. Permission to reproduce and modify text on Wikipedia has already been granted to anyone anywhere by the authors of individual articles as long as such reproduction and modification complies with licensing terms (see below and Wikipedia:Mirrors and forks for specific terms). Images may or may not permit reuse and modification; the conditions for reproduction of each image should be individually checked. The only exceptions are those cases in which editors have violated Wikipedia policy by uploading copyrighted material without authorization, or with copyright licensing terms which are incompatible with those Wikipedia authors have applied to the rest of Wikipedia content. While such material is present on Wikipedia (before it is detected and removed), it will be a copyright violation to copy it. For permission to use it, one must contact the owner of the copyright of the text or illustration in question; often, but not always, this will be the original author. If you wish to reuse content from Wikipedia, first read the Reusers' rights and obligations section. You should then read the Creative Commons Attribution-ShareAlike 4.0 International License and the GNU Free Documentation License.
 
Important note:
  Please do not contact the Wikimedia Foundation for permission to reuse article text or images.
The Foundation does not own that content and cannot grant permission. This applies even if your company, school, or organization requires permission from website operators before copying material.
  When to contact the Wikimedia Foundation
The only Wikipedia content that requires permission from the Wikimedia Foundation is use of its trademarked logos. These logos are not freely licensed and require explicit written permission for reuse.
For members of the media, see Foundation:Press contacts, others see Wikipedia:Contact us.
  Reusing Wikipedia article text
Permission to reuse and modify article text is already granted under open-content licenses by the original authors, as long as such use complies with the applicable licensing terms, provides proper attribution and licenses any modifications under the same terms.
If you wish to reuse content from Wikipedia, start by reading the Reusers' rights and obligations section. Then review the applicable licenses: the Creative Commons Attribution-ShareAlike 4.0 International License and the GNU Free Documentation License.
  Reusing images
Images on Wikipedia are not automatically covered by the same license as article text. Each image has its own license, which must be reviewed individually. Some images are freely reusable. Others are restricted or non-free and may not be reused or modified without explicit permission from the original author. If an image was uploaded in violation of Wikipedia policy, reusing it could result in copyright infringement.

Diff

<div style="background-color: #ff000010; color: inherit; border: 1px solid; padding: 1ex; margin: 1ex; margin-right: 20em; min-width: 20em;"> '''Important note:''' The Wikimedia Foundation does not own copyright on Wikipedia article texts or illustrations. '''It is therefore pointless to email our contact addresses asking for permission to reproduce articles or images''', even if rules at your company, school, or organization mandate that you ask web site operators before copying their content. The only Wikipedia content you should contact the Wikimedia Foundation about are the trademarked Wikipedia/Wikimedia logos, which are not freely usable without permission. Permission to reproduce and modify text on Wikipedia has already been granted to anyone anywhere by the authors of individual articles as long as such reproduction and modification complies with licensing terms (see below and [[Wikipedia:Mirrors and forks]] for specific terms). Images may or may not permit reuse and modification; the conditions for reproduction of each image should be individually checked. The only exceptions are those cases in which editors have violated Wikipedia policy by uploading copyrighted material without authorization, or with copyright licensing terms which are incompatible with those Wikipedia authors have applied to the rest of Wikipedia content. While such material is present on Wikipedia (before it is detected and removed), it will be a copyright violation to copy it. For permission to use it, one must contact the owner of the copyright of the text or illustration in question; often, but not always, this will be the original author. If you wish to reuse content from Wikipedia, first read the [[#Reusers' rights and obligations|Reusers' rights and obligations]] section. You should then read the [[Wikipedia:Text of the Creative Commons Attribution-ShareAlike 4.0 International License|Creative Commons Attribution-ShareAlike 4.0 International License]] and the [[Wikipedia:Text of the GNU Free Documentation License|GNU Free Documentation License]]. </div>
+
<div class="colored-box " style="background-color: #ffe9e5; color: #333;border-color:#f54739;"><div class="colored-box-title " style="background-color: #ffc8bd;">[[File:OOjs UI icon information-destructive.svg|20px|class=colored-box-title-icon|link=|alt=]] <div class="colored-box-title-text"><span class="tmp-color" style="color:#101418">'''Important note:'''</span></div> </div><div class="colored-box-content"> ; [[File:OOjs UI icon close-ltr.svg | 18px |class=noviewer]]<span class="nowrap"> </span><span class="tmp-color" style="color:#101418">Please do not contact the Wikimedia Foundation for permission to reuse article text or images.</span> : The Foundation does not own that content and cannot grant permission. This applies even if your company, school, or organization requires permission from website operators before copying material. ; [[File:OOjs UI icon check.svg | 18px |class=noviewer]]<span class="nowrap"> </span><span class="tmp-color" style="color:#101418">When to contact the Wikimedia Foundation</span> : The only Wikipedia content that requires permission from the Wikimedia Foundation is use of its trademarked logos. These logos are not freely licensed and require explicit written permission for reuse. : For members of the media, see [[foundationsite:about/press/|Foundation:Press contacts]], others see [[Wikipedia:Contact us]]. ; [[File:OOjs UI icon articles-rtl.svg | 18px |class=noviewer]]<span class="nowrap"> </span><span class="tmp-color" style="color:#101418">Reusing Wikipedia article text</span> : Permission to reuse and modify article text is already granted under open-content licenses by the original authors, as long as such use complies with the applicable licensing terms, provides proper attribution and licenses any modifications under the same terms. : If you wish to reuse content from Wikipedia, start by reading the [[Wikipedia:Copyright#Reusers' rights and obligations|Reusers' rights and obligations]] section. Then review the applicable licenses: the [[Wikipedia:Text of the Creative Commons Attribution-ShareAlike 4.0 International License|Creative Commons Attribution-ShareAlike 4.0 International License]] and the [[Wikipedia:Text of the GNU Free Documentation License|GNU Free Documentation License]]. ; [[File:OOjs UI icon imageGallery-ltr.svg | 18px |class=noviewer]]<span class="nowrap"> </span><span class="tmp-color" style="color:#101418">Reusing images</span> : Images on Wikipedia are not automatically covered by the same license as article text. Each image has its own license, which must be reviewed individually. Some images are freely reusable. Others are restricted or [[Wikipedia:Non-free content|non-free]] and may not be reused or modified without explicit permission from the original author. If an image was uploaded in violation of Wikipedia policy, reusing it could result in copyright infringement. </div> </div>

What do you think? waddie96 ★ (talk) 18:50, 3 August 2025 (UTC)Reply

I think it’s great! Nononsense101 (talk) 19:17, 3 August 2025 (UTC)Reply
The new layout is much easier to comprehend. Hawkeye7 (discuss) 19:26, 3 August 2025 (UTC)Reply
Much better! I'd say it's ready for implementation. Structure is key for improving understandability. —Femke 🐦 (talk) 20:53, 3 August 2025 (UTC)Reply
Excellent improvement, and it breaks up the wall of text that currently exists and should help ensure people actually read it. TiggerJay(talk) 20:59, 3 August 2025 (UTC)Reply
Wow thanks guys! Really appreciate it. Will edit-request the change 😄. waddie96 ★ (talk) 21:08, 3 August 2025 (UTC)Reply
Update for passersby: change has been implemented (I approve). Mrfoogles (talk) 18:36, 7 August 2025 (UTC)Reply
Cute, +2 passed code review step. <3 waddie96 ★ (talk) 05:54, 8 August 2025 (UTC)Reply
Btw! Codex icons just got officially, officially released. So maybe I do this to the Commons Copyrights webpage too? What notices right now are v. important. I'm good at copyediting (I hope). waddie96 ★ (talk) 05:55, 8 August 2025 (UTC)Reply
@Mrfoogles: could you remove the newlines before each ; in the code? there are currently 4 separate description lists for the FAQ because of this, which obviously should not be the case. 91.193.178.220 (talk) 23:30, 19 August 2025 (UTC)Reply
@91.193.178.220 I don't understand why you are pinging me? I'm not involved in this. I just left a note because I clicked through initially, not realizing it had already been done. Mrfoogles (talk) 01:47, 20 August 2025 (UTC)Reply
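A minimal sketch of the wikitext behaviour the IP is describing, with placeholder text rather than the actual notice wording (and assuming the implemented version has blank lines between the entries): a blank line before a ; item closes the current description list, so

  ; First question
  : First answer.

  ; Second question
  : Second answer.

renders as two separate lists, while the contiguous form

  ; First question
  : First answer.
  ; Second question
  : Second answer.

renders as a single description list, which is what a FAQ-style layout expects.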
Much more comprehensive and easier to understand. Ahri Boy (talk) 02:04, 14 August 2025 (UTC)Reply
Much better - thank you! Matt Deres (talk) 12:54, 20 August 2025 (UTC)Reply
Count one more enthusiastic approval. --ColinFine (talk) 16:40, 21 August 2025 (UTC)Reply
A much more aerated design that improves readability. Kudos!--A09|(talk) 14:58, 24 August 2025 (UTC)Reply

RfC: Party affiliation in BLP infoboxes

I am an AMPOL editor and I often see articles with party affiliation assumed in the infobox. For instance, Adriana Kugler's infobox states that she is a Democrat, but no inline citation is provided. On the other hand, Todd Blanche does provide a citation for having registered as a Republican. I am questioning the purpose of this parameter for individuals who are not directly associated with politics—in other words, their profession does not pertain to being a politician or political consultant. "If relevant" in the {{Infobox person}} documentation is rather vague. The misuse of this parameter warrants some action.

The rationale for removing the party affiliation parameter is similar to the RfC over the religion parameter. As was stated then, "This would be consistent with our treatment of sexual orientation and various other things we don't include in infoboxes that are matters which may be nuanced, complex, and frequently controversial. The availability of a parameter encourages editors to fill it, whether they have consensus to do so or not, regardless of instructions in template documentation to gain consensus first; new and anon IP editors generally do not read documentation, they simply see a "missing" parameter at article B that they saw at article A and add it." elijahpepe@wikipedia (he/him) 16:38, 10 August 2025 (UTC)Reply
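
For context, the parameter in question is filled in like this; a minimal sketch with a hypothetical subject and office, not taken from either article named above:

  {{Infobox officeholder
  | name   = Jane Doe
  | office = Secretary of Examples
  | party  = Democratic Party (United States)
  }}

{{Infobox person}} accepts the same | party = field; its documentation is where the "if relevant" guidance quoted above appears.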

Survey (party affiliation in BLP infoboxes)

Question presented: Should the party parameter in infoboxes be deprecated for non-political BLPs?

Discussion (party affiliation in BLP infoboxes)

I would say that unless they are running/elected in a position that requires a political affiliation to be made as part of the election process so that we have a clear basis to document it, this should be left out of the infobox and explained in the prose. Masem (t) 16:41, 10 August 2025 (UTC)Reply
I think that if they are explicitly running as a candidate for/in affiliation with a given party, and this is cited in the prose, then it should be in the infobox. Otherwise it should not be. Thryduulf (talk) 16:56, 10 August 2025 (UTC)Reply
Agree. Talk:Sydney Sweeney § RfC: Sydney Sweeney's political party affiliation was recently WP:SNOW closed with consensus against inclusion, for instance, and editors should not have to waste time dealing with similar disputes on other BLPs whose subjects are not directly associated with politics. Some1 (talk) 17:16, 10 August 2025 (UTC)Reply
I agree too. Too often I see a supposed party affiliation being added to judge infoboxes (Scalia, for example), based not on party registration or self-declaration but on some third party claiming it, and that opinion being treated as an RS. Wehwalt (talk) 17:23, 10 August 2025 (UTC)Reply
I am thinking of many local elections for positions that are intended to be non-partisan, though candidates often assert their party affiliation in their campaign materials, in comparison to partisan offices that usually require party primaries to be elected to. In the latter case, the political affiliation is part of the election process and can't be disputed (making it fair to include it in the infobox). Masem (t) 17:33, 10 August 2025 (UTC)Reply
If someone is explicitly running on a partisan position then that position should be in the infobox. Even if the position is intended to be non-partisan if someone is running on a partisan platform then it is de facto partisan. The job of Wikipedia is to represent what the reality is, not what it is/was intended to be. Thryduulf (talk) 17:57, 10 August 2025 (UTC)Reply
I would be more clear in this comment and state that the infobox should be following what sources say. Brad Schimel was nonpartisan in the Wisconsin Supreme Court election earlier this year, but he was described as a Republican across various outlets. elijahpepe@wikipedia (he/him) 18:27, 10 August 2025 (UTC)Reply
That's exactly a situation where I would *not* include the political affiliation in the infobox, because it's not a requirement for running in that election. In prose, absolutely. It's the same reason we restrict calling out religion in the infobox to only those people whose careers are specifically tied to the church/equivalent body of their religion, though we are free to include any stated religious beliefs in the prose of the article. Masem (t) 04:11, 11 August 2025 (UTC)Reply
Schimel is in an interesting position because he ran as a Republican in the Wisconsin attorney general elections he was involved in. Most of the cases where a politician running for a non-partisan office is clearly affiliated with a party involve prior elections. I was reading a local news report from Wisconsin that made it clear that Schimel was de jure non-partisan. In cases where a candidate explicitly says they are of a certain party but they are running for office in a non-partisan role and they have not run in any other elections where they would be a candidate for that party, then that should not be in the infobox. elijahpepe@wikipedia (he/him) 19:32, 11 August 2025 (UTC)Reply
For a given individual, in some cases it's clear that they're "directly associated with politics," in some cases it's clear they aren't, but there are some people/positions where it's unclear. Todd Blanche is someone I'd put in the third group. He is a political appointee in an ostensibly non-political position, but in this administration, it seems that the position is political as well. I don't think political party is a "nuanced, complex" issue. I also don't think people should be adding this info without an RS. FactOrOpinion (talk) 02:24, 11 August 2025 (UTC)Reply
I would argue that Blanche should not have "Republican" in his infobox. He is not a politician nor a political advisor. The argument that the "position is political" is a reach from what is being suggested here. Wikipedia shouldn't make its own conclusions. In reliable sources, Blanche might be described as a Trump loyalist, but not a Republican, a rather vague term that doesn't encompass Blanche's fealty to the president. The prose can handle describing Blanche properly. elijahpepe@wikipedia (he/him) 04:10, 11 August 2025 (UTC)Reply
  • I think we should limit listings of party affiliation to people who ran for office as a candidate for the party or who served as officials of the party. I have seen party affiliation listed for people who held political office in a position elected on a non-partisan basis; I do not think that is justified. There are of course people who have had multiple party affiliations; if they served in office for multiple parties, that can be listed. One thing to keep in mind is that on occasion a member of one party has appointed people from a different party to their cabinet, so even for cabinet members we cannot assume they share the party of the president. This is even more clear for sub-cabinet positions, and many times so for judges. The same probably applies even more to people who serve in the cabinets of governors. Many mayors and other local officials in the US are elected on a non-partisan basis. John Pack Lambert (talk) 15:57, 11 August 2025 (UTC)Reply
  • I don't think there is a one-size-fits-all solution. There are the obvious cases: a candidate runs as a partisan in a partisan election. And on the other side, there are non-partisans who run in non-partisan elections. But, there are many people who may be known (either in independent sources or verifiable non-independent sources) as a partisan. And, there are individuals who run as a partisan in a partisan election who change parties or disaffiliate at some point after that election. And, for many subjects, there are BLP considerations to account for. --Enos733 (talk) 16:07, 11 August 2025 (UTC)Reply
    Political party is a voluntary act, not something that can be otherwise discerned, even by RSs. Unless there is evidence of voluntary affiliation, through registration to vote or entering a party primary that requires party membership, or being a party official of some kind, I would exclude. RSs without evidence of this are just partisan name callers. Wehwalt (talk) 17:22, 11 August 2025 (UTC)Reply
If this is an RfC then it needs to be formatted and advertised as such. If it's just a discussion, perhaps in advance of a potential RfC, it needs to be relabeled. ElKevbo (talk) 00:30, 12 August 2025 (UTC)Reply
I have done that now. elijahpepe@wikipedia (he/him) 01:43, 12 August 2025 (UTC)Reply
You still haven't formatted it so it will be advertised as an RfC at WP:RFC/A. FactOrOpinion (talk) 02:08, 12 August 2025 (UTC)Reply
  • The two examples provided are political BLPs and the infobox used is {{Infobox officeholder}}, not the generic {{Infobox person}}. Party affiliation is a basic and often uncontroversial piece of information for office holders. I appreciate that there may be more complexity with non-partisan state and local races and political appointees whose personal party affiliation may differ from that of the leader or body who appointed them. I agree with the comments above that someone like Sydney Sweeney should not have their party affiliation listed; if relevant and appropriate per WP:DUE and other applicable standards it can be discussed in the article body. If this is meant to be an WP:RFCBEFORE discussion, which would be helpful, it should be clarified that this does not apply to {{Infobox officeholder}}. I'm not yet convinced party affiliation should be completely deprecated from {{Infobox person}} but I may get there. It is inappropriate for most public figures who are not/have not been office holders and who are not primarily known for political, partisan work. For folks known primarily for and associated with politics but who are not office holders, like commentators and strategists, it may be case-by-case. --MYCETEAE 🍄‍🟫—talk 18:32, 13 August 2025 (UTC)Reply
    It really seems like this is a field that belongs in office holder infoboxes or modules with a start/end, and not for a generic person. I'm really struggling to think of situations where party seems appropriate for a person. Even for non-office holders who are clearly very partisan, it seems like the better way to do it would be to have it in the occupation or known for fields. Something like "occupation: <party> strategist", or "known for: <party> political writings" or similar. That strikes me as more neutral and verifiable for a potentially nuanced fact like affiliation. Driftingdrifting (talk) 17:07, 21 August 2025 (UTC)Reply
  • I think for infoboxes we should only ever list party affiliation for people who held public or political office, and not list it for people whose primary office was a non-partisan elected office. John Pack Lambert (talk) 13:36, 15 August 2025 (UTC)Reply
  • If we did want to add partisan affiliation to a non-political person's infobox, we'd have to weed through what to make of people who are registered with one political party, but have given significant donations to candidates of a different party; or who are registered as (say) a Democrat but who ran for political office on the Green Party ticket 15 years ago; and other combinations like that. I think it gets complicated quickly and it would be better to avoid it altogether. Just askin' for trouble. Novellasyes (talk) 18:06, 19 August 2025 (UTC)Reply

LLM/AI generated proposals?

We had an RFC earlier this year around how to handle LLM/AI generated comments. That resulted in WP:HATGPT after further discussion at WT:TPG. Recently, an editor started a requested move using LLM generated content. I ran that content through two different AI/LLM detection utilities: GPT Zero says "highly confident", and 100% AI generated; Quillbot stated 72% of the text was likely AI generated.

Should HATGPT be expanded to allow for the closure of discussions seeking community input (RFC/VPR/CENT/RFAR/AFD/RM/TFD/RFD/FFD/etc) that are started utilizing content that registers as being majority written by AI?

I was tempted to just start an RFC on this, but if there's alternate proposals or an existing WP:PAG that already covers this, I'm all ears. =) —Locke Coletc 00:38, 12 August 2025 (UTC)Reply

I think this is a good idea. Editors shouldn't be required to waste their time whenever somebody posts LLM slop. voorts (talk/contributions) 00:42, 12 August 2025 (UTC)Reply
I’m still hesitant about suggesting the use of GPTZero except as additional evidence alongside conclusive proof. But otherwise I’m always of the opinion that most use of LLMs in discussion is a bad-faith usage of editor time. Bluethricecreamman (talk) 00:57, 12 August 2025 (UTC)Reply
As I say every time things like this come up, the focus is completely wrong. We really should not care whether it is or isn't AI-generated, that's just wasting everybody's time trying to determine something that is irrelevant. If the proposal is understandable, relevant to the page it's on, isn't just rehashing something that's already been discussed to death (even if you disagree with it) then whether it was written by a human or machine couldn't be less relevant: deal with it as a good-faith contribution unless you have evidence it is not (use of an LLM is not evidence of good faith or of bad faith, it's completely independent of faith). If it is in bad faith, not understandable, trolling, rehashing a settled discussion, etc. then close it to avoid wasting time - this applies regardless of whether it is LLM-generated or human-generated. One of the many advantages of this approach is that it doesn't require any changes to policies or guidelines, because that's how Wikipedia has worked for many years. Thryduulf (talk) 01:00, 12 August 2025 (UTC)Reply
Fair points. voorts (talk/contributions) 01:06, 12 August 2025 (UTC)Reply
"Fair" points perhaps, but not good points. Real editors who could be doing real things to benefit the project should not have to spend their time parsing machine-generate bloat in the hope that it will turn out to be the one-in-fifty case that isn't anywhere from fatuous vacuity to bullshit hallucination. The OP's linked example is an unfortunately poor exemplar of the problem, but anyone who's been active in project space over recent months has seen examples of text which makes you angry that someone expected you to waste your time reading it. You know how you can tell a tsunami is coming because the ocean suddenly recedes, leaving asphyxiating fish flopping on the sand? That's the stage we're at right now. We should respond to AI-generated text the way we'd respond to text in Klingon: tell the author to come back when they can write in English. EEng 01:32, 12 August 2025 (UTC)Reply
EEng's statement above matches my own sentiment exactly, and I support the expansion of HATGPT to cover LLM-generated proposals. Comments in a discussion shouldn't be generated and neither should requests for discussion. fifteen thousand two hundred twenty four (talk) 04:12, 12 August 2025 (UTC)Reply
And take a look at this [1] ANI discussion for a truly epic example of how one AI-drunk incompetent can waste hours of the time of a dozen competent editors. EEng 02:41, 13 August 2025 (UTC)Reply
"AI-drunk" reminds me of drunk driving. Cars a powerful and dangerous tool. We have licenses to operate, competence restrictions (age, eyesight), training courses, rules of the road, consequences for violations, etc.. the alternative is ban cars entirely because horses, public transport and walking work fine. -- GreenC 04:37, 15 August 2025 (UTC)Reply
Except we don't have licenses, competence restrictions, training courses, rules of the road, consequences for violations, etc. for AI. All we have is doofuses careening left and right, knocking down pedestrians, tearing up the pavement, frightening the horses, jamming the roadways with their vehicles actually headed nowhere, and poisoning the air with noxious fumes. So yeah, until those issues can be addressed AI should be banned, and walking, cycling, horses, and public transit -- which have served WP very well to date -- will have to continue serve until AI gets to the point that it can magically transform those lacking competence in English, and/or an understanding of what an encyclopedia is, into useful contributors. EEng 23:39, 21 August 2025 (UTC)Reply
I agree. LLMs are getting better, and we will very soon be unable to spot their output.[2] We need to deal with problem posts and edits the way we always have. Donald Albury 01:43, 12 August 2025 (UTC)Reply
Some guy at some company says his people have trouble recognizing fake videos with their naked eyes. So what? You want to throw in the towel right now based on that? EEng 03:40, 12 August 2025 (UTC)Reply
Eh, I think the GPT-5 fiasco points to LLMs reaching a plateau in terms of "quality". I'm not worried. pythoncoder (talk | contribs) 21:39, 13 August 2025 (UTC)Reply
To some extent I agree, but just because LLMs aren't improving fast doesn't mean they aren't improving at all. Especially the biggest and most obviously identifiable tells remaining are likely to be improved on, even if the strategy of just making bigger and more powerful models no longer leads to large increases in performance. Loki (talk) 22:57, 16 August 2025 (UTC)Reply
If it makes you feel better, pretend we're enforcing our existing policy on meatpuppetry to remove text written by somebodything other than the user account editing it onto the page. —Cryptic 01:57, 12 August 2025 (UTC)Reply
I used to think that that agnosticism about the source of commentary is correct but I have changed my mind. The choice is not between using an imperfect heuristic like "is this LLM-generated" and sedulously evaluating the content of discussions. As others have pointed out, editor time is a limited and precious resource. Since LLMs make it easy for editors who would not have otherwise been able to do so to add superficially plausible content to a discussion, we can expect that volume of content to increase, without a corresponding increase in time to evaluate it. That means our standards for discussion are going to shift in the direction of being more BITEy and intolerant of imperfect contributions regardless of whether we adopt any rule regarding LLMs. If LLMs really do improve to the point of undetectability, as Donald Albury suggests, then we're probably going to be driven into a different set of heuristics with hard and stringently enforced limits on WP:BLUDGEON and so on. But for now, LLMs do seem to have a distinct "register", even if it's hard to prove with certainty, and I think it might be more fair to go after that while we can. Choess (talk) 03:43, 13 August 2025 (UTC)Reply
@Thryduulf As I say every time you make comments like this, I disagree. The source matters and LLM use is evidence of bad faith, because it shows the editor doesn't care, doesn't respect the community's time, and is happy to outsource their brain to a machine. We should have a heavy bias towards proposals created by thinking, breathing humans, not something someone lazily asked a bot to slap together. The former has value, even if the proposal is dumb; the latter is slop and without any worth. Cremastra (talk · contribs) 13:45, 16 August 2025 (UTC)Reply
LLM use is evidence of bad faith, because it shows the editor doesn't care, doesn't respect the community's time, and is happy to outsource their brain to a machine. I couldn't disagree with your rabid assertion (note it's not even an assumption) of bad faith more strongly. LLM use is not evidence of faith, good, bad or otherwise. What matters is the faith of the user, and that is not demonstrated by their using an LLM because some users of LLMs do so in good faith (for example those completely unaware of the attitude of some editors here towards it) while others do it in bad faith. Please stop assuming that everyone who has a different opinion of LLMs than you is inherently out to destroy Wikipedia - they are not. Thryduulf (talk) 13:53, 16 August 2025 (UTC)Reply
You're calling my assertions rabid now? That's a new low. Cremastra (talk · contribs) 13:54, 16 August 2025 (UTC)Reply
If you don't want to be accused of making rabid assertions, don't make them. Thryduulf (talk) 13:56, 16 August 2025 (UTC)Reply
Good grief.
By the way, I don't assume that everyone who has a different opinion of LLMs than you is inherently out to destroy Wikipedia. I assume that (1) article contributions based on AI are bad for the encyclopedia, even if the intent is good, and (2) talk page contributions based on AI are evidence of bad faith, (3) that AI is a bad thing. Cremastra (talk · contribs) 13:59, 16 August 2025 (UTC)Reply
Now for some facts:
  1. Some, but not all, article contributions based on AI are bad for the encyclopaedia. Good contributions based on AI are indistinguishable from good contributions that have been nowhere near an LLM.
  2. Some, but not all, talk page contributions based on AI are left in bad faith. Use of AI alone is not evidence of good or bad faith.
  3. Not all AI is LLMs. Not all AI, and not all LLM, is bad (or good) - it is vastly more nuanced than that.
Thryduulf (talk) 14:21, 16 August 2025 (UTC)Reply
In effect, the AI/LLMs-on-Wikipedia debate is divided between those like you who want to assess the content of the contribution, regardless of its origin, and those like me who think it's just simpler to ban LLMs because they're a net negative and more trouble than they're worth. The upside of your approach is that it's less likely to chase away potentially positive contributors; the downside is that it means a lot of cleanup work and AI slop to manage. The upside of my approach is that it's clean, simple, and effective; the downside is that it is best suited for cynical, paranoid people like myself. Cremastra (talk · contribs) 15:45, 16 August 2025 (UTC)Reply
In general I agree with your last comment, but I have a few quibbles:
  • it means a lot of cleanup work and AI slop to manage is incorrect. Slop will continue to be posted whether LLMs are banned or not for multiple reasons - not all slop is LLM slop, we have absolutely no way of determining whether something is or is not LLM-generated before it is submitted, and bans don't stop people doing the thing that is banned (either in good faith because they don't know it's banned, or in bad faith because they do it anyway). Fortunately we already have all the tools we need to manage this as best we can: slop can be closed/hatted/reverted (as appropriate to the situation) regardless of whether it is LLM-slop or human-slop, disruptive non-slop can be closed/hatted/reverted (ditto) regardless of whether it is LLM-disruption or human-disruption. So in summary neither approach changes the amount of cleanup work required.
  • Your list of downsides to your approach neglects to include the significant harm to the project from driving away good-faith editors and the amount of needless disruption caused by arguments over whether something is or is not LLM-generated.
Thryduulf (talk) 16:30, 16 August 2025 (UTC)Reply
divided
Well... going by the outcomes of the last half dozen LLM P&G RfCs, I'd say this division is like an 80/20 split in favor of "ban all LLM slop", and closer to 90/10 if the opposition is at Thryduuulf's level...
Anyway, it's not like copy-pasting LLM output in conversations or as scholarship is considered "okay" in the wider world, in which case we could AGF a bit more for newbies who don't realize it's not acceptable here. So frankly I have no qualms about biting an editor who needs an unfiltered LLM to communicate as they are either too lazy/incompetent to be a productive editor or they belong in a different language edition. JoelleJay (talk) 18:51, 16 August 2025 (UTC)Reply
I agree with this. Cremastra (talk · contribs) 19:13, 16 August 2025 (UTC)Reply
I am not okay with endorsing the biting of any editor, for any reason, let alone enshrining a requirement to do so in policy. Such is fundamentally incompatible with Wikipedia's basic philosophy and I'm horrified that people are seriously considering it. Thryduulf (talk) 20:40, 16 August 2025 (UTC)Reply
The UPEs must love you... JoelleJay (talk) 05:33, 17 August 2025 (UTC)Reply
I agree with Tryptofish's comment here on the matter. Correct me if I'm wrong, but I think you see LLMs and generative AI as a valid tool that can be misused; I, and many others, I think, see it as a tool that is fundamentally not appropriate for editing an encyclopedia. Cremastra (talk · contribs) 16:07, 17 August 2025 (UTC)Reply
I think you see LLMs and generative AI as a valid tool that can be misused... yes and no. The current generation of LLMs are unsuitable for making edits to the text of articles without full human review (AI-generated images are not really relevant to this particular discussion and are best treated separately anyway); whether LLM+human review is more or less "efficient" than a fully-human edit is a matter of personal opinion that is likely to be impacted by the nature of the specific edit. In most, but importantly not all, cases unreviewed LLM-based contributions to talk pages are not a net benefit. However this misses the fundamental reasons I disagree with you, which is that you see any use of LLMs as automatically meaning that the person using the LLM is contributing here in bad faith whereas I see evidence of people using LLMs here in both good and bad faith. Specifically there are many people who make LLM-based comments with a sincere desire to improve the encyclopaedia without knowing that there are many editors here whose views regarding AI are so blinkered that they cannot or will not consider that someone can do such a thing.
My response to Tryptofish's comments is similar: we do not BITE those who are incompetent or NOTHERE because we give them a chance to demonstrate that they can contribute constructively before blocking them, and when we do block them we do so on the basis that they either cannot or will not do so. That is fundamentally different to someone who currently is not contributing in a manner we approve of but who may (or may not) be capable and willing to when they learn what that means - if it turns out that they cannot or will not then it is appropriate to deal with them in the same manner we treat those who are incompetent or NOTHERE but who do not use LLMs. Simply using an LLM is not evidence, on its own, of bad faith, incompetence or of not being here to improve the encyclopaedia.
UPE is also similar in this regard - while there are unarguably many undisclosed paid editors who are here in bad faith there are also such editors who are here in good faith but simply do not know our rules and do comply when they learn that they need to (and how to do that). There are additionally an unknowable number of undisclosed paid editors who exclusively make good quality contributions to unquestionably notable topics such that nobody even suspects they are paid editors and they never learn they should disclose. So again, simply being an undisclosed paid editor is not evidence, on its own, that one is here in good or bad faith.
Separate from the issue of faith, as multiple other people have also pointed out, is that contributions that are actually bad, whether LLM-generated or not, can already be dealt with under existing policies and guidelines, so there is simply no need for a policy/guideline specific to LLMs. Thryduulf (talk) 09:15, 18 August 2025 (UTC)Reply
It is not a question of whether an LLM comment is necessarily bad and therefore should be removed. The point being made is that nearly all LLM comments are disruptive because of their length and thrown-at-the-wall details (and the fact that they are rarely helpful). Replying to such comments would require significant effort. Further, there is a good chance that replies will be ignored by the editor concerned. Debating LLMs would lead to their normalization which could easily overwhelm talk pages and noticeboards. Johnuniq (talk) 10:55, 18 August 2025 (UTC)Reply
Comments that are disruptive can already be hatted/removed regardless of why they are disruptive and regardless of whether they are LLM-generated or not. Comments that are LLM-generated but not disruptive (which you acknowledge exist) should not be removed. Thryduulf (talk) 11:11, 18 August 2025 (UTC)Reply
Comments that are LLM-generated but not disruptive (which you acknowledge exist) should not be removed. I disagree. I think it is not too much to ask to communicate with actual human beings. Talking with an actual user as opposed to through the screen of an LLM makes communication a lot easier. Cremastra (talk · contribs) 14:12, 18 August 2025 (UTC)Reply
Then you are in luck: an actual person will be the one that posted the content and the one you are talking with. LLMs do not post on their own, they all require human thought and input. That's how they work. PackMecEng (talk) 14:21, 18 August 2025 (UTC)Reply
That’s not entirely accurate. While it’s true that an LLM doesn’t autonomously log in and hit “submit,” it’s misleading to suggest that posts generated by an LLM are purely human in origin. In practice, many edits and comments across platforms are authored almost entirely by machine output, with minimal or even no meaningful human oversight. The “input” may just be a short prompt, but the bulk of the content—including the structure, wording, and even factual framing—comes from the model.
Equating that to “human thought” risks blurring the distinction between genuine human authorship and machine-assisted or machine-generated text. Saying “an actual person posted it” ignores that the human role might be closer to pressing a button than actually creating the content. That distinction matters if we care about originality, accountability, and reliability of information. CMD (talk) 15:07, 18 August 2025 (UTC)Reply
And if we know that they did not check what they are submitting you would be correct. But we cannot know that. It's just assuming bad faith at that point. So we go off the assumption that when someone hits submit they checked what they are posting. There is no other option. So yeah, I am going to ignore the distinction because it has no value and does not matter. PackMecEng (talk) 16:33, 18 August 2025 (UTC)Reply
That’s not entirely accurate. It’s misleading to suggest that posts generated by an LLM are human in origin simply because a human hit the submit button. In practice, many edits and comments across platforms are authored almost entirely by machine output, with minimal or even no meaningful human oversight. The “input” may just be a short prompt, but the bulk of the content—including the structure, wording, and even factual framing—comes from the model.
Equating that to “human thought” risks blurring the distinction between genuine human authorship and machine-assisted or machine-generated text. Saying “an actual person posted it” ignores that the human role might be closer to pressing a button than actually creating the content. That distinction matters if we care about originality, accountability, and reliability of information. -- LWG talk 17:39, 18 August 2025 (UTC)Reply
Equating that to “human thought” risks blurring the distinction between genuine human authorship and machine-assisted or machine-generated text. firstly there is a strong community consensus that machine-assisted and machine-generated text are not the same. There is a strong community consensus that the former is not inherently problematic, and a lesser consensus that only unreviewed LLM-generated text is.
Regardless, there is no benefit to making any of these distinctions because if the text is disruptive it can already be removed regardless of which of the three types it is. Nobody has given any justification for removing text (of any origin) that is not disruptive. Thryduulf (talk) 17:42, 18 August 2025 (UTC)Reply
LLM-generated content, and even comments with a significant LLM assist, are disruptive because they are not written by a real human being. Is it too much to ask to communicate with people as opposed to having users export their minds to an AI? Is that really so radical? I simply cannot understand your perspective on LLMs. How is using an LLM to communicate ever appropriate? Cremastra (talk · contribs) 18:07, 18 August 2025 (UTC)Reply
@Thryduulf I agree with you that there is a distinction between machine-assisted and machine-generated text, and that the former is not inherently disruptive. I also agree with the strong community consensus (against which you appear to be one of the few dissenting voices) that unreviewed LLM-generated text is inherently disruptive and is unacceptable on this wiki (though I share your concerns about feasibility and enforcement of some of the countermeasures that have been proposed).
I think where we differ is in our view of text that falls between the extremes. I think your insistence on ignoring source and judging text entirely on content disregards the fact that a large part of the meaning of any text is its surrounding context. The same text can be disruptive if it comes from one source in one context while being fine from a different source in a different context. One of the most essential pieces of context in any communicative act is who is the speaker. We already have firm rules here that it is totally unacceptable for editors to outsource their writing to a hired human, so I see no reason why we should tolerate outsourcing to a SaaS that does the same work. Likewise, we consider that any editor who copy/pastes content from an external website has an obligation to disclose where they copy/pasted the content from and their rationale in doing so, and I see no reason why we should tolerate undisclosed copy/pasting from an external website that dynamically generates the content on demand. I recognize that there's fuzzy space in the middle and I recognize that we should be cautious when making new rules, but I think your treatment of the issue is incomplete. -- LWG talk 18:40, 18 August 2025 (UTC)Reply
I agree with Thrydulf. Donald Albury 21:25, 16 August 2025 (UTC)Reply
Another consideration is copyright. If an editor posts an article that they did not write, that would seem to violate the existing copyright rules of Wikipedia. I was going to dig into the legal side of it, but got stuck on the answer that Google's AI came up with: "Copyright protection requires human authorship; works generated solely by AI are not copyrightable, but works that are assisted by AI can be if a human exercises sufficient creative control over the final output." I thought this was actually a good starting point for policy, that is, the concept of "sufficient creative control". Rublamb (talk) 20:09, 26 August 2025 (UTC)Reply
Oppose (kind of): I support the idea in theory. But the linked move request would have been WP:SNOW closed as oppose anyway. What happens if someone posts a LLM-generated RfC that people support (which will likely happen)? Or if someone posts a LLM-generated RfC on a perpetual source of drama, and people respond to it before the LLM use is noticed (which will also, maybe even more likely, happen)? Gnomingstuff (talk) 06:54, 12 August 2025 (UTC)Reply
Current practice for discussions that don't need closing seems to be someone asks if an LLM was used, and then either it is rather unbelievably denied, or there is some pivot to "you should focus on the argument rather than the method", which I'm pretty sure LLMs must be offering as a reply given how consistent it is. After that the discussion tails off. For those that do need closing and would otherwise linger wasting everyone's time, I would agree with the proposal that the guidelines should allow someone to quick close them, while not making it mandatory. CMD (talk) 07:18, 12 August 2025 (UTC)Reply
Broad support as a guideline, given this has moved towards bolded !votes below. CMD (talk) 11:09, 15 August 2025 (UTC)Reply
If LLMs are to be allowed to generate such requests then simply ask an LLM to generate a reply based on your position; make sure to ask it to give detailed explanations on all the points it raises. If that's the case then maybe someone could create a script to autogenerate comments, or even the whole discussion. Editors shouldn't be expected to put more effort into replies than the original poster put into theirs. -- LCU ActivelyDisinterested «@» °∆t° 09:37, 12 August 2025 (UTC)Reply
I admire your good sense to troll back basically. =) —Locke Coletc 03:12, 13 August 2025 (UTC)Reply
If generating the original comment using an LLM isn't trolling then neither is the reply. If the reply would be trolling then the original comment should be hatted. If people think that editors should be allowed to use LLMs, then streamlining the process so everyone can use them is surely desirable. -- LCU ActivelyDisinterested «@» °∆t° 14:41, 13 August 2025 (UTC)Reply
I would tend to support this, although with two caveats. Firstly, that AI detection software, while useful, isn't perfectly accurate and shouldn't be exclusively relied on for that purpose. And, secondly, that proposals getting reasonable support shouldn't be closed just because the original proposal was AI-generated, while those with no support can be immediately closed based on that.
The main issue for me (and the reason why I believe this is not comparable to existing human-written discussions) is that it is trivially easy to generate long proposals with AI, and that it comparatively takes a much larger amount of volunteer time to analyze (and usually dismiss) these proposals. This imbalance is simply not fair to our volunteers, and having to repeatedly deal with AI-generated proposals will just slow down community discussions and divert precious resources from more well-thought proposals. Chaotic Enby (talk · contribs) 13:21, 12 August 2025 (UTC)Reply
  • Support - To address the concerns about good proposals written with AI being closed, if it's so obvious a good idea, it would certainly be proposed quickly anyway. I don't think the benefit of a theoretical wonderful AI-written proposal that wouldn't be suggested anyway is worth the massive downside of giving any kind of additional foothold to LLMs. LLMs are an existential threat to Wikipedia as a useful project, and I see it as our mission to stop it wherever it is possible to do so.
    CoffeeCrumbs (talk) 17:28, 12 August 2025 (UTC)Reply
  • Support speedy-closes of formal discussions created primarily/entirely by chatbot - It's highly unlikely the people using the chatbots are willing (assuming they're able) to make coherent arguments based on policy and a reading of the available sources, but if they are there's no reason to bring in a fallible script that's huffing nutmeg. Even the most perfunctory human-written discussion is better than a long AI-written post simply because the human is far better at source critique and rebutting opposing arguments. As Enby says above, I wouldn't support speedy-closing any discussion which has already attracted some amount of commentary before its provenance was discovered. —Jéské Couriano v^_^v threads critiques 17:50, 12 August 2025 (UTC)Reply
    It's highly unlikely the people using the chatbots are willing (assuming they're able) to make coherent arguments based on policy and a reading of the available sources, but if they are there's no reason to bring in a fallible script that's huffing nutmeg. – Yes, this is another excellent point. I believe our attitude should be that use of AI to generate either article text, or discussion text, is ipso facto proof of incompetence as an editor -- because no competent person would think that AI-generated text is a useful contribution -- and should result in an immediate indef. I am not kidding about this. Shoot to kill. (Unblock only after a clear statement that they now understand the issue, but a second offense should be another indef, with a minimum 12 months before unblock may be re-requested).
    As for the wikt:bleeding hearts who worry about people who would not be able to contribute without relying on AI to write for them: well, if you can't write it yourself, neither can you review what AI wrote for you, so I'm afraid we can't use you on the project. EEng 22:25, 12 August 2025 (UTC)Reply
    I'm frankly astounded and appalled by this attitude. Whatever happened to WP:AGF, WP:BITE and the other half dozen or so things you've tossed by the wayside in your haste to hate? Thryduulf (talk) 23:05, 12 August 2025 (UTC)Reply
    Questioning someone's competence is not questioning their good faith, but stupid sincerity is not enough. And I do not apologize for BITE-ing a robot, even if it speaks through a ventriloquist's dummy in human form. To paraphrase someone that I'm not likely to quote ever again: Extremism in defense of Wikipedia is no vice. Moderation in tracking down and stamping out AI-generated crap posted by script kiddies is no virtue. [3].
    If we don't take dramatic action immediately, our cherished Neutral Point of View will soon give way to the Neural Point of View. (You can use that quip free of charge.) EEng 01:00, 13 August 2025 (UTC)Reply
    P.S. I dare anyone to take a gander at this [4] ANI discussion and not be angry at the time wasted by competent editors who are forced to wade through the AI slop being posted -- and defended! -- by this one incompetent. And I have no problem calling him incompetent, since he obviously lacks common sense. EEng 02:41, 13 August 2025 (UTC)Reply
    Dare accepted. I'm more angry at the people who are choosing to insult editors on a project page while yapping about how we "must take dramatic action immediately," instead of taking dramatic action immediately. Be the change you wish to see in the world. Gnomingstuff (talk) 04:24, 13 August 2025 (UTC)Reply
    Boy, you're not kidding. —Locke Coletc 04:31, 13 August 2025 (UTC)Reply
    Yeah, I don't think people realize how bad the problem has already gotten. A lot of the AI slop has gone undetected despite being blatant; you can't really say anyone's being "forced to wade through the AI slop" considering how few people are actually wading through it. I haven't even really done much to fix it myself -- my main skill is tracking down and identifying problems, and I'm OK with that. (Maybe I should have been an auditor.)
    But the AI cleanup backlog jumped from ~100 AI articles to ~400 in a couple of days, not due to a sudden influx of slop, but because I singlehandedly found 300 instances of slop that was already there. This isn't me being self-aggrandizing, just stating the facts. I didn't use any special tools besides a few simple targeted regexes -- I typed phrases we already know about into the Wikipedia search box and investigated the obvious cases. Anyone else could have done the same thing anytime in the past 2 years, rather than insulting people who often really do genuinely think they are helping the encyclopedia, sometimes because they've been encouraged to do so through edit-a-thons, Wiki Ed courses, or the Wikimedia Foundation itself. Their edit summaries often mention "improving the encyclopedia," "rewriting for a neutral tone," etc.
    (Also, for what it's worth: WP:CHATGPT is not actually policy, although arguably it should be. WP:CIVIL is.) Gnomingstuff (talk) 17:01, 13 August 2025 (UTC)Reply
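A minimal sketch of the kind of targeted search being described (the phrases are illustrative LLM tells, not necessarily the ones actually used): Wikipedia's built-in search supports exact-phrase and regex queries over page wikitext via the insource: keyword, e.g.

  insource:"as an AI language model"
  insource:/As of my last knowledge update/i

The first matches the exact phrase in the page source; the second runs a case-insensitive regular expression.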
    I've literally been tracking down hundreds of AI-generated articles for the past several days. Please don't tell me what I do and don't worry about. Gnomingstuff (talk) 23:08, 12 August 2025 (UTC)Reply
    If you're addressing me: I didn't tell you or anyone else what they worry about. I addressed any editors who happen to harbor a particular worry which I specified, and discussed that worry. EEng 01:00, 13 August 2025 (UTC)Reply
    +1 to everything EEng has said. AI contributions have no value, and I'm tired of people tip-toeing politely around AI slop and pretending it's something other than a steaming garbage heap. Quite frankly it smells of appeasement. Cremastra (talk · contribs) 13:52, 16 August 2025 (UTC)Reply
    Except we're not tip-toeing politely around AI slop we're pointing out that AI slop can be dealt with under existing policies and guidelines because all slop can be dealt with under existing policies and guidelines regardless of whether it is human slop or AI slop. Thryduulf (talk) 13:55, 16 August 2025 (UTC)Reply
  • Irrelevant - given that the actual proposal at an RM is simply “current title —> proposed title”, I don’t think it matters if someone uses an LLM to generate it. Similarly, an RFC question/proposal is supposed to be brief and neutral (example: “Should the article say ABC instead of XYZ?”) and, again, I don’t think it matters how that basic question is generated (In fact, I would love to train LLMs so they generate RFC questions this way).
    What I think is actually being objected to is using an LLM to generate the proposer’s opening statement (explaining why they think the move should take place, or why ABC should be replaced with XYZ) … but that is commentary on the proposal, not the proposal itself… and commentary is already covered by HATGPT. Blueboar (talk) 19:04, 12 August 2025 (UTC)Reply
    That is correct, and it's because the opening statement is essentially the proposer's argument for why XYZ should happen. It isn't something an LLM actually has the capacity to summarise or explain in most cases, especially if offline sources are being used for the argument (as LLMs generally cannot access those); using one for the purpose basically forces the proposer to waste time clarifying whatever the LLM said rather than actually defending their proposal, and that's outright ignoring the LLM's divinorum addiction. —Jéské Couriano v^_^v threads critiques 21:06, 12 August 2025 (UTC)Reply
    But HATGPT already says we should discount comments generated by LLMs. So what is the point of this proposal? Blueboar (talk) 21:17, 12 August 2025 (UTC)Reply
    To prevent people from wasting time clarifying or arguing over whatever the LLM said instead of defending their position.Jéské Couriano v^_^v threads critiques 00:49, 13 August 2025 (UTC)Reply
    But HATGPT already covers this. We can discount comments generated by an LLM… It doesn’t matter whether that comment is the initial comment (by the proposer) or a subsequent comment (by an editor responding to the proposal). Blueboar (talk) 12:41, 13 August 2025 (UTC)Reply
    But, if someone opens a proposal and their original comment gets collapsed, should other volunteers have to spend their time opposing the proposal? That's the question this new policy tries to answer – they shouldn't. From what I understand, HATGPT would leave the proposal open (taking volunteer time from more relevant proposals), just without the opening comment. Chaotic Enby (talk · contribs) 13:06, 13 August 2025 (UTC)Reply
    @Chaotic Enby: That's the wrong question. At present, without any change to any guideline or policy, editors already do not have to spend their time opposing any struck/collapsed proposal, even if a human had written it. We already can speedily close; a guideline saying "you can" when a policy already suggests "you should" (that policy being WP:NOTBURO) would be a bad guideline. If there is no driving rationale for a change from the status quo in the discussion, and everyone is supporting the status quo—and there is therefore no controversy—the formal process is a waste. Editors can keep talking about how they all agree that something is okay "in their spare time", not using resources of venues such as AfD, RM, etc.: The scaffolding of "7+ days' listed specifically-formatted discussion that must be closed" is not needed. Such processes are closed with a speedy endorsement of the status quo (such as Wikipedia:Speedy keep—an existing guideline about this). NOTBURO says: "Disagreements are resolved through consensus-based discussion, not by tightly sticking to rules and procedure". So, yes, some constraints of "rules and procedure" may help consensus-formation develop more harmoniously because there is disagreement (which may be accompanied by a little bit of tension and a human tendency to stonewall or overstep, especially when advanced tools with limited access are involved) ... but if there is no disagreement, why any rules, and why any procedure? The driving rationale for a change can evaporate in any discussion, turning a (seemingly or truly) controversial issue into a non-controversial one, and this can happen in a variety of ways. One such way is withdrawal/reversal of a !vote. Another is the nomination/comment being struck: ban/ARBECR violation, sockpuppetry, meatpuppetry, trolling, and AI content—already in WP:AITALK. So the only change might be: Should AI use be exempt from this general logic, and should editors become obligated to treat struck AI content as nominations/comments that are not struck. So this is fundamentally a relitigation of AITALK: If they are struck, but editors must begin to behave as if they were not, the striking of AI comments becomes striking in name only (just a visual change, no functional difference) and AITALK is effectively abrogated. So the proposal in this discussion is to overturn AITALK with the detail of leaving functionally meaningless striking-in-name-only in place. Blueboar is entirely correct. This discussion is badly framed and its no consensus outcome could improperly undermine AITALK.
    ... and the oppose !votes reflect this, as they intuitively understand the stakes. So, for example, below, opponents say: Unless a detection method is found that is consistently accurate I don't really trust others vibes to remove users votes in something, I think any procedure such as hatting suspected LLM-produced material has the potential of encouraging the biting of newcomers, and similar. So, comments should not be struck/collapsed ("removed"). That is just a !vote to abrogate AITALK, indistinguishable from a comment opposing adoption of AITALK in a discussion on whether to adopt AITALK ... but AITALK has already been adopted. Now, editors are building consensus for AITALK again, trying to persuade opponents of AITALK that it should be understood to mean what it already means. As these opponents oppose AITALK to begin with (because of a total skepsis toward the possibility of doing something about the AI problem / deeply-held view that it is not a problem), they will of course never be persuaded about some particularity regarding the application of this thing that should not be a thing and will embrace the premise that the thing is toothless and that a consensus is needed to give it teeth. At the same time, supporters of AITALK will not !vote in favor of AITALK-as-AITALK (aware or unaware of its practical implications) believing that their support is not needed because it has already been adopted. Therefore, this time, acceptance of AITALK will fail. The starter of this discussion wanted to make AITALK "stronger", but instead caused it to be undone. This is why RfC questions need to be neutral and need to contain a proposal to change the status quo without misrepresenting the status quo. —Alalch E. 23:58, 21 August 2025 (UTC)Reply
    This also gives AI comments extra priority and durability over human comments: While a human comment being struck could cause a discussion to be closed, an AI comment the same as that human comment being struck cannot cause a discussion to be closed, because showing this RfC to the errant speedy closer should lead that closer to concede that they acted in error, against community consensus, because treating struck AI votes the same as struck human votes is a rejected proposal: namely, policies and guidelines do not allow for the closure of discussions seeking community input (RFC/VPR/CENT/RFAR/AFD/RM/TFD/RFD/FFD/etc) that are started utilizing content that registers as being majority written by AI—the accepted status-quo premise of this discussion. —Alalch E. 00:36, 22 August 2025 (UTC)Reply
    WP:CCC, as to The starter of this discussion wanted to make AITALK "stronger", but instead caused it to be undone, it was not my intent to undermine AITALK whatsoever. The language at AITALK definitely could have been written better to make clear there was already a consensus for this. And the only reason this was turned into an RFC was because of the constant bolded !votes. I had a feeling I didn't understand the full history of AITALK/HATGPT, hence why I explicitly said I was looking for feedback in advance of a proposal. —Locke Coletc 00:48, 22 August 2025 (UTC)Reply
    A panel will be needed to fix the mess. —Alalch E. 00:50, 22 August 2025 (UTC)Reply
    I do agree with your analysis, although I don't think WP:NOTBURO says "we should" to anything. But yes, if anything, AITALK should be at least retained: the current discussion is not specific enough to find a consensus to revert it in part or as a whole.
    However, as the example that started this whole discussion showed, I don't think AITALK made it explicit enough that hatted AI content was to be treated as a struck nomination and explicitly allowed for an instant closure. The spirit of the policy certainly did, but the letter didn't, thus this discussion. Mostly because "the spirit" is something vague and, ultimately, a bit subjective. And having the policy itself make it explicit would remove this disagreement. Chaotic Enby (talk · contribs) 10:38, 22 August 2025 (UTC)Reply
    I'm pretty sure the LLM generated the entire request. If you go back to the diff I posted, go look at that page as it looked during the first edits: they inserted it into the wrong place on the page, and I get the impression it didn't know how to fill in certain fields so it left some blank. But if it makes any difference, I also object to the "opening statement" being majority-written by an LLM. —Locke Coletc 03:14, 13 August 2025 (UTC)Reply
    By "entire request", you mean only the first of the 10 comments posted in that RM by the newbie, but none of the significant and substantive arguing you and the OP did over (a) the actual question and (b) whether an LLM was used in the first comment, right?
    I'm somehow getting a different feeling about which part was the waste of time. WhatamIdoing (talk) 03:54, 13 August 2025 (UTC)Reply
  • Support — Blueboar presents a convincing enough argument in favor of this proposal. I consider this to be an extension of existing policy. Discussions over whether a proposal is AI-generated should be conducted as criticisms of the existing HATGPT rule. elijahpepe@wikipedia (he/him) 03:38, 13 August 2025 (UTC)Reply
  • Support clarifying existing policy (this wasn't a formal RFC when I initially commented, and as of now it's unclear what exactly people are !voting on) to make it clear that using an LLM to generate opening statements of discussions is just as unacceptable as using an LLM to generate replies. As Cryptic alluded to above, using an LLM to generate substantive content in discussions (as opposed to minor copyediting/formatting) is essentially the same as allowing someone else to log in and edit using your account. If we do not allow editors to direct their (human) personal secretary to edit on their behalf, why would we tolerate the same conduct when the secretary is replaced by an LLM? Or, from a different angle, content that is substantively copy/pasted from LLM output should be treated like content that is copy/pasted from other sources, which if not attributed goes against WP:PLAGIARISM. Policy aside, I believe any editor who generates content wholesale with an LLM should as a matter of courtesy/transparency indicate that they have done so, and indicate the model and prompt used. -- LWG talk 18:34, 13 August 2025 (UTC)Reply
    why would we tolerate the same conduct when the secretary is replaced by an LLM – What we're seeing in AI use is way worse than that. It's less a human using an AI secretary to generate content, and more an AI entity using a human (or ventriloquist dummy in human form) to post its content. It's not a human using AI -- it's AI using humans. EEng 19:53, 13 August 2025 (UTC) P.S. BTW, indicating the model and prompt used isn't enough, since in general an LLM's response to whatever you just asked it is shaped by the entirety of one's prior interactions with it.Reply
I think you'd be fully within your rights to close that discussion per existing consensus. If anything, the text at WP:HATGPT is too watered down from the RfC closure, which said that "if a comment is written entirely by an LLM, it is (in principle) not appropriate". IMO, something to that effect should be added to the policy text. pythoncoder (talk | contribs) 21:45, 13 August 2025 (UTC)Reply
I also agree with making that change to the text. Chaotic Enby (talk · contribs) 11:19, 14 August 2025 (UTC)Reply
  • Whether or not we need to expand HATGPT, I'm all in favor (aka support in a broad sense) of shutting down any discussion that wastes the community's time, and anything that resulted from some software "thinking" about it, rather than a human thinking about it, falls in the category of shut-it-down. Base it on IAR, or base it on common sense. I see some pearl-clutching about BITE and AGF, but that strikes me as so 2024. We are facing something that can scale to a magnitude that we will be unable to deal with, unless we are realistic about the need to deal with it assertively. --Tryptofish (talk) 23:08, 13 August 2025 (UTC)Reply
    Just to add to my previous comments… If it is felt that HATGPT needs to specify that it applies to the explanatory language of a proposal as well as subsequent comments, I don’t object to amending HATGPT. Blueboar (talk) 00:06, 14 August 2025 (UTC)Reply
    Seeing the ongoing disagreements about BITE, something additional that occurs to me is that the community has long been at least reasonably comfortable with WP:Competence is required. It seems to me that editors who feel like the only way that they can participate in the community is by letting LLMs do their writing for them are running afoul of competence. (I'm referring here to LLMs, not assistive technologies such as screen readers.) We don't regard it as a BITE situation when we issue a WP:NOTHERE block, and I think that a user who equates LLM-generated content with encyclopedic content is likely to be not-here. --Tryptofish (talk) 22:14, 16 August 2025 (UTC)Reply
  • Support. WP:AITALK already allows for the collapsing and striking of LLM-generated proposals, since they are a subset of LLM-generated comments, but this particular bullet point does not yet comment on whether the ensuing discussion should be closed. Discussions that lead with LLM-generated comments are often unconstructive, and frequently devolve into arguments about LLM use or bludgeoning with additional LLM-generated comments. Since there appears to be some uncertainty about whether LLM-led discussions can be closed, WP:AITALK should be amended to clarify that they can be, per a combination of the existing WP:AITALK text and this portion of the Marking a closed discussion section: "If a discussion has been so disruptive or pointless that it is better for editors to waste no further time even looking at it, the alternative templates {{Hidden archive top}} and {{Hidden archive bottom}} can be used instead, to produce a similar 'closure box' around it, but collapsed to hide the content, as with off-topic threads", although any collapsible template would work. An editor who posts an LLM-generated proposal can resubmit the proposal if they manually write it in their own words.
    I also support Pythoncoder's suggestion to have WP:AITALK explicitly designate LLM-generated comments as inappropriate, in line with the consensus at Wikipedia:Village pump (policy)/Archive 199 § LLM/chatbot comments in discussions. In practice, LLM-generated comments are already recognized as disruptive, especially when undisclosed. — Newslinger talk 07:57, 14 August 2025 (UTC)Reply
  • Oppose - Unless a detection method is found that is consistently accurate, I don't really trust others' vibes to remove users' votes in something. It is important to remember the previous consensus on the topic, specifically The word "generative" is very, very important here, though. This consensus does not apply to comments where the reasoning is the editor's own, but an LLM has been used to refine their meaning. Editors who are non-fluent speakers, or have developmental or learning disabilities, are welcome to edit here as long as they can follow our policies and guidelines; this consensus should not be taken to deny them the option of using assistive technologies to improve their comments. In practice, this sets a good lower bound for obviousness, as any comment that could conceivably be LLM-assisted is, by definition, not obviously LLM-generated. In practice most LLM-assisted comments are not noticed because it does not actually matter. Anything else can be dealt with under existing policy. I am similarly not convinced by the pearl clutching on wasting editors' time; Wikipedia editors have been able to do that for decades without using LLMs, and the addition of them has not caused a noticeable uptick that I can tell. This is not some crazy crisis that will doom the pedia; it is a tool, nothing more. The usual garbage in garbage out applies in most issues with using the tool. PackMecEng (talk) 00:33, 15 August 2025 (UTC)Reply
    @WhatamIdoing This quote and archive link might be what you were asking about on my talk page. @PackMecEng, you might consider what @Gnomingstuff has shared above: the amount of LLM content being found in articles has increased significantly, and usage of it on talk pages is only going to get worse. You call it pearl clutching, but if the scale of LLM use increases then it will be a significantly bigger time sink for Wikipedia editors. At what point do we all just shut off our browsers and just let LLMs argue back and forth on our behalf with a sentence or two to get them started? I edit and comment on talk pages because I want to interact with other editors, not people running chatbots and copying/pasting their responses or proposals in bad faith with little actual time investment on their part. —Locke Coletc 00:42, 15 August 2025 (UTC)Reply
    If you don't want to interact with a comment/user then don't interact with that comment/user, nobody is forcing you to do that. Thryduulf (talk) 02:13, 15 August 2025 (UTC)Reply
    What a lame cop-out. You could say the same thing about anyone who stirs the pot in nonproductive ways -- "Well, no one's forcing you." But someone has to deal with AI-generated vapid crap proposals, discussion posts, and so on. No matter who grits their teeth to do it, it's time that could have been productively spent elsewhere. EEng 03:41, 15 August 2025 (UTC)Reply
    But someone has to deal with AI-generated vapid crap proposals, discussion posts, and so on. firstly no they don't - such posts can be simply ignored by everyone, but secondly if someone does choose to deal with them then can do so under current policy without needing this proposal. Thryduulf (talk) 10:50, 15 August 2025 (UTC)Reply
    If everyone ignores it because of AI crap, then the clueless (or malicious) AI user declares WP:SILENCE and makes a misguided change. Then someone has to deal with it, if only by reverting. Anomie 12:08, 15 August 2025 (UTC)Reply
    Eh, probably not though, right? Could that happen? Sure, just the same as someone making a terrible proposal, but is it likely to get no pushback? Almost certainly not; this is the internet and the need to be right is far too strong. PackMecEng (talk) 13:16, 15 August 2025 (UTC)Reply
    Thryduulf was suggesting everyone can ignore the proposal. I followed that idea to a logical conclusion. Anomie 21:09, 15 August 2025 (UTC)Reply
    You can claim SILENCE, but the next editor can revert you, which is proof that there's no silent agreement. Additionally, some proposals (e.g., "Let's have a new guideline") require active support, not just the absence of objections. WhatamIdoing (talk) 18:00, 16 August 2025 (UTC)Reply
    Yes. And then the LLM-user throws a fit because they were reverted without discussion, and people have to engage further. Anomie 00:12, 17 August 2025 (UTC)Reply
    I can attest that this is in fact how these things go. I recently dealt with a user who, when reverted, just asked his LLM to formulate an argument contesting the reversion and proceeded to bludgeon talk pages with multiple AI-generated sections per day. They were ultimately indeffed as WP:NOT HERE and WP:CIR, but not before me and other editors wasted tens of thousands of bytes refuting the disjointed and incoherent logic of his bot and tracking down fabricated references. Even after the block it took me multiple hours (all my wiki time for several days) to go through all the articles this user has edited and reverse the damage. -- LWG talk 05:13, 17 August 2025 (UTC)Reply
    No Wikipedian should be forced to interact with LLM generated proposals. Period. If I had my druthers, WMF would reallocate all development resources to at minimum a way to tag edits automatically as containing LLM content, and at best, flat out rejecting LLM edits from new/unverified users (and then tagging anything allowed through so people can know what they're dealing with). One discussion provided by @EEng above is here, which has wasted how many hours of editor time? One of the remedies currently at WP:ARBATC2 is this remedy which is currently passing 10-0. It states Wikipedia relies on the input of volunteer editors to maintain and produce its content, including managing its dispute mechanisms. The time editors can commit to this is one of its most precious resources. This resource should not be wasted pointlessly. LLM edits are a time sink.
    Why are you supporting wasting editor time, a precious resource, replying to and dealing with LLM generated AI-slop? —Locke Coletc 03:02, 15 August 2025 (UTC)Reply
    No Wikipedian should be forced to interact with LLM generated proposals. Period. No Wikipedian is, even without this proposal. If a comment is a disruptive waste of time, it can already be hatted/removed as a disruptive waste of time under current policy, regardless of whether it is or isn't LLM-generated, meaning that whether it is or isn't LLM-generated is completely irrelevant, meaning that this proposal, which encourages arguing about whether something is or is not LLM-generated, is the waste of time. Thryduulf (talk) 03:07, 15 August 2025 (UTC)Reply
    That's like arguing that a particular speedy deletion is completely irrelevant if something can be deleted through AfD. We can and do approach issues through multiple ways which can involve different but overlapping considerations. CMD (talk) 03:12, 15 August 2025 (UTC)Reply
    No. To use your speedy deletion analogy this proposal is the equivalent of saying we need a speedy deletion criterion specifically for articles written primarily by editors who are or appear to be male that do not indicate importance. That's wholly redundant to the existing criterion that allows us to speedy delete articles that do not indicate importance regardless of who wrote them, but with added irrelevant, time wasting and disruptive arguing about whether or not the editor is or is not male. Thryduulf (talk) 03:22, 15 August 2025 (UTC)Reply
    I don't think tech choices are equivalent to demographic attributes, and find that a very poor comparison to make. CMD (talk) 03:38, 15 August 2025 (UTC)Reply
    Then you have misunderstood what I've written. I'm not saying the two inputs are equivalent, I'm saying that the interactions of the proposed and theoretical policies with existing policies and community behaviour are the same. Thryduulf (talk) 10:48, 15 August 2025 (UTC)Reply
    I understood. It was a terrible analogy that also doesn't work. There's no need to obscure the discussion by asserting there are only proposed and theoretical polices, we already have existing guidelines around this topic that do not work in a way similar to weird assertions about gender. CMD (talk) 11:05, 15 August 2025 (UTC)Reply
    Your comment makes it clear that you have either not actually understood or are not listening to anything that contradicts your opinion. Current policies and guidelines allow for anything that is disruptive to be closed/hatted regardless of whether it is LLM-generated or not. So the only things that are not covered are things which are not disruptive, and we should not be speedily closing things that are not disruptive. Thryduulf (talk) 12:39, 15 August 2025 (UTC)Reply
    My opinion is that we shouldn't treat llm use like an inherent demographic characteristic. We have specific guidelines to hat LLM-generated text already, so your assertion is incorrect. CMD (talk) 16:47, 15 August 2025 (UTC)Reply
    @CMD Unfortunately, it kind of is relevant, although maybe for a different reason. For unsurprising reasons finding reliable sources for this is a nightmare, but many surveys suggest AI use is arguably more common in non-Western countries, and this is consistent with what I've seen on Wikipedia both in articlespace and on talk pages. Gnomingstuff (talk) 14:26, 15 August 2025 (UTC)Reply
    There will be trends of llm use that correlate with different demographic aspects, but that does not make llm use a demographic aspect itself, similar to other trends that correlate with demographics. CMD (talk) 16:50, 15 August 2025 (UTC)Reply
    I think we're on the same page then. Gnomingstuff (talk) 17:02, 15 August 2025 (UTC)Reply
    I talked to someone yesterday who uses LLMs regularly. Part of her job is responding to customer complaints. She has pretty severe dyslexia. What used to be an hour of carefully checking her spelling, grammar, and punctuation is now 30 seconds of explaining the problem to her phone, 60 seconds of reading the response out loud to make sure it's correct, and then sending it to the customer. I'm honestly not seeing much difference between this and the https://www.snopes.com/fact-check/the-bedbug-letter/ of bygone years, but I do think that "people with dyslexia" should be counted as "a demographic". WhatamIdoing (talk) 18:11, 16 August 2025 (UTC)Reply
    I don't know why I've been tagged here to be perfectly honest but my point seems to have been missed. Dealing with LLM slop is a direct way of improving the encyclopedia, whether you like it or not. Complaining about being "forced to" deal with LLM slop -- something that, again, you clearly are not being forced to do -- is not.
    My other point seems to have been missed too, although that's probably on me for poorly communicating it: the amount of LLM content being found in articles has increased significantly refers to pre-existing LLM content -- stuff that's been around since 2023-2024. We're past the point where we can worry about the "increasing scale" of LLM use (and I wish the recent news articles were more clear about this). The scale has already increased. Our options now are to deal with it or not. Gnomingstuff (talk) 14:19, 15 August 2025 (UTC)Reply
    I don't know why I've been tagged here to be perfectly honest I always feel rude referring to another editor's comments in larger discussions like this when, given its size, they might miss it. —Locke Coletc 17:17, 15 August 2025 (UTC)Reply
    No worries, that's what I figured. I probably would have missed it. Gnomingstuff (talk) 18:17, 15 August 2025 (UTC)Reply
    "garbage in garbage out" does not apply to this tool at all. The close is a bit tricky in that respect, llms are inherently generative in how they operate, they cannot not generate. You can put great stuff in and get garbage out (and the reverse, sometimes). Treating it as a garbage in garbage out tool completely misunderstands what llms are. CMD (talk) 02:50, 15 August 2025 (UTC)Reply
    No, that is pretty much how they operate. Like most tools, even good input has the possibility to generate undesirable results. Being a good user of the tool lets you recognize that and adjust. That is garbage in garbage out; it still comes down to poor tool use. LLMs are not special in that regard, I'm afraid. PackMecEng (talk) 13:15, 15 August 2025 (UTC)Reply
    Garbage in garbage out means that flawed inputs result in flawed outputs. If you have good input then the idiom doesn't apply at all. CMD (talk) 16:51, 15 August 2025 (UTC)Reply
    Eh, if the input did not produce the desired result but another one did, it was a flawed input. That's how that works. PackMecEng (talk) 18:57, 15 August 2025 (UTC)Reply
    Any loss at craps is also due to flawed input. fifteen thousand two hundred twenty four (talk) 19:01, 15 August 2025 (UTC)Reply
  • This discussion just got reformatted as an RFC (for which I am partly responsible as I am one of the people who used bold !votey formatting in my comment), but on reflection it's unclear to me what the formal question being discussed is. Many people here seem to be rehashing prior discussions about the harm/lack of harm/current trends of LLM use on Wikipedia, which is unnecessary as prior discussions have already established a strong consensus that types of LLM use people are complaining about here are disruptive and should be hatted/removed. As far as I can tell, the only real question posed here is whether a proposal whose opening statement is hattable/removable under existing consensus may also be closed without further discussion. The answer is obviously yes, no RFC required. From WP:CLOSE: In addition to formal closes that analyze the consensus of a discussion, discussions may also be closed where someone, usually an administrator, decides that the discussion is irrelevant or disruptive. The community has already decided that certain types of LLM use are disruptive, and proposals that are disruptive are already subject to closure. What else is there to discuss? -- LWG talk 18:33, 15 August 2025 (UTC)Reply
    The question put forth here is: should content generated by LLMs automatically be hatted/closed if certain tools register it, with high confidence, as AI-generated? The previous discussion was based around bad or disruptive content vs all content in general, a distinction the previous RFC makes. That is why this is a problem; it's an expansion past, and opposed to, the previous RFC. PackMecEng (talk) 18:56, 15 August 2025 (UTC)Reply
    Since that RM was disruptive (and in fact all the !votes were Oppose anyway) my understanding is that under current community norms it could and should have been closed at any point. -- LWG talk 19:09, 15 August 2025 (UTC)Reply
    As was done at the example provided by me at the start, we did in fact HAT the proposal, but the discussion remained open (and !voting occurred). This RFC is further clarifying that for proposals of any type (RFC, xFD, etc), the discussion can simply be closed (perhaps with a closure note of No action taken and a reference to WP:HATGPT), sparing concerned editors from having to monitor such conversations for a week or longer. There's also the lingering question of how to handle such a situation after !voting has commenced. Void the discussion and leave it to anyone invested in the idea to start a new discussion (not utilizing LLM)? —Locke Coletc 18:59, 15 August 2025 (UTC)Reply
    If there is productive ongoing discussion, closing it would be counter-productive (and in some cases disruptive). If there is ongoing discussion that is not productive, then existing policies and guidelines allow it to be closed. There is no need for anything else. Thryduulf (talk) 19:53, 15 August 2025 (UTC)Reply
  • I think fighting against AI/LLM is a losing battle (we'll see AI-generated textbooks,[5] AI-generated books/novels,[6] AI-generated encyclopedias (?), etc. sooner or later). But I support this proposal in general. I would add an exception, though, and say that if the editor prefaces their AI-generated proposal with something along the lines of: "I've used AI/a chatbot to help me generate this proposal", then I would be fine with letting the proposal stand. Some1 (talk) 15:12, 16 August 2025 (UTC)Reply
    @Some1 We do indeed have an AI-generated encyclopedia[7], although it precedes llms. CMD (talk) 17:25, 16 August 2025 (UTC)Reply
    Thanks, and I just learned that there's something called wikigen.ai... Some1 (talk) 17:35, 16 August 2025 (UTC)Reply
    That thing seems to just make summaries of our articles for people who are lazy, as well as occasionally making up some nonsense. I tried on Macrobdella decora, a topic I'm very familiar with, and it told me "The leech's closest relative is believed to be the European medicinal leech, Hirudo medicinalis." which is quite a doozy given that that species is in a different family altogether. Cremastra (talk · contribs) 19:16, 16 August 2025 (UTC)Reply
    A simple fill-in-the-blank boilerplate form, using technology simpler than the Mail merge word processing button in the 1980s, is not "AI-generated" content. WhatamIdoing (talk) 18:03, 16 August 2025 (UTC)Reply
    That very much depends on what you mean by "AI-generated". Some editors have previously noted that their definition of that term includes essentially anything touched by anything that can be called an "AI", others use a definition closer to "has no human input after the prompt". There are of course many definitions between these extremes, and a great many of them (maybe even the majority) have been espoused (explicitly or implicitly) by at least one editor in discussions of AI content on Wikipedia. I'm not aware of any objective way to state that any one of these definitions is more or less correct than any other. Thryduulf (talk) 18:33, 16 August 2025 (UTC)Reply
    We do have WP:LLMDISCLOSE. It isn't enforced because it isn't policy, but it probably should be. Gnomingstuff (talk) 19:17, 16 August 2025 (UTC)Reply
  • That mention just above, of WP:LLMDISCLOSE, hits upon the same thing that I have been starting to think. It might be a very good idea, and even something where we might find agreement between editors who oppose all LLM content, and editors who argue that the content should be judged on its merits, if we were to make disclosure a matter of policy, and enforceable. I'm not making a formal proposal – yet. Just floating the idea. We have, in the past, felt like paid editing had the potential to overwhelm Wikipedia with unacceptable content. But requiring disclosure has been working reasonably well, all things considered. I think the same principle could apply here – at least as a start, pending what develops in the future if the scale of AI reaches a level where we would have to consider more. --Tryptofish (talk) 22:21, 16 August 2025 (UTC)Reply
  • Oppose as stated per PackMecEng. I don't think there is any clear way to differentiate between LLM-generated proposals and human-generated proposals as of right now: I don't trust so-called AI-detecting websites and I definitely don't trust editors to do this based on vibes. Loki (talk) 23:07, 16 August 2025 (UTC)Reply
  • Oppose I believe that adding policies restricting the use of LLMs is unnecessary WP:CREEP, and that any problems arising from the use of LLMs can be handled with previously existing policies, guidelines, and customary usage. In addition, given the uncertainties of correctly identifying LLM-produced material, I think any procedure such as hatting suspected LLM-produced material has the potential of encouraging the biting of newcomers. - Donald Albury 00:05, 17 August 2025 (UTC)Reply
  • Already covered by WP:AITALK. If editors engage on the substance by supporting the AI-generated proposal, the discussion cannot be closed. If they only oppose the proposal, which is then struck according to AITALK, WP:SK#1 applies, in the deletion process, and by analogy in other processes (absence of a driving rationale for a change from the status quo). If the nomination is struck, its rationale becomes formally absent. If there are support !votes, they take the place of the nominator, as a rationale or rationales is present in them.—Alalch E. 14:17, 17 August 2025 (UTC)Reply
  • Oppose The move proposal cited by the OP seemed reasonably coherent and to the point. Its only fault seemed to be that it was rather prolix. But this discussion here demonstrates that humans are quite capable of generating lots of bloviation without AI assistance. For such general problems then you need general procedural rules such as arbcom's 500 word limit. Andrew🐉(talk) 20:45, 18 August 2025 (UTC)Reply
  • Request panel close of this discussion. Because there is a problem with the question (the problem is discussed at length in the discussion itself), this discussion is very unfocused, and correctly interpreting it will require a panel. Otherwise, findings could be absurd, unintentionally ironic, could distort existing policy, etc. Three administrators will be needed to assess the quality of the arguments given on the various sides of an issue, as viewed through the lens of Wikipedia policy, and they need to reality-check amongst themselves on what current Wikipedia policy actually says to do that correctly. A single (well-intentioned and responsible) closer could make an error, but a panel is unlikely to.—Alalch E. 00:48, 22 August 2025 (UTC)Reply
    If those who volunteer to evaluate consensus wish to do so in a group, by all means. I disagree, though, with mandating that it be done by a group. There are numerous experienced evaluators of consensus who I feel have established their reliability in producing considered evaluations. isaacl (talk) 00:14, 25 August 2025 (UTC)Reply
    • Support LLM generated comments help enhance efficiency by synthesizing complex information into digestible forms
    Umar Halid (talk) 11:45, 26 August 2025 (UTC)Reply
  • Comment. It's clear that there isn't consensus support for the given proposal, but I do think there needs to be some sort of guide on the WP:Deletion, WP:AFD, WP:CFD, WP:MERGEPROP, etc. pages articulating what to do with AI/LLM generated proposals and how to respond. Most editors aren't going to be aware of WP:HATGPT, so there is a need to formulate some sort of guideline language on the various pages. Best.4meter4 (talk) 17:02, 26 August 2025 (UTC)Reply

Alternative approach: make transparency policy

edit

An idea that came up in passing, above, is to make WP:LLMDISCLOSE, or something similar, a policy. Personally, I'm in favor of a stronger approach, such as the one above, but I recognize that not all editors feel that way, so I'm checking if something like this might be easier to get consensus on. What I'm hearing is that some editors feel that the use of LLMs should not be regarded as inherently disruptive. I actually think it is, but I can understand the disagreement, and I think that requiring disclosure would be better than nothing.

What I'm thinking of is to take wording similar to what is currently at LLMDISCLOSE, and put it on a standalone page, which would then be presented to the community as a proposed policy. I see this as somewhat analogous to what we currently do with COI and paid editing. Don't forbid it, but ask editors who use LLMs to be transparent about it. This would make it easier to track, and avoid confusion.

Does this idea have enough support to justify pursuing it further? --Tryptofish (talk) 23:55, 24 August 2025 (UTC)Reply

I would support this. Like you I prefer a strong approach, but I suspect that LLMs will end up like things such as COI and paid editing – strongly discouraged, disclosure required, but not actually banned. Cremastra (talk · contribs) 00:00, 25 August 2025 (UTC)Reply
To clarify, does your proposal include repealing the current guidance on hiding program-generated comments? isaacl (talk) 00:19, 25 August 2025 (UTC)Reply
Good question. I'm still trying to feel out how other editors regard the idea, so I'm willing to go either way, but I would lean towards treating them as not being mutually exclusive. In other words, I would lean towards saying that the first editor, the one who posts an LLM-generated comment, is required by policy to disclose that it was LLM-generated, and that the second editor, the one who wants to hide that comment, is permitted to do so. --Tryptofish (talk) 20:18, 25 August 2025 (UTC)Reply
In that case, the original question being posed still needs to be resolved. Does a proposal (minus any commentary) fall under the current guidance? If not, then is there consensus to hide proposals whose text was generated by a program? isaacl (talk) 21:31, 25 August 2025 (UTC)Reply
In that case, the original question being posed still needs to be resolved. Cool. You can do that above, this section is about Tryp's proposal. —Locke Coletc 21:42, 25 August 2025 (UTC)Reply
Just clarifying this is a parallel proposal, rather than an alternative approach that replaces the existing approach. isaacl (talk) 22:30, 25 August 2025 (UTC)Reply
Strictly speaking, I'm trying to assess what other editors think, so this isn't (yet) a proposal in the formal sense. But yes, I'm inclined to approach this as a parallel proposal, unless I get feedback here to formulate the proposal differently. --Tryptofish (talk) 22:52, 25 August 2025 (UTC)Reply
Your proposal is unrelated to AITALK, and making LLMDISCLOSE a policy is a stronger approach than having AITALK remain what it already is, as the non-approach above is an unintentional rehash of the AITALK RfC, which had already resolved with the adoption of the AITALK approach, about which you said that not everyone agrees, but it's already a consensus-settled matter from just several months ago, and consensus is not unanimity. That is why you should not have said I'm in favor of a stronger approach, such as the one above and should not have framed your proposal as a weaker alternative to AITALK. I am the original author of LLMDISCLOSE (Special:Diff/1134431809), but I refuse to !vote on it in a way that is premised on AITALK being effectively abrogated based on a confused rehash. —Alalch E. 03:15, 26 August 2025 (UTC)Reply
Oh, maybe we were just misunderstanding each other. It was never my intention to frame what I suggest here "as a weaker alternative to AITALK". Sorry if that's what you thought I was saying. I was trying to say that requiring disclosure is, well, in a sense, "weaker" than prohibiting LLM-generated proposals. And I was doing that in hopes of gaining support from editors who oppose the proposal above (which I, personally, support). But I don't want these issues to become a fight between us. You thought of LLMDISCLOSE. I like LLMDISCLOSE. I'm looking to promote something like LLMDISCLOSE from an essay to a policy. --Tryptofish (talk) 21:57, 26 August 2025 (UTC)Reply
Not all editors feel that way but it already passed when WP:AITALK was adopted, and consensus is WP:NOTUNANIMITY. This l2 section is now a weakly and badly framed proposal to adopt again something that was already adopted very recently. It is all a bad misunderstanding. —Alalch E. 17:20, 25 August 2025 (UTC)Reply
I must be confused, when I visit WP:LLMDISCLOSE I don't see a {{policy}} tag on it. I see the whole page tagged with {{essay}}. Can you point to the existing consensus for WP:LLMDISCLOSE to be tagged as policy? —Locke Coletc 17:48, 25 August 2025 (UTC)Reply
I was referring to Personally, I'm in favor of a stronger approach, such as the one above, but I recognize that not all editors feel that way,. —Alalch E. 19:50, 25 August 2025 (UTC)Reply
The way I understand it, WP:AITALK is part of the Talk page guideline, so it's a behavioral guideline rather than a policy. Although it has consensus, it also is written in terms of "may be struck or collapsed", rather than "must". WP:LLMDISCLOSE is currently on an essay page. --Tryptofish (talk) 20:18, 25 August 2025 (UTC)Reply
The same section of the same guideline says Removing or striking through comments made by blocked sock puppets of users editing in violation of a block or ban. Naturally, that means that sock comments and nominations are ordinarily discounted, once detected. Do we need a VPP discussion to adopt a policy for the same? No. —Alalch E. 21:40, 25 August 2025 (UTC)Reply
When I'm ready to make a formal proposal, I'm inclined to have a community discussion, on the theory that policies should be adopted in that way. If it turns out that support is so clear that it becomes a WP:SNOW kind of thing, that would be great, but I'm not going to presuppose that. --Tryptofish (talk) 22:52, 25 August 2025 (UTC)Reply
The community discussion was had, just several months ago: LLM/chatbot comments in discussionsAlalch E. 03:05, 26 August 2025 (UTC)Reply
Strong support, we need to stop with the mixed messages. Also, if enough people do disclose it gives us information/edit patterns that can be used to track/identify undisclosed AI edits. Gnomingstuff (talk) 19:13, 25 August 2025 (UTC)Reply
Strong support making the WP:LLMDISCLOSE section policy ({{policy}} will need to be updated to have a |section=yes option for this use case as {{guideline}} already does). This should be uncontroversial. —Locke Coletc 20:36, 25 August 2025 (UTC)Reply
Support. Undisclosed LLM use is already considered an aggravating factor in conduct disputes, and I support formalizing this to convey our expectations more clearly. Per Locke Cole, using {{Policy section top}} on WP:LLM and {{Policy|type=section}} on WP:LLMDISCLOSE would be a simple way to implement this. — Newslinger talk 01:53, 26 August 2025 (UTC)Reply
Support making WP:LLMDISCLOSE policy in the way suggested by Locke Cole and Newslinger. I'm still confused by a lot of the discussion above, but it has been my position for a long time now that disclosure of LLM use (when the LLM is contributing substantive content) is necessary to avoid violation of WP:PLAGIARISM and WP:NOSHARE, and I would like to make that expectation clear in a way that can easily be explained to new editors. -- LWG talk 12:03, 26 August 2025 (UTC)Reply
Support making WP:LLMDISCLOSE policy, which is de facto how it is usually treated already. Making it clear upfront avoids leaving a minefield for new editors having to learn unwritten social norms about LLM use. We already require disclosure for paid editing, or for the use of multiple accounts, and it doesn't prevent us from having additional regulations. Chaotic Enby (talk · contribs) 15:26, 26 August 2025 (UTC)Reply
  • Support making WP:LLMDISCLOSE policy. I also think editors who violate disclosure should be blocked from editing.4meter4 (talk) 17:08, 26 August 2025 (UTC)Reply
    It wouldn't break my heart if there were a WP:1LLM or WP:3LLM rule similar to WP:1RR/WP:3RR. But even without that, if this were policy, it would be textbook WP:DE (especially if done so after receiving a {{uw-a1}} on up to {{uw-ai4}} on their talk page with no sign of stopping). —Locke Coletc 17:26, 26 August 2025 (UTC)Reply
    Regarding 1LLM/3LLM, I would say the problem is more quality than quantity? If people use LLMs to fix their spelling and nothing else, or as an advanced regex, then using them once or ten times isn't an issue, while someone pasting unreviewed LLM text in a discussion is problematic even if done only once (and can already be hatted). Chaotic Enby (talk · contribs) 18:33, 26 August 2025 (UTC)Reply
    Since this is just a discussion about disclosure, it would do nothing to get in the way of any further kinds of actions (in other words, it won't say that admins are prevented from blocking someone who is disruptive). I agree that there is room for judgment in evaluating how the LLM has been used, and that admins have room for judgment in whether to block or warn someone. --Tryptofish (talk) 21:57, 26 August 2025 (UTC)Reply
    If 1/3LLM is specifically for undisclosed, blatant LLM output, and isn't a restriction on additional actions (like 3RR doesn't prevent blocks for other kinds of edit warring), then it could definitely work. Chaotic Enby (talk · contribs) 22:03, 26 August 2025 (UTC)Reply
    This is interesting. My thinking up to this point was to go as far as proposing policy that, in effect, says something to the effect of "you are required to disclose". So if someone does not disclose, they would be violating the proposed policy. What you are saying is to institute a more formal process over how many chances an editor gets before crossing a "bright line". I'm interested in what other editors think about that. --Tryptofish (talk) 22:09, 26 August 2025 (UTC)Reply
    I don't know if a more formal process is really needed – despite the name, it feels more like a natural continuation of the warning process, rather than a per-article thing like 3RR. So maybe, instead of a bright line, it could be a guideline on how much someone should be warned before formal sanctions? 3LLM could also help avoid editors being blocked based on one person's hunch, if we require three different people to warn someone for undisclosed LLM use. Chaotic Enby (talk · contribs) 22:17, 26 August 2025 (UTC)Reply
  • Support: This would help editors make informed decisions about where to focus their efforts. fifteen thousand two hundred twenty four (talk) 20:04, 26 August 2025 (UTC)Reply
    @Fifteen thousand two hundred twenty four, your first edit to a talk page was only a couple of years ago. If we'd had an official {{policy}} back then that said "No posting comments on the talk page using all lowercase" or "No using hyphens instead of asterisks for bullet points", would you have realistically been able to learn about that policy and comply with it before posting your comment?
    How do you think you would have felt, if you came back the next day and found your comment hidden with a note saying something like "Collapsed violation of formatting rules"? Would you have felt welcomed and valued, or rejected and confused? WhatamIdoing (talk) 20:09, 26 August 2025 (UTC)Reply
    WAID, I'm not sure from your question whether or not you have concerns about the proposal here, but I would welcome suggestions from you or anyone else about how to improve it. --Tryptofish (talk) 21:57, 26 August 2025 (UTC)Reply
    There is a vast gulf between petty rules about formatting issues and rules asking for original thought. Cremastra (talk · contribs) 22:44, 26 August 2025 (UTC)Reply
    I'm unsure what relevance this has to my support for a policy requiring editors disclose when they use an LLM.
    - "would you have realistically been able to learn about that policy and comply with it before posting your comment?" – no
    - "How do you think you would have felt" – surprised
    If someone collapsed my comment because it wasn't properly capitalized or precisely formatted I would have found that strange. If someone collapsed my comment because it wasn't my own original words, unfiltered by a predictive model, I would have found that deeply reasonable.
    Some other editors would no doubt feel as you posited; however, the well-being of the project comes before editors' personal feelings. The community has decided that use of an LLM in discussions is disruptive enough to the functioning of the encyclopedia to warrant the option for removal from immediate view. I don't disagree.
    Perhaps we could do more to inform editors whose comments have been collapsed. Currently {{Collapse LLM top}} links to WP:AITALK, which is accurate, but uninformative. It's the same as saying "this comment has been collapsed because there is a rule that says it can be collapsed". Maybe modifying WP:AITALK to provide a bit of the rationale behind why the policy exists could help. fifteen thousand two hundred twenty four (talk) 23:13, 26 August 2025 (UTC)Reply
    I think that's a very good point, so I just did this: [8]. --Tryptofish (talk) 23:21, 26 August 2025 (UTC)Reply
I see that people are leaving support comments, but I'm confused by what they are supporting. Are they endorsing that you start a formal RfC, or that the policy actually change? If the second, I disagree, largely because I don't know what "incorporates LLM output" means. If we make LLMDISCLOSE policy, we should revise the text to make "incorporates" more specific. Cheers, Suriname0 (talk) 23:03, 26 August 2025 (UTC)Reply
I'm interpreting it as supporting having a formal RfC. I suspect that some editors think that they are supporting an actual policy, but that would mean that they likely would support having an RfC to do that. At this point, I'm assessing whether there is enough support to keep going with it, and it looks like there is. I'm also interested in feedback that I can use to make a proposed policy that improves on what the essay page currently says, so I'm taking note of every comment here that does that. --Tryptofish (talk) 23:09, 26 August 2025 (UTC)Reply
Great, looking forward to the RfC. One specific thing that LLMs are great for, and that you should think about covering (or not) in a policy form of LLMDISCLOSE: translating random bibtex/ACM/MLA/Chicago references into the appropriate {{cite}} template, for sources that lack a URL or that have a publisher URL that our Zotero-based connectors can't extract correct metadata for. Trivially, an edit I make in this way "incorporates LLM output", but it's functionally the same as using the Zotero connector: I input the URL/DOI/ISBN/citation, then correct the (often incorrect) wikitext output. It's not a problem to require disclosure in this case, but I do think it probably isn't helpful in the way this policy is intended to be.
Other edge cases that might be worth thinking about while drafting the RfC: using LLMs with web search to conduct a WP:BEFORE or to find sources I might have missed, using sources discovered in search engine AI summaries (e.g. Google's Gemini summary), making edits based on LLM critiques, using LLMs for template discovery ("I want to do X on English Wikipedia, is there a wikitext template that does that?"), or using LLMs for suggesting missing See Also links (this is a task that other ML models exist for already; it might be weird to require disclosure when an LLM is used to generate suggestions but not when other 3rd-party ML models are used). Cheers, Suriname0 (talk) 00:41, 27 August 2025 (UTC)Reply

Wikipedia:Naming conventions (stations) has an RfC

edit
 

Wikipedia:Naming conventions (stations) has an RfC for possible consensus. A discussion is taking place. If you would like to participate in the discussion, you are invited to add your comments on the discussion page. Thank you. Tomiĉo (talk) 10:40, 12 August 2025 (UTC)Reply

I, for one, can't seem to find this RfC. Dege31 (talk) 12:37, 12 August 2025 (UTC)Reply
It's at Wikipedia talk:WikiProject Stations. CMD (talk) 13:17, 12 August 2025 (UTC)Reply

Annotated books: Should cite use the editor's name or the original author's name?

edit

Does WP have a guideline governing the author that appears in a source such as the following:

  • Baring-Gould, William (1967). The Annotated Sherlock Holmes. New York: Clarkson N. Potter. ISBN 0-517-50291-7.

where the book has an original author (Doyle) and a famous editor/annotator (Baring-Gould)? This particular example (from the Sherlock Holmes article) seems wrong, since Doyle wrote more than half the words in the book, yet is not even named in the citation.

This must be a common issue: There are scores of books where famous editors add extensive commentary to books written by another famous author (Dante, Dickens, Conan Doyle, Shakespeare, Carroll, Twain, etc.). In the situation where the editor is famous, and the edited/annotated book becomes famous, people typically refer to the book by the editor's name. That explains how Doyle's book got attributed to Baring-Gould in the example above.

This is not an academic question: I'm trying to figure out how to display a source, The Journals Of Captain James Cook, where the book is almost always referred to by the editor's name, not Cook's name.

Is there a guideline that says something like: "The named author should be the author of the original work; and the annotator/editor should be described as the editor"? Or is WP silent on this, and it is handled on a case-by-case basis? I looked in the MOS and could not find anything. Noleander (talk) 03:59, 20 August 2025 (UTC)Reply

If you are citing the original author's text, you should be using the original work and not the annotated work. If the original text was only ever published in an annotated form, your text should make it clear whether the material is from the annotations or the original text but the actual book would still be cited to the editor/annotator. --User:Khajidha (talk) (contributions) 12:22, 20 August 2025 (UTC)Reply
Thanks for the suggestion. For the source I'm interested in: The Journals Of Captain James Cook was never published (author Cook) as a stand-alone work. The first publication was the edited/annotated version (editor was Beaglehole). Noleander (talk) 13:28, 20 August 2025 (UTC)Reply

I cannot find any support in {{cite book}} for listing both an editor and an author, except in the case of a book where each chapter is written by a different author. In the case where an editor produces an edited version of another author's work, The Chicago Manual of Style, 18th ed., ¶ 14.6, says to give both the editor's name and the author's name, and provides this example (showing both markup and as rendered):

{{Hanging indent|Bonnefoy, Yves. ''New and Selected Poems''. Edited by John Naughton and Anthony Rudolf. University of Chicago Press, 1995.}}
Bonnefoy, Yves. New and Selected Poems. Edited by John Naughton and Anthony Rudolf. University of Chicago Press, 1995.

Jc3s5h (talk) 21:53, 20 August 2025 (UTC)Reply
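
For concreteness, here is a rough wikitext sketch of how that Chicago-style citation might be approximated with {{cite book}}, using the general-purpose |others= field to carry the editors' names; the parameter choice is an assumption for illustration, not something the template documentation prescribes:

{{cite book |last=Bonnefoy |first=Yves |title=New and Selected Poems |others=Edited by John Naughton and Anthony Rudolf |publisher=University of Chicago Press |year=1995}} <!-- |others= is a free-text "other contributors" field; crediting the editors through it is an assumption -->

This would render along the lines of "Bonnefoy, Yves (1995). New and Selected Poems. Edited by John Naughton and Anthony Rudolf. University of Chicago Press." – close to, though not identical with, the CMoS layout above.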

I don't see anything in Template:Cite book to suggest that the examples cover all use cases. I would use something like |contribution=Annotation and |others=Annotated by William S. Baring-Gould, with a wikilink.
Is there a way to suppress the quotation marks for |contribution=Annotation? -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 13:26, 20 August 2025 (UTC)Reply
Interesting options ... using the "contribution" field could be a good approach. Noleander (talk) 13:31, 20 August 2025 (UTC)Reply
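
Putting those suggestions together, a minimal sketch of the Baring-Gould citation, credited to Doyle as author with the annotator in |others= (the parameter choices here are illustrative assumptions, not a documented house style), might look like:

{{cite book |last=Doyle |first=Arthur Conan |title=The Annotated Sherlock Holmes |others=Annotated by [[William S. Baring-Gould]] |location=New York |publisher=Clarkson N. Potter |year=1967 |isbn=0-517-50291-7}} <!-- |contribution=Annotation could be added per the suggestion above, though it renders in quotation marks -->

The same pattern would presumably carry over to the Cook journals, with Cook in |last= and the editor credited via something like |others=Edited by J. C. Beaglehole.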

MOS: prescriptive, descriptive, or both?


The Manual of Style varies in levels of consensus. In Wikipedia:Arbitration/Requests/Case/Article_titles_and_capitalisation_2 it was alleged of some parts of the MOS that "some of those guidelines have fewer watchers than my talk page, and are largely written by parties to this case" (see discussion). Meanwhile, CONLEVELS states:

Consensus among a limited group of editors, at one place and time, cannot override community consensus on a wider scale. For instance, unless they can convince the broader community that such action is right, participants in a WikiProject cannot decide that some generally accepted policy or guideline does not apply to articles within its scope.

I don't think it's unreasonable to conclude that while some parts of MOS are the result of consensus with significant participation, there may be other parts that are indeed consensus among a limited group of editors, at one place and time.

Also of note are the proposals by L235 that were not adopted as principles in that case. Specifically,

Policies and guidelines have a combination of prescriptive and descriptive characteristics. Policies and guidelines document community consensus as to "standards [that] all users should normally follow" (Wikipedia:Policies and guidelines), giving them some degree of prescriptive force. Simultaneously, policies and guidelines seek to describe "behaviors practiced by most editors" (Wikipedia:Policies and guidelines), and change with community practice, giving them a descriptive quality. Naturally, disagreements regarding the extent of a policy's consensus or prescriptive effect arise from this combination, and the text of a policy can sometimes diverge from or lag behind community consensus. These disagreements, like all disputes on Wikipedia, should be resolved by discussion and consensus.

Does MOS necessarily indicate community consensus on a wider scale? In other words, should closers examine the specific text for level of consensus before using it to overrule a (potentially larger) group of editors? Good day—RetroCosmos talk 01:45, 26 August 2025 (UTC)Reply

  • Comment WP:MOS says at the top "Editors should generally follow it, though exceptions may apply." Not sure anything constructive will come of this rfc, but time will tell. Gråbergs Gråa Sång (talk) 07:03, 26 August 2025 (UTC)Reply
  • I would agree with L235, and add that, ideally, policies and guidelines describe community consensus and prescribe that editors follow this consensus. Regarding the MoS, as a set of guidelines of varying scope, it is expected that not all of its pages will have the same level of consensus – a very specific topic will attract fewer interested editors, and thus naturally have a lower CONLEVEL. That in itself is not necessarily problematic. However, if it goes against a wider consensus, or only reflects a subset of the views of editors interested in that topic, then there is indeed a CONLEVEL issue and a broader discussion should be held. Chaotic Enby (talk · contribs) 15:31, 26 August 2025 (UTC)Reply
  • As a closer, I would not feel justified in going on an independent fact-finding mission to determine the level of consensus that supports a specific policy or guideline. I would support overturning closures that were based on such an independent mission. If participants in the discussion gave valid arguments based on their own analysis of the level of consensus, I would consider that when making my decision.
    To put it another way, I presume that guidelines and policies have a higher level of consensus than any local discussion. A mass of editors who disagree with a guideline should be directed toward venues where guideline change can happen, not a local discussion. Firefangledfeathers (talk / contribs) 15:53, 26 August 2025 (UTC)Reply
Consensus isn't only found by discussion, but also by use. Maybe four editors discussed a particular piece of policy or guidance, but many editors may follow it because they also support what has been said. If editors disagree with any particular piece of guidance, then they should start a centralised discussion in whatever forum would be appropriate.
So the answer to the specific question is probably "maybe", with discussion to be started on specifics as required. Certainly the MOS in its entirety has some level of wide-scale support, even if it's quite possible that not all of it does. -- LCU ActivelyDisinterested «@» °∆t° 12:42, 27 August 2025 (UTC)Reply

Promoting WP:GNIS to guideline


Wikipedia:Reliability of GNIS data has been around as an essay for over five years now, spelling out the issues we have found with using this database (and now its successor, the National Map) as an authority. The short version: nobody disputes its authority w.r.t. names of places (which is what it was intended for); it is usually good for locations, but there are some things which have to be taken into consideration; and its classification of places is problematic and cannot be relied on for several reasons, nor does it claim to be authoritative for that. In general, people working on these geography articles tend to accept this analysis, but many are unaware of it, and we get the occasional argument that since it's an essay it can simply be dismissed as such; in any case WP:GEOLAND spells out that "The Geographic Names Information System and the GEOnet Names Server do not satisfy the "legal recognition" requirement and are also unreliable for "populated place" designation."

I would like to propose elevating this essay to a guideline, with appropriate modifications. Mangoe (talk) 15:34, 26 August 2025 (UTC)Reply

Charts and colors


Several graphs have been migrated from mw:Extension:Graph to mw:Extension:Chart; the Graph extension is broken, hence the migration. Chart, however, does not support specifying particular colors. Sometimes these graphs have legends outside of the graph itself. I figured it is best to ask before this gets too common: is it OK to just change the legend to match the chart, or should the graph stay broken? Snævar (talk) 19:28, 26 August 2025 (UTC)Reply