Community Wishlist Survey 2023/Larger suggestions/Create a large language model that aligns with the Wikimedia movement
* {{support}} People will use LLMs regardless of what our policies say. We might as well make an LLM available that is specifically trained to recognise sources Wikipedia sees as reliable, and specifically trained on our policies. I doubt any general-purpose LLM will ever be "good" at Wikipedia-specific tasks, without being trained specifically for them. [[User:DFlhb|DFlhb]] ([[User talk:DFlhb|talk]]) 19:36, 16 February 2023 (UTC)
* {{support}} <span style="background:#202122;font-family:'Lato';padding:4px 3px 3px">[[User:VulcanSphere|<span style="color:#339966">Vulcan</span>]]<span style="color:#EC6208">❯❯❯</span>[[User talk:VulcanSphere|<span style="color:#45B1E8">Sphere!</span>]]</span> 16:39, 18 February 2023 (UTC)
* {{Weak oppose}} While it could be useful, the issue is that information coming from an AI obviously cannot be fully trusted. For some of these things, like source recommendations, the time it would take to verify the correctness of the AI's suggestions would be just as long as the time it would take to do it yourself, and for things like vandalism detection, that's not what language models can do. That's not what they're for. It's an interesting idea at its base, though: using AI to make lives easier. But putting it that way, isn't that what society as a whole is generally trying to do right now? [[User:Snowmanonahoe|Snowmanonahoe]] ([[User talk:Snowmanonahoe|talk]]) 19:36, 18 February 2023 (UTC)