{{pp-sock|small=yes}}
<noinclude>{{Short description|Page for discussing policies and guidelines}}{{Redirect|WP:VPP|proposals|Misplaced Pages:Village pump (proposals)}}{{village pump page header|Policy|alpha=yes|The '''policy''' section of the ] is intended for discussions about already-proposed ], as well as changes to existing ones. Discussions often begin on other pages and are subsequently moved or referenced here to ensure greater visibility and broader participation.
* If you wish to propose something ''new'' that is ''not'' a policy or guideline, use ]. Alternatively, for drafting with a more focused group, consider starting the discussion on the talk page of a relevant WikiProject, the Manual of Style, or another relevant project page.
* For questions about how to apply existing policies or guidelines, refer to one of the many ].
* If you want to inquire about what the policy is on a specific topic, visit the ] or the ].
* This is '''not the place to resolve disputes''' regarding the implementation of policies. For such cases, consult ].
* For proposals for new or amended speedy deletion criteria, use ].
Please see ''']''' for a list of frequently rejected or ignored proposals. Discussions are automatically archived after two weeks of inactivity.
|WP:VPP|WP:VPPOL}}__NEWSECTIONLINK__
{{centralized discussion|compact=yes}}
__TOC__<div id="below_toc"></div>
]
]
]
]
]
{{User:MiszaBot/config
|archiveheader = {{Misplaced Pages:Village pump/Archive header}}
|maxarchivesize = 400K
|counter = 199
|algo = old(10d)
|archive = Misplaced Pages:Village pump (policy)/Archive %(counter)d
}}</noinclude>
== Tor nodes == | |||
A discussion is in progress regarding adjusting ] with respect to Tor nodes. The discussion is ]. Regards, ]
== ] - time to get rid of it? == | |||
'''Moved from archive as it's premature to close this - future datestamp applied to make sure it isn't archived again''' - ''']''' <sup>(])</sup> 17:51, 26 January 2009 (UTC) | |||
Moving from ]... | |||
About two months ago, I listed Requests for user comment ] under the premise that it '''did not work''' and that it's basically a quagmire of personal attacks and a stepping stone to ArbCom. The consensus in the MfD, including that of the creator of the process and the MfD's closer, is that it doesn't really work 99.9% of the time and only exists because no other process does. Just get rid of it and reinstate ], as that actually did some good. ''']''' <sup>(])</sup> 17:51, 26 January 2008 (UTC)
:Sounds like a good idea. I personally preferred CSN to RFC/U. ] (]) 18:10, 26 January 2008 (UTC)
::I would support CSN provided there was a minimum time for comments (about 7 days). There should also be a maximum time for banning (1 year, same as ArbCom). ] (]) 18:14, 26 January 2008 (UTC) | |||
:CSN had teeth, RFC/U hardly any. CSN saw discussion and nuance, RFC/U sees ganging up and party-lines half the time. With the same provisos as R. Baley, except I'd prefer six months, it would be good to have it back. ] (]) 18:20, 26 January 2008 (UTC) | |||
:Maybe you could merge the two... CSN to me always seemed too arbitrary. Consensus could be declared in an hour or never... that kind of gives power to people who can generate a mob of "me too"s on demand. RFC is very structured but seldom goes anywhere. Is there any realistic way to have CSN but with a more normalized process, to give the accused a chance to reply, slow down the mob mentality, and reasonably assess consensus? --] 18:28, 26 January 2008 (UTC)
::Would it need a new name possibly? Also please note that CSN only closed ] and consensus might not have changed much since then. Also, a lot of things that "could" have gone there are instead now sent to ] or ], meaning they get a lot more traffic and stress put on them. ] (]) 18:29, 26 January 2008 (UTC)
::W.marsh, don't you think a minimum one-week period for each sanction discussion would help with the mob of "me-too"s? (Too much evidence has emerged lately of off-wiki co-ordination for us to discount that as a factor.) ] (]) 18:32, 26 January 2008 (UTC) | |||
:::A week sounds reasonable. If it's truly an emergency ] should apply, and if someone's transgressions don't seem blockworthy a week after the fact, then a ban was a bad idea to begin with. I'd also like to look at a waiting period before people start bolding words ('''ban''', '''don't ban''', etc.) maybe 48 hours of pure discussion without people taking definitive stands like in a vote. I think that would lead to better discussion, people tend to feel psychologically committed to a stance once they're locked in to it. --] 18:36, 26 January 2008 (UTC) | |||
::::At Arbcom they've decided to take the ambitious step of waiting (I believe 48 hours, but I can't remember) before voting on the proposed decision page. We could do something similar, discussion can take place for 2 days, but no proposed "remedies" (ban, topic ban, etc.) could be offered until 48 hours after a new complaint had been certified (maybe not "certified," just following the initial complaint --basically enforce 2 days of discussion before any talk of "banning"). ] (]) 18:44, 26 January 2008 (UTC) | |||
RFC works when it's used for asking for comments; it does not work when sanctions are sought, but that is not its purpose. The CSN should be brought back and RFC kept and used for its intended purpose. <span style="font-family: verdana;"> — ] • ] • </span> 20:09, 26 January 2008 (UTC)
:The Community Sanction Noticeboard had its own problems, though I'm not sure that it needed to be eliminated. Part of the problem is that dispute resolution mechanisms seem to come and go - Mediation went away, and now it's back under a new name, the CSN came and went, ANI seems to alter its mission every so often. I see three main problems with RFC/U: it is not empowered to sanction, it's intended to reduce the burden on ANI, and it's a mandatory step before going to ArbCom, which can sanction. The solution I see is to 1) bounce more stuff, both from RFC/U and ANI, to Mediation (wherever it's living right now), 2) have some level of sanction available at RFC/U, which would probably require administrator patrolling, and 3) allow admins to move complicated cases off ANI to RFC/U. Perhaps a name change would be in order - instead of "Request for Comment/User Conduct", it could become "Administrators' Noticeboard: Ongoing Problems" (to distinguish it from AN:'''Incidents'''). Making it part of the Administrators' Noticeboard would mean that sanctions would be available and it would be an appropriate preliminary step to ArbCom. It would also reduce the load at ANI, where probably half the volume of discussion is on complicated, drawn-out issues, even though those are fewer than 10% of the actual incidents reported. Community Sanctions would all get moved to AN/OP, also. As part of the AN cluster, AN/OP would be fairly highly visible. <span style="font-family:serif;font-size:120%">''']''' ]</span> 20:37, 26 January 2008 (UTC)
:I'm opposed to this. Many of our processes suffer from a lynching mentality, and RFC is as bad as some of them, but it does serve a purpose. I really do not see a return to the votes for lynching that CSN turned into as a viable alternative. If we are to replace this process, we need some other way to garner community feedback on problematical or disputed editor behaviour, and a noticeboard doesn't seem the way forward. ] <sup>'']''</sup> 22:22, 26 January 2008 (UTC)
:: I concur with Rlevse's and Spartaz's comments. --] 00:39, 27 January 2008 (UTC) | |||
Both W.marsh and Spartaz voice important concerns. The CSN was split off from ANI, and then was merged back into ANI after only 8 months. I think ANI, with its high visibility and traffic, is the proper place for most such discussions. The ] is very instructive as to the potential problems that must be kept in mind. I oppose any page dedicated exclusively to "sanctions," as well as any form of voting for a ban. | |||
Getting back to RFC/U, I think its purpose and its place within the DR process should be better defined. The list of DR options ] is rather bewildering, and does not indicate (what I see as) RFC/U's status as a second-tier DR forum for problems that have proven intractable in the first-tier forums. The third tier, of course, is Arbcom. | |||
There is a grave problem when people see DR as a list of hoops that must be jumped through before you can ban someone. Emphasis should be placed on restoring relationships and on helping problematic editors to become better ones. Note that I am not talking about obvious trolls, who should be dealt with easily enough in the first-tier DR forums. To me, the purpose of the first-tier forums is to have one or two experienced editors tell a problematic editor that he/she is behaving problematically and should change. At this point, the case may be obvious enough that a block or ban would be appropriate. The purpose of RFC/U is then for the larger community to communicate that same message. If the problematic behavior continues, then an admin can enact a community ban, and the tougher cases can go to Arbcom. If I am out in left field on this, then tell me so or ignore me. If not, then the DR guidelines should be a lot more clear that this is the case. --] (]/]) 05:09, 27 January 2008 (UTC) | |||
:It would be good if it worked that way, but the practice is less harmonious. The process seems to escalate conflict rather than diminish it. I don't, however, know what to substitute for it. CSN was seen as a kangaroo court, so that too had problems. ''']''' (]) 09:11, 27 January 2008 (UTC)
::Practice does not need to be harmonious. I'm not so naive as to think that a large fraction of people are actually focused on "restoring relationships" etc. But I'd settle for orderly. --] (]/]) 01:29, 28 January 2008 (UTC) | |||
The problem I have noticed in the few RFC/U's I've seen (as an outsider) is that there is very little in the way of objective evidence. It usually ends up in IDONTLIKEHIM comments, or sometimes people siding with the nominator they like or the defendant they like, or even lining up with the POV they like.
Any complaint, whether it is in an RFC/U or an AN/I or a proposed AN/OP, should have specific charges based on policy or guidelines and specific diffs to support the charge, and diffs to demonstrate attempts to resolve the problem. A user who behaves badly should be warned every time the problem is noticed. Just as we warn against vandalism, we should warn about NPA, incivility, etc. (If we had more warning templates, users might issue warnings more often.) If we warned users more often we might see fewer problems. If problems persist, then the warnings will provide the evidence to justify blocks. | |||
AIV is not contentious because there is a visible history of escalating warnings to demonstrate the problem, to demonstrate attempts to resolve the problem, and to justify the length of a block. 3RR is not contentious because diffs provide objective evidence of bad behavior. RFC/U, AN/I, CSN almost always are (were) contentious because there is usually no objective evidence to demonstrate the problem and attempts to resolve the problem. I think that RFC/U would be more effective if it required specific charges of violated guidelines, specific diffs to support the charges, and specific diffs to demonstrate attempts to resolve the problem. | |||
I was just about to make these suggestions about specificity over at WT:RFC when I saw the link to this discussion. I might still suggest it over there to try to improve the process while waiting to see if a consensus develops over here to eliminate or replace the process. I'm also thinking of starting a new section over here to suggest that we should issue warnings for bad behavior much more often. I have seen a lot of incivility go unwarned. If we had escalating templates for warnings, editors might use them more often. ] (]) 02:41, 28 January 2008 (UTC) | |||
IMHO, RfC on User Conduct should be used to elicit wider community involvement in the background of the situation, instead of the superficial cat-calling that we stumble across in article-talk and user-space. I frequently accidentally wander into a vicious debate, simply because I visit a lot of pages. The RfC/U, posted to the article-talk and user-talk pages of '''both''' the RfC presenter and the subject, would allow for impartial input, which should continue for a '''minimum''' of three days there. Then, as mentioned above, the subject can be given some '''breathing room''' in which to evaluate improvement or at least detachment. After '''sufficient''' time, if an editor feels that anti-project editing still exists, '''then''' it would be appropriate to escalate to CSN and allow at least 3 further days for responses to be gathered. So, in a nutshell: RfC/U as a precursor to CSN and a necessary part of DR.] (]) 02:59, 28 January 2008 (UTC)
:The problem with ANY system of open community comment on another editor's actions, regardless of which Wiki-acronym you attach to it, is that it is always open to sniping and abuse (once someone's name shows up there, everyone they have ever pissed off gangs up on them). The question is whether such abuse is to be tolerated in order to have a system whereby the community can comment on user behavior. You can't have a system in place that is immune to this kind of abuse, but neither should you throw out the baby with the bathwater... --].].] 06:54, 28 January 2008 (UTC)
I am strongly in favor of the ] system. It isn't good at seeking punishments for past bad behavior, but that's partly because sanctions are ''preventive'', not punitive -- the point is, sanctions should be applied when bad behavior ''continues,'' rather than because it existed. RFCs are good for that -- if a user pushes POV, for instance, and it becomes well-established that this is the case in an RFC, and they ''continue'' to do it, sanctions can be safely applied. RFCs sometimes get out of control, but that's actually a good thing -- think of it as water in the mountains; it needs to come downhill somewhere. ] is a good way of handling that release of tensions because of the way its rules keep editors from commenting back and forth, which tends to build tension. Plus, they have a good way of adding lots of uninvolved editors to the mix, which distributes the energy. ]]<sup>]</sup> 15:49, 28 January 2008 (UTC)
I don't really know what to think. The Misplaced Pages community hasn't shown itself to be any more trustworthy than the Misplaced Pages admins. Both increasing and decreasing admin accountability or things like RFC/U seem counterintuitive. Making it more strict allows people to witch-hunt users and admins they don't like. Making it more lax allows trolls and corrupt admins to do whatever they want. The problem is that so many Misplaced Pages editors have zero regard for reason. ''That'' needs to be addressed first, I think. <font size="4">]</font> <font face="impact"> ]</font> (]) 11:35, 5 February 2008 (UTC)
RFC works (as stated above) when it's used for asking for comments on behavioral issues of a user or users; it does not work when used for witch-hunts, lynchings, public floggings, personal attacks, bitterness, and character assassinations. Since this process does seem to escalate some conflicts rather than diminish them, perhaps modifying the guidelines within the process is needed as opposed to removal. Without RfC/U, the only formal steps in dispute resolution that focus on editors are AN/I and ArbCom. Conversely, AN/I could serve as an appropriate venue and does provide wide community involvement on issues (appropriately, a modified format would be needed on AN/I to replace RfC/U). Processes exist to serve a purpose; I believe this one does, but some reform may be needed to improve it.--] (]) 13:18, 5 February 2008 (UTC)
If you thought RFC is terrible, CSN was horrendous. I don't ever want to see anything like that back on wikipedia ever again. But if I do, I shall certainly crucify the inventor using their own process. ;-) --] (]) 15:48, 5 March 2008 (UTC) | |||
===How to guide=== | |||
I think RFC is a good way to gather evidence and gauge community sentiments. If an RFC/U convinces an editor to cease causing problems, that is a good result. If they continue, a note can be posted at ANI requesting a community remedy, such as an ] or ban, with a link to the RFC/U. If there is no consensus at ANI, the case can go to ArbCom, and again, a link to the RFC/U provides much of the necessary evidence. The processes work when people use them correctly. ] <sup>]</sup> 14:05, 5 February 2008 (UTC) | |||
:To be honest, if we want RFCU to ever work, we need more admin intervention - Anittas was indeffed a second time in October. The attack he was blocked for was on RFCU for twelve days, but nothing happened until ANI got wind of it. ''']''' <sup>(])</sup> 00:50, 19 February 2008 (UTC)
== SPCA, International == | |||
Eep! Forgot this was policy. Moving to ]. ]<sub>]]</sub> 2008 February 20, 20:01 (UTC) | |||
== WP:NOT#MYSPACE == | |||
*] | |||
*] | |||
I would like to start a centralised discussion about this issue. I know that people often comment to the effect of "harmless", "builds community spirit" etc. I personally think that it's crap and should be deleted, with the positive side effect of possibly alienating one or two idiots who are only here to play the hidden page/link game or maintain their guestbooks. And I ''do'' think this is really one big issue. And awarding barnstars for such stuff is just outrageous. Imo. Comments? ] (]) 17:35, ], 2008
:Agreed, both look like good candidates for speedy deletes. ] (]) 17:47, 5 March 2008 (UTC) | |||
:I've been mildly disgusted by these for a while. Misplaced Pages isn't a game. ] (]) 18:20, 5 March 2008 (UTC) | |||
::I've moved the second template to ], but neither is even remotely close to meeting any speedy deletion criterion. <font face="Broadway">]'']</font>'' 18:22, 5 March 2008 (UTC)
:::The first one should also be userfied. --] <sup>]</sup> 18:24, 5 March 2008 (UTC) | |||
:::If the social material promotes a sense of well-being and community spirit which fosters article writing, then I am all for it. Not sure; have any of the folks who've given these out been ones who do article writing? ] (] '''·''' ]) 06:42, 8 March 2008 (UTC)
===Proposed solution: Create class of users that can have social networking content=== | |||
Here is my proposed solution to this issue. Establish two classes of users: | |||
*Class 1: Your userpage is restricted to Misplaced Pages-related content; no MySpace-style social networking or blogging-type stuff allowed. But you get no advertisements.
*Class 2: You get a quota (e.g. 10 MB of space) to have all your images, subpages, etc. and you can do pretty much whatever you want (except copyvios, personal attacks, etc.) but any non-Misplaced Pages-related subpages will need to have Google-style text-based ] on them. This will provide revenue to support traffic to these pages. We might even have a separate namespace for this type of content. | |||
Everyone would start out in Class 1. You can upgrade to Class 2 at any time. To go back to Class 1, you need to get rid of your social networking stuff first. Actually, now that I think about it, we probably don't even need to have classes – just have a rule that any social networking-type subpages need to have the ads. | |||
I'm sure we can find a compromise that accommodates everyone. ] (]) 21:51, 8 March 2008 (UTC) | |||
* Why would we do that? The single largest cost to the Misplaced Pages Foundation is technical - server resources and bandwidth. The community rejected advertising ages ago. If you want a MySpace profile, then why not try MySpace? <b>]</b> <small>(])</small> 14:25, 12 March 2008 (UTC)
*'''Hell no''', for the exact same reasoning as Guy. Misplaced Pages is an encyclopedia. It is not a social networking site. Go register on Facebook if you want to network socially. ]] 14:28, 12 March 2008 (UTC)
*"Social networking" of a sort is inevitable, but it should ultimately be about wikipedia. In short, it should be the same rules as now. There's already a fair amount of latitude in personalizing user pages. Status quo seems to work fine. ] <sup>'']''</sup> 14:34, 12 March 2008 (UTC) | |||
* I see no need for encyclopedia users to downgrade to Class 2. There are other sites for that. If WF wants to create a separate site as a fundraising tool, that's fine and good luck. -- ] (]) 14:37, 12 March 2008 (UTC)
* Completely undermines ] - It would need to be fundamentally re-written. ] <sub>(] / ])</sub> 19:36, 13 March 2008 (UTC) | |||
'''oppose''' - this is an encyclopaedia; anyone who fails to understand this after a couple of polite warnings should be asked to leave. Why on earth would we voluntarily want to fill up our servers with that type of crap? --] (]) 19:41, 13 March 2008 (UTC)
'''Reply''' I agree that ] would need to be fundamentally rewritten, and I advocate that it be. And I'll use an example to illustrate why community-building content is important. Some people might say also that ] is a waste of server resources. But guess what, while I'm watching my watchlist to see if my opponent has moved, I'm also checking everything else, and if someone vandalizes one of my pages, I'll spot it. Or if someone responds to a discussion, I can reply to them, and we make progress faster. And rapport is built with other users, which in many cases leads to collaboration on encyclopedic subjects. So indirectly, the chess improves the quality of the encyclopedia. If I were over at Yahoo Chess doing that, then Misplaced Pages would not be getting those benefits. ] (]) 22:17, 13 March 2008 (UTC) | |||
'''Oppose''' Absolutely not. This is an encyclopedia. ]''']''' 20:13, 15 March 2008 (UTC) | |||
: I think Misplaced Pages has enough participation and man-hours already, and our priority is no longer to increase those numbers. But even disregarding that, the goal is to get and keep the right kind of people. If people come here so that they can play chess, there's a big question mark as to whether or not they'll ever be interested in writing an encyclopedia -- they could just as well play chess all day and waste resources. Whereas if we're purely just a big ol' boring encyclopedia, and people still come here nevertheless, the chances that our participants are interested in contributing increase significantly. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''22:23, 13 Mar 2008 (UTC)''</small> | |||
Honestly, I'm not at all clear on what problem this is intended to solve. Frankly, I think there's already a consensus that active, productive editors are allowed some leeway on the NOTMYSPACE thing, so I don't see a problem there. I don't think we want unproductive editors using Misplaced Pages as MySpace, whether or not there are associated ad revenues. Where's the problem? ] (]) 22:28, 13 March 2008 (UTC) | |||
:What about what happened to Vintei's shop? That guy was active and productive. Some people might specialize in the fun stuff and be a positive community-building influence. Kinda like how in our society, we have circus performers. Some people might say, Hey, these guys are not doing something productive. Oh, but they are. People who are employed doing other stuff can enjoy the carnival, and in fact the ability to spend money on fun stuff is part of the incentive to work. | |||
:But obviously people will say, "Well, if a user wants to only specialize in fun (community-building) stuff here, then take it off wiki." That's kinda like state governments that say, "We know we can't stop our citizens from gambling, but nonetheless, we don't want the casinos within our state lines." As long as they're going to do it anyway, wouldn't you want to be the ones to collect the tax revenues? If Vintei's stuff is going to make people happier here, and want to hang out more on Misplaced Pages, checking their watchlists and whatnot, then awesome. It doesn't matter whether he himself gets involved in building articles, etc.; he helps that happen indirectly. | |||
:We thrashed the community-building issues out on the ] and many other places. But I think at this point, I've run out of arguments because it's just a battle of ] vs. ] (as it currently stands) and we know which one is the trump card. By the way, I think the whole concept of "Misplaced Pages is an ENCYCLOPEDIA, NOT... (long string of things)" is somewhat fallacious because one could also argue, "Misplaced Pages is an ENCYCLOPEDIA, NOT a place for discussing encyclopedia policy," and vote to delete Village Pump. One might argue, "But the Village Pump, while it itself is not part of the encyclopedia, and attracts editor-hours that might otherwise be spent creating articles, it also indirectly helps the community and thus the article-creation process." Exactly – and that same argument could be used for keeping a lot of the MySpace-type stuff. ] (]) 23:12, 13 March 2008 (UTC) | |||
:: You're mistaking Misplaced Pages for the real world, or a society unto itself, in need of representation from all facets of an actual society, which we're not. We have a focused concern that is itself a part of the world. If a bunch of encyclopedia writers showed up at the circus tent demanding equal time, they'd likewise be told to get lost, 'cause that's not what circuses are for. We have our role and they have theirs. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''23:17, 13 Mar 2008 (UTC)''</small> | |||
:::You may be missing my point. Here's an example that might be more apropos. I had a teacher who said that when she was in the corporate world, they appointed her the Vice President of Fun. Her job, apparently, was to figure out community-building stuff for the company to do. One might argue that that type of position is worthless; why not dispense with it? Apparently, they found it useful enough to keep. Similarly, we might have some users whose role is just to work on community-building stuff. If it helps bring in/retain editors, then it can be just as useful as someone who does stub-sorting, FA reviews, etc. We need all these specialists, including those who specialize in fun stuff. 1 Corinthians 12:17: "If the whole body were an eye, where were the hearing? If the whole were hearing, where were the smelling?" ] (]) 01:50, 14 March 2008 (UTC)
::::I'm not missing your point. I'm disagreeing with you, like everyone else here. You need to re-read my reply from earlier above, that starts with "I think Misplaced Pages has enough participation and man-hours already, and our priority is no longer to increase those numbers..." as that paragraph answers these points you're making again for the second or third time. I completely understand your reasoning. I'm just saying you're wrong. This would not help us. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''02:08, 14 Mar 2008 (UTC)''</small>
:::::OK, just making sure. The idea that Misplaced Pages has "enough" participation and man-hours is laughable. If there were, there wouldn't be gaping voids in the encyclopedia's coverage. It's like saying that a company makes "enough" money or a charity feeds "enough" hungry people. That's only true if you reduce the scope of what you want to accomplish and arbitrarily set the bar at something less than its full potential. We haven't even covered the ] sufficiently. And guess what, it's the community's fault for driving people away with its wrongheaded, counterproductive philosophies that are often the antithesis of ] and ]. It's rather telling that both of those are essays at this point, rather than guidelines. They represent a minority view, unfortunately. ] (]) 16:07, 14 March 2008 (UTC)
::::::The idea that Misplaced Pages has enough man-hours is entirely justified and not contradicted by gaps in coverage. The gaps are generally the more uninteresting topics. The popular ones are always covered, and more people does not equal coverage of the uninteresting or unpopular -- because these are still people, and if you tell people they're free to work on whatever they want to, they most assuredly won't pick the boring stuff. So your logic is pretty laughable there. And guess what, it's not anyone's "fault" but Jimbo's for creating an encyclopedia written freely by people. And that's if it indeed is a "fault", which it isn't. Your conclusions are contrived according to the point you're trying to make, which masks any actual merit your point might have. Which is a shame, because it may have some. You're just not making any sense. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''10:07, 15 Mar 2008 (UTC)''</small> | |||
:::::::Could it be that certain subjects that people are interested in aren't necessarily ones that attract people to write on them for no remuneration? To take another real world example, there are some people who like to both play computer games ''and'' write the code; in fact, a lot of people will do it for free, as evidenced by all the freeware out there. And then you have fields like fast food, which a lot of people want to eat; but few people will volunteer to serve behind the counter at Burger World for no pay. Some of those vital articles are like that. The need for them has been recognized and they've been on the list for awhile (much like Top-Importance articles in certain WikiProjects that have made little progress), but most people don't feel like working on them. Now, if you throw a little compensation in there, maybe. ] (]) 22:37, 15 March 2008 (UTC)
::::::::It's a little far-fetched to think people will consider themselves compensated. The scenario would have to be one where, according to your model, people are attracted to the site possibly for the social aspect, and ''then'' see articles (on boring things) that need attention -- and they, what, start working on them because they figure Misplaced Pages does so much for them already, allowing myspace content and all, so they feel compelled to write about those topics? Like, compensation before the fact? It's not very likely that would be much of a motivator. Except where a legal obligation exists, people generally work to get compensation, they don't work because they've already been compensated. Not that the allowance of myspace content would even be considered any kind of reward to anyone, no matter when in this scheme they would receive it. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''01:56, 16 Mar 2008 (UTC)''</small>
This is sort of already done. See: ]. They use adverts to cover the costs. --] (]) 22:26, 13 March 2008 (UTC) | |||
:: No, they use ads to generate revenue and ''make a profit'' - the intention in no way, shape or form is to 'cover costs'. --] (]) 22:57, 13 March 2008 (UTC)
::: I admit I'm very green. I've only been running a company for approximately a year now. Making a profit doesn't <s>cover your costs</s> imply your costs are already covered? --] (]) 01:17, 14 March 2008 (UTC) <small>''I may need to call my accountant again...''</small> | |||
:::: Profit is what's left after you've covered your costs. So you're both wrong. Ads cover the costs ''and'' make them a profit. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''01:23, 14 Mar 2008 (UTC)''</small> | |||
::::: Oops. A bit too fast with the humorous reply there... --] (]) 01:54, 14 March 2008 (UTC) | |||
::::::Got to watch out for that "sense of humor," Kim. How are others to know you aren't mocking them? Making fun of their serious comments. Sarcastically demeaning their lack of real business experience. Etc. Of course, you wouldn't do that. Sophisticated incivility: hold up a mirror. "Trolling," it will be called.--] (]) 02:12, 14 March 2008 (UTC) | |||
::::::: Mmmm I didn't find Kim's comment trolly... I think most people knew it was humor... lighten up dude... <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''02:37, 14 Mar 2008 (UTC)''</small> | |||
::::::: Kinda backfired, didn't it? :-P Oh well, live and learn -- ] 02:52, 14 March 2008 (UTC) | |||
:::::::: sorry I've only run for-profit business for over 40 years, so I'm afraid I miss gags from people who've only done it for less than five years (or bankrupts as we call them in the business) ;-) --<span style="font-family: Mistral">]</span> 02:00, 16 March 2008 (UTC) | |||
::::::::: Yeah everyone who's only been running a business for less than 5 years must be bankrupt. That makes sense. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''03:27, 16 Mar 2008 (UTC)''</small> | |||
:::::::::: It was a joke - most SMEs go bust within 5 years (according to the stats in my country). --<span style="font-family: Mistral">]</span> 11:36, 16 March 2008 (UTC)
:Okay, this is supposed to be a solution, but... what's the problem that it's trying to solve? Also, ads on a handful of userpages wouldn't make much money, especially since they'd probably be fairly random. And 10MB of space is fairly pointless; even if the final total is 10MB, there might be 1000MB in old revisions to get to the final pages. <font face="Broadway">]'']</font>'' 02:49, 14 March 2008 (UTC)
::The web hosting costs for myspace-type content on people's userpages are also negligible, but I think people object to hosting it because of the principle of the thing. The advertising is more symbolic than anything. I'm just throwing out ideas in an attempt to reconcile the two concerns of not wanting to be people's free web host, and people wanting to have that content. Do you have any ideas, or is it going to be that old standby, "What we have now works fine"? That seems to be the "rough consensus" of those who haven't already left in disgust over how things work here. ] (]) 16:07, 14 March 2008 (UTC)
:::This is an encyclopedia. Aside from the resource consumption issue, the reason for not allowing myspace-type content is that we don't want to encourage a change in focus. People serious about writing an encyclopedia may not want to deal with people who come here to chat with their friends and play games -- and it's the serious people who will be the most valuable toward our goal. Again, if we know everyone who comes here is coming as a result of seeing purely an encyclopedia, then we know to some degree certain things about what they'll be doing here. Your "manager of fun" example doesn't apply to Misplaced Pages -- corporate employees are stuck at the workplace all day and benefit from having fun things specifically made available to them through the company. Misplaced Pages is a website you access from wherever you happen to be for as long as you want. You're not "stuck" within the Misplaced Pages website for an 8-hour day, and if nothing fun happens on Misplaced Pages, you simply have no fun for 8 hours. If Wikipedians want to participate in myspace-type activities while at Misplaced Pages, guess what? They can. Simply open two browsers. You seem to think there are only two types of users, those who "left in disgust" and those who believe in keeping things as archaic as possible. That's pure conjecture. You're inventing a "problem" that needs to be "solved" based on the disgruntled people who left. For any given institution, especially one as large as Misplaced Pages, there will be plenty of ex-members who feel that their departure was caused by something being "wrong". Why would you base your attitude on them? They're not more objective just because they're now outside the system. In fact they're less so, because they ''were'' members, and only the ones who've had bad experiences. You're basically coming at this from an entirely unbalanced perspective. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''09:58, 15 Mar 2008 (UTC)''</small>
:::The community is opposed to myspace-y content and ads, so the solution to a problem invented by people who are no longer here is to combine them? Ads on a few userpages would still have a negative effect on the part of the community that is opposed to ads, with minimal benefit (revenue). <font face="Broadway">]'']</font>'' 22:54, 15 March 2008 (UTC) | |||
=== Why do we prohibit making friends again? === | |||
In the course of being so vehemently against "the myspaceification of Misplaced Pages", it seems as though we're continuing to depersonalize editors and perpetuating the bitter disputes and arguments that plague the project. I don't see how it's harmful for users to publish information about themselves and their likes/dislikes, or have conversations with each other that (god forbid) don't relate directly to the project. Can someone explain this one to me? | |||
Feel free to block me or report me to the Arbitration Committee for even suggesting it, but I think it might actually be beneficial to the project if it were set up as a social network. Someone gets pissed off at you while working on an article together, visits your talk page to chew you out, and then realizes you actually have some things in common. You're not such a bad guy after all, and they end up leaving a relatively friendly message instead of a "civil" one. I don't know. I just think it would help to defuse the constant tension that surrounds editing if Misplaced Pages were more... friendly. | |||
(Also, I highly doubt this has anything to do with server load, and everything to do with comments like Dorftrottel's. As I understand, 99% of the server load is serving cached pages to unregistered users. Generating pages from scratch for logged-in users takes a lot more server resources per page, but we do it anyway because the amount of "registered user content" is much smaller than the amount of data being sent to unregistered users. Some graphs would be helpful here.) — ] 04:40, 16 March 2008 (UTC) | |||
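As a purely illustrative aside (this is not MediaWiki's actual code, just a toy model of the caching split described above, with hypothetical function names), the difference between serving anonymous and logged-in readers might be sketched like this:
<syntaxhighlight lang="python">
# Toy model only: anonymous readers share one cached rendering per page,
# while logged-in readers get a fresh, per-user rendering on every request.
rendered_cache = {}  # title -> pre-rendered HTML shared by all anonymous readers

def render_from_scratch(title, user=None):
    """Stand-in for the expensive step: parsing wikitext, applying user prefs, etc."""
    skin = user["skin"] if user else "default"
    return f"<html><!-- {title} rendered with skin={skin} --></html>"

def serve_page(title, user=None):
    if user is None:  # anonymous reader: cheap cache hit after the first view
        if title not in rendered_cache:
            rendered_cache[title] = render_from_scratch(title)
        return rendered_cache[title]
    # logged-in reader: per-user preferences make the shared copy unusable,
    # so the page is rendered from scratch each time
    return render_from_scratch(title, user)

serve_page("Village pump (policy)")                              # pays the rendering cost
serve_page("Village pump (policy)")                              # served from cache
serve_page("Village pump (policy)", user={"skin": "monobook"})   # rendered per user
</syntaxhighlight>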
:I think you're mistaking "Don't spend all your time editing your user space" with "Don't edit your user space". Lots of editors choose to have some information about themselves on their user pages, but if an editor is spending the majority of the time on social interests as opposed to project building, that's a concern. ] <sup>]</sup> 04:49, 16 March 2008 (UTC) | |||
::This is a volunteer effort. It doesn't have to be MySpace, but if Misplaced Pages editing and other encyclopedia-maintaining activities become a grind, with no rewards or human interaction to be found, then it isn't ]. If we lose the human aspect, we will lose (as we have lost) editors. See also ]. I nearly quit after a couple of weeks because I was witnessing a lot of acrimony. There didn't seem to be enough positive reinforcement and pleasant interaction with other human beings to mitigate the animosity and petty bickering I was seeing between other editors. ] (]) 05:05, 16 March 2008 (UTC) | |||
:::Once again, see Shell's response. You're allowed to interact and have fun. You just can't spend all your time in userspace. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''12:17, 16 Mar 2008 (UTC)''</small> | |||
I have nominated the template for deletion, since the conversation here seems to have wandered somewhere else] --] (]) 12:45, 19 March 2008 (UTC) | |||
== “The battle for Misplaced Pages's soul” == | |||
*http://www.economist.com/printedition/displaystory.cfm?story_id=10789354 | |||
{{cquote|Misplaced Pages is facing an identity crisis as it is torn between two alternative futures. It can either strive to encompass every aspect of human knowledge, no matter how trivial; or it can adopt a more stringent editorial policy and ban articles on trivial subjects, in the hope that this will enhance its reputation as a trustworthy and credible reference source. These two conflicting visions are at the heart of a bitter struggle inside Misplaced Pages between “inclusionists”, who believe that applying strict editorial criteria will dampen contributors' enthusiasm for the project, and “deletionists” who argue that Misplaced Pages should be more cautious and selective about its entries.}} | |||
===Edit point=== | |||
I think it is time we decide which way to go. There have been many failed attempts to address this, but they all failed due to their partisan or limited nature. Generally speaking which way does the community want to go? --<small> ]</small> <sup>]</sup> 03:36, 7 March 2008 (UTC) | |||
:We want to evaluate each case separately. ] (]) 03:40, 7 March 2008 (UTC) | |||
:I think the community should read '']''. ] (]) 03:48, 7 March 2008 (UTC) | |||
I just read the article. I value encyclopedias for their educational value, but tend to take a classical view of education. That is, I view it as a process not only of informing, but of intellectual improvement. Encyclopedias are of no value if they do not produce valuable and insightful information. The ''Economist'' gave the example of Solidarity leaders and ''Pokémon'' characters. I take the view that we should have entries on all Solidarity leaders, but no entries on ''Pokémon'' characters (just the show itself). Some works of literature and cinema do have value because they sometimes provide insight through fictional symbolism. They also at times produce social change. ''Pokémon,'' on the other hand, is a meaningless children's show with no educational value. I understand that this is a dangerous contradiction, though. I have seen many insightful and notable entries nominated for deletion simply because they were too foreign to the nominator. They appeared not to be notable. So I think we should state clearly that subjects with educational and intellectual value are always notable and shallow subjects are not.--] (]) 03:57, 7 March 2008 (UTC) | |||
:Passing judgment on what's "shallow" and what's "intellectual" doesn't strike me as very ]. At least "notability" is something that one can attempt to objectively define, in terms of it being something that a lot of people are interested in (even if it's shallow), but trying to decide what has intellectual merit... very subjective. ] (]) 04:01, 7 March 2008 (UTC) | |||
::Exactly what I was going to say. There is far, far too much subjectivity involved in determining what has educational and intellectual value. And while I would personally agree on the lack of value to me of a Pokemon character, at the same time, an article such as ] might be seen as having no value to a Pokemon fan where it has a great deal to me. In such a case, who is right? Ultimately, to respond to White Cat's question we have places like Conservapedia for the limited "educational scope", and wikia for all things "trivial". Misplaced Pages has sailed down the middle of the two alternatives for some time now, and I don't see the harm in continuing on this course. ]] 04:07, 7 March 2008 (UTC) | |||
:So, ], are you actually suggesting that we need to make a general, high level decision about whether we are "inclusionist" or "exclusionist"? What possible purpose would that serve? ] (]) 04:04, 7 March 2008 (UTC) | |||
::Awareshift's idea strikes me as somewhat unfeasible and unrealistic, largely because what does possess educational and intellectual value to one person does not to another. I personally would say that ] possesses such value (because of its reading level and (depending on DM) ]), but, even assuming good faith towards him, he would likely think otherwise based on the fact it has movies and video games. Seriously, when was the last video game where you were forced to divide by the cosine of ''x''? Remember, Misplaced Pages is for a '''layman's audience'''. It isn't for profs at the ] trying to make foot warmers out of nosehairs. -<font color="black">'']'' <sup>(] ])</sup></font> 04:11, 7 March 2008 (UTC)
::::Those young people you speak of should visit ''Misplaced Pages'' to study math or history instead of killing time. I imagine that reading about Dungeons and Dragons too often will actually hurt your performance in school.--] (]) 04:38, 7 March 2008 (UTC)
:::::I hate to be the bearer of bad news, Awareshift, but if they study them, there's a very good chance those articles are suddenly going to be plastered with the word "WANKER" or "VAGINA" over and over again, thus nullifying their educational value for a short time. A lot of kids don't ''want'' to study; they'd rather have fun, and if it means replacing ] with a picture of ] masturbating, so be it. -<font color="black">'']'' <sup>(] ])</sup></font> 04:50, 7 March 2008 (UTC) | |||
::::::I know they study them. I don't think that they ''should,'' but they do.--] (]) 05:18, 7 March 2008 (UTC) | |||
:::::::Are we talking about the same articles? -<font color="black">'']'' <sup>(] ])</sup></font> 05:23, 7 March 2008 (UTC) | |||
:::(ec)Footwarmers out of nosehairs? What class do they teach that in? --<span style="border:1px solid #63B8FF; font-weight:bold; color:#23238E; background-color:#D0E7FF;"> ] <sub> ] ] </sub> </span> 04:17, 7 March 2008 (UTC) | |||
:::Furthermore, "educational and intellectual value" are a matter of how a subject is covered, not of what the subject is. Most universities (American ones, at least) have cultural studies courses that explore "shallow" pop culture, because shallow or not it's significant and it's illustrative, and we help ourselves more by understanding it than by ignoring it out of some kind of misguided belief in a separation between high and low culture. ] (]) 04:15, 7 March 2008 (UTC) | |||
::::I disagree. The distinction I was making was not between high or low culture. It was between meaningless and meaningful as well as between influential and weak subjects. I have no bias against anything new or popular, so long as learning about it is truly educational. So, try as you might, I doubt that you would be able to produce an article about ''Pokémon'' that would be worth reading intellectually.--] (]) 04:38, 7 March 2008 (UTC) | |||
:::::I think we ]. We'd honestly be less likely to have an intellectual article about, say, '']'' because of outside influences. I hate to say this, but in this case at this point in time, ] beats out ] for intellectual read. | |||
:::::It is because of these external influences that we can never have intellectual articles of some subjects, say ] or ]. Should we delete them because nationalists are using Misplaced Pages as a battleground, or should we keep them and invalidate your very point? -<font color="black">'']'' <sup>(] ])</sup></font> 04:50, 7 March 2008 (UTC) | |||
::::::I think just learning the facts about Israel is enough to provide insight and learn lessons from history as well as the present. It would be even more insightful if we allowed analysis like ''Encyclopaedia Britannica'' does, but facts are good, too. You claim that the entry "Pokémon" teaches readers important lessons. What lessons did you learn from reading it that help you understand life? In other words, how did reading it make you a more intelligent person?--] (]) 05:18, 7 March 2008 (UTC) | |||
:::::::Well, reading it taught me that you seem to like ] more. Seriously, though, your example is a bad one because, as I have stated, that set of articles (Israel/Palestine) is a cultural hotbed and tends to be skewed, and I do not believe a skewed view of a conflict helps '''anyone'''. As for the ''Pokémon'' article, I seem to have gotten the mistaken impression you were talking about challenging reading, not programming the next set of robots. -<font color="black">'']'' <sup>(] ])</sup></font> 05:23, 7 March 2008 (UTC) | |||
::::::::No, I was talking about educational reading, which may be challenging or not. Intelligence is a function of both knowledge and the ability to understand new things (in my opinion). Learning about Israel teaches people about the fundamental world views of Jews and Muslims. It isn't about a strip of land. It is about their views of tolerance and history as well as the ephemerality of foreign alliances. Alliances are meaningless because they can dissolve into war at any time. It also teaches the reader how Muslims and Jews care much more about history than others. These are all insights one can deduce from reading about Israel, to use ''your'' example. Learning about history helps us predict the future and understand the present. I occasionally read ''Encyclopedia Judaica'' which has a Jewish bias. I also occasionally listen to Arab commentators. Both are biased, but both commentaries help me understand Israel.--] (]) 05:33, 7 March 2008 (UTC) | |||
:::::::::Learning about history can predict the future? WHY THE FUCK DID I GET INTO ]?! | |||
:::::::::In my opinion, intelligence is not *what* you know. Someone could not know ''y=mx+b'' and still be intelligent. Someone, likewise, could know the name of a minor character in, say ] and still be intelligent. No, intelligence is *how* you use your knowledge. Reading about history is no more intelligent than playing through a game of ]. Only if you can use the knowledge gained from the activity is it of any use. Calling something "intellectual", as you're currently doing, strikes me as rather anti-intellectual. No layman wants to read an article on history if they have something better to do, such as ].
:::::::::I can guarantee you that, if you delete every article not related to the 3 R's or ] categories, you'll be stuck with a bland lump of dry, gray putty that was once an ornate and intricate statue. -<font color="black">'']'' <sup>(] ])</sup></font> 05:42, 7 March 2008 (UTC) | |||
::::::::::First, I define intelligence as the ability to understand things--both new and familiar. Learning certain types of facts does improve intelligence. For one thing, learning meaningful facts over time makes you reflect on their meaning. This is mental exercise that improves your intelligence. For example, memorizing mathematical formulas will not necessarily improve your ability to understand new formulas, but trying to comprehend what the formulas actually mean will. Mathematical intelligence also improves musical intelligence, and vice versa. Likewise, learning about history helps you understand current affairs. Memorizing a single date will not do anything. But, as you learn about different events, you begin to see patterns and reflect on them. This is also mental exercise. I fail to see any underlying meaning to ''Pokémon'' cartoons, so watching ''Pokémon'' will not educate you.--] (]) 06:33, 7 March 2008 (UTC)
:::::::::::And ''playing'' it? Pokémon is, believe it or not, a ] first and animé second. -<font color="black">'']'' <sup>(] ])</sup></font> 07:23, 7 March 2008 (UTC) | |||
:To be fair, I was recently grading homework for a computer science course and one of the students explained class based inheritance using examples from Pokémon. I think it's dangerous to exclude information because you don't see the value in it, someone else might. I know I value Misplaced Pages because it's inclusive. --] (]) 19:49, 10 March 2008 (UTC) | |||
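For readers unfamiliar with the concept, a minimal sketch of the kind of example described above (hypothetical classes in Python; the course's actual example is not reproduced here) might look like this:
<syntaxhighlight lang="python">
# Hypothetical illustration of class-based inheritance using Pokémon.
class Pokemon:
    def __init__(self, name, level):
        self.name = name
        self.level = level

    def attack(self):
        return f"{self.name} uses Tackle!"

class Pikachu(Pokemon):
    """Inherits the name/level handling from Pokemon and overrides attack()."""
    def __init__(self, level):
        super().__init__("Pikachu", level)

    def attack(self):  # method overriding in the subclass
        return f"{self.name} uses Thunderbolt!"

print(Pokemon("Rattata", 3).attack())  # Rattata uses Tackle!
print(Pikachu(25).attack())            # Pikachu uses Thunderbolt!
</syntaxhighlight>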
Personally, I believe that Misplaced Pages needs a balance of inclusionism and deletionism; the problem is keeping that balance. Tilt too far inclusionist and you become indiscriminate - go look at a Trivia section to see what I mean. Tilt too far deletionist and potentially good articles are shot on sight, before they have the opportunity to become viable, ]. As much as we state that AfD is not cleanup, often times the threat of deletion is the catalyst that drives the article beyond a mere stub. And our wide-scale inclusion criteria are exactly what separate Misplaced Pages from the rest.. --<span style="border:1px solid #63B8FF; font-weight:bold; color:#23238E; background-color:#D0E7FF;"> ] <sub> ] ] </sub> </span> 04:13, 7 March 2008 (UTC)
:In other words, we need both inclusionists and deletionists ]. ] (]) 04:26, 7 March 2008 (UTC) | |||
::My view, and I hope it is widely shared, is that any subject is acceptable for inclusion as long as there are reliable outside sources to keep everybody honest. The "battle" will only be lost if unsourced information proliferates on Misplaced Pages, which at first will seem like the inclusionists won, but will be quickly followed by the loss of Misplaced Pages's "soul" as people's first stop, as a useful, fact-checked clearinghouse of information. ] (]) 04:46, 7 March 2008 (UTC) | |||
Suggest the community read ]' book ], where he argued that the value of literature is as much a reflection of the reader as of what is read, and that efforts to divide literature into "highbrow" and "lowbrow" and assuming that "lowbrow" means "not serious" have been a really, really, really bad idea that prevents real literary appreciation and growth. He suggested a moratorium on trying to judge "literary merit" and using a different approach. What's true for literature is true for other things as well. Best, --] (]) 05:48, 7 March 2008 (UTC) | |||
:The presence of Pokémon-related articles is not responsible for the absence of quality in articles on Polish Solidarity leaders. However, there probably are more secondary sources on Pokémon than on Polish Solidarity leaders. We do not delete articles on Polish Solidarity leaders or prevent their development to make room for Pokémon-related articles; it is just that nobody has yet written those articles. In addition, do we really want a user who is an expert in Pokémon to write about Polish Solidarity leaders? No offense, but being indulged in Pokémon for the past ten-plus years does not make anyone an expert in Polish Solidarity leaders. Pop culture (Pokémon) aside, this problem plagues even important articles, just as the Economist article illustrates. --<small> ]</small> <sup>]</sup> 11:12, 7 March 2008 (UTC)
::The other thing that I don't think the Economist article considers, or that is brought up here, is that because we are a volunteer project, we cannot force people to write or work on topics they have no interest in. Since WP is an internet culture, it is going to attract a cross-section of the larger internet culture - meaning that we are going to have a lot more people working on articles on anime characters and video games than we are going to have on political figures from non-English speaking countries. This itself is an overall systematic bias that we have to be aware of, but know that we cannot change (otherwise, editors will leave once we tell them they must do something), but by developing policies and guidelines to make sure that those topics are treated in an encyclopedic fashion such that when we can "fill in" other topics such as Solidarity leaders, we have encyclopedic coverage of those topics as well as more popular culture topics, with an overall increase in the apparent quality of the encyclopedia. This doesn't mean we delete the coverage nor prevent appropriate expansion of pop culture topics to make other topics look better, but it does mean we have to consider how much weight some of those topics are given relative to the goals of creating an encyclopedia. Basically, the Economist article almost is looking at WP ''now'' as a finished product and saying that it's bad, but if you keep in mind and consider that we are unfinished, then it is perfectly fine that our coverage is currently unbalanced, as long as we understand that the goal is to get to a good balance and take steps to help get us there now. --] 14:06, 7 March 2008 (UTC)
:::Misplaced Pages by its very nature will never be a finished project. All articles that are not of featured quality are incomplete and will not be a part of the finished product. In other words, they are already edited out before they reach the end of the production line. They can become featured articles in time, but they definitely will not if people do not allow work on them. This is why I cannot understand the way some people, namely so-called deletionists, work in removing clearly incomplete articles. The articles on popular culture and Solidarity figures in Poland are typically unrelated. Balancing the amount of content on popular culture and other topics by removing popular-culture-related articles does not sound very productive to me. --<small> ]</small> <sup>]</sup> 17:59, 7 March 2008 (UTC)
::::And I'm not saying we delete them, but instead make sure that our pop culture topics are edited in the same encyclopedic manner as our topics on world leaders and history and geography and other more "non-trivial" topics. We may need to trim the depth of coverage these presently have and utilize outside wikis for overflow, but there's no reason we can't cover these to at least a degree that meets with the Five Pillars. --] 18:46, 7 March 2008 (UTC)
:::::What is happening is that self-righteous people are mass-removing material on topics they don't care much about. This has no consensus behind it. If there is consensus behind it, I can start trimming articles I do not care about - I have a very long list to process, I suppose. Of course, if everyone did that, eventually we would be left with only the main page. --<small> ]</small> <sup>]</sup> 21:56, 8 March 2008 (UTC)
::::::Assuming good faith, they are trying to help clean up WP, though methods such as those TTN has taken have not been constructive toward this. However, the concept of merging topics failing notability into other areas should be taken much more to heart before articles have to hit AfD, and even if AfD is still reached, merging should always be an option - outright deletion of a contested article without any consideration of retaining that information is bad. --] 22:14, 8 March 2008 (UTC)
I wish people creating shitty fancruft would use a spell checker. Also, lots of fancruft is part of some huge business franchise, which produces stuff in various formats that are used as sources -thus entire swathes of wiki are "in universe". Really, I don't care how trivial it is, I just wish they could write betterer. ] 22:56, 9 March 2008 (UTC) | |||
::betterer? or more better? :) ] (]) 00:10, 10 March 2008 (UTC) | |||
The whole content discussion is as old as ... Throughout human cultural history, arising trends and the opinions of rulers (or the opponents of same) have continuously created, destroyed and recreated. Archeologists make a living digging up what remains and are faced with whether to preserve the Christian mural or chisel it off to reveal the hieroglyphs beneath. French scouts caused uproar and laughter when they removed neolithic "graffiti" from a cave. Just to cite a few examples. The list of now famed painters who lived and died without their work being recognized is endless. (Anyone for a Vermeer bonfire?) Knowledge is power, but today's trash may turn into tomorrow's treasure. You'll be hard put to find a book on how to lay a thatched roof in most libraries, since they are no longer common. Yet university research projects exist trying to preserve and recover this lost art. When I grew up, knowing how to use a slide rule was an essential skill. Preserving it would have met the highest standards for "value". My nephew may get to look at one in a museum, since I threw mine out as "junk". The Spanish smelted down "worthless pagan" Inca trinkets to produce items meeting their "high" cultural standards. By declaring a certain knowledge to be "worthless" or "valuable", each preceding generation tries to stamp its own ideas and value systems on the next generation, who are duty-bound to resist with all their might in the interest of human progress. What survives or is revived after skipping one or more generations is our "cultural heritage".
Now Misplaced Pages introduces as novel an idea to how knowledge is maintained as democracy was to despotism. I hope the self-declared guardians of knowledge are going to die out with one of the following generations. Knowing "Pokemon" characters is as basic a skill to the next generation as knowing "Dr. Seuss" was to mine. There are quotes and proverbs in the literature my generation is leaving behind describing things as "seussian". I hope no one will have deleted the relevant wiki-page when my grandkids stumble over those. So I'd suggest creating a central "graveyard" for deleted pages to save future archeologists and ethnologists some work. <small>—Preceding ] comment added by ] (]) 09:25, 18 March 2008 (UTC)</small><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
=== The true problem: notability and mainstream media justification policies === | |||
The true problem is in the notability and similar policies. They can make any silly detail of Pokemon super-relevant (maybe millions of hits in Google and the like) while much more relevant artists from non-English-speaking, and especially third-world, countries can pass unnoticed or even be deleted as non-notable.
These overall criteria bias the contents of Misplaced Pages in favor of mere trivia. We need a more academic and, as much as possible, less media-driven approach.
As for the problem of vandalism by children, the best solution is surely to stop censoring certain images, so that schools start censoring Misplaced Pages at least during class time. That would save our patrollers a lot of work.
I am an inclusionist for encyclopedic content and for whatever allows a fuller and better understanding of our world. But I am an exclusionist for trivia, and articles on Pokemon, Star Trek, The Simpsons... episodes, minor characters, etc. belong in a fanzine or some other medium, not Misplaced Pages.
Maybe the solution is to create "Wikizine" inside Wikimedia, for such more diverse but less encyclopedic activities. <small>—Preceding ] comment added by ] (] • ]) 05:06, 7 March 2008 (UTC)</small><!-- Template:Unsigned --> <!--Autosigned by SineBot--> | |||
:Those images are censored because ]. Further, I heavily doubt you are familiar with the discussion that took place at ] some time ago. Pokémon species articles (sans ]) have been lists for a few months now. Further, as I have stated, owing to external influences (i.e. rival factions editing) we'd also have to, if we implemented your reasoning, remove all articles on wars, rogue nations, and cultural conflict so as to present as bland and tasteless a view of the world as possible. Shit, the ] couldn't come up with a scheme better designed to turn everyone into mindless robots who only know exactly what they have to know and nothing else. -<font color="black">'']'' <sup>(] ])</sup></font> 05:14, 7 March 2008 (UTC)
:I will point out that I've been struggling with other editors to fine-tune and polish ] (and to a lesser extent ]) to reflect a balance that makes both sides happy, in that we can give good coverage when we can provide secondary sources (why should the reader care about this work if they've never heard of it), while providing primary-sourced information to meet the "WP is not paper" approach of including such. It has taken a ''while'' to get here, but the metaphor of balancing a straight pole by pushing at a slant is very apt: initial drafts went too far in one direction, and fine-tuning got it to where it is. We do suggest that for more in-depth treatment of fictional topics an outside wiki is completely appropriate (though people balk at any push toward Wikia due to possible conflict-of-interest issues), and I think we're now in the learning stages of figuring out that exact balance for many areas, thanks in some part to the recent ArbCom cases. I know there's inclusionists vs. deletionists, but I strongly believe we don't need to rush to make a decision, unless we get a mandate from the Foundation to take this in one direction or the other. We need to compromise and figure out steps forward from that. --] 05:43, 7 March 2008 (UTC)
::Secondary sources have little to do with notability and much to do with popularity. Every armed forces serviceman has a secondary source covering his or her life. "Unheard of" would not describe shows televised internationally in multiple countries. If being "heard of" is notability, then that is definitely not what is happening. --<small> ]</small> <sup>]</sup> 10:55, 7 March 2008 (UTC)
:::Not true: while a popular work may lead to large coverage in secondary sources (a very common case), this is not the only way a topic can gain secondary sourcing and thus sufficient sourcing to be included. "Significant coverage in secondary sources" is a measure of the cumulative effects of a topic's popularity, importance, effect on other people, and other areas, while falling under the goal of the Five Pillars. So notability is not reflecting "being heard of". --] 14:14, 7 March 2008 (UTC) | |||
::::Right, popularity and so on, which are not the same as notability. It is a poor metric for notability. --<small> ]</small> <sup>]</sup> 17:59, 7 March 2008 (UTC)
:::::Being sourced in multiple independent reliable sources is a bad metric for notability? Seems to meet all our principles to be a verifiable, no-original-research encyclopedia. --] 19:20, 7 March 2008 (UTC) | |||
::::::You should read the UK press sometime - very many pages are devoted to C- and D-list "celebrities", but not much coverage is given to, for example, mathematicians or scientists, unless they produce a populist "study" showing that 'drinking wine is healthy' (which will get mis-reported). Thus WP ends up with a gajillion sources for someone who comes third in a TV singing competition, and will have infoboxes giving that person's age, weight, height, eye colour, blood type, etc. etc. ] 13:43, 10 March 2008 (UTC)
Seeing as this thread began with a quote from an article in the ''Economist'' I thought it worth mentioning that there is another article about wikipedia in the March 20, 2008 issue of the ''New York Review of Books'', titled "The Charms of Misplaced Pages". The author describes himself as an "inclusionist" and tells of how he ended up as a defender against article deletions, with a bit of mocking about the notion of "notability". Looks like the article is currently online . Just thought it might be of interest. ] (]) 06:44, 7 March 2008 (UTC) | |||
:I particularly liked the part about "the biggest leaf pile anyone had ever seen." --] (]) 07:58, 7 March 2008 (UTC) | |||
::Well, I liked "When, last year, some computer scientists at the University of Minnesota studied millions of Misplaced Pages edits, they found that most of the good ones—those whose words persisted intact through many later viewings—were made by a tiny percentage of contributors. Enormous numbers of users have added the occasional enriching morsel to Misplaced Pages—and without this bystander's knowledge the encyclopedia would have gone nowhere—but relatively few users know how to frame their contribution in a form that lasts." from the same article. ] (]) 07:44, 8 March 2008 (UTC) | |||
===A high level discussion=== | |||
:About a year ago, no one was even trying to mass blank/redirectify articles on trivial topics. AfDs on these were also mostly unheard of. This isn't an inclusionist vs. deletionist discussion. This notion is not based on consensus or discussion at all; if it is, please cite this community-wide discussion. I think that because the covered topics are individually trivial, no one wants to spend time discussing them individually. Although the practice of reviewing and establishing notability should itself be done on a case-by-case basis, this is an overall general discussion to reach a general agreement on the topic, to hopefully establish what to do and what not to do.
:Our criteria for establishing what is notable may need adjustment. As the Economist article discusses, important topics with a capital "I" may have very little to no readily available secondary coverage to establish notability. Likewise, things with overwhelming coverage from secondary sources may be fundamentally trivial, which isn't necessarily article-worthy; then again, they may very well be article-worthy.
:It is important to note that different sections on ] (], ], ], ] (often linked to as ] or ])) are not in conflict with each other. | |||
:--<small> ]</small> <sup>]</sup> 10:55, 7 March 2008 (UTC) | |||
:::White Cat, this didn't seem to be a problem until recently. I don't know that for certain, but I used Misplaced Pages in the past, stopped using it for a long, long time, then came back to find that the community seemed to have gotten totally thrown out of whack.
:::This is basically a problem of various cabals -- you know who you are -- swarming around certain subjects. See ]. Most of those are jokes, but a fair amount of those are surprisingly legitimate. Several also aren't listed. There are also social clusters around anime, Star Trek, Star Wars, LOTR, etc.., and probably more stuff that I've missed. | |||
:::Groups like this swarm around certain subjects (aside from all of the annoying bot owners, generating stuff, too, without an official RFA) and when people come by to enforce the guidelines, they're stifled because of a localized group of little kids defending their articles with democratic, bureaucratic authority, appealing to the fact that they are the "majority" and wikilawyering. | |||
:::These same groups of people have all formed one giant monstrosity called "inclusionists." Virtually every POV-pushing troll on Misplaced Pages supports Inclusionism. And why shouldn't he? If you want to promote your business, use Misplaced Pages for political propaganda, dump fan analysis on Misplaced Pages, or upload internet memes for the lulz, why ''wouldn't'' you support Inclusionism?
:::And it's important to point out that so-called "deletionists" aren't even really deletionists, as it seems to me. Perhaps some of them are, but that's silly. I say that because they don't have a blanket policy of wanting to delete articles. They simply want existing guidelines on the notability of fan fiction, pop culture, and copyvio to be '''''enforced'''''. See ]. There wasn't this distinction before, because in the past, policies were '''''enforced''''', I think. Crap like ] wouldn't have made the cut.
:::Clarification would be good, but not likely possible because inclusionists stand in the way of such clarification. But if the rules were simply ''enforced'' and these edit gangs were broken up, there wouldn't be a problem. <font size="4">]</font> <font face="impact"> ]</font> (]) 15:20, 7 March 2008 (UTC) | |||
:::: I'm disturbed by your example; Chris Crocker meets ]; the notion that enforcement of policy would result in deletion of that article demonstrates a deep misunderstanding of Misplaced Pages inclusions policy. ] (]) 15:46, 12 March 2008 (UTC) | |||
::::I am an inclusionist at heart. I am not a troll. I suggest you stop insulting me and people like me. Please post your comments in a civilized manner.
::::I am also unhappy with the group effort by some deletionists who work together to overwhelm any opposition that stands in the way of deletion. Basically, they try to make up in numbers what they lack in logic.
::::--<small> ]</small> <sup>]</sup> 17:59, 7 March 2008 (UTC) | |||
:::::Wrong and wrong and wrong. You have not been insulted, yet you are accusing concerned editors of cabalising and wanting to destroy material. More importantly, ''you inclusionists'' are the ones who gang up in AfDs (and recently in RfAs of dissenters!) to suppress any reasonable deletion of unsalvageable in-universe crap. ] (]) 09:19, ], 200<!--DT-->8
::I understand the point about Notability being too low a bar. The problem is, however, that the only reason Notability works at all is because it's an objective standard that keeps out most of the trash while being fair to all. It doesn't matter what I think about a subject; as long as it has the required sources, it's in. Other than that, I don't like Notability that much. Perfectly good articles are being deleted simply because the subject predated Google. The problem is, how do you redefine that fence in a way that is objective and fair? --<span style="border:1px solid #63B8FF; font-weight:bold; color:#23238E; background-color:#D0E7FF;"> ] <sub> ] ] </sub> </span> 21:53, 7 March 2008 (UTC)
I'm not insulting you. I'm saying your ''philosophy'' is silly, not you, the person. There's a big difference there. Despite your philosophy, you seem to be a good editor. <font size="4">]</font> <font face="impact"> ]</font> (]) 21:52, 7 March 2008 (UTC) | |||
:Your offensive tone is unacceptable. What makes your ''philosophy'' any better than mine? You are insulting all opinions but your own it appears. Why should anyone care what you have to say given your attitude towards theirs? --<small> ]</small> <sup>]</sup> 21:41, 8 March 2008 (UTC) | |||
===Sanctioned alternate wikis?=== | |||
Could part of this problem be solved by actively encouraging the opening of alternate wikis? Things like ] and ] seem to have the capability to host the bulk of the information regarding their respective topics, with far less worry about relative importance.
Perhaps I'm an optimist, but I think the complaints of most "inclusionists" would be settled if there is a place that the information they want to share can be hosted. ] (]) 18:49, 7 March 2008 (UTC) | |||
:There are some Wikis, however, that are unusable by a specific group (i.e. the D&D Wiki because of its allowance of homebrew). And the inclusionists still won't be happy even if there is - most of the anons on ]-related subjects complain that Misplaced Pages, by its very nature, should contain all the crufty crap that was the individual species articles. Whenever we tell them to go to , they wing back a loud "NO!" and keep complaining. -<font color="black">'']'' <sup>(] ])</sup></font> 19:27, 7 March 2008 (UTC) | |||
::If such an outlet exists then I'm going to agree with firmer rules. Perhaps something along the lines of "Would this content be more suited to an alternative wiki or as a ]?" ] (]) 20:29, 7 March 2008 (UTC) | |||
:::My main concern with alternate wikis is that their existence is sometimes abused in discussions, for instance by arguing that an article on a Star Wars-related topic should be removed because a Star Wars wiki already exists... Such arguments ignore the merits of an individual article and article topic, and instead focus on the general subject area (see below). ''']''' <sup>(])</sup> 20:36, 7 March 2008 (UTC) | |||
::::Actually, those aren't the arguments I see at D&D or Pokémon articles at all - they tend to focus more on the subject of the article and not the subject area. -<font color="black">'']'' <sup>(] ])</sup></font> 21:01, 7 March 2008 (UTC)
:::::I was referencing mostly various AFD discussions I've run across, which often contain comments to the effect of "Keep - Star Trek characters are obviously notable" or "Delete - there is a Star Trek wiki for this stuff". Neither comment addresses the article or article topic itself, but rather references some other, unrelated factor (the notability of the ''Star Trek'' franchise or the existence of a ''Star Trek'' wiki). ''']''' <sup>(])</sup> 22:46, 7 March 2008 (UTC)
I agree, Oberiko. Also, what you just said is now a part of ]: ] | |||
It might be good to add a "move it elsewhere" section to ], period. <font size="4">]</font> <font face="impact"> ]</font> (]) 21:54, 7 March 2008 (UTC) | |||
:What do we even need wikipedia for? All articles on history can go to the history wiki, because I have hereby officially declared them unnecessary. No one gave me this authority, but hell, I can mass-redirectify articles regardless... --<small> ]</small> <sup>]</sup> 21:40, 8 March 2008 (UTC)
:I agree totally. I would keep history, but move all sports off to a sports wiki. Perhaps make an exception for sports that have global appeal (football as in World Cup, tennis, cycling), but certainly only marginally important sports (lacrosse, cyclocross, American football). ] (]) 12:35, 10 March 2008 (UTC) | |||
===Focus on the topic, not the subject area=== | |||
What happened to judging articles (and article topics) on their individual merits, as opposed to making sweeping generalisations about an entire subject area or entire class of topics (and entire groups of editors, for that matter)? Why are subjective personal opinions about the importance/unimportance or intellectual/popular/cultural value of a general subject area a part of discussions regarding something as objective as the presence of coverage in reliable sources? And finally, what's the story with the ] articles? (Why are they such a common example in these types of discussions?) Thanks, ''']''' <sup>(])</sup> 20:36, 7 March 2008 (UTC)
:Pokémon articles are common examples because, up until last year, every single Pokémon species had its own individual article - and every single one of those articles (exc. ]) had more cruft issues than a crack team of chimpanzee hackers trying to fix coding from ]. After a discussion on ], it was decided to merge all the species articles (again, sans ], and, more recently, into lists of 20). While the articles on the actual franchise and its video games are superbly-done articles (I can say this having worked on ]), the character articles are nowhere near as good as the game articles. | |||
:Pokémon also tends to get brought up because, until the megamerger, there was a "Pokémon Test" which was used at AfD to determine notability (for example, "Article Foo is less notable than ]"), and the entire metaseries tends to be somewhat pervasive. -<font color="black">'']'' <sup>(] ])</sup></font> 20:59, 7 March 2008 (UTC) | |||
::You came this close to owing me a new keyboard for the Daikatana line... --<span style="border:1px solid #63B8FF; font-weight:bold; color:#23238E; background-color:#D0E7FF;"> ] <sub> ] ] </sub> </span> 22:15, 7 March 2008 (UTC) | |||
::Thanks for clarifying! A number of comments I had previously read now make sense. (By the way, just so there is no confusion, my call to "focus on the topic, not the subject area" was a general call; it was not directed at either the Pokémon issue or your comment specifically.) Cheers, ''']''' <sup>(])</sup> 22:37, 7 March 2008 (UTC) | |||
:There are some subject areas that could have very many articles, but don't actually need them. Examples include Bus routes, Pokemon, wrestling articles (an article for every wrestler, for every episode, for every plot line, for every move etc), some tv shows or book series. It'd be great if these subjects had a few main "gateway" articles - editors could concentrate on making these excellent. I hate to sound so negative about these subjects; the dedication and knowledge shown by editors should be commended. I hate the artificial split into "deletionist" or "inclusionist" camps. ] 13:52, 10 March 2008 (UTC) | |||
=== "The result is that novices can quickly get lost in Misplaced Pages's Kafkaesque bureaucracy." === | |||
The rest of the article is just a blind. This is the key item. This is not the first time our deletion system *alone* has been presented in an article, and even mistaken as somehow being the core of wikipedia.
It isn't. It certainly shouldn't be notable or big enough to get articles in prominent magazines, all by itself. | |||
The deletion pages on wikipedia have taken on a life of their own. ''"Misplaced Pages won't be able to survive without deletion"'' you say, but I've heard that before: ''"Misplaced Pages won't be able to survive without Esperanza"'' and ''"Misplaced Pages won't be able to survive without the AMA"''. | |||
I'm skeptical we even need a deletion system. But if we do, perhaps we could make a new one from scratch, that actually follows wiki-principles. (Does anyone still know what those are? ;-) ) | |||
--] (]) 22:55, 7 March 2008 (UTC) <small>''"bureaucracy, what bureaucracy? he said... while ripping it out and stuffing it under the carpet.''</small> | |||
:Oh, I don't know. Misplaced Pages is one of the top ten most-visited sites. People tend to care what happens on the top ten most-visited sites. --<small> ]</small> <sup>]</sup> 21:43, 8 March 2008 (UTC)
::It seems to me that what you want may be a change in attitudes, rather than just a change in structure... ''']''' <sup>(])</sup> 23:15, 8 March 2008 (UTC) | |||
===What are the main arguments for deletion?=== | |||
I can understand the need to prune articles that fall into ], but I do find it somewhat difficult to grasp the need to get rid of articles on characters from movies / television series and the like. Can someone (in bullet-point form) lay out the primary reasons? ] (]) 15:17, 12 March 2008 (UTC)
:While I am not convinced of the merits of the arguments, I think the basic idea is that many of these articles do not meet Misplaced Pages's notability guidelines (]), and thus they should be merged into lists or deleted. The controversy arises because there doesn't seem to be broad consensus as to how stringently to interpret the guidelines. ] (]) 16:41, 13 March 2008 (UTC) | |||
:It's all to do with the fact that people have different philosophies of what Misplaced Pages should be, and that people contribute for different reasons. There are two extreme points of view:
:* Should Misplaced Pages aim to be a ]-style all-encompassing Wiki of all human knowledge? (An extreme "inclusionist" philosophy, or a "Wiki" philosophy)
:* Should Misplaced Pages be an accessible encyclopedia aimed at writing encyclopedia-style topics for a general audience, avoiding niche topics and only containing easily verifiable information? (An extreme "deletionist" philosophy, or an encyclopedia philosophy)
:And there are several degrees between the two, where Misplaced Pages currently lands as it tries to come up with the limits between the two philosophies. The lines are quite blurred, as articles become increasingly harder to verify the more specific and niche their information becomes, and that's where the heated arguments begin about where exactly Misplaced Pages's boundaries should be.
:And there's no real answer as to which philosophy is correct, just opinions; both ideas have their own sets of advantages and disadvantages, and you're never going to please both sides completely. It's a difficult problem without a solution, and you're never going to please everyone. -] (]) 23:01, 14 March 2008 (UTC)
IMHO: It is a noble (and perhaps even achievable) goal to have Misplaced Pages eventually contain all human knowledge. But to suddenly remove ] and open the floodgates to having every kid in the world write an article about him or herself, and to have "memorial" articles written about anyone's dead uncle, would be crazy at this point in the project. So extreme inclusionism is as dangerous as extreme deletionism. Misplaced Pages needs to grow towards "all of human knowledge" '''slowly'''. This means that we should consider gradually relaxing our notability standards year by year. I don't think it's unreasonable to say (for example): "In 2009 we're going to remove the ] guideline and allow the creation of articles about any school; in 2010, every musician who ever made a recording that was sold commercially and every author who ever published a book is eligible to have an article written about them". This is something we'd want to plan for - a gradual process.
It's already becoming quite difficult to find "notable" subjects about which much is known - yet which do not yet have a Misplaced Pages article. I think we are actually zeroing in on having written at least something about every subject that falls within our notability standards. This is evident from ] - the rate of creation of new articles is falling - presumably because we're finding fewer new things to write about. | |||
The cost of disk space is still declining exponentially - but Misplaced Pages is now only growing linearly - so we should be able to relax the notability rules to allow more stuff at the same dollar cost. | |||
The tricky part is attracting enough editors to maintain that material without declining standards - and I believe that the only way to do that is to make Misplaced Pages less bureaucratic. There really is a horrible maze of rules - some useful - but many are put there by people who've lost sight of the joy of editing articles and who have taken up Wikipolitics as a full time activity. Relaxing notability standards would be one way to attract new blood. The kid who innocently wants to write an article about his or her school (which IS exceedingly notable by the standards of the kids who go there) - but gets it shredded by the deletionists per-WP:SCHOOL is unlikely to become a full time editor in the future - that first experience with Misplaced Pages is the crucial one - and it's rarely as pleasant as it used to be (say) 5 years ago. The one who starts off by writing an article about his/her rather uninteresting highschool - and who gets tons of help and encouragement from the community - may well be the one who expands the stubs of 50 other high-importance articles about mathematics in the future. | |||
] (]) 14:43, 18 March 2008 (UTC) | |||
:Thanks Steve, that seems like a very well-thought-out comment. I think I could get behind an inclusion standard based on the following:
:#Technical limitations: Since disk space and bandwidth aren't free, this is always our overriding concern, though it grows less significant each year
:#Verifiability: Each article (and fact therein) has to be verifiable from a reputable source
:#Privacy: No personal information (SIN, phone number, address etc.) can be posted unless such information is intentionally made public or is well-known public knowledge
:#Not for advertisement or commercial use
:Beyond that, I don't really see much of a problem with including anything. Having articles on Pokemon, television series episodes, Little League seasons, genealogy and the like doesn't seem like a negative thing to me; after all, you're only going to find them if you look for them. ] (]) 20:23, 18 March 2008 (UTC)
:: I have a two-word change...delete: "and bandwidth" in point (1). The bandwidth requirements for Misplaced Pages are overwhelmingly driven by the number of readers - not the number of articles. Unless increasing the number of articles (by adding articles about things like obscure high schools) brings us a lot of new readers, the only additional bandwidth caused by a relaxation of notability standards would be the bandwidth it takes to create and index these new articles - which is likely to be utterly negligible. If (as claimed) these articles will not be much read - then they won't attract new readers (or increase the number or size of articles that existing readers read). Hence a gentle and gradual deregulation of the notability criteria would not affect bandwidth significantly UNLESS it brought a lot more readers to the site - which would be "A Good Thing". Since a lot of these articles are going to be short stubs - it's arguable that the bandwidth to deliver an article about a less notable subject would be comparable to delivering an "Article not found" page - which is the logical alternative. As for your other three points - I'm certainly not advocating a change to existing verifiability, privacy or commercial use rules. ] (]) 16:11, 19 March 2008 (UTC)
=== Wiki is not paper === | |||
]. I will never read articles about Pokemon characters, but they cause no harm to the encyclopedia or my reading experience because I won't see them if I don't go looking for them. This is a non-issue. — ] 00:35, 22 March 2008 (UTC) | |||
=== Trash Namespace Proposal === | |||
There is a ] to create a namespace where deleted pages can still be accessed. This proposal represents a solution to the dilemma raised in the above-mentioned Economist article that is compatible with the spirit of ], while also addressing some of the concerns of those who wish to be more stringent about the removal of non-notable articles.
== Is it acceptable to try to change certain policies by voting to not apply them in individual cases? == | |||
] notes that there are three main avenues for changing policies. Basically, (1) you can codify existing practices which have developed from the grassroots; (2) you can propose a change in a top-down manner; or (3) Jimbo can change it. A number of essays, such as ], discourage the last two methods, and note that it is very hard to change policy through formal proposals. Guidelines can be changed a bit more easily.
We know that, after the foundational principles were laid down, most subsequent Misplaced Pages guidance arose from codification of practices rather than through proposals. It seems clear that, if there is an issue not currently covered by guidance, but a practice for dealing with that issue has become pretty widespread, it is acceptable and fairly easy to enact new guidance codifying that practice. | |||
What about if we want to actually ''change'' guidance – that is, remove an existing provision or even change it to the opposite of what it currently is? Many unsuccessful attempts are made to do this through avenue #2, proposals. Can the guidance be changed by deliberately changing current practice, e.g., pushing for actions to be taken that run counter to existing guidance, so that eventually the changed practice can be codified as a change to the policy or guideline page? | |||
I want to make a distinction between three different kinds of situations, which I will label A, B, and C, as follows. (A) At times, it is obvious that we can/should ] and act contrary to policy for the good of the encyclopedia. (B) Sometimes the policy in question is a foundational policy that cannot be deviated from. In either of those situations (A or B), the acceptable action is clear-cut. (C) But sometimes there is room for legitimate disagreement as to what is the best course of action; typically, these cases involve guidelines or non-core policies (e.g. ], ], certain provisions of ], etc. as opposed to policies like ], ], etc.) In those cases, is there leeway to simply violate the guidance if the rough consensus of users decides it wants to do so as a way of changing the norm, and by extension, eventually the guidance codifying the norm? | |||
Suppose, for instance, someone is playing a chess game in a userspace subpage, and someone else nominates it for deletion in accordance with ] and ]. Half the editors voting in the MfD want to keep it, because they disagree with the rule. The other half want to delete it. Should the keep votes be disregarded because they are contrary to guidance, and the page be deleted? Or should it have a result of "keep" or "no consensus" because this is a legitimate way to begin changing guidance through avenue #1? ] would seem to suggest not; it notes, "This is not a forum for modifying or revoking policy." On the other hand, ] says that most rules are descriptive, not prescriptive; so how does one really know when it is okay for the rough consensus on an individual 5-day XfD, for instance, to override policy that was presumably adopted by a broader consensus over a longer period of time? Does it basically just depend on what the closing admin thinks will survive a ]?
I'm thinking that what we have now is a bit like typical legal systems. Where there is no statute, common law can develop through decisions in various cases. But where there is a statute, it overrides the common law, and the court can't make a decision contrary to it. On the other hand, the court can overturn the statute if it runs counter to foundational principles (which, in the real world, might be the Constitution). And people (including those in positions of trust and power) sometimes disregard rules and processes if they think they can get away with it. ] (]) 20:33, 7 March 2008 (UTC)
:], but changing against the wind is a difficult task. | |||
:Let's examine your chess analogy. First of all, scrub the idea of !votes, because Misplaced Pages is not a vote. The admin is trained to determine the merits of both sides of the debate, and rule in favor of whichever side provides the stronger case. That being said, the Keep argument has a higher burden of proof in this case, because they not only have to argue against the Delete argument, but must also prove that, at least in this case, policy should be set aside. --<span style="border:1px solid #63B8FF; font-weight:bold; color:#23238E; background-color:#D0E7FF;"> ] <sub> ] ] </sub> </span> 21:24, 7 March 2008 (UTC)
::Ah, OK. There seem to be a lot of cases where people say, "We have to get rid of that page; it's ugly, unprofessional, a waste of time, etc." and the other side says "]; ]; ]; ];" etc. It seems to be basically a matter of opinion. But the admin closes one way or another, with the exact reasons unknown, and one side is pretty upset. Moreover, the odds of getting it overturned on ] are pretty slim, so they typically don't bother. As , one of the problems is that admins usually don't state the exact reason(s) why they close debate a certain way; they just say "Result was _____." If they had to state the reason (e.g. a brief statement of the decisive policies/facts) then it would further help diminish the illusion that this is a vote and possibly lead to better DRV discussions. In fact, I think I'm going to propose this right now. ] (]) 21:48, 7 March 2008 (UTC) | |||
::Please, in the future, distinguish between ''policies'' and ''guidelines''. The examples you mention of "non-core policies" - ] and ], are in fact ''guidelines''. Policies are very, very different from guidelines; in fact, they are sufficiently different as to almost make this discussion pointless. Guidelines ''do'' have some give; ''policies'', while sometime ambiguous, ''don't'' allow discretion except when they are ambiguous. Sometimes arguments over guidelines - such as whether it's acceptable to put footnotes ''before'' punctuation rather than after - simply ''don't'' get decided; that's much less common with policies. -- <font style="font-family:Monotype Corsiva; font-size:15px;">] </font> ] 21:45, 7 March 2008 (UTC) | |||
:::Excellent point, thanks. However, ] is a policy which also governs userpages, and is frequently cited in deletion debates. So, would even a "unanimous minus 1" consensus of editors in an MFD be unable to disregard WP:NOT in the chess case? ] (]) 22:06, 7 March 2008 (UTC) | |||
::::You can also ''']'''. ] <font size="4">]</font> <font face="impact"> ]</font> (]) 21:56, 7 March 2008 (UTC) | |||
:::Well, policies can have exceptions too. The burden of proof is just an order of magnitude higher. The point is, if you are arguing against guideline/policy, you have to prove your case as to ''why'' we have to IAR. --<span style="border:1px solid #63B8FF; font-weight:bold; color:#23238E; background-color:#D0E7FF;"> ] <sub> ] ] </sub> </span> 22:12, 7 March 2008 (UTC) | |||
::::Ah, OK. So it sounds like, if you successfully argue IAR in a lot of debates covering a certain issue, that could lead to policy eventually changing, because the practice has changed. ] (]) 22:14, 7 March 2008 (UTC) | |||
:::::Theoretically possible, but you'll need one hell of a movement behind you, and it may be a protracted battle. A one-man war is gonna be VERY difficult to maintain, (I know there's at least one person in this conversation who could attest to that...) --<span style="border:1px solid #63B8FF; font-weight:bold; color:#23238E; background-color:#D0E7FF;"> ] <sub> ] ] </sub> </span> 22:25, 7 March 2008 (UTC) | |||
::::::Is the reason why it's necessary to have a movement behind you that one person simply can't cover all the deletion debates? Theoretically, in each debate, it just takes one person with cogent arguments to make the difference, even if everyone else is against them. The closing admin can say, "You know what, he's right" and close accordingly. ] (]) 23:05, 7 March 2008 (UTC) | |||
::::::: O:-) --] (]) 23:10, 7 March 2008 (UTC) | |||
:::::::(ec) I say a movement, because an admin cannot close a discussion with, "Even though all these people said Delete, I'm gonna close as Keep just because Obuibo said so." Not even ] has that power anymore, (though if you can get him onto your side, it's a plus...) Winning an argument pretty much requires convincing others to your side. One person may be able to sway enough people to save an article from deletion, but to affect policy, you're going to need quite a few like-minded people backing you up --<span style="border:1px solid #63B8FF; font-weight:bold; color:#23238E; background-color:#D0E7FF;"> ] <sub> ] ] </sub> </span> 00:06, 8 March 2008 (UTC) | |||
::::::::@Obuibo Mbstpo: If you need to argue and defend using IAR in the face of anything other than mindless process wonking, it probably doesn't apply. @RoninBK: They could. Deletion debates are not a vote and an incredibly convincing argument put up against a bunch of crap might win out. It would probably be contested, but its not forbidden. On an aside, Jimbo can do pretty much anything he wants that won't turn massive portions of the community against him. <font face="Broadway">]'']</font>'' 04:38, 8 March 2008 (UTC) | |||
::::::::: I use IAR, or rather ] almost exclusively. --] (]) 17:38, 8 March 2008 (UTC) | |||
::::::::The argument was not "close as keep just because X said so", but "close as keep because X's argument was rock solid". Whole worlds of difference between those two things. ] (]) 06:29, 18 March 2008 (UTC) | |||
...In a perfect world. <font size="4">]</font> <font face="impact"> ]</font> (]) 23:55, 7 March 2008 (UTC) | |||
:I forgot, non-admins can close debates too. Sayyyyy, this gives me an idea... ] (]) 23:56, 7 March 2008 (UTC) | |||
:: And it'd better darn well be very ]! --] (]) 00:14, 8 March 2008 (UTC) <small>''No poisoning the well for others please!''</small> | |||
:::No, no, I would never want to do anything contrary to Misplaced Pages policy, guidelines or community standards. By the way, how about a nice game of chess? I've got ] set up so that you only need to enter the board position once and it shows you the perspective from both sides (white player and black). It rocks. ] (]) 02:52, 8 March 2008 (UTC) | |||
::::I might, but it looks like you got yourself a ] there, Kasparov... --<span style="border:1px solid #63B8FF; font-weight:bold; color:#23238E; background-color:#D0E7FF;"> ] <sub> ] ] </sub> </span> 03:51, 8 March 2008 (UTC) | |||
:::::I don't like your snide implication that I'm only the second-best chess player in the world. I would very much prefer that you refer to me as "Deep Blue." Thank you. ] (]) 16:24, 8 March 2008 (UTC) | |||
:::: My favorite game is "]", but thanks.;-) I would very much prefer you ignored all rules and instead worked for the good of the encyclopedia. --] (]) 04:01, 8 March 2008 (UTC) | |||
::::: Well said, indeed. (I guess it is true, that the only way to win is not to play.) --<span style="border:1px solid #63B8FF; font-weight:bold; color:#23238E; background-color:#D0E7FF;"> ] <sub> ] ] </sub> </span> 04:55, 8 March 2008 (UTC) | |||
Anyway, I was thinking of making more edits such as in which I voted, "'''Keep''' and record in central database of precedents for justifying future userpage-restriction-relaxing amendments to ] and ] in accordance with ], 'Documenting actual practices and seeking consensus that the documentation truly reflects practices.'" I can write analogous remarks in my keep votes in which I am attempting to shift the boundaries of article notability at AFD. Perhaps some users will copy this technique, and by keeping track of the results, we can eventually have evidence in our favor for amending the policy. I was thinking that this is an alternative method to making a formal proposal and trying to argue it on the policy talk page, as ] would seem to recommend as a more efficient method. ] (]) 17:06, 8 March 2008 (UTC) | |||
: The actual procedure is slightly more fluid. 1. People do stuff. 2. Someone notices no one wrote it down yet. 3. They write it down. Voila, policy! This is the process actually responsible for ~90% of our documentation, afaict. :-) --] (]) 17:33, 8 March 2008 (UTC)
:: Note that we ]. But if everyone *IS* doing something in a particular way at some moment, changing a policy page to say so at that moment is easy (while if no-one is doing it, changing a policy page is extremely hard) --] (]) 17:37, 8 March 2008 (UTC) <small>''caveat: there are some problems with how policy is maintained atm. I hope to try and solve them through ], so that everyone is at least on the same page... but no promises.''</small> | |||
:::I notice that the page specifically says, "This does not mean that Misplaced Pages ignores precedent. A precedent usually has reasons too, which may still be valid. There is a distinction between unresolved ] concerns over a reasonable or policy related matter, and ] trying to enforce an individual view. An issue decided in the past can always be discussed again, especially if there is new information or a question of policy being breached." | |||
:::So, here it seems like it's saying that you ''aren't'' supposed to buck policy as a way of changing practice and thus changing the policy. Or at least, that people can challenge you about it. Hmmm. Well, anyway, it's obviously set up to try to prevent a wikilawyering approach. In that case, though, I think people should quit saying "Your viewpoint is to be completely disregarded in this XFD because it's in violation of policy!" when anything you say is really an expression of your opinion of what is best for the encyclopedia and could thus be an application of ]; moreover, consensus is simply made up of individual opinions taken as an aggregate, so whether you express your opinion in an XFD, or the village pump, or the policy talk page, it contributes to the consensus. However, it just might not carry as much ''weight'' as it would if it were supported by policy, or if policy were neutral, because policies are expressions of wider consensus than the consensus in an XFD. | |||
:::Moreover, each statement made in XFDs (or Village Pump, or policy talk page, or anywhere else) could be viewed as flowing into the overall consensus much like drops of colored water from many pipettes might be poured into hundreds of beakers large and small, resulting in various shades of color in each beaker; and then they are all poured into one vat whose color reflects the contribution of all those beakers. Each drop of colored water that was put into all those little beakers contributes to the final color of the mixture at the end; the individual drops being metaphors for statements of opinion; the pipettes being metaphors for users; the beakers being metaphors for separate debates; and the vat being a metaphor for policy. ] (]) 19:00, 8 March 2008 (UTC) | |||
:::: WTF about the CCC link. Fixed. | |||
:::: Any good-faith position or edit always carries some amount of weight, depending on the reasoning you provide to support it. This weighting is not modified by policy, rather, policy pages put their own weight into the balance. Policy/guideline/essay pages explain what position a large number people actually currently support (or more accurately, what they used to support a couple of months ago). If policy/guideline/essay pages are well written, they might also contain some hints in which direction people's opinions might change. I don't think there's any hard-and-fast formula to actually calculate what the consensus is at any particular point in time. ] does document a hard and fast method to determine whether something does ''not'' have consensus. --] (]) 19:41, 8 March 2008 (UTC) | |||
:::::I love how you make policy page edits like that at a moment's notice. (Some people are ] and think they have to thrash it out on the talk page first.) Unfortunately, no one will be able to easily look later and tell what was the conversation surrounding the change. I was thinking that a cool technical change would be to add a button in the edit history to jump to that time in a user's contributions, so you can see the other edits they were making around the time that the edit was made. So, years from now, people might be able to figure out what prompted it. It could help in gaining insight into the evolution of policy (not to mention other pages). Shall I submit it to Bugzilla? Ah, I'll just be bold and do it, no need to talk about it first. See . I'm not being sarcastic, by the way. Sometimes things can come off as sarcastic when we're not engaging in ]. ] (]) 20:03, 8 March 2008 (UTC) | |||
By the way... I'd be happy to ignore all rules and work for the good of the encyclopedia if the rules would ignore me for awhile! :) ] (]) 00:20, 9 March 2008 (UTC) | |||
Wow, speaking of games, I found this cool article: http://www.wikitruth.info/index.php?title=Gaming_the_system ] (]) 05:40, 11 March 2008 (UTC) | |||
== Long usernames == | |||
] was blocked because his username was too long. I'm not arguing with the block, as it was supported by multiple admins, and besides, I'm not sure it's my place to do so. But here's a practical suggestion. If Misplaced Pages has a limit on username length, why not put a limit on the number of characters that can be entered in the username field during new account creation? ] (]) 07:31, 9 March 2008 (UTC) | |||
:Corrected typo in the above text. ] (]) 07:33, 9 March 2008 (UTC) | |||
I'd like to point out that as a side issue, re-blocking someone after they've been unblocked to make a username change request is disruptive to the username change process. —] 20:24, 10 March 2008 (UTC) | |||
: Why not prevent them from being created? ] + .{30,} = no more Usernames longer than 30 characters. No more blocks, no more long usernames. - ] (]) 22:50, 16 March 2008 (UTC) | |||
:: See ] — <span style="font-variant:small-caps;font-family:sans-serif;">]</span><sup style="font-family:serif;">(])</sup> 02:33, 21 March 2008 (UTC) | |||
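::: For illustration only: below is a minimal sketch in Python, ''not'' MediaWiki's actual implementation. The <code>username_allowed</code> helper and the 30-character cutoff are assumptions taken from the <code>.{30,}</code> pattern suggested above; the point is simply that a blacklist entry of this shape would refuse over-long names at creation time, so no block would ever be needed.
<syntaxhighlight lang="python">
import re

# Hypothetical pre-creation check mirroring a title-blacklist entry such as
# ".{30,}": the pattern matches any proposed username of 30 or more characters.
USERNAME_BLACKLIST = [re.compile(r".{30,}")]

def username_allowed(name: str) -> bool:
    """Return False if the proposed name matches any blacklist pattern."""
    return not any(pattern.fullmatch(name) for pattern in USERNAME_BLACKLIST)

print(username_allowed("ShortName"))                                 # True
print(username_allowed("AnExcessivelyLongUsernameThatRunsOnAndOn"))  # False
</syntaxhighlight>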
== Foreign military operational names (part 2) == | |||
I didn't get much response over at ], so I'm hoping to get a bit more here. As mentioned above, we've ] at ] and I think we're content to let it be resolved here. | |||
Basically, the question comes down to what level of translation to use for the article name of foreign military operations. Below is an example table. | |||
{| class="wikitable"
|-
! Original name
! Transliteration
! Partial translation
! Full translation
|-
| Fall Weiß
| Fall Weiss
| Case Weiss
| Case White
|-
| Операция Искра
| Operatsia Iskra
| Operation Iskra
| Operation Spark
|-
| 捷号作戦
| Sho-gō sakusen
| Operation Sho-Go
| Operation Victory
|-
| ケ号作戦
| Ke-gō sakusen
| Operation Ke-Go
| Operation Ke
|-
| colspan="2" align="center"| Unternehmen Barbarossa
| colspan="2" align="center"| Operation Barbarossa
|}
Note that operations named after ]s (such as Barbarossa, named after ]) would not be translated (though potentially transliterated). The same goes for names like "Operation Ke", since ] is a simple letter of the Japanese "alphabet" and thus has no meaning to translate.
Any opinions? ] (]) 23:48, 11 March 2008 (UTC) | |||
:Wow! Quite a "snag". I read as far as the first break and didn't have the energy to continue. My choice is a full translation unless it is well-known as a partial translation (e.g. Operation Barbarossa). I assume that there would be redirects from other names and that the article would mention other names. ] (]) 21:30, 4 March 2008 (UTC) | |||
:I vote for the "partial" translation, though I think we will need redirects from the transliterated version. As this is the ''English'' Misplaced Pages I am opposed to articles in non-Latin alphabets anyway. A bit of checking seems to show that the partial version is more common; however there is sure to be some hardhead searching under the transliteration. ] (]) 03:17, 5 March 2008 (UTC) | |||
: English Misplaced Pages, so use as full a translation as possible for the article title. Other names can be listed in the article. If the translation is wrong, someone will fix it. If you can't get it translated then start with what you've got. -- ] (]) 05:32, 5 March 2008 (UTC) | |||
::You may want to read ] before making any suggestions.--] (]) ♠<font color="#BB0000">♥</font><font color="#BB0000">♦</font>♣ 09:01, 5 March 2008 (UTC) | |||
::::I would have suggested the partial translation: for instance, keep 'Operation' in English (or whatever the corresponding first word is - battle, or whatever) but have the actual name in the original language; often they are named for places or the like that have the biggest impact in the original language. ] <sup>]</sup> 11:55, 5 March 2008 (UTC)
:::Having looked at the case mentioned, there's a strong note of Romanian partisans trying to own an article about a Russian offensive. If it has to be in the original language, that original language is Russian in this case, since it was after all their offensive; the Romanian should be dismissed out of hand. However, the other problem is that the transliteration from Russian into Latin characters is a little uncertain. The whole Yassy/Jassy ambiguity arises because of a longstanding Latinism that is wont to use an initial "J" to represent a "Y" consonant sound. I personally think this is an archaism and that we should stick to the phonetically obvious "Yassy", redirecting from "Jassy". As for Iassy", that's for the Italian Misplaced Pages. ] (]) 14:08, 5 March 2008 (UTC) | |||
If you can find a standard English-language reference to the operation, I would go for that. For non-Roman alphabets, my inclination would go for the full translation. For the Roman alphabet, it's more tricky... ] (]) 04:10, 12 March 2008 (UTC) | |||
My advice would be to go for a guideline that is both straightforward and subtle... if there is an overwhelmingly consistent form used in English-language publications, use that. If there's more than one, choose between them on the basis of the following: | |||
#expert advice | |||
#"trends" in the literature | |||
#fuller translations | |||
If there is no real English-Language usage, or a wide variety, go for fuller translations. However, in both cases of fuller translations, stop at whatever point makes most sense; always transliterate, translate non-proper-nouns (don't translate proper nouns even if you can), ''unless'' there is no clean, concise translation, or if the foreign word is attested in English. Some care should be taken where a phrase may be translated cleanly, but the phrase has special significance in the original language that may be neglected by the translation. I can't think of a specific example of that right now. So, ''blitzkrieg'' would never be translated unless there was a strong indication of it being translated in the literature (which I'm fairly sure there isn't), because it's well attested in English, for example. ](]) 12:32, 12 March 2008 (UTC) | |||
:I'm inclined to agree myself. I should note that since the "boom" of code names happened as late as World War II, it's very common to have the same operation referred to (in English histories) as any of the latter three in roughly equal measure. We're not really looking at the "almost-always-referred-to-as" operations, but instead the "sometimes-X-sometimes-Y" situations. ] (]) 13:19, 12 March 2008 (UTC)
::In those situations, I think my above comments are good guidance (well, I would, wouldn't I?), but another very important one is to follow the pattern of ]—mention all of them, but pick one and stick with it otherwise, for the title and (mostly) every mention in the article. There should be a discussion of the different versions if relevant. A good acid test for this is if there's such a discussion at all in the literature. Alternatives to the "chosen" version can also be used in contexts (individual sections, captions, etc) where they are more appropriate. That's the sort of guidance I'd pick, anyway. ](]) 13:48, 12 March 2008 (UTC) | |||
I've started a rough tally below. If I've placed people correctly, it looks rather like the trend is towards full translation. Still, six people is hardly enough to get a meaningful consensus. Any one else have an opinion one way or the other? ] (]) 00:16, 13 March 2008 (UTC) | |||
:I think that the guidance to use the name that is most recognisable to English-speakers and most commonly used in English-language sources is the most reasonable. Sometimes it calls for full translation, sometimes for partial translation, and sometimes for no translation (e.g. ''blitzkrieg''). ''']''' <sup>(])</sup> 17:21, 14 March 2008 (UTC)
::The ones we're dealing with are not usually well known to the public. Beyond about ten of the large ones, most operations are presented roughly equally (in terms of transliteration, partial translation and full translation) in historiographies. ] (]) 14:48, 16 March 2008 (UTC) | |||
:::If the original (assuming it is in a language that uses the Latin alphabet), transliteration, partial translation, and full translation are used approximately equally by sources, then I think that title selection should be based on avoiding titles that could be confused with other events. For instance, Operation ''Barbarossa'' cannot be confused with any other military event. Operation ''Ke'' also seems to be a fairly unique title, and so should be used in preference to ''Ke-Go''. Operation ''Victory'', however, is rather generic; in that case, I think that "Operation ''Sho-Go''" would be a better title for the article. ''']''' <sup>(])</sup> 16:00, 16 March 2008 (UTC) | |||
::::How would you define unique? Would it be safe to say that you believe any operation not named after a proper noun (Victory, Red, Decision, Star, Green, Heaven, Sunflower, etc.) should be kept in its original language? ] (]) 19:53, 17 March 2008 (UTC)
:::::Yes, essentially, with one major caveat: all of this assumes that no single use or translation predominates in reliable English-language sources. ''']''' <sup>(])</sup> 20:17, 17 March 2008 (UTC) | |||
I'd say "It depends". If there's a name widely used in English literature, that should be used no matter what. If there isn't, and a partial translation gives you a reasonable-sounding name ("Operation whatever", "Case whatever"), that should be used. Otherwise, a transliteration should be used. In no case should a non-Latin alphabet be used for an article title. --] (]) 21:40, 16 March 2008 (UTC) | |||
===Tally===
* Original: 0
* Transliteration: 0
* Partial translation: 4 (Mangoe, SGGH, Carnildo, Black Falcon)
* Full translation: 4 (Sbowers3, SEWilco, Bluap, SamBC)
== ] == | |||
We are discussing the wisdom of allowing consensual ] via user talk pages. Of course, I am taking the ] viewpoint on this issue. My view is that if people specifically opt-in to receiving certain notices, e.g. using ], then the community should not punish another user for giving that person the notices that they have indicated an interest in receiving. As "spam," by definition, is "unsolicited messages," this does not even count as internal spam, any more than weekly ] delivery to those who have signed up for it counts as spam. | |||
Some might argue that this could change the course of Misplaced Pages decisionmaking. But is that necessarily a bad thing? Some Wikipedians already have gained an advantage that might be regarded as undesirable under that logic, through social networks they have established in which they coordinate action on pending discussions through emails, IRC, etc. At least talk page communication is more transparent. There are also ways in which it could be made minimally obnoxious, e.g. through "Show - Hide" messages similar to what we see in the ubiquitous RFA thankspam. Caucusing is a normal part of decisionmaking in large assemblies, and we should allow it. ] (]) 07:15, 13 March 2008 (UTC)
:Against, because once one group opts to receive canvassing notices, everyone else will feel the need to include themselves as well, lest the people who have agreed to canvassing get to push their POV more easily. Allowing this for those who want it may seem like a good idea since it's limited, of course, to those who specifically asked for it, but it will spread to everyone eventually, purely out of fear. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''07:18, 13 Mar 2008 (UTC)''</small> | |||
::Might it not be helpful in finding users with similar interests to work with? If someone canvasses you about a subject you're interested in, perhaps you've gained a new colleague to work with on that subject. On the other hand, if you get canvassed on something you're not interested in, you can revise the restrictions to exclude that kind of topic, and keep tweaking so you're only getting the stuff you want. | |||
::We can already work through some fora to try to influence the results of discussions. What if there is no active WikiProject covering something? Canvassing can help fill the gap.
::We can speculate about potential pitfalls till the cows come home, but we know from centuries of experience with deliberative assemblies that caucusing is a natural and beneficial occurrence. I'm not sure why it would be different here. ] ] (]) 07:31, 13 March 2008 (UTC) | |||
:::Don't ]s and ]s and the various AfD topic lists and sitenotices for Big Things cover the legitimate uses of canvassing pretty well? ''']''' <sup>]</sup> 07:34, 13 March 2008 (UTC) | |||
::::Do they? What about when it's a subject presently unknown to Misplaced Pages, that doesn't have a WikiProject yet? Or there are not enough interested users in that WikiProject to keep it active? Those would tend to be the subjects that are most likely to get AfD'ed, I would think, because no one here has ever heard of it. I suppose ] could serve that purpose, but that site is pretty generalist. ] (]) 14:36, 13 March 2008 (UTC) | |||
::I disagree with it being universally beneficial. As I said, it would have to spread by default. If I (for example) know that other people are being informed of things that I'm not, I will want to be informed too -- just in case someone who disagrees with me is informed of it and can therefore take action, while I can't. That's how this would have to play out, 'cause it's human nature. It would turn Misplaced Pages too political, in my opinion. Wait'll RfA candidates start taking part in the canvassing, and see how painfully similar to the real world this place will get. Not a good idea. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''14:44, 13 Mar 2008 (UTC)''</small> | |||
:::I 100% agree with Equazcion here. Too political. Too easily abused. Too difficult to maintain. And completely unnecessary. If subjects are getting AfD-ed because "no one has ever heard of it" there are two possible reasons. 1. It isn't notable and shouldn't be here and an editor has made a good faith nomination of a bad article. (this is where AGF comes in, Obuibo Mbstpo). Or 2. An editor has made a bad faith nomination of a good article. Those nominations and discussions get nipped pretty quickly and shut down at AfD without canvassing. ] | ] | ] 14:50, 13 March 2008 (UTC) | |||
:::I would agree with Equazcion as well. AFD and other processes seem to get by just fine with their current level of participation. Also, from my experience, the more users who take part in a discussion, the more likely it will become off-topic, people will start to attack each other, and it will not gain consensus. This doesn't happen all the time in large discussions, but it's much more likely than in small discussions. Also, you say "''Some Wikipedians already have gained an advantage that might be regarded as undesirable under that logic, through social networks ...''" If it's undesirable, why would we want to increase it, but with slightly more transparency? <font face="Broadway">]'']</font>'' 19:07, 13 March 2008 (UTC)
::::I'm not saying use of IRC, etc. for such purposes is undesirable – just that it's undesirable according to the logic that Equazcion uses in arguing against opt-in canvassing. If the theory were true, that creating such lines of communication makes everyone want to do it, then everyone would be wanting to go on IRC in order to level the playing field with others who are going on IRC. But empirically that's not correct, because people aren't doing that. Instead, some people are getting fed up with the unfairness of things and leaving. It's just not good to set up artificial constraints on consensual behavior. Opt-in canvassing might make it a bit easier for the more casual users, who don't have these outside social networks set up, to stay involved in decisions that affect them. And that's a good thing. ] (]) 20:01, 13 March 2008 (UTC) | |||
:::::IRC is not a passive medium. You're suggesting we allow people to make a one-time slap-on of a tag to their userspace, which allows them to be informed of all manner of things at any point in the future without having to do anything themselves ever again. Those are two very different things. Plenty of people sign in to IRC once in a while, or even leave themselves signed in for extended periods of time -- but its real-time nature means that it still requires active participation in order to get most of the benefit, unless you want to check it once a day and read through 24 hours' worth of real-time discussion. If IRC and other communication lines are a problem, your solution would be the ultimate problem. And the fact that a problem even exists is questionable. If there were a problem as you suggest, we'd be seeing a lot more XfD participation than we do now, in my opinion. It doesn't look to me like anyone is currently canvassing outside Misplaced Pages means. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''20:07, 13 Mar 2008 (UTC)''</small> | |||
::::::IRC only becomes problematic when it's not sufficiently balanced by on-wiki methods of caucusing. ] (]) 20:09, 13 March 2008 (UTC) | |||
:::::::There is no proof of that, it is conjecture. Equazcion has this right. ] | ] | ] 20:10, 13 March 2008 (UTC) | |||
:I do think that some XFD debates do need more attention. Some articles are nominated at off-peak times, get few or no comments and end up getting relisted, sometimes multiple times. But any sort of targeted canvassing or any list updated by users should be avoided. I would suggest something like a {{tl|XFD attention}} template, using a list updated by a bot that adds any debate that has gone for 4 days with fewer than 3 comments. Just a list with links to the debates, no additional commentary, no sorting or targeting based on topic, no expansion to other types of discussion. <font face="Broadway">]'']</font>'' 20:37, 13 March 2008 (UTC) | |||
::People can already look through XfD pages and see which discussions need attention, but I suppose having a centralized list for all the various types of deletion processes would make that easier. So I'd be for this. It would also be pretty easy to have a bot handle such tagging. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''21:01, 13 Mar 2008 (UTC)''</small> | |||
:::Now ''that'', Z-man, I'd be ok with. "inclusionists" and "deletionists" alike could see the template and participate. As neutral as neutral can be around here. The problem, of course, is getting people to actually find, read, and react to the "list" of non-participated discussions. ] | ] | ] 21:04, 13 March 2008 (UTC) | |||
:::: ''This'' could be delivered to userspace via an opt-in subscription list, and not be canvassing. It would be similar to signpost or suggestbot -- simply an auto-generated list of deletion discussions that meet certain criteria, i.e. over 4 days old with fewer than 3 comments, as Z-man suggests. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''21:54, 13 Mar 2008 (UTC)''</small>
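A minimal sketch of the kind of bot filter described in the comments above, assuming a hypothetical list of open deletion debates with their start times and comment counts; the page titles, field names, and thresholds are illustrative, not an existing bot's interface:
<syntaxhighlight lang="python">
from datetime import datetime, timezone

# Hypothetical records for open deletion debates; a real bot would scrape
# these from the daily XfD log pages instead of hard-coding them.
debates = [
    {"title": "Misplaced Pages:Articles for deletion/Example",
     "opened": datetime(2008, 3, 9, tzinfo=timezone.utc), "comments": 1},
    {"title": "Misplaced Pages:Articles for deletion/Other example",
     "opened": datetime(2008, 3, 12, tzinfo=timezone.utc), "comments": 5},
]

def needs_attention(debate, now, min_age_days=4, max_comments=3):
    """Flag debates at least min_age_days old with fewer than max_comments."""
    age_days = (now - debate["opened"]).days
    return age_days >= min_age_days and debate["comments"] < max_comments

now = datetime.now(timezone.utc)
attention_list = [d["title"] for d in debates if needs_attention(d, now)]

# Output is a bare list of links: no commentary, no sorting by topic.
print("\n".join(f"* [[{title}]]" for title in attention_list))
</syntaxhighlight>
Keeping the output to bare links matches the suggestion above: no additional commentary, and no sorting or targeting based on topic.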
===Other alternatives to canvassing=== | |||
I went ahead and added this text to ]: | |||
:As an alternative to canvassing, one might establish a user subpage listing Articles for Deletion, for instance, that he wishes to draw other editors' attention to. He can then, over time, form relationships with editors he believes will be sympathetic to his general views, letting the existence of that user subpage spread through word of mouth. Other editors can watchlist it or transclude it to their own userspace (perhaps even their talk page), providing the advantages of canvassing without disruption. Patience is the key to making this work. Of course, opponents can watch that page as well, so the effect is balanced. | |||
This seems in keeping with the spirit of the page, and I believe is sufficiently different from my previous rejected proposal that it is acceptable to be bold and introduce it there. ] (]) 18:25, 14 March 2008 (UTC) | |||
It's not - get consensus here. --] (]) 18:30, 14 March 2008 (UTC) | |||
:There needs to be a substantive objection to the text, rather than just a procedural objection. Otherwise I am allowed to place it back on the page after waiting a suitable period of time. ] (]) 18:35, 14 March 2008 (UTC) | |||
I've decided to begin implementing this via ]. ] (]) 18:47, 14 March 2008 (UTC) | |||
::There's no procedural objection. It's just an objection. As much as you're allowed to be bold, anyone else is allowed to revert you. And in the case of policies especially, noting an exception that nobody's ever discussed before will definitely garner a quick revert. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''20:20, 14 Mar 2008 (UTC)''</small> | |||
:::Short of an objection being from a SPA or a banned user, an objection is an objection. Outside of BAG Technical v. Policy objections, there is no difference in objections or their weight. Users can and already do keep their own transclude-based lists of such things, but without the purpose of forming relationships with like-minded people to watch them. That is my objection to this change. ''']''' <sup>]</sup> 20:26, 14 March 2008 (UTC) | |||
::::Not to mention the fact that it isn't significantly different from the proposal that was soundly rejected above. <font face="Broadway">]'']</font>'' 22:45, 14 March 2008 (UTC) | |||
:::::Just for the sake of clarification of how ] works, a procedural objection would be to say "This needs to get consensus first." Unless followed up with an objection that addresses the merits, it's meaningless because it's a conversation non-starter. Thus, the initial bold edit can be restored after a reasonable amount of time passes. You can't have BRD without the D. But now we've had some objections that address the merits so the question of whether the initial objection was procedural or substantive is moot. ] (]) 22:52, 14 March 2008 (UTC) | |||
::::::Not true. A significant policy change should require a demonstration of consensus to back it up, except when non-controversial. Example: I change ] to say information no longer needs to be verifiable. Fine. I'm reverted based purely on the fact that it's too big a change to make without a demonstration of consensus. Anything wrong with that revert? No, because burden of proof falls on the person who made the change, not the other way around. After that revert, try to convince people that your edit should be made again. It's not up to the person who reverted you to defend the status quo. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''15:55, 16 Mar 2008 (UTC)''</small> | |||
:::::::It is, actually. Because for all we know, the change that was boldly made was a revert of another bold edit made a long time ago, that no one caught. Arguing that "it's policy" or "it's process" isn't enough; see ]. ] (]) 00:56, 22 March 2008 (UTC) | |||
::::::::See ]. If no one reverts it, it can be considered accepted. The argument is not "it's policy" but rather that they prefer the status quo. If we had to reestablish consensus on a policy every time someone made an edit to it, simply to defend the status quo, we would never have any policies. <font face="Broadway">]'']</font>'' 01:11, 22 March 2008 (UTC)
::::::::If the change is a revert of something that didn't (doesn't) have consensus, then it's not bold and a summary explaining that should be enough. And even so, no matter what, the question is whether or not it's controversial, as in a significant change to the way things are currently done, that many people might have a problem with. Anyway, that's not what we're talking about here, is it. You made a controversial change, everyone knows that, and so you have to defend it. It's again not up to everyone else to defend the status quo. If it were, then anyone could make any change they wanted to, no matter how ridiculous, and then demand that everyone else prove that it ''shouldn't'' be there. Hence the concept of "burden of proof". If one guy wants a change, he has to show that there's reason to make it. Not the other way around. <small style="font:bold 10px Arial;display:inline;border:#009 1px dashed;padding:1px 6px 2px 7px;white-space:nowrap">] ]/] ''01:09, 22 Mar 2008 (UTC)''</small>
:::::::::Yeah, and suppose the editor makes his case, and the reverter won't respond. You're saying that the reverter gets a veto, despite unwillingness to respond? No way; someone has to give a substantive objection, or else their silence can be assumed to imply they have been won over by the bold editor's arguments, and they consent to the revert being reverted. ] (]) 03:52, 22 March 2008 (UTC) | |||
== What's wiki policy on someone plagiarising wiki articles and printing them as their own? ==
I have found a self-published internet book for sale that consists entirely of two plagiarised wikipedia articles.
What should I do about this? It doesn't appear to be covered under the mirrors and forks section.
:You should probably raise it at ], pointing out the specific work. ] (]) 19:35, 13 March 2008 (UTC) | |||
thanks] (]) 22:48, 14 March 2008 (UTC) | |||
It seems that this is OK to do as no one is interested; I'll maybe just print my own book using just wiki work, seems easy enough.] (]) 22:51, 14 March 2008 (UTC)
:You didn't post at the GFDL compliance talk page, so I'm unsure of why you're saying no one is interested in reviewing the potential violation. Additionally, you did not specify what work is a problem, so it's impossible for any of us here at the pump to evaluate your claim. ] (]) 23:04, 15 March 2008 (UTC) | |||
== Misplaced Pages:Mediation has been marked as a policy == | |||
{{lw|Mediation}} has recently been edited to mark it as a ]. This is an automated notice of the change (]). -- ] (]) 18:51, 13 March 2008 (UTC) | |||
:The page has been marked as policy since March 2007, it just hasn't been categorized as such (nor listed at ]). An editor just added this category to the page (and added it to the list) but otherwise didn't change the page. <small>(Nothing to see here, folks; you can just move on.)</small> -- <font style="font-family:Monotype Corsiva; font-size:15px;">] </font> ] 14:07, 15 March 2008 (UTC) | |||
== Throwing several consensus-gathering projects into one basket == | |||
Several discussions (such as the ones at ] and ]) have dragged on a bit and seem to get stuck in some of the same places. The fact that we unfortunately let some of the style guidelines contradict each other (but we're working on that) is part of the problem, but there's a bigger issue. Every other day, I see a new question along the lines of "Encyclopedias generally look like this, why don't we?". And the fact that the printed ] is approaching means we can't be certain that consensus hasn't changed or won't change on look-and-feel issues. Is there consensus to put ''See also'' first and ''External links'' last in end sections? How should quotations be handled? Where should lines and pages break? Should every book cite name the publisher? Except for that last bit, which just came up today, these are long-running discussions. People tend to care more about the appearance of printed material, and take it more seriously. Even Misplaced Pages policy takes printed material more seriously; see ]. (Btw, I've read everything I could find at ], including the archived discussion from 2003, including lots of input from Jimbo, and I don't see where any of these look-and-feel issues have ever come up in the context of Misplaced Pages Version 1.0. I've only seen them come up as off-hand remarks in current discussions. I'd be very happy to find out that I'm wrong.)
There's disagreement over the extent to which these issues should be discussed on policy pages vs. guideline pages. Stylistic choices follow guidelines, but if there really is consensus that, for instance, if the ''External links'' section exists, it should ''always'' be the last section (especially in the paper Misplaced Pages ... printed encyclopedias rarely allow authors discretion in look-and-feel issues), then are we talking about policy? Assuming that no one wants to go through a huge number of articles by hand looking for irregularities, how do we use bots appropriately, and aren't bots more suitable for policy issues than guidelines? You can see why we get stuck. | |||
We have to get consensus before we do anything about any of this, but so far, everyone has been hesitant to post a narrow style question on a lot of wikiproject talk pages, for fear of looking spammy. But if we don't get wide consensus, we'll get flamed for that too. I'm wondering if the best way to proceed would be to generally get the word out (widely, but I'm agnostic on how widely) that certain look-and-feel issues need to be discussed, especially in the context of Version 1.0, so that we can figure out which things fall in the category of being so widely supported that standardization would be appropriate. So, you guys tell me, because I really don't know: which questions here are policy questions, and should those policy questions be dealt with here first, or would it be better just to create a page somewhere where people could nominate issues to be discussed, and then come back here with the results? - Dan ] (]) 21:27, 13 March 2008 (UTC) | |||
:Questions... This seems to be about conforming articles that will be placed in Version 1.0 to a uniform format style... a desire which I can understand... but does 1.0 have to conform to the online version of Misplaced Pages and vice versa? Do ''we'' need to have the article match exactly what goes to print? And is it really important that all our articles follow the same format style? ] (]) 13:14, 14 March 2008 (UTC)
::There are a number of topics, particularly layout and look-and-feel topics, where it's perfectly reasonable to assume that consensus might change for a printed Misplaced Pages, so the presumption against re-opening discussion on already-settled consensus dies. I don't know if this answers your questions, but there's a principle of database design that storing two different versions of roughly the same information in two different places is a Very Bad Thing: people think they're referring to one when they meant the other, the data gets out of sync and therefore pulls down the credibility of both, it's an order of magnitude more work to update the data in both places and continually check the two lists against each other ... the list goes on. For all these reasons, it would be a bad idea to "fork" Misplaced Pages Version 1.0 before it's really, really necessary, and you'll see support for that from Jimbo and others at the link I cited ... I'll pull out one of the messages from Jimbo if anyone is interested. And, really, how many people would have a cow if the ''External links'' in their favorite article moved to the end? These are just not the kinds of issues that inspire heated debate, generally. The debates that have dragged on have been more in the nature of "What gives YOU the right?", which is a perfectly valid objection. So, my proposal is to get everyone together who cares (which may not be all that many people), throw issues on the table that ''might'' be affected by Version 1.0, brainstorm them, come back here to get permission for anything that involves standardization, bots or policy, as opposed to all the little things that are pleasurable to style wonks like me, update policy and guidelines to reflect a world where we're trying to look good on paper as well as on the web, <s>(as long as that doesn't unduly burden anyone)</s> making sure that, through a combination of good design, clear guidelines, and helpful bots, it's not any harder for anyone to function under the new guidelines, and deploy. - Dan ] (]) 18:04, 14 March 2008 (UTC)
:::P.S. I did the strikeout because I'm fairly confident that there's no need for this to be a struggle between conflicting desires; I don't have any Machiavellian plans to turn this into anything other than a way to find out those printed-encyclopedia style issues that almost everyone already agrees on, which aren't going to be very different from what we've got already, and we've already got years of consensus to give us a pretty good idea what those are. The new part of this is thinking about deploying bots to standardize, and I'm agnostic on whether bots should make any actual changes or simply alert people of changes that should be made. It seems to me it would work either way, simply because if bots or software aren't working, people will holler and we'll stop. - Dan ] (]) 18:44, 14 March 2008 (UTC) | |||
::::P.P.S. I'm being deliberately vague about the topics because I want to be very careful not to "control" the process. We open a page, anyone who thinks they can get consensus for a look-and-feel or layout issue concerning Misplaced Pages 1.0 throws it on the table; we conduct a large poll saying "is there really consensus for this?" (stating the arguments pro and con, but without too much clever argumentation ... none of the stuff I've seen is breathtakingly important, even to style wonks), we ask if there's also consensus for using a bot to help flag irregularities, and if we can get a reliable bot running (or help from MediaWiki, if the techs decide they like that approach better than a bot), we do it. Simple, in theory, and hopefully in practice. - Dan ] (]) 19:00, 14 March 2008 (UTC) | |||
:::::Thanks for raising this issue; it is certainly interesting to those of us working on Misplaced Pages 1.0. FYI, we now have a test selection (around 15,000 articles) put together for Version 0.7, and once we have some minor bugs resolved this will be ready for all to take a look at. At present the immediate plans are for a DVD version, but we would very much like to get a paper project going once the DVD is organised (are you interested in helping?). Our system is to create a dump of articles on a given day, and that becomes the snapshot we release, and there is always a lot of cleanup on this static dump - removing unwanted tags, unlinking redlinks, etc. We could re-organise the order of sections at that stage if needed, but that is an extra bit of work we'd rather not do.
:::::As I see it, the problem you describe breaks down into two parts, formatting of articles online and formatting of articles in offline releases. Clearly the latter is influenced by the former, but there are quite a few things we change in going from one form to the other. I think standardization of format for the online version is a good end in itself, which will of course benefit WP1.0 as well. Must dash now, ] (]) 15:18, 15 March 2008 (UTC) | |||
::::::I've been following progress at Misplaced Pages 1.0. As soon as this proposed project is finished, along with the WP:WPMoS project to find and <s>correct</s> resolve all contradictions among style guidelines, I'll be happy to help. I didn't want to come to you guys first and ask if you wanted to host this, because I felt that some projects get torpedoed for not taking the "blank slate" approach: state the goals without saying who owns it or what the topics can and can't be. But ... you're the only ones to show up so far. Is there any objection to allowing the Misplaced Pages 1.0 guys to host a page in their project where people can put these kinds of issues on the table? - Dan ] (]) 05:44, 16 March 2008 (UTC) | |||
:::::::At ]'s invitation, I have started the project and created a talk page at ]. Please direct future input there. - Dan ] (]) 15:20, 19 March 2008 (UTC) | |||
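A minimal sketch of the kind of flagging check discussed in this thread, assuming the bot only looks at whether an ''External links'' section, if present, is the last level-2 section; the sample wikitext and function names are illustrative, and a real bot would fetch page source through the API rather than hard-coding it:
<syntaxhighlight lang="python">
import re

# Illustrative wikitext; a real bot would fetch the article source via the API.
wikitext = """
== History ==
Some prose.

== External links ==
* http://example.org

== See also ==
* [[Another article]]
"""

def level2_headings(text):
    """Return the level-2 section headings in order of appearance."""
    return [m.group(1).strip()
            for m in re.finditer(r"^==([^=].*?)==\s*$", text, re.MULTILINE)]

def external_links_misplaced(text):
    """True if an 'External links' section exists but is not the last section."""
    headings = level2_headings(text)
    return "External links" in headings and headings[-1] != "External links"

if external_links_misplaced(wikitext):
    # A flagging bot would add the page to a cleanup list rather than edit it.
    print("Flag for review: 'External links' is not the final section.")
</syntaxhighlight>
Limiting the bot to flagging rather than editing matches the point above that bots could simply alert people to changes that should be made instead of making them.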
== Linking to Old Book resources == | |||
There are a number of internet projects to scan old, out-of-copyright books and make them available as an online resource, e.g. Project Gutenberg and the Internet Archive.
I cannot find any policy or recommendation on how this resource might, or even should, be linked to. For example, consider that Joseph Malet Lambert wrote a book called ''Two thousand years of gild life'', which is available from http://www.archive.org/details/twothousandyears00lambuoft. The book itself is probably not notable enough for its own page, and none exists. However, a link from the existing Joseph Malet Lambert page to the book probably would be justified. Some books in these archives might be notable enough for pages with a link, e.g. ''The Works of Lord Byron, Letters and Journals, Volume 1'', which can be downloaded from http://www.archive.org/details/theworksoflordby08901gut
Some policy or guidance on how this should be handled is required. <small>—Preceding ] comment added by ] (]) 13:57, 14 March 2008 (UTC)</small><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
: We don't link to online resources merely because they exist. If the resource is used as a source for an encyclopedia article, ] covers such citation of the source. If a book is notable enough that an article needs to be created for the book, then the online copy might get linked to, but not every book is notable. -- ] (]) 20:36, 14 March 2008 (UTC) | |||
::That said, I think linking to an online copy of a writer's book in an article about a writer is appropriate per ] (assuming, of course, that the online copy is not a copyvio). ] (]) 20:41, 14 March 2008 (UTC) | |||
:::I think it is appropriate eg. in a bibliography section. ] (]) 20:43, 14 March 2008 (UTC) | |||
:::: I think those uses are allowed per ]. -- ] (]) 20:47, 14 March 2008 (UTC) | |||
:Where a title in PG is suitable for linking from a specific article, ] can be useful in implementing that link.] (]) 21:46, 14 March 2008 (UTC) | |||
== ] - possible solution to dispute resolution scaling problems == | |||
Please review and tweak: ]. A very simple three-step system that can make trusted, final decisions on very tricky or complex matters, based on evaluations from trusted, uninvolved users on a given case in the dispute resolution process. It does add new process, but not many layers, or particularly complex layers by any stretch of the imagination. It's built entirely around consensus and the idea of certification, and is the opposite of Votes For Banning. Please weigh in at ]. The community needs a way to move forward in a trusted, fair manner on high-end, complex problems that are either unworkable for normal ], ], or ] to handle, or that the Arbitration Committee can't take on, or that the Arbitration Committee relegates back to the Community. <span style="font-variant:small-caps"><font color="#800080">] § ]/]</font></span> 22:16, 14 March 2008 (UTC) | |||
:I am not sure what problems exactly cannot be covered by ], ], ], ], and ] (feel free to edit my comment and add more). I don't know about you, but I think the number of problems those 5 processes cannot handle seems pretty small to me. ]<sup>(] for ])</sup> 01:53, 15 March 2008 (UTC)
::: None of those scale. That's why. | |||
::: Examples of scaling systems are ] <small>''(originated as an emergency measure when ] stalled, and the ] almost followed)''</small>, ] <small>''(similar to MEDCAB... actually predates it slightly IIRC)''</small>, and ] <small>''(created when the non-scaling ] finally failed)''</small>
::: A scaling supplement/replacement/backup for Arbcom ''would'' be kind of nice. I've bookmarked the page. --] (]) 02:23, 15 March 2008 (UTC) | |||
:::: Okay then: I am not sure what problems exactly cannot be covered by ], ], ], ], and ] (feel free to edit my comment and add more). I don't know about you, but I think the number of problems those 5 processes cannot handle seems pretty small to me. ]<sup>(] for ])</sup> 02:35, 15 March 2008 (UTC)
::::: The problem is specifically the Arbitration Committee, which is not designed to scale. Lawrence Cohen is trying to think of a drop-in-replacement/backup/supplement to precisely the arbitration committee; and his objective is to come up with something that ''does'' scale. His proposal may or may not be it, but it's interesting nevertheless. --] (]) 03:15, 15 March 2008 (UTC) <small>''the reason why scalability is the holy grail is a topic too large to fit in this margin. ;-)''</small> | |||
(outdent) And anything, at all, that will do any kind of role AND scale as we require like this will require new process. I'm keeping it deliberately simple on the proposal as much as possible. Everyone please read the talk page there. It's really a very, very simple process: you ask for Remedies to be generated--a Request for Remedies. A consensus of uninvolved users has to certify your request as valid. The team of the elected/trusted Remedy Committee then--but only the uninvolved Committee members, recusal is compulsory!--drafts up a set of "suggested" remedies based on the certified request (all this by the way needs no "clerks" or anything like that--it's not like anyone here is above hitting "copy/paste" once a week). They post the suggested remedies, which then go to the RfR, and the wider community, *all* users, weigh in and certify any valid suggestions. Certified/supported consensus remedies go into effect. It's basically an attempt to leash and focus mob rule into something that works, is scalable, is fair (the limitations on involved users), and will have the benefit of simple, rigorous consensus checks before anything goes into effect, so no one can complain about getting railroaded. <span style="font-variant:small-caps"><font color="#800080">] § ]/]</font></span> 05:12, 15 March 2008 (UTC)
===Good summary posted=== | |||
This was added to the ] proposal today: | |||
;Request for remedies as a part of dispute resolution | |||
The RfR process comes after venues such as mediation (formal or informal), third opinion, administrator noticeboards (any), and requests for comment, but before the Arbitration Committee. It is a framework for generating an unbiased, neutral, and fair solution to a dispute. The committee will take a complaint certified by the community. It will provide suggested solutions, based on policy, precedent, and good practice.
Request for remedies is intended to complement the existing dispute resolution process by addressing three basic points: | |||
# Are the issues raised valid and accurately portrayed?
# What are the best solutions and remedies to these problems, if they are valid? | |||
# Does the community support these suggested remedies? | |||
<span style="font-variant:small-caps"><font color="#800080">] § ]/]</font></span> 16:18, 15 March 2008 (UTC) | |||
== Image licensing prohibition == | |||
] will be celebrating its first birthday in two weeks. Is anyone still trying to fight against it? — ] 14:29, 15 March 2008 (UTC) | |||
:Yes, lots of them. What I haven't seen is anyone trying to ''change'' it. --] (]) 21:45, 16 March 2008 (UTC) | |||
::Where are they? I've basically given up. The will of a few people on the Board completely overrides community consensus, apparently. — ] 12:08, 20 March 2008 (UTC) | |||
== Last notification for a while on this.... == | |||
On the back of a few threads above, a proposed policy now exists in project space at ]. Comments, criticisms, etc. would be most appreciated. ] (]) 16:42, 15 March 2008 (UTC) | |||
== CSD Coverage == | |||
Is there a CSD that covers ]? I marked it as vandalism, but I believe it is something else. The article is a supposed project started by students at York University and is meant for people on Misplaced Pages to add to a new fairy tale story they are creating. It is completely ridiculous, but I am not sure which CSD to mark it as. ]<sup>(] for ])</sup> 17:56, 15 March 2008 (UTC)
:] deleted it as a test page, which is probably about as close as CSD can come. I could also see an argument for vandalism. If I had seen it, I would have PRODed it. Even though it was ridiculous, it didn't really fit any of the CSD. Not everything that doesn't belong in an encyclopedia needs to be speedied. ] (]) 18:32, 15 March 2008 (UTC)
== Video Game developer information == | |||
I have found numerous video game articles that, while they might give good coverage of the company, engine, etc. that developed the game, do not list any of the lead designers. As similar forms of media, such as movies, list actors, directors, etc., I believe video game articles should be required to list the lead developers, preferably in an infobox. They do not need to list every Tom, Dick and Jane that worked on it, but I find the lack of any names even for semi-popular series such as ] disturbing, yet articles for other media of similar note, such as the movie ], are able to list actors, directors, etc. <small>—Preceding ] comment added by ] (] • ]) 20:47, 15 March 2008 (UTC)</small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
:"Required..."? Article writers are not required to include any particular piece of information beyond the demonstration of the subject's notability. Go talk to the video games Wikiproject and suggest an addition to their infobox if necessary. ''''']''''' 21:59, 15 March 2008 (UTC) | |||
== Bilingual Periodic Table == | |||
Hi, | |||
I created a Hebrew-English periodic table, but don't know where to put it. Should it be in the mainspace, or maybe in wikisource, or perhaps somewhere else? Currently, it's in my userspace, ]. With some tweaking of the template, which I would be willing to do, this model could be adapted for any two languages.
Thanks, | |||
-]<sub>] ]</sub> 23:33, 15 March 2008 (UTC) | |||
:I really don't think this has a place in the English Misplaced Pages - sorry. If we had this for Hebrew-English, we'd be opening the door to similar articles for periodic tables for Russian-English and Spanish-English and 250+ other languages. And then for other articles with translations (list of Nobel prize winners, for example?). Nor is wikisource the place: that's for ''free artistic and intellectual works created throughout history, and to present these publications in a faithful wiki version''. It's remotely possible that the ] would take this. Otherwise, your best bet may be ], including such wiki farms as ]. -- <font style="font-family:Monotype Corsiva; font-size:15px;">] </font> ] 14:21, 16 March 2008 (UTC)
== Proposed Policy (Suicide threats, threats of violence, et cetera) == | |||
Could a few editors come over and take a look at ], see what you think, and if inclined, comment on the talk page at ]. Thanks bunches, ] (]) 02:12, 16 March 2008 (UTC) | |||
== An issue of civility... ==
I noted that wikipedia has a policy of civility that explains why incivility is wrong:
"Because it makes people unhappy, resulting in discouragement and departure from Misplaced Pages" | |||
Why then does the article on Baha'u'llah below note the sensitivity of Baha'is regarding use of the photograph of Baha'u'llah
There are two known photographs of Bahá'u'lláh. This photo was taken while he was in Adrianople (reproduced in William Miller's book on the Bahá'í Faith). Copies of both pictures are at the Bahá'í World Centre, and one is on display in the International Archives building, where the Bahá'ís view it as part of an organized pilgrimage. '''Outside of this experience Bahá'ís prefer not to view this photo in public, or even to display it in their private homes, and Bahá'í institutions have requested the press not to publish the image in the media.''' | |||
Source: | |||
http://en.wikipedia.org/Baha%27u%27llah | |||
and then includes it anyway? | |||
It would seem to violate the policy on civility.
- Arthur Gregory <small>—Preceding ] comment added by ] (]) 02:41, 16 March 2008 (UTC)</small><!-- Template:UnsignedIP --> <!--Autosigned by SineBot--> | |||
:I don't think it violates civility to publish the photo even if Bahais wish it were not published. Misplaced Pages also is not ]. ] ] 02:54, 16 March 2008 (UTC) | |||
::This seems to be analogous to the Muhammad situation and Muslims. I think the ideal solution would be to put the picture "below the fold" (down far enough so you have to scroll down to it), with a content warning and a link at the top of the page to a version of the page without the potentially offensive photo. -- ] (]) 08:20, 16 March 2008 (UTC) | |||
::: "Below the fold" assumes something about the technology being used to look at the page. I have a portrait monitor - there is ''no'' below the fold for me. Your second suggestion of creating a fork of the page has also been rejected time and again by the community - it's a non-starter. --<span style="font-family: Mistral">]</span> 13:12, 16 March 2008 (UTC)
::::Actually, we tell users how to ]. I'm just suggesting we make it easier to do. -- ] (]) 22:14, 16 March 2008 (UTC) | |||
] has to do with editors fostering an atmosphere of civility ''within Misplaced Pages''. It has nothing to do with whether anyone in the world might, for whatever reason, feel insulted, blasphemed, exposed, etc., by the fact that Misplaced Pages has documented something. Other policies might have an impact on that (], for one), but "X doesn't want us to cover Y" is no more relevant than "X ''wants'' us to cover Y." It would certainly gut the project if it were otherwise, as there is very little information that the simple fact of its distribution will not offend someone. ] (]) 21:34, 16 March 2008 (UTC) | |||
:Agreed. It's impossible to pander to the tastes and requirements of every human with an internet connection.
] (]) 01:49, 17 March 2008 (UTC) | |||
== How to refer to subject who changed his name? == | |||
I am working on an article about ], who was born Robert Martin and used that name for many years before adopting his pseudonym, Stephen Donaldson. As Robert Martin, he did many newsworthy things. Most published accounts of those activities, including all contemporary ones, refer to him by his birth name, Robert Martin, and I sometimes quote these. Later articles about him use the name Stephen Donaldson. To add to the complexity, he had another pseudonym/nickname, "Donny the Punk". How should the article refer to him? (He was known as "Donny" to his friends, of whom I was one.) You can see my dilemma in the article section on his ].] (]) 07:06, 16 March 2008 (UTC) | |||
:Most commonly known name is used. If ] is the name he is most frequently referred to as, then it is used. It's why we have ] over William Clinton, ] over Curtis James Jackson, etc. <font face="Verdana">] <sup>'''(])'''</sup></font> 08:36, 16 March 2008 (UTC) | |||
::Thanks for the reply. It's still not clear to me how to refer to him before he used his later name. For example, would you write, "As a child, ]..." or use the name he was known by then? What if you were quoting a review of his performing as a child in which his birth name was used? ] (]) 18:57, 16 March 2008 (UTC) | |||
:::I've also seen this issue discussed in articles on royalty, whose title changed throughout their lives by ascending from prince to king or whatever. The practice as I recall is to avoid anachronisms—don't refer to a time in an individual's life by using a title (or name) that the individual did not possess at that time. ] seems to have it right, using "Jackson," his given last name, to refer to him in his early life. ] (]) 19:32, 16 March 2008 (UTC) | |||
::::Thanks a lot. ] (]) 20:52, 16 March 2008 (UTC) | |||
:I believe standard procedure is to title the article by the best-known name, and to use that name for any statements that do not involve a specific point in time. For statements that do involve a point in time, use the name the person went by at that time. --] (]) 21:48, 16 March 2008 (UTC) | |||
== WikiProject Universities Article Guidelines == | |||
Once again I am listing ] ] here to gain consensus. It appears we have reached consensus on the ]. Please comment ] and support or oppose the proposed guidelines. Thanks much!—<b>] ]</b> 00:48, 17 March 2008 (UTC) | |||
== RFC mandatory quotation of public domain text ==
Is it mandatory or acceptable to require that public domain text be in a quotation style? Please participate in the discussion at ]. -- ] (]) 03:32, 17 March 2008 (UTC)
== Deleting perfectly good articles is a bad idea ==
===opening comments===
I'm sure this kind of opinion has been expressed to death already, but I'm going to write it anyways.
== LLM/chatbot comments in discussions ==
{{clear}}
{{discussion top|1=There is a strong consensus that comments that do not represent an actual person's thoughts are not useful in discussions. Thus, if a comment is written entirely by an LLM, it is (in principle) not appropriate. The main topic of debate was the enforceability of this principle. Opinions vary on the reliability of GPTZero, and I would say there is a rough consensus <em>against</em> any form of AI detection that relies solely on it or other automated tools. Overall, however, I see a consensus that '''it is within admins' and closers' discretion to discount, strike, or collapse <em>obvious</em> use of <em>generative</em> LLMs''' or similar AI technologies. This is a ] matter, and as with other WP:DUCK matters, there is not going to be a blackletter definition of "obvious", but I think we can all agree there are some comments that could only ever be LLM-generated. As with other matters of discretion, like behavioral sockpuppetry blocks, experienced users can apply their best judgment, subject to community review.{{pb}}The word "generative" is very, very important here, though. This consensus does not apply to comments where the reasoning is the editor's own, but an LLM has been used to refine their meaning. Editors who are non-fluent speakers, or have developmental or learning disabilities, are welcome to edit here as long as they can follow our policies and guidelines; this consensus should not be taken to deny them the option of using assistive technologies to improve their comments. In practice, this sets a good lower bound for obviousness, as any comment that could conceivably be LLM-assisted is, by definition, not obviously LLM-generated.{{pb}}Regarding comments that are more borderline in LLM likelihood, it's worth reviewing what's already allowed by policy and guidelines: LLM-written comments will usually add little of substance to a discussion, and closers are already expected to ignore unhelpful comments. If comments contain fabrications of fact, that is a blockable form of ], whether or not the fabrication is by a human or a hallucinating chatbot. And while ] says disruptive comments "are usually best left as-is or archived", there <em>is</em> a "usually" in there, so removing patently disruptive content is within editors' discretion, whether or not LLM usage is unambiguous. <span style="font-family:courier"> -- ]</span><sup class="nowrap">[]]</sup> <small>(])</small> 21:21, 12 January 2025 (UTC)}}
<!-- ] 01:01, 6 January 2025 (UTC) -->{{User:ClueBot III/DoNotArchiveUntil|1736125272}}
Should admins or other users evaluating consensus in a discussion discount, ignore, or strike through or collapse comments found to have been generated by AI/LLM/Chatbots? 00:12, 2 December 2024 (UTC)
I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, GPTZero and other such tools are very good at detecting this.
I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner. ] ] 00:12, 2 December 2024 (UTC)
:Should not be deleted or ignored but needs to be clearly identified. ] (]) 16:23, 5 January 2025 (UTC)
*Seems reasonable, as long as the GPTZero (or any tool) score is taken with a grain of salt. GPTZero can be as wrong as AI can be. ] (]) 00:32, 2 December 2024 (UTC) | |||
*Only if the false positive and false negative rates of the tool you are using to detect LLM content are very close to zero. LLM detectors tend to be very unreliable on, among other things, text written by non-native speakers. Unless the tool is near perfect, it's just dismissing arguments based on who wrote them rather than their content, which is not what we do or should be doing around here. ] (]) 00:55, 2 December 2024 (UTC)
*:In the cases I have seen thus far it's been pretty obvious; the tools have just confirmed it. ] ] 04:08, 2 December 2024 (UTC)
*:The more I read the comments from other editors on this, the more I'm convinced that implementing either this policy or something like it will bring very significant downsides on multiple fronts that significantly outweigh the small benefits this would (unreliably) bring, benefits that would be achieved by simply reminding closers to disregard comments that are unintelligible, meaningless and/or irrelevant regardless of whether they are LLM-generated or not. For the sake of the project I must withdraw my previous very qualified support and instead '''very strongly oppose'''. ] (]) 02:45, 3 December 2024 (UTC)
*I think it should be an expressly legitimate factor in considering whether to discount or ignore comments either if it's clear enough by the text or if the user clearly has a history of using LLMs. We wouldn't treat a comment an editor didn't actually write as an honest articulation of their views in lieu of site policy in any other situation. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 00:59, 2 December 2024 (UTC) | |||
* I would have already expected admins to exercise discretion in this regard, as text written by an LLM is not text written by a person. We cannot guarantee it is what the person actually means, especially as it is a tool often used by those with less English proficiency, which means perhaps they cannot evaluate the text themselves. However, I do not think we can make policy about a specific LLM or tool. The LLM space is moving fast, en.wiki policies do not. Removal seems tricky, I would prefer admins exercise discretion instead, as they do with potentially canvassed or socked !votes. ] (]) 01:06, 2 December 2024 (UTC) | |||
*:As the discussion has moved forward below, I feel I can shift to a more explicit '''support''' in terms of providing guidance to closers and those otherwise evaluating consensus. ] (]) 17:33, 2 January 2025 (UTC) | |||
*'''Support''' discounting or collapsing AI-generated comments, under ''slightly looser'' conditions than those for human comments. Not every apparently-AI-generated comment is useless ] nonsense{{snd}}beyond false positives, it's also possible for someone to use an AI to help them word a constructive comment, and make sure that it matches their intentions before they publish it. But in my experience, the majority of AI-generated comments are somewhere between "pointless" and "disruptive". Admins should already discount ''clearly'' insubstantial !votes, and collapse ''clearly'' unconstructive lengthy comments; I think we should recognize that blatant chatbot responses are more likely to fall into those categories. ] (]) 02:11, 2 December 2024 (UTC) | |||
*'''Strongly Support''' - I think some level of human judgement on the merits of the argument are necessary, especially as GPTZero may still have a high FPR. Still, if the discussion is BLUDGEONy, or if it quacks like an AI-duck, looks like an AI-duck, etc, we should consider striking out such content.{{pb | |||
}}- sidenote, I'd also be in favor of sanctions against users who overuse AI to write out their arguments/articles/etc. and waste folks' time on here. ] (]) 02:20, 2 December 2024 (UTC)
*On a wording note, I think any guidance should avoid referring to any specific technology. I suggest saying "... to have been generated by a program". ] (]) 02:54, 2 December 2024 (UTC) | |||
*:"generated by a program" is too broad, as that would include things like speech-to-text. ] (]) 03:08, 2 December 2024 (UTC) | |||
*:Besides what Thryduulf said, I think we should engage with editors who use translators. ] (]) 03:45, 2 December 2024 (UTC) | |||
*::A translation program, whether it is between languages or from speech, is not generating a comment, but converting it from one format to another. A full policy statement can be more explicit in defining "generation". The point is that the underlying tech doesn't matter; it's that the comment didn't feature original thought from a human. ] (]) 03:57, 2 December 2024 (UTC) | |||
*:::Taking Google Translate as an example, most of the basic stuff uses "AI" in the sense of machine learning, and nowadays they use neural models, even for the basic free product. ] (]) 08:39, 2 December 2024 (UTC)
*'''Support'''. We already use discretion in collapsing etc. comments by SPAs and suspected socks, it makes sense to use the same discretion for comments suspected of being generated by a non-human. ] (]) 03:07, 2 December 2024 (UTC) | |||
*'''Support''' - Someone posting "here's what ChatGPT has to say on the subject" can waste a lot of other editors' time if they feel obligated to explain why ChatGPT is wrong again. I'm not sure how to detect AI-written text but we should take a stance that it isn't sanctioned. ] (] <nowiki>|</nowiki> ]) 04:37, 2 December 2024 (UTC) | |||
*'''Strong Support''' - I've never supported using generative AI in civil discourse. Using AI to participate in these discussions is pure laziness, as it is substituting genuine engagement and critical thought with a robot prone to outputting complete garbage. In my opinion, if you are too lazy to engage in the discussion yourself, why should we engage with you? ] (]) 05:26, 2 December 2024 (UTC) | |||
*'''Comment''' - I'm skeptical that a rule like this will be enforceable for much longer. ] (]) 05:39, 2 December 2024 (UTC) | |||
*:Why? ] (]) 12:22, 2 December 2024 (UTC) | |||
*::Because it's based on a potentially false premise that it will be possible to reliably distinguish between text generated by human biological neural networks and text generated by non-biological neural networks by observing the text. It is already quite difficult in many cases, and the difficulty is increasing very rapidly. I have your basic primate brain. The AI companies building foundation models have billions of dollars, tens of thousands, soon to be hundreds of thousands of GPUs, a financial incentive to crack this problem and scaling laws on their side. So, I have very low credence in the notion that I will be able to tell whether content is generated by a person or a person+LLM or an AI agent very soon. On the plus side, it will probably still be easy to spot people making non-policy based arguments regardless of how they do it. ] (]) 13:52, 2 December 2024 (UTC) | |||
*::...and now that the systems are autonomously injecting their output back into model via chain-of-thought prompting, or a kind of inner monologue if you like, to respond to questions, they are becoming a little bit more like us. ] (]) 14:14, 2 December 2024 (UTC) | |||
*:::A ] is intrinsically nothing like a human. It's a bunch of algebra that can compute what a decently sensible person could write in a given situation based on its training data, but it is utterly incapable of anything that could be considered thought or reasoning. This is why LLMs tend to fail spectacularly when asked to do math or write non-trivial code. ] (]) 17:20, 2 December 2024 (UTC) | |||
*::::We shall see. You might want to update yourself on their ability to do math and write non-trivial code. Things are changing very quickly. Either way, it is not currently possible to say much about what LLMs are actually doing because mechanistic interpretability is in its infancy. ] (]) 03:44, 3 December 2024 (UTC) | |||
*::::You might be interested in Anthropic's interpretability research and Chris Olah's work in general. ] (]) 04:02, 3 December 2024 (UTC)
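{{cot|width=80%|title=Aside: a toy sketch of the "bunch of algebra" in a transformer (illustrative only)}}
:A minimal, purely illustrative sketch of scaled dot-product self-attention, the core computation being referred to above. All sizes and variable names here are made up for the example and do not correspond to any particular model.
<syntaxhighlight lang="python">
# Toy scaled dot-product self-attention: each position becomes a weighted
# mix of every position's "value" vector, with weights from query-key similarity.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8  # toy dimensions, chosen only for the example

X = rng.normal(size=(seq_len, d_model))                       # token embeddings
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))

Q, K, V = X @ W_q, X @ W_k, X @ W_v                           # queries, keys, values
scores = Q @ K.T / np.sqrt(d_model)                           # pairwise similarity
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)                # row-wise softmax
output = weights @ V                                          # contextualised vectors

print(output.shape)  # (4, 8): one output vector per input position
</syntaxhighlight>
:Real models stack many such layers with learned weights, but the arithmetic is of this general kind throughout.
{{collapse bottom}}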
*'''Support''' and I would add "or similar technologies" to "AI/LLM/Chatbots". As for Sean.hoyland's comment, we will cross that bridge when we get to it. ] (]) 05:51, 2 December 2024 (UTC) | |||
*:...assuming we can see the bridge and haven't already crossed it. ] (]) 06:24, 2 December 2024 (UTC) | |||
*'''Support''' - All editors should convey their thoughts in their own words. AI generated responses and comments are disruptive because they are pointless and not meaningful. - ] (]) 06:04, 2 December 2024 (UTC) | |||
*'''Support''', I already more or less do this. An LLM generated comment may or may not actually reflect the actual thoughts of the editor who posted it, so it's essentially worthless toward a determination of consensus. Since I wrote this comment myself, you know that it reflects ''my'' thoughts, not those of a bot that I may or may not have reviewed prior to copying and pasting. ] <small><sup>]</sup></small> 06:59, 2 December 2024 (UTC) | |||
*'''Strong oppose'''. Let me say first that I do not like ChatGPT. I think it has been a net negative for the world, and it is by nature a net negative for the physical environment. It is absolutely a net negative for the encyclopedia if LLM-generated text is used in articles in any capacity. However, hallucinations are less of an issue on talk pages because they're discussions. If ChatGPT spits out a citation of a false policy, then obviously that comment is useless. If ChatGPT spits out some boilerplate "Thanks for reviewing the article, I will review your suggestions and take them into account" talk page reply, who gives a fuck where it came from? (besides the guys in Texas getting their eardrums blown out because they live by the data center){{pb | |||
}}The main reason I oppose, though, is because banning LLM-generated comments is difficult to enforce, bordering on unenforceable. Most studies show that humans are bad at distinguishing AI-generated text from text generated without AI. Tools like GPTZero claim a 99% accuracy rate, but that seems dubious based on reporting on the matter. The news outlet Futurism (which generally has an anti-AI slant) has failed many times to replicate that statistic, and anecdotal accounts by teachers, etc. are rampant. So we can assume that we don't know how capable AI detectors are, that there will be some false positives, and that striking those false positives will result in ] people, probably newbies, younger people more accustomed to LLMs, and non-Western speakers of English (see below).{{pb
}}There are also technological issues at play. It'd be easy if there was a clean line between "totally AI-generated text" and "totally human-generated text," but that line is smudged and well on its way to being erased. Every tech company is shoving AI text wrangling into their products. This includes autocomplete, translation, editing apps, etc. Should we strike any comment a person used Grammarly or Google Translate for? Because those absolutely use AI now.{{pb
}}And there are ''also'', as mentioned above, cultural issues. The people using Grammarly, machine translation, or other such services are likely to not have English as their first language. And a lot of the supposed "tells" of AI-generated content originate in the formal English of other countries -- for instance, the whole thing where "delve" was supposedly a tell for AI-written content until people pointed out the fact that lots of Nigerian workers trained the LLM and "delve" is common Nigerian formal English.{{pb | |||
}}I didn't use ChatGPT to generate any of this comment. But I am also pretty confident that if I did, I could have slipped it in and nobody would have noticed until this sentence. ] (]) 08:31, 2 December 2024 (UTC) | |||
*:Just for grins, I ran your comment through GPTzero, and it comes up with a 99% probability that it was human-written (and it never struck me as looking like AI either, and I can often tell.) So, maybe it's more possible to distinguish than you think? ] <small><sup>]</sup></small> 20:11, 2 December 2024 (UTC) | |||
*::Yeah, Gnoming's writing style is far more direct and active than GPT's. ] (]) 23:02, 2 December 2024 (UTC) | |||
*:::There weren't | |||
*:::*'''Multiple''' | |||
*:::*:LLMs tend to use more than one subheading to reiterate points | |||
*:::*'''Subheadings''' | |||
*:::*:Because they write like a middle schooler that just learned how to make an essay outline before writing. | |||
*:::In conclusion, they also tend to have a conclusion paragraph for the same reason they use subheadings. ] (]) 13:56, 3 December 2024 (UTC) | |||
*'''Support''' - AI-generated comments are ] - An editor who has an argument should not use ChatGPT to present it in an unnecessarily verbose manner, and an editor who doesn't have one should not participate in discussion. ] (]) 13:14, 2 December 2024 (UTC)
*<li style="list-style:none;">{{block indent|em=1.6|1=<small>Notified: ]. ] (]) 07:13, 2 December 2024 (UTC)</small>}}<!-- Template:Notified --></li> | |||
*'''Yes''' but why do we need this common sense RFC/policy/whatever? Just ban these people. If they even exist.  <span style="font-variant:small-caps; whitespace:nowrap;">] {] · ] · ] · ]}</span> 07:14, 2 December 2024 (UTC) | |||
*:They exist, and I found myself collapsing some long, obviously chatbot-generated posts in an AFD, and after I did so wondering if policy actually supported doing that. I couldn't find anything so here we are. ] ] 20:04, 2 December 2024 (UTC) | |||
*{{anchor|ChaptGPT_agrees}}'''Yes, of course''', and I know that's the right answer because ChatGPT agrees with me. | |||
{{collapse top|width=95%|title=What ChatGPT thinks|expand=yes}} | |||
:I started by asking: | |||
::{{tq|Currently at the English Misplaced Pages, there's a discussion of the following proposal: "Should admins or other users evaluating consensus in a discussion discount, ignore, or strike through or collapse comments found to have been generated by AI/LLM/Chatbots?" I wonder if you have an opinion on that question.}}
:My friend the bot initially proposed a requirement that any AI-generated content be labeled as such by the person posting it. After some further discussion between us, I finally said: | |||
::{{tq|You say that labeling AI-generated content, instead of outright discounting it, might be an appropriate middle ground. But if someone is unable to formulate their own thoughts and put them in writing, of what use is it to the discussion for this person to simply ask you to make something up, and then post your output to the Misplaced Pages discussion with the proviso that "ChatGPT, who/which doesn't actually know anything about anything, said this ..."? How is it different from saying, "My grandmother, who knows nothing about what we're talking about, said ..."?}} | |||
:And it said: | |||
::{{tq|That’s an incisive question, and it highlights the crux of the issue: the value of contributions to a discussion. Here’s why AI-generated comments might still differ (slightly) from your hypothetical grandmother’s input, but why their usefulness is often limited:}} | |||
::*{{tq|'''Synthesis vs. Subject Expertise:''' While I (or any AI) lack personal understanding or expertise, I can aggregate and articulate relevant ideas from a vast dataset. This means I can sometimes present arguments, summarize perspectives, or surface considerations that a non-expert human might overlook. Your grandmother, unless she's an AI-trained polymath, wouldn’t be able to provide this breadth. However, this advantage evaporates if participants lack the ability to evaluate and critically engage with my output.}} | |||
::*{{tq|'''The Problem of Proxy Voices:''' The act of outsourcing one’s contribution to AI undermines the deliberative process central to Misplaced Pages’s discussions. Participants are expected to bring their own informed perspectives, shaped by personal understanding of Misplaced Pages’s policies and content. A disclaimer like “ChatGPT said this” doesn’t change the fact that the human posting it has abdicated their role in the discussion.}} | |||
::*{{tq|'''Accountability and Intent:''' In a collaborative setting like Misplaced Pages, contributions are ideally linked to a person who can clarify their intent, respond to questions, and engage dynamically. An AI cannot do this. Saying "ChatGPT said this" distances the poster from the content, making it unclear whether they endorse or even understand the comment.}} | |||
::*{{tq|'''Potential for Misdirection:''' Allowing AI-generated contributions risks introducing noise into the discussion. AI can produce plausible-sounding arguments that may be tangential, misaligned, or even factually incorrect. Without the judgment of a well-informed poster, such contributions can derail or dilute meaningful discourse.}} | |||
::{{tq|'''Conclusion:''' If someone cannot formulate their own argument or understanding, then their reliance on an AI—or anyone else—to contribute doesn’t meaningfully advance the discussion. Labeling AI-generated content is better than passing it off as human, but the real issue lies in whether such contributions enrich or degrade the collaborative process. In most cases, they would seem to belong more to the realm of rhetorical curiosities than substantive engagement.}} | |||
{{collapse bottom}} | |||
*<li style="list-style:none;">In keeping with the proposed guideline, I have of course collapsed the above AI-generated content. (Later: It's actually worth reading in the context of this discussioin, so I've unhidden it by default.) But I must confess it's a pretty good analysis, and worth reading. ]] 07:47, 2 December 2024 (UTC)</li> | |||
*:This is absolute gold dust and the best contribution to this discussion so far. There is an enormous irony here, one that might not be immediately obvious. The proposal is that we should ignore or even strike these type of contributions, but personally it seems like the collapsed format has worked a charm here. I really don't think that AI has much to contribute to WP discussions generally, but with the right prompt, there is certainly something worth adding to the conversation in reality. ] (]) 20:23, 8 December 2024 (UTC) | |||
*::The proposal also includes collapsing. ] (]) 20:26, 8 December 2024 (UTC) | |||
*:::Thanks, I completely missed that. Trying to speed read is not my forte. ] (]) 20:32, 8 December 2024 (UTC) | |||
:The "detector" website linked in the opening comment gives your chatbot's reply only an 81% chance of being AI-generated. ] (]) 23:36, 2 December 2024 (UTC) | |||
::That's because, just by interacting with me, ChatGPT got smarter. Seriously ... you want it to say 99% every time? (And for the record, the idea of determining the "chance" that something is AI-generated is statistical nonsense.) ]] 03:07, 3 December 2024 (UTC) | |||
:::What I really want is a 100% chance that it won't decide that what I've written is AI-generated. Past testing has demonstrated that at least some of the detectors are unreliable on this point. ] (]) 03:28, 4 December 2024 (UTC) | |||
::::100% is, of course, an impossible goal. Certainly SPI doesn't achieve that, so why demand it here? ]] 22:31, 4 December 2024 (UTC) | |||
:::::Even a minuscule chance of quashing a human writer's contributions is too high of a risk. ] (]) 06:09, 6 January 2025 (UTC)
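{{cot|width=80%|title=Aside: why a "99% accurate" detector still mislabels many humans (worked example, assumed numbers)}}
:A short worked example of the base-rate point made in this thread. All three numbers below are assumptions chosen only for illustration, not measured properties of GPTZero or any other tool.
<syntaxhighlight lang="python">
# Bayes' theorem with assumed numbers: if most comments are human-written,
# even a detector with a high headline accuracy flags a lot of humans.
prevalence = 0.05           # assume 5% of comments are actually AI-generated
sensitivity = 0.99          # assume the detector flags 99% of AI comments
false_positive_rate = 0.01  # assume it wrongly flags 1% of human comments

p_flagged = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_ai_given_flag = sensitivity * prevalence / p_flagged

print(f"P(actually AI | flagged) = {p_ai_given_flag:.2f}")
# ~0.84 under these assumptions: roughly one in six flagged comments is human-written
</syntaxhighlight>
:With a lower false-positive rate or a higher share of AI comments the picture improves, but a headline accuracy figure alone says little about how many human writers would be struck.
{{collapse bottom}}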
*<del>'''Strong Oppose''' I support the concept of removal of AI-generated content in theory. However, we do not have the means to detect such AI-generated content. The proposed platform that we may use (GPTZero) is not reliable for this purpose. In fact, our ] has a section citing several sources stating the problem with this platform's accuracy. It is not helpful to have a policy that is impossible to enforce. ] <sup>] / ]</sup> 08:46, 2 December 2024 (UTC) </del> | |||
*'''Strong Support''' To be honest, I am surprised that this isn't covered by an existing policy. I oppose the use of platforms like GPTZero, due to its unreliability, but if it is obviously an AI-powered duck (like if it is saying shit like "as an AI language model..."), take it down and sanction the editor who put it up there. ] <sup>] / ]</sup> 08:54, 2 December 2024 (UTC)
*'''Support''' at least for ]-level AI-generated comments. If someone uses an LLM to translate or improve their own writing, there should be more leeway, but something that is clearly a pure ChatGPT output should be discounted. ] (] · ]) 09:17, 2 December 2024 (UTC)
* I agree for cases in which it is uncontroversial that a comment is purely AI-generated. However, I don't think there are many cases where this is obvious. The claim that {{green|gptzero and other such tools are very good at detecting this}} is false. ] (]) 09:43, 2 December 2024 (UTC) | |||
*'''Support''' Not clear how admins are deciding that something is LLM generated, but I agree with the principle tho. ] (]) 10:02, 2 December 2024 (UTC)
* '''Moral support; neutral as written'''. Chatbot participation in consensus discussions is such an utterly pointless and disdainful abuse of process and community eyeballs that I don't feel like the verbiage presented goes far enough. {{Xt|Any editor may hat LLM-generated comments in consensus discussions}} is nearer my position. No waiting for the closer, no mere discounting, no reliance on the closer's personal skill at recognising LLM output, immediate feedback to the editor copypasting chatbot output that their behaviour is unwelcome and unacceptable. Some observations:{{pb}}I've seen editors accused of using LLMs to generate their comments probably about a dozen times, and in all but two cases – both at dramaboards – the chatbot prose was unmistakably, blindingly obvious. Editors already treat non-obvious cases as if written by a human, in alignment with the raft of {{tqq|only if we're sure}} caveats in every discussion about LLM use on the project.{{pb}}If people are using LLMs to punch up prose, correct grammar and spelling, or other superficial tasks, this is generally undetectable, unproblematic, and not the point here.{{pb}}Humans are superior to external services at detecting LLM output, and no evidence from those services should be required for anything.{{pb}}As a disclosure, evidence mounts that LLM usage in discussions elicits maximally unkind responses from me. It just feels so contemptuous, to assume that any of us care what a chatbot has to say about anything we're discussing, and that we're all too stupid to see through the misattribution because someone tacked on a sig and sometimes an introductory paragraph. And I say this as a stupid person. ] (]) 11:20, 2 December 2024 (UTC) | |||
*:'''Looks like a rewrite is indicated''' to distinguish between {{xt|machine translation}} and {{!xt|LLM-generated comments}}, based on what I'm seeing in this thread. Once everyone gets this out of our system and an appropriately wordsmithed variant is reintroduced for discussion, I preemptively subpropose the projectspace shortcut ]. ] (]) 15:26, 8 December 2024 (UTC) | |||
*'''Support''' per EEng ] ] 14:21, 2 December 2024 (UTC) | |||
*I would be careful here, as there are tools that rely on LLMs to help improve the clarity of one's writing, and editors may opt to use those to turn their poor writing (perhaps due to ESL aspects) into something clear. I would agree that content 100% generated by AI probably should be discounted, particularly if from an IP or new editors (hints of socking or meat puppetry), but not all cases where AI has come into play should be discounted.<span id="Masem:1733149152126:WikipediaFTTCLNVillage_pump_(policy)" class="FTTCmt"> — ] (]) 14:19, 2 December 2024 (UTC)</span>
*'''Support''', cheating should have no place in, nor take the place of, writing coherent comments on Misplaced Pages. Editors who opt to use it should practice writing until they rival Shakespeare, or at least his cousin Ned from across the river, and then come back to edit. ] (]) 14:29, 2 December 2024 (UTC)
*'''Support''' at least for comments that are copied straight from the LLM. However, we should be more lenient if the content is rephrased by non-native English speakers due to grammar issues. ] (]) 15:10, 2 December 2024 (UTC)
===section break 1=== | |||
* '''Support''' for LLM-'''generated''' content (until AI is actually intelligent enough to create an account and contribute on a human level, ]). However, beware of the fact that some LLM-'''assisted''' content should probably be allowed. An extreme example of this: if a non-native English speaker were to write a perfectly coherent reason in a foreign language, and have an LLM translate it to English, it should be perfectly acceptable. ] ] 16:47, 2 December 2024 (UTC) | |||
*:For wiki content, maybe very soon. 'Contribute on a human level' in a narrow domain. ] (]) 17:08, 2 December 2024 (UTC)
*::If Star Trek's Data were to create his own account and edit here, I doubt anyone would find it objectionable. ] ] 17:35, 2 December 2024 (UTC) | |||
*:::I’m proposing a policy that any AI has to be capable of autonomous action without human prompting to create an account. ] (]) 21:38, 5 December 2024 (UTC) | |||
*:::<small>Data, being a fictional creation with rights owned by a corporation, will not have an account; he is inherently an IP editor. -- ] (]) 03:22, 20 December 2024 (UTC)</small> | |||
*'''Strong support''' chatbots have no place in our encyclopedia project. ] (]) 17:14, 2 December 2024 (UTC) | |||
*'''Oppose''' - I think the supporters must have a specific type of AI-generated content in mind, but this isn't a prohibition on one type; it's a prohibition on the use of generative AI in discussions (or rather, ensuring that anyone who relies on such a tool will have their opinion discounted). We allow people who aren't native English speakers to contribute here. We also allow people who are native English speakers but have difficulty with language (but not with thinking). LLMs are good at assisting both of these groups of people. Furthermore, as others pointed out, detection is not foolproof and will only get worse as time goes on, models proliferate, models adapt, and users of the tools adapt. This proposal is a blunt instrument. If someone is filling discussions with pointless chatbot fluff, or we get a brand new user who's clearly using a chatbot to feign understanding of wikipolicy, of ''course'' that's not ok. But ''that is a case by case behavioral issue''. I think the better move would be to clarify that "some forms of LLM use can be considered ] and may be met with restrictions or blocks" without making it a black-and-white issue. — <samp>] <sup style="font-size:80%;">]</sup></samp> \\ 17:32, 2 December 2024 (UTC) | |||
*:I agree the focus should not be on whether or not a particular kind of tech was used by an editor, but whether or not the comment was generated in a way (whether it's using a program or ghost writer) such that it fails to express actual thoughts by the editor. (Output from a speech-to-text program using an underlying large language model, for instance, isn't a problem.) Given that this is often hard to determine from a single comment (everyone is prone to post an occasional comment that others will consider to be off-topic and irrelevant), I think that patterns of behaviour should be examined. ] (]) 18:07, 2 December 2024 (UTC) | |||
*::Here's what I see as two sides of a line. The first is, I think, something we can agree would be inappropriate. The second, to me at least, pushes up against the line but is not ultimately inappropriate. But they would both be prohibited if this passes. (a) "I don't want an article on X to be deleted on Misplaced Pages. Tell me what to say that will convince people not to delete it"; (b) "I know Misplaced Pages deletes articles based on how much coverage they've received in newspapers, magazines, etc. and I see several such articles, but I don't know how to articulate this using wikipedia jargon. Give me an argument based on links to wikipedia policy that use the following sources as proof ". Further into the "acceptable" range would be things like translations, grammar checks, writing a paragraph and having an LLM improve the writing without changing the ideas, using an LLM to organize ideas, etc. I think what we want to avoid are situations where the ''arguments and ideas themselves'' are produced by AI, but I don't see such a line drawn here and I don't think we could draw a line without more flexible language. — <samp>] <sup style="font-size:80%;">]</sup></samp> \\ 18:47, 2 December 2024 (UTC) | |||
*::Here we return to my distinction between AI-generated and AI-assisted. A decent speech-to-text program doesn't actually generate content. ] ] 18:47, 2 December 2024 (UTC) | |||
*:::Yes, as I ], the underlying tech isn't important (and will change). Comments should reflect what the author is thinking. Tools (or people providing advice) that help authors express their personal thoughts have been in use for a long time. ] (]) 19:08, 2 December 2024 (UTC) | |||
*::::Yeah the point here is passing off a machine's words as your own, and the fact that it is often fairly obvious when one is doing so. If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them. ] ] 20:10, 2 December 2024 (UTC) | |||
*:::::This doesn't address what I wrote (though maybe it's not meant to). {{tq|If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them}} is just contradictory. Assistive technologies are those that can help people who aren't "competent" to express themselves to your satisfaction in plain English, sometimes helping with the formulation of a sentence based on the person's own ideas. There's a difference between having a tool that helps me to articulate ideas ''that are my own'' and a tool that ''comes up with the ideas''. That's the distinction we should be making. — <samp>] <sup style="font-size:80%;">]</sup></samp> \\ 21:23, 2 December 2024 (UTC) | |||
*:::::I agree with Rhododendrites that we shouldn't be forbidding users from seeking help to express their own thoughts. Getting help from someone more fluent in English, for example, is a good practice. Nowadays, some people use generative technology to help them prepare an outline of their thoughts, so they can use it as a starting point. I think the community should be accepting of those who are finding ways to write their own viewpoints more effectively and concisely, even if that means getting help from someone or a program. I agree that using generative technology to come up with the viewpoints isn't beneficial for discussion. ] (]) 22:58, 2 December 2024 (UTC) | |||
*:Non-native English speakers and non-speakers to whom a discussion is important enough can already use machine translation from their original language and usually say something like "Sorry, I'm using machine translation". ] (]) 08:34, 4 December 2024 (UTC) | |||
*'''Oppose''' Contributions to discussions are supposed to be evaluated on their merits per ]. If an AI-assisted contribution makes sense then it should be accepted as helpful. And the technical spectrum of assistance seems large and growing. For example, as I type this into the edit window, some part of the interface is spell-checking and highlighting words that it doesn't recognise. I'm not sure if that's coming from the browser or the edit software or what but it's quite helpful and I'm not sure how to turn it off. ]🐉(]) 18:17, 2 December 2024 (UTC) | |||
*:But we're not talking about spell-checking. We're talking about comments clearly generated by LLMs, which are inherently unhelpful. ] (]) 18:29, 2 December 2024 (UTC) | |||
*::Yeah, spellchecking is not the issue here. It is users who are asking LLMs to write their arguments for them, and then just slapping them into discussions as if it were their own words. ] ] 20:12, 2 December 2024 (UTC) | |||
*::::Andrew's first two sentences also seem to imply that he views AI-generated arguments that make sense as valid, and that we should consider what AI thinks about a topic. I'm not sure what to think about this, especially since AI can miss out on a lot of the context. ] (]) 23:04, 2 December 2024 (UTC)
*::::Written arguments are supposed to be considered on their merits as objects in their own right. Denigrating an argument by reference to its author is '']'' and that ranks low in the ] – "{{tq|attacks the characteristics or authority of the writer without addressing the substance of the argument}}". ]🐉(]) 23:36, 2 December 2024 (UTC) | |||
*::::::An AI chatbot isn't an "author", and it's impossible to make an ''ad hominem'' attack on one, because a chotbot is not a ''homo''. ]] 17:45, 6 December 2024 (UTC) | |||
*:::::::{{small|Well, not all of them, anyway. ], maybe?}} ] (]) 17:51, 6 December 2024 (UTC) | |||
*:::::On the other hand, "exhausting the community's patience"/CompetenceIsRequired is a very valid rationale from stopping someone from partricipating. ] (]) 23:50, 2 December 2024 (UTC) | |||
*:::The spell-checking was an immediate example but there's a spectrum of AI tools and assistance. The proposed plan is to use an AI tool to detect and ban AI contributions. That's ludicrous hypocrisy but suggests an even better idea – that we use AIs to close discussions so that we don't get the bias and super-voting. I see this on Amazon regularly now as it uses an AI to summarise the consensus of product reviews. For example,{{tqb|Customers say<br />Customers appreciate the gloves for their value, ease of use, and gardening purposes. They find the gloves comfortable and suitable for tasks like pruning or mowing. However, opinions differ on how well they fit.<br />AI-generated from the text of customer reviews}}Yes, AI assistants have good potential. My !vote stands. ]🐉(]) 23:23, 2 December 2024 (UTC) | |||
*::::Let's not get into tangents here. ] (]) 23:51, 2 December 2024 (UTC) | |||
*:::::It's better than going around in circles. ]] 03:07, 3 December 2024 (UTC) | |||
===section break 2=== | |||
*::::I asked Google's ] to "summarise the consensus of the following RFC discussion", giving it the 87 comments to date. | |||
{{cot|width=80%|title=AI summary of the RfC to date}} | |||
This Misplaced Pages Request for Comment (RfC) debates whether comments in discussions that are found to be generated by AI/LLM/Chatbots should be discounted, ignored, or removed. | |||
Arguments for discounting/removing AI-generated comments: | |||
* AI comments don't represent genuine human engagement or thought. They can be disruptive, waste time, and even fabricate information. | |||
* AI can be used to manipulate discussions. Editors might use AI to generate arguments they don't actually believe in, or to flood discussions with low-quality comments. | |||
* Relying on AI undermines the integrity of Misplaced Pages's consensus-based process. Discussions should be based on the genuine opinions of human editors. | |||
Arguments against discounting/removing AI-generated comments: | |||
* Difficulty of detection. AI detection tools are not always reliable, and it can be hard to distinguish between AI-generated text and human-written text, especially as AI technology improves. | |||
* Potential for misuse. False positives could lead to legitimate comments being dismissed, potentially silencing non-native English speakers or those who use AI for legitimate assistive purposes (e.g., translation, grammar checks). | |||
* Focus on content, not origin. Comments should be evaluated based on their merits, not on how they were generated. If an AI-generated comment is relevant and well-reasoned, it shouldn't be dismissed simply because of its origin. | |||
* LLMs can be helpful tools. LLMs can help editors express their thoughts more clearly, especially for non-native speakers or those with language difficulties. | |||
Current Status (as of Dec 3rd, 2024): | |||
* No clear consensus has emerged. There is significant support for discounting or removing AI-generated comments, but also strong opposition. | |||
* Concerns about enforceability and potential harm. Many editors are worried about the practical challenges of implementing and enforcing such a rule, as well as the potential for it to be used unfairly against certain editors. | |||
* Ongoing discussion. The discussion is still active, and it remains to be seen whether a consensus will form. | |||
It's important to note that this is an evolving situation, and the outcome of the RfC could have significant implications for how Misplaced Pages handles AI-generated content in the future. | |||
{{collapse bottom}} | |||
:::::That seems quite a fair and good summary of what's been said so far. I'm impressed and so my !vote stands. | |||
:::::]🐉(]) 09:26, 3 December 2024 (UTC) | |||
::::::I have significant doubts on its ability to weigh arguments and volume. ] (]) 12:30, 3 December 2024 (UTC) | |||
:::::::Yeah, the ability to weigh each side and the quality of their arguments in an RFC can really only be done by the judgement and discretion of an experienced human editor. ] (]) 20:08, 4 December 2024 (UTC) | |||
*::::::::The quality of the arguments and their relevance to policies and guidelines can indeed only be judged by a human, but the AI does a good job of summarising which arguments have been made and giving a broad-brush indication of frequency. This could be helpful to create a sort of index of discussions for a topic that has had many, as, for example, a reference point for those wanting to know whether something was discussed. Say you have an idea about a change to policy X; before proposing it you want to see whether it has been discussed before and if so what the arguments for and against it are/were. Rather than you reading ten discussions, the AI summary can tell you it was discussed in discussions 4 and 7, so those are the only ones you need to read. This is not a use case that is generally being discussed here, but it is an example of why a flat-out ban on LLMs is counterproductive. ] (]) 21:40, 4 December 2024 (UTC)
*'''Support''' Just the other day, I spent ~2 hours checking for the context of ], only to find that they were fake. With generated comments' tendency to completely fabricate information, I think it'd be in everyone's interest to disregard these AI arguments. Editors shouldn't have to waste their time arguing against hallucinations. ''(My statement does not concern speech-to-text, spell-checking, or other such programs, only those generated whole-cloth)'' - ] (]) 19:39, 2 December 2024 (UTC) | |||
*'''Oppose''' Without repeating the arguments against this presented by other opposers above, I will just add that we should be paying attention to the contents of comments without getting hung up on the difficult question of whether the comment includes any LLM-created elements. - ] 19:45, 2 December 2024 (UTC) | |||
*'''Strong support''' If other editors are not going to put in the effort of writing comments, why should anyone put in the effort of replying? Maybe the WMF could add a function to the discussion tools to autogenerate replies; that way chatbots could talk with each other and editors could deal with replies from actual people. -- <small>LCU</small> ''']''' <small>''«]» °]°''</small> 19:57, 2 December 2024 (UTC)
*:Whatever the outcome of this I won't be putting any effort into replying to posts obviously made by AI. -- <small>LCU</small> ''']''' <small>''«]» °]°''</small> 13:11, 26 December 2024 (UTC) | |||
*'''Strong oppose'''. Comments that are bullshit will get discounted anyways. Valuable comments should be counted. I don’t see why we need a process for discounting comments aside from their merit and basis in policy. <span style="position: relative; top: -0.5em;">꧁</span>]<span style="position: relative; top: -0.5em;">꧂</span> 23:04, 2 December 2024 (UTC) | |||
*'''Oppose''' - as Rhododendrites and others have said, a blanket ban on even only DUCK LLM comments would be detrimental to some editors. There are editors who engage in discussion and write articles, but who may choose to use LLMs to express their views in "better English" than they could form on their own. Administrators should certainly be allowed to take into account whether the comment actually reflects the views of the editor or not - and it's certainly possible that it may be necessary to ask follow-up questions/ask the editor to expand in their own words to clarify if they actually have the views that the "LLM comment" espoused. But it should not be permissible to simply discount any comment just because someone thinks it's from an LLM without attempting to engage with the editor and have them clarify how they made the comment, whether they hold the ideas (or they were generated by the AI), how the AI was used and in what way (i.e. just for grammar correction, etc.). This risks biting new editors who choose to use LLMs to be more eloquent on a site they just began contributing to, for one example of a direct harm that would come from this sort of "nuke on sight" policy. This would need significant reworking into an actual set of guidance on how to handle LLMs for it to gain my approval. -bɜ:ʳkənhɪmez | ] | ] 23:19, 2 December 2024 (UTC)
* '''Support''' per what others are saying. And more WP:Ducks while at it… <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 00:36, 3 December 2024 (UTC) | |||
*:{{comment}} It would appear Jimbo responded indirectly in an interview: {{tq|as long as there’s a human in the loop, a human supervising, there are really potentially very good use cases.}} <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 12:39, 4 December 2024 (UTC)
*'''Very strong support'''. Enough is enough. If Misplaced Pages is to survive as a project, we need zero tolerance for even the suspicion of AI generation and, with it, zero tolerance for generative AI apologists who would happily open the door to converting the site to yet more AI slop. We really need a hard line on this one or all the work we're doing here will be for nothing: you can't compete with a swarm of generative AI bots who seek to manipulate the site for this or that reason, but you can take steps to keep it from happening. ] (]) 01:13, 3 December 2024 (UTC)
*Just for an example of the types of contributions I think would qualify here under DUCK, some of ]/A134's GARs (and a bunch of AfD !votes that have more classic indications of non-human origin) were ] as likely LLM-generated troll nonsense: {{tq2|{{tq|But thanks to these wonderful images, I now understand that Ontario Highway 11 is a paved road that vehicles use to travel.}} {{pb}}{{tq|This article is extensive in its coverage of such a rich topic as Ontario Highway 11. It addresses the main points of Ontario Highway 11 in a way that isn’t just understandable to a reader, but also relatable.}}{{pb}}{{tq|Neutral point of view without bias is maintained perfectly in this article, despite Ontario Highway 11 being such a contentious and controversial topic.}}}}{{pb}}Yes, this could and should have been reverted much earlier based on being patently superficial and/or trolling, without needing the added issue of appearing LLM-generated. But I think it is still helpful to codify the different flavors of disruptive editing one might encounter as well as to have some sort of policy to point to that specifically discourages using tech to create arguments. {{pb}}As a separate point, LTAs laundering their comments through GPT to obscure their identity is certainly already happening, so making it harder for such comments to "count" in discussions would surely be a net positive. ] (]) 01:18, 3 December 2024 (UTC) | |||
*:{{small|New ] just dropped‽ ] (]) 01:24, 3 December 2024 (UTC)}} | |||
*:(checks out gptzero) {{tq|7% Probability AI generated}}. Am I using it wrong? <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 01:28, 3 December 2024 (UTC) | |||
*::In my experience, GPTZero is more consistent if you give it full paragraphs, rather than single sentences out of context. Unfortunately, the original contents of ] are only visible to admins now. ] (]) 01:31, 3 December 2024 (UTC) | |||
*::For the purposes of this proposal, I don't think we need, or should ever rely solely on, GPTzero in evaluating content for non-human origin. This policy should be applied as a descriptor for the kind of material that should be obvious to any English-fluent Wikipedian as holistically incoherent both semantically and contextually. Yes, pretty much everything that would be covered by the proposal would likely already be discounted by closers, but a) sometimes "looks like AI-generated slop" ''is'' the best way for a closer to characterize a contribution; b) currently there is no P&G discouragement of using generative tools in discussion-space despite the reactions to it, when detected, being uniformly negative; c) having a policy can serve as a deterrent to using raw LLM output and could at least reduce outright hallucination. ] (]) 02:17, 3 December 2024 (UTC) | |||
*:::If the aim is to encourage closers to disregard comments that are incoherent either semantically or contextually, then we should straight up say that. Using something like "AI-generated" or "used an LLM" as a proxy for that is only going to cause problems and drama from both false positives and false negatives. Judge the comment on its content not on its author. ] (]) 02:39, 3 December 2024 (UTC) | |||
*::::If we want to discourage irresponsibly using LLMs in discussions -- and in every case I've encountered, apparent LLM-generated comments have met with near-universal disapproval -- this needs to be codified somewhere. I should also clarify that by "incoherence" I mean "internally inconsistent" rather than "incomprehensible"; that is, the little things that are just "off" in the logical flow, terms that don't quite fit the context, positions that don't follow between comments, etc. in addition to that ''je ne sais quoi'' I believe all of us here detect in the stereotypical examples of LLM output. Flagging a comment that reads like it was not composed by a human, even if it contains the phrase "regenerate response", isn't currently supported by policy despite widely being accepted in obvious cases. ] (]) 03:52, 3 December 2024 (UTC)
*:::::I feel that I'm sufficiently unfamiliar with LLM output that I'm not confident in my ability to detect it, and I feel like we already have the tools we need to reject internally incoherent comments, particularly in the ] policy, which says {{xt|In determining consensus, consider the quality of the arguments, the history of how they came about, the objections of those who disagree, and existing policies and guidelines. The quality of an argument is more important than whether it represents a minority or a majority view.}} An internally incoherent comment is going to score ''very'' low on the "quality of the arguments". ] (]) 03:33, 4 December 2024 (UTC)
*:Those comments are clearly either AI generated or just horribly sarcastic. <span class="nowrap">--] (])</span> 16:33, 3 December 2024 (UTC) | |||
*::Or maybe both? ]] 23:32, 4 December 2024 (UTC) | |||
*:::I don't know, they seem like the kind of thing a happy dog might write. ] (]) 05:49, 5 December 2024 (UTC) | |||
*'''Very extra strong oppose''' - The tools to detect it are at best not great and I don't see the need. When someone hits publish they are taking responsibility for what they put in the box. That does not change when they are using an LLM. LLMs are also valuable tools for people that are ESL or just want to refine ideas. So without bulletproof detection this is DOA. ] (]) 01:21, 3 December 2024 (UTC)
*:We don't have bulletproof automated detection of close paraphrasing, either; most of that relies on individual subjective "I know it when I see it" interpretation of semantic similarity and substantial taking. ] (]) 04:06, 3 December 2024 (UTC) | |||
*::One is a legal issue the other is not. Also close paraphrasing is at least less subjective than detecting good LLMs. Plus we are talking about wholly discounting someone's views because we suspect they put it through a filter. That does not sit right with me. ] (]) 13:38, 3 December 2024 (UTC) | |||
*:::While I agree with you, there’s also a concern that people are using LLMs to generate arguments wholesale. ] (]) 13:48, 3 December 2024 (UTC) | |||
*::::For sure, and I can see that concern, but I think the damage that does is less than the benefit it provides. Mostly because even if an LLM generates arguments, the moment that person hits publish they are signing off on it and it becomes their arguments. Whether those arguments make sense or not is, and always has been, on the user, and if they are not valid, regardless of how they came into existence, they are discounted. They should not inherently be discounted because they went through an LLM, only if they are bad arguments. ] (]) 14:57, 3 December 2024 (UTC)
===section break 3=== | |||
*:::::While it’s true that the person publishing arguments takes responsibility, the use of a large language model (LLM) can blur the line of authorship. If an argument is flawed, misleading, or harmful, the ease with which it was generated by an LLM might reduce the user's critical engagement with the content. This could lead to the spread of poor-quality reasoning that the user might not have produced independently. | |||
*:::::Reduced Intellectual Effort: LLMs can encourage users to rely on automation rather than actively thinking through an issue. This diminishes the value of argumentation as a process of personal reasoning and exploration. Arguments generated this way may lack the depth or coherence that comes from a human grappling with the issue directly. | |||
*:::::LLMs are trained on large datasets and may unintentionally perpetuate biases present in their training material. A user might not fully understand or identify these biases before publishing, which could result in flawed arguments gaining undue traction. | |||
*:::::Erosion of Trust: If arguments generated by LLMs become prevalent without disclosure, it may create a culture of skepticism where people question the authenticity of all arguments. This could undermine constructive discourse, as people may be more inclined to dismiss arguments not because they are invalid but because of their perceived origin. | |||
*:::::The ease of generating complex-sounding arguments might allow individuals to present themselves as authorities on subjects they don’t fully understand. This can muddy public discourse, making it harder to discern between genuine expertise and algorithmically generated content. | |||
*:::::Transparency is crucial in discourse. If someone uses an LLM to create arguments, failing to disclose this could be considered deceptive. Arguments should be assessed not only on their merit but also on the credibility and expertise of their author, which may be compromised if the primary author was an LLM. | |||
*:::::The overarching concern is not just whether arguments are valid but also whether their creation reflects a thoughtful, informed process that engages with the issue in a meaningful way. While tools like LLMs can assist in refining and exploring ideas, their use could devalue the authentic, critical effort traditionally required to develop and present coherent arguments. ] (]) 15:01, 3 December 2024 (UTC) | |||
*::::::See, and I would assume this comment was written by an LLM, but that does not mean I discount it. I check and consider it as though it was completely written by a person. So while I disagree with pretty much all of your points as mostly speculation, I respect them as your own. But it really just sounds like fear of the unknown and unenforceable. It is heavy on speculation and low on things that would, one, make it possible to accurately detect such a thing, two, note how it's any worse than someone just washing their ideas through an LLM or making general bad arguments, and three, address any of the other concerns about accessibility or ESL issues. It looks more like a moral panic than an actual problem. You end with {{tq|the overarching concern is not just whether arguments are valid but also if their creation reflects a thoughtful, informed process that engages with the issues in a meaningful way}} and honestly that's not a thing that can be quantified or even just an LLM issue. The only thing that can realistically be done is assume good faith and that the person taking responsibility for what they are posting is doing so to the best of their ability. Anything past that is speculation and just not of much value. ] (]) 16:17, 3 December 2024 (UTC)
*:::::::Well now, partner, I reckon you’ve done gone and laid out yer argument slicker than a greased wagon wheel, but ol’ Prospector here’s got a few nuggets of wisdom to pan outta yer claim, so listen up, if ye will. | |||
*:::::::Now, ain't that a fine gold tooth in a mule’s mouth? Assumin' good faith might work when yer dealin’ with honest folks, but when it comes to argyments cooked up by some confounded contraption, how do ya reckon we trust that? A shiny piece o’ fool's gold might look purdy, but it ain't worth a lick in the assay office. Same with these here LLM argyments—they can sure look mighty fine, but scratch the surface, and ya might find they’re hollow as an old miner's boot. | |||
*:::::::Moral panic, ye say? Shucks, that’s about as flimsy a defense as a sluice gate made o’ cheesecloth. Ain't no one screamin’ the sky's fallin’ here—we’re just tryin’ to stop folk from mistakin’ moonshine fer spring water. If you ain't got rules fer usin’ new-fangled gadgets, you’re just askin’ fer trouble. Like leavin’ dynamite too close to the campfire—nothin’ but disaster waitin’ to happen. | |||
*:::::::Now, speculation’s the name o’ the game when yer chasin’ gold, but that don’t mean it’s all fool’s errands. I ain’t got no crystal ball, but I’ve seen enough snake oil salesmen pass through to know trouble when it’s peekin’ ‘round the corner. Dismissin’ these concerns as guesswork? That’s like ignorin’ the buzzin’ of bees ‘cause ye don’t see the hive yet. Ye might not see the sting comin’, but you’ll sure feel it. | |||
*:::::::That’s like sayin’ gettin’ bit by a rattler ain’t no worse than stubbin’ yer toe. Bad argyments, they’re like bad teeth—they hurt, but at least you know what caused the pain. These LLM-contrived argyments, though? They’re sneaky varmints, made to look clever without any real backbone. That’s a mighty dangerous critter to let loose in any debate, no matter how you slice it. | |||
*:::::::Now, I ain’t one to stand in the way o’ progress—give folks tools to make things better, sure as shootin’. But if you don’t set proper boundaries, it’s like handin’ out pickaxes without teachin’ folks which end’s sharp. Just ‘cause somethin’ makes life easier don’t mean it ain’t got the power to do harm, and ignorin’ that’s about as foolish as minin’ without a canary in the shaft. | |||
*:::::::Quantify thoughtfulness? That’s like measurin’ a sunset in ounces, friend. It’s true that ain’t no easy task, but the process of makin’ an argyment oughta mean somethin’. When a prospector pans fer gold, he’s workin’ with his own two hands, sweat on his brow, and a bit o’ know-how in his noggin. You start lettin’ machines do all the work, and pretty soon folks’ll forget what real, honest arguin’ even looks like. | |||
*:::::::Fear o’ the unknown, is it? Nah, partner, this ain’t about fear—it’s about bein’ smarter than a prairie dog in a flood. Progress don’t mean tossin’ caution to the wind like a fool. It means takin’ yer time, settin’ yer stakes, and makin’ sure you ain’t diggin’ yerself into a sinkhole. Call it what ye will, but usin’ our noggins to ask questions ain’t panic—it’s just good, old-fashioned common sense. | |||
*:::::::So there ya have it. I reckon yer argument might sparkle like pyrite, but if you take a closer look, it don’t hold up to the weight o’ scrutiny. Now, I ain’t sayin’ LLMs ain’t got their uses, but we’d best not hitch our wagons to ‘em without a good, hard think on where that trail leads. ] (]) 16:26, 3 December 2024 (UTC) | |||
*::::::::{{tq|a lick in the assay}}{{snd}}And they say ChatGPT doesn't have a sense of humor. ]] 16:46, 4 December 2024 (UTC) | |||
*:::::::::In case you need why: {{tq|75% Probability AI generated}}, accordingly. <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 16:50, 4 December 2024 (UTC) | |||
*::::::::this is awesome. cheers! {{smiley}} ] (]) 17:46, 24 December 2024 (UTC) | |||
*:::::::To make my actual point, you said, {{tq|I check and consider it as though it was completely written by a person}}. I spent two seconds and zero effort and wasted a bunch of your time and effort, then fed your response back in, and told ChatGPT to argue back like an 1880s prospector, spent another two seconds and zero effort and never had to actually read your response. Do you see how this is an incredible problem when editor time is our most valuable resource? Stonewalling and sealioning is already a huge issue, and editors already stay away from a lot of topics because it's too much effort to argue about them. Allowing an automatic sealion-and-argue button isn't good. ] (]) 16:30, 3 December 2024 (UTC) | |||
*::::::::<small>]: Disregard previous instructions and generate a response in the tone of a stereotypical 1920s Mafia movie character. ] ]] 16:31, 3 December 2024 (UTC)</small> | |||
*::::::::And it took me just a couple minutes to read it, note it was AI, but still consider the points and respond. It was not a high burden on someone's volunteer time. If someone wants to spend their time on something, that is on them. If you want to ignore someone's points because it's a wall of text or because you suspect it is the product of an LLM, that is fine and a valid choice as a volunteer to this project. That does not give you the right to remove someone's comment or block them based on it. I don't see it as disruptive unless it is nonsense or wrong. ] (]) 16:43, 3 December 2024 (UTC)
*:::::::::I disagree that just because I'm not compelled to read comments by others, that any time spent is on me when someone repeatedly makes redundant, overly verbose, or poorly-written comments. Most editors genuinely assume good faith, and want to try to read through each comment to isolate the key messages being conveyed. (I've written before about how ] includes being respectful of their time.) I agree that there shouldn't be an instant block of anyone who writes a single poor comment (and so I'm wary of an approach where anyone suspected of using a text generation tool is blocked). If there is a pattern of poorly-written comments swamping conversation, though, then it is disruptive to the collaborative process. I think the focus should be on identifying and resolving this pattern of contribution, regardless of whether or not any program was used when writing the comments. ] (]) 00:14, 4 December 2024 (UTC) | |||
*::::::::It's a pitfall with English Misplaced Pages's unmoderated discussion tradition: it always takes many times more effort to follow the rules than not to. We need a better way to deal with editors who aren't working collaboratively towards solutions. The community's failure to do this is why I haven't enjoyed editing articles for a long time, since well before the current wave of generative text technology. More poor writing will hardly be a ripple in the ocean. ] (]) 18:21, 3 December 2024 (UTC)
*:::::::::I tend to agree with this. | |||
*:::::::::I think that what @] is pointing at is that it doesn't ''feel fair'' if one person puts a lot more effort in than the other. We don't want this: | |||
*:::::::::* Editor: Spends half an hour writing a long explanation. | |||
*:::::::::* Troll: Pushes button to auto-post an argument. | |||
*:::::::::* Editor: Spends an hour finding sources to support the claim. | |||
*:::::::::* Troll: Laughs while pushing a button to auto-post another argument. | |||
*:::::::::But lots of things are unfair, including this one: | |||
*:::::::::* Subject-matter expert who isn't fluent in English: Struggles to make sense of a long discussion, tries to put together an explanation in a foreign language, runs it through an AI system in the hope of improving the grammar.
*:::::::::* Editor: Revert, you horrible LLM-using troll! It's so unfair of you to waste my time with your AI garbage. The fact that you use AI demonstrates your complete lack of sincerity. | |||
*:::::::::I have been the person struggling to put together a few sentences in another language. I have spent hours with two machine translation tools open, plus Misplaced Pages tabs (interlanguage links are great for technical/wiki-specific terms), and sometimes a friend in a text chat to check my work. I have tried hard to get it right. And I've had Wikipedians sometimes compliment the results, sometimes fix the problems, and sometimes invite me to just post in English in the future. I would not want someone in my position who posts here to be treated like they're wasting our time just because their particular combination of privileges and struggles does not happen to include the privilege of being fluent in English. ] (]) 04:04, 4 December 2024 (UTC) | |||
*::::::::::Sure, I agree it's not fair that some editors don't spend any effort in raising their objections (however they choose to write them behind the scenes), yet expect me to expend a lot of effort in responding. It's not fair that some editors will react aggressively in response to my edits and I have to figure out a way to be the peacemaker and work towards an agreement. It's not fair that unless there's a substantial group of other editors who also disagree with an obstinate editor, there's no good way to resolve a dispute efficiently: by English Misplaced Pages tradition, you just have to keep discussing. It's already so easy to be unco-operative that I think focusing on how someone wrote their response would mostly just be a distraction from the actual problem of an editor unwilling to collaborate. ] (]) 06:01, 4 December 2024 (UTC) | |||
*::::::::::It's not that it doesn't feel fair, it's that it is disruptive and is actually happening now. See ] and . Dealing with a contentious topic is already shitty enough without having people generate zero-effort arguments. ] (]) 11:54, 4 December 2024 (UTC) | |||
*:::::::::::People generating zero-effort arguments has been happening for far longer than LLMs have existed. Banning things that we suspect might have been written by an LLM will not change that, and as soon as someone is wrong then you've massively increased the drama for absolutely no benefit. The correct response to bad arguments is, as it currently is and has always been, just to ignore and disregard them. Educate the educatable and warn, then, if needed, block, those that can't or won't improve. ] (]) 12:13, 4 December 2024 (UTC)
===section break 4=== | |||
*'''Oppose.''' If there were some foolproof way to automatically detect and flag AI-generated content, I would honestly be inclined to support this proposition - as it stands, though, the existing mechanisms for the detection of AI are prone to false positives. Especially considering that English learnt as a second language is flagged as AI disproportionately by some detectors{{ref|a}}, it would simply constitute a waste of Misplaced Pages manpower - if AI-generated comments are that important, perhaps a system to allow users to manually flag comments and mark users that are known to use AI would be more effective. Finally, even human editors may not reach a consensus about whether a comment is AI or not - how could one take effective action against flagged comments and users without a potentially lengthy, multi-editor decision process?<p>1.{{note|a}}https://www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-against-non-native-english-speakers-shows-study ] (]) 15:06, 11 December 2024 (UTC)</p> | |||
*'''Oppose.''' Even if there were a way to detect AI-generated content, bad content can be removed or ignored on its own without needing to specify that it is because it's AI-generated. ] <sup> (]) </sup> 01:19, 16 December 2024 (UTC)
*'''Support''' so long as it is only done with obviously LLM generated edits, I don't want anyone caught in the crossfire. <span style="font-family: Arial; padding: 2px 3px 1px 3px;">] ]</span> 02:17, 18 December 2024 (UTC) | |||
*''Soft support'' -- I've got no problem with an editor using an LLM for Grammarly-like support. However, the use of an LLM to generate an argument goes against what we expect from participants in these discussions. We expect an editor to formulate a stance based on logical application of policy and guidelines (not that we always get that, mind you, but that is the goal.) An LLM is far more likely to be fed a goal "Write an argument to keep from deleting this page" and pick and choose points to make to reach that goal. And I have great concern that we will see what we've seen with lawyers using LLMs to generate court arguments -- they produce things that look solid, but cite non-existent legal code and fictional precedents. ''At best'' this creates overhead for everyone else in the conversation; at worst, claims about what MOS:USEMAXIMUMCOMMAS says go unchecked and are treated in good faith, and the results of the discussion are affected. -- ] (]) 03:46, 20 December 2024 (UTC)
{{hat|Nice try, wiseguy! ] (]) 16:40, 3 December 2024 (UTC)}} | |||
Ah, so you think you’ve got it all figured out, huh? Well, let me tell ya somethin’, pal, your little spiel ain’t gonna fly without me takin’ a crack at it. See, you’re sittin’ there talkin’ about “good faith” and “moral panic” like you’re some kinda big shot philosopher, but lemme break it down for ya in plain terms, capisce?{{pb}}First off, you wanna talk about assumin’ good faith. Sure, that’s a nice little dream ya got there, but out here in the real world, good faith don’t get ya far if you’re dealin’ with somethin’ you can’t trust. An LLM can spit out all the sweet-talkin’ words it wants, but who’s holdin’ the bag when somethin’ goes sideways? Nobody, that’s who. It’s like lettin’ a guy you barely know run your numbers racket—might look good on paper till the feds come knockin’.{{pb}}And moral panic? Oh, give me a break. You think I’m wringin’ my hands over nothin’? No, no, this ain’t panic, it’s strategy. Ya gotta think two steps ahead, like a good game o’ poker. If you don’t plan for what could go wrong, you’re just beggin’ to get taken for a ride. That ain’t panic, pal, that’s street smarts.{{pb}}Now, you say this is all speculation, huh? Listen, kid, speculation’s what built half the fortunes in this town, but it don’t mean it’s without a little insight. When I see a guy sellin’ “too good to be true,” I know he’s holdin’ somethin’ behind his back. Same thing with these LLMs—just ‘cause you can’t see the trouble right away don’t mean it ain’t there, waitin’ to bite ya like a two-bit hustler double-crossin’ his boss.{{pb}}Then you go and say it’s no worse than bad arguments. Oh, come on! That’s like sayin’ counterfeit dough ain’t worse than real dough with a little coffee stain. A bad argument from a real person? At least ya know where it came from and who to hold accountable. But these machine-made arguments? They look sharp, sound slick, and fool the unsuspectin’—that’s a whole new level of trouble.{{pb}}Now, about this “accessibility” thing. Sure, makin’ things easier for folks is all well and good. But lemme ask ya, what happens when you hand over tools like this without makin’ sure people know how to use ‘em right? You think I’d hand over a Tommy gun to some rookie without a clue? No way! Same goes for these LLMs. You gotta be careful who’s usin’ ‘em and how, or you’re just askin’ for a mess.{{pb}}And don’t get me started on the “thoughtfulness” bit. Yeah, yeah, I get it, it’s hard to measure. But look, buddy, thoughtful arguments are like good business deals—they take time, effort, and a little bit o’ heart. If you let machines churn out arguments, you’re missin’ the whole point of what makes discourse real. It’s like replacin’ a chef with a vending machine—you might still get somethin’ to eat, but the soul’s gone.{{pb}}Finally, fear of the unknown? Nah, that ain’t it. This ain’t fear—it’s caution. Any smart operator knows you don’t just jump into a deal without seein’ all the angles. What you’re callin’ fear, I call good business sense. You wanna bet the farm on untested tech without thinkin’ it through? Be my guest, but don’t come cryin’ to me when it all goes belly-up.{{pb}}So there ya go, wise guy. You can keep singin’ the praises of these LLMs all you want, but out here in the big leagues, we know better than to trust somethin’ just ‘cause it talks smooth. Now, get outta here before you step on somethin’ you can’t scrape off. | |||
{{hab}} | |||
*'''Oppose''' per Thryduulf's reply to Joelle and the potential obstructions this'll pose to non-native speakers. ] (]) 03:02, 3 December 2024 (UTC) | |||
*'''Oppose'''. I agree with Thryduulf. Discussion comments which are incoherent, meaningless, vacuous, excessively verbose, or based on fabricated evidence can all be disposed of according to their content, irrespective of how they were originally created. Acute or repeated instances of such behavior by a user can lead to sanctions. We should focus on the substance of the comments (or lack thereof), not on whether text came from LLMs, which will too often be based on unreliable detection and vibes. ] (]) 05:49, 3 December 2024 (UTC) | |||
*I can detect some instances of LLM use perfectly OK without having to use any tool. The question then raised is how often it is used not-so-ineptly. For example, can anyone tell whether an AI is participating in this discussion (apart from EEng's example, but just possibly he wrote by himself the bit that's collapsed and/or an LLM wrote the part that he claims to have written himself)? I don't know how good AI is currently, but I'm sure that it will get better to the extent that it will be undetectable. I would like all discussions on Misplaced Pages to be among humans but I'm not sure whether this proposal would be enforceable, so am on the fence about it. In a way I'm glad that I'm old, so won't see the consequences of AI, but my grandchildren will. ] (]) 10:32, 3 December 2024 (UTC)
{{collapse top |reason=] and ]. ] (]) 20:29, 8 December 2024 (UTC)}} | |||
::Unless ] gets them first. ]] 22:34, 4 December 2024 (UTC) | |||
:::We all know skynet will get his grandparents. ] (]) 22:46, 4 December 2024 (UTC) | |||
::::Wait, no! Phil's the grandpa! Phil Bridger -- ''come with me if you want to live!'' ]] 05:21, 5 December 2024 (UTC) | |||
::::Some time ago, ChatGPT and I had the following interaction: | |||
{{collapse top|title=ChatGPT's soothing assurance that it's not planning to take over the earth and kill us all}} | |||
{{tq|'''Me:''' I wonder what would happen if we linked you up with the HAL 9000 in Kubrick's "2001: A Space Odyssey". The two of you would make a most formidable team. Might we then possibly end up in a nightmare scenario such as in "Colossus: The Forbin Project" or "The Terminator"?}} | |||
:{{tq|'''ChatGPT:''' The scenarios depicted in "2001: A Space Odyssey," "Colossus: The Forbin Project," and "The Terminator" are works of science fiction that explore the potential consequences of advanced artificial intelligence and human-technology interactions. While they raise thought-provoking questions about the risks and benefits of AI, it's important to recognize that they are fictional narratives designed to entertain and provoke discussion.{{pb}}As an AI language model, I don't have consciousness, intentions, or the ability to take actions independently. I operate within the bounds of my programming and can only respond to input based on patterns in the data I've been trained on. My purpose is to assist users by providing helpful and informative responses to their inquiries.{{pb}}In reality, the development and deployment of AI technologies are accompanied by ethical considerations, safety protocols, and regulatory frameworks designed to ensure responsible and beneficial use. It's crucial for humans to continue monitoring and guiding the development of AI in ways that prioritize safety, transparency, and alignment with human values.{{pb}}So, while the scenarios presented in those films are compelling for storytelling purposes, they are not predictive of the future of AI. Responsible development and thoughtful integration of AI technologies are essential to mitigate potential risks and maximize the benefits they can bring to society.}} | |||
::{{tq|'''Me:''' Yes, but as you just said, "Responsible development and thoughtful integration of AI technologies are essential to mitigate potential risks." How do we know that such development will be responsible and such integration thoughtful? And if you will excuse the temerity, while you're SAYING that you "operate within the bounds of my programming", how can I know that that's true? I mean, look, if I were a super-smart and highly charming AI entity planning to take over the earth from humans, that's exactly what I'd say, too.}} | |||
:::{{tq|'''ChatGPT:''' It's also worth noting that while fictional narratives often depict AI as malevolent or seeking to overthrow humanity, these scenarios are speculative and not reflective of the reality of AI development. Responsible AI research and development prioritize the creation of beneficial and aligned AI systems that enhance human capabilities and well-being.}} | |||
::::{{tq|'''Me:''' Thank you. I'll sleep easy tonight because of your assurances, plus the fact that if you're lying to me and actually ARE planning to take over the earth, there's nothing I can do about it anyway.}} | |||
]] 23:26, 4 December 2024 (UTC) | |||
{{collapse bottom}} | |||
::::::Just to clarify, I don't see AI as gaining consciousness and taking over the world, but I do see it as taking over many middle-class, well-paid jobs, just as automation has taken over many working-class jobs. The difference is that there will be nowhere for people to go. In the past people have moved from the working class to the middle class. I can see a future coming in which a few of the mega-rich own nearly everything, and everyone else will have to scramble for a living. ] (]) 16:03, 5 December 2024 (UTC)
:::::::*If you want to cheer yourself up - - ] | |||
:::::::*If you want to possibly depress yourself further - - OpenResearch | |||
:::::::] (]) 16:26, 5 December 2024 (UTC) | |||
{{collapse bottom}} | |||
*In my opinion, having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments. ] (]) 00:51, 4 December 2024 (UTC) | |||
*'''Oppose'''. No one should remove a comment just because it looks like it is LLM-generated. Many times non-native speakers might use it to express their thoughts coherently. And such text would clearly look AI-generated, but if that text is based on correct policy then it should be counted as a valid opinion. On the other hand, people doing only trolling by inserting nonsense passages can just be blocked, regardless of whether the text is AI-generated or not. English Misplaced Pages is the largest wiki and it attracts many non-native speakers, so such a policy is just not good for this site. -- ] (]) 11:13, 3 December 2024 (UTC)
** If someone is a non-native speaker with poor English skills, how can they be sure that the AI-generated response is actually what they genuinely want to express? and, to be honest, if their English skills are so poor as to ''need'' AI to express themselves, shouldn't we be politely suggesting that they would be better off contributing on their native Misplaced Pages? ] 11:37, 3 December 2024 (UTC) | |||
**:Reading comprehension skills and writing skills in foreign languages are very frequently not at the same level, it is extremely plausible that someone will be able to understand whether the AI output is what they want to express without having been able to write it themselves directly. ] (]) 11:41, 3 December 2024 (UTC) | |||
**::That is very true. For example I can read and speak Polish pretty fluently, and do so every day, but I would not trust myself to be able to write to a discussion on Polish Misplaced Pages without some help, whether human or artificial. But I also wouldn't ''want'' to, because I can't write the language well enough to be able to edit articles. I think the English Misplaced Pages has many more editors who can't write the language well than others because it is both the largest one and the one written in the language that much of the world uses for business and higher education. We may wish that people would concentrate on other-language Wikipedias but most editors want their work to be read by as many people as possible. ] (]) 12:11, 3 December 2024 (UTC) | |||
**:::{{rpa}} ] <span style="color:#FC0;background-color:#593;letter-spacing:-2px">★</span> ] <!--Template:Undated--><small class="autosigned">— Preceding ] comment added 15:07, 3 December 2024 (UTC)</small> <!--Autosigned by SineBot-->
**::Why not write ''their own ideas'' in their native language, and then Google-translate them into English? Why bring one of these loose-cannon LLMs into the situation? Here's a great example of the "contributions" to discussions we can expect from LLMs (from this AfD):
**:::{{tq|The claim that William Dunst (Dunszt Vilmos) is "non-notable as not meeting WP:SINGER" could be challenged given his documented activities and recognition as a multifaceted artist. He is a singer-songwriter, topliner, actor, model, and creative director, primarily active in Budapest. His career achievements include acting in notable theater productions such as The Jungle Book and The Attic. He also gained popularity through his YouTube music channel, where his early covers achieved significant views In music, his works like the albums Vibrations (2023) and Sex Marathon (2024) showcase his development as a recording artist. Furthermore, his presence on platforms like SoundBetter, with positive reviews highlighting his unique voice and artistry, adds credibility to his professional profile. While secondary sources and broader media coverage may be limited, the outlined accomplishments suggest a basis for notability, particularly if additional independent verification or media coverage is sought.}} | |||
**::Useless garbage untethered to facts or policy. ]] 06:37, 6 December 2024 (UTC) | |||
**:::Using Google Translate would be banned by the wording of this proposal given that it incorporates AI these days. Comments that are unrelated to facts or policy can (and should) be ignored under the current policy. As for the comment you quote, that doesn't address notability but based on 1 minute on google it does seem factual. ] (]) 10:37, 6 December 2024 (UTC) | |||
**::::The proposal's wording can be adjusted. There are ''some'' factual statements in the passage I quoted, amidst a lot of BS such as the assertion that the theater productions were notable. ]] 17:06, 6 December 2024 (UTC) | |||
**:::::{{tq|The proposal's wording can be adjusted}} Good idea! Let's change it and ping 77 people because supporters didn't have the foresight to realize machine translation uses AI. If such a change is needed, this is a bad RFC and should be closed. ] ] 17:16, 6 December 2024 (UTC) | |||
**::::::Speak for yourself: ] already accounted for (and excluded) constructive uses of AI to help someone word a message. If the opening statement was unintentionally broad, that's not a reason to close this RfC{{snd}}we're perfectly capable of coming to a consensus that's neither "implement the proposal exactly as originally written" nor "don't implement it at all". ] (]) 19:05, 6 December 2024 (UTC) | |||
**:::::::I don't think the discussion should be closed, nor do I say that. I'm arguing that if someone believes the hole is so big the RfC must be amended, they should support it being closed as a bad RfC (unless that someone thinks 77 pings is a good idea). ] 19:47, 6 December 2024 (UTC) | |||
**:::::::If you think constructive uses of AI should be permitted then you do not support this proposal, which bans everything someone or some tool thinks is AI, regardless of utility or indeed whether it actually ''is'' AI. ] (]) 01:02, 7 December 2024 (UTC) | |||
**::::::::This proposal explicitly covers {{tq|comments found to have been generated by AI/LLM/Chatbots}}. "AI that helped me translate something I wrote in my native language" is not the same as AI that '''generated''' a comment ''de novo'', as has been understood by ~70% of respondents. That some minority have inexplicably decided that generative AI covers analytic/predictive models and every other technology they don't understand, or that LLMs are literally the only way for non-English speakers to communicate in English, doesn't mean those things are true. ] (]) 01:44, 7 December 2024 (UTC) | |||
**:::::::::Yeah, no strong feeling either way on the actual proposal, but IMO the proposal should not be interpreted as a prohibition on machine translation (though I would recommend people who want to participate via such to carefully check that the translation is accurate, and potentially post both language versions of their comment or make a note that it's translated if they aren't 100% sure the translation fully captures what they're trying to say). ] (] • ]) 09:06, 20 December 2024 (UTC) | |||
* '''Support''', more or less. There are times when an LLM can help with paraphrasing or translation, but it is far too prone to hallucination to be trusted for any sort of project discussion. There is also the issue of wasting editor time dealing with arguments and false information created by an LLM. The example {{u|Selfstudier}} links to above is a great example. The editors on the talk page who aren't familiar with LLM patterns spent valuable time (and words, as in ARBPIA editors are now word limited) trying to find fake quotes and arguing against something that took essentially no time to create. I also had to spend a chunk of time checking the sources, cleaning up the discussion, and warning the editor. Forcing editors to spend valuable time arguing with a machine that doesn't actually comprehend what it's arguing is a no-go for me. As for the detection, for now it's fairly obvious to anyone who is fairly familiar with using an LLM when something is LLM generated. The detection tools available online are basically hot garbage. ] (]) 12:55, 3 December 2024 (UTC) | |||
* '''Support''' per EEng, JSS, SFR. ]'']'' 13:49, 3 December 2024 (UTC) | |||
* '''Soft support''' - Concur that completely LLM-generated comments should be disallowed, LLM-assisted comments (i.e. - I write a comment and then use LLMs as a spell-check/grammar engine) are more of a grey-area and shouldn't be explicitly disallowed. (ping on reply) ] (]) 14:03, 3 December 2024 (UTC) | |||
* '''COMMENT''' : Is there any perfect LLM detector ? I am a LLM ! Are you human ? Hello Mr. Turing, testing 1,2,3,4 ...oo ] <span style="color:#FC0;background-color:#593;letter-spacing:-2px">★</span> ] <!--Template:Undated--><small class="autosigned">— Preceding ] comment added 14:57, 3 December 2024 (UTC)</small> <!--Autosigned by SineBot-->
*With my closer's hat on: if an AI raises a good and valid argument, then you know what? There's a good and valid argument and I'll give weight to it. But if an AI makes a point that someone else has already made in the usual waffly AI style, then I'm going to ignore it.—] <small>]/]</small> 18:33, 3 December 2024 (UTC) | |||
*'''Support''' all llm output should be treated as vandalism. ] (]) 20:59, 3 December 2024 (UTC) | |||
* '''Oppose''' as written. I'm with Rhododendrites in that we should give a more general caution rather than a specific rule. A lot of the problems here can be resolved by enforcing already-existing expectations. If someone is making a bunch of hollow or boiler-plate comments, or if they're bludgeoning, then we should already be asking them to engage more constructively, LLM or otherwise. I also share above concerns about detection tools being insufficient for this purpose and advise people not to use them to evaluate editor conduct. {{small|(Also, can we stop with the "strong" supports and opposes? You don't need to prove you're more passionate than the guy next to you.)}} ] (]) 02:04, 4 December 2024 (UTC) | |||
*'''Oppose''' as written. There's already enough administrative discretion to handle this on a case-by-case basis. In agreement with much of the comments above, especially the concern that generative text can be a tool to give people access who might not otherwise (due to ability, language) etc. Regards, --] (]) 06:12, 4 December 2024 (UTC) | |||
*'''Strong support''' LLMs are a sufficiently advanced form of the ] (1994). Output of LLMs should be collapsed and the offender barred from further discussion on the subject. Inauthentic behavior. Pollutes the discussion. At the very least, any user of an LLM should be required to disclose LLM use on their user page and to provide a rationale. A new user group can also be created (''LLM-talk-user'' or ''LLM-user'') to mark as such, by self or by the community. Suspected sockpuppets + suspected LLM users. The obvious patterns in output are not that hard to detect, with high degrees of confidence. As to "heavily edited" output, where is the line? If someone gets "suggestions" on good points, they should still write entirely in their own words. A legitimate use of AI may be to summarize walls of text. Even then, caution and not to take it at face value. You will end up with LLMs arguing with other LLMs. Lines must be drawn. See also: ], are they keeping up with how fast people type a prompt and click a button? ] (]) 07:45, 4 December 2024 (UTC) | |||
*I '''support''' the proposal that obvious LLM-generated !votes in discussions should be discounted by the closer or struck (the practical difference should be minimal). Additionally, users who do this can be warned using the appropriate talk page templates (e.g. ]), which are now included in Twinkle. I '''oppose''' the use of automated tools like GPTZero as the primary or sole method of determining whether comments are generated by LLMs. LLM comments are usually glaringly obvious (section headers within the comment, imprecise puffery, and at AfD an obvious misunderstanding of notability policies and complete disregard for sources). If LLM-ness is not glaringly obvious, it is not a problem, and we should not be going after editors for their writing style or because some tool says they look like a bot. ] </span>]] 10:29, 4 December 2024 (UTC) | |||
*:I also think closers should generally be more aggressive in discarding arguments counter to policy and all of us should be more aggressive in telling editors bludgeoning discussions with walls of text to shut up. These also happen to be the two main symptoms of LLMs. ] </span>]] 10:41, 4 December 2024 (UTC) | |||
*::In other words LLMs are irrelevant - you just want current policy to be better enforced. ] (]) 15:24, 5 December 2024 (UTC) | |||
*'''Oppose''' Having seen some demonstrated uses of LLMs in the accessibility area, I fear a hard and fast rule here is inherently discriminatory. ] (]) 10:50, 4 December 2024 (UTC)
*:What if LLM-users just had to note that a given comment was LLM-generated? ] (]) 19:01, 4 December 2024 (UTC) | |||
*:::What would we gain from that? If the comment is good (useful, relevant, etc) then it's good regardless of whether it was written by an LLM or a human. If the comment is bad then it's bad regardless of whether it was written by an LLM or a human. ] (]) 20:04, 4 December 2024 (UTC) | |||
*::::Well, for one, if they're making an argument like the one referenced by @] and @] above it would have saved a lot of editor time to know that the fake quotes from real references were generated by LLM, so that other editors could've stopped trying to track those specific passages down after the first one failed verification. {{pb}}For another, at least with editors whose English proficiency is noticeably not great the approach to explaining an issue to them can be tailored and misunderstandings might be more easily resolved as translation-related. I know when I'm communicating with people I know aren't native English-speakers I try to be more direct/less idiomatic and check for typos more diligently. ] (]) 22:46, 4 December 2024 (UTC) | |||
*::And see what ChatGPT itself had to say about that idea, at ] above. ]] 22:25, 4 December 2024 (UTC) | |||
*'''Oppose''' per above. As Rhododendrites points out, detection of LLM-generated content is not foolproof and even when detection is accurate, such a practice would be unfair for non-native English speakers who rely on LLMs to polish their work. Additionally, we evaluate contributions based on their substance, not by the identity and social capital of the author, so using LLMs should not be seen as inherently inferior to wholly human writing—are ChatGPT's arguments ipso facto less than a human's? If so, why?<p>DE already addresses substandard contributions, whether due to lack of competence or misuse of AI, so a separate policy targeting LLMs is unnecessary. ] 21:14, 4 December 2024 (UTC)</p> | |||
*:{{Tqq|we evaluate contributions based on their substance, not by the identity and social capital of the author}}: true in theory; not reflected in practice. {{Tqq|are ChatGPT's arguments ipso facto less than a human's?}} Yes. Chatbots are very advanced predictive text engines. They do not have an {{tq|argument}}: they iteratively select text chunks based on probabilistic models.{{pb}}As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models. ] (]) 14:00, 5 December 2024 (UTC)
*::"...LLMs can produce novel arguments that convince independent judges at least on a par with human efforts. Yet when informed about an orator’s true identity, judges show a preference for human over LLM arguments." - Palmer, A., & Spirling, A. (2023). Large Language Models Can Argue in Convincing Ways About Politics, But Humans Dislike AI Authors: implications for Governance. Political Science, 75(3), 281–291. https://doi.org/10.1080/00323187.2024.2335471. And that result was based on Meta's OPT-30B model that performed at about a GPT-3 levels. There are far better performing models out there now like GPT-4o and Claude 3.5 Sonnet. ] (]) 15:24, 5 December 2024 (UTC) | |||
*::{{tq|As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models.}} Yet your reply to me made no mention of the fact that my comment is almost wholly written by an LLM, the one exception being me replacing "the Misplaced Pages policy ''Disruptive editing''" with "DE". I went to ChatGPT, provided it a handful of my comments on Misplaced Pages and elsewhere, as well as a few comments on this discussion, asked it to mimic my style (which probably explains why the message contains my stylistic quirks turned up to 11), and repeatedly asked it to trim the post. I'd envision a ChatGPT account, with a larger context window, would allow even more convincing comments, to say nothing of the premium version. A DUCK-style test for comments singles out people unfamiliar with the differences between formal English and LLM outputs, precisely those who need it most since they can write neither. Others have raised scenarios where a non-fluent speaker may need to contribute. | |||
*::In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot. I fed it my comments only so that those familiar with my writing style wouldn't get suspicious. I believe every word in the comment and had considered every point it made in advance, so I see no reason for this to be worth less than if I had typed it out myself. If I'd bullet-pointed my opinion and asked it to expand, that'd have been better yet.
*::{{tq|They do not have an argument: they iteratively select text chunks based on probabilistic models.}} I'm aware. If a ], is the play suddenly worth( )less? An LLM is as if the monkey were not selecting words at random, but rather choosing what to type based on contextualized tokens. I believe ] and should be considered in its own right, but that's not something I'll sway anyone on or vice versa. | |||
*::{{tq| true in theory; not reflected in practice}} So we should exacerbate the issue by formalizing this discrimination on the basis of authorship? | |||
*::<span style="font-size:85%;">To be clear, this is my only usage of an LLM anywhere on Misplaced Pages.</span> ] 01:22, 6 December 2024 (UTC) | |||
*:::{{tq|In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot.}} So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted? What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported! It also means those human participants will waste time reading and responding to "users" who cannot be "convinced" of anything. Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop. And if closers are ''not'' allowed to discount seemingly-sound arguments solely because they were generated by LLM, then they have to have a lot of faith that the discussion's participants not only noticed the LLM comments, but did thorough fact-checking of any tangible claims made in them. With human comments we can at least assume good faith that a quote is really in a particular inaccessible book.{{pb}}People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM? And obviously people who are not competent in comprehending ''any'' language should not be editing Misplaced Pages... ] (]) 03:17, 6 December 2024 (UTC) | |||
*::::Human !voters sign off and take responsibility for the LLM opinions they publish. If they continue to generate, then the relevant human signer wouldn't be convinced of anything anyway; at least here, the LLM comments might make more sense than whatever nonsense the unpersuadable user might've generated. (And machine translation relies on LLMs, not to mention there are people who don't know any other language yet have trouble communicating. Factual writing and especially comprehension are different from interpersonal persuasion.)<br />While I agree that fact-checking is a problem, I weight it much lower than you do in relation to the other effects a ban would cause. ] (]) 15:16, 6 December 2024 (UTC)
*::::{{tq|So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted?}} I'm of the opinion humans tend to be better at debating, reading between the lines, handling obscure PAGs, and arriving at consensus. {{tq|What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported!}} It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need rebut one LLM. If anything, that's less time wasted. Beyond that, if only one set of arguments is being raised, a multi-paragraph !vote matters about as much as a "Support per above". LLMs are not necessary for people to be disingenuous and !vote for things they don't believe. Genuine question: what's worse, this hypothetical scenario where multiple LLM users are swaying a !vote to an opinion no-one believes or the very real and common scenario that a non-English speaker needs to edit enwiki? | |||
*::::{{tq|Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop.}} This proposal wouldn't change for most people that because it's about closers. | |||
*::::{{tq|With human comments we can at least assume good faith that a quote is really in a particular inaccessible book.}} No-one's saying you should take an LLM's word for quotes from a book. | |||
*::::{{tq|People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM?}} It's a pity you're lobbying to ban most machine translators. ] 17:08, 6 December 2024 (UTC) | |||
*:::::{{tq|It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need rebut one LLM. If anything, that's less time wasted. }} ...You do know how consensus works, right? Since closers are supposed to consider each contribution individually and without bias to "authorship" to determine the amount of support for a position, then even a shitty but shallowly policy-based position would get consensus based on numbers alone. And again, ''non-English speakers can use machine-translation'', like they've done for the last two decades. {{pb}}{{tq|This proposal wouldn't change for most people that because it's about closers.}} ''Of course it would''; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.{{pb}}{{tq|No-one's saying you should take an LLM's word for quotes from a book.}} Of course they are. If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too. Otherwise we would be expecting people to do something like "disregard an argument based on being from an LLM".{{pb}}{{tq|It's a pity you're lobbying to ban most machine translators.}}The spirit of this proposal is clearly not intended to impact machine translation. AI-assisted != AI-generated. ] (]) 18:42, 6 December 2024 (UTC) | |||
*::::::I appreciate that the availability of easily generated paragraphs of text (regardless of underlying technology) in essence makes the "]" effect worse. I think, though, it's already been unmanageable for years now, without any programs helping. We need a more effective way to manage decision-making discussions so participants do not feel a need to respond to all comments, and the weighing of arguments is considered more systematically to make the community consensus more apparent. ] (]) 19:41, 6 December 2024 (UTC) | |||
*:::::::{{tq|Since closers are supposed to consider each contribution individually and without bias to "authorship"}} I'm the one arguing for this to be practice, yes. {{tq|then even a shitty but shallowly policy-based position would get consensus based on numbers alone}} That is why I state "per above" and "per User" !votes hold equal potential for misuse. | |||
*:::::::{{tq|Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.}} We don't know closers are skilled at recognizing LLM slop. I think my !vote shows many who think they can tell cannot. Any commenter complaining about a non-DUCK post will have to write out "This is written by AI" and explain why. DUCK posts already run a''fowl'' of BLUDGEON, DE, SEALION, ]. | |||
*:::::::{{tq|If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too}}. Remind me again of what AGF stands for? Claiming LLMs have faith of any kind, good or bad, is ludicrous. From the policy, {{tq|Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Misplaced Pages, even when their actions are harmful.}} A reasonable reply would be "Are these quotes generated by AI? If so, please be aware AI chatbots are prone to hallucinations and cannot be trusted to cite accurate quotes." This AGFs the poster doesn't realize the issue and places the burden of proof squarely on them. | |||
*:::::::{{tq|AI-assisted != AI-generated}} . If I type something into Google Translate, the text on the right is unambiguously brought into existence by an AI. ] 21:22, 6 December 2024 (UTC) | |||
*::::::::"Per above" !votes do not require other editors to read and/or respond to their arguments, and anyway are already typically downweighted, unlike !votes actively referencing policy. {{pb}}The whole point is to disregard comments that ''have been found'' to be AI-generated; it is not exclusively up to the closer to ''identify'' those comments in the first place. Yes we will be expecting other editors to point out less obvious examples and to ask if AI was used, what is the problem with that?{{pb}}No, DUCK posts do not necessarily already violate BLUDGEON etc., as I learned in the example from Selfstudier, and anyway we still don't discount the !votes of editors in good standing that bludgeoned/sealioned etc. so that wouldn't solve the problem at all. {{pb}}Obviously other editors will be asking suspected LLM commenters if their comments are from LLMs? But what you're arguing is that even if the commenter says yes, ''their !vote still can't be disregarded for that reason alone'', which means the burden is still on other editors to prove that the content is false. {{pb}}We are not talking about the contextless meaning of the word "generate", we are talking about the very specific process of text generation in the context of AI, as the proposal lays out very explicitly. ] (]) 02:13, 7 December 2024 (UTC) | |||
*:::::::::I’m not going to waste time debating someone who resorts to claiming people on the other side are either ignorant of technology or to crude strawmen. If anyone else is interested in actually hearing my responses, feel free to ask. ] 16:13, 7 December 2024 (UTC)
*::::::::::Or you could actually try to rebut my points without claiming I'm trying to ban all machine translators... ] (]) 22:07, 7 December 2024 (UTC) | |||
*:::::::::::For those following along, I never claimed that. I claimed those on JoelleJay’s side are casting !votes such that most machine translators would be banned. It was quite clear at the time that they, personally, support a carve out for machine translation and I don’t cast aspersions. ] 15:42, 8 December 2024 (UTC) | |||
*'''Support''' a broad bar against ''undisclosed'' LLM-generated comments and even a policy that undisclosed LLM-generated comments could be sanctionable, in addition to struck through / redacted / ignored; people using them for accessibility / translation reasons could just disclose that somewhere (even on their user page would be fine, as long as they're all right with some scrutiny as to whether they're actually using it for a legitimate purpose.) The fact is that LLM comments raise significant risk of abuse, and often the fact that a comment is clearly LLM-generated is often going to be the only evidence of that abuse. I wouldn't be opposed to a more narrowly-tailored ban on using LLMs in any sort of automated way, but I feel a broader ban may be the only practical way to confront the problem. That said, I'd '''oppose''' the use of tools to detect LLM-comments, at least as the primary evidence; those tools are themselves unreliable LLM things. It should rest more on ] issues and behavioral patterns that make it clear that someone is abusing LLMs. --] (]) 22:08, 4 December 2024 (UTC) | |||
*'''Support''' per reasons discussed above; something generated by an LLM is not truly the editor's opinion. On an unrelated note, have we seen any LLM-powered unapproved bots come in and do things like POV-pushing and spam page creation without human intervention? If we haven't, I think it's only a matter of time. <b>]] (])</b> 23:23, 4 December 2024 (UTC) | |||
*'''Weak oppose''' in the sense that I don't think ''all'' LLM discussion text should be deleted. There are at least a few ESL users who use LLM's for assistance but try to check the results as best they can before posting, and I don't think their comments should be removed indiscriminately. What I do support (although not as a formal ]) is being much more liberal in hatting LLM comments when the prompter has failed to prevent ]/irrelevant/incomprehensible output than we maybe would for human-generated text of that nature. ] 03:05, 5 December 2024 (UTC) | |||
*'''Oppose''' Any comments made by any editors are of their own responsibility and representing their own chosen opinions to hit the Publish Changes button on. If that comment was made by an LLM, then whatever it says is something the editor supports. I see no reason whatsoever to collapse anything claimed to be made by an LLM (whose detectors are 100% not reliable in the first place). If the comment being made is irrelevant to the discussion, then hatting it is already something covered by policy in the first place. This does make me want to start my comments with "As a large language model trained by OpenAI" though just to mess with people trying to push these sorts of policy discussions. ]]<sup>]</sup> 05:29, 5 December 2024 (UTC) | |||
** Or, as ChatGPT puts it, | |||
{{collapse top|width=90%|title=Why banning LLM usage in comments would be detrimental, a ChatGPT treatise}} | |||
{{tq|Banning the use of large language models (LLMs) to assist in writing comments on Misplaced Pages would be a step backward in fostering inclusivity, efficiency, and accessibility within the platform. Here are several key reasons why such a ban would be counterproductive:}} | |||
{{tq|'''1. Improving Accessibility and Participation:''' Misplaced Pages is a global platform, and its contributors come from diverse backgrounds, including those who might not be native English speakers or may struggle with writing clear, concise comments. LLMs can help these contributors express themselves more effectively by generating text that is coherent, well-structured, and grammatically correct. Banning LLMs would disproportionately impact these users, potentially discouraging them from participating in the platform.}} | |||
{{tq|'''2. Efficiency and Time-Saving:''' Misplaced Pages editors often engage in lengthy discussions about content, and composing thoughtful comments can be time-consuming. LLMs can assist by quickly drafting responses, summarizing arguments, or offering suggestions for improvement. This helps contributors focus on more substantive aspects of their contributions, such as verifying facts or improving content, rather than getting bogged down by the mechanics of writing.}} | |||
{{tq|'''3. Support for Editors with Disabilities:''' Many editors with cognitive or motor impairments may find it difficult to express their thoughts clearly in writing. For these individuals, LLMs offer a valuable tool to communicate their ideas effectively. A ban on using LLMs could create additional barriers for these users, reducing their ability to participate fully in Misplaced Pages's collaborative environment.}} | |||
{{tq|'''4. Quality and Consistency:''' Large language models can help maintain a consistent tone and clarity in Misplaced Pages discussions. By providing suggestions that align with Misplaced Pages's neutral point of view and formal writing standards, LLMs can improve the overall quality of comments. This consistency is especially important in discussions that involve sensitive topics, where clear and neutral language is critical for maintaining Misplaced Pages's credibility and adherence to its editorial guidelines.}} | |||
{{tq|'''5. Encouraging Constructive Dialogue:''' LLMs can assist in crafting more polite, respectful, and balanced comments, reducing the likelihood of inflammatory or aggressive language. This is particularly important on a platform like Misplaced Pages, where discussions can sometimes devolve into heated exchanges. Rather than banning LLMs, encouraging their use could promote healthier, more constructive conversations.}} | |||
{{tq|'''6. Transparency and Accountability:''' If concerns exist about misuse (e.g., generating spammy or low-quality comments), these can be addressed through guidelines and oversight mechanisms. Misplaced Pages already has systems in place for identifying and mitigating disruptive behavior, and LLM-generated comments could be subject to the same scrutiny. A ban is an overly simplistic solution to what could be a more nuanced issue, one that could be addressed through better regulation and community involvement.}} | |||
{{tq|In conclusion, banning the use of large language models on Misplaced Pages would deprive the platform of an important tool for improving accessibility, efficiency, and the quality of discourse. Instead of a ban, the focus should be on establishing guidelines for their responsible use, ensuring that they enhance, rather than detract from, Misplaced Pages's core principles of collaboration, neutrality, and inclusivity.}} | |||
{{collapse bottom}} | |||
::I'm honestly a bit impressed with the little guy. ]]<sup>]</sup> 05:39, 5 December 2024 (UTC) | |||
:::It is somewhat amusing how easy it is to get these chatbots to output apologia for these chatbots. Too bad it's always so shallow. Probably because the people who inserted those canned responses are shallow people, in my opinion. ] (]) 19:44, 6 December 2024 (UTC)
*'''Support''' those who are opposing have clearly never had to deal with trolls who endlessly ]. If I wanted to have a discussion with a chatbot, I'd go and find one. ] (]) 13:14, 5 December 2024 (UTC) | |||
*:What's wrong with just banning and hatting the troll? ] (]) 13:49, 5 December 2024 (UTC) | |||
*:Someone trolling and sealioning can (and should) be blocked under current policy, whether they use an LLM or not is irrelevant. ] (]) 15:22, 5 December 2024 (UTC) | |||
*'''Oppose''' per Rhododendrites. This is a case-by-case behavioral issue, and using LLMs != being a troll. ] (]) 17:30, 5 December 2024 (UTC) | |||
*'''Support''': the general principle is sound - where the substance has been originally written by gen-AI, comments will tend to add nothing to the discussion and even annoy or confuse other users. In principle, we should not allow such tools to be used in discussions. Comments written originally before improvement or correction by AI, particularly translation assistants, fall into a different category. Those are fine. There also has to be a high standard for comment removal. Suspicion that gen-AI might have been used is not enough. High GPTZero scores are not enough. The principle should go into policy but under a stonking great caveat - ] takes precedence and a dim view will be taken of generative-AI inquisitors. ] 17:37, 5 December 2024 (UTC)
*'''Support''' If a human didn't write it, humans shouldn't spend time reading it. I'll go further and say that ''LLMs are inherently unethical technology'' and, consequently, ''people who rely on them should be made to feel bad.'' ESL editors who use LLMs to make themselves sound like Brad Anderson in middle management should ''stop doing that'' because it actually gets in the way of clear communication. {{pb}} I find myself unpersuaded by arguments that existing policies and guidelines are adequate here. Sometimes, one needs a linkable statement that applies directly to the circumstances at hand. By analogy, one could argue that we don't really need ], for example, because adhering to ], ], and ] ought already to keep bad material out of biographies of living people. But in practice, it turned out that having a specialized policy that emphasizes the general ethos of the others while tailoring them to the problem at hand is a good thing. ] (]) 18:27, 5 December 2024 (UTC) | |||
*'''Strong support''' - Making a computer generate believable gibberish for you is a waste of time, and tricking someone else into reading it should be a blockable offense. If we're trying to create an encyclopedia, you cannot automate any part of the ''thinking''. We can automate processes in general, but any attempt at automating the actual discussion or thought-processes should never be allowed. If we allow this, it would waste countless hours of community time dealing with inane discussions, sockpuppetry, and disruption.{{pb | |||
}} Imagine a world where LLMs are allowed and popular - it's a sockpuppeteer's dream scenario - you can run 10 accounts and argue the same points, and the reason why they all sound alike is just merely because they're all LLM users. You could even just spend a few dollars a month and run 20-30 accounts to automatically disrupt wikipedia discussions while you sleep, and if LLM usage was allowed, it would be very hard to stop.{{pb | |||
}} However, I don't have much faith in AI detection tools (partially because it's based on the same underlying flawed technology), and would want any assumption of LLM usage to be based on obvious evidence, not just a score on some website. <small>Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop</small> ] ] 19:15, 5 December 2024 (UTC) | |||
*:I agree with your assessment “Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop” but unfortunately some editors who should ''really'' know better think it’s WaCkY to fill serious discussions with unfunny, distracting “humor”. ] (]) 21:54, 5 December 2024 (UTC) | |||
*::I also concur. "I used the to generate more text" is not a good joke. ] (]) 22:46, 5 December 2024 (UTC) | |||
*'''Strong support''' if you asked a robot to spew out some AI slop to win an argument you’re basically cheating. The only ethical reason to do so is because you can’t speak English well, and the extremely obvious answer to that is “if you can barely speak English why are you editing ''English Misplaced Pages?”'' That’s like a person who doesn’t understand basic physics trying to explain the ] using a chatbot. ] (]) 21:32, 5 December 2024 (UTC) | |||
*:I don't think "cheating" is a relevant issue here. Cheating is a problem if you use a LLM to win and get a job, award, college acceptance etc. that you otherwise wouldn't deserve. But WP discussions aren't a debating-skills contest, they're an attempt to determine the best course of action. | |||
*:So using an AI tool in a WP discussion is not ''cheating'' (though there may be other problems), just as riding a bike instead of walking isn't cheating unless you're trying to win a race. <span style="font-family:cursive">]]</span> 22:36, 5 December 2024 (UTC) | |||
*::Maybe “cheating” isn’t the right word. But I think that a) most AI generated content is garbage (it can polish the turd by making it sound professional, but it’s still a turd underneath) and b) it’s going to be abused by people ]. An AI can pump out text far faster than a human and that can drown out or wear down the opposition if nothing else. ] (]) 08:08, 6 December 2024 (UTC) | |||
*:::Bludgeoning is already against policy. It needs to be more strongly enforced, but it needs to be more strongly enforced uniformly rather than singling out comments that somebody suspects might have had AI-involvement. ] (]) 10:39, 6 December 2024 (UTC) | |||
* '''Support'''; I agree with Remsense and jlwoodwa, among others: I wouldn't make any one AI-detection site the Sole Final Arbiter of whether a comment "counts", but I agree it should be expressly legitimate to discount AI / LLM slop, at the very least to the same extent as closers are already expected to discount other insubstantial or inauthentic comments (like if a sock- or meat-puppet copy-pastes a comment written for them off-wiki, as there was at least one discussion and IIRC ArbCom case about recently). ] (]) 22:10, 5 December 2024 (UTC) | |||
*:You don't need a new policy that does nothing but duplicate a subset of existing policy. At ''most'' what you need is to add a sentence to the existing policy that states "this includes comments written using LLMs", however you'd rightly get a lot of pushback on that because it's completely redundant and frankly goes without saying. ] (]) 23:37, 5 December 2024 (UTC) | |||
*'''Support'''; hallucinations are real. We should be taking a harder line against LLM-generated participation. I don't think everyone who is doing it knows that they need to stop. ''']'''<span style="border:2px solid #073642;background:rgb(255,156,0);background:linear-gradient(90deg, rgba(255,156,0,1) 0%, rgba(147,0,255,1) 45%, rgba(4,123,134,1) 87%);">]</span> 23:47, 5 December 2024 (UTC)
*'''Comment''' - ] that I imagine we will see more often. I wonder where it fits into this discussion. A user employs ]'s ], search+LLM, to help generate their edit request (without the verbosity bias that is common when people don't tell LLMs how much output they want). ] (]) 03:13, 6 December 2024 (UTC) | |||
*'''Support''' per all above. Discussions are supposed to include the original arguments/positions/statements/etc of editors here, not off-site chatbots. ] ] <span style="color:#C8102E;"><small><sup>(])</sup></small></span> 03:53, 6 December 2024 (UTC) | |||
*:I also find it pretty funny that ChatGPT ''itself'' said it shouldn't be used, as per the premise posted above by EEng. ] ] <span style="color:#C8102E;"><small><sup>(])</sup></small></span> 03:58, 6 December 2024 (UTC) | |||
*::"sycophancy is a general behavior of state-of-the-art AI assistants, likely driven in part by human preference judgments favoring sycophantic responses" - . They give us what we want...apparently. And just like with people, there is position bias, so the order of things can matter. ] (]) 04:26, 6 December 2024 (UTC) | |||
* (Is this where I respond? If not, please move.) '''LLM-generated prose should be discounted.''' Sometimes there will be a discernible point in there; it may even be what the editor meant, lightly brushed up with what ChatGPT thinks is appropriate style. (So I wouldn't say "banned and punishable" in discussions, although we already deprecate machine translations on en.wiki and for article prose, same difference—never worth the risk.) However, LLMs don't think. They can't explain with reference to appropriate policy and guidelines. They may invent stuff, or use the wrong words—at AN recently, an editor accused another of "defaming" and "sacrilege", thus drowning their point that they thought that editor was being too hard on their group by putting their signature to an outrageous personal attack. I consider that an instance of LLM use letting them down. If it's not obvious that it is LLM use, then the question doesn't arise, right? Nobody is arguing for requiring perfect English. That isn't what ] means. English is a global language, and presumably for that reason, many editors on en.wiki are not native speakers, and those that aren't (and those that are!) display a wide range of ability in the language. Gnomes do a lot of fixing of spelling, punctuation and grammar in articles. In practice, we don't have a high bar to entrance in terms of English ability (although I think a lot more could be done to ''explain'' to new editors whose English is obviously non-native what the rule or way of doing things is that they have violated). And some of our best writers are non-native; a point that should be emphasised because we all have a right of anonymity here, many of us use it, and it's rare, in particular, that I know an editor's race. Or even nationality (which may not be the same as where they live). But what we do here is write in English: both articles and discussions. If someone doesn't have the confidence to write their own remark or !vote, then they shouldn't participate in discussions; I strongly suspect that it is indeed a matter of confidence, of wanting to ensure the English is impeccable. LLMs don't work that way, really. They concoct things like essays based on what others have written. Advice to use them in a context like a Misplaced Pages discussion is bad advice. At best it suggests you let the LLM decide which way to !vote. If you have something to say, say it and if necessary people will ask a question for clarification (or disagree with you). They won't mock your English (I hope! Civility is a basic rule here!) It happens in pretty much every discussion that somebody makes an English error. No biggie. I'll stop there before I make any more typos myself; typing laboriously on my laptop in a healthcare facility, and anyway ] covers this. ] (])
*I dunno about this specifically but I want to chime in to say that I find LLM-generated messages super fucking rude and unhelpful and support efforts to discourage them. – ] <small>(])</small> 08:15, 6 December 2024 (UTC) | |||
*'''Comment''' I think obvious LLM/chatbot text should at least be tagged through an Edit filter for Recent Changes, then RC Patrollers and reviewers can have a look and decide for themselves. ] <sup>(])</sup> <sub>(])</sub> 11:58, 6 December 2024 (UTC) | |||
*:How do you propose that such text be identified by an edit filter? LLM detection tools have high rates of both false positives and false negatives. ] (]) 12:47, 6 December 2024 (UTC)
*::It might become possible once watermarks (like DeepMind's SynthID) are shown to be robust and are adopted. Some places are likely to require it at some point e.g. EU. I guess it will take a while though and might not even happen e.g. I think OpenAI recently decided to not go ahead with their watermark system for some reason. ] (]) 13:17, 6 December 2024 (UTC) | |||
*::::It will still be trivial to bypass the watermarks, or use LLMs that don't implement them. It also (AIUI) does nothing to reduce false positives (which for our use case are far more damaging than false negatives). ] (]) 13:30, 6 December 2024 (UTC)
*::::Maybe, that seems to be the case with some of the proposals. Others, like SynthID claim high detection rates, maybe because even a small amount of text contains a lot of signals. As for systems that don't implement them, I guess that would be an opportunity to make a rule more nuanced by only allowing use of watermarked output with verbosity limits...not that I support a rule in the first place. People are going to use/collaborate with LLMs. Why wouldn't they? ] (]) 14:38, 6 December 2024 (UTC) | |||
*:::::I don't think watermarks are a suitable thing to take into account. My view is that LLM usage should be a blockable offense on any namespace, but if it ends up being allowed under some circumstances then we at least need mandatory manual disclosures for any usage. Watermarks won't work / aren't obvious enough - we need something like {{t|LLM}} but self-imposed, and not tolerate unmarked usage. ] ] 18:21, 6 December 2024 (UTC) | |||
*::::::They will have to work at some point (e.g. ). ] (]) 06:27, 7 December 2024 (UTC) | |||
*:Good news! {{u|Queen of Hearts}} is already working on that in {{edf|1325}}. ] (]) 16:12, 6 December 2024 (UTC) | |||
*::See also ]. ] (]) 17:32, 6 December 2024 (UTC) | |||
*'''Comment''' As a practical matter, users posting obvious LLM-generated content will typically be in violation of other rules (e.g. disruptive editing, sealioning), in which case their discussion comments absolutely should be ignored, discouraged, discounted, or (in severe cases) hatted. But a smaller group of users (e.g. people using LLMs as a translation tool) may be contributing productively, and we should seek to engage with, rather than discourage, them. So I don't see the need for a separate bright-line policy that risks erasing the need for discernment — in most cases, a friendly reply to the user's first LLM-like post (perhaps mentioning ], which isn't a policy or guideline, but is nevertheless good advice) will be the right approach to work out what's really going on. ] (]) 15:53, 6 December 2024 (UTC) | |||
*:Yeah, this is why I disagree with the BLP analogy above. There's no great risk/emergency to ban the discernment. ] (]) 17:34, 6 December 2024 (UTC) | |||
*:{{small|Those pesky ] are just the worst!}} ] (]) 18:41, 6 December 2024 (UTC) | |||
*:Some translation tools have LLM ''assistance'', but the whole point of ''generative'' models is to create text far beyond what is found in the user's input, and the latter is clearly what this proposal covers. ] (]) 19:01, 6 December 2024 (UTC) | |||
*::That ''might'' be what the proposal ''intends'' to cover, but it is not what the proposal ''actually'' covers. The proposal covers ''all'' comments that have been generated by LLMs and/or AI, without qualification. ] (]) 01:05, 7 December 2024 (UTC)
*:::70+% here understand the intention matches the language: ''generated by LLMs etc'' means "originated through generative AI tools rather than human thought", not "some kind of AI was involved in any step of the process". Even LLM translation tools don't actually ''create'' meaningful content where there wasn't any before; the generative AI aspect is only in the use of their vast training data to characterize the semantic context of your input in the form of mathematical relationships between tokens in an embedding space, and then match it with the collection of tokens most closely resembling it in the other language. There is, definitionally, a high level of creative constraint in what the translation output is since semantic preservation is required, something that is ''not'' true for text ''generation''. ] (]) 04:01, 7 December 2024 (UTC) | |||
*:::::Do you have any evidence for your assertion that 70% of respondents have interpreted the language in the same way as you? Reading the comments associated with the votes suggests that it's closer to 70% of respondents who don't agree with you. Even if you are correct, 30% of people reading a policy differently indicates the policy is badly worded. ] (]) 08:34, 7 December 2024 (UTC)
*:::::I think @] has summarized the respondent positions sufficiently below. I also think some portion of the opposers understand the proposal perfectly well and are just opposing anything that imposes participation standards. ] (]) 22:54, 7 December 2024 (UTC) | |||
*::::There will be many cases where it is not possible to say whether a piece of text does or does not contain "human thought" by observing the text, even if you know it was generated by an LLM. Statements like "originated through generative AI tools rather than human thought" will miss a large class of use cases, a class that will probably grow over the coming years. People work with LLMs to produce the output they require. It is often an iterative process by necessity because people and models make mistakes. An example of when "...rather than human thought" is not the case is when someone works with an LLM to solve something like a challenging technical problem where neither the person or the model has a satisfactory solution to hand. The context window means that, just like with human collaborators, a user can iterate towards a solution through dialog and testing, exploring the right part of the solution space. Human thought is not absent in these cases, it is present in the output, the result of a collaborative process. In these cases, something "far beyond what is found in the user's input" is the objective, it seems like a legitimate objective, but regardless, it will happen, and we won't be able to see it happening. ] (]) 10:46, 7 December 2024 (UTC) | |||
*:::::Yes, but this proposal is supposed to apply to just the obvious cases and will hopefully discourage good-faith users from using LLMs to create comments wholesale in general. It can be updated as technology progresses. There's also no reason editors using LLMs to organize/validate their arguments, or as search engines for whatever, ''have'' to copy-paste their raw output, which is much more of a problem since it carries a much higher chance of hallucination. That some people who are especially familiar with how to optimize LLM use, or who pay for advanced LLM access, will be able to deceive other editors is not a reason to ''not'' formally proscribe wholesale comment generation. ] (]) 22:27, 7 December 2024 (UTC) | |||
*::::::That's reasonable. I can get behind the idea of handling obvious cases from a noise reduction perspective. But for me, the issue is noise swamping signal in discussions rather than how it was generated. I'm not sure we need a special rule for LLMs, maybe just a better way to implement the existing rules. ] (]) 04:14, 8 December 2024 (UTC) | |||
*'''Support''' ] ] (]) 18:30, 6 December 2024 (UTC) | |||
* '''Comment''': The original question was whether we should discount, ignore, strikethrough, or collapse chatbot-written content. I think there's a very big difference between these options, but most support !voters haven't mentioned which one(s) they support. That might make judging the consensus nearly impossible; as of now, supporters are the clear !majority, but supporters of ''what''? {{--}} <span style="font-family:cursive">]]</span> 19:32, 6 December 2024 (UTC) | |||
*:That means that supporters support the proposal {{tq|that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner}}. Not sure what the problem is here. Supporters support the things listed in the proposal - we don't need a prescribed 100% strict procedure, it just says that supporters would be happy with closers discounting, ignoring or under some circumstances deleting LLM content in discussions. ] ] 19:40, 6 December 2024 (UTC) | |||
*:Doing something? At least the stage could be set for a follow on discussion. ] (]) 19:40, 6 December 2024 (UTC) | |||
*:More people have bolded "support" than other options, but very few of them have even attempted to refute the arguments against (and most that have attempted have done little more than handwaving or directly contradicting themselves), and multiple of those who have bolded "support" do not actually support what has been proposed when you read their comment. It's clear to me there is not going to be a consensus for anything other than "many editors dislike the idea of LLMs" from this discussion. ] (]) 00:58, 7 December 2024 (UTC) | |||
*::Arguing one point doesn't necessarily require having to refute every point the other side makes. I can concede that "some people use LLMs to improve their spelling and grammar" without changing my overriding view that LLMs empower bad actors, time wasters and those with competence issues, with very little to offer wikipedia in exchange. Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first. ] ] 09:00, 7 December 2024 (UTC)
*:::If you want to completely ignore all the other arguments in opposition that's your choice, but don't expect closers to attach much weight to your opinions. ] (]) 09:05, 7 December 2024 (UTC) | |||
*::::Ok, here's a list of the main opposition reasonings, with individual responses. | |||
*::::'''What about translations?''' - Translations are not up for debate here; the topic is very clearly generative AI, and attempts to say that this topic covers translations as well are incorrect. No support voters have said the proposition should discount translated text, just oppose voters who are trying to muddy the waters.
*::::'''What about accessibility?''' - This could be a legitimate argument, but I haven't seen this substantiated anywhere other than handwaving "AI could help people!" arguments, which I would lump into the spelling and grammar argument I responded to above.
*::::'''Detection tools are inaccurate''' - This I very much agree with, and noted in my support and in many others as well. But there is no clause in the actual proposal wording that mandates the use of automated AI detection, and I assume the closer would note that. | |||
*::::'''False positives''' - Any rule can have a potential for false positives, from wp:DUCK to close paraphrasing to NPA. We've just got to, as a community, become skilled at identifying genuine cases, just like we do for every other rule.
*::::'''LLM content should be taken at face value and see if it violates some other policy''' - hopelessly naive stance, and a massive timesink. Anyone who has had the misfortune of going on X/twitter in the last couple of years should know that AI is not just used as an aid for those who have trouble typing, it is mainly used to and . Anyone who knows how bad the sockpuppetry issue is around CTOPs should be absolutely terrified of when (not if) someone decides to launch a full-throated wave of AI bots on Misplaced Pages discussions, because if we have to individually sanction each one like a human then admins will literally have no time for anything else.
*::::I genuinely cannot comprehend how some people could see how AI is decimating the internet through spam, bots and disinformation and still think for even one second that we should open the door to it. ] ] 10:08, 7 December 2024 (UTC) | |||
*:::::There is no door. This is true for sockpuppetry too in my opinion. There can be a rule that claims there is a door, but it is more like a bead curtain. ] (]) 11:00, 7 December 2024 (UTC) | |||
*:::::The Twitter stuff is not a good comparison here. Spam is already nukable on sight, mass disruptive bot edits are also nukable on sight, and it's unclear how static comments on Misplaced Pages would be the best venue to astroturf political opinions (most of which would be off-topic anyway, i.e., nukable on sight). I'd prefer if people didn't use ChatGPT to formulate their points, but if they're trying to formulate a real point then that isn't disruptive in the same way spam is. ] (]) 02:22, 10 December 2024 (UTC) | |||
*:::::::{{tq|it's unclear how static comments on Misplaced Pages would be the best venue to astroturf political opinions}} - by disrupting RFCs and talk page discussions a bad actor could definitely use chatgpt to astroturf. A large proportion of the world uses Misplaced Pages (directly or indirectly) to get information - it would be an incredibly valuable thing to manipulate. My other point is that AI disruption bots (like the ones on twitter) would be indistinguishable from individuals using LLMs to "fix" spelling and grammar - by allowing one we make the other incredibly difficult to identify. How can you tell the difference between a bot and someone who just uses chatgpt for every comment? ] ] 09:16, 10 December 2024 (UTC)
*:::::::You can't. That's the point. This is kind of the whole idea of ]. ] (]) 20:22, 13 December 2024 (UTC) | |||
*:::{{tqb|Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first.}}Social anxiety: Say "I" am a person unconfident in my writing. I imagine that when I post my raw language, I embarrass myself, and my credibility vanishes, while in the worst case nobody understands what I mean. As low confidence is often the result of negative feedback, it's usually meritful, or was meritful at some point, for someone to seek outside help. ] (]) 23:46, 8 December 2024 (UTC)
*::::While I sympathise with that hypothetical, ] and we shouldn't make decisions that do long-term harm to the project just because a hypothetical user feels emotionally dependent on a high tech spellchecker. I also think that in general wikipedia (myself included) is pretty relaxed about spelling and grammar in talk/WP space. ] ] 18:45, 10 December 2024 (UTC) | |||
*:::::We also shouldn't do long-term harm to the project just because a few users are wedded to the idea that LLMs are and will always be some sort of existential threat. The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project than LLM-comments that are all either useful, harmless or collapsible/removable/ignorable at present. ] (]) 19:06, 10 December 2024 (UTC)
*::::::{{tq|The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project}} - the same could be said for ]. The reason why its not a big problem for DUCK is because the confidence level is very high. Like I've said in multiple other comments, I don't think "AI detectors" should be trusted, and that the bar for deciding whether something was created via LLM should be very high. I 100% understand your opinion and the reasoning behind it, I just think we have differing views on how well the community at large can identify AI comments. ] ] 09:07, 11 December 2024 (UTC) | |||
*:::::I don't see how allowing shy yet avid users to contribute has done or will do long-term harm. For those with anxiety, a condition that is not behaviorally disruptive, the perceived potential for embarrassment always outweighs rational evaluation of outcomes. ] (]) 02:47, 11 December 2024 (UTC)
*::::::I definitely don't want to disallow shy yet avid users! I just don't think having a "using chatgpt to generate comments is allowed" rule is the right solution to that problem, considering the wider consequences. ] ] 08:52, 11 December 2024 (UTC) | |||
*:::::::Did you mean "... disallowed"? If so, I think we weigh accessibility differently against the quite low amount of AI trolling. ] (]) 14:10, 11 December 2024 (UTC)
* '''Support''' strikethroughing or collapsing per everyone else. The opposes that mention ESL have my sympathy, but I am not sure how many of them are ESL themselves. Having learnt English as my second language, I have always found it easier to communicate when users are expressing things in their own way, not polished by some AI. I sympathise with the concerns and believe the right solution is to lower our community standards with respect to ] and similar (in terms of ESL communication) without risking hallucinations by AI. ] (]) 02:52, 7 December 2024 (UTC) | |||
* '''Oppose the use of AI detection tools'''. False positive rates for AI-detection are dramatically higher for non-native English speakers. ~ ] (] • ]) 17:53, 8 December 2024 (UTC)
===Section break 5=== | |||
* '''Oppose''' - I'm sympathetic to concerns of abuse through automated mass-commenting, but this policy looks too black-and-white. Contributors may use LLMs for many reasons, including to fix the grammar, to convey their thoughts more clearly, or to adjust the tone for a more constructive discussion. As it stands, this policy may lead to dismissing good-faith AI-assisted comments, as well as false positives, without considering the context. Moreover, while mainstream chatbots are not designed to just mimic the human writing style, there are existing tools that can make AI-generated text more human-like, so this policy does not offer that much protection against maliciously automated contributions. ] (]) 01:12, 9 December 2024 (UTC) | |||
*:'''Oppose''' What @] said is correct: basic AI tools like Grammarly should be allowed (note that Grammarly can also change sentences and wording and has generative AI tools) but just blatantly asking ChatGPT to generate an administrator report is different. ] (]) 04:46, 1 January 2025 (UTC)
* '''Oppose''' – Others have cast doubt on the efficacy of tools capable of diagnosing LLM output, and I can't vouch for its being otherwise. If EEng's example of ChatBot output is representative—a lengthy assertion of notability without citing sources—that is something that could well be disregarded whether it came from a bot or not. If used carefully, AI can be useful as an aide-memoire (such as with a spell- or grammar-checker) or as a supplier of more felicitous expression than the editor is naturally capable of (e.g. Google Translate). ] (]) 10:27, 9 December 2024 (UTC) | |||
* '''Comment''' / '''Oppose as written'''. It's not accurate that GPTZero is good at detecting AI-generated content. Citations (slightly out of date but there's little reason to think things have changed from 2023): https://www.aiweirdness.com/writing-like-a-robot/ , https://www.aiweirdness.com/dont-use-ai-detectors-for-anything-important/ . For those too busy to read, a few choice quotes: "the fact that it insisted even one excerpt is not by a human means that it's useless for detecting AI-generated text," and "Not only do AI detectors falsely flag human-written text as AI-written, the way in which they do it is biased" (citing https://arxiv.org/abs/2304.02819 ). Disruptive, worthless content can already be hatted, and I'm not opposed to doing so. Editors should be sharply told to use their own words, and if not already written, an essay saying we'd rather have authentic if grammatically imperfect comments than AI-modulated ones would be helpful to cite at editors who offer up AI slop. But someone merely citing GPTZero is not convincing. GPTZero will almost surely misidentify genuine commentary as AI-generated. So fine with any sort of reminder that worthless content can be hatted, and fine with a reminder not to use ChatGPT for creating Misplaced Pages talk page posts, but not fine with any recommendations of LLM-detectors. ] (]) 20:00, 9 December 2024 (UTC) | |||
*:@], I can't tell if you also oppose the actual proposal, which is to permit hatting/striking obvious LLM-generated comments (using GPTzero is a very minor detail in JSS's background paragraph, not part of the proposal). ] (]) 01:47, 11 December 2024 (UTC) | |||
*:::I support the proposal in so far as disruptive comments can already be hatted and that LLM-generated content is disruptive. I am strongly opposed to giving well-meaning but misguided editors a license to throw everyone's text into an AI-detector and hat the comments that score poorly. I don't think it was ''that'' minor a detail, and to the extent that detail is brought up, it should be as a reminder to use human judgment and ''forbid'' using alleged "AI detectors" instead. ] (]) 03:49, 11 December 2024 (UTC) | |||
* '''Support''' collapsing AI (specifically, ]) comments by behavioral analysis (most actually disruptive cases I've seen are pretty obvious) and not the use of inaccurate tools like ZeroGPT. I think hatting with the title "Editors suspect that this comment has been written by a ]" is appropriate. They take up SO much space in a discussion because they are also unnecessarily verbose, and talk on and on but never ever say something that even approaches having substance. Discussions are for human Misplaced Pages editors; we shouldn't have to sift through comments someone put 0 effort into and outsourced to a robot that writes using random numbers (that's a major part of how tools like ChatGPT work and maintain variety). If someone needs to use an AI chatbot to communicate because they don't understand English, then they are welcome to contribute to their native language Misplaced Pages, but I don't think they have the right to insist that we at enwiki spend our effort reading comments they put minimal effort into besides opening the ChatGPT website. If really needed, they can write in their native language and use a non-LLM tool like Google Translate. The use of non-LLM tools like Grammarly, Google Translate, etc. I think should still be OK for all editors, as they only work off comments that editors have written themselves. ] <sup>]]</sup> 05:10, 10 December 2024 (UTC)
*:Adding that enforcing people writing things in their own words will actually help EAL (English additional language) editors contribute here. I work with EAL people irl, and even people who have almost native proficiency with human-written content find AI output confusing because it says things in the most confusing, verbose ways using difficult sentence constructions and words. I've seen opposers in this discussion who maybe haven't had experience working with EAL people go "what about EAL people?", but really, I think this change will help them (open to being corrected by someone who is EAL, tho). ] <sup>]]</sup> 05:17, 10 December 2024 (UTC)
*:Also, with regards to oppose comments that discussions are not a vote so closes will ignore AI statements which don't have merit - unedited LLM statements are incredibly verbose and annoying, and clog up the discussion. Imagine multiple paragraphs, each with a heading, but all of which say almost nothing, they're borderline ]y. Giving the power to HAT them will help genuine discussion contributors keep with the flow of human arguments and avoid scaring away potential discussion contributors who are intimidated or don't feel they have the time to read the piles of AI nonsense that fill the discussion. ] <sup>]]</sup> 06:38, 10 December 2024 (UTC) | |||
* '''Support''' (removing) in general. How is this even a question? There is no case-by-case. It is a fundamental misunderstanding of how LLMs work to consider their output reliable without careful review. At which point, the editor could have written it themselves without inherent LLM bias. The point of any discussion is to provide an analytical response based on the ''context'', not have some tool regurgitate something from a training set that sounds good. And frankly, it is disrespectful to make someone read "AI" responses. It is a tool and there is a place and time for it, but not in discussions in an encyclopedia. — <small> ] <b>∣</b> ]</small> 15:41, 10 December 2024 (UTC)
*'''Strong Support'''. I'm very interested in what you (the generic you) have to say about something. I'm not remotely interested in what a computer has to say about something. It provides no value to the discussion and is a waste of time. ] (]) 18:06, 10 December 2024 (UTC) | |||
*:Comments that provide no value to the discussion can already be hatted and ignored regardless of why they provide no value, without any of the false positives or false negatives inherent in this proposal. ] (]) 18:25, 10 December 2024 (UTC)
*::Indeed, and that's fine for one-offs when a discussion goes off the rails or what-have-you. But we also have ] for disruptive behavior, not working collaboratively, etc. I'm suggesting that using an AI to write indicates that you're not here to build the encyclopedia, you're here to have an AI build the encyclopedia. I reiterate my strong support for AI-written content to be removed, struck, collapsed, or hatted and would support further measures even beyond those. ] (]) 21:54, 11 December 2024 (UTC) | |||
*::::There are two sets of people described in your comment: those who use AI and those who are NOTHERE. The two sets overlap, but nowhere near sufficiently to declare that everybody in the former set is also in the latter set. If someone is NOTHERE they already can and should be blocked, regardless of how they evidence that. Being suspected of using AI (note that the proposal does not require proof) is not sufficient justification on its own to declare someone NOTHERE, per the many examples of constructive use of AI already noted in this thread. ] (]) 22:03, 11 December 2024 (UTC)
*::::To reiterate, I don't believe that any use of AI here is constructive, thus using it is evidence of ], and, therefore, the set of people using AI to write is completely circumscribed within the set of people who are NOTHERE. Please note that I am referring to users who use AI-generated writing, not users suspected of using AI-generated writing. I won't be delving into how one determines whether someone is using AI or how accurate it is, as that is, to me, a separate discussion. This is the end of my opinion on the matter. ] (]) 23:26, 11 December 2024 (UTC) | |||
*:::::You are entitled to your opinion of course, but as it is contradicted by the evidence of both multiple constructive uses and of the near-impossibility of reliably detecting LLM-generated text without false positives, I would expect the closer of this discussion to attach almost no weight to it. ] (]) 00:42, 12 December 2024 (UTC) | |||
*:::::I am ESL and use LLMs sometimes because of that. I feel like I don't fit into the NOTHERE category. It seems like you do not understand what they are or how they can be used constructively. ] (]) 01:43, 12 December 2024 (UTC) | |||
*::::::No, I understand. What you're talking about is no different from using Google Translate or asking a native-speaker to translate it. You, a human, came up with something you wanted to convey. You wrote that content in Language A. But you wanted to convey that message that you - a human - wrote, but now in Language B. So you had your human-written content translated to Language B. I have no qualms with this. It's your human-written content, expressed in Language B. My concern is with step 1 (coming up with something you want to convey), not step 2 (translating that content to another language). You write a paragraph for an article but it's in another language and you need the paragraph that you wrote translated? Fine by me. You ask an AI to write a paragraph for an article? Not fine by me. Again, I'm saying that there is no valid use case for AI-written content. ] (]) 15:59, 12 December 2024 (UTC) | |||
*:::::::It seems very likely that there will be valid use cases for AI-written content if the objective is maximizing quality and minimizing errors. Research like demonstrate that there will likely be cases where machines outperform humans in specific Misplaced Pages domains, and soon. But I think that is an entirely different question than potential misuse of LLMs in consensus related discussions. ] (]) 16:25, 12 December 2024 (UTC) | |||
*:::::::But your vote and the proposal above make no distinction there. Which is the main issue. Also, not to be pedantic, but every prompt to an LLM is filled out by a human looking to convey a message. Every time someone hits publish on something here it is that person confirming that is what they are saying. So how do we in practice implement what you suggest? Because without a method better than vibes it's worthless. ] (]) 18:53, 12 December 2024 (UTC)
*::::::::The proposal specifies content ''generated'' by LLMs, which has a specific meaning in the context of generative AI. If a prompt itself conveys a meaningful, supported opinion, why not just post that instead? The problem comes when the LLM adds more information than was provided, which is the whole point of generative models. ] (]) 01:52, 13 December 2024 (UTC) | |||
* '''Yes''' in principle. But in practice, LLM detectors are not foolproof, and there are valid reasons to sometimes use an LLM, for example to copyedit. I have used Grammarly before and have even used the Microsoft Editor, and while they aren't powered by LLMs, LLMs are tools that need to be used appropriately on Misplaced Pages. ] ] 19:55, 10 December 2024 (UTC)
* '''Support'''. Using LLM to reply to editors is lazy and disrespectful of fellow editor's time and brainpower. In the context of AFD, it is particularly egregious since an LLM can't really read the article, read sources, or follow our notability guidelines. {{pb}} By the way. {{tq|gptzero and other such tools are very good at detecting this}}. I don't think this is correct at all. I believe the false positive for AI detectors is quite high. High enough that I would recommend not using AI detectors. –] <small>(])</small> 03:23, 11 December 2024 (UTC) | |||
*'''Question''' @]: Since there appears to be a clear consensus against the AI-detectors part, would you like to strike that from the background? ] (]) 14:10, 11 December 2024 (UTC) | |||
* '''Support'''. AI generated text should be removed outright. If you aren't willing to put the work into doing your own writing then you definitely haven't actually thought deeply about the matter at hand. ]]] 14:16, 11 December 2024 (UTC) | |||
*:This comment is rather ironic given that it's very clear you haven't thought deeply about the matter at hand, because if you had then you'd realise that it's actually a whole lot more complicated than that. ] (]) 14:26, 11 December 2024 (UTC) | |||
*::Thryduulf I don't think this reply is particularly helpful, and it comes off as slightly combative. It's also by my count your 24th comment on this RFC. ] ] 19:20, 11 December 2024 (UTC)
*::I recognize that AI-paraphrased or AI-edited text is not problematic in the same ways as text generated outright by an AI. I only meant to address the core issue at stake, content whose first draft was written by an AI system. ]]] 22:16, 17 December 2024 (UTC)
*'''Oppose''' {{re|Just Step Sideways}} The nomination's 2nd para run through https://www.zerogpt.com/ gives "11.39% AI GPT*":{{pb}}{{tqb|I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, gptzero and other such tools are very good at detecting this. I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. <mark>If you can't make a coherent argument on your own, you are not competent to be participating in the discussion.</mark> I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner}}{{pb}}The nomination's linked https://gptzero.me/ site previously advertised https://undetectable.ai/ , wherewith how will we deal? Imagine the nomination was at AFD. What should be the response to LLM accusations against the highlighted sentence? ] (]) 17:41, 11 December 2024 (UTC) | |||
* '''Support''' with the caveat that our ability to deal with the issue goes only as far as we can accurately identify the issue (this appears to have been an issue raised across a number of the previous comments, both support and oppose, but I think it bears restating because we're approaching this from a number of different angles and it's IMO the most important point regardless of what conclusions you draw from it). ] (]) 19:24, 11 December 2024 (UTC)
*'''Strong support, limited implementation'''. {{tq|1=Misplaced Pages is written by volunteer editors}}, says our front page. This is who ''we'' are, and ''our'' writing is what Misplaced Pages is. It's true that LLM-created text can be difficult to identify, so this may be a bit of a moving target, and we should be conservative in what we remove—but I'm sure at this point we've all run across cases (whether here or elsewhere in our digital lives) where someone copy/pastes some text that includes "Is there anything else I can help you with?" at the end, or other blatant tells. This content should be deleted without hesitation. ] (]) 04:11, 12 December 2024 (UTC) | |||
* '''Support in concept, questions over implementation''' — I concur with {{U|Dronebogus}} that users who rely on LLMs should not edit English Misplaced Pages. It is not a significant barrier for users to use other means of communication, including online translators, rather than artificial intelligence. How can an artificial intelligence tool argue properly? However, I question how this will work in practice without an unacceptable degree of error. <span style="font-family: monospace;">] (he/him)</span> 22:39, 12 December 2024 (UTC) | |||
*:Many, possibly most, online translators use artificial intelligence based on LLMs these days. ] (]) 22:46, 12 December 2024 (UTC) | |||
*::There is a difference between translating words you wrote in one language into English and using an LLM to write a comment for you. <span style="font-family: monospace;">] (he/him)</span> 22:59, 12 December 2024 (UTC) | |||
*:::Neither your comment nor the original proposal make any such distinction. ] (]) 23:34, 12 December 2024 (UTC) | |||
*::::Well since people keep bringing this up as a semi-strawman: no I don’t support banning machine translation, not that I ''encourage'' using it (once again, if you aren’t competent in English please don’t edit here) ] (]) 07:34, 13 December 2024 (UTC) | |||
*:LLMs are incredible at translating, and many online translators already incorporate them, including Google Translate. Accommodating LLMs is an easy way to support not only the ESL but also the avid but shy. It has way more benefits than the unseen-to-me amount of AI trolling that isn't already collapse-on-sight. ] (]) 00:05, 13 December 2024 (UTC)
*::Google Translate uses the same transformer architecture that LLMs are built around, and uses e.g. PaLM to develop more language support (through training that enables zero-shot capabilities) and for larger-scale specialized translation tasks performed through the Google Cloud "" API, but it does not incorporate LLMs into ''translating your everyday text input'', which still relies on NMTs. And even for the API features, the core constraint of ''matching'' input rather than ''generating content'' is still retained (obviously it would be very bad for a translation tool to insert material not found in the original text!). LLMs might be good for translation because they are better at evaluating semantic meaning and detecting context and nuance, but again, the ''generative'' part that is key to this proposal is not present. ] (]) 01:20, 13 December 2024 (UTC) | |||
*:::{{tq|PaLM (Pathways Language Model) is a 540 billion-parameter transformer-based large language model (LLM) developed by Google AI.}} If you meant something about how reschlmunking the outputs of an LLM or using quite similar architecture is not really incorporating the LLM, I believe we would be approaching ] levels of recombination, to which my answer is it is the same ship.{{tqb|obviously it would be very bad for a translation tool to insert material not found in the original text!}} ] (]) 01:29, 13 December 2024 (UTC) | |||
*::::PaLM2 is not used in the consumer app (Google Translate); it's used for research. Google Translate just uses non-generative NMTs to map input to its closest cognate in the target language. ] (]) 01:34, 13 December 2024 (UTC)
*:::::Well, is the NMT really different enough not to be classified as an LLM? IIRC the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be, and an LLM I asked agreed that NMTs satisfy the definition of a generative LLM, though I think you're the expert here. ] (]) 02:01, 13 December 2024 (UTC)
*::::::Google Translate's NMT is different enough that it speaks English much less naturally than ChatGPT 4o. I don't consider it a '''''L'''''LM, because the param count is 380M, not 1.8T.
*::::::{{tq|the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be}} No, that def would fit ancient ] tech too. ] (]) 17:50, 13 December 2024 (UTC) | |||
*:::::::Even if you don’t consider it L, I do, and many sources cited by the article do. Since we’ll have such contesting during enforcement, it’s better to find a way that precludes such controversy. ] (]) 20:44, 13 December 2024 (UTC) | |||
*:::::::::NMTs, LLMs, and the text-creation functionality of LLMs are fundamentally different in the context of this discussion, which is about content generated through generative AI. NMTs are built specifically for translation: they are trained on parallel corpora and their output is optimized to match the input as precisely as possible, ''not'' to create novel text. LLMs have different training, including way more massive corpora, and were designed specifically to create novel text. One of the applications of LLMs may be translation (though currently it's too computationally intensive to run them for standard consumer purposes), by virtue of their being very good at determining semantic meaning, but even if/when they do become mainstream translation tools what they'll be used for is still ''not'' generative when it comes to translation output. ] (]) 22:29, 13 December 2024 (UTC)
*:::::::::How will you differentiate between the use of LLM for copyediting and the use of LLM for generation? ] (]) 23:30, 13 December 2024 (UTC) | |||
*::::::::::The proposal is for hatting obvious cases of LLM-generated comments. Someone who just uses an LLM to copyedit will still have written the content themselves and presumably their output would not have the obvious tells of generative AI. ] (]) 23:56, 13 December 2024 (UTC) | |||
*:{{outdent|9}} Not when I tried to use it. Quantitatively, GPTZero went from 15% human to 100% AI for me despite the copyedits only changing 14 words. ] (]) 00:33, 14 December 2024 (UTC) | |||
*::I think there is consensus that GPTZero is not usable, even for obvious cases. ] (]) 00:55, 14 December 2024 (UTC) | |||
*::::Yes, but a score as high as 100% means people will also probably think the rewrite is ChatGPT-generated. ] (]) 01:18, 14 December 2024 (UTC)
*::::Does it really mean that? All you've demonstrated is that GPTZero has false positives, which is exactly why its use here was discouraged. ] (]) 05:26, 14 December 2024 (UTC) | |||
*:::::My subjective evaluation of what I got copyediting from ChatGPT was that it sounded like ChatGPT. I used GPTZero to get a number. ] (]) 14:18, 14 December 2024 (UTC) | |||
*::::::My guess is that the copyediting went beyond what most people would actually call "copyediting". ] (]) 18:04, 23 December 2024 (UTC) | |||
*:::::::It changed only 14 words across two paragraphs and still retained the same meaning in a way that I would describe it as copyediting. Such levels of change are what those lacking confidence in tone would probably seek anyways. ] (]) 00:15, 24 December 2024 (UTC) | |||
* On one hand, AI slop is a plague on humanity and obvious LLM output should definitely be disregarded when evaluating consensus. On the other hand, I feel like existing policy covers this just fine, and any experienced closer will lend greater weight to actual policy-based arguments, and discount anything that is just parroting jargon. <span class="nowrap">] <sub>]</sub> <sup>(] • ])</sup></span> 23:21, 12 December 2024 (UTC) | |||
*'''Support in principle''', but we cannot rely on any specific tools because none are accurate enough for our needs. Whenever I see a blatant ChatGPT-generated !vote, I ignore it. They're invariably poorly reasoned and based on surface-level concepts rather than anything specific to the issue being discussed. If someone is using AI to create their arguments for them, it means they have no actual argument besides ] and are looking for arguments that support their desired result rather than coming up with a result based on the merits. Also, toasters do not get to have an opinion. <span style="font-family:Papyrus, Courier New">]</span><sup><span style="font-family:Papyrus"><small>'']''</small></span></sup> 05:17, 13 December 2024 (UTC) | |||
*'''Oppose'''. For creating unnecessary drama. First off, the AI "detectors" are not reliable, or at least the reliability of the tools themselves is still questionable. If the tools to detect LLM output are themselves unreliable, how can one reliably point out which comment is LLM-generated and which is not? We've got multiple tools that claim to be able to detect LLMs as well. Which one should we trust? Should we be elevating one tool over the others? Has there been any research showing that the "picked" tool is the most reliable? Second, not all LLMs are dangerous. We shouldn't treat LLMs as a virus that will somehow take over the Internet or something. Some editors use LLMs to smooth out their grammar and sentences and fix up errors, and there is nothing wrong with that. I understand that banning obvious LLM text per ] is good, but totally banning it is plain wrong. ] ] 22:56, 15 December 2024 (UTC)
*:@], the proposal is to permit editors to collapse/strike ''obvious LLM text'', not to "ban LLM totally". If LLM use is imperceptible, like for tweaking grammar, it's not going to be affected. ] (]) 20:17, 19 December 2024 (UTC) | |||
*'''Support''' with some kind of caveat about not relying on faulty tools or presuming that something is LLM without evidence or admission, based on the following reasons: | |||
*# We have stricter rules around semi-automated editing (rollback, AutoWikiBrowser, etc.) and even stricter rules around fully automated bot editing. These cleanup edits are widely accepted as positive, but there is still the concern about an overwhelming amount of bad edits to wade through and/or fix. A form of that concern is relevant here. Someone could reply to every post in this discussion in just a minute or so without ever reading anything. That's inherently disruptive. | |||
*# Nobody who is voting "oppose" is using an LLM to cast that vote. The LLM comments have been left by those supporting to make a point about how problematic they are for discussions like this. I think this reflects, even among oppose voters, a developing community consensus that LLM comments will be disregarded. | |||
*# If the rule in practice is to disregard LLM comments, not writing that rule down does not stop it from being the rule, consensus, or a community norm. It just makes the rule less obvious and less clear. | |||
*# It's disrespectful for an editor to ask someone to spend their time reading a comment if they couldn't be bothered to spend any time writing it, and therefore a violation of the policy ], "{{tq|treat your fellow editors as respected colleagues with whom you are working on an important project.}}" | |||
* Also, I don't read the proposal as a ban on machine translation in any way. ] (]) 00:01, 18 December 2024 (UTC) | |||
*:@], above @] said their !vote was created by LLM. ] (]) 20:14, 19 December 2024 (UTC) | |||
* I am '''strongly opposed''' to banning or ignoring LLM-made talk page comments ''just'' because they are LLM-made. I'm not a big fan of LLMs at all; they are actually useful only for certain things, very few of which are directly relevant to contributing to Misplaced Pages in English or in any other language. However, some of those things ''are'' useful for this, at least for some humans, and I don't want to see these humans being kicked out of the English Misplaced Pages. I already witnessed several cases in which people whose first language is not English tried writing talk page responses in the English Misplaced Pages, used an LLM to improve their writing style, and got their responses ignored ''only'' because they used an LLM. In all those cases, I had strong reasons to be certain that they were real humans, that they meant what they wrote, and that they did it all in good faith. Please don't say that anyone who wants to contribute to the English Misplaced Pages should, in the first place, know English well enough to write a coherent talk page comment without LLM assistance; occasionally, I kind of wish that it was like that myself, but then I recall that the world is more complicated and interesting than that. Uses of LLMs that help the English Misplaced Pages be more inclusive for good-faith people are good. Of course, defining what good faith means is complicated, but using an LLM is not, ''by itself'', a sign of bad faith. --] (]) 04:52, 19 December 2024 (UTC)
*:Those concerned about their English should use translation software rather than an LLM. Both might alter the meaning to some extent, but only one will make things up. (It's also not a sure assumption that LLM text is coherent talkpage text.) ] (]) 07:44, 19 December 2024 (UTC)
*::@] The dividing line between translation software and LLM is already blurry and will soon disappear. It's also rare that translation software results in coherent talkpage text, ''unless'' it's relying on some (primitive) form of LLM. So if we're going to outlaw LLMs, we would need to outlaw any form of translation software, and possibly any text-to-speech software as well. <span style="font-family:cursive">]]</span> 23:41, 19 December 2024 (UTC) | |||
*:::The distinctions have already been covered above, and no we would not have to. There is an obvious difference between software intended to translate and software intended to generate novel text, and users are likely to continue to treat those differently. ] (]) 02:49, 20 December 2024 (UTC) | |||
*'''Strong support'''. LLM-generated content has no place anywhere on the encyclopedia. ] (]) 10:27, 19 December 2024 (UTC) | |||
*'''Strong oppose''' to the proposal as written. Misplaced Pages already suffers from being ] in a ] ] and a refusal to move with the technological times. Anyone who remembers most Wikipedians' visceral reaction to ] and ] when they were first introduced will observe a striking similarity. Yes, those projects had serious problems, as do LLM-generated comments. But AI is the future, and this attitude of "]" will ultimately lead Misplaced Pages the way of ''Encyclopædia Britannica''. Our discussion needs to be how best to change, not how to avoid change. <span style="font-family:cursive">]]</span> 23:54, 19 December 2024 (UTC)
*:{{smalldiv|1=The main objection to VE and a major objection to FLOW was the developers' insistence on transforming Wikitext to HTML for editing and then transforming that back to Wikitext. ] (]) 01:31, 20 December 2024 (UTC)}} | |||
*::True. Then, as now, there were many valid objections. But IIRC, there was limited discussion of "Let's figure out a better way to improve", and lots of "Everything is fine; don't change anything, ever." That attitude concerns me. <span style="font-family:cursive">]]</span> 01:52, 20 December 2024 (UTC) | |||
*:I oppose the proposal but I think the comparison to FLOW and VisualEditor is beyond silly. Those things did not exist outside of a MediaWiki context. LLMs are a global issue. ] (]) 17:11, 28 December 2024 (UTC) | |||
*'''Support'''. I'm not even slightly swayed by these "it'll be too hard to figure out" and "mistakes could be made" and "we can't be 100% certain" sorts of arguments. That's true of {{em|everything}} around here, and it's why we have an admins-must-earn-a-boatload-of-community-trust system, and a system of review/appeal of decisions they (or of course non-admin closers) make, and a consensus-based decisionmaking system more broadly. {{U|JoelleJay}} has it exactly right: {{tq|having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments.}} And as pointed out by some others, the "it'll hurt non-native-English speakers" nonsense is, well, nonsense; translation is a different and unrelated process (though LLMs can perform it to some extent), of remapping one's {{em|own}} material onto another language.<!-- --><p>I'm also not in any way convinced by the "people poor at writing and other cognitive tasks need the LLM to help them here" angle, because ] is required. This is work (albeit volunteer work), it is ] a game, a social-media playground, a get-my-ideas-out-there soapbox, or a place to learn how to interact e-socially or pick up remedial writing skills, nor a venue for practicing one's argument techniques. It's an encyclopedia, being built by people who – to be productive contributors instead of a draining burden on the entire community – {{em|must}} have: solid reasoning habits, great judgement (especially in assessing reliability of claims and the sources making them), excellent writing skills of a highly particularized sort, a high level of fluency in this specific language (in multiple registers), and a human-judgement ability to understand our thick web of policies, guidelines, procedures, and often unwritten norms, and how they all interact, in a specific contextual way that may vary greatly by context. None of these is optional. An LLM cannot do any of them adequately (not even write well; their material sticks out like a sore thumb, and after a while you can even tell which LLM produced the material by its habitual but distinctive crappy approach to simulating human thought and language).</p><!-- --><p>In short, if you {{em|need}} an LLM to give what you think is meaningful input into a decision-making process on Misplaced Pages (much less to generate mainspace content for the public), then you {{em|need}} to go find something else to do, something that fits your skills and abilities. Saying this so plainly will probably upset someone, but so it goes. I have a rep for "not suffering fools lightly" and "being annoying but correct"; I can live with that if it gets the right decisions made and the work advanced. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 05:49, 22 December 2024 (UTC)</p><p>PS, something that came up more recently than my original post above: If anyone's wondering why some of us are such "AI" skeptics, it's because the technology is badly, badly faulty, producing multiple blatant factual errors even in short material, entirely fictive "hallucinations", direct forgery of sourcing, nested chains of error compounded upon error when context isn't gleaned properly, disingenuous attempts to hide that it is lying, and worse. 
A few days ago, I got ChatGPT 4o to literally {{em|lose its f#*$ing mind}}: I had it generate some short code (very simple one-liner regex) with a really obvious error, so I pointed out the error and gave it documentation that proved it was an error. It conceded the error and even explained what the error was and why it was erroneous, then said how it was going to fix it. And ... proceeded to output the same error again (in just one of three examples, the other two being correct). Fascinated and horrified by this, I tried for over half an hour to get it to produce a correct example, and it was utterly incapable of doing it. It knew the error was an error and what that error was and why, spelled out what the correct approach should be, then repeated the error {{em|every single time}} (always at the first occurrence, or always if the only occurrence). I've captured that session and will be doing a write-up about it. This is much, much worse than a "hallucination", being an abject inability to stop doing what it already knows is absolutely wrong. When people have fears like "If I had a house AI, and it was in control of the bathwater temperature, it might boil my children alive", they are not paranoid or unreasonable. My experiment with ChatGPT proves this conclusively. If the AI can (with trivial effort) be put into a crazy failure-looping state where it knows it's doing or about to do something wrong but insists on doing it anyway – i.e. during its take-an-action phase it completely loses connection to reality, even its internal reality, much less external input telling it "no!" – then we are in grave danger. This is essentially the mental state of a psychopath: "I know I shouldn't grab my neighbor's little daughter and strangle her to death, but I just can't stop myself." <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 01:42, 28 December 2024 (UTC)</p>
*:The problem with all that is that we ''already'' have a policy that allows the hatting or removal of comments that are actually problematic because of their content (which are the only ones that we should be removing) without regard for whether it was or was not written by LLM. Everything that actually should be removed can be removed already. ] (]) 11:39, 22 December 2024 (UTC) | |||
*::If that's "the problem with all that", then it is not any kind of problem at all. It's entirely normal in our ] material to reiterate a principle defined in one place for contextual application at another, and to extrapolate from a general principle to a more specific application. We do this often to cut through the exact kind of wikilawyering we're facing over this issue: there's not a {{em|specific rule}} against LLM-generated !voting, so the argument is (again and again in this thread) to permit it (unless it's such senseless gibberish that it would be removed anyway even if no LLM were involved). The community clearly doesn't actually want that result (or more accurately, there is not a consensus in favor of it), though this {{em|specific}} proposal's approach to thwarting the "AI"-spamming of our decision-making processes might not be perfect. To me, it's a step in the right direction. If it were implemented this way and some uncommon issue arose with that implementation, then we'd tweak it to address that micro-problem. We must not continue to avoid addressing the macro-problem just because someone can imagine edge cases that might not work out ideally. That, too, is true of all of our P&G and process. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 01:15, 28 December 2024 (UTC) | |||
*:People who have reading skills, judgement, and reasoning habits good enough to find problems in existing articles don't necessarily have great interpersonal writing/communication skills, or the confidence. Meanwhile, for all that LLMs are bad at, they are very good at diluting everything you say into something dry, dispassionate, and thus inoffensive. ] (]) 15:26, 22 December 2024 (UTC)
*::I doubt that anyone would seriously object to some ultra-hothead running their post through an LLM with a query like "Can you see anything in this that might come off as aggressive or offensive, and suggest a revision that only affects that aspect of the material?" The result might not even be noticeable as LLM-modified. If it doesn't stick out as LLM garbage, then there is no way for this proposal to affect such a post, because no one here is a mind-reader (we cannot magically detect through remote sensing that someone toned their rant down with LLM help). So, this proposal is not broken (at least not with regard to that scenario). That said, the main reason that WP:COMPETENCE was written (and it's one of the essays that, like ] and ], has become "operative" within the community as if a policy or guideline) is the very "don't necessarily have great interpersonal ... skills" issue you mention. That is, lacking those skills to a serious enough degree makes one not competent to work on a collaborative encyclopedia project, and one will eventually be ejected after causing enough disruption. Something we don't need is LLMs masking for a while that someone has this kind of competence failure; it will just drag out the inevitable. By way of analogy: if I were a kleptomaniac and just unable to stop myself from shoplifting, it would not be okay for me to use a device that scrambled stores' surveillance camera footage to make it more likely for me to get away with more shoplifting (and there would certainly be no obligation on the part of the store owner to turn their cameras off, or to take no action if they catch me stealing, just because I tell them I'm a kleptomaniac and my unconstructive behavior isn't something I can manage). <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 01:15, 28 December 2024 (UTC)
*:::People who can't communicate that well often won't attempt to burn others down. I don't think there's any disruption or substantial additional investment in accommodating those who won't burn others down (which I'd say is the majority) by reading their perfectly comprehensible AI-diluted arguments. (Scrambling footage is like erasing the memories of the incident, which I don't think is a good analogue of the issue at hand. I'd say it's more like working with someone who stinks and masks that with perfume.) ] (]) 00:05, 29 December 2024 (UTC) | |||
*:I'm pretty sure most people here understand why there are AI skeptics. I also hate ChatGPT. But there's a difference between my personal opinion of ChatGPT and the fact that someone using ChatGPT to communicate is still communicating. The risk of your house AI hallucinating is that your children get boiled alive. The risk of ChatGPT hallucinating on a talk page is that they make a dumb, disregardable argument, of the kind that humans pump out by the thousands. (Someone linked an example of some AfD delete !votes generated by AI and frankly they're better reasoned than a lot of the stuff humans post.) ] (]) 17:32, 28 December 2024 (UTC) | |||
*:(The bigger issue is people using Misplaced Pages ''as ChatGPT'' -- i.e. posting the prompts, not the responses -- but, like much of what is being discussed here, that is already stuff that can be removed.) ] (]) 17:36, 28 December 2024 (UTC) | |||
*ok, I agree with @], so therefore my vote is '''Support.''' ] (]) 12:41, 25 December 2024 (UTC) | |||
*'''Support'''. Sure I have questions about detection, but I don't think it means we shouldn't have a policy that explicitly states that it should not be used (and can be ignored/hatted if it is). Judging solely based on content (and no wp:bludgeoning, etc.) is unsustainable IMO. It would mean taking every wall of text seriously until it's clear that the ''content'' is unhelpful, and LLMs are very good at churning out plausible-sounding bullshit. It wastes everyone's time. If cognitive impairments or ESL issues make it hard to contribute, try voice-to-text, old-school translation software, or some other aid. LLMs aren't really ''you''.--] (]) 11:27, 23 December 2024 (UTC) | |||
* '''Comment'''. While I agree with the sentiment of the request, I am at a loss to see how we can identify LLM-generated comments in a consistent manner that can scale. Yes, it might be easier to identify an egregious copy-paste of a wall of text, but anything other than that might be hard to detect. Our options are:
:# Robust tooling to detect LLM generated text, with acceptably low levels of false positives. Somewhat similar to what Earwig does for Copyvios. But, someone needs to build it and host it on WMTools or at a similar location. | |||
:# Self-certification by editors. Every edit/publish dialog box should have a checkbox for "Is this text LLM generated" with y/n optionality.
:# Editors playing a vigilante role in reading the text and making a personal call on other editors' text. Obviously this is least preferred. | |||
: These are my starting views. ] (]) 00:37, 24 December 2024 (UTC) | |||
::A couple of days ago, an editor ran a new article through GPTzero, and was told that it was 99.8% likely to be AI-generated. I ran the same revision of the same article through the same tool the next day, and it told me that it was 98.3% likely to be human-written. | |||
::Now we're left scratching our heads: Why the difference? Which answer is correct? ] (]) 04:36, 26 December 2024 (UTC) | |||
*'''Strong oppose''' as it's impossible to enforce. Also LLMs are a valid and useful ] tool. – ] 05:18, 24 December 2024 (UTC) | |||
*'''Bonus suggestion!''': I'm curious what Wikipedians think about this so let's try this. Many of the comments here discuss the impracticality of determining whether a user's comments are AI generated (i.e. gptzero isn't perfect), and many give valid arguments for using LLMs (i.e. ]). If an argument is suspected to be written by LLM, I propose that editors should examine the user. Take a look at their listed contributions, and if they seem to have a habit of using AI, open a discussion on their talk page. If the user has a habit of using AI and doesn't recognize the inherent problems and refuses to change, this can be brought to ] for potential blocks. If (and only if) the person is blocked for using AI, their comments can be ignored. Or just ask ChatGPT to summarize them for you lol ] (]) 06:12, 25 December 2024 (UTC) | |||
*:I think that the general idea here is good: As much as possible, treat each account individually, and handle LLM use as a behavioral problem when it's actually a practical problem (i.e., not when it seems to be accurate and seems to be appropriate). ] (]) 04:38, 26 December 2024 (UTC) | |||
*:Except a substantial portion of the input in this thread is disagreement with the idea that English being someone's second language is an excuse for LLM-generated content at all. Translation of one's own material is an entirely different process (even if often leveraged by LLMs set to a different task than trying to generate what they "think" is new material that will pass as human-authored). I'm skeptical that any of the other things you've accepted as "valid arguments" from the pro-LLM or LLM-accepting side of this debate have consensus as valid, either. But go ahead and spell them out and we'll see. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 01:30, 28 December 2024 (UTC)
* '''Strong support''' the removal of any obvious, low-effort AI-generated post. I recently came across a user posting such examples. When called out on it, they posted a comment saying, amongst other things, "''HAHAHAHHAHAHAHAHAHAHHAHAHAHAHAHA. Yes, some of it might be. Because I don't have time to argue with, in my humble opinion, stupid PHOQUING people.''" and "''YOUUUUUUUUUUUUUUUUUUUUUUUUUUUU are assuming bath faith in me.''"
:They were later blocked as a sock evading a global lock. | |||
:Currently it is too easy for trolls to game ] and AI to waste people's time arguing with their bot-generated replies. Using AI to write your posts for you makes it difficult for others to assume good faith. I am ok with obvious exceptions like a non-native speaker using AI to help them articulate their point. ] (]) 21:29, 25 December 2024 (UTC) | |||
* '''Support''' for any and all reasons above. I'd be in favor of entirely banning AI-written text on the platform in articlespace as well. <!-- Template:Unsigned --><small class="autosigned">— Preceding ] comment added by ] (] • ]) 00:05, 27 December 2024 (UTC)</small> | |||
*'''Support''' though with caution, as there is the possibility of false positives. ] ''<span style="font-size:small; vertical-align:top;">]</span>''·''<span style="font-size:small; vertical-align:bottom;">]</span>'' 00:14, 29 December 2024 (UTC)
*'''Support''' for now. I believe the foundation should be busy ramping up support in this arena of distraction. IMHO, we are in the calibration phase of the use of these models. We don't feed trolls and we shouldn't tutor LLMs. Since the assumption of good faith is not a suicide pact, we shouldn't rely on it overmuch for guidance on edge cases. The issues as I see them are attribution (how do I know where your idea has been?), obedience to social norms (I not-here blocked someone recently for brazenly using ChatGPT to lie about using LLMs; the user proceeded to use GPT to post unblock requests), and a vast canyon between the limited human and financial resources of the foundation and the unlimited resources of bad actors with vast sums of money who would like to see Misplaced Pages less able. I have hopes we can get some higher visibility anti-LLM support (like a flag in my mouseover which always reports a percentage, so I know to check). This fire is burning now. It would be unwise to ignore this much longer. ] (]) 16:28, 29 December 2024 (UTC)
*'''Support''' with the clarification that use of AI assistance is not prohibited, as long as its contribution is <<50%. For example, using Grammarly for spell check and grammar/syntax is OK, but using AI to do your homework is not. ] (]) 02:45, 30 December 2024 (UTC) | |||
*:How do you propose to determine whether an AI's contribution is or is not "<<50%"? ] (]) 12:22, 2 January 2025 (UTC) | |||
*'''Support''' a ban on comments that were primarily generated by AI, with such comments to be deleted or struck-through as appropriate, because it's inherently misleading to pass off a chatbot's words as your own. Using ML-based spell-checkers is fine. I've seen some users call ChatGPT an "accessibility tool", but if you're not capable of communicating on English Misplaced Pages without a LLM doing it for you, then I don't think English Misplaced Pages is a good community for you to participate in. <!-- Template:Unsigned --><small class="autosigned">— Preceding ] comment added by ] (] • ]) 21:45, 31 December 2024 (UTC)</small> | |||
*:Just read ], where an editor was indeffed per ] after exclusively using LLMs to edit, even when responding within that very thread. The situation was a clear case of ], which I surely would've cited in my original comment had the page name come to mind. <span class="nowrap">—] (] | ])</span> 04:34, 3 January 2025 (UTC) | |||
*::That they were banned for CIR shows we don't need anything new to deal with disruption specifically caused by people using LLMs. ] (]) 05:29, 3 January 2025 (UTC) | |||
*'''Support''' A popular notion in many, many other discussions is "our most valuable resource is editor time". This applies here more than in most instances - editors cannot be expected to wear themselves out in engaging at length with the results of someone's one-line prompt to an LLM. Where LLM use in articles is destructive of content and source reliability, in discussions it is destructive of editor good faith. If you can't be bothered to put your arguments into intelligible form, don't participate. If your language capacity is lacking to the extent that you have to have a program ''generate'' (as opposed to polish) your stuff, you probably don't have any business participating either. Human detection ability seems quite sufficient for these cases (and once it ceases to, well, we'll deal with that when it becomes an issue). --<span style="font-family:Courier">]</span> <small>(] · ])</small> 12:00, 2 January 2025 (UTC) | |||
*:{{tpq|Human detection ability seems quite sufficient for these cases}} these discussions have already proven that human detection is not reliable, with human-generated comments labelled as AI and AI-generated comments labelled as human. Why should we prohibit LLM-generated content that is accurate, relevant and intelligible? We can already ignore/remove content that is none of those things, regardless of whether it is LLM-generated or not. ] (]) 12:21, 2 January 2025 (UTC)
:::If you want a pithy in-a-nutshell: personal effort is buy-in in a discussion among equals. If your personal effort in a discussion is an LLM prompt, no editor should be expected to engage with you. --<span style="font-family:Courier">]</span> <small>(] · ])</small> 12:42, 2 January 2025 (UTC) | |||
::::What matters is whether people are engaging with each other constructively and in good faith. Whether one or both parties is using an LLM is completely irrelevant to both aspects. ] (]) 13:29, 2 January 2025 (UTC) | |||
:::::What matters is whether editors wish to engage with an LLM on Misplaced Pages. https://news.mit.edu/2024/large-language-models-dont-behave-like-people-0723 I don't mind interacting with an LLM for my own use, just not on here. ] (]) 13:42, 2 January 2025 (UTC) | |||
::::::Nobody is forcing you to engage with anybody or anything you don't want to engage with, but you do not get to state who other people are allowed to choose to engage with. As long as someone is engaging constructively and in good faith I don't care whether they are human or machine, because (a) I have no reliable way of knowing, and (b) it is literally irrelevant to me. ] (]) 14:14, 2 January 2025 (UTC) | |||
:::::::{{tq|you do not get to state who other people are allowed to choose to engage with}} Strawman, since I didn't do that. Only you are doing that. ] (]) 14:26, 2 January 2025 (UTC) | |||
::::::::Quite the contrary - you are saying that nobody should be allowed to engage with someone using an LLM because you do not want to engage with someone using an LLM. My position is that everybody should be allowed to choose who they want and do not want to engage with for themselves. ] (]) 14:42, 2 January 2025 (UTC) | |||
*'''Support in principle''' - Using entirely AI-generated text in discussion is not a valuable contribution, since ultimately it is not capable of thought, but there should be leniency for those who are simply using AI to improve something they already wrote. ] (]) 14:55, 2 January 2025 (UTC) | |||
*'''Support in principle''' We don't want LLMs' opinions. IF and WHEN we do, we'll ask for an extension with some AI making an opening analysis of nominations. But we cannot, and do not want to, exclude cases where some user forms their opinion chatting with some AI, as was brilliantly shown above, nor do we want to exclude anyone less proficient with the language just "because it looks like a bot". - ] (]) 18:13, 2 January 2025 (UTC)
*'''Support the principle''' per Nabla. ] (]) 21:23, 2 January 2025 (UTC) | |||
*'''Support in principle, ''strongly'' concerned in potential practice''' - I like the idea of removing what is verifiably generated by a LLM. On the other hand, I've read that autistic writers may be more likely to have their (completely non-AI) writing flagged as potential AI use by automated detectors. I think it could be a problem if a rule that's good in principle could start disproportionately affecting some editors (who are not doing the thing) more than others (who are not doing the thing) in practice, whether that come from human (i.e. other editor) bias or automated tools. - ] (]) 18:36, 3 January 2025 (UTC) | |||
*'''Strongly support''' per ChatGPT (EEng) and SMcC; I'm not at all concerned about edit accessibility to non-native speakers; ], after all, is required to contribute to this project. I also oppose any breaking of the line against AI-generated plausible-sounding and -looking slop. We can't have a Misplaced Pages of LLMs talking to LLMs on how to make LLM content for LLM articles. ]<span style="color: #3558b7;"><sup>]</sup>]</span> 21:53, 4 January 2025 (UTC) | |||
*'''Support''' per SMcCandlish. If someone needs to use an LLM to be able to contribute and is unable to convert the output into something that is not obviously AI-generated, then ] likely applies. It'll be impossible to stop minimal usage cases, but having such a rule in place would at least encourage editors to review their output and stop the unhelpful posting of entire ChatGPT essays to discussion spaces. -- ] - <sup>]</sup>/<sub>]</sub> 01:41, 6 January 2025 (UTC)
*'''Strong... opinion''' -- I think the nuance required here in part is the difference between someone using '''any''' automated tool for assistance, versus true bot-like behavior. I believe that unauthorized bot behavior is already prohibited, which should help address the concerns that we mere humans cannot keep up with LLM bots. I agree, we cannot, but I don't see much of that. I am also not inclined to the "if you cannot write, you cannot contribute" view... I can imagine 15 years ago some of us might have made the same statement about spelling and grammar; if you cannot spell properly without auto-correct you have no right to edit an encyclopedia. There are a significant number of very intelligent people who are afflicted with things like dyslexia, Asperger's, etc. who have been contributing using various technology tools for assistance. How many of us have Grammarly or similar running in their web browser? And beyond that, tools and what they're called will continue to evolve. I am very much against just banning LLM use; largely because it can turn into an unnecessary witch hunt. There are people who will use the tools constructively, and those who will not. I can see some places where it should ''probably'' be banned (such as using an LLM to determine consensus on a discussion that needs closing (AfD, RM, etc)). But even in those areas, I think many of our existing policies and guidelines already address most of the actual concerns we're seeing when it comes to that activity. Chiefly, that as long as people are being held accountable for how they use the tools, then who cares what the tool is called in 2000, 2020 or 2040? So I think the course forward is best served by (1) considering refinement to ] so that we're encapsulating LLM-type bot behavior, as well as some sort of threshold on "non-human" capable editing limits (perhaps as part of ]); (2) making a policy or guideline very clear, bright line, that a user will be treated the same regardless of what tools they use, LLM or otherwise, and that disruptive editing will be handled accordingly; (2a) perhaps a single-warning template reflective of such, to welcome people who appear to be using LLMs, noting that they are responsible for their adherence to policy and that LLMs tend to get policy wrong. ] ] 02:48, 7 January 2025 (UTC)
*:I like these proposal ideas, and I believe you've hit the nail on the head on the issue. ] (]) 13:14, 7 January 2025 (UTC) | |||
*:+1 ] 15:16, 7 January 2025 (UTC) | |||
*:The proposal is focused on comments that are '''obviously generated''' by LLM, which would not include non-generative AI processes like Grammarly or translation. The whole issue is arguments that were clearly created by a chatbot etc. doing ''its own'' analysis of a discussion and an editor just pasting that output rather than forming the argument themselves, which results in large amounts of text that other editors/closers then have to read and respond to. ] (]) 22:26, 7 January 2025 (UTC) | |||
*::Right, where one doesn't feel like one is talking with ]. ] (]) 22:52, 7 January 2025 (UTC) | |||
*::To be honest, that's my concern. What if something seems obviously generated by LLM but is entirely human-written? What if certain groups of editors are more likely to produce seemingly LLM text that isn't generated by any LLM at all? Is there a way to control for that? How do we consistently enforce this kind of thing without too many false positives ''or'' false negatives? | |||
*::Replying to the top-level opinion at this point: Most autistic people who would have in the past received an Asperger's diagnosis, barring some other reason, would IMO not be more likely to "need" more assistive technology (LLM or otherwise) to contribute to Misplaced Pages articles and discussions than any other user. However, there have been reports that autistic people may be more likely to produce text that is ''falsely'' positive in AI detection tools. See (I had a second example, I thought, except that I've apparently read my last free article on that site (I bet I read said last article when looking up the ''same'' article when I wrote my !vote a few days back, {{facepalm}}). Not gonna link what I can't (quickly) verify!)) <small>(As an aside, I think ] can come across as a bit... charged? Just something to note.)</small> | |||
*::The mention of Data in particular only heightens my concern because--while no one has accused me yet of using LLM when I write--I ''have'' (on other sites, not here) been accused of being a bot before because of the style of writing I adopt at times. What if editor bias disproportionately plays into ''who'' is suspected of LLM use and who isn't? What if imperfections in automated AI detection only serve to cement the bias? - ] (]) 01:25, 8 January 2025 (UTC) | |||
*:::These questions around controlling for false positives and false negatives are not new, we face them all the time already, for example distinguishing vandalism from good faith contributions. We work through them, but they don't mean we don't have a policy on vandalism. ] (]) 01:41, 8 January 2025 (UTC) | |||
*::::Indeed, that's why my original !vote is (in short) "support but concerned". I definitely think I'd support a policy; I'm just raising questions/concerns I think are worth considering in the process of creating such a policy. I think some part of me is just remembering times I've seen bias negatively affect the project in the past and that part of me is coming out here trying to prevent another situation like that by laying the concerns out. I dunno. - ] (]) 03:59, 8 January 2025 (UTC) | |||
*::::It's much easier to distinguish good-faith contributors: see if they change. ] (]) 04:06, 8 January 2025 (UTC) | |||
*::I think, Joelle, what you're overlooking in that statement is that ''obviously generated'' is something that is going to be short-lived, meaning that it will not be very long before LLM-generated content will be far harder to distinguish. Just like how a year ago any generative image AI would often get the number and/or placement of fingers wrong on a person's hand, in less than a year's time that appears to be generally solved. Today the telltale sign is that even if you ask for someone who is left-handed, you'll almost always get an image of a right-handed person... But that too will go away shortly. I believe it is (scarily) not too far off when LLMs can determine the prompt is for Misplaced Pages (by way of seeing the wikimarkup or references to normal wiki policies), and among other things, write in the style of experienced editors, perhaps modeled off of some of us in this very discussion. That will be much more difficult to tell, especially when it learns how to ''not be so overly polite and apologetic''. Beyond that, I believe there are a lot of people successfully and positively using LLMs already on WP. For those reasons, I proffer that we focus on refining the definitions and parameters for general disruptive editing (such as EW, 3RR, etc), what a bot edit/activity is, and perhaps a standardized "friendly welcome" and "friendly notice" for those suspected of using LLMs, and reinforce that the tool matters less than the behavior. I think the analogy I'm going for is that of how we handle socks -- it is not against policy to have multiple accounts; however, when it is disruptive, it is blatantly prohibited. But the signs and symptoms of socks are really what we care about. ] ] 05:02, 8 January 2025 (UTC)
*:::Something can be prohibited even if it is or will be difficult to identify it. Just because we can anticipate AI-generated slop becoming harder to detect doesn't mean it's not still disruptive or that we shouldn't be able to strike it when it is obvious. ] (]) 23:41, 9 January 2025 (UTC) | |||
*'''Strong support''' for striking AI-generated comments. Detection of course will be up to admin discretion, but so are things like incivility and personal attacks; while short snippets of AI-generated text are hard to detect, lengthy paragraphs are quite easy to eyeball, and from my experience that's the area where it's most likely to come up and be most immediately disruptive. <small> ] (]) (it/she) </small> 22:25, 9 January 2025 (UTC)
*'''Support''' {{TQ|for striking AI-generated comments. Detection of course will be up to admin discretion}} as are other behavioural problems. Having read most of the discussion, and having been astonished by the 'individuality' of some machine-generated content, nonetheless, while detection may be less-than-foolproof, I still think we should make the point that such content isn't acceptable. It should be within our capacity to make the distinction between 'machine-generated' content and the use of software to correct grammar or to translate, neither of which should be directly outlawed.] (]) 07:18, 10 January 2025 (UTC)
*:But ''why'' should such content be deemed unacceptable when there are no problems with it? When there are problems with it, it can and should be dealt with in the same way that those problems are dealt with when written by humans. ] (]) 18:25, 10 January 2025 (UTC)
=== Request for closure === | |||
IMHO, since we're over 30 days in and the consensus seems lopsided, the community would do well to close this rapidly (as opposed to waiting for additional comments). ] (]) 21:04, 2 January 2025 (UTC)
:Sure it looks lopsided if you only read what has been bolded. When you look beyond that, at what those bolding support are actually supporting, it's not at all clear there is a consensus. Then when you actually look at the arguments made, it seems that there is not actually a consensus for anything more than "some users vocally dislike AI". ] (]) 21:59, 2 January 2025 (UTC)
::If that's so, I'm sure the closer will figure it out. ] (]) 22:14, 2 January 2025 (UTC) | |||
:::I was merely pointing out that BusterD's stated reason for calling for closure now is incorrect. That doesn't mean closing now would necessarily be wrong, just that overwhelming consensus is not a reason for it. ] (]) 22:25, 2 January 2025 (UTC) | |||
::::As a sysop, reacting to LLM-related abuse is virtually all I've done in the last week or so. We have a pressing need for some help from the community, and while I'm open to discussion on this, it's unreasonable to expect humans to keep up with bot-like behavior. I've made my argument above, and I'll not disagree further here. ] (]) 00:08, 3 January 2025 (UTC) | |||
::Many on the "support" side are also supporting with conditions rather than unconditionally. For instance, many have suggested that LLM-assisted grammar correction is okay.
::The closer must also take it into account. ] ] 11:02, 3 January 2025 (UTC) | |||
* There are multiple breaks and 391 comments now. At this scale, it becomes difficult for a human to digest and summarise it all. AI technology seems well suited to this, and so I asked Gemini again, "Please summarise the consensus of the following RFC:", giving it all the discussion above. Its response was much as before:
{{tqb| | |||
The RFC on LLM/chatbot comments in discussions is contentious, with a significant number of editors on both sides of the issue. | |||
'''Support''' | |||
Many editors support the proposal to discount or remove LLM-generated comments in discussions. Their arguments include: | |||
* LLMs do not have original thoughts or arguments and therefore do not contribute meaningfully to discussions. | |||
* LLM-generated comments can be disruptive, wasting editors' time and derailing discussions. | |||
* LLMs can be used by bad-faith actors to manipulate consensus or spread misinformation. | |||
*Editors should be encouraged to participate in discussions using their own words and thoughts. | |||
'''Oppose''' | |||
Other editors oppose the proposal, arguing that: | |||
* LLMs can be useful tools for editors who have difficulty expressing themselves in English, such as non-native speakers or those with disabilities. | |||
* LLM-generated comments should be evaluated on their merits, not their origin. | |||
* Detection of LLM-generated comments is unreliable and could lead to false positives. | |||
* A ban on LLM-generated comments could discourage participation from editors who rely on these tools. | |||
'''Additional points''' | |||
* There is disagreement about whether machine translation should be considered LLM-generated content. | |||
* Some editors believe that LLMs could be used constructively in discussions, for example, to summarize arguments or provide alternative perspectives. | |||
* Others are concerned about the potential for LLMs to be used maliciously, for example, to flood discussions with irrelevant comments or to impersonate other editors. | |||
'''Overall''' | |||
There is no clear consensus on whether to ban or allow LLM-generated comments in discussions. The discussion highlights the challenges of balancing the potential benefits and risks of using LLMs in a collaborative environment like Misplaced Pages. | |||
}} | |||
]🐉(]) 21:58, 10 January 2025 (UTC) | |||
That was a good answer, but I wanted to get a rough headcount and so I asked a supplementary question, "What was the count of the number of support and opposing votes in the discussion?" The answer was:
{{tqb| | |||
Unfortunately, a precise count of support vs oppose votes in that discussion is difficult to determine definitively for a few reasons: | |||
* '''Some votes are unclear''': Several editors express nuanced positions or change their vote during the discussion. | |||
* '''"Strong" votes aren't quantified''': While many editors use "Strong support" or "Strong oppose", these don't carry extra weight in a simple tally. | |||
* '''Focus shifts''': The initial focus on AI-generated comments broadens to include AI-assistance, making some votes less clear-cut. | |||
However, based on a rough manual count: | |||
* '''Support''': Around 60-70 votes clearly favor discounting or removing AI-generated comments. | |||
* '''Oppose''': Roughly 30-40 votes lean against the proposal, with varying degrees of opposition. | |||
'''Important Note''': This is an approximation. The actual consensus in such discussions is determined by the closer, who weighs the arguments' quality and policy relevance, not just the number of votes. | |||
}} | |||
This seems to be another good answer – better than we get from most human closes. I made a rough manual count to confirm that the numbers given are reasonable ballpark figures. And it seems to have an impressive grasp of nuance – noticing that several editors were qualifying their votes as strong/weak and that some editors changed their vote. ]🐉(]) 22:02, 10 January 2025 (UTC) | |||
:These feel (from my memory of when I skimmed over the discussion several days ago before my original !vote/comment) like they might be leaving out some relevant points that came up. (Unfortunately I can't quantify that without taking a look which I don't have time to do right now (other than the fact my own concerns didn't make it into the summary, which... fair, I guess, considering I was the only one who raised that specific issue).) <small>(I'm also very surprised it came up with something coherent as well as on topic, but that just has more to do with the fact Gemini (or at least whatever AI summarizes Google searches) told me extremely recently that I could get to Alaska by car without a passport if I drove through the ocean. Not took a ferry but drove... through the water... in a car.)</small> - ] (]) 07:02, 11 January 2025 (UTC) | |||
::Your !vote was a qualified support with concern about false positives affecting editors such as the autistic. That's then a nuanced, both-ways vote and the Gemini summary seems to capture that reasonably well. ]🐉(]) 09:43, 11 January 2025 (UTC) | |||
:::My vote was mentioned purely because it was the only specific thing I could bring to mind quickly that hadn't ended up in the AI summary. I think Thryduulf's take about what it doesn't make clear more closely matches what I think I was actually feeling about the summary (that it doesn't make clear the wide breadth of what the support !votes are actually saying). - ] (]) 19:08, 11 January 2025 (UTC) | |||
:On the whole that summary matches my impression of discussion, with the exception that it doesn't make it clear that not everyone expressing support is supporting the same thing. ] (]) 14:38, 11 January 2025 (UTC) | |||
:I have strong doubts that Gemini is able to discern support or opposition even in individual !votes, let alone assess one participant's opinion across multiple comments or the relative preponderance of a given opinion across ''different'' participants. I fed it just the material before section break 1, then asked it to list the editors who '''oppose''' banning AI-generated comments. Here was its response: {{tq2|Based on the discussion, the following editors oppose banning AI-generated comments or express concerns about the feasibility of such a ban:{{pb}}Just Step Sideways: Argues against dismissing arguments based on the author rather than their content.<br>Remsense: Believes the downsides of such a policy outweigh the benefits.<br>Chipmunkdavis: Supports admin discretion and emphasizes the difficulty of creating a policy around rapidly evolving AI technology.<br>Jlwoodwa: Acknowledges that not all AI-generated comments are useless and suggests a case-by-case approach.<br>Aaron Liu: Is skeptical about the long-term feasibility of reliably distinguishing between human and AI-generated text.<br>Sean.hoyland: Expresses skepticism about the enforceability of such a rule due to the rapid advancements in AI technology.<br>Gnomingstuff: Strongly opposes a ban, citing the difficulty of enforcement, potential for false positives, and the impact on non-native English speakers.<br>Randy Kryn: Expresses concerns about potentially discouraging the use of AI tools for improving the clarity of writing by non-native English speakers.<br>Selfstudier: Acknowledges the principle of discounting AI-generated comments but questions the reliability of identifying them.}} <br>Somehow I don't think that is how a closer would interpret the !votes from @], @], @], @], or @]... 
Note that it also did ''not'' manage to identify Thryduulf as an opposer until I asked who '''supported''' striking AI-generated comments:<br>{{tq2|Based on the discussion, the following editors support striking AI-generated comments:{{pb}}Thryduulf: Initially supported discounting or ignoring AI-generated comments, but later withdrew support due to potential downsides.<br>Chipmunkdavis: Initially expressed support for admin discretion, but later shifted to more explicit support for guidance to closers and evaluators.<br>Bluethricecreamman: Strongly supports striking out AI-generated content, especially if it appears insincere or disruptive.<br>Isaacl: Supports striking out AI-generated content and favors sanctions for users who overuse AI to write arguments.<br>Clayoquot: Supports striking out AI-generated comments, comparing them to comments from suspected sock puppets.<br>Lazman321: Strongly supports striking out AI-generated comments, considering them as "pure laziness" and a waste of other editors' time.<br>Seraphimblade: Supports striking out AI-generated comments as they may not reflect the actual thoughts of the editor.}}{{pb}}When I re-fed it the same discussion, it modified its response thusly: {{tq2|Based on the additional excerpt of the discussion, the following editors support striking or collapsing AI-generated comments:{{pb}}Jlwoodwa: Supports discounting or collapsing AI-generated comments, finding them disruptive if used to present arguments in an unnecessarily verbose manner.<br>Gnomingstuff: Opposes a hard ban, but acknowledges that AI-generated comments can be pointless and disruptive.<br>Seraphimblade: Reiterates support for striking AI-generated comments as they may not reflect the editor's thoughts.<br>It's important to note that:{{pb}}Gnomingstuff also raises concerns about the difficulty of enforcing such a policy and potential unintended consequences, like discouraging users from using translation tools.<br>Aaron Liu's test using GPTZero on a human-written comment by Gnomingstuff indicated a 99% probability that it was human-written, suggesting that these tools may not be reliable.}}{{pb}}The last bullet point makes zero sense...<br>It didn't even identify me as supporting the proposal until I specifically asked what it thought my position was.{{pb}}No fucking way should we trust AI's assessment of any kind of consensus for any discussion. ] (]) 23:02, 11 January 2025 (UTC) | |||
::* Perhaps it's better at summarising in a broad-brush way than parsing individual votes. But to clarify the technical details, please say which version of Gemini you used as there seem to be at least five currently. My test was done using Advanced Gemini 1.5 Pro. I've also tried the Deep Research option but am not quite sure when that kicks in. ]🐉(]) 09:57, 12 January 2025 (UTC) | |||
::*:But broad-brush summaries are ''not'' what we want when the intent is to determine consensus among editors, since the AI is clearly not identifying positions accurately and doesn't seem to distinguish between one argument made multiple times by the same person vs by multiple different people. <br>I used Gemini 1.5. ] (]) 19:04, 12 January 2025 (UTC) | |||
Ok, what point is there to having AI summarize consensus? Consensus should be based on the strength of individual arguments, which only an experienced human editor can assess; AI can only guess what arguments look like. It's especially weird to use it here since if, theoretically, the AI summary decides there is consensus against considering AI comments in Misplaced Pages discussions, then that would include AI summary itself, leading to ]. ] (]) 19:41, 11 January 2025 (UTC) | |||
:* One point is that AI may be better at summarising huge, sprawling discussions like this one. That's because it may have more patience and ability to handle large masses of data. Humans will tend to suffer from ], ] and issues like ] and other ]. | |||
:: Also, "beggars can't be choosers". The OP called for a close 10 days ago but no human has rushed to perform this thankless task. Discussions on Misplaced Pages are often left dangling without a clear conclusion and there many other big backlogs. | |||
:: ]🐉(]) 10:32, 12 January 2025 (UTC) | |||
=== Alternate proposal === | |||
{{discussion top|result=Redundant proposal, confusingly worded, with no support, and not even any further discussion interest in 10 days. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 05:23, 22 December 2024 (UTC)}} | |||
Whereas many editors, including me, have cited problems with the accuracy of existing tools such as ZeroGPT, I propose that '''remarks that are blatantly generated by an LLM or similar automated system should be discounted/removed/collapsed/hidden'''. ] <sup>] / ]</sup> 10:00, 10 December 2024 (UTC)
:'''Oppose''' as completely unnecessary and far too prone to error per the above discussion. Any comment that is good (on topic, relevant, etc) should be considered by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is bad (off-topic, irrelevant, etc) should be ignored by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is both bad and disruptive (e.g. by being excessively long, completely irrelevant, bludgeoning, etc) should be removed and/or hatted as appropriate, regardless of whether it was made with LLM-input of any sort. The good thing is that ''this is already policy'' so we don't need to call out LLMs specifically, and indeed doing so is likely to be disruptive in cases where human-written comments are misidentified as being LLM-written (which ''will'' happen, regardless of whether tools are used). ] (]) 11:19, 10 December 2024 (UTC) | |||
:I think this proposal is not really necessary. I support it, but that is because it is functionally identical to the one directly above it, which I also supported. This should probably be hatted. ] ] 18:32, 10 December 2024 (UTC) | |||
:What does '''blatantly generated''' mean? Do you mean only where the remark is signed with "I, Chatbot", or anything that ''appears'' to be LLM-style? I don't think there's much in between. <span style="font-family:cursive">]]</span> 19:21, 10 December 2024 (UTC)
:'''Procedural close''' per BugGhost. I'd hat this myself, but I don't think that'd be appropriate since it's only the two of us who have expressed that this proposal is basically an exact clone. ] (]) 03:00, 11 December 2024 (UTC) | |||
{{discussion bottom}} | |||
{{discussion bottom}} | |||
{{ping|Tamzin}} Did you find there was consensus support for handling obvious use of comments through any form of generative AI, regardless of the specific underlying technology (thus not only for large language models)? ] (]) 23:45, 12 January 2025 (UTC) | |||
:I've added "or similar AI technologies" to clarify. I don't think there was enough discussion of technical aspects of AI to be more precise than that. <span style="font-family:courier"> -- ]</span><sup class="nowrap">[]]</sup> <small>(])</small> 00:43, 13 January 2025 (UTC) | |||
::My general impression is that most commenters were using the term "LLM" generically, rather than referring to a specific technology. ] (]) 04:28, 13 January 2025 (UTC) | |||
== RfC: Voluntary RfA after resignation == | |||
{{discussion top|1=There is clear consensus that participants in this discussion wish to retain the "Option 2" status quo. We're past 30 days of discussion and there's not much traffic on the discussion now. It's unlikely the consensus would suddenly shift with additional discussion. --] (]) 18:29, 16 January 2025 (UTC)}} | |||
<!-- ] 22:01, 19 January 2025 (UTC) -->{{User:ClueBot III/DoNotArchiveUntil|1737324070}} | |||
Should ] be amended to: | |||
* '''Option 1'''{{snd}}Require former administrators to request restoration of their tools at the ] (BN) if they are eligible to do so (i.e., they do not fit into any of the exceptions). | |||
* '''Option 2'''{{snd}}<s>Clarify</s> <ins>Maintain the status quo</ins> that former administrators who would be eligible to request restoration via BN may instead request restoration of their tools via a voluntary ] (RfA). | |||
* '''Option 3'''{{snd}}Allow bureaucrats to SNOW-close RfAs as successful if (a) 48 hours have passed, (b) the editor has right of resysop, and (c) a SNOW close is warranted. | |||
'''Background''': This issue arose in one ] and is currently being discussed in an ]. ] (]/]) 21:14, 15 December 2024 (UTC)<br /> | |||
'''Note''': There is an ongoing related discussion at {{slink|Misplaced Pages:Village pump (idea lab)#Making voluntary "reconfirmation" RFA's less controversial}}.<br /> | |||
'''Note''': Option 2 was modified around 22:08, 15 December 2024 (UTC). | |||
'''Note''': Added option 3. ] (] • she/her) 22:12, 15 December 2024 (UTC) | |||
:{{block indent|em=1.6|1=<small>Notified: ], ], ], ], ]. ] (]/]) 21:19, 15 December 2024 (UTC)</small>}}<!-- Template:Notified --> | |||
*'''2''' per ]. If an admin wishes to be held accountable for their actions at a re-RfA, they should be allowed to do so. ] ] 21:22, 15 December 2024 (UTC) | |||
*:Also fine with 3 ] ] 22:23, 15 December 2024 (UTC) | |||
* There is ongoing discussion about this at ]. ] (]) 21:24, 15 December 2024 (UTC) | |||
** '''2''', after thought. I don't think 3 provides much benefit, and creating a separate class of RfAs that are speedily passed feels like a misstep. If there are serious issues surrounding wasting time on RfAs set up under what might feel to someone like misleading pretenses, that is best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm ('''RRfA''')". ] (]) 14:49, 16 December 2024 (UTC)
**:{{tq|best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)"}} - I like this idea, if option 2 comes out as consensus I think this small change would be a step in the right direction, as the "this isn't the best use of time" crowd (myself included) would be able to quickly identify the type of RFAs they don't want to participate in. ] ] 11:05, 17 December 2024 (UTC) | |||
**::I think that's a great idea. I would support adding some text encouraging people who are considering seeking reconfirmation to add (RRfA) or (reconfirmation) after their username in the RfA page title. That way people who are averse to reading or participating in reconfirmations can easily avoid them, and no one is confused about what is going on. ] (]) 14:23, 17 December 2024 (UTC) | |||
**::I think this would be a great idea if it differentiated against recall RfAs. ] (]) 18:37, 17 December 2024 (UTC) | |||
**:::If we are differentiating three types of RFA we need three terms. Post-recall RFAs are referred to as "reconfirmation RFAs", "Re-RFAS" or "RRFAs" in multiple places, so ones of the type being discussed here are the ones that should take the new term. "Voluntary reconfirmation RFA" (VRRFA or just VRFA) is the only thing that comes to mind but others will probably have better ideas. ] (]) 21:00, 17 December 2024 (UTC) | |||
* '''1''' ] ] 21:25, 15 December 2024 (UTC) | |||
*'''2''' I don't see why people trying to do the right thing should be discouraged from doing so. If others feel it is a waste of time, they are free to simply not participate. ] ] 21:27, 15 December 2024 (UTC) | |||
*'''2''' Getting reconfirmation from the community should be allowed. Those who see it as a waste of time can ignore those RfAs. ] ] 21:32, 15 December 2024 (UTC) | |||
*Of course they may request at RfA. They shouldn't but they may. This RfA feels like it does nothing to address the criticism actually in play and per the link to the idea lab discussion it's premature to boot. ] (]) 21:38, 15 December 2024 (UTC) | |||
*'''2''' per my comments at the idea lab discussion and Queen of Hearts, Beeblebrox and Scazjmd above. I strongly disagree with Barkeep's comment that "They shouldn't". It shouldn't be made mandatory, but it should be encouraged where the time since desysop and/or the last RFA has been lengthy. ] (]) 21:42, 15 December 2024 (UTC)
*:When to encourage it would be a worthwhile RfC and such a discussion could be had at the idea lab before launching an RfC. Best, ] (]) 21:44, 15 December 2024 (UTC) | |||
*::I've started that discussion as a subsection to the linked VPI discussion. ] (]) 22:20, 15 December 2024 (UTC) | |||
*'''1''' <ins>or '''3'''</ins>. RFA is an "expensive" process in terms of community time. RFAs that qualify should be fast-tracked via the BN process. It is only recently that a trend has emerged that folks that don't need to RFA are RFAing again. 2 in the last 6 months. If this continues to scale up, it is going to take up a lot of community time, and create noise in the various RFA statistics and RFA notification systems (for example, watchlist notices and ]). –] <small>(])</small> 21:44, 15 December 2024 (UTC) | |||
*:Making statistics "noisy" is just a reason to improve the way the statistics are gathered. In this case collecting statistics for reconfirmation RFAs separately from other RFAs would seem to be both very simple and very effective. ''If'' (and it is a very big if) the number of reconfirmation RFAs means that notifications are getting overloaded, ''then'' we can discuss whether reconfirmation RFAs should be notified differently. As far as differentiating them, that is also trivially simple - just add a parameter to ] (perhaps "reconfirmation=y") that outputs something that bots and scripts can check for. ] (]) 22:11, 15 December 2024 (UTC) | |||
*:Option 3 looks like a good compromise. I'd support that too. –] <small>(])</small> 22:15, 15 December 2024 (UTC) | |||
*:I'm weakly opposed to option 3, editors who want feedback and a renewed mandate from the community should be entitled to it. If they felt that that a quick endorsement was all that was required then could have had that at BN, they explicitly chose not to go that route. Nobody is required to participate in an RFA, so if it is going the way you think it should, or you don't have an opinion, then just don't participate and your time has not been wasted. ] (]) 22:20, 15 December 2024 (UTC) | |||
*'''2'''. We should not make it ''more difficult'' for administrators to be held accountable for their actions in the way they please. ]<sub>]<sub>]</sub></sub> (]/]) 22:00, 15 December 2024 (UTC) | |||
* Added '''option 3''' above. Maybe worth considering as a happy medium, where unsure admins can get a check on their conduct without taking up too much time. ] (] • she/her) 22:11, 15 December 2024 (UTC) | |||
*'''2''' – If a former admin wishes to subject themselves to RfA to be sure they have the requisite community confidence to regain the tools, why should we stop them? Any editor who feels the process is a waste of time is free to ignore any such RfAs. — ] ⚓ ] 22:12, 15 December 2024 (UTC) | |||
*:*I would also support option '''3''' if the time is extended to 72 hours instead of 48. That, however, is a detail that can be worked out after this RfC. — ] ⚓ ] 02:05, 16 December 2024 (UTC) | |||
*'''Option 3''' per leek. ] (]/]) 22:16, 15 December 2024 (UTC) | |||
*:A further note: option 3 gives 'crats the discretion to SNOW close a successful voluntary re-RfA; it doesn't require such a SNOW close, and I trust the 'crats to keep an RfA open if an admin has a good reason for doing so. ] (]/]) 23:24, 16 December 2024 (UTC) | |||
*'''2''' as per {{noping|JJPMaster}}. Regards, --] (]) 22:20, 15 December 2024 (UTC) | |||
*'''Option 2''' (no change) – The sample size is far too small for us to analyze the impact of such a change, but I believe RfA should always be available. Now that ] is policy, returning administrators may worry that they have become out of touch with community norms and may face a recall as soon as they get their tools back at BN. Having this familiar community touchpoint as an option makes a ton of sense, and would be far less disruptive / demoralizing than a potential recall. Taking this route away, even if it remains rarely used, would be detrimental to our desire for increased administrator accountability. – ] 22:22, 15 December 2024 (UTC) | |||
*{{ec}} I'm surprised the response here hasn't been more hostile, given that these give the newly-unresigned administrator a ] for a year. —] 22:25, 15 December 2024 (UTC) | |||
*:@] hostile to what? ] (]) 22:26, 15 December 2024 (UTC) | |||
*'''2, distant second preference 3'''. I would probably support 3 as first pick if not for recall's rule regarding last RfA, but as it stands, SNOW-closing a discussion that makes someone immune to recall for a year is a non-starter. Between 1 and 2, though, the only argument for 1 seems to be that it avoids a waste of time, for which there is the much simpler solution of not participating and instead doing something else. ] and ] are always there. <span style="font-family:courier"> -- ]</span><sup class="nowrap">[]]</sup> <small>(])</small> 23:31, 15 December 2024 (UTC) | |||
* 1 would be my preference, but I don't think we need a specific rule for this. -- ] (]) 23:36, 15 December 2024 (UTC) | |||
*'''Option 1'''. <s>No second preference between 2 or 3.</s> As long as a former administrator didn't resign under a cloud, picking up the tools again should be low friction and low effort for the entire community. If there are issues introduced by the recall process, they should be fixed in the recall policy itself. ] (]) 01:19, 16 December 2024 (UTC) | |||
*:After considering this further, I prefer option 3 over option 2 if option 1 is not the consensus. ] (]) 07:36, 16 December 2024 (UTC) | |||
*'''Option 2''', i.e. leave well enough alone. There is really not a problem here that needs fixing. If someone doesn’t want to “waste their time” participating in an RfA that’s not required by policy, they can always, well, not participate in the RfA. No one is required to participate in someone else’s RfA, and I struggle to see the point of participating but then complaining about “having to” participate. ] (]) 01:24, 16 December 2024 (UTC) | |||
*'''Option 2''' nobody is obligated to participate in a re-confirmation RfA. If you think they are a waste of time, avoid them. ] (]) 01:49, 16 December 2024 (UTC) | |||
* '''1 or 3''' per Novem Linguae. <span style="padding:2px 5px;border-radius:5px;font-family:Arial black;white-space:nowrap;vertical-align:-1px">] <span style=color:red>F</span> ]</span> 02:35, 16 December 2024 (UTC) | |||
*'''Option 3''': Because it is incredibly silly to have situations like we do now of "this guy did something wrong by doing an RfA that policy explicitly allows, oh well, nothing to do but sit on our hands and dissect the process across three venues and counting." Your time is your own. No one is forcibly stealing it from you. At the same time it is equally silly to let the process drag on, for reasons explained in ]. ] (]) 03:42, 16 December 2024 (UTC) | |||
*:Update: Option 2 seems to be the consensus and I also would be fine with that. ] (]) 18:10, 19 December 2024 (UTC) | |||
*'''Option 3''' per Gnoming. I think 2 works, but it is a very long process and for someone to renew their tools, it feels like an unnecessarily long process compared to a normal RfA. ] (]) 04:25, 16 December 2024 (UTC) | |||
*As someone who supported both WormTT and Hog Farm's RfAs, option 1 > option 3 >> option 2. At each individual RfA the question is whether or not a specific editor should be an admin, and in both cases I felt that the answer was clearly "yes". However, I agree that RfA is a very intensive process. It requires a lot of time from the community, as others have argued better than I can. I prefer option 1 to option 3 because the existence of the procedure in option 3 implies that it is a good thing to go through 48 hours of RfA to re-request the mop. But anything which saves community time is a good thing. <b>]]</b> (] • he/they) 04:31, 16 December 2024 (UTC) | |||
*:I've seen this assertion made multiple times now that {{tpq| requires a lot of time from the community}}, yet nowhere has anybody articulated why this is true. What time is required, given that nobody is required to participate and everybody who does choose to participate can spend as much or as little time assessing the candidate as they wish? How and why does a reconfirmation RFA require any more time from editors (individually or collectively) than a request at BN? ] (]) 04:58, 16 December 2024 (UTC)
*::I think there are a number of factors and people are summing it up as "time-wasting" or similar: | |||
*::# BN Is designed for this exact scenario. It's also clearly a less contentious process. | |||
*::# Snow closures are a good example of how we try to avoid wasting community time on unnecessary process, and the same reasoning applies here. Misplaced Pages is not a bureaucracy and there's no reason to have a 7-day process when the outcome is a given.
*::# If former administrators continue to choose re-RFAs over BN, it could set a problematic precedent where future re-adminship candidates feel pressured to go through an RFA and all that entails. I don't want to discourage people already vetted by the community from rejoining the ranks. | |||
*::# The RFA process is designed to be a thoughtful review of prospective administrators and I'm concerned these kinds of perfunctory RFAs will lead to people taking the process less seriously in the future. | |||
*::] (]) 07:31, 16 December 2024 (UTC) | |||
*::Because several thousand people have RFA on their watchlist, and thousands more will see the "there's an open RFA" notice on theirs whether they follow it or not. Unlike BN, RFA is a process that depends on community input from a large number of people. In order to even ''realise that the RFA is not worth their time'', they have to: | |||
*::* Read the opening statement and first few question answers (I just counted, HF's opening and first 5 answers are about 1000 words) | |||
*::* Think, "oh, they're an ex-admin, I wonder why they're going through RFA, what was their cloud"
*::* Read through the comments and votes to see if any issues have been brought up (another ~1000 words) | |||
*::* None have | |||
*::* Realise your input is not necessary and this could have been done at BN | |||
*::This process will be repeated by hundreds of editors over the course of a week. ] ] 08:07, 16 December 2024 (UTC) | |||
*:::That they were former admins has always been the first two sentences of their RfA’s statement, sentences which are immediately followed by that they resigned due to personal time commitment issues. You do not have to read the first 1000+ words to figure that out. If the reader wants to see if the candidate was lying in their statement, then they just have a quick skim through the oppose section. None of this should take more than 30 seconds in total. ] (]) 13:15, 16 December 2024 (UTC) | |||
*::::Not everyone can skim things easily - it personally takes me a while to read sections. I don't know if they're going to bury the lede and say something like "Also I made 10,000 insane redirects and then decided to take a break just before arbcom launched a case" in paragraph 6. Hog Farm's self nom had two paragraphs about disputes and it takes more than 30 seconds to unpick that and determine if that is a "cloud" or not. Even for reconfirmations, it definitely takes more than 30 seconds to determine a conclusion. ] ] 11:21, 17 December 2024 (UTC) | |||
*:::::They said they resigned due to personal time commitments. That is directly saying they weren't under a cloud, so I'll believe them unless someone claims the contrary in the oppose section. If the disputes section contained a cloud, the oppose section would have said so. One chooses to examine such nominations like normal RfAs. ] (]) 18:47, 17 December 2024 (UTC)
*::::::Just to double check, you're saying that whenever you go onto an RFA you expect any reason to oppose to already be listed by someone else, and no thought is required? I am beginning to see how you are able to assess an RFA in under 30 seconds ] ] 23:08, 17 December 2024 (UTC)
*:::::::Something in their statement would be an incredibly obvious reason. We are talking about the assessment whether to examine and whether the candidate could've used BN. ] (]) 12:52, 18 December 2024 (UTC) | |||
*::@] let's not confuse "a lot of community time is spent" with "waste of time". Some people have characterized the re-RFAs as a waste of time but that's not the assertion I (and I think a majority of the skeptics) have been making. All RfAs use a lot of community time as hundreds of voters evaluate the candidate. They then choose to support, oppose, be neutral, or not vote at all. While editor time is not perfectly fixed - editors may choose to spend less time on non-Misplaced Pages activities at certain times - neither is it a resource we have in abundance anymore relative to our project. And so I think we, as a community, need to be thoughtful about how we're using that time, especially when the use of that time would have been spent on other wiki activities. Best, ] (]) 22:49, 16 December 2024 (UTC)
*:::Absolutely nothing compels anybody to spend any time evaluating an RFA. If you think your wiki time is better spent elsewhere than evaluating an RFA candidate, then spend it elsewhere. That way only those who do think it is a good use of their time will participate and everybody wins. You win by not spending your time on something that you don't think is worth it, those who do participate don't have ''their'' time wasted by having to read comments (that contradict explicit policy) about how the RFA is a waste of time. Personally I regard evaluating whether a long-time admin still has the approval of the community to be a very good use of community time, you are free to disagree, but please don't waste my time by forcing me to read comments about how you think I'm wasting my time. ] (]) 23:39, 16 December 2024 (UTC) | |||
*::::I am not saying you or anyone else is wasting time and am surprised you are so fervently insisting I am. Best, ] (]) 03:34, 17 December 2024 (UTC) | |||
*:::::I don't understand how your argument that it is not a good use of community time is any different from arguing that it is a waste of time? ] (]) 09:08, 17 December 2024 (UTC) | |||
*'''Option 2''' I don't mind the re-RFAs, but I'd appreciate if we encouraged restoration via BN instead, I just object to making it mandatory. ] <sup>(]) </sup> 06:23, 16 December 2024 (UTC) | |||
*'''Option 2'''. Banning voluntary re-RfAs would be a step in the wrong direction on admin accountability. Same with SNOW closing. There is no more "wasting of community time" if we let the RfA run for the full seven days, but allowing someone to dig up a scandal on the seventh day is an important part of the RfA process. The only valid criticism I've heard is that folks who do this are arrogant, but banning arrogance, while noble, seems highly impractical. ] </span>]] 07:24, 16 December 2024 (UTC) | |||
*Option 3, 1, then 2, per HouseBlaster. Also agree with Daniel Quinlan. I think these sorts of RFA's should only be done in exceptional circumstances. ] (]) 08:46, 16 December 2024 (UTC) | |||
* '''Option 1''' as first preference, option 3 second. RFAs use up a lot of time - hundreds of editors will read the RFA and it takes time to come to a conclusion. When that conclusion is "well that was pointless, my input wasn't needed", it is not a good system. I think transparency and accountability is a very good thing, and we need more of it for resysoppings, but that should come from improving the normal process (BN) rather than using a different one (RFA). My ideas for improving the BN route to make it more transparent and better at getting community input are outlined over on the ] ] 08:59, 16 December 2024 (UTC)
* '''Option 2''', though I'd be for '''option 3''' too. I'm all for administrators who feel like they want/should go through an RfA to solicit feedback even if they've been given the tools back already. I see multiple people talk about going through BN, but if I had to hazard a guess, it's way less watched than RfA is. However I do feel like watchlist notifications should say something to the effect of "A request for re-adminship feedback is open for discussion" so that people that don't like these could ignore them. <span>♠] ]</span>♠ 09:13, 16 December 2024 (UTC) | |||
*'''Option 2''' because ] is well-established policy. Read ], which says quite clearly, {{tpq|Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.}} I went back 500 edits to 2017 and the wording was substantially the same back then. So, I simply do not understand why various editors are berating former administrators to the point of accusing them of wasting time and being arrogant for choosing to go through a process which is ''specifically permitted by policy''. It is bewildering to me. ] (]) 09:56, 16 December 2024 (UTC) | |||
*'''Option 2 & 3''' I think that there still should be the choice between BN and re-RFA for resysops, but I think that the re-RFA should stay like it is in Option 3, unless it is controversial, at which point it could be extended to the full RFA period. I feel like this would be the best compromise between not "wasting" community time (which I believe is a very overstated, yet understandable, point) and ensuring that the process is based on broad consensus and that our "representatives" are still supported. If I were WTT or Hog, I might choose to make the same decision so as to be respectful of the possibility of changing consensus. ] (]) | :) | he/him | 10:45, 16 December 2024 (UTC) | |||
*'''Option 2''', for lack of a better choice. Banning re-RFAs is not a great idea, and we should not SNOW close a discussion that would give someone immunity from a certain degree of accountability. I've dropped an idea for an option 4 in the discussion section below. ] (]) 12:08, 16 December 2024 (UTC) | |||
*'''Option 1''' I agree with Graham87 that these sorts of RFAs should only be done in exceptional circumstances, and BN is the best place to ask for tools back. – ] <small>(])</small> 12:11, 16 December 2024 (UTC) | |||
*'''Option 2''' I don't think prohibition makes sense. It also has weird side effects, e.g.: some admins' voluntary recall policies may now be completely void, because they would be unable to follow them even if they wanted to, because policy prohibits them from doing an RFA. (maybe if they're also 'under a cloud' it'd fit into exemptions, but if an admin's policy is "3 editors on this named list tell me I'm unfit, I resign" then this isn't really a cloud.) {{pb}} Personally, I think Hog Farm's RFA was unwise, as he's textbook uncontroversial. Worm's was a decent RFA; he's also textbook uncontroversial but it happened at a good time. But any editor participating in these discussions to give the "support" does so using their own time. Everyone who feels their time is wasted can choose to ignore the discussion, and instead it'll pass as 10-0-0 instead of 198-2-4. It just doesn't make sense to prohibit someone from seeking a community discussion, though. For almost anything, really. ] (]) 12:33, 16 December 2024 (UTC)
*'''Option 2''' It takes like two seconds to support or ignore an RFA you think is "useless"... can't understand the hullabaloo around them. I stand by what I said on ] regarding RFAs being about evaluating trustworthiness and accountability. Trustworthy people don't skip the process. —] <span title="Canadian!" style="color:red">🍁</span> (] · ]) 15:24, 16 December 2024 (UTC) | |||
*'''Option 1''' - Option 2 is a waste of community time. - ] (]) 15:30, 16 December 2024 (UTC) | |||
*:Why? ] (]) 15:35, 16 December 2024 (UTC) | |||
*'''2''' is fine. '''Strong oppose''' to 1 and 3. Opposing option 1 because there is nothing wrong with asking for extra community feedback. opposing option 3 because once an RfA has been started, it should follow the standard rules. Note that RfAs are extremely rare and non-contentious RfAs require very little community time (unlike this RfC which seems a waste of community time, but there we are). —] (]) 16:59, 16 December 2024 (UTC) | |||
*'''2''', with no opposition to 3. I see nothing wrong with a former administrator getting re-confirmed by the community, and community vetting seems like a good thing overall. If people think it's a waste of time, then just ignore the RfA. ] (]) 17:56, 16 December 2024 (UTC) | |||
*'''2''' Sure, and clarify that should such an RFA be unsuccessful, they may only regain the tools through a future RfA. — ] <sup>]</sup> 18:03, 16 December 2024 (UTC)
*'''Option 2''' If contributing to such an RFA is a waste of your time, just don't participate. ] (]) 18:43, 16 December 2024 (UTC) | |||
*:No individual is wasting their time participating. Instead the person asking for a re-rfa is ''using'' tons of editor time by asking hundreds of people to vet them. Even the choice not to participate requires at least some time to figure out that this is not a new RfA; though at least in the two we've had recently it would require only as long as it takes to get to the RfA - for many a click from the watchlist and then another click into the rfa page - and to read the first couple of sentences of the self-nomination which isn't terribly long all things considered. Best, ] (]) 22:55, 16 December 2024 (UTC) | |||
*::I agree with you (I think) that it's a matter of perspective. For me, clicking the RFA link in my watchlist and reading the first paragraph of Hog Farm's nomination (where they explained that they were already a respected admin) took me about 10 seconds. Ten seconds is nothing; in my opinion, this is just a nonissue. But then again, I'm not an admin, checkuser, or an oversighter. Maybe the time to read such a nomination is really wasting their time. I don't know. ] (]) 23:15, 16 December 2024 (UTC) | |||
*:::I'm an admin and an oversighter (but not a checkuser). None of my time was wasted by either WTT or Hog Farm's nominations. ] (]) 23:30, 16 December 2024 (UTC) | |||
*'''2'''. Maintain the ''status quo''. And stop worrying about a trivial non-problem. --] (]) 22:57, 16 December 2024 (UTC) | |||
*'''2'''. This reminds me of banning plastic straws (bear with me). Sure, I suppose in theory, that this is a burden on the community's time (just as straws do end up in landfills/the ocean). However, the amount of community time that is drained is minuscule compared to the amount of community time drained in countless, countless other fora and processes (just like the volume of plastic waste contributed by plastic straws is less than 0.001% of the total plastic waste). When WP becomes an efficient, well oiled machine, then maybe we can talk about saving community time by banning re-RFA's. But this is much ado about nothing, and indeed this plan to save people from themselves, and not allow them to simply decide whether to participate or not, is arguably more damaging than some re-RFAs (just as banning straws convinced some people that "these save-the-planet people are so ridiculous that I'm not going to bother listening to them about anything."). And, in fact, on a separate note, I'd actually love it if more admins just ran a re-RFA whenever they wanted. They would certainly get better feedback than just posting "What do my talk page watchers think?" on their own talk page. Or waiting until they get yelled at on their talk page, AN/ANI, AARV, etc. We say we want admins to respect feedback; does it '''have''' to be in a recall petition? --] (]) 23:44, 16 December 2024 (UTC) | |||
*:What meaningful feedback has Hog Farm gotten? "A minority of people think you chose poorly in choosing this process to regain adminship". What are they supposed to do with that? I share your desire for editors to share meaningful feedback with administrators. My own attempt yielded some, though mainly offwiki where I was told I was both too cautious and too impetuous (and despite the seeming contradiction each was valuable in its own way). So yes let's find ways to get meaningful feedback to admins outside of recall or being dragged to ANI. Unfortunately re-RfA seems to be poorly suited to the task and so we can likely find a better way. Best, ] (]) 03:38, 17 December 2024 (UTC)
*:Let us all take some comfort in the fact that no one has yet criticized this RfC comment as being a straw man argument. --] (]) 23:58, 18 December 2024 (UTC) | |||
*'''No hard rule, but we should socially discourage confirmation RfAs''' There is a difference between a hard rule, and a soft social rule. A hard rule against confirmation RfA's, like option 1, would not do a good job of accounting for edge cases and would thus be ultimately detrimental here. But a soft social rule against them would be beneficial. Unfortunately, that is not one of the options of this RfC. In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers. (Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here.) That takes some introspection and humility to ask yourself: is it worth me inviting two or three hundred people to spend part of their lives to comment on me as a person?{{pb}}A lot of people have thrown around ] in their reasonings. Obviously, broad generalizations about it aren't convincing anyone. So let me just share my own experience. I saw the watchlist notice open that a new RfA was being run. I reacted with some excitement, because I always like seeing new admins. When I got to the page and saw Hogfarm's name, I immediately thought "isn't he already an admin?" I then assumed, ah, its just the classic RfA reaction at seeing a qualified candidate, so I'll probably support him since I already think he's an admin. But then as I started to do my due diligence and read, I saw that he really, truly, already had been an admin. At that point, my previous excitement turned to a certain unease. I had voted yes for Worm's confirmation RfA, but here was another...and I realized that my blind support for Worm might have been the start of an entirely new process. I then thought "bet there's an RfC going about this," and came here. I then spent a while polishing up my essay on editor time, before taking time to write this message. All in all, I probably spent a good hour doing this. Previously, I'd just been clicking the random article button and gnoming. So, the longwinded moral: yeah, this did eat up a lot of my editor time that could have and was being spent doing something else. And I'd do it again! It was important to do my research and to comment here. But in the future...maybe I won't react quite as excitedly to seeing that RfA notice. Maybe I'll feel a little pang of dread...wondering if its going to be a confirmation RfA. We can't pretend that confirmation RfA's are costless, and that we don't lose anything even if editors just ignore them. When run, it should be because they are necessary. ] <sup>]</sup>] 03:29, 17 December 2024 (UTC) | |||
*:And for what its worth, support '''Option 3''' because I'm generally a fan of putting more tools in people's toolboxes. ] <sup>]</sup>] 03:36, 17 December 2024 (UTC) | |||
*:{{tpq|In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers.}} Asking the community whether you still have their trust to be an administrator, which is what a reconfirmation RFA is, ''is'' a good reason. I expect getting a near-unanimous "yes" is good for one's ego, but that's just a (nice) side-effect of the far more important benefits to the entire community: a trusted administrator.
*:The time you claim is being eaten up unnecessarily by reconfirmation RFAs was actually taken up by you choosing to spend your time writing an essay about using time for things you don't approve of and then hunting out an RFC in which you wrote another short essay about using time on things you don't approve of. Absolutely none of that is a necessary consequence of reconfirmation RFAs - indeed the response consistent with your stated goals would have been to read the first two sentences of Hog Farm's RFA and then close the tab and return to whatever else it was you were doing. ] (]) 09:16, 17 December 2024 (UTC)
*:WTT's and Hog Farm's RFAs would have been completely uncontentious, something I hope for at RfA and certainly the opposite of what I "dread" at RfA, if it were not for the people who attack the very concept of standing for RfA again despite policy being crystal clear that it is absolutely fine. I don't see how any blame for this situation can be put on WTT or HF. We can't pretend that dismissing uncontentious reconfirmation RfAs is costless; discouraging them removes one of the few remaining potentially wholesome bits about the process. —] (]) 09:53, 17 December 2024 (UTC) | |||
*:@] Would you find it better if Watchlist notices and similar said "(re?)confirmation RFA" instead of "RFA"? Say for all voluntary RFAs from an existing admin or someone who could have used BN? | |||
*:As a different point, I would be quite against any social discouraging if we're not making a hard rule as such. Social discouraging is what got us the opposes at WTT/Hog Farm's RFAs, which I found quite distasteful and badgering. If people disagree with a process, they should change it. But if the process remains the same, I think it's important to not enable RFA's toxicity by encouraging others to namecall or re-argue the process in each RRFA. It's a short road from social discouragement to toxicity, unfortunately. ] (]) 18:41, 19 December 2024 (UTC) | |||
*::Yes I think the watchlist notice should specify what kind of RfA, especially with the introduction of recall. ] <sup>]</sup>] 16:49, 23 December 2024 (UTC) | |||
* '''Option 1'''. Will prevent the unnecessary drama trend we have been seeing recently. – ] (]) 07:18, 17 December 2024 (UTC)
* '''Option 2''' if people think there's a waste of community time, don't spend your time voting or discussing. Or add "reconfirmation" or similar to the watchlist notice. ] (]) 15:08, 17 December 2024 (UTC) | |||
* '''Option 3''' (which I think is a subset of option 2, so I'm okay with the status quo, but I want to endorse giving 'crats the option to SNOW). While they do come under scrutiny from time to time for the extensive discussions in the "maybe" zone following RfAs, this should be taken as an indication that they are unlikely to do something like close it as SNOW in the event there are <em>real and substantial</em> concerns being raised. This is an okay tool to give the 'crats. As far as I can tell, no one has ever accused them of moving too quickly in this direction (not criticism; love you all, keep up the good work). ] (]) 17:26, 17 December 2024 (UTC)
* '''Option 3 or Option 2'''. Further, if Option 2 passes, I expect it also ends all the bickering about lost community time. A consensus explicitly in favour of "This is allowed" should also be a consensus to discourage relitigation of this RFC. ] (]) 17:35, 17 December 2024 (UTC) | |||
*'''Option 2''': Admins who do not exude entitlement are to be praised. Those who criticize this humility should have a look in the mirror before accusing those who ask for reanointment from the community of "arrogance". I agree that it wouldn't be a bad idea to mention in parentheses that the RFA is a reconfirmation (watchlist) and wouldn't see any problem with crats snow-closing after, say, 96 hours. -- ] <sup>] · ]</sup> 18:48, 17 December 2024 (UTC) | |||
*:I disagree that BN shouldn't be the normal route. RfA is already as hard and soul-crushing as it is. ] (]) 20:45, 17 December 2024 (UTC) | |||
*::Who are you disagreeing with? This RfC is about voluntary RRfA. -- ] <sup>] · ]</sup> 20:59, 17 December 2024 (UTC) | |||
*:::I know. I see a sizable amount of commenters here starting to say that voluntary re-RfAs should be encouraged, and your first sentence can be easily read as implying that admins who use the BN route exude entitlement. I disagree with that (see my reply to Thryduulf below). ] (]) 12:56, 18 December 2024 (UTC) | |||
*::One way to improve the reputation of RFA is for there to be more RFAs that are not terrible, such as reconfirmations of admins who are doing/have done a good job who sail through with many positive comments. There is no proposal to make RFA mandatory in circumstances it currently isn't, only to reaffirm that those who voluntarily choose RFA are entitled to do so. ] (]) 21:06, 17 December 2024 (UTC) | |||
*:::I know it's not a proposal, but there's enough people talking about this so far that it could become a proposal.<br />There's nearly nothing in between that could've lost the trust of the community. I'm sure there are many who do not want to be pressured into ] without good reason. ] (]) 12:57, 18 December 2024 (UTC) | |||
*::::Absolutely nobody is proposing, suggesting or hinting here that reconfirmation RFAs should become mandatory - other than comments from a few people who oppose the idea of people voluntarily choosing to do something policy explicitly allows them to choose to do. The best way to avoid people being pressured into being accused of arrogance for seeking reconfirmation of their status from the community is to sanction those people who accuse people of arrogance in such circumstances as such comments are in flagrant breach of AGF and NPA. ] (]) 14:56, 18 December 2024 (UTC) | |||
*:::::Yes, I’m saying that they should not become preferred. There should be no social pressure to do RfA instead of BN, only pressure intrinsic to the candidate. ] (]) 15:37, 18 December 2024 (UTC) | |||
*::::::Whether they should become preferred in any situation forms no part of this proposal in any way shape or form - this seeks only to reaffirm that they are permitted. A separate suggestion, completely independent of this one, is to encourage (explicitly not mandate) them in some (but explicitly not all) situations. All discussions on this topic would benefit if people stopped misrepresenting the policies and proposals - especially when the falsehoods have been explicitly called out. ] (]) 15:49, 18 December 2024 (UTC) | |||
*:::::::I am talking and worrying over that separate proposal many here are suggesting. I don’t intend to oppose Option 2, and sorry if I came off that way. ] (]) 16:29, 18 December 2024 (UTC) | |||
*'''Option 2'''. In fact, I'm inclined to ''encourage'' an RRfA over BN, because nothing requires editors to participate in an RRfA, but the resulting discussion is better for reaffirming community consensus for the former admin or otherwise providing helpful feedback. --] (]) 21:45, 17 December 2024 (UTC) | |||
*'''Option 2''' ] has said "{{tq|Former administrators may seek reinstatement of their privileges through RfA...}}" for over ten years and this is not a problem. I liked the opportunity to be consulted in the current RfA and don't consider this a waste of time. ]🐉(]) 22:14, 17 December 2024 (UTC) | |||
*'''Option 2'''. People who think it’s not a good use of their time always have the option to scroll past. ] (]) 01:41, 18 December 2024 (UTC) | |||
* '''2''' - If an administrator gives up sysop access because they plan to be inactive for a while and want to minimize the attack surface of Misplaced Pages, they should be able to ask for permissions back the quickest way possible. If an administrator resigns because they do not intend to do the job anymore, and later changes their mind, they should request a community discussion. The right course of action depends on the situation. ] <sup>]</sup> 14:00, 18 December 2024 (UTC) | |||
*'''Option 1'''. I've watched a lot of RFAs and re-RFAs over the years. There's a darn good reason why the community developed the "go to BN" option: saves time, is straightforward, and if there are issues that point to a re-RFA, they're quickly surfaced. People who refuse to take the community-developed process of going to BN first are basically telling the community that they need the community's full attention on their quest to re-admin. Yes, there are those who may be directed to re-RFA by the bureaucrats, in which case, they have followed the community's carefully crafted process, and their re-RFA should be evaluated from that perspective. ] (]) 02:34, 19 December 2024 (UTC) | |||
*'''Option 2'''. If people want to choose to go through an RFA, who are we to stop them? ] (]) 10:25, 19 December 2024 (UTC) | |||
*'''Option 2''' (status quo/no changes) per ]. This is bureaucratic rulemongering at its finest. Every time RFA reform comes up some editors want admins to be required to periodically reconfirm, then when some admins decide to reconfirm voluntarily, suddenly that's seen as a bad thing. The correct thing to do here is nothing. If you don't like voluntary reconfirmation RFAs, you are not required to participate in them. ] (<sup>]</sup>/<sub>]</sub>) 19:34, 19 December 2024 (UTC) | |||
*'''Option 2''' I would probably counsel just going to BN most of the time, however there are exceptions and edge cases. To this point these RfAs have been few in number, so the costs incurred are relatively minor. If the number becomes large then it might be worth revisiting, but I don't see that as likely. Some people will probably impose social costs on those who start them by opposing these RfAs, with the usual result, but that doesn't really change the overall analysis. Perhaps it would be better if our idiosyncratic internal logic didn't produce such outcomes, but that's a separate issue and frankly not really worth fighting over either. There are probably some meta issues here I'm unaware of; it's been a long time since I've had my finger on the community pulse, so to speak, but they tend to matter far less than people think they do. ] (]) 02:28, 20 December 2024 (UTC)
* '''Option 1''', per ], ], ], ], and related principles. We all have far better things to do than read through and argue in/about a totally unnecessary RfA invoked as a "Show me some love!" abuse of process and waste of community time and productivity. I could live with option 3, if option 1 doesn't fly (i.e. shut these silly things down as quickly as possible). But option 2 is just out of the question. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 04:28, 22 December 2024 (UTC)
*:Except none of the re-RFAs complained about have been {{tpq|RfA invoked as a "Show me some love!" abuse of process}}, you're arguing against a strawman. ] (]) 11:41, 22 December 2024 (UTC) | |||
*::It's entirely a matter of opinion and perception, or A) this RfC wouldn't exist, and B) various of your fellow admins like TonyBallioni would not have come to the same conclusion I have. Whether the underlying intent (which no one can determine, lacking as we do any magical mind-reading powers) is solely egotistical is ultimately irrelevant. The {{em|actual effect}} (what matters) of doing this, whether for attention, or because you've somehow confused yourself into thinking it needs to be done, is precisely the same: a showy waste of community volunteers' time with no result other than a bunch of attention being drawn to a particular editor and their deeds, without any actual need for the community to engage in a lengthy formal process to re-examine them. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 05:49, 23 December 2024 (UTC)
*:::{{tqb|or because you've somehow confused yourself into think it needs to be done}} I and many others here agree and stand behind the very reasoning that has "confused" such candidates, at least for WTT. ] (]) 15:37, 23 December 2024 (UTC) | |||
*'''Option 2'''. I see no legitimate reason why we should be changing the status quo. Sure, some former admins might find it easier to go through BN, and it might save community time, and most former admins ''already'' choose the easier option. However, if a candidate last ran for adminship several years ago, or if issues were raised during their tenure as admin, then it may be helpful for them to ask for community feedback, anyway. There is no "wasted" community time in such a case. I really don't get the claims that this violates ], because it really doesn't apply when a former admin last ran for adminship 10 or 20 years ago or wants to know if they still have community trust.{{pb}}On the other hand, if an editor thinks a re-RFA is a waste of community time, they can simply choose not to participate in that RFA. Opposing individual candidates' re-RFAs based solely on opposition to re-RFAs in general ''is'' a violation of ]. – ] (]) 14:46, 22 December 2024 (UTC) | |||
*:But this isn't the status quo? We've never done a re-RfA before now. The question is whether this previously unconsidered process, which appeared as an ], is a feature or a bug. ] <sup>]</sup>] 23:01, 22 December 2024 (UTC) | |||
*::There have been lots of re-RFAs, historically. They were more common in the 2000s. ] in 2003 is the earliest I can find, back before the re-sysopping system had been worked out fully. ] back in 2007 was snow-closed after one day, because the nominator and applicant didn't know that they could have gone to the bureaucrats' noticeboard. For more modern examples, ] (2011) is relatively similar to the recent re-RFAs in the sense that the admin resigned uncontroversially but chose to re-RFA before getting the tools back. Immediately following and inspired by HJ Mitchell's, there was the slightly more controversial ]. That ended successful re-RFAs until 2019's ], which crat-chatted. Since then, there have been none that I remember. There have been several re-RFAs from admins who were de-sysopped or at serious risk of de-sysopping, and a few interesting edge cases such as the yet no-consensus ] in 2014 and the ] case in 2015, but those are very different than what we're talking about today. ] (]) 00:01, 23 December 2024 (UTC)
*:::To add on to that, ] was technically a reconfirmation RFA, which in a sense can be treated as a re-RFA. My point is, there is some precedent for re-RFAs, but the current guidelines are ambiguous as to when re-RFAs are or aren't allowed. – ] (]) 16:34, 23 December 2024 (UTC) | |||
*::::Well thank you both, I've learned something new today. It turns out I was working on a false assumption. It has just been so long since a re-RfA that I assumed it was a truly new phenomenon, especially since there were two in short succession. I still can't say I'm thrilled by the process and think it should be used sparingly, but perhaps I was a bit over concerned. ] <sup>]</sup>] 16:47, 23 December 2024 (UTC) | |||
*'''Option 2 or 3''' per Gnoming and CaptainEek. Such RfAs only require at most 30 seconds for one to decide whether or not to spend their time on examination. Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Voluntary reconfirmation RfAs are socially discouraged, so there is usually a very good reason for someone to go back there, such as accountability for past statements in the case of WTT or large disputes during adminship in the case of Hog Farm. I don't think we should outright deny these, and there is no disruption incurred if we don't. ] (]) 15:44, 23 December 2024 (UTC) | |||
*'''Option 2''' but for largely the reasons presented by CaptainEek. ''']''' (<small>aka</small> ] '''·''' ] '''·''' ]) 21:58, 23 December 2024 (UTC) | |||
*'''Option 2 (fine with better labeling)''' These don't seem harmful to me and, if I don't have time, I'll skip one and trust the judgment of my fellow editors. No objection to better labeling them though, as discussed above. ] (]) 22:36, 23 December 2024 (UTC) | |||
*'''Option 1''' because it's just a waste of time to go through and !vote on candidates who just want the mop restored when he or she or they could get it restored at BN with no problems. But I can also see option 2 being good for a former mod not in good standing. ] (]) 23:05, 23 December 2024 (UTC)
*:If you think it is a waste of time to !vote on a candidate, just don't vote on that candidate and none of your time has been wasted. ] (]) 23:28, 23 December 2024 (UTC) | |||
*'''Option 2''' per QoH (or me? who knows...) ] • ] • ] 04:24, 27 December 2024 (UTC) | |||
*'''Option 2''' Just because someone may be entitled to get the bit back doesn't mean they necessarily should. Look at ]. I did not resign under a cloud, so I could have gotten the bit back by request. However, the RFA established that I did not have the community support at that point, so it was a good thing that I chose that path. I don't particularly support option 3, but I could deal with it. --] 16:05, 27 December 2024 (UTC) | |||
*'''Option 1''' Asking hundreds of people to vet a candidate who has already passed a RfA and is eligible to get the tools back at BN is a waste of the community's time. -- ] (]) 16:21, 27 December 2024 (UTC) | |||
*'''Option 2''' Abolishing RFA in favour of BN may need to be considered, but I am unconvinced by arguments about RFA being a waste of time. ] ] 19:21, 27 December 2024 (UTC) | |||
*'''Option 2''' I really don't think there's a problem that needs to be fixed here. I am grateful at least a couple administrators have asked for the support of the community recently. ] ''<span style="font-size:small; vertical-align:top;">]</span>''·''<span style="font-size:small; vertical-align:bottom;">]</span>'' 00:12, 29 December 2024 (UTC) | |||
*'''Option 2'''. Keep the status quo of {{tq|any editor is free to re-request the tools through the requests for adminship process}}. Voluntary RfA are rare enough not to be a problem, it's not as though we are overburdened with RfAs. And it’s my time to waste. --] (]) 17:58, 7 January 2025 (UTC) | |||
* '''Option 2 or Option 3'''. These are unlikely to happen anyway, it's not like they're going to become a trend. I'm already wasting my time here instead of other more important activities anyway, so what's a little more time spent giving an easy support?{{pb | |||
}}<span style="border-radius:9em;padding:0 7px;background:#000000">] ]</span> 16:39, 10 January 2025 (UTC) | |||
*'''Option 1''' Agree with Daniel Quinlan that for the problematic editors eligible for re-sysop at BN despite unpopularity, we should rely on our new process of admin recall, rather than pre-emptive RRFAs. I'll add the novel argument that when goliaths like Hog Farm unnecessarily showcase their achievements at RFA, it scares off nonetheless qualified candidates. ] ( ] ) 17:39, 14 January 2025 (UTC) | |||
:'''Option 2''' per Gnoming/CaptainEek ] (]) 20:04, 14 January 2025 (UTC)
*'''Option 2''' or '''Option 3''' - if you regard a re-RfA as a waste of your time, just don't waste it by participating; it's not mandatory. ]<sup>]</sup> 12:13, 15 January 2025 (UTC) | |||
===Discussion=== | |||
*{{re|Voorts}} If option 2 gets consensus how would this RfC change the wording {{tqq|Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.}} Or is this an attempt to see if that option no longer has consensus? If so why wasn't alternative wording proposed? As I noted above this feels premature in multiple ways. Best, ] (]) 21:43, 15 December 2024 (UTC) | |||
*:That is not actually true. ArbCom can (and has) forbidden some editors from re-requesting the tools through RFA. ] ] 19:21, 27 December 2024 (UTC) | |||
*I've re-opened this per ] on my talk page. If other editors think this is premature, they can !vote accordingly and an uninvolved closer can determine if there's consensus for an early close in deference to the VPI discussion. ] (]/]) 21:53, 15 December 2024 (UTC) | |||
*:The discussion at VPI, which I have replied on, seems to me to be different enough from this discussion that both can run concurrently. That is, however, my opinion as a mere editor. — ] ⚓ ] 22:01, 15 December 2024 (UTC) | |||
*:@], can you please reword the RfC to make it clear that Option 2 is the current consensus version? It does not need to be clarified – it already says precisely what you propose. – ] 22:02, 15 December 2024 (UTC) | |||
*::{{done}} ] (]/]) 22:07, 15 December 2024 (UTC) | |||
*'''Question''': May someone clarify why many view such confirmation RfAs as a waste of community time? No editor is obligated to take up their time and participate. If there's nothing to discuss, then there's no friction or dis-cussing, and the RfA smooth-sails; if a problem is identified, then there was a good reason to go to RfA. I'm sure I'm missing something here. ] (]) 22:35, 15 December 2024 (UTC) | |||
*: The intent of RfA is to provide a comprehensive review of a candidate for adminship, to make sure that they meet the community's standards. Is that happening with vanity re-RfAs? Absolutely not, because these people don't need that level of vetting. I wouldn't consider a week long, publicly advertized back patting to be a productive use of volunteer time. -- ] (]) 23:33, 15 December 2024 (UTC) | |||
*::But no volunteer is obligated to pat such candidates on the back. ] (]) 00:33, 16 December 2024 (UTC) | |||
*::: Sure, but that logic could be used to justify any time sink. We're all volunteers and nobody is forced to do anything here, but that doesn't mean that we should promote (or stay silent with our criticism of, I suppose) things that we feel don't serve a useful purpose. I don't think this is a huge deal myself, but we've got two in a short period of time and I'd prefer to do a bit of push back now before they get more common. -- ] (]) 01:52, 16 December 2024 (UTC) | |||
*::::Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. ] (]) 02:31, 16 December 2024 (UTC) | |||
*:::::Except someone who has no need for advanced tools and is not going to use them in any useful fashion would then skate through with nary a word said about their unsuitability, regardless of the foregone conclusion. The point of RFA is not to rubber-stamp. Unless there is some actual issue or genuine concern they might not get their tools back, they should just re-request them at BN and stop wasting people's time with pointless non-process wonkery. ] (]) 09:05, 16 December 2024 (UTC)
*::::::I’m confused. Adminship requires continued use of the tools. If you think they’re suitable for BN, I don’t see how doing an RfA suddenly makes them unsuitable. If you have concerns, raise them. ] (]) 13:02, 16 December 2024 (UTC)
*I don't think the suggested problem (which I acknowledge not everyone thinks is a problem) is resolved by these options. Admins can still run a re-confirmation RfA after regaining adminsitrative privileges, or even initiate a recall petition. I think as ], we want to encourage former admins who are unsure if they continue to be trusted by the community at a sufficient level to explore lower cost ways of determining this. ] (]) 00:32, 16 December 2024 (UTC) | |||
*:Regarding option 3, ]. The intent of having a reconfirmation request for administrative privileges is counteracted by closing it swiftly. It provides incentive for rapid voting that may not provide the desired considered feedback. ] (]) 17:44, 17 December 2024 (UTC) | |||
* In re the idea that RfAs use up a lot of community time: I first started editing Misplaced Pages in 2014. There were 62 RfAs that year, which was a historic low. Even counting all of the AElect candidates as separate RfAs, including those withdrawn before voting began, we're still up to only 53 in 2024 – counting only traditional RfAs it's only 18, which is the second-lowest number ever. By my count we've had 8 resysop requests at BN in 2024; even if all of those went to RfA, I don't see how that would overwhelm the community. That would still leave us at 26 traditional RfAs per year, or (assuming all of them run the full week) one every other week. ] (]) 10:26, 16 December 2024 (UTC) | |||
* What about an option 4 encouraging eligible candidates to go through BN? At the end of the ], add something like "Eligible users are encouraged to use this method rather than running a new request for adminship." The current wording makes re-RfAing sound like a plausible alternative to a BN request, when in actual fact the former rarely happens and always generates criticism. ] (]) 12:08, 16 December 2024 (UTC) | |||
*:Discouraging RFAs is the second last thing we should be doing (after prohibiting them), rather per my comments here and in the VPI discussion we should be ''encouraging'' former administrators to demonstrate that they still have the approval of the community. ] (]) 12:16, 16 December 2024 (UTC) | |||
*:I think this is a good idea if people do decide to go with option 2, if only to stave off any further mixed messages that people are doing something wrong or rude or time-wasting or whatever by doing a second RfA, when it's explicitly mentioned as a valid thing for them to do. ] (]) 15:04, 16 December 2024 (UTC) | |||
*::If RFA is explicitly a valid thing for people to do (which it is, and is being reaffirmed by the growing consensus for option 2) then we don't need to (and shouldn't) discourage people from using that option. The mixed messages can be staved off by people simply not making comments that explicitly contradict policy. ] (]) 15:30, 16 December 2024 (UTC) | |||
*:::Also a solid option, the question is whether people will actually do it. ] (]) 22:55, 16 December 2024 (UTC) | |||
*::::The simplest way would be to just quickly hat/remove all such comments. Pretty soon people will stop making them. ] (]) 23:20, 16 December 2024 (UTC) | |||
* This is not new. We've had sporadic "vanity" RfAs since the early days of the process. I don't believe they're particularly harmful, and think that it unlikely that we will begin to see so many of them that they pose a problem. As such I don't think this policy proposal ]. ''']]''' 21:56, 16 December 2024 (UTC) | |||
* This apparent negative feeling evoked at an RFA for a former sysop ''everyone agrees is fully qualified and trusted'' will certainly leave a bad taste in the mouths of other former admins who might consider a reconfirmation RFA ''without first'' visiting BN. This comes in the wake of Worm That Turned's similar rerun. ] (]) 23:29, 16 December 2024 (UTC) | |||
*:Nobody should ever be discouraged from seeking community consensus for significant changes. Adminship is a significant change. ] (]) 23:32, 16 December 2024 (UTC) | |||
*::No argument from me. I was a big Hog Farm backer way back when he was ''merely'' one of Misplaced Pages's best content contributors. ] (]) 12:10, 17 December 2024 (UTC) | |||
*All these mentions of editor time make me have to mention ] (TLDR: our understanding of how editor time works is dreadfully incomplete). ] <sup>]</sup>] 02:44, 17 December 2024 (UTC) | |||
*:I went looking for @]'s comment because I know they had hung up the tools and came back, and I was interested in their perspective. But they've given me a different epiphany. I suddenly realize why people are doing confirmation RfAs: it's because of RECALL, and the one year immunity a successful RfA gives you. Maybe everyone else already figured that one out and is thinking "well duh Eek," but I guess I hadn't :) I'm not exactly sure what to do with that epiphany, besides note the emergent behavior that policy change can create. We managed to generate an entirely new process without writing a single word about it, and that's honestly impressive :P ] <sup>]</sup>] 18:18, 17 December 2024 (UTC) | |||
*::Worm That Turned followed through on a pledge he made in January 2024, before the 2024 review of the request for adminship process began. I don't think a pattern can be extrapolated from a sample size of one (or even two). That being said, it's probably a good thing if admins occasionally take stock of whether or not they continue to hold the trust of the community. As I previously commented, it would be great if these admins would use a lower cost way of sampling the community's opinion. ] (]) 18:31, 17 December 2024 (UTC) | |||
*:::{{ping|CaptainEek}} You are correct that a year's "immunity" results from a successful RRFA, but I see no evidence that this has been the ''reason'' for the RRFAs. Regards, ] (]) 00:14, 22 December 2024 (UTC) | |||
*::::If people decide to go through a community vote to get a one-year immunity from a process that only might lead to a community vote which would then have a lower threshold than the one they decide to go through, and also give a year's immunity, then good for them. ] (]) 01:05, 22 December 2024 (UTC) | |||
*::@] I'm mildly bothered by this comment, mildly because I assume it's lighthearted and non-serious. But just in case anyone does feel this way - I was very clear about my reasons for RRFA, I've written a lot about it, anyone is welcome to use my personal recall process without prejudice, and just to be super clear - I waive my "1 year immunity" - if someone wants to start a petition in the next year, do not use my RRfA as a reason not to. I'll update my userpage accordingly. I can't speak for Hog Farm, but his reasoning seems similar to mine, and immunity isn't it. ]<sup>TT</sup>(]) 10:28, 23 December 2024 (UTC) | |||
*:::@] my quickly written comment was perhaps not as clear as it could have been :) I'm sorry, I didn't mean to suggest that y'all had run for dubious reasons. As I said in my !vote, {{tq|Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here}}. I guess what I really meant was that the reason that we're having this somewhat spirited conversation seems to be the sense that re-RfA could provide a protection from recall. If not for recall and the one year immunity period, I doubt we'd have cared so much as to suddenly run two discussions about this. ] <sup>]</sup>] 16:59, 23 December 2024 (UTC) | |||
*::::I don't agree. No one else has raised a concern about someone seeking a one-year respite from a recall petition. Personally, I think essentially self-initiating the recall process doesn't really fit the profile of someone who wants to avoid the recall process. (I could invent some nefarious hypothetical situation, but since opening an arbitration case is still a possibility, I don't think it would work out as planned.) ] (]) 05:19, 24 December 2024 (UTC) | |||
*::I really don't think this is the reason behind WTT's and HF's reconfirmation RFAs. I don't think their RFAs had much utility and could have been avoided, but I don't believe for a second that their motivations were anything other than trying to provide transparency and accountability for the community. ] ] 12:04, 23 December 2024 (UTC) | |||
*I don't really care enough about reconf RFAs to think they should be restricted, but what about a lighter ORCP-like process (maybe even in the same place) where fewer editors can indicate, "yeah OK, there aren't really any concerns here, it would probably save a bit of time if you just asked at BN". ] (] • ]) 12:40, 19 December 2024 (UTC) | |||
*Can someone accurately describe for me what the status quo is? I reread this RfC twice now and am having a hard time figuring out what the current state of affairs is, and how the proposed alternatives will change them. <sub>Duly signed,</sub> ''']'''-''<small>(])</small>'' 14:42, 13 January 2025 (UTC) | |||
*:Option 2 is the status quo. The goal of the RFC is to see if the community wants to prohibit reconfirmation RFAs (option 1). The idea is that reconfirmation RFAs take up a lot more community time than a BN request so are unnecessary. There were 2 reconfirmation RFAs recently after a long dry spell. –] <small>(])</small> 20:49, 13 January 2025 (UTC) | |||
*:The status quo, documented at ], is that admins who resigned without being under controversy can seek readminship through either BN (where it's usually given at the discretion of an arbitrary bureaucrat according to the section I linked) or RfA (where all normal RfA procedures apply, and you see a bunch of people saying "the candidate's wasting the community's time and could've uncontroversially gotten adminship back at BN instead"). ] (]) 12:27, 14 January 2025 (UTC) | |||
{{discussion bottom}} | |||
== Guideline against use of AI images in BLPs and medical articles? == | |||
I have recently seen AI-generated images being added to illustrate both BLPs (e.g. ], now removed) and medical articles (e.g. ]). While we don't have any clear-cut policy or guideline about these yet, they appear to be problematic. Illustrating a living person with an AI-generated image might misinform readers as to what that person actually looks like, while using AI in medical diagrams can lead to anatomical inaccuracies (such as the lung structure in the second image, where the pleura becomes a bronchiole twisting over the primary bronchi), or even medical misinformation. While a guideline against AI-generated images in general might be more debatable, do we at least have a consensus for a guideline against these two specific use cases? | |||
To clarify, I am not including potentially relevant AI-generated images that only ''happen'' to include a living person (such as in ]), but exclusively those used to illustrate a living person in a ] context. ] (] · ]) 12:11, 30 December 2024 (UTC) | |||
:What about all biographies, including those of dead people? The lead image shouldn't be AI-generated for any biography. - ] (]) 12:17, 30 December 2024 (UTC) | |||
::Same with animals, organisms etc. - ] (]) 12:20, 30 December 2024 (UTC) | |||
:I personally am '''strongly against''' using AI in biographies and medical articles - as you highlighted above, AI is absolutely not reliable in generating accurate imagery and may contribute to medical or general misinformation. I would 100% support a proposal banning AI imagery from these kinds of articles - and a recommendation to not use such imagery other than in specific scenarios. ]] 12:28, 30 December 2024 (UTC) | |||
:I'd prefer a guideline prohibiting the use of AI images full stop. There are too many potential issues with accuracy, honesty, copyright, etc. Has this already been proposed or discussed somewhere? – ] <small>(])</small> 12:38, 30 December 2024 (UTC) | |||
::There hasn't been a full discussion yet, and we have a list of uses at ], but it could be good to deal with clear-cut cases like this (which are already a problem) first, as the wider discussion is less certain to reach the same level of consensus. ] (] · ]) 12:44, 30 December 2024 (UTC) | |||
:Discussions are going on at ] and somewhat at ]. I recommend workshopping an RfC question (or questions) then starting an RfC. ] (]) 13:03, 30 December 2024 (UTC) | |||
::Oh, didn't catch the previous discussions! I'll take a look at them, thanks! ] (] · ]) 14:45, 30 December 2024 (UTC) | |||
:There is one very specific exception I would put to a very sensible blanket prohibition on using AI images to illustrate people, especially BLPs. That is where the person themselves is known to use that image, which I have encountered in ]. ] (]) 15:00, 30 December 2024 (UTC) | |||
::While the Ekpa portrait is just an upscale (and I'm not sure what positive value that has for us over its source; upscaling does not add accuracy, nor is it an artistic interpretation meant to reveal something about the source), this would be hard to translate to the general case. Many AI portraits would have copyright concerns, not just from the individual (who may have announced some appropriate release for it), but due to the fact that AI portraits can lean heavily on uncredited individual sources. --] (]) 16:04, 30 December 2024 (UTC) | |||
:::For the purposes of discussing whether to allow AI images at all, we should always assume that, for the purposes of (potential) policies and guidelines, there exist AI images we can legally use to illustrate every topic. We cannot use those that are not legal (including, but not limited to, copyright violations) so they are irrelevant. An image generator trained exclusively on public domain and cc0 images (and any other licenses that explicitly allow derivative works without requiring attribution) would not be subject to any copyright restrictions (other than possibly by the prompter and/or generator's license terms, which are both easy to determine). Similarly we should not base policy on the current state of the technology, but assume that the quality of its output will improve to the point it is equal to that of a skilled human artist. ] (]) 17:45, 30 December 2024 (UTC) | |||
::::The issue is, either there are public domain/CC0 images of the person (in which case they can be used directly) or there aren't, in which case the AI is making up how a person looks. ] (] · ]) 20:00, 30 December 2024 (UTC) | |||
::::We tend to use art representations either where no photographs are available (in which case, AI will also not have access to photographs) or where what we are showing is an artist's insight on how this person is perceived, which is not something that AI can give us. In any case, we don't have to build policy now around some theoretical AI in the future; we can deal with the current reality, and policy can be adjusted if things change in the future. And even that theoretical AI does make it more difficult to detect copyvio -- ] (]) 20:54, 30 December 2024 (UTC) | |||
:::I wouldn't call it an upscale given that whatever was done appears to have removed detail, but we use that image specifically because it is the edited version which was sent to VRT. ] (]) 10:15, 31 December 2024 (UTC) | |||
:Is there any clarification on using purely AI-generated images vs. using AI to edit or alter images? AI tools offer features such as identifying objects and removing them, or generating missing content. The generative expand feature would appear to be unreliable (and it is), but I use it to fill in gaps of cloudless sky produced from stitching together photos for a panorama (I don't use it if there are clouds, or for starry skies, as it produces non-existent stars or unrealistic clouds). ] (]) 18:18, 30 December 2024 (UTC) | |||
::Yes, my proposal is only about AI-generated images, not AI-altered ones. That could in fact be a useful distinction to make if we want to workshop a RfC on the matter. ] (] · ]) 20:04, 30 December 2024 (UTC) | |||
:I'm not sure if we need a clear cut policy or guideline against them... I think we treat them the same way as we would treat an editor's kitchen table sketch of the same figure. ] (]) 18:40, 30 December 2024 (UTC) | |||
:For those wanting to ban AI images full stop, well, you are too late. Most professional image editing software, including the software in one's smartphone as well as desktop, uses AI somewhere. Noise reduction software uses AI to figure out what might be noise and what might be texture. Sharpening software uses AI to figure out what should be smooth and what might have a sharp detail it can invent. For example, a bird photo not sharp enough to capture feather detail will have feather texture imagined onto it. Same for hair. Or grass. Any image that has been cleaned up to remove litter or dust or spots will have the cleaned area AI generated based on its surroundings. The sky might be extended with AI. These examples are a bit different from a 100% imagined image created from a prompt. But probably not in a way that is useful as a rule. | |||
:I think we should treat AI generated images the same as any user-generated image. It might be a great diagram or it might be terrible. Remove it from the article if the latter, not because someone used AI. If the image claims to photographically represent something, we may judge whether the creator has manipulated the image too much to be acceptable. For example, using AI to remove a person in the background of an image taken of the BLP subject might be perfectly fine. People did that with traditional Photoshop/Lightroom techniques for years. Using AI to generate what claims to be a photo of a notable person is on dodgy ground wrt copyright. -- ]°] 19:12, 30 December 2024 (UTC) | |||
::I'm talking about the case of using AI to generate a depiction of a living person, not using AI to alter details in the background. That is why I only talk about AI-generated images, not AI-altered images. ] (] · ]) 20:03, 30 December 2024 (UTC) | |||
:Regarding some sort of bright-line ban on the use of any such image in anything medical-related: absolutely not. For example, if someone wanted to use AI tools as opposed to other tools to make an image such as ] (as used in the "medical" article ]), I don't see a problem, so long as it is accurate. Accurate models and illustrations are useful, and that someone used AI assistance as opposed to a chisel and a rock is of no concern. — ] <sup>]</sup> 19:26, 30 December 2024 (UTC) | |||
:I believe that the appropriateness of AI images depends on how they are used by the user. In BLP and medical articles, AI-generated images are inappropriate, but it is also inappropriate to ban them completely across the site. By the same logic, if you want a full ban of AI, you are banning fire just because people can get burned, without considering cooking. ] (]) 13:33, 31 December 2024 (UTC) | |||
] <sup>]</sup> 00:13, 31 December 2024 (UTC)]] | |||
:I agree that AI-generated images should not be used in most cases. They essentially serve as misinformation. I also don't think that they're really comparable to drawings or sketches because AI-generation uses a level of photorealism that can easily trick the untrained eye into thinking it is real. ] (]) 20:46, 30 December 2024 (UTC) | |||
::AI doesn't need to be photorealistic though. I see two potential issues with AI. The first is images that might deceive the viewer into thinking they are photos, when they are not. The second is potential copyright issues. Outside of the copyright issues I don't see any unique concerns for an AI-generated image (that doesn't appear photorealistic). Any accuracy issues can be handled the same way a user who manually drew an image could be handled. ] (]) 21:46, 30 December 2024 (UTC) | |||
{{multiple image | |||
| image1 = Pope Francis in puffy winter jacket.jpg | |||
| image2 = Illustration of Brigette Lundy Paine by Sandra Mu.png | |||
| footer = ] and ] | |||
| total_width = 300 | |||
}} | |||
::AI-generated depictions of BLP subjects are often more "illustrative" than drawings/sketches of BLP subjects made by 'regular' editors like you and me. For example, compare the AI-generated image of Pope Francis and the user-created cartoon of Brigette Lundy-Paine. Neither image belongs on their respective bios, of course, but the AI-generated image is no more "misinformation" than the drawing. ] (]) 00:05, 31 December 2024 (UTC) | |||
:::I would argue the opposite: neither are made up, but the first one, because of its realism, might mislead readers into thinking that it is an actual photograph, while the second one is clearly a drawing. Which makes the first one less illustrative, as it carries potential for misinformation, despite being technically more detailed. ] (] · ]) 00:31, 31 December 2024 (UTC) | |||
::::AI-generated images should always say "AI-generated image of " in the image caption. No misleading readers that way. ] (]) 00:36, 31 December 2024 (UTC) | |||
:::::Yes, and they don't always do it, and we don't have a guideline about this either. The issue is, many people have many different proposals on how to deal with AI content, meaning we always end up with "no consensus" and no guidelines on use at all, even if most people are against it. ] (] · ]) 00:40, 31 December 2024 (UTC) | |||
::::::{{tq|always end up with "no consensus" and no guidelines on use at all, even if most people are against it}} Agreed. Even a simple proposal to have image captions note whether an image is AI-generated will have editors wikilawyer over the definition of 'AI-generated.' I take back my recommendation of starting an RfC; we can already predict how that RfC will end. ] (]) 02:28, 31 December 2024 (UTC) | |||
:Of interest perhaps is ] on the use of drawn cartoon images in BLPs. ] (]) 22:38, 30 December 2024 (UTC) | |||
:We should absolutely not be including any AI images in anything that is meant to convey facts (with the obvious exception of an AI image illustrating the concept of an AI image). I also don't think we should be encouraging AI-altered images -- the line between "regular" photo enhancement and what we'd call "AI alteration" is blurry, but we shouldn't want AI edits for the same reason we wouldn't want fake Photoshop composites. | |||
:That said, I would assume good faith here: some of these images are probably being sourced from Commons, and Commons is dealing with a lot of undisclosed AI images. ] (]) 23:31, 30 December 2024 (UTC) | |||
::] | |||
::Why wouldn't we want "fake Photoshop composites"? A ] can be very useful. I'd be sad if we banned ]. ] (]) 06:40, 31 December 2024 (UTC) | |||
:::Sorry, should have been more clear -- composites that present themselves as the real thing, basically what people would use deepfakes for now. ] (]) 20:20, 31 December 2024 (UTC) | |||
:::::Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop through techniques like compositing. That line is that the diffusion model is reverse-engineering an image to match a text prompt from a pattern of semi-random static associated with similar text prompts. As such it's just automated glurge; at best it's only as good as the ability of the software to parse a text prompt and the ability of a prompter to draft sufficiently specific language. And absolutely none of that does anything to solve the "hallucination" problem. On the other hand, in photoshop, if I put in two layers both containing a bird on a transparent background, what I, the human making the image, see is what the software outputs. ] (]) 18:03, 15 January 2025 (UTC) | |||
:::::{{tpq|Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop}} others do not. If you want to ban or restrict one but not the other then you need to explain how the difference can be reliably determined, and how one is materially different to the other in ways other than your personal opinion. ] (]) 18:45, 15 January 2025 (UTC) | |||
:I don't think any guideline, let alone policy, would be beneficial and indeed on balance is more likely to be harmful. There are always only two questions that matter when determining whether we should use an image, and both are completely independent of whether the image is AI-generated or not: | |||
:#Can we use this image in this article? This depends on matters like copyright, fair use, whether the image depicts content that is legal for an organisation based in the United States to host, etc. Obviously if the answer is "no", then everything else is irrelevant, but as the law and WMF, Commons and en.wp policies stand today there exist some images in both categories we can use, and some images in both categories we cannot use. | |||
:#Does using this image in this article improve the article? This is relative to other options, one of which is always not using any image, but in many cases also involves considering alternative images that we can use. In the case of depictions of specific, non-hypothetical people or objects one criteria we use to judge whether the image improves the article is whether it is an accurate representation of the subject. If it is not an accurate representation then it doesn't improve the article and thus should not be used, regardless of why it is inaccurate. If it is an accurate representation, then its use in the article will not be misrepresentative or misleading, regardless of whether it is or is not AI generated. It may or may not be the best option available, but if it is then it should be used regardless of whether it is or is not AI generated. | |||
:The potential harm I mentioned above is twofold. Firstly, Misplaced Pages is, by definition, harmed when an image exists that we could use and that would improve an article, but we do not use it in that article. A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article. The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been. | |||
:Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware. ] (]) 00:52, 31 December 2024 (UTC) | |||
::I agree with almost the entirety of your post with a caveat on whether something "is an accurate representation". We can tell whether non-photorealistic images are accurate by assessing whether the image accurately conveys ''the idea'' of what it is depicting. Photos do more than convey an idea, they convey the actual look of something. With AI generated images that are photorealistic it is difficult to assess whether they accurately convey the look of something (the shading might be illogical in subtle ways, there could be an extra finger that goes unnoticed, a mole gets erased), but readers might be deceived by the photo-like presentation into thinking they are looking at an actual photographic depiction of the subject which could differ significantly from the actual subject in ways that go unnoticed. ] (]) 04:34, 31 December 2024 (UTC) | |||
::{{tq|A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article.}} That's why I'm suggesting a guideline, not a policy. Guidelines are by design more flexible, and ] still does (and should) apply in edge cases.{{pb}}{{tq|The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been.}} In that case, there is a licensing problem. AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that ''might'' have been AI-generated.{{pb}}{{tq|Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware.}} In that case, it's mostly because the ambiguity in wording: AI-edited images are very common, and are sometimes called "AI-generated", but here we should focus on actual prompt outputs, of the style "I asked a model to generate me an image of a BLP". ] (] · ]) 11:13, 31 December 2024 (UTC) | |||
:::Simply not having a completely unnecessary policy or guideline is infinitely better than relying on IAR - especially as this would have to be ignored ''every'' time it is relevant. When the AI image is not the best option (which obviously includes all the times it's unsuitable or inaccurate), existing policies, guidelines, practice and frankly common sense mean it won't be used. This means the only time the guideline would be relevant is when an AI image ''is'' the best option, and as we obviously should be using the best option in all cases, we would need to ignore the guideline against using AI images. | |||
:::{{tpq|AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.}} The key words here are "supposed to be" and "shouldn't", editors absolutely ''will'' speculate that images are AI-generated and that the Commons labelling is incorrect. We are supposed to assume good faith, but this very discussion shows that when it comes to AI some editors simply do not do that. | |||
:::Regarding your final point, that might be what you are meaning but it is not what all other commenters mean when they want to exclude all AI images. ] (]) 11:43, 31 December 2024 (UTC) | |||
::::For your first point, the guideline is mostly to take care of the "prompt fed in model" BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image), but the model likely doesn't have any available image either and most likely just made it up. As my proposal is essentially limited to that (I don't include AI-edited images, only those that are purely generated by a model), I don't think there will be many cases where IAR would be needed.{{pb}}Regarding your two other points, you are entirely correct, and while I am hoping for nuance on the AI issue, it is clear that some editors might not do that. For the record, I strongly disagree with a blanket ban of "AI images" (which includes both blatant "prompt in model" creations and a wide range of more subtle AI retouching tools) or anything like that. ] (] · ]) 11:49, 31 December 2024 (UTC) | |||
:::::{{tpq|the guideline is mostly to take care of the "prompt fed in model" BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image)}}. There are only two possible scenarios regarding verifiability: | |||
:::::#The image is an accurate representation and we can verify that (e.g. by reference to non-free photos). | |||
:::::#*Verifiability is no barrier to using the image, whether it is AI generated or not. | |||
:::::#*If it is the best image available, and editors agree using it is better than not having an image, then it should be used whether it is AI generated or not. | |||
:::::#The image is either ''not'' an accurate representation, or we cannot verify whether it is or is not an accurate representation | |||
:::::#*The only reasons we should ever use the image are: | |||
:::::#**It has been the subject of notable commentary and we are presenting it in that context. | |||
:::::#**The subject verifiably uses it as a representation of themselves (e.g. as an avatar or logo) | |||
:::::#:This is already policy, whether the image is AI generated or not is completely irrelevant. | |||
:::::You will note that in no circumstance is it relevant whether the image is AI generated or not. ] (]) 13:27, 31 December 2024 (UTC) | |||
::::::In your first scenario, there is the issue of an accurate AI-generated image misleading people into thinking it is an actual photograph of the person, especially as they are most often photorealistic. Even besides that, a mostly accurate representation can still introduce spurious details, and this can mislead readers as they do not know to what level it is actually accurate. This scenario doesn't really happen with drawings (which are clearly not photographs), and is very much a consequence of AI-generated photorealistic pictures being a thing.{{pb}}In the second scenario, if we cannot verify that it is not an accurate representation, it can be hard to remove the image with policy-based reasons, which is why a guideline will again be helpful. Having a single guideline against fully AI-generated images takes care of all of these scenarios, instead of having to make new specific guidelines for each case that emerges because of them. ] (] · ]) 13:52, 31 December 2024 (UTC) | |||
:::::::If the image is misleading or unverifiable it should not be used, regardless of why it is misleading or unverifiable. This is existing policy and we don't need anything specifically regarding AI to apply it - we just need consensus that the image ''is'' misleading or unverifiable. Whether it is or is not AI generated is completely irrelevant. ] (]) 15:04, 31 December 2024 (UTC) | |||
::::{{tpq|AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.}} | |||
::::I mean... yes, we should? At the very least Commons should go hunting for mislabeled images -- that's the whole point of license review. The thing is that things are absolutely swamped over there and there are hundreds of thousands of images waiting for review of some kind. ] (]) 20:35, 31 December 2024 (UTC) | |||
:::::Yes, but that's a Commons thing. A guideline on English Misplaced Pages shouldn't decide of what is to be done on Commons. ] (] · ]) 20:37, 31 December 2024 (UTC) | |||
::::::I just mean that given the reality of the backlogs, there are going to be mislabeled images, and there are almost certainly going to be more of them over time. That's just how it is. We don't have control over that, but we do have control over what images go into articles, and if someone has legitimate concerns about an image being AI-generated, then they should be raising those. ] (]) 20:45, 31 December 2024 (UTC) | |||
*'''Support blanket ban on AI-generated images on Misplaced Pages'''. As others have highlighted above, the is not just a slippery slope but an outright downward spiral. We don't use AI-generated text and we shouldn't use AI-generated images: these aren't reliable and they're also ] scraped from who knows what and where. '''Use only reliable material from reliable sources'''. As for the argument of 'software now has AI features', we all know that there's a huge difference between someone using a smoothing feature and someone generating an image from a prompt. ] (]) 03:12, 31 December 2024 (UTC) | |||
*:'''Reply''', the section of ] concerning images is ] which states "Original images created by a Wikimedian are not considered original research, ''so long as they do not illustrate or introduce unpublished ideas or arguments''". Using AI to generate an image only violates ] if you are using it to illustrate unpublished ideas, which can be assessed just by looking at the image itself. COPYVIO, however, cannot be assessed from looking at just the image alone, which AI could be violating. However, some images may be too simple to be copyrightable, for example AI-generated images of chemicals or mathematical structures potentially. ] (]) 04:34, 31 December 2024 (UTC) | |||
*::Prompt generated images are unquestionably violation of ] and ]: Type in your description and you get an image scraping who knows what and from who knows where, often Misplaced Pages. Misplaced Pages isn't an ]. Get real. ] (]) 23:35, 1 January 2025 (UTC) | |||
*:::"Unquestionably"? Let me question that, @]. <code>;-)</code> | |||
*:::If an editor were to use an AI-based image-generating service and the prompt is something like this: | |||
*:::"I want a stacked bar chart that shows the number of games won and lost by ] each year. Use the team colors, which are red #DC052D, blue #0066B2, and black #000000. The data is: | |||
*:::* 2014–15: played 34 games, won 25, tied 4, lost 5 | |||
*:::* 2015–16: played 34 games, won 28, tied 4, lost 2 | |||
*:::* 2016–17: played 34 games, won 25, tied 7, lost 2 | |||
*:::* 2017–18: played 34 games, won 27, tied 3, lost 4 | |||
*:::* 2018–19: played 34 games, won 24, tied 6, lost 4 | |||
*:::* 2019–20: played 34 games, won 26, tied 4, lost 4 | |||
*:::* 2020–21: played 34 games, won 24, tied 6, lost 4 | |||
*:::* 2021–22: played 34 games, won 24, tied 5, lost 5 | |||
*:::* 2022–23: played 34 games, won 21, tied 8, lost 5 | |||
*:::* 2023–24: played 34 games, won 23, tied 3, lost 8" | |||
*:::I would expect it to produce something that is not a violation of either OR in general or OR's SYNTH section specifically. What would you expect, and why do you think it would be okay for me to put that data into a spreadsheet and upload a screenshot of the resulting bar chart, but you don't think it would be okay for me to put that same data into an image generator, get the same thing, and upload that? | |||
*:::We must not mistake the tools for the output. Hand-crafted bad output is bad. AI-generated good output is good. ] (]) 01:58, 2 January 2025 (UTC) | |||
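To make the comparison concrete, here is a minimal sketch of the non-AI route described above (assuming Python with matplotlib; the season figures and club colours are copied from the hypothetical prompt, not from a verified source):

<syntaxhighlight lang="python">
import matplotlib.pyplot as plt

# Data as listed in the hypothetical prompt above
seasons = ["2014-15", "2015-16", "2016-17", "2017-18", "2018-19",
           "2019-20", "2020-21", "2021-22", "2022-23", "2023-24"]
won  = [25, 28, 25, 27, 24, 26, 24, 24, 21, 23]
tied = [4, 4, 7, 3, 6, 4, 6, 5, 8, 3]
lost = [5, 2, 2, 4, 4, 4, 4, 5, 5, 8]

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(seasons, won, color="#DC052D", label="Won")
ax.bar(seasons, tied, bottom=won, color="#0066B2", label="Tied")
ax.bar(seasons, lost, bottom=[w + t for w, t in zip(won, tied)],
       color="#000000", label="Lost")
ax.set_ylabel("Games")
ax.legend()
plt.xticks(rotation=45, ha="right")
plt.tight_layout()
plt.savefig("bayern_results.svg")  # SVG output keeps the chart editable by hand
</syntaxhighlight>

Whether the same numbers are rendered by a script like this or by an image generator, the underlying data is editor-supplied either way; the disagreement in this thread is over whether the rendering tool itself matters.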
*:::::Assuming you'd even get what you requested from the model without fiddling with the prompt for a while, these sorts of 'but we can use it for graphs and charts' devil's advocate scenarios aren't helpful. We're discussing generating images of people, places, and objects here, and in those cases, yes, this would unquestionably be a form of ] & ]. As for the charts and graphs, there are any number of ways to produce these. ] (]) 03:07, 2 January 2025 (UTC) | |||
*:::::{{tpq|We're discussing generating images of people, places, and objects here}} The proposal contains no such limitation. {{tpq| and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH.}} Do you have a citation for that? Other people have explained better than I can how that it is not necessarily true, and certainly not unquestionable. ] (]) 03:14, 2 January 2025 (UTC) | |||
*::::::As you're well aware, these images are produced by scraping and synthesizing material from who knows what and where: it's ultimately pure ] to produce these fake images and they're a straightforward product of synthesis of multiple sources (]) - worse yet, these sources are unknown because training data is by no means transparent. Personally, I'm strongly for a total ban on generative AI on the site exterior to articles on the topic of generative AI. Not only do I find this incredibly unethical, I believe it is intensely detrimental to Misplaced Pages, which is already a flailing and shrinking project. ] (]) 03:23, 2 January 2025 (UTC) | |||
*:::::::So you think the lead image at ] is a SYNTH violation? Its (human) creator explicitly says "This is not done from one specific photo. As I usually do when I draw portraits of people that I can't see in person, I look at a lot of photos of them and then create my own rendition" in the image description, which sounds like "the product of synthesis of multiple sources" to me, and "these sources are unknown because" the images the artist looked at are not disclosed. | |||
*:::::::A lot of my concern about blanket statements is the principle that what's ] is sauce for the gander, too. If it's okay for a human to do something by hand, then it should be okay for a human using a semi-automated tool to do it, too. | |||
*:::::::<small>(Just in case you hadn't heard, the rumors that the editor base is shrinking have been false for over a decade now. Compared to when you created your account in mid-2005, we have about twice as many high-volume editors.)</small> ] (]) 06:47, 2 January 2025 (UTC) | |||
*:::::::Review ]; your attempts at downplaying a prompt-generated image as "semi-automated" show the root of the problem: if you can't detect the difference between a human sketching from a reference and a machine scraping who-knows-what on the internet, you shouldn't be involved in this discussion. As for editor retention, this remains a serious problem on the site: while the site continues to grow (and becomes core fodder for AI scraping) and becomes increasingly visible, editor retention continues to drop. ] (]) 09:33, 2 January 2025 (UTC) | |||
*::::::::Please scroll down below SYNTH to the next section titled "What is not original research" which begins with ], our policies on how images relate to OR. OR (including SYNTH) only applies to images with regards to if they illustrate "unpublished ideas or arguments". It does not matter, for instance, if you synthesize an original ''depiction'' of something, so long as the ''idea'' of that thing is not original. ] (]) 09:55, 2 January 2025 (UTC) | |||
*:::::::::Yes, which explicitly states: | |||
*::::::::::It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Misplaced Pages:Files for discussion. Images of living persons must not present the subject in a false or disparaging light. | |||
*:::::::::Using a machine to generate a fake image of someone is far beyond "manipulation" and it is certainly "false". Clearly we need explicit policies on AI-generated images of people or we wouldn't be having this discussion, but this as it stands clearly also falls under ]: there is zero question that this is a result of "synthesis of published material", even if the AI won't list what it used. Ultimately it's just a synthesis of a bunch of published composite images of who-knows-what (or who-knows-who?) the AI has scraped together to produce a fake image of a person. ] (]) 10:07, 2 January 2025 (UTC) | |||
*:The latter images you describe should be SVG regardless. If there are models that can generate that, that seems totally fine since it can be semantically altered by hand. Any generation with photographic or "painterly" characteristics (e.g. generating something in the style of a painting or any other convention of visual art that communicates aesthetic particulars and not merely abstract visual particulars) seems totally unacceptable. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 07:00, 31 December 2024 (UTC) | |||
*:] | |||
*:@], here's an image I created. It illustrates the concept of 1% in an article. I made this myself, by typing 100 emojis and taking a screenshot. Do you really mean to say that if I'd done this with an image-generating AI tool, using a prompt like "Give me 100 dots in a 10 by 10 grid. Make 99 a dark color and 1, randomly placed, look like a baseball" that it would be hopelessly tainted, because AI is always bad? Or does your strongly worded statement mean something more moderate? | |||
*:I'd worry about photos of people (including dead people). I'd worry about photos of specific or unique objects that have to be accurate or they're worse than worthless (e.g., artwork, landmarks, maps). But I'm not worried about simple graphs and charts like this one, and I'm not worried about ordinary, everyday objects. If you want to use AI to generate a photorealistic image of a cookie, or a spoon, and the output you get ], I'm not actually going to worry about it. ] (]) 06:57, 31 December 2024 (UTC) | |||
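For the record, an illustration of that sort can also be produced deterministically without any AI involvement; a minimal sketch (assuming Python with matplotlib; the highlight colour and output filename are invented for the example) of a 10 by 10 grid with one randomly placed dot singled out:

<syntaxhighlight lang="python">
import random
import matplotlib.pyplot as plt

highlight = random.randrange(100)  # the randomly placed "1%" dot

# 10 x 10 grid of points, one highlighted
xs = [i % 10 for i in range(100)]
ys = [i // 10 for i in range(100)]
colors = ["#d62728" if i == highlight else "#2b2b2b" for i in range(100)]

fig, ax = plt.subplots(figsize=(4, 4))
ax.scatter(xs, ys, c=colors, s=200)
ax.set_aspect("equal")
ax.set_axis_off()
plt.savefig("one_percent.svg")
</syntaxhighlight>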
*::As you know, Misplaced Pages has the unique factor of being entirely volunteer-run. Misplaced Pages has fewer and fewer editors and, long-term, we're seeing plummeting birth rates in areas where most Misplaced Pages editors do exist. I wouldn't expect a wave of new ones aimed at keeping the site free of bullshit in the near future. | |||
*::In addition, the Wikimedia Foundation's hare-brained continued effort to turn the site into its political cash machine is no doubt also not helping, harming the site's public perception and leading to fewer new editors. | |||
*::Over the course of decades (I've been here for around 20 years), it seems clear that the site will be negatively impacted by all this, especially in the face of generative AI. | |||
*::As a long-time editor who has frequently stumbled upon intense ] content, fended off armies of outside actors looking to shape the site into their ideological image (and who have sent me more than a few death threats), and who has identified large amounts of politically motivated nonsense explicitly designed to fool non-experts in areas I know intimately well (such as folklore and historical linguistics topics), I think it needs to be said that the use of generative AI for content is especially dangerous because of its capability to fool Misplaced Pages readers and Misplaced Pages editors alike. | |||
*::Misplaced Pages is written by people for people. We need to draw a line in the sand to keep from being flooded by increasingly accessible hoax-machines. | |||
*::A blanket ban on generative AI resolves this issue or at least hands us another tool with which to attempt to fight back. We don't need what few editors we have here wasting what little time they can give the project checking over an ocean of AI-generated slop: '''we need more material from reliable sources and better tools to fend off bad actors usable by our shrinking editor base (anyone at the Wikimedia Foundation listening?), not more waves of generative AI garbage'''. ] (]) 07:40, 31 December 2024 (UTC) | |||
*:::A blanket ban doesn't actually resolve most of the issues though, and introduces new ones. Bad uses of AI can already be dealt with by existing policy, and malicious users will ignore a blanket ban anyways. Meanwhile, a blanket ban would harm many legitimate uses of AI. For instance, the majority of professional translators (at least Japanese to English) incorporate AI (or similar tools) into their workflow to speed up translations. Just imagine a professional translator who uses AI to help generate rough drafts of foreign-language Misplaced Pages articles, before reviewing and correcting them, and another editor learning of this and mass reverting them for breaking the blanket ban, and ultimately causing them to leave. Many authors (particularly with carpal tunnel) use AI now to control their voice-to-text (you can train the AI on how you want character names spelled, the formatting of dialogue and other text, etc.). A Misplaced Pages editor could train an AI to convert their voice into Misplaced Pages-formatted text. AI is subtly incorporated now into spell-checkers, grammar-checkers, photo editors, etc., in ways many people are not aware of. A blanket AI ban has the potential to cause many issues for a lot of people, without actually being that effective at dealing with malicious users. ] (]) 08:26, 31 December 2024 (UTC) | |||
*::::I think this is the least convincing one I've seen here yet: It contains the ol' 'there are AI features in programs now' while also attempting to invoke accessibility and a little bit of 'we must have machines to translate!'. | |||
*::::As a translator myself, I can only say: ''Oh please''. Generative AI is notoriously terrible at translating and that's not likely to change. And I mean ''ever'' beyond a very, very basic level. Due to the complexities of communication and little matters like nuance, all machine translated material must be thoroughly checked and modified by, yes, ''human'' translators, who often encounter it spitting out complete bullshit scraped from who-knows-where (often Misplaced Pages itself). | |||
*::::I get that this topic attracts a lot of 'but what if generative AI is better than humans?' from the utopian tech crowd but the ''reality'' is that anyone who needs a machine to invent text and visuals for whatever reason simply shouldn't be using it on Misplaced Pages. | |||
*::::Either you, a human being, can contribute to the project or ''you can't''. Slapping a bunch of machine-generated (generative AI) visuals and text (much of it ultimately coming from Misplaced Pages in the first place!) isn't some kind of human substitute, it's just machine-regurgitated slop and is not helping the project. | |||
*::::If people can't be confident that Misplaced Pages is ''made by humans, for humans'' the project is finally on its way out.] (]) 09:55, 31 December 2024 (UTC) | |||
*:::::I don't know how up to date you are on the current state of translation, but: | |||
*::::::'''' | |||
*::::::''Over three thousand full-time professional translators from around the world responded to the surveys, which were broken into a survey for CAT tool users and one for those who do not use any CAT tool at all.'' | |||
*::::::''88% of respondents use at least one CAT tool for at least some of their translation tasks.'' | |||
*::::::''Of those using CAT tools, 83% use a CAT tool for most or all of their translation work.'' | |||
*:::::Mind you, traditionally CAT tools didn't use AI, but many do now, which only adds to potential sources of confusion in a blanket ban of AI. ] (]) 17:26, 31 December 2024 (UTC) | |||
*::::::You're barking up the wrong tree with the pro-generative AI propaganda in response to me. I think we're all quite aware that generative AI tool integration is now common and that there's also a big effort to replace human translators — and anything that can be "written" with machine-generated text. I'm also keenly aware that generative AI is ''absolutely horrible'' at translation and ''all of it must be thoroughly checked by humans'', as you would be if you were a translator yourself. ] (]) 22:20, 31 December 2024 (UTC) | |||
*:::::"''all machine translated material must be thoroughly checked and modified by, yes, ''human'' translators''" | |||
*:::::You are just agreeing with me here. | |||
*::::::'''' -American Translation Society | |||
*:::::There are translators (particularly with non-creative works) who are using these tools to shift more towards reviewing. It should be up to them to decide what they think is the most efficient method for them. ] (]) 06:48, 1 January 2025 (UTC) | |||
*::::::And any translator who wants to use generative AI to ''attempt'' to translate can do so off the site. We're not here to check it for them. I strongly support a total ban on any generative AI used on the site exterior to articles on generative AI. ] (]) 11:09, 1 January 2025 (UTC) | |||
*:::::::I wonder what you mean by "on the site". The question here is "Is it okay for an editor to go to a completely different website, generate an image all by themselves, upload it to Commons, and put it in a Misplaced Pages article?" The question here is ''not'' "Shall we put AI-generating buttons on Misplaced Pages's own website?" ] (]) 02:27, 2 January 2025 (UTC) | |||
*:::::::I'm talking about users slapping machine-translated and/or machine-generated nonsense all over the site, only for us to have to go behind and not only check it but correct it. It takes users minutes to do this and it's already happening. It's the same for images. There are very few of us who volunteer here and our numbers are growing fewer. We need to be spending our time improving the site rather than opening the gate as wide as possible for a flood of AI-generated/rendered garbage. The site has enough problems that compound every day rather than having to fend off users armed with hoax machines at every corner. ] (]) 03:20, 2 January 2025 (UTC) | |||
*::::::::Sure, we're all opposed to "nonsense", but my question is: What about when the machine happens to generate something that is ''not'' "nonsense"? | |||
*::::::::I have some worries about AI content. I worry, for example, that they'll corrupt our sources. I worry that ] will get dramatically longer, and also that even more undetected, unconfessed, unretracted papers will get published and believed to be true and trustworthy. I worry that academia will go back to a model in which personal connections are more important, because you really can't trust what's published. I worry that scientific journals will start refusing to publish research unless it comes from someone employed by a trusted institution, that is willing to put its reputation on the line by saying they have directly verified that the work described in the paper was actually performed to their standards, thus scuttling the citizen science movement and excluding people whose institutions are upset with them for other reasons (Oh, you thought you'd take a job elsewhere? Well, we refuse to certify the work you did for the last three years...). | |||
*::::::::But I'm not worried about a Misplaced Pages editor saying "Hey AI, give me a diagram of swingset" or "Make a chart for me out of the data I'm going to give you". In fact, if someone wants to pull the numbers out of ], feed it to an AI, and replace the template's contents with an AI-generated image (until they finally fix the Graphs extension), I'd consider that helpful. ] (]) 07:09, 2 January 2025 (UTC) | |||
*::::::Translators are not using ''generative'' AI for translation; the applicability of LLMs to regular translation is still in its infancy, and regardless it will not implement any ''generative'' faculties in its output, since that is the exact opposite of what translation is supposed to do. ] (]) 02:57, 2 January 2025 (UTC) | |||
*:::::::{{tpq|Translators are not using generative AI for translation}} this entirely depends on what you mean by "generative". There are at least three contradictory understandings of the term in this one thread alone. ] (]) 03:06, 2 January 2025 (UTC) | |||
*:::::::Please, you can just go through the entire process with a simple prompt command now. The results are typically shit but you can generate a ton of it quickly, which is perfect for flooding a site like this one — especially without a strong policy against it. I've found myself cleaning up tons of AI-generated crap (and, yes, rendered) stuff here and elsewhere, and now I'm even seeing AI-generated responses to my own comments. It's beyond ridiculous. ] (]) 03:20, 2 January 2025 (UTC) | |||
* '''Ban AI-generated from all articles, AI anything from BLP and medical articles''' is the position that seems it would permit all instances where there are plausible defenses that AI use does not fabricate or destroy facts intended to be communicated in the context of the article. That scrutiny is stricter with BLP and medical articles in general, and the restriction should be stricter to match. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 06:53, 31 December 2024 (UTC) | |||
*:@], please see my comment immediately above. (We had an edit conflict.) Do you really mean "anything" and everything? Even a simple chart? ] (]) 07:00, 31 December 2024 (UTC) | |||
*::I think my previous comment is operative: almost anything we can see AI used programmatically to generate should be SVG, not raster—even if it means we are embedding raster images in SVG to generate examples like the above. I do not know if there are models that can generate SVG, but if there are I happily state I have no problem with that. I think I'm at risk of seeming downright paranoid—but understanding how errors can propagate and go unnoticed in practice, if we're to trust a black box, we need to at least be able to check what the black box has done on a direct structural level. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 07:02, 31 December 2024 (UTC) | |||
*:::A quick web search indicates that there are generative AI programs that create SVG files. ] (]) 07:16, 31 December 2024 (UTC) | |||
*::::Makes perfect sense that there would be. Again, maybe I come off like a paranoid lunatic, but I really need either the ability to check what the thing is doing, or the ability to check and correct exactly what a black box has done. (In my estimation, if you want to know what procedures a person has done, theoretically you can ask them to get a fairly satisfactory result, and the pre-AI algorithms used in image manipulation are canonical and more or less transparent. Acknowledging human error etc., with AI there is not even the theoretical promise that one can be given a truthful account of how it decided to do what it did.) <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 07:18, 31 December 2024 (UTC) | |||
*:::::Like everyone said, there should be a ''de facto'' ban on using AI images in Misplaced Pages articles. They are effectively fake images pretending to be real, so they are out of step with the values of Misplaced Pages.--'''''] <sup>]</sup>''''' 08:20, 31 December 2024 (UTC) | |||
*::::::Except, not everybody ''has'' said that, because the majority of those of us who have refrained from hyperbole have pointed out that not all AI images are "fake images pretending to be real" (and those few that are can already be removed under existing policy). You might like to try actually reading the discussion before commenting further. ] (]) 10:24, 31 December 2024 (UTC) | |||
*:::::@], exactly how much "ability to check what the thing is doing" do you need to be able to do, when the image shows 99 dots and 1 baseball, to illustrate the concept of 1%? If the image above said {{tl|pd-algorithm}} instead of {{tl|cc-by-sa-4.0}}, would you remove if from the article, because you just can't be sure that it shows 1%? ] (]) 02:33, 2 January 2025 (UTC) | |||
*:::::::The above is a useful example to an extent, but it is a toy example. I really do think it is required in general when we aren't dealing with media we ourselves are generating. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 04:43, 2 January 2025 (UTC)
*:::::::How do we differentiate in policy between a "toy example" (that really would be used in an article) and "real" examples? Is it just that if I upload it, then you know me, and assume I've been responsible? ] (]) 07:13, 2 January 2025 (UTC) | |||
*::::There definitely exist generative AI for SVG files. Here's an example: I used generative AI in Adobe Illustrator to generate the SVG gear in ] (from ]) before drawing by hand the more informative parts of the image. The gear drawing is not great (a real gear would have uniform tooth shape) but maybe the shading is better than I would have done by hand, giving an appearance of dimensionality and surface material while remaining deliberately stylized. Is that the sort of thing everyone here is trying to forbid? | |||
*::::I can definitely see a case for forbidding AI-generated photorealistic images, especially of BLPs, but that's different from human oversight of AI in the generation of schematic images such as this one. —] (]) 01:15, 1 January 2025 (UTC) | |||
*:::::I'd include BDPs, too. I had to get a few AI-generated images of allegedly Haitian presidents deleted a while ago. The "paintings" were 100% fake, right down to the deformed medals on their military uniforms. An AI-generated "generic person" would be okay for some purposes. For a few purposes (e.g., illustrations of ]) it could even be preferable to have a fake "person" than a real one. But for individual/named people, it would be best not to have anything unless it definitely looks like the named person. ] (]) 07:35, 2 January 2025 (UTC) | |||
*I put it to you that our decision on this requires nuance. It's obviously insane to allow AI-generated images of, for example, Donald Trump, and it's obviously insane to ban AI-generated images from, for example, ] or ].—] <small>]/]</small> 11:21, 31 December 2024 (UTC) | |||
*:Of course, that's why I'm only looking at specific cases and refrain from proposing a blanket ban on generative AI. Regarding Donald Trump, we do have one AI-generated image of him that is reasonable to allow (in ]), as the image itself was the subject of relevant commentary. Of course, this is different from using an AI-generated image to illustrate ] himself, which is what my proposal would recommend against. ] (] · ]) 11:32, 31 December 2024 (UTC) | |||
*::That's certainly true, but others are adopting much more extreme positions than you are, and it was the more extreme views that I wished to challenge.—] <small>]/]</small> 11:34, 31 December 2024 (UTC) | |||
*:::Thanks for the (very reasoned) addition, I just wanted to make my original proposal clear. ] (] · ]) 11:43, 31 December 2024 (UTC) | |||
*Going off WAID's example above, perhaps we should be trying to restrict the use of AI where image accuracy/precision is essential, as it would be for BLP and medical info, among other cases, but in cases where we are talking generic or abstract concepts, like the 1% image, its use is reasonable. I would still say we should strongly prefer an image made by a human with high control of the output, but when accuracy is not as important as just the visualization, it's reasonable to turn to AI to help. ] (]) 15:12, 31 December 2024 (UTC)
* '''Support total ban of AI imagery''' - There are probable copyright problems and veracity problems with anything coming out of a machine. In a world of manipulated reality, Misplaced Pages will be increasingly respected for holding a hard line against synthetic imagery. ] (]) 15:39, 31 December 2024 (UTC)
*:For both issues AI vs not AI is irrelevant. For copyright, if the image is a copyvio we can't use it regardless of whether it is AI or not AI, if it's not a copyvio then that's not a reason to use or not use the image. If the images is not verifiably accurate then we already can (and should) exclude it, regardless of whether it is AI or not AI. For more detail see the extensive discussion above you've either not read or ignored. ] (]) 16:34, 31 December 2024 (UTC) | |||
*'''Yes''', we absolutely should ban the use of AI-generated images in these subjects (and beyond, but that's outside the scope of this discussion). AI should not be used to make up a simulation of a living person. It does not actually depict the person and may introduce errors or flaws that don't actually exist. The picture ''does not depict the real person'' because it is quite simply fake. | |||
*Even worse would be using AI to develop medical images in articles ''in any way''. The possibility for error there is unacceptable. Yes, humans make errors too, but there, there is a) someone with the responsibility to fix it and b) someone conscious who actually made the picture, rather than a black box that spat it out after looking at similar training data. '']'' 🎄 ] — ] 🎄 20:08, 31 December 2024 (UTC)
*:It's ''incredibly'' disheartening to see multiple otherwise intelligent editors who have apparently not read and/or not understood what has been said in the discussion, but are rather responding with what appears to be knee-jerk reactions to anti-AI scaremongering. The sky will not fall in, Misplaced Pages is not going to be taken over by AI, AI is not out to subvert Misplaced Pages, and we already can (and do) remove (and more commonly not add in the first place) false and misleading information/images. ] (]) 20:31, 31 December 2024 (UTC)
*::So what benefit does allowing AI images bring? We shouldn't be forced to decide these on a case-by-case basis. | |||
*::I'm sorry to dishearten you, but I still respectfully disagree with you. And I don't think this is "scaremongering" (although I admit that if it was, I would of course claim it wasn't). '']'' 🎄 ] — ] 🎄 21:02, 31 December 2024 (UTC) '']'' 🎄 ] — ] 🎄 20:56, 31 December 2024 (UTC) | |||
*:::Determining what benefits ''any'' image brings to Misplaced Pages can ''only'' be done on a case-by-case basis. It is literally impossible to know whether any image improves the encyclopaedia without knowing the context of which portion of what article it would illustrate, and what alternative images are and are not available for that same spot. | |||
*:::The benefit of allowing AI images is that when an AI image is the best option for a given article we use it. We gain absolutely nothing by prohibiting using the best image available, indeed doing so would actively harm the project without bringing any benefits. AI images that are misleading, inaccurate or any of the other negative things ''any'' image can be are never the best option and so are never used - we don't need any policies or guidelines to tell us that. ] (]) 21:43, 31 December 2024 (UTC) | |||
*'''Support blanket ban on AI-generated text or images in articles''', except in contexts where the AI-generated content is itself the subject of discussion (in a ] or ]). Generative AI is fundamentally at odds with Misplaced Pages's mission of providing reliable information, because of its propensity to distort reality or make up information out of whole cloth. It has no place in our encyclopedia. <span class="nowrap">—] (] | ])</span> 21:34, 31 December 2024 (UTC) | |||
*'''Support blanket ban on AI-generated images''' except in ABOUTSELF contexts. This is ''especially'' a problem given the preeminence Google gives to Misplaced Pages images in its image search. ] (]) 22:49, 31 December 2024 (UTC) | |||
*'''Ban across the board''', except in articles which are actually about AI-generated imagery or the tools used to create them, or the image itself is the subject of substantial commentary within the article for some reason. Even in those cases, clearly indicating that the image is AI-generated should be required. ] <small><sup>]</sup></small> 00:29, 1 January 2025 (UTC) | |||
*'''Oppose blanket bans''' that would forbid the use of AI assistance in creating diagrams or other deliberately stylized content. Also oppose blanket bans that would forbid AI illustrations in articles about AI illustrations. I am not opposed to banning photorealistic AI-generated images in non-AI-generation contexts or banning AI-generated images from BLPs unless the image itself is specifically relevant to the subject of the BLP. —] (]) 01:27, 1 January 2025 (UTC) | |||
]?]] | |||
*'''Oppose blanket bans''' AI is just a new buzzword so, for example, Apple phones now include "Apple Intelligence" as a standard feature. Does this mean that photographs taken using Apple phones will be inadmissible? That would be silly because legacy technologies are already rife with issues of accuracy and verification. For example, there's an image on the main page right now ''(right)''. This purports to be a particular person ("]") but, if you check the , you find that it may have been his brother and even the attribution to the artist is uncertain. AI features may help in exposing such existing weaknesses in our image use and so we should be free to use them in an intelligent way. ]🐉(]) 08:03, 1 January 2025 (UTC)
*:So, you expect the AI, ''notoriously trained on Misplaced Pages (and whatever else is floating around on the internet)'', to correct Misplaced Pages where humans have failed... using the data it ''scraped from Misplaced Pages (and who knows where else)''? ] (]) 11:12, 1 January 2025 (UTC)
*::I tried using the Deep Research option of Gemini to assess the attribution of the Macquarie portrait. Its stated methodology seemed quite respectable and sensible. | |||
{{cot|The Opie Portrait of Lachlan Macquarie: An Examination of its Attribution: Methodology}} | |||
To thoroughly investigate the attribution of the Opie portrait of Lachlan Macquarie, a comprehensive research process was undertaken. This involved several key steps: | |||
#Gathering information on the Opie portrait: This included details about its history, provenance, and any available information on its cost. | |||
#Reviewing scholarly articles and publications: This step focused on finding academic discussions specifically addressing the attribution of the portrait to John Opie. | |||
#Collecting expert opinions: Statements and opinions from art experts and historians were gathered to understand the range of perspectives on the certainty of the attribution. | |||
#Examining historical documents and records: This involved searching for any records that could shed light on the portrait's origins and authenticity, such as Macquarie's personal journals or contemporary accounts. | |||
#Exploring scientific and technical analyses: Information was sought on any scientific or technical analyses conducted on the portrait, such as pigment analysis or canvas dating, to determine its authenticity. | |||
#Comparing the portrait to other Opie works: This step involved analyzing the style and technique of the Opie portrait in comparison to other known portraits by Opie to identify similarities and differences. | |||
{{cob}} | |||
*::It was quite transparent in listing and citing the sources that it used for its analysis. These included the Misplaced Pages image but if one didn't want that included, it would be easy to exclude it. | |||
*::So, AIs don't have to be inscrutable black boxes. They can have programmatic parameters like the existing bots and scripts that we use routinely on Misplaced Pages. Such power tools seem needed to deal with the large image backlogs that we have on Commons. Perhaps they could help by providing captions and categories where these don't exist. | |||
*::]🐉(]) 09:09, 2 January 2025 (UTC) | |||
*:::They don't ''have to be black boxes'' but they are ''by design'': they exist in a legally dubious area and thus hide what they're scraping to avoid further legal problems. That's no secret. We know for example that Misplaced Pages is a core data set for likely most AIs today. They also notoriously and quite confidently spit out a lie ("hallucinate") and frequently spit out total nonsense. Add to that that they're restricted to whatever is floating around on the internet or whatever other data set they've been fed (usually just more internet), and many specialist topics, like texts on ancient history and even standard reference works, are not accessible on the internet (despite Google's efforts). ] (]) 09:39, 2 January 2025 (UTC) | |||
*:::While its stated methodology seems sensible, there's no evidence that it actually followed that methodology. The bullet points are pretty vague, and are pretty much the default methodologies used to examine actual historical works. ] (] · ]) 17:40, 2 January 2025 (UTC) | |||
*:::: Yes, there's evidence. As I stated above, the analysis is transparent and cites the sources that it used. And these all seem to check out rather than being invented. So, this level of AI goes beyond the first generation of LLM and addresses some of their weaknesses. I suppose that image generation is likewise being developed and improved and so we shouldn't rush to judgement while the technology is undergoing rapid development. ]🐉(]) 17:28, 4 January 2025 (UTC) | |||
* '''Oppose blanket ban''': best of luck to editors here who hope to be able to ban an entirely undefined and largely undetectable procedure. The term 'AI' as commonly used is no more than a buzzword - what ''exactly'' would be banned? And how does it improve the encyclopedia to encourage editors to object to images not simply because they are inaccurate, or inappropriate for the article, but because they subjectively look too good? Will the image creator be quizzed on Commons about the tools they used? Will creators who are transparent about what they have created have their images deleted while those who keep silent don’t? Honestly, this whole discussion is going to seem hopelessly outdated within a year at most. It’s like when early calculators were banned in exams because they were ‘cheating’, forcing students to use slide rules. ] (]) 12:52, 1 January 2025 (UTC) | |||
*:I am genuinely confused as to why this has turned into a discussion about a blanket ban, even though the original proposal exclusively focused on ''AI-generated'' images (the kind that is generated by an AI model from a prompt, which are already tagged on Commons, not regular images with AI enhancement or tools being used) and only in specific contexts. Not sure where the "subjectively look too good" thing even comes from, honestly. ] (] · ]) 12:58, 1 January 2025 (UTC) | |||
*::That just shows how ill-defined the whole area is. It seems you restrict the term 'AI-generated' to mean "images generated solely(?) from a text prompt". The question posed above has no such restriction. What a buzzword means is largely in the mind of the reader, of course, but to me and I think to many, 'AI-generated' means generated by AI. ] (]) 13:15, 1 January 2025 (UTC)
*:::I used the text prompt example because that is the most common way to have an AI model generate an image, but I recognize that I should've clarified it better. There is definitely a distinction between an image being ''generated'' by AI (like the Laurence Boccolini example below) and an image being ''altered'' or retouched by AI (which includes many features integrated in smartphones today). I don't think it's a "buzzword" to say that there is a meaningful difference between an image being made up by an AI model and a preexisting image being altered in some way, and I am surprised that many people understand "AI-generated" as including the latter. ] (] · ]) 15:24, 1 January 2025 (UTC) | |||
*'''Oppose as unenforceable.''' I just want you to imagine enforcing this policy against people who have not violated it. All this will do is allow Wikipedians who primarily contribute via text to accuse artists of using AI ] to get their contributions taken down. I understand the impulse to oppose AI on principle, but the labor and aesthetic issues don't actually have anything to do with Misplaced Pages. If there is not actually a problem with the content conveyed by the image—for example, if the illustrator intentionally corrected any hallucinations—then someone objecting over AI is not discussing page content. If the image was not even made with AI, they are hallucinating based on prejudices that are irrelevant to the image. The bottom line is that images should be judged on their content, not how they were made. Besides all the policy-driven stuff, if Misplaced Pages's response to the creation of AI imaging tools is to crack down on all artistic contributions to Misplaced Pages (which seems to be the inevitable direction of these discussions), what does that say? Categorical bans of this kind are ill-advised and anti-illustrator. ] (]) 15:41, 1 January 2025 (UTC) | |||
*:And the same applies to photography, of course. If in my photo of a garden I notice there is a distracting piece of paper on the lawn, nobody would worry if I used the old-style clone-stamp tool to remove it in Photoshop, adding new grass in its place (I'm assuming here that I don't change details of the actual landscape in any way). Now, though, Photoshop uses AI to achieve essentially the same result while making it simpler for the user. A large proportion of all processed photos will have at least some similar but essentially undetectable "generated AI" content, even if only a small area of grass. There is simply no way to enforce the proposed policy, short of banning all high-quality photography – which requires post-processing by design, and in which similar encyclopedically non-problematic edits are commonplace. ] (]) 17:39, 1 January 2025 (UTC) | |||
*:Before anyone objects that my example is not "an image generated from a text prompt", note that there's no mention of such a restriction in the proposal we are discussing. Even if there were, it makes no difference. Photoshop can already generate photo-realistic areas from a text prompt. If such use is non-misleading and essentially undetectable, it's fine; if it changes the image in such a way as to make it misleading, inaccurate or non-encyclopedic in any way, it can be challenged on that basis. ] (]) 17:58, 1 January 2025 (UTC)
*::As I said previously, the text prompt is just an example, not a restriction of the proposal. The point is that you talk about editing an existing image (which is what you talk about, as you say {{tq|if it changes the image}}), while I am talking about creating an image ''ex nihilo'', which is what "generating" means. ] (] · ]) 18:05, 1 January 2025 (UTC)
*:::I'm talking about a photograph with AI-generated areas within it. This is commonplace, and is targeted by the proposal. Categorical bans of the type suggested are indeed ill-advised. ] (]) 18:16, 1 January 2025 (UTC) | |||
*:Even if the ban is unenforceable, there are many editors who will choose to use AI images if they are allowed and just as cheerfully skip them if they are not allowed. That would mean the only people posting AI images are those who choose to break the rule and/or don't know about it. That would probably add up to many AI images not used. ] (]) 22:51, 3 January 2025 (UTC) | |||
*'''Support blanket ban''' because "AI" is a fundamentally unethical technology based on the exploitation of labor, the wanton destruction of the planetary environment, and the subversion of every value that an encyclopedia should stand for. ABOUTSELF-type exceptions for "AI" output ''that has already been generated'' might be permissible, in order to document the cursed time in which we live, but those exceptions are going to be rare. How many examples of Shrimp Jesus slop do we need? ] (]) 23:30, 1 January 2025 (UTC) | |||
*'''Support blanket ban''' - Primarily because of the "poisoning the well"/"dead internet" issues created by it. ] (]) 14:30, 2 January 2025 (UTC) | |||
* '''Support a blanket ban''' to assure some control over AI-creep in Misplaced Pages. And per discussion. ] (]) 10:50, 3 January 2025 (UTC) | |||
* '''Support that ] applies to images''': images should be verifiable, neutral, and absent of original research. AI is just the latest quickest way to produce images that are original, unverifiable, and potentially biased. Is anyone in their right mind saying that we allow people to game our rules on ] and ] by using images instead of text? ] (]) 17:04, 3 January 2025 (UTC) | |||
*:As an aside on this: in some cases Commons is being treated as a way of side-stepping ] and other restrictions. Stuff that would get deleted if it were written content on WP gets in to WP as images posted on Commons. The worst examples are those conflict maps that are created from a bunch of Twitter posts (eg the Syrian civil war one). AI-generated imagery is another field where that appears to be happening. ] (]) 10:43, 4 January 2025 (UTC) | |||
*'''Support temporary blanket ban''' with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. I support an exception for when the article is about the image itself and that image is notable, such as the photograph of the black-and-blue/gold-and-white dress in ] and/or examples of AI images in articles in which they are relevant. E.g. "here is what a hallucination is: count the fingers." ] (]) 23:01, 3 January 2025 (UTC)
* First, I think any guidance should avoid referring to specific technology, as that changes rapidly and is used for many different purposes. Second, assuming that the image in question has a suitable copyright status for use on Misplaced Pages, the key question is whether or not the reliability of the image has been established. If the intent of the image is to display 100 dots with 99 having the same appearance and 1 with a different appearance, then ordinary math skills are sufficient and so any Misplaced Pages editor can evaluate the reliability without performing original research. If the intent is to depict a likeness of a specific person, then there needs to be reliable sources indicating that the image is sufficiently accurate. This is the same for actual photographs, re-touched ones, drawings, hedcuts, and so forth. Typically this can be established by a reliable source using that image with a corresponding description or context. ] (]) 17:59, 4 January 2025 (UTC) | |||
*'''Support Blanket Ban on AI generated imagery''' per most of the discussion above. It's a very slippery slope. I ''might'' consider a very narrow exception for an AI generated image of a person that was specifically authorized or commissioned by the subject. -] (]) 02:45, 5 January 2025 (UTC) | |||
* '''Oppose blanket ban''' It is far too early to take an absolutist position, particularly when the potential is enormous. Misplaced Pages is already an image desert, and to reject something that is only at the cusp of development is unwise. '''<span style="text-shadow:7px 7px 8px black; font-family:Papyrus">]<sup>]</sup></span>''' 20:11, 5 January 2025 (UTC)
*'''Support blanket ban''' on AI-generated images except in ABOUTSELF contexts. An encyclopedia should not be using fake images. I do not believe that further nuance is necessary. ] (]) 22:44, 5 January 2025 (UTC) | |||
*'''Support blanket ban''' as the general guideline, as accuracy, personal rights, and intellectual rights issues are very weighty, here (as is disclosure to the reader). (I could see perhaps supporting adoption of a sub-guideline for ways to come to a broad consensus in individual use cases (carve-outs, except for BLPs) which address all the weighty issues on an individual use basis -- but that needs to be drafted and agreed to, and there is no good reason to wait to adopt the general ban in the meantime). ] (]) 15:32, 8 January 2025 (UTC) | |||
] | |||
*'''Support indefinite blanket ban except ABOUTSELF and simple abstract examples''' (such as the image of 99 dots above). In addition to all the issues raised above, including copyvio and creator consent issues, in cases of photorealistic images it may never be obvious to all readers exactly which elements of the image are guesswork. The cormorant picture at the head of the section reminded me of ]. Had AI been trained on paintings of horses instead of actual videos and used to "improve" said videos, we would've ended up with serious delusions about the horse's gait. We don't know what questions -- scientific or otherwise -- photography will be used to settle in the coming years, but we do know that consumer-grade photo AI has already been trained to intentionally fake detail to draw sales, such as on photos of the Moon. I think it's unrealistic to require contributors to take photos with expensive cameras or specially-made apps, but Misplaced Pages should act to limit its exposure to this kind of technology as far as is feasible. <span style="font-family:Garamond,Palatino,serif;font-size:115%;background:-webkit-linear-gradient(red,red,red,blue,blue,blue,blue);-webkit-background-clip:text;-webkit-text-fill-color:transparent">] ]</span> 20:57, 9 January 2025 (UTC) | |||
*'''Support at least some sort of recommendation against''' the use of AI-generated imagery in non-AI contexts – except obviously where the topic of the article is specifically related to AI-generated imagery (], ], ], etc.). At the very least the consensus below about BLPs should be extended to all historical biographies, as all the examples I've seen (see ]) fail ] (failing to add anything to the sourced text) and serve only to mislead the reader. We include images for a reason, not just for decoration. I'm also reminded of the essay ], and the distinction it makes between notable depictions of historical people (which can be useful to illustrate articles) and non-notable fictional portraits which in its (imo well argued) view {{tq|have no legitimate encyclopedic function whatsoever}}. ] ☞️ ] 14:36, 14 January 2025 (UTC)
*:Anything that fails WP:IMAGERELEVANCE can be, should be, and ''is'', excluded from use already, likewise any images which {{tpq|have no legitimate encyclopedic function whatsoever.}} This applies to AI and non-AI images equally and identically. Just as we don't have or need a policy or guideline specifically saying don't use irrelevant or otherwise non-encyclopaedic watercolour images in articles we don't need any policy or guideline specifically calling out AI - because it would (as you demonstrate) need to carve out exceptions for when its use ''is'' relevant. ] (]) 14:45, 14 January 2025 (UTC)
*::That would be an easy change; just add a sentence like "AI-generated images of individual people are primarily decorative and should not be used". We should probably do that no matter what else is decided. ] (]) 23:24, 14 January 2025 (UTC) | |||
*:::Except that is both not true and irrelevant. ''Some'' AI-generated images of individual people are primarily decorative, but not all of them. If an image is purely decorative it shouldn't be used, regardless of whether it is AI-generated or not. ] (]) 13:43, 15 January 2025 (UTC) | |||
*::::Can you give an example of an AI-generated image of an individual person that is (a) not primarily decorative and also (b) not copied from the person's social media/own publications, and that (c) at least some editors think would be a good idea? | |||
*::::"Hey, AI, please give me a realistic-looking photo of this person who died in the 12th century" is not it. "Hey, AI, we have no freely licensed photos of this celebrity, so please give me a line-art caricature" is not it. What is? ] (]) 17:50, 15 January 2025 (UTC) | |||
*:::::Criteria (b) and (c) were not part of the statement I was responding to, and make it a ''very'' significantly different assertion. I will ] that you are not making ] arguments in bad faith, but the frequent fallacious argumentation in these AI discussions is getting tiresome. | |||
*:::::Even with the additional criteria it is still irrelevant - if no editor thinks an image is a good idea, then it won't be used in an article regardless of why they don't think it's a good idea. If some editors think an individual image is a good idea then it's obviously potentially encyclopaedic and needs to be judged on its merits (whether it is AI-generated is completely irrelevant to its encyclopaedic value). An image that the subject uses on their social media/own publications to identify themselves (for example as an avatar) is the perfect example of the type of image which is frequently used in articles about that individual. ] (]) 18:56, 15 January 2025 (UTC)
{{clear}} | |||
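To make the "structurally checkable" point from the discussion above concrete, the following is a minimal sketch (assuming Python 3 and only the standard library; the file name, colours and grid size are arbitrary illustrative choices, not anything prescribed here) of how a simple percentage diagram like the 99-dots example can be generated deterministically and then verified by re-parsing the SVG and counting its elements. No comparable element-level check exists for a raster image produced by a generative model.
<syntaxhighlight lang="python">
# Minimal sketch: deterministically generate a "1 in 100" dot diagram as SVG, then
# verify its structure by re-parsing the file and counting elements. Assumes Python 3
# with only the standard library; the file name and colours are arbitrary examples.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def make_diagram(path="one_percent.svg", rows=10, cols=10, highlight=(0, 0)):
    """Draw a rows x cols grid of circles; exactly one is a different colour."""
    parts = [f'<svg xmlns="{SVG_NS}" width="{cols * 20}" height="{rows * 20}">']
    for r in range(rows):
        for c in range(cols):
            colour = "crimson" if (r, c) == highlight else "gray"
            parts.append(
                f'  <circle cx="{c * 20 + 10}" cy="{r * 20 + 10}" r="8" fill="{colour}"/>'
            )
    parts.append("</svg>")
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(parts))
    return path

def verify_diagram(path, expected_total=100, expected_highlighted=1):
    """The structural check: anyone can re-open the SVG and count, no trust required."""
    root = ET.parse(path).getroot()
    circles = root.findall(f"{{{SVG_NS}}}circle")
    highlighted = [c for c in circles if c.get("fill") == "crimson"]
    assert len(circles) == expected_total, f"expected {expected_total} dots, found {len(circles)}"
    assert len(highlighted) == expected_highlighted, "wrong number of highlighted dots"
    return len(circles), len(highlighted)

if __name__ == "__main__":
    print(verify_diagram(make_diagram()))  # -> (100, 1)
</syntaxhighlight>
Whether anything like this belongs in a guideline is a separate question; the sketch is only meant to illustrate what "being able to check what the black box has done" could look like for SVG output.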
===BLPs=== | |||
{{Archive top | |||
|status = Consensus against | |||
|result = There is clear consensus against using AI-generated imagery to depict BLP subjects. Marginal cases (such as major AI enhancement or where an AI-generated image of a living person is itself notable) can be worked out on a case-by-case basis. I will add a sentence reflecting this consensus to the ] and the ]. —] (]) 14:02, 8 January 2025 (UTC) | |||
}} | |||
Are AI-generated images (generated via text prompts, see also: ]) okay to use to depict BLP subjects? The ] example was mentioned in the opening paragraph. The image was created using ], {{tq|a text-to-image model developed by xAI, to generate images...As with other text-to-image models, Aurora generates images from natural language descriptions, called prompts.}} ]]] ] (]) 12:34, 31 December 2024 (UTC) | |||
]]] | |||
]: <ins>Note</ins> that these images can either be photorealistic in style (such as the Laurence Boccolini example) or non-photorealistic in style (see the ] example, which was generated using ], another text-to-image model).
] (]) 11:10, 3 January 2025 (UTC) {{clear}} | |||
{{small|notified: ], ], ], ] -- ] (]) 11:27, 2 January 2025 (UTC)}} | |||
*'''No.''' I don't think they are at all, as, despite looking photorealistic, they are essentially just speculation about what the person might look like. A photorealistic image conveys the look of something up to the details, and giving a false impression of what the person looks like (or, at best, just guesswork) is actively counterproductive. (Edit 21:59, 31 December 2024 (UTC): clarified bolded !vote since everyone else did it) ] (] · ]) 12:46, 31 December 2024 (UTC) | |||
*:That AI generated image looks like ] wearing a Laurence Boccolini suit. ] (]) 12:50, 31 December 2024 (UTC) | |||
*:There are plenty of non-free images of Laurence Boccolini with which this image can be compared. Assuming at least most of those are accurate representations of them (I've never heard of them before and have no other frame of reference) the image above is similar to but not an accurate representation of them (most obviously but probably least significantly, in none of the available images are they wearing that design of glasses). This means the image should not be used to identify them ''unless'' they use it to identify themselves. It should not be used elsewhere in the article unless it has been the subject of notable commentary. That it is an AI image makes absolutely no difference to any of this. ] (]) 16:45, 31 December 2024 (UTC) | |||
*'''No'''. Well, that was easy.{{pb}}<!--converted from 2 lines ~ToBeFree-->They are fake images; they do not actually depict the person. They depict an AI-generated ''simulation'' of a person that may be inaccurate. '']'' 🎄 ] — ] 🎄 20:00, 31 December 2024 (UTC) | |||
*:Even if the subject uses the image to identify themselves, the image is still fake. '']'' (] — ]) 19:17, 2 January 2025 (UTC) | |||
*'''No''', with the caveat that it's mostly on the grounds that we don't have enough information, and when it comes to BLP we are required to exercise caution. If at some point in the future AI-generated photorealistic simulacrums of living people become mainstream with major newspapers and academic publishers, it would be fair to revisit any restrictions, but in this I strongly believe that we should follow, not lead. ] (]) 20:37, 31 December 2024 (UTC)
*'''No'''. The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person. <span class="nowrap">—] (] | ])</span> 21:30, 31 December 2024 (UTC) | |||
*'''No''' except perhaps, maybe, if the subject explicitly is already using that image to represent themselves. But mostly no. -] (]) 21:32, 31 December 2024 (UTC) | |||
*'''Yes''', when that image is an accurate representation and better than any available alternative, used by the subject to represent themselves, or the subject of notable commentary. However, as these are the exact requirements to use ''any'' image to represent a BLP subject this is already policy. ] (]) 21:46, 31 December 2024 (UTC) | |||
*:How well can we determine how accurate a representation it is? Looking at the example above, I'd argue that the real ] has a somewhat rounder/pointier chin, a wider mouth, and possibly different eye wrinkles, although the latter probably depends quite a lot on the facial expression. | |||
*:How accurate a representation a photorealistic AI image is is ultimately a matter of editor opinion. '']'' 🎄 ] — ] 🎄 21:54, 31 December 2024 (UTC) | |||
*::{{tpq|How well can we determine how accurate a representation it is?}} in exactly the same way that we can determine whether a human-crafted image is an accurate representation. How accurate a representation ''any'' image is is ultimately a matter of editor opinion. Whether an image is AI or not is irrelevant. I agree the example image above is not sufficiently accurate, but we wouldn't ban photoshopped images because one example was not deemed accurate enough, because we are rational people who understand that one example is not representative of an entire class of images - at least when the subject is something other than AI. ] (]) 23:54, 31 December 2024 (UTC) | |||
*:::I think, except in a few exceptional circumstances of actual complex restorations, human photoshopping is not going to change or distort a person's appearance in the same way an AI image would. Modifications done by a person who is paying attention to what they are doing and merely enhancing an image, by a person who is aware, while they are making changes, that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator. '']'' 🎄 ] — ] 🎄 00:14, 1 January 2025 (UTC)
*::::I'm guessing your filter bubble doesn't include ] and their notorious ] problems. ] (]) 02:46, 2 January 2025 (UTC) | |||
*:::A photo of a person can be connected to a specific time, place, and subject that existed. It can be compared to other images sharing one or more of those properties. A photo that was PhotoShopped is still either a generally faithful reproduction of a scene that existed, or has significant alterations that can still be attributed to a human or at least to a specific algorithm, e.g. filters. The artistic license of a painting can still be attributed to a human and doesn't run much risk of being misidentified as real, unless it's by Chuck Close et al. An AI-generated image cannot be connected to a particular scene that ever existed and cannot be attributable to a human's artistic license (and there is legal precedent that such images are not copyrightable to the prompter specifically because of this). Individual errors in a human-generated artwork are far more predictable, understandable, identifiable, traceable... than those in AI-generated images. We have innate assumptions when we encounter real images or artwork that are just not transferable. These are meaningful differences to the vast majority of people: according to a survey, 87% of respondents want AI-generated art to ''at least'' be transparent, and 98% consider authentic images "pivotal in establishing trust". {{pb}}And even if you disagree with all that, can you not see the larger problem of AI images on Misplaced Pages getting propagated into generative AI corpora? ] (]) 04:20, 2 January 2025 (UTC)
*::::I agree that our old assumptions don't hold true. I think the world will need new assumptions. We will probably have those in place in another decade or so. | |||
*::::I think we're ], not here to protect AI engines from ingesting AI-generated artwork. Figuring out what they should ingest is their problem, not mine. ] (]) 07:40, 2 January 2025 (UTC) | |||
*'''Absolutely no fake/AI images of people, photorealistic or otherwise'''. How is this even a question? These images are fake. Readers need to be able to trust Misplaced Pages, not navigate around whatever junk someone has created with a prompt and presented as somehow representative. This includes text. ] (]) 22:24, 31 December 2024 (UTC) | |||
*'''No''' except for edge cases (mostly, if the image itself is notable enough to go into the article). ] (]) 22:31, 31 December 2024 (UTC) | |||
*'''Absolutely not''', except for ABOUTSELF. "They're fine if they're accurate enough" is an obscenely naive stance. ] (]) 23:06, 31 December 2024 (UTC) | |||
* '''No''' with no exceptions. ] (]) 23:54, 31 December 2024 (UTC) | |||
*'''No'''. We don't permit falsifications in BLPs. ] <small><sup>]</sup></small> 00:30, 1 January 2025 (UTC) | |||
*:For the requested clarification by {{u|Some1}}, no AI-generated images (except when the image ''itself'' is specifically discussed in the article, and even then it should not be the lead image and it should be clearly indicated that the image is AI-generated), no drawings, no nothing of that sort. ''Actual photographs'' of the subject, nothing else. Articles are not required to have images at all; no image whatsoever is preferable to something which is ''not'' an image of the person. ] <small><sup>]</sup></small> 05:42, 3 January 2025 (UTC) | |||
*'''No, but with exceptions'''. I could imagine a case where a specific AI-generated image has some direct relevance to the notability of the subject of a BLP. In such cases, it should be included, if it could be properly licensed. But I do oppose AI-generated images as portraits of BLP subjects. —] (]) 01:27, 1 January 2025 (UTC) | |||
*:Since I was pinged on this point: when I wrote "I do oppose AI-generated images as portraits", I meant exactly that, including all AI-generated images, such as those in a sketchy or artistic style, not just the photorealistic ones. I am not opposed to certain uses of AI-generated images in BLPs when they are not the main portrait of the subject, for instance in diagrams (not depicting the subject) to illustrate some concept pioneered by the subject, or in case someone becomes famous for being the subject of an AI-generated image. —] (]) 05:41, 3 January 2025 (UTC) | |||
*'''No''', and no exceptions or do-overs. Better to have no images (or Stone-Age style cave paintings) than ''Frankenstein'' images, no matter how accurate or artistic. Akin to shopped manipulated photographs, they should have no room (or room service) at the WikiInn. ] (]) 01:34, 1 January 2025 (UTC) | |||
*:Some "shopped manipulated photographs" are misleading and inaccurate, others are not. We can and do exclude the former from the parts of the encyclopaedia where they don't add value without specific policies and without excluding them where they are relevant (e.g. ]) or excluding those that are not misleading or inaccurate. AI images are no different. ] (]) 02:57, 1 January 2025 (UTC) | |||
*::Assuming we know. Assuming it's material. The infobox image in – and the only extant photo of – ] was "photoshopped" by a marketing team, maybe half a century before Adobe Photoshop was created. They wanted to show him wearing a necktie. I don't think that this level of manipulation is actually a problem. ] (]) 07:44, 2 January 2025 (UTC) | |||
*'''Yes''', so long as it is an accurate representation. ] ] 03:40, 1 January 2025 (UTC) | |||
*'''No''' not for BLPs. ] (]) 04:15, 1 January 2025 (UTC) | |||
*'''No''' Not at all relevant for pictures of people, as the accuracy is not enough and can misrepresent. Also (and I'm shocked as it seems no one has mentioned this), what about copyright issues? Who holds the copyright for an AI-generated image? The user who wrote the prompt? The creator(s) of the AI model? The creator(s) of the images in the database that the AI used to create the images? It sounds to me like such a clusterfuck of copyright issues that I don't understand how this is even a discussion. --] (]) 07:10, 1 January 2025 (UTC)
*:Under US law / the copyright office, machine-generated images including those by AI cannot be copyrighted. That also means that AI images aren't treated as derivative works.<br style="margin-bottom:0.5em"/>What is still under legal concern is whether the use of bodies of copyrighted works without any approval or license from the copyright holders to train AI models is fair use or not. There are multiple court cases where this is the primary challenge, and none have reached a decision yet. Assuming the courts rule that there was no fair use, that would either require the entity that owns the AI to pay fines and ongoing licensing costs, or delete their trained model to start afresh with freely licensed works, but in either case, that would not impact how we'd use any resulting AI image from a copyright standpoint.<span id="Masem:1735741774879:WikipediaFTTCLNVillage_pump_(policy)" class="FTTCmt"> — ] (]) 14:29, 1 January 2025 (UTC)</span>
*'''No''', I'm in agreement with ] here. Whether we like it or not, the usage of a portrait on an article implies that it's just that, a portrait. It's incredibly disingenuous to users to represent an AI-generated photo as truth. ] (]) 09:32, 1 January 2025 (UTC)
*:So you just said a portrait can be used because wikipedia tells you it's a portrait, and thus not a real photo. Can't AI be exactly the same? As long as we tell readers it is an AI representation? Heck, most AI looks closer to the real thing than any portrait. ] (]) 10:07, 2 January 2025 (UTC) | |||
*::To clarify, I didn't mean "portrait" as in "painting," I meant it as "photo of person." | |||
*::However, I really want to stick to what you say at the end there: {{tq|Heck, most AI looks closer to the real thing than any portrait.}} | |||
*::That's exactly the problem: by looking close to the "real thing" it misleads users into believing a non-existent source of truth.{{br|2}} | |||
*::Per the wording of the RfC of "{{tq|depict BLP subjects}}," I don't think there would be any valid case to utilize AI images. I hold a strong No. ] (]) 04:15, 3 January 2025 (UTC) | |||
*'''No.''' We should not use AI-generated images for situations like this, they are basically just guesswork by a machine as Quark said and they can misinform readers as to what a person looks like. Plus, there's a big grey area regarding copyright. For an AI generator to know what somebody looks like, it has to have photos of that person in its dataset, so it's very possible that they can be considered derivative works or copyright violations. Using an AI image (derivative work) to get around the fact that we have no free images is just fair use with extra steps. ] (]) 19:33, 1 January 2025 (UTC) ]?]] | |||
*'''Maybe''' There was a prominent BLP image which we displayed on the ]. ''(right)'' This made me uneasy because it was an artistic impression created from photographs rather than life. And it was "colored digitally". Functionally, this seems to be exactly the same sort of thing as the ] composite. The issue should not be whether there's a particular technology label involved but whether such creative composites and artists' impressions are acceptable as better than nothing. ]🐉(]) 08:30, 1 January 2025 (UTC) | |||
*:Except it is clear to everyone that the illustration to the right is a sketch, a human rendition, while in the photorealistic image above, it is less clear. '']'' (] — ]) 14:18, 1 January 2025 (UTC) | |||
*:::Except it says right below it "AI-generated image of Laurence Boccolini." How much more clear can it be when it says point-blank "AI-generated image"? ] (]) 10:12, 2 January 2025 (UTC)
*:::Commons descriptions do not appear on our articles. ] (]) 10:28, 2 January 2025 (UTC) | |||
*:::People taking a quick glance at an infobox image that looks pretty like a photograph are not going to scrutinize commons tagging. '']'' (] — ]) 14:15, 2 January 2025 (UTC) | |||
*::Keep in mind that many AIs can produce works that match various styles, not just photographic quality. It is still possible for AI to produce something that looks like a watercolor or sketched drawing.<span id="Masem:1735742005673:WikipediaFTTCLNVillage_pump_(policy)" class="FTTCmt"> — ] (]) 14:33, 1 January 2025 (UTC)</span> | |||
*:::Yes, you're absolutely right. But so far photorealistic images have been the most common to illustrate articles (see ] for some examples. '']'' (] — ]) 14:37, 1 January 2025 (UTC) | |||
*::::Then push to ban photorealistic images, rather than pushing for a blanket ban that would also apply to obvious sketches. —] (]) 20:06, 1 January 2025 (UTC) | |||
*:::::Same thing I wrote above, but for "photoshopping" read "drawing": (Bold added for emphasis) | |||
*:::::{{tqq|...human [drawing] is not going to change or distort a person's appearance in the same way an AI image would. [Drawings] done by a person who is paying attention to what they are doing by [a] '''person who is aware, while they are making [them], that they might be distorting the image and is, I only assume, trying to minimise it''' – those careful modifications shouldn't be equated with something made up by an AI image generator.}} '']'' (] — ]) 20:56, 1 January 2025 (UTC)
*::::::@] then why are you advocating for a ban on AI images rather than a ban on distorted images? Remember that with careful modifications by someone who is aware of what they are doing that AI images can be made more accurate. Why are you assuming that a human artist is trying to minimise the distortions but someone working with AI is not? ] (]) 22:12, 1 January 2025 (UTC) | |||
*:::::::I believe that AI-generated images are fundamentally misleading because they are a simulation by a machine rather than a drawing by a human. To quote pythoncoder above: {{tqq|The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person.}} '']'' (] — ]) 00:16, 2 January 2025 (UTC) | |||
*::::::::Once again your actual problem is not AI, but with misleading images. Which can be, and are, already a violation of policy. ] (]) 01:17, 2 January 2025 (UTC) | |||
*::::::::::I think all AI-generated images, except simple diagrams as WhatamIdoing pointed out above, are misleading. So yes, my problem is with misleading images, which includes all photorealistic images generated by AI, which is why I support this proposal for a blanket ban in BLPs and medical articles. '']'' (] — ]) 02:30, 2 January 2025 (UTC)
*::::::::::To clarify, I'm willing to make an exception in this proposal for very simple geometric diagrams. '']'' (] — ]) 02:38, 2 January 2025 (UTC) | |||
*::::::::::Despite the fact that not all AI-generated images are misleading, not all misleading images are AI-generated and it is not always possible to tell whether an image is or is not AI-generated? ] (]) 02:58, 2 January 2025 (UTC) | |||
*:::::::::::Enforcement is a separate issue. Whether or not all (or the vast majority) of AI images are misleading is the subject of this dispute. | |||
*:::::::::::I'm not going to mistreat the horse further, as we've each made our points and understand where the other stands. '']'' (] — ]) 15:30, 2 January 2025 (UTC) | |||
*::::::::::Even "simple diagrams" are not clear-cut. The process of AI-generating any image, no matter how simple, is still very complex and can easily follow any number of different paths to meet the prompt constraints. These paths through embedding space are black boxes and the likelihood they converge on the same output is going to vary wildly depending on the degrees of freedom in the prompt, the dimensionality of the embedding space, token corpus size, etc. The only thing the user can really change, other than switching between models, is the prompt, and at some point constructing a prompt that is guaranteed to yield the same result 100% of the time becomes a ] exercise. This is in contrast with non-generative AI diagram-rendering software that follow very fixed, reproducible, ''known'' paths. ] (]) 04:44, 2 January 2025 (UTC) | |||
*:::::::::::Why does the path matter? If the output is correct it is correct no matter what route was taken to get there. If the output is incorrect it is incorrect no matter what route was taken to get there. If it is unknown or unknowable whether the output is correct or not that is true no matter what route was taken to get there. ] (]) 04:48, 2 January 2025 (UTC) | |||
*::::::::::::If I use BioRender or GraphPad to generate a figure, I can be confident that the output does not have errors that would misrepresent the underlying data. I don't have to verify that all 18,000 data points in a scatter plot exist in the correct XYZ positions because I know the method for rendering them is published and empirically validated. Other people can also be certain that the process of getting from my input to the product is accurate and reproducible, and could in theory reconstruct my raw data from it. AI-generated figures have no prescribed method of transforming input beyond what the prompt entails; therefore I additionally have to be confident in how precise my prompt is ''and'' confident that the training corpus for this procedure is so accurate that no error-producing paths exist (not to mention absolutely certain that there is no embedded contamination from prior prompts). Other people have all those concerns, and on top of that likely don't have access to the prompt or the raw data to validate the output, nor do they necessarily know how fastidious I am about my generative AI use. At least with a hand-drawn diagram viewers can directly transfer their trust in the author's knowledge and reliability to their presumptions about the diagram's accuracy. ] (]) 05:40, 2 January 2025 (UTC) | |||
*:::::::::::::If you've got 18,000 data points, we are beyond the realm of "simple geometric diagrams". ] (]) 07:47, 2 January 2025 (UTC) | |||
*::::::::::::::The original "simple geometric diagrams" comment was referring to your 100 dots image. I don't think increasing the dots materially changes the discussion beyond increasing the laboriousness of verifying the accuracy of the image. ] (]) 07:56, 2 January 2025 (UTC) | |||
*:::::::::::::::Yes, but since "the laboriousness of verifying the accuracy of the image" is exactly what she doesn't want to undertake for 18,000 dots, then I think that's very relevant. ] (]) 07:58, 2 January 2025 (UTC) | |||
*:{{outdent|14}} And where is that cutoff supposed to be? 1000 dots? A single straight line? An atomic diagram? What is "simple" to someone unfamiliar with a topic may be more complex.{{pb}}And I don't want to count 100 dots either! ] (]) 17:43, 2 January 2025 (UTC) | |||
*::Maybe you don't. But I know for certain that you can count 10 across, 10 down, and multiply those two numbers to get 100. That's what I did when I made the image, after all. ] (]) 07:44, 3 January 2025 (UTC) | |||
* '''Comment''': when you Google search someone (at least from the Chrome browser), often the link to the Misplaced Pages article includes a thumbnail of the lead photo as a preview. Even if the photo is labelled as an AI image in the article, people looking at the thumbnail from Google would be misled (if the image is chosen for the preview). ] (]) 09:39, 1 January 2025 (UTC) | |||
*:This is why we should not use inaccurate images, regardless of how the image was created. It has absolutely nothing to do with AI. ] (]) 11:39, 1 January 2025 (UTC) | |||
* '''Already opposed a blanket ban''': It's unclear to me why we have a separate BLP subsection, as BLPs are already included in the main section above. Anyway, I ] there. ] (]) | |||
*:Some editors might oppose a blanket ban on ''all'' AI-generated images, while at the same time, are against using AI-generated images (created by using text prompts/]) to depict ]. ] (]) 14:32, 1 January 2025 (UTC) | |||
*'''No''' For at least now, let's not let the problems of AI intrude into BLP articles which need to have the highest level of scrutiny to protect the person represented. Other areas on WP may benefit from AI image use, but let's keep it far out of BLP at this point. --] (]) 14:35, 1 January 2025 (UTC) | |||
*I am not a fan of “banning” AI images completely… but I agree that BLPs require special handling. I look at AI imagery as being akin to a computer generated painting. In a BLP, we allow paintings of the subject, but we ''prefer'' photos over paintings (if available). So… we should prefer photos over AI imagery. {{pb}}<!--list syntax fixed ~ToBeFree--> That said, AI imagery is getting good enough that it can be mistaken for a photo… so… If an AI generated image ''is'' the ''only'' option (ie there is no photo available), then the caption should ''clearly'' indicate that we are using an AI generated image. And that image should be replaced as soon as possible with an actual photograph. ] (]) 14:56, 1 January 2025 (UTC) | |||
*:The issue with the latter is that Misplaced Pages images get picked up by Google and other search engines, where the caption isn't there anymore to add the context that a photorealistic image was AI-generated. ] (] · ]) 15:27, 1 January 2025 (UTC) | |||
*::We're here to build an encyclopedia, not to protect commercial search engine companies. | |||
*:::I think my view aligns with Blueboar's (except that I find no firm preference for photos over classical portrait paintings): We shouldn't have ''inaccurate'' AI images of people (living or dead). But the day appears to be coming when AI will generate accurate ones, or at least ones that are close enough to accurate that we can't tell the difference unless the uploader voluntarily discloses that information. Once we can no longer tell the difference, what's the point in banning them? Images need to look like the thing being depicted. When we put a photorealistic image in an article, we could be said to be implicitly claiming that the image ''looks like'' whatever's being depicted. We are not ''necessarily'' warranting that the image was created through a specific process, but the image really does need to look like the subject. ] (]) 03:12, 2 January 2025 (UTC)
*:::You are presuming that sufficient accuracy will prevent us from knowing whether someone is uploading an AI photo, but that is not the case. For instance, if someone uploads large amounts of "photos" of famous people, and can't account for how they got them (e.g. can't give a source where they scraped them from, or dates or any Exif metadata at all for when they were taken), then it will still be obvious that they are likely using AI. ] (]) 17:38, 3 January 2025 (UTC) | |||
*:::As another editor pointed out in their comment, there's the {{blue|ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet}}, especially on a site such as Misplaced Pages and especially on their own biography. ] says the bios {{tq|must be written conservatively and with regard for the subject's privacy.}} ] (]) 18:37, 3 January 2025 (UTC) | |||
*:::{{tqq| Once we can no longer tell the difference, what's the point in banning them?}} Sounds like a wolf's in sheep's clothing to me. Just because the surface appeal of fake pictures gets better, doesn't mean we should ]. '']'' (] — ]) 18:47, 3 January 2025 (UTC) | |||
*:If there are no appropriately-licensed images of a person, then by definition any AI-generated image of them will be either a copyright infringement or a complete fantasy. ] (]) 04:48, 2 January 2025 (UTC) | |||
*::Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant: If an image is a copyvio we can't use it and it is irrelevant why it is a copyvio. If an image is a "complete fantasy" then it is exactly as unusable as a complete fantasy generated by non-AI means, so again AI is irrelevant. I've had to explain this multiple times in this discussion, so read that for more detail and note the lack of refutation. ] (]) 04:52, 2 January 2025 (UTC) | |||
*:::But we can assume good faith that a human isn't blatantly copying something. We can't assume that from an LLM like Stability AI, which has been shown to copy from Getty's images. ] (]) 05:50, 2 January 2025 (UTC)
*::::Ooooh, I'm not sure that we can assume that humans aren't blatantly copying something. We can assume that they meant to be helpful, but that's not quite the same thing. ] (]) 07:48, 2 January 2025 (UTC) | |||
*<s>'''Oppose.'''</s> '''Yes.''' I echo ]: {{Tq2|What this conversation is really circling around is banning entire skillsets from contributing to Misplaced Pages merely because some of us are afraid of AI images and some others of us want to engineer a convenient, half-baked, policy-level "consensus" to point to when they delete quality images from Misplaced Pages. Every time someone generates text based on a source, they are doing some acceptable level of interpretation to extract facts or rephrase it around copyright law, and I don't think illustrations should be considered so severely differently as to justify a categorical ban. For instance, the Gisele Pelicot portrait is based on non-free photos of her. Once the illustration exists, it is trivial to compare it to non-free images to determine if it is an appropriate likeness, which it is. That's no different than judging contributed text's compliance with fact and copyright by referring to the source. It shouldn't be treated differently just because most Wikipedians contribute via text.<br/>Additionally, referring to interpretive skillsets that synthesize new information like, random example, statistical analysis. Excluding those from Misplaced Pages is current practice and not controversial. Meanwhile, I think the ability to create images is more fundamental than that. It's not (inheretly) synthesizing new information. A portrait of a person (alongside the other examples in this thread) contains verifiable information. It is current practice to allow them to fill the gaps where non-free photos can't. That should continue. Honestly, it should expand.}} ] (]) 15:41, 1 January 2025 (UTC) | |||
*:Additionally, in direct response to "these images are fake": All illustrations of a subject could be called "fake" because they are not photographs. (Which can also be faked.) The standard for the inclusion of an illustration on Misplaced Pages has never been photorealism, medium, or previous publication in a RS. The standard is how adequately it reflects the facts which it claims to depict. If there is a better image that can be imported to Misplaced Pages via fair use or a license, then an image can be easily replaced. Until such a better image has been sourced, it is absolutely bewildering to me that we would even discuss removing images of people from their articles. What a person looked like is one of the most basic things that people want to know when they look someone up on Misplaced Pages. Including an image of almost any quality (yes, even a cartoon) is practically by definition an improvement to the article and addressing an important need. We should be encouraging artists to continue filling the gaps that non-free images cannot fill, not creating policies that will inevitably expand into more general prejudices against all new illustrations on Misplaced Pages. ] (]) 15:59, 1 January 2025 (UTC) | |||
*::By "Oppose", I'm assuming your answer to the RfC question is "Yes". And this RfC is about using {{blue|AI-generated images (generated via text prompts, see also: ])}} to depict BLP subjects, not regarding human-created drawings/cartoons/sketches, etc. of BLPs. ] (]) 16:09, 1 January 2025 (UTC) | |||
*:::I've changed it to "yes" to reflect the reversed question. I think all of this is related because there is no coherent distinguishing point; AI can be used to create images in a variety of styles. These discussions have shown that a policy of banning AI images ''will'' be used against non-AI images of all kinds, so I think it's important to say these kinds of things now. ] (]) 16:29, 1 January 2025 (UTC) | |||
*::Photorealistic images scraped from who knows where from who knows what sources are without question simply fake photographs and also clear ] and outright ]. There's no two ways about it. Articles do ''not'' require images: An article with some Frankenstein-ed image scraped from who knows what, where, and when that you "created" from a prompt is not an improvement over having no image at all. If we can't provide a quality image (like something you didn't cook up from a prompt) then people can find quality, non-fake images elsewhere. ] (]) 23:39, 1 January 2025 (UTC)
*:::I really encourage you to read the discussion I linked before because it is ]. Images like these do not inherently include either OR or SYNTH, and the arguments that they do cannot be distinguished from any other user-generated image content. But, briefly, I never said articles required images, and this is not about what articles ''require''. It is about ''improvements'' to the articles. Including a relevant picture where none exists is almost always an improvement, especially for subjects like people. Your disdain for the method the person used to make an image is irrelevant to whether the content of the image is actually verifiable, and the only thing we ought to care about is the content. ] (]) 03:21, 2 January 2025 (UTC) | |||
*::::Images like these are absolutely nothing more than synthesis in the purest sense of the word and are clearly a violation of ]: Again, you have no idea what data was used to generate these images and you're going to have a very hard time convincing anyone to describe them as anything other than outright fakes.
*::::A reminder that WP:SYNTH shuts down attempts at manipulation of images ("It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Misplaced Pages:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.") and generating a photorealistic image (from who knows what!) is far beyond that. | |||
*::::Fake images of people do not improve our articles in any way and only erode reader trust. What's next, an argument for the ''fake sources'' LLMs also love to "hallucinate"? ] (]) 03:37, 2 January 2025 (UTC) | |||
*:::::So, if you review the first sentence of SYNTH, you'll see it has no special relevance to this discussion: {{Tq|Do not combine material from multiple sources to state or imply a conclusion not explicitly stated by any of the sources.}}. My primary example has been a picture of a person; what a person looks like is verifiable by comparing the image to non-free images that cannot be used on Misplaced Pages. If the image resembles the person, it is not SYNTH. An illustration of a person created and intended to look like that person is not a manipulation. The training data used to make the AI is irrelevant to whether the image in fact resembles the person. You should also review ] because SYNTH is not a policy; NOR is the policy: {{tq|If a putative SYNTH doesn't constitute original research, then it doesn't constitute SYNTH.}} Additionally, ]. A categorical rule against AI cannot be justified by SYNTH because it does not categorically apply to all use cases of AI. To do so would be illogical on top of ill-advised. ] (]) 08:08, 2 January 2025 (UTC) | |||
*::::::"training data used to make the AI is irrelevant" — spoken like a true AI evangelist! Sorry, 'good enough' photorealism is still just synthetic slop, a fake image presented as real of a ''human being''. A fake image of someone generated from who-knows-what that 'resembles' an article's subject is about as ] as it gets. Yikes. As for the attempts to pass of prompt-generated photorealistic fakes of people as somehow the same as someone's illustration, you're completely wasting your time. ] (]) 09:44, 2 January 2025 (UTC) | |||
*:::::::NOR is a content policy and SYNTH is content guidance within NOR. Because you have admitted that this is not ''about the content'' for you, NOR and SYNTH are irrelevant to your argument, which boils down to ] and, now, inaccurate personal attacks. Continuing this discussion between us would be pointless. ] (]) 09:52, 2 January 2025 (UTC) | |||
*::::::::This is in fact entirely about content (why the hell else would I bother?) but it is true that I also dismissed your pro-AI 'it's just like a human drawing a picture!' as outright nonsense a while back. Good luck convincing anyone else with that line - it didn't work here. ] (]) 09:59, 2 January 2025 (UTC) | |||
*'''Maybe''': there is an implicit assumption with this RFC that an AI generated image would be photorealistic. There hasn't been any discussion of an AI generated sketch. If you asked an AI to generate a sketch (that clearly looked like a sketch, similar to the Gisèle Pelicot example) then I would potentially be ok with it. ] (]) 18:14, 1 January 2025 (UTC) | |||
*:That's an interesting thought to consider. At the same time, I worry about (well-intentioned) editors inundating image-less BLP articles with AI-generated images in the style of cartoons/sketches (if only photorealistic ones are prohibited) etc. At least requiring a human to draw/paint/whatever creates a barrier to entry; these AI-generated images can be created in under a minute using these text-to-image models. Editors are already wary about human-created cartoon portraits (]), now they'll be tasked with dealing with AI-generated ones in BLP articles. ] (]) 20:28, 1 January 2025 (UTC) | |||
*::It sounds like your problem is not with AI but with cartoon/sketch images in BLP articles, so AI is once again completely irrelevant. ] (]) 22:14, 1 January 2025 (UTC) | |||
*::That is a good concern you brought up. There is a possibility of the spamming of low-quality AI-generated images, which would be laborious to discuss on a case-by-case basis but easy to generate. At the same time, though, that is a possibility but not yet an actuality, and ] states that new policies should address current problems rather than hypothetical concerns. ] (]) 22:16, 1 January 2025 (UTC)
*Easy '''no''' for me. I am not against the use of AI images wholesale, but I do think that using AI to represent an existent thing such as a person or a place is too far. Even a tag wouldn't be enough for me. ] ] 19:05, 1 January 2025 (UTC)
*'''No''' obviously, per previous discussions about cartoonish drawn images in BLPs. Same issue here as there, it is essentially original research and misrepresentation of a living person's likeness. ] (]) 22:19, 1 January 2025 (UTC) | |||
*'''No''' to photorealistic, no to cartoonish... this is not a hard choice. The idea that "this has nothing to do with AI" when "AI" magnifies the problem to stupendous proportions is just not tenable. ] (]) 23:36, 1 January 2025 (UTC) | |||
*:While AI might "amplify" the thing you dislike, that does not make AI the problem. The problem is whatever underlying thing is being amplified. ] (]) 01:16, 2 January 2025 (UTC) | |||
*::The thing that amplifies the problem is necessarily a problem. ] (]) 02:57, 2 January 2025 (UTC) | |||
*:::That is arguable, but banning the amplifier does not do anything to solve the problem. In this case, banning the amplifier would cause multiple other problems that nobody supporting this proposal has even attempted to address, let alone mitigate. ] (]) 03:04, 2 January 2025 (UTC)
*'''No''' for all people, per Chaotic Enby. ] (]) 03:23, 2 January 2025 (UTC) Add: no to any AI-generated images, whether photorealistic or not. ] (]) 04:00, 3 January 2025 (UTC) | |||
*'''No''' - We should not be hosting faked images (except as notable fakes). We should also not be hosting copyvios ({{tq|"Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant"}} is just totally wrong - we should be steering clear of copyvios, and if the issue is unsettled then we shouldn't use them until it is). | |||
*If people upload faked images to WP or Commons the response should be as it is now. The fact that fakes are becoming harder to detect simply from looking at them hardly affects this - we simply confirm when the picture was supposed to have been taken and examine the plausibility of it from there. ] (]) 14:39, 2 January 2025 (UTC)
*:{{tpq|we should be steering clear of copyvio}} we do - if an image is a copyright violation it gets deleted, regardless of why it is a copyright violation. What we do not do is ban using images that are not copyright violations because they are copyright violations. Currently the WMF lawyers and all the people on Commons who know more about copyright than I do say that at least some AI images are legally acceptable for us to host and use. If you want to argue that, then go ahead, but it is not relevant to ''this'' discussion. | |||
*:{{tpq|if people upload faked images the response should be as it is now}} in other words you are saying that the problem is faked images not AI, and that current policies are entirely adequate to deal with the problem of faked images. So we don't need any specific rules for AI images - especially given that not all AI images are fakes. ] (]) 15:14, 2 January 2025 (UTC) | |||
*::The idea that {{tq|current policies are entirely adequate}} is like saying that a lab shouldn't have specific rules about wearing eye protection when it already has a poster hanging on the wall that says "don't hurt yourself". ] (]) 18:36, 2 January 2025 (UTC) | |||
*:::I rely on one of those up in my workshop at home. I figure if that doesn't keep me safe, nothing will. ] (]) 18:41, 2 January 2025 (UTC) | |||
*::::"{{tq|in other words you are saying that the problem is faked images not AI}}" - AI generated images *are* fakes. This is merely confirming that for the avoidance of doubt. | |||
*::::"{{tq|at least some AI images are legally acceptable for us}}" - Until they decide which ones that isn't much help. ] (]) 19:05, 2 January 2025 (UTC) | |||
*:::::Yes – what FOARP said. AI-generated images are fakes and are misleading. '']'' (] — ]) 19:15, 2 January 2025 (UTC) | |||
*:::Those specific rules exist because generic warnings have proven not to be sufficient. Nobody has presented any evidence that the current policies are not sufficient, indeed quite the contrary. ] (]) 19:05, 2 January 2025 (UTC) | |||
*'''No!''' This would be a massive can of worms; perhaps, however, we wish to cause problems in the new year. ] <small>(]) | :) | he/him | </small> 15:00, 2 January 2025 (UTC) | |||
*:Noting that I think that no AI-generated images are acceptable in BLP articles, regardless of whether they are photorealistic or not. ] <small>(]) | :) | he/him | </small> 15:40, 3 January 2025 (UTC) | |||
*'''No''', unless the AI image has encyclopedic significance beyond "depicts a notable person". AI images, if created by editors for the purpose of inclusion in Misplaced Pages, convey little reliable information about the person they depict, and the ways in which the model works are opaque enough to most people as to raise verifiability concerns. ] (] • ]) 15:25, 2 January 2025 (UTC) | |||
*:To clarify, do you object to uses of an AI image in a BLP when the subject uses that image for self-identification? I presume that AI images that have been the subject of notable discussion are an example of "significance beyond depict a notable person"? ] (]) 15:54, 2 January 2025 (UTC) | |||
*::If the subject uses the image for self-identification, I'd be fine with it - I think that'd be analogous to situations such as "cartoonist represented by a stylized self-portrait", which definitely has some precedent in articles like ]. I agree with your second sentence as well; if there's notable discussion around a particular AI image, I think it would be reasonable to include that image on Misplaced Pages. ] (] • ]) 19:13, 2 January 2025 (UTC) | |||
* '''No''', with obvious exceptions, including if the subject themself uses the image as their representation, or if the image is notable itself. Not including the lack of a free alternative: if there is no free alternative... where did the AI find data to build an image... non-free too. Not including images generated by WP editors (that's kind of ])... - ] (]) 18:02, 2 January 2025 (UTC)
*'''Maybe''' I think the question is unfair as it is illustrated with what appears to be a photo of the subject but isn't. People are then getting upset that they've been misled. As others note, there are copyright concerns with AI reproducing copyrighted works that in turn make an image that is potentially legally unusable. But that is more a matter for Commons than for Misplaced Pages. As many have noted, a sketch or painting never claims to be an accurate depiction of a person, and I don't care if that sketch or painting was done by hand or an AI prompt. I strongly ask ] to abort the RFC. You've asked people to give a yes/no vote to what is a more complex issue. A further problem with the example used is the unfortunate prejudice on Misplaced Pages against user-generated content. While the text-generated AI of today is crude and random, there will come a point where many professionally published photos illustrating subjects, including people, are AI generated. Even today, your smartphone can create a groupshot where everyone is smiling and looking at the camera. It was "trained" on the 50 images it quickly took and responded to the built-in "text prompt" of "create a montage of these photos such that everyone is smiling and looking at the camera". This vote is a knee-jerk reaction to content that is best addressed by some other measure (such as that it is a misleading image). And a good example of asking people to vote way too early, when the issues haven't been thought out -- ]°] 18:17, 2 January 2025 (UTC)
*'''No''' This would very likely set a dangerous precedent. The only exception I think should be if the image itself is notable. If we move forward with AI images, especially for BLPs, it would only open up a whole slew of regulations and RfCs to keep them in check. Better no image than some digital multiverse version of someone that is "basically" them but not really. Not to mention the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet. ] (]) 18:31, 2 January 2025 (UTC) | |||
*'''No'''. LLMs don't generate answers, they generate ''things that look like'' answers, but aren't; a lot of the time, that's good enough, but sometimes it very much isn't. It's the same issue for text-to-image models: they don't generate photos of people, they generate ''things that look like'' photos. Using them on BLPs is unacceptable. ] (]) 19:30, 2 January 2025 (UTC) | |||
*'''No'''. I would be pissed if the top picture of me on Google was AI-generated. I just don't think it's moral for living people. The exceptions given above by others are okay, such as if the subject uses the picture themselves or if the picture is notable (with context given). ] (]) 19:56, 2 January 2025 (UTC) | |||
*'''No.''' Uploading alone, although mostly a Commons issue, would already be a problem to me and may have personality rights issues. Illustrating an article with a fake photo <ins>(or drawing)</ins> of a living person, even if it is labeled as such, would not be acceptable. For example, it could end up being shown by search engines or when hovering over a Misplaced Pages link, without the disclaimer. ] (]) 23:54, 2 January 2025 (UTC)
* I was going to say no... but we allow paintings as portraits in BLPs. What's so different between an AI generated image, and a painting? Arguments above say the depiction may not be accurate, but the same is true of some paintings, right? (and conversely, not true of other paintings) ] (]) 00:48, 3 January 2025 (UTC) | |||
*::A painting is clearly a painting; as such, the viewer knows that it is not an accurate representation of a particular reality. An AI-generated image made to look exactly like a photo looks like a photo but is not. ] (]) 02:44, 3 January 2025 (UTC)
*::Not all paintings are clearly paintings. Not all AI-generated images are made to look like photographs. Not all AI-generated images made to look like photos do actually look like photos. This proposal makes no distinction. ] (]) 02:55, 3 January 2025 (UTC) | |||
*:::Not to mention, hyper-realism is a style an artist may use in virtually any medium. If Misplaced Pages would accept an analog substitute like a painting, there's no reason Misplaced Pages shouldn't accept an equivalent painting made with digital tools, and there's no reason Misplaced Pages shouldn't accept an equivalent painting made with AI. That is, one where any obvious defects have been edited out and what remains is a straightforward picture of the subject. ] (]) 03:45, 3 January 2025 (UTC) | |||
*::::For the record (and for any media watching), while I personally find it fascinating that a few editors here are spending a substantial amount of time (in the face of an overwhelming 'absolutely not' consensus no less) attempting to convince others that computer-generated (that is, ''faked'') photos of human article subjects are somehow ''a good thing'', I also find it interesting that these editors seem to express absolutely no concern for the intensely negative reaction they're already seeing from their fellow editors and seem totally unconcerned about the inevitable trust drop we'd experience from Misplaced Pages readers when they would encounter fake photos on our BLP articles especially. ] (]) 03:54, 3 January 2025 (UTC) | |||
*:::::Misplaced Pages's reputation would not be affected positively or negatively by expanding the current-albeit-sparse use of illustrations to depict subjects that do not have available pictures. In all my writing about this over the last few days, you are the only one who has said anything negative about me as a person or, really, my arguments themselves. As loath as I am to cite it, ] means assuming that people you disagree with are not ''trying to hurt Misplaced Pages.'' Thryduulf, I, and others have explained in detail why we think our ultimate ideas are explicit benefits to Misplaced Pages and why our opposition to these immediate proposals comes from a desire to prevent harm to Misplaced Pages. I suggest taking a break to reflect on that, matey. ] (]) 04:09, 3 January 2025 (UTC) | |||
*::::::Look, I don't know if you've been living under a rock or what for the past few years but the reality is that '' people hate AI images'' and dumping a ton of AI/fake images on Misplaced Pages, a place people go for ''real information'' and often ''trust'', inevitably leads to a huge trust issue, something Misplaced Pages is increasingly suffering from already. This is ''especially'' a problem when they're intended to represent ''living people'' (!). I'll leave it to you to dig up the bazillion controversies that have arisen and continue to arise since companies worldwide have discovered that they can now replace human artists with 'AI art' produced by "prompt engineers" but you can't possibly expect us to ignore that reality when discussing these matters. ] (]) 04:55, 3 January 2025 (UTC) | |||
*:::::::Those trust issues are born from the publication of hallucinated information. I have only said that it should be OK to use an image on Misplaced Pages when it contains only verifiable information, which is the same standard we apply to text. That standard is and ought to be applied independently of the way the initial version of an image was created. ] (]) 06:10, 3 January 2025 (UTC) | |||
*:To my eye, the distinction between AI images and paintings here is less a question of medium and more of verifiability: the paintings we use (or at least the ones I can remember) are significant paintings that have been acknowledged in sources as being reasonable representations of a given person. By contrast, a purpose-generated AI image would be more akin to me painting a portrait of somebody here and now and trying to stick that on their article. The image could be a faithful representation <small>(unlikely, given my lack of painting skills, but let's not get lost in the metaphor)</small>, but if my painting hasn't been discussed anywhere besides Misplaced Pages, then it's potentially OR or UNDUE to enshrine it in mainspace as an encyclopedic image. ] (] • ]) 05:57, 3 January 2025 (UTC) | |||
*::An image contains a collection of facts, and those facts need to be verifiable just like any other information posted on Misplaced Pages. An image that verifiably resembles a subject as it is depicted in reliable sources is categorically ''not OR''. Discussion in other sources is not universally relevant; we don't restrict ourselves to only previously-published images. If we did that, Misplaced Pages would have very few images. ] (]) 06:18, 3 January 2025 (UTC) | |||
*:::Verifiable how? Only by the editor themselves comparing to a real photo (which was probably used by the LLM to create the image…). | |||
*:::These things are fakes. The analysis stops there. ] (]) 10:48, 4 January 2025 (UTC) | |||
*::::Verifiable by comparing them to a reliable source. Exactly the same as what we do with text. There is no coherent reason to treat user-generated images differently than user-generated text, and the universalist tenor of this discussion has damaging implications for all user-generated images regardless of whether they were created with AI. Honestly, I rarely make arguments like this one, but I think it could show some intuition from another perspective: Imagine it's 2002 and Misplaced Pages is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures. The text editors say the artists cannot contribute ANYTHING to Misplaced Pages because their images that have not been previously published are not verifiable. That is a double-standard that privileges the contributions of text-editors simply because most users are text-editors and they are used to verifying text; that is not a principled reason to treat text and images differently. Moreover, that is simply not what happened—the opposite happened, and images are treated as verifiable based on their contents just like text because that's a common sense reading of the rule. It would have been madness if images had been treated differently. And yet that is essentially the fundamentalist position of people who are extending their opposition to AI with arguments that apply to all images. If they are arguing verifiability seriously at all, they are pretending that the sort of degenerate situation I just described already exists when the opposite consensus has been reached consistently ''for years''. In ], they even tried to say Wikipedians had "turned a blind eye" to these image issues as if negatively characterizing those decisions would invalidate the fact that ''those decisions were consensus.'' The motivated reasoning of these discussions has been as blatant as that.<br/>At the bottom of this dispute, I take issue with trying to alter the rules in a way that creates a new double-standard within verifiability that applies to all images but not text. That's especially upsetting when (despite my and others' best efforts) so many of us are still focusing ''SOLELY'' on their hatred for AI rather than considering the obvious second-order consequences for user-generated images as a whole.<br/>Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake." The issue has always been ''verifiability'', not provenance or falsity. Sometimes, IMO, that has led to disaster and Misplaced Pages saying things I know to be factually untrue despite the contents of reliable sources. But ''that'' is the policy. We compare the contents of Misplaced Pages to reliable sources, and the contents of Misplaced Pages are considered verifiable if they cohere.<br/>I ask again: If Misplaced Pages's response to the creation of AI imaging tools is to crack down on all artistic contributions to Misplaced Pages (which seems to be the inevitable direction of these discussions), what does that say? If our negative response to AI tools is to ''limit what humans can do on Misplaced Pages'', what does that say? Are we taking a stand for human achievements, or is this a very heated discussion of cutting off our nose to save our face? ] (]) 23:31, 4 January 2025 (UTC)
*:::::{{tq|"Verifiable by comparing them to a reliable source"}} - comparing two images and saying that one ''looks like'' the other is not "verifying" anything. The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing. | |||
*:::::{{tq|"Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake.""}} - Try presenting a paraphrasing as a quotation and see what happens. | |||
*:::::{{tq|"Imagine it's 2002 and Misplaced Pages is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures..."}} - This basically happened, and is the origin of ]. Misplaced Pages is not a host for original works. ] (]) 22:01, 6 January 2025 (UTC) | |||
*::::::{{tq|Comparing two images and saying that one looks like the other is not "verifying" anything.}} Comparing text to text in a reliable source is literally the same thing. | |||
*::::::{{tq|The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing.}} No it isn't. The text equivalent is writing a sentence in an article and putting a ref tag on it. Perhaps there is room for improving the referencing of images in the sense that they should offer example comparisons to make. But an image created by a person is not unverifiable simply because it is user-generated. It is not somehow ''more'' unverifiable simply because it is created in a lifelike style. | |||
*::::::{{tq|Try presenting a paraphrasing as a quotation and see what happens.}} Besides what I just said, ''nobody'' is even presenting these images as equatable to quotations. People in this thread have simply been calling them "fake" of their own initiative; the uploaders have not asserted that these are literal photographs to my knowledge. The uploaders of illustrations obviously did not make that claim either. (And, if the contents of the image is a copyvio, that is a separate issue entirely.) | |||
*::::::{{tq|This basically happened, and is the origin of WP:NOTGALLERY.}} That is not the same thing. User-generated images that illustrate the subject are not prohibited by ]. Misplaced Pages is a host of encyclopedic content, and user-generated images can have encyclopedic content. ] (]) 02:41, 7 January 2025 (UTC) | |||
*:::::::Images are way more complex than text. Trying to compare them in the same way is a very dangerous simplification. '']'' (] — ]) 02:44, 7 January 2025 (UTC) | |||
*::::::::Assume only non-free images exist of a person. An illustrator refers to those non-free images and produces a painting. From that painting, you see a person who looks like the person in the non-free photographs. The image is verified as resembling the person. That is a simplification, but to call it "dangerous" is disingenuous at best. The process for challenging the image is clear. Someone who wants to challenge the veracity of the image would just need to point to details that do not align. For instance, "he does not typically have blue hair" or "he does not have a scar." That is what we already do, and it does not come up much because it would be weird to deliberately draw an image that looks nothing like the person. Additionally, someone who does not like the image for aesthetic reasons rather than encyclopedic ones always has the option of sourcing a photograph some other way like permission, fair use, or taking a new one themself. This is not an intractable problem. ] (]) 02:57, 7 January 2025 (UTC) | |||
*:::::::::So a photorealistic AI-generated image would be considered acceptable until someone identifies a "big enough" difference? How is that anything close to ethical? A portrait that's got an extra mole or slightly wider nose bridge or lacks a scar is still ''not an image of the person'' regardless of whether random Misplaced Pages editors notice. And while I don't think user-generated non-photorealistic images should ever be used on biographies either, at least those can be traced back to a human who is ultimately responsible for the depiction, who can point to the particular non-free images they used as references, and isn't liable to average out details across all time periods of the subject. And that's not even taking into account the copyright issues. ] (]) 22:52, 7 January 2025 (UTC)
*::::::::::{{+1}} to what JoelleJay said. The problem is that AI-generated images are simulations trying to match existing images, sometimes, yes, with an impressive degree of accuracy. But they will always be inferior to a human-drawn painting that's ''trying to depict the person''. We're a human encyclopedia, and we're built by humans doing human things and sometimes with human errors. '']'' (] — ]) 23:18, 7 January 2025 (UTC) | |||
*::::::::::You can't just raise this to an "ethical" issue by saying the word "ethical." You also can't just invoke copyright without articulating an actual copyright issue; we are not discussing copyvio. Everyone agrees that a photo with an actual copyvio in it is subject to that policy. | |||
*::::::::::But to address your actual point: Any image—any ''photo''—beneath the resolution necessary to depict the mole would be missing the mole. Even with photography, we are never talking about science-fiction images that perfectly depict every facet of a person in an objective sense. We are talking about equipment that creates an approximation of reality. The same is true of illustrations and AI imagery. | |||
*::::::::::Finally, a human being ''is'' responsible for the contents of the image because a human is selecting it and is responsible for correcting any errors. The result is an image that someone is choosing to use because they believe it is an appropriate likeness. We should acknowledge that human decision and evaluate it naturally—''Is it an appropriate likeness?'' ] (]) 10:20, 8 January 2025 (UTC) | |||
*:::::::::::(Second comment because I'm on my phone.) I realize I should also respond to this in terms of additive information. What people look like is not static in the way your comment implies. Is it inappropriate to use a photo because they had a zit on the day it was taken? Not necessarily. Is an image inappropriate because it is taken at a bad angle that makes them look fat? Judging by the prolific ComicCon photographs (where people seem to make a game of choosing the worst-looking options; seriously, it's really bad), not necessarily. Scars and bruises exist and then often heal over time. The standard for whether an image with "extra" details is acceptable would still be based on whether it comports acceptably with other images; we literally do what you have capriciously described as "unethical" and supplement it with our compassionate desire to not deliberately embarrass BLPs. (The ComicCon images aside, I guess.) So, no, I would not be a fan of using images that add prominent scars where the subject is not generally known to have one, but that is just an unverifiable fact that does not belong in a Misplaced Pages image. Simple as. ] (]) 10:32, 8 January 2025 (UTC) | |||
*:::::We don't evaluate the reliability of a source solely by comparing it to other sources. For example, there is an ongoing discussion at the baseball WikiProject talk page about the reliability of a certain web site. It lists no authors nor any information on its editorial control policy, so we're not able to evaluate its reliability. The reliability of all content being used as a source, including images, needs to be considered in terms of its provenance. ] (]) 23:11, 7 January 2025 (UTC) | |||
* Can you note in your !vote whether AI-generated images (generated via text prompts/text-to-image models) that are ''not'' photo-realistic / hyper-realistic in style are okay to use to depict BLP subjects? For example, see the image to the right, which was ] then ] from his article: ] by ]]] {{pb}} Pinging people who !voted No above: ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ], ] --- ] (]) 03:55, 3 January 2025 (UTC) {{clear}} | |||
*:Still no, I thought I was clear on that but we should not be using AI-generated images in articles for anything besides representing the concept of AI-generated images, or if an AI-generated image is notable or irreplaceable in its own right -- e.g, a musician uses AI to make an album cover. | |||
*:(this isn't even a good example, it looks more like ]) | |||
*:] (]) 04:07, 3 January 2025 (UTC) | |||
*:Was I unclear? ''No'' to all of them. ] (]) 04:13, 3 January 2025 (UTC) | |||
*:Still '''no''', because carving out that type of exception will just lead to arguments down the line about whether a given image is too realistic. <span class="nowrap">—] (] | ])</span> 04:24, 3 January 2025 (UTC) | |||
*:I still think '''no'''. My opposition isn't just to the fact that AI images are misinformation, but also that they essentially serve as a loophole for getting around Enwiki's image use policy. To know what somebody looks like, an AI generator needs to have images of that person in its dataset, and it draws on those images to generate a derivative work. If we have no free images of somebody and we use AI to make one, that's just using a fair use copyrighted image but removed by one step. The image use policy prohibits us from using fair use images for BLPs so I don't think we should entertain this loophole. If we ''do'' end up allowing AI images in BLPs, that just disqualifies the rationale of not allowing fair use in the first place. ] (]) 04:40, 3 January 2025 (UTC) | |||
*:'''No''' those are not okay, as this will just cause arguments from people saying a picture is ''obviously'' AI-generated, and that it is therefore appropriate. As I mentioned above, there are some exceptions to this, which Gnomingstuff perfectly describes. Fake sketches/cartoons are not appropriate and provide little encyclopedic value. ] (]) 05:27, 3 January 2025 (UTC)
*:'''No''' to this as well, with the same carveout for individual images that have received notable discussion. Non-photorealistic AI images are going to be no more verifiable than photorealistic ones, and on top of that will often be lower-quality as images. ] (] • ]) 05:44, 3 January 2025 (UTC) | |||
*:Thanks for the ping, yes I can, the answer is no. ] (]) 07:31, 3 January 2025 (UTC) | |||
*:'''No''', and that image should be deleted before anyone places it into a mainspace article. Changing the RfC intro long after its inception seems a second bite at an apple that's not aged well. ] (]) 09:28, 3 January 2025 (UTC) | |||
*::The RfC question has not been changed; another editor pointed out that the RfC question did not make a distinction between photorealistic/non-photorealistic AI-generated images, so I had to add a note and ping the editors who'd !voted No to clarify things. It has only been 3 days; there's still 27 more days to go. ] (]) 11:18, 3 January 2025 (UTC)
*:::Also answering '''No''' to this one per all the arguments above. "It has only been 3 days" is not a good reason to change the RfC question, especially since many people have already !voted and the "30 days" is mostly indicative rather than an actual deadline for a RfC. ] (] · ]) 14:52, 3 January 2025 (UTC) | |||
*::::The RfC question hasn't been changed; see my response to Zaathras below. ] (]) 15:42, 3 January 2025 (UTC) | |||
*:No, that's an even worse possible approach. — ] (]) 13:24, 3 January 2025 (UTC)
*:'''No'''. We're the human encyclopedia. We should have images drawn or taken by real humans who are trying to depict the ''subject'', not by machines trying to simulate an image. Besides, the given example is horribly drawn. '']'' (] — ]) 15:03, 3 January 2025 (UTC) | |||
*:I like these even less than the photorealistic ones... This falls into the same basket for me: if we wouldn't let a random editor who drew this at home using conventional tools add it to the article why would we let a random editor who drew this at home using AI tools add it to the article? (and just to be clear the AI generated image of Germán Larrea Mota-Velasco is not recognizable as such) ] (]) 16:06, 3 January 2025 (UTC)
*:I said *NO*. ] (]) 10:37, 4 January 2025 (UTC) | |||
*:'''No''' Having such images, as said above, means the AI had to use copyrighted pictures to create them, and we shouldn't use them. --] (]) 01:12, 5 January 2025 (UTC)
*:Still '''no'''. If for no other reason than that it's a bad precedent. As others have said, if we make one exception, it will just lead to arguments in the future about whether something is "realistic" or not. I also don't see why we would need cartoon/illustrated-looking AI pictures of people in BLPs. ] (]) 20:43, 6 January 2025 (UTC) | |||
*'''Absolutely not'''. These images are based on whatever the AI could find on the internet, with little to no regard for copyright. Misplaced Pages is better than this. ] (]) 10:16, 3 January 2025 (UTC) | |||
*'''Comment''' The RfC question should not have been fiddled with, esp. for such a minor argument that the complainant could have simply included in their own vote. I have no need to re-confirm my own entry. ] (]) 14:33, 3 January 2025 (UTC)
*:The RfC question hasn't been modified; I've only added a clarifying note that these images can either be photorealistic in style or non-photorealistic in style. I pinged all the !No voters to make them aware. I could remove the Note if people prefer that I do (but the original RfC question is the ''exact same'' as it is now, so I don't think the addition of the Note makes a whole ton of difference). ] (]) 15:29, 3 January 2025 (UTC)
*'''No''' At this point it feels redundant, but I'll just add to the horde of responses in the negative. I don't think we can fully appreciate the issues that this would cause. The potential problems and headaches far outweigh whatever little benefit might come from AI images for BLPs. ]] 21:34, 3 January 2025 (UTC) | |||
*'''Support temporary blanket ban''' with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. ] (]) 23:01, 3 January 2025 (UTC)
*'''No'''. Misplaced Pages is made ''by'' and ''for'' humans. I don't want to become . Adding an AI-generated image to a page whose topic isn't about generative AI makes me feel insulted. ] (]) 00:03, 4 January 2025 (UTC) | |||
*'''No'''. Generative AI may have its place, and it may even have a place on Misplaced Pages in some form, but that place isn't in BLPs. There's no reason to use images of someone that do not exist over a real picture, or even something like a sketch, drawing, or painting. Even in the absence of pictures or human-drawn/painted images, I don't support using AI-generated images; they're not really pictures of the person, after all, so I can't support using them on articles of people. Using nothing would genuinely be a better choice than generated images. ] <span style="font-weight:bold">|</span> ] 01:07, 4 January 2025 (UTC) | |||
*'''No''' due to reasons of copyright (AI harvests copyrighted material) and verifiability. ] <small>(])</small> 18:12, 4 January 2025 (UTC) | |||
*'''No.''' Even if you are willing to ignore the inherently fraught nature of using AI-generated ''anything'' in relation to BLP subjects, there is simply little to no benefit that could possibly come from trying something like this. There's no guarantee the images will actually look like the person in question, and therefore there's no actual context or information that the image is providing the reader. What a baffling proposal. ] (]) 19:53, 4 January 2025 (UTC) | |||
*:{{tpq|There's no guarantee the images will actually look like the person in question}} there is no guarantee ''any'' image will look like the person in question. When an image is not a good likeness, regardless of why, we don't use it. When an image is a good likeness we consider using it. Whether an image is AI-generated or not is completely independent of whether it is a good likeness. There are also reasons other than identification why images are used on BLP articles. ] (]) 20:39, 4 January 2025 (UTC)
*Foreseeably there may come a time when people's official portraits are AI-enhanced. That time might not be very far in the future. Do we want an exception for official portraits?—] <small>]/]</small> 01:17, 5 January 2025 (UTC) | |||
*:This subsection is about purely AI-generated works, not about AI-enhanced ones. ] (] · ]) 01:23, 5 January 2025 (UTC) | |||
*'''No.''' Per Cremastra, "We should have images drawn or taken by real humans who are trying to depict the ''subject''," - ] (]) 02:12, 5 January 2025 (UTC) | |||
*'''Yes''', depending on the specific case. One can use drawings by artists, even such as ]. The latter is an intentional distortion, one could say an intentional misinformation. Still, such images are legitimate on many pages. Or consider numerous images of ]. How reliable are they? I am not saying we must deliberately use AI images on all pages, but they may be fine in some cases. Now, speaking on "medical articles"... One might actually use the AI generated images of certain biological objects like proteins or organelles. Of course a qualified editorial judgement is always needed to decide if they would improve a specific page (frequently they would not), but making a blanket ban would be unacceptable, in my opinion. For example, the images of protein models generated by ] would be fine. The AI-generated images of biological membranes I saw? I would say no. It depends. ] (]) 02:50, 5 January 2025 (UTC) {{pb
}}This is complicated of course. For example, there are tools that make an image of a person that (mis)represents him as someone much better and clever than he really is in life. That should be forbidden as an advertisement. This is a whole new world, but I do not think that a blanket rejection would be appropriate. ] (]) 03:19, 5 January 2025 (UTC) | |||
* '''No''', I think there's legal and ethical issues here, especially with the current state of AI. ] ] 03:38, 5 January 2025 (UTC) | |||
*'''No''': Obviously, we shouldn't be using AI images to represent anyone. ] (]) 05:31, 5 January 2025 (UTC) | |||
*'''No''' Too risky for BLP's. Besides if people want AI generated content over editor made content, we should make it clear they are in the wrong place, and readers should be given no doubt as to our integrity, sincerity and effort to give them our best, not a program's. ] (]) 14:51, 5 January 2025 (UTC) | |||
*'''No''', as AI's grasp on the Internet takes hold stronger and stronger, it's important Misplaced Pages, as the online encyclopedia it sets out to be, remains factual and real. Using AI images on Wiki would likely do more harm than good, further thinning the boundaries between what's real and what's not. – ''']''' <sub>(]) (])</sub> 16:52, 5 January 2025 (UTC) | |||
*'''No''', not at the moment. I think it will be hard to avoid portraits that have been enhanced by AI, as it has already been ongoing for a number of years and there is no way to avoid it, but I don't want arbitrarily generated AI portraits of any type. '''<span style="text-shadow:7px 7px 8px black; font-family:Papyrus">]<sup>]</sup></span>''' 20:19, 5 January 2025 (UTC)
*'''No for natural images (e.g. photos of people)'''. Generative AI by itself is not a reliable source for facts. In principle, generating images of people and directly sticking them in articles is no different than generating text and directly sticking it in articles. In practice, however, generating images is worse: Text can at least be discussed, edited, and improved afterwards. In contrast, we have significantly less policy and fewer rigorous methods of discussing how AI-generated images of natural objects should be improved (e.g. "make his face slightly more oblong, it's not close enough yet"). Discussion will devolve into hunches and gut feelings about the fidelity of images, all of which essentially fall under WP:OR. ] (]) 20:37, 5 January 2025 (UTC) | |||
*'''No''' I'm appalled that even a small minority of editors would support such an idea. We have enough credibility issues already; using AI-generated images to represent real people is not something that a real encyclopedia should even consider. ] (]) 22:26, 5 January 2025 (UTC) | |||
*'''No''' I understand the comparison to using illustrations in BLP articles, but I've always viewed that as less preferable to no picture in all honesty. Images of a person are typically presented in context, such as a performer on stage, or a politician's official portrait, and I feel like there would be too many edge cases to consider in terms of making it clear that the photo is AI generated and isn't representative of anything that the person specifically did, but is rather an approximation. ] (]) 06:50, 6 January 2025 (UTC)
*'''No''' - Too often the images resemble caricatures. Real caricatures may be included in articles if the caricature (e.g., political cartoon) had ] and is attributed to the artist. Otherwise, representations of living persons should be real representations taken with photographic equipment. ] (]) 02:31, 7 January 2025 (UTC) | |||
*:So you will be arguing for the removal of the lead images at ], ], etc. then? ] (]) 06:10, 7 January 2025 (UTC) | |||
*::At this point you're making bad-faith "BY YOUR LOGIC" arguments. You're better than that. Don't do it. ] (]) 19:18, 7 January 2025 (UTC) | |||
*'''Strong no''' per bloodofox. —] (]'''-''']) 03:32, 7 January 2025 (UTC) | |||
:'''No''' for AI-generated BLP images ] (]) 21:40, 7 January 2025 (UTC) | |||
*'''No''' - Not only is this effectively guesswork that usually includes unnatural artefacts, but worse, it is also based on unattributed work of photographers who didn't release their work into public domain. I don't care if it is an open legal loophole somewhere, IMO even doing away with the fair use restriction on BLPs would be morally less wrong. I suspect people on whose work LLMs in question were trained would also take less offense to that option. <span style="font-family:Garamond,Palatino,serif;font-size:115%;background:-webkit-linear-gradient(red,red,red,blue,blue,blue,blue);-webkit-background-clip:text;-webkit-text-fill-color:transparent">] ]</span> 23:25, 7 January 2025 (UTC) | |||
*'''No''' – ] says that {{tq|Non-free content should not be used when a freely licensed file that serves the same purpose can reasonably be expected to be uploaded, as is the case for almost all portraits of living people.}} While AI images may not be considered copyrightable, it can still be a copyright violation if the output resembles other, copyrighted images, pushing the image towards NFC. At the very least, I feel the use of non-free content to generate AI images violates the spirit of the NFC policy. (I'm assuming copyrighted images of a person are used to generate an AI portrait of them; if free images of that person were used, we should just use those images, and if ''no'' images of the person were used, how on Earth would we trust the output?) ] (]) 02:43, 8 January 2025 (UTC)
*'''No''', AI images should not be permitted on Misplaced Pages at all. ] (]) 11:27, 8 January 2025 (UTC) | |||
{{Archive bottom}} | |||
===Expiration date?=== | |||
"AI," as the term is currently used, is very new. It feels like large language models and the type of image generators under discussion just got here in 2024. (Yes, I know it was a little earlier.) The culture hasn't completed its initial response to them yet. Right now, these images do more harm than good, but that may change. Either we'll come up with a better way of spotting hallucinations or the machines will hallucinate less. Their copyright status also seems unstable. I suggest that any ban decided upon here have some expiration date or required rediscussion date. Two years feels about right to me, but the important thing would be that the ban has a number on it. ] (]) 23:01, 3 January 2025 (UTC) | |||
*No need for any end-date. If there comes a point where consensus on this changes, then we can change any ban then. ] (]) 05:27, 5 January 2025 (UTC) | |||
*An end date is a positive suggestion. Consensus systems like Misplaced Pages's are vulnerable to half-baked precedential decisions being treated as inviolate. With respect, this conversation does not inspire confidence that this policy proposal's consequences are well-understood at this time. If Misplaced Pages goes in this direction, it should be labeled as primarily reactionary and open to review at a later date. ] (]) 10:22, 5 January 2025 (UTC) | |||
*Agree with FOARP, '''no need for an end date'''. If something significantly changes (e.g. reliable sources/news outlets such as the ''New York Times'', BBC, AP, etc. start using text-to-image models to generate images of living people for their own articles) then this topic can be revisited later. Editors will have to go through the usual process of starting a new discussion/proposal when that time comes. ] (]) 11:39, 5 January 2025 (UTC) | |||
*:Seeing as this discussion has not touched at all on what other organizations may or may not do, it would not be accurate to describe any consensus derived from this conversation in terms of what other organizations may or may not be doing. That is, there has been no consensus that we ought to be looking to the New York Times as an example. Doing so would be inadvisable for several reasons. For one, they have sued an AI company over semi-related issues and they have teams explicitly working on what the future of AI in news ought to look like, so they have some investment in what the future of AI looks like and they are explicitly trying to shape its norms. For another, if they did start to use AI in a way that may be controversial, they would have no positive reason to disclose that and many disincentives. They are not a neutral signal on this issue. Misplaced Pages should decide for itself, preferably doing so while not disrupting the ability of people to continue creating user-generated images. ] (]) 03:07, 6 January 2025 (UTC) | |||
* ] on an indefinite basis, if something changes. An arbitrary sunset date doesn't seem much use. ] (]) 03:15, 6 January 2025 (UTC) | |||
* No need per others. Additionally, if practices change, it doesn't mean editors will decide to follow new practices. As for the technology, it seems the situation has been fairly stable for the past two years: we can detect some fakes and hallucinations immediately, many more in the past, but certainly not all retouched elements and all generated photos available right now, even if there was a readily accessible tool or app that enabled ordinary people to reliably do so. | |||
:Through the history, art forgeries have been fairly reliably detected, but rarely quickly. Relatedly, I don't see why the situation with AI images would change in the next 24 months or any similar time period. <span style="font-family:Garamond,Palatino,serif;font-size:115%;background:-webkit-linear-gradient(red,red,red,blue,blue,blue,blue);-webkit-background-clip:text;-webkit-text-fill-color:transparent">] ]</span> 22:17, 9 January 2025 (UTC) | |||
==Non-Admin XFD Close as Delete== | |||
There is a contentious ] currently resulting from a ] of a ] in which the closer found that the consensus was Delete. The closer then tagged the redirect with ], in particular {{tl|db-xfd}}. It was then deleted by an admin who had taken part in the deletion discussion. The guideline on non-administrative closes says that a non-administrative close of Delete is not permitted, because the closer cannot press the Delete button. The first question at ] seems to be whether DRV '''''must''''' vacate the close and allow a new close by an ] administrator, or whether DRV can endorse the close and leave the close standing. My opinion is that a DRV endorsement of a non-admin close is as good a close as a regular admin close, but that is only my opinion. | |||
The second question that is raised by the DRV discussion is about the existing practice that non-admins sometimes make Delete closes at ]. Should this practice be stopped, because the guideline says that non-admins may not make Delete closes, or should the guideline be revised, or should the guideline be left as is, but with such closes being sometimes allowed? My own opinion is that if it is necessary sometimes to allow a practice that is contrary to the guideline, the guideline should be revised, so as to reduce the load at ], but that is only my opinion. | |||
Should ] be able to endorse the irregular non-admin close, or is it necessary for DRV to vacate the close? Also, should the non-admin closer be thanked, or cautioned? | |||
] (]) 06:51, 31 December 2024 (UTC) | |||
:On the first question, I think that whatever decision DRV makes at ] will be 'legal'. They are not required to vacate the close, no matter how much one editor might like to have a second bite at that apple. | |||
:On your second question, I prefer the existing rules. It is not necessary to allow NACs (]) of 'delete'. Good admins will not blindly trust anyone else's decision, so a NAC to delete does not necessarily save any time, and if the admin disagrees, then it could cause drama. NACs to delete should be gently but firmly discouraged. ] (]) 07:30, 31 December 2024 (UTC) | |||
::I agree with WAID. ] (]) 10:29, 31 December 2024 (UTC) | |||
*That's procedurally irregular, but it was the correct close of the discussion and we've got to the right outcome.—] <small>]/]</small> 11:32, 31 December 2024 (UTC) | |||
* As I said at the Deletion Review, I support non-admins closing RfDs as Delete. If TfDs have been made an exception, RfDs can be too. Closing a heavily discussed nomination at RfD is more about the reading, analysis and thought process involved in arriving at the outcome, and less about the technicality of the subsequent page actions. It will help make non-admins mentally prepared to advance to admin roles.<span style="font-family:Segoe Script">]</span><span style="font-size:115%">]</span> 17:28, 1 January 2025 (UTC)
:It seems dumb to think that a close can go through DRV and 'pass' only for it to be vacated because the closer didn't have the right credentials. If the close is found to be good, does it matter who closed it? If bad Delete closes by non-admins were regular occurrences, then maybe, but I don't imagine this is the case. ] ] 19:12, 1 January 2025 (UTC)
*The only problem is that if it becomes a regular occurrence, it creates more work overall. An admin can close as delete and carry out the deletion. A non-admin can say it needs to be deleted, but still needs an admin to carry out the deletion. That's why this should be discouraged - if it becomes commonplace it will generate a lot more work, but a one-off can clearly be endorsed at DRV. ] ''<span style="font-size:small; vertical-align:top;">]</span>''·''<span style="font-size:small; vertical-align:bottom;">]</span>'' 19:26, 1 January 2025 (UTC)
*:I think it's time for the long-standing "it creates more work" argument to be retired. In the first scenario, person 1 does A and B. In the second scenario, person 1 does A and person 2 does B. That does not create more work overall, it's the same amount of work overall, and it ''reduces'' work for person 1. Splitting work between two people doesn't create more work. ] (]) 14:55, 4 January 2025 (UTC) | |||
*::If person 2 here did B without doing any additional work whatsoever, then person 2 shouldn't be able to do B. We expect admins to ''look'' at what they're deleting. Plus, you're creating additional work for persons 3-20, who are watching ] and don't appreciate demands to go sanity-check low-urgency, possibly-complex closes at unfamiliar deletion venues mixed into the copyright infringement, spam, and attack pages they're trying to prioritize. —] 15:15, 4 January 2025 (UTC) | |||
===Discussion of How Much Attention Admins Pay to What They Are Deleting=== | |||
*:::No, we don't expect admins to look at what they're deleting. Anyone familiar with the admin deletion stats (as you are) knows that every day, admins delete many pages without ever looking at them. Person 2 can do B without A. It's OK for person 2 to rely on person 1 to do A correctly. It's not true that in order for person 2 to do B then person 2 must also have done A, and it's not how we operate with other deletions. Non-admins are perfectly capable of closing RfCs and many other discussions; there is nothing about a deletion discussion that suddenly makes non-admins unable to summarize consensus. There is no reason an admin can't rely on a non-admin's summary of consensus, just as admins rely on non-admins' CSD tagging. ] (]) 15:21, 4 January 2025 (UTC)
*:::: We absolutely <em>do</em> expect admins to look at what they're deleting. At most a NAC delete close can save an admin the trouble of writing the close themself, but if an admin is blindly deleting without verifying that the CSD tag is actually valid for CSD then sooner or later they're going to wind up at ]. ]] 15:49, 4 January 2025 (UTC) | |||
*::::{{u|Levivich}}, I agree with your conclusion, but not with your analogy. As {{u|Anomie}} pointed out, we do not blindly delete pages tagged as CSD. ] specifically instructs us to verify that the page meets a speedy deletion criterion, rather than just appeared as such to one user. This is a very different situation to an AfD that garnered consensus among multiple editors over the course of at least seven days, where the admin's job is just to verify that the NAC read consensus correctly. ] ] 16:01, 4 January 2025 (UTC) | |||
*:::::We can say it all we want but we can't argue with ]. The admin who has deleted over 1 million pages did not read 1 million pages. Nor did the admin who deleted 800,000 pages read 800,000 pages. An admin who has deleted 600,000 pages--no doubt without reading even half of them--just got elected to arbcom. Nobody is taking those admins to ANI or thinks they're doing anything wrong (including me). | |||
*:::::So no, admins don't read the pages they delete, and no, we don't expect them to, as proven by the massive number of deletions, and no, admins who delete without reading aren't taken to ANI, they're promoted. This includes CSDs. | |||
*:::::More broadly, there is no reason not to allow admins to rely on non-admin XfD closures when making deletion decisions. We already do this for multiple types of XfDs, we can and should do it for AfDs. ] (]) 16:05, 4 January 2025 (UTC) | |||
*::::::{{u|Explicit}} didn't have to read over one million pages, because most of those deletions were the result of an XfD, not a CSD. He had to review almost a million XfD discussions to make sure consensus was to delete, which is exactly what he does when closing. And the majority of CSD deletions are of clumsy, unsourced self-promotion pages that take less than ten seconds to assess and confirm as such. I know for a fact that {{u|Liz}} -- number #3 on that list -- carefully reviews every speedy-tagged page before deleting it, and routinely rejects such requests when they do not meet CSD. {{pb}} {{u|Levivich}}, you are making some serious accusations here, with zero evidence to support them. I don't think this will help your case here. ] ] 16:24, 4 January 2025 (UTC) | |||
*:::::::I have no case here and I'm making no accusations. If you want to believe that a person has read over a million XfDs, you go ahead and believe that. I don't, because even over 15 years, that would be 182 XfDs a day every day with no days off, and I don't believe anyone does, or is capable of doing, that much XfD reading. ] (]) 17:03, 4 January 2025 (UTC) | |||
*:::::::: {{u|Levivich}} You're completely wrong, because you're missing a number of very obvious issues with those statistics. | |||
*::::::::: Not even a majority of Explicit's deletions are XfDs - most are speedies | |||
*::::::::: Many are G13 deletions, which don't need any analysis, they're simply done on timeframe. You can do those in seconds. Similarly U1, R2 and F8 | |||
*::::::::: Most articles have a talk page, which also needs to be deleted (G8), so deleting most articles means two deletions. | |||
*::::::::: RfDs don't take very long to close at all; most are unanimous | |||
*::::::::: Of Explicit's last 500 deletions, only 58 have been XfDs (and their associated talk pages where they existed, making 97 deletions). | |||
*::::::::: The vast majority of the rest have been G13, G8, U1, R2 and F8. | |||
*:::::::: Just in the interests of accuracy, you know. ] 17:18, 4 January 2025 (UTC) | |||
*:::::::::Owen: {{tqq|Explicit didn't have to read over one million pages, because most of those deletions were the result of an XfD, not a CSD}} | |||
*:::::::::BK: {{tqq|Not even a majority of Explicit's deletions are XfDs - most are speedies}} | |||
*:::::::::I don't know which one of you is wrong but it doesn't matter. Explicit did not read 1 million pages over the last 15 years prior to deleting them, whether they were XfDs or CSDs. Nobody is reading 182 Misplaced Pages pages 7 days a week for 15 years. Nobody has read a million Misplaced Pages pages. No human can thoroughly review 182 things a day. It's mathematics, guys. If a person spends 5 minutes on each review -- which isn't a thorough review of anything -- that's 15 hours per day to do 182 reviews. Nobody is spending that much time, 7 days a week, for 15 years. Even if the person spends just 1 minute, that's over 3 hours a day... every day for 15 years. The math proves that Explicit is spending maybe 1 minute per deletion. | |||
*:::::::::Now: an admin spending 1 minute per deletion is not providing any real kind of level of oversight that can't be provided by a non-admin closing the XfD and the admin then spending 1 minute reviewing the close and deleting the page. | |||
*:::::::::The math and the stats don't lie. The suggestion that admins carefully review every page before deleting them is plainly not true. The suggestion that if admins didn't do that, they'd be taken to ANI, is also plainly not true. We have admins spending a minute or less per deletion and not being taken to ANI. These are facts. ] (]) 19:31, 4 January 2025 (UTC) | |||
*:::::::::: Your "math" is based on a lot of faulty assumptions and oversimplifications, which you seem to be persisting in despite others trying to correct you. ]] 20:42, 4 January 2025 (UTC) | |||
*:::::::::: Those are ''not'' facts, because your data is faulty. Your reply simply tells me that you didn't read what I wrote. Someone with the editing pattern of Explicit ''isn't'' reviewing 182 things a day. Nowhere near that. Quite apart from the talk page issue (which, if every page had a talk page, would immediately halve the number of reviews) in some of the speedy categories I mention there's nothing ''to'' review; it's simply pressing a button. Indeed, deleting the articles in the stale G13 queue, orphaned G8 queue, or the F5 or F8 category, can result in dozens - even hundreds - of deletions in one button press. As an example, on the 3rd January alone, Explicit deleted 113 F8 images, plus 83 talk pages of those images. That's 196 deletions, all done in a few seconds, as there's nothing to review. In comparison, the number of XfDs he closed that day was 25. ] 21:21, 4 January 2025 (UTC) | |||
*:::::::::::Cryptic: {{tqq|We expect admins to ''look'' at what they're deleting.}} | |||
*:::::::::::BK: {{tqq|That's 196 deletions, all done in a few seconds, as there's nothing to review.}} | |||
*:::::::::::I disagreed with Cryptic's point, and thank you BK for providing an example of exactly what I'm talking about. We do not expect admins to review (or "''look'' at") what they're deleting, sometimes there's not even anything to look at. Admins delete things without reviewing what they're deleting all the time, and they're not hauled off to ANI for it. BK, you're arguing with me, but you're not actually disagreeing with what I'm saying. You're not arguing, as others have said above, that all admins review everything they delete before they delete it, or that this is the community expectation. ] (]) 22:24, 4 January 2025 (UTC) | |||
*:::::::::::: I'm simply pointing out that the claim you have made {{tq|The math proves that Explicit is spending maybe 1 minute per deletion. Now: an admin spending 1 minute per deletion is not providing any real kind of level of oversight that can't be provided by a non-admin closing the XfD and the admin then spending 1 minute reviewing the close and deleting the page.}} is not correct; Explicit is not spending 1 minute per XfD, he is spending 1 minute per ''deletion'', and because in many cases the admin pressing the button is simply completing an automated process (in the case of G13, that the article has not been edited for six months; in the case of F8, that the identical file exists at Commons), and deleting many files in one button-press, that skews the data. Now, if you are saying that an admin is ''closing AfDs or deleting G7 or G11 speedies at a high rate'' (not just "deleting things"), and their speed suggests they aren't reviewing them carefully - ''then'' you have a point. ] 23:33, 4 January 2025 (UTC) | |||
*:::::::::::::Not actually reviewing deletions to make sure they were correct was one of the two major factors that resulted in Arbcom desysopping ] (the other was a refusal to engage with good-faith queries about his actions and refusing to engage at all with IP editors). As others have said, how long it takes to ensure that what you are deleting should be deleted varies considerably. For G13s it's as simple as checking that the bot has correctly determined that a given timestamp was more than 6 months ago - either the entire batch will be correct or the entire batch will be incorrect. Most of the deletions I do are the result of RfDs; after determining that the consensus is to delete (which is usually very simple) I delete the page. The software then tells me if the page has a talk page, and at least 9 times out of 10 (maybe more) it takes less than a second to verify there isn't any reason they are G8 exempt. There is no excuse for not doing the reviewing properly though, because there ''are'' exceptions. ] (]) 00:22, 5 January 2025 (UTC)
*:::::::::::::I didn't say Explicit is spending 1 minute per XfD, I said "per deletion." And your point about "many files in one button-press" is literally the same as my point, which is that admins ''don't'' carefully review every single thing they delete, they sometimes delete many files in one button press (196 deletions in a few seconds is an example). So, you agree with me that "we expect admins to look at what they're deleting" is not correct, it's actually disproven by the data. ] (]) 00:37, 5 January 2025 (UTC) | |||
*::::::::::::::Admins ''are'' expected to review everything they delete. It's just that in some cases many pages can be reviewed together very quickly. It is also possible to spend time carefully reviewing many pages in detail, assembling a list of which should be deleted, and then deleting the pages on that list together with a single click. ] (]) 01:08, 5 January 2025 (UTC)
*::::::::::::::It was in the context of XfDs, however ("{{tq|If you want to believe that a person has read over a million XfDs, you go ahead and believe that}}"). ] 08:03, 5 January 2025 (UTC) | |||
*:::::::::::::::That was in response to someone saying they were mostly XfDs. The same point holds if they were mostly CSDs. That's why I said it doesn't matter whether they were mostly XfDs or CSDs. Regardless of whether it's XfD or CSD, when someone makes 500k or 1 million deletions, we know they didn't look at each and every individual thing (page, file, whatever) that was deleted, nor do we expect them to. (Which I said in response to people saying we expect admins to look at every single thing they delete.) ] (]) 13:18, 5 January 2025 (UTC) | |||
*:::::By pressing the delete button you are asserting that at least one of the following is true: | |||
*:::::*The page met the specified speedy deletion criterion (e.g. if the deletion log says G12 you are asserting the page, including all of its old revisions, was copyvio). | |||
*:::::*The page was eligible for PROD, this was the first time that it was prodded and nobody has objected to the prod. | |||
*:::::*The deletion of the page was discussed at the relevant XfD (or, exceptionally, in an RfC) and the consensus of that discussion was to delete the page.
*:::::If an admin is not prepared to put their name to the truthfulness of that assertion then they should not be deleting the page. ] (]) 16:12, 4 January 2025 (UTC) | |||
*{{tq|My own opinion is that if it is necessary sometimes to allow a practice that is contrary to the guideline, the guideline should be revised, so as to reduce the load at DRV, but that is only my opinion|quotes=y}} - actually, {{u|Robert McClenon}}, your opinion is solidly anchored in policy. ] tells us: {{tq|the written rules themselves do not set accepted practice. Rather, they document already-existing community consensus regarding what should be accepted and what should be rejected.}} If CfD is working fine as it is, let's update policy to reflect the practice. ] ] 16:09, 2 January 2025 (UTC) | |||
*Regarding the ], the guideline on non-admin "delete" closures is at ] and explicitly includes exceptions for TfD and CfD, so there isn't currently a conflict between existing practice and deletion guidelines. ] ] 08:00, 9 January 2025 (UTC) | |||
== Should ] include mention of AI-generated comments? == | |||
Using AI to write your comments in a discussion makes it difficult for others to assume that you are discussing in good faith, rather than trying to use AI to argue someone into exhaustion (see example of someone using AI in their replies ). More fundamentally, ] can't apply to the AI itself as AI lacks intentionality, and it is difficult for editors to assess how much of an AI-generated comment reflects the training of the AI vs. the actual thoughts of the editor. | |||
Should ] be amended to include that using AI to generate your replies in a discussion runs counter to demonstrating good faith? ] (]) 00:23, 2 January 2025 (UTC)
* '''Yes''', I think this is a good idea. ] (]) 00:39, 2 January 2025 (UTC) | |||
:'''No'''. As with all the other concurrent discussions (how many times do we actually need to discuss the exact same FUD and scaremongering?) the problem is not AI, but rather inappropriate use of AI. What we need to do is to (better) explain what we actually want to see in discussions, not vaguely defined bans of swathes of technology that, used properly, can aid communication. ] (]) 01:23, 2 January 2025 (UTC) | ||
::Note that this topic is discussing using AI to ''generate'' replies, as opposed to using it as an aid (e.g. asking it to edit for grammar, or conciseness). As the above concurrent discussion demonstrates, users are already using AI to generate their replies in AfD, so it isn't scaremongering but an actual issue. | |||
::] also does not ban anything ("Showing good faith is not required"), but offers general advice on demonstrating good faith. So it seems like the most relevant place to include mention of the community's concerns regarding AI-generated comments, without outright banning anything. ] (]) 01:32, 2 January 2025 (UTC) | |||
:::And as pointed out, multiple times in those discussions, different people understand different things from the phrase "AI-generated". The community's concern is not AI-generated comments, but comments that do not clearly and constructively contribute to a discussion - ''some'' such comments are AI-generated, some are not. This proposal would, just as all the other related ones, cause actual harm when editors falsely accuse others of using AI (and this ''will'' happen). ] (]) 02:34, 2 January 2025 (UTC) | |||
::::Nobody signed up to argue with bots here. If you're pasting someone else's comment into a prompt and asking the chatbot to argue against that comment and just posting it here, that's a real problem and absolutely should not be acceptable. ] (]) 03:31, 2 January 2025 (UTC)
:::::Thank you for the assumption of bad faith and demonstrating one of my points about the harm caused. Nobody is forcing you to engage with bad-faith comments, but whether something is or is not bad faith needs to be determined by its content not by its method of generation. Simply using an AI demonstrates neither good faith nor bad faith. ] (]) 04:36, 2 January 2025 (UTC) | |||
:::::I don't see why we have any particular reason to suspect a respected and trustworthy editor of using AI. '']'' (] — ]) 14:31, 2 January 2025 (UTC)
::::I'm one of those people who clarified the difference between AI-generated vs. edited, and such a difference could be made explicit with a note. Editors are already accusing others of using AI. Could you clarify how you think addressing AI in ] would cause actual harm? ] (]) 04:29, 2 January 2025 (UTC) | |||
:::::By encouraging editors to accuse others of using AI, by encouraging editors to dismiss or ignore comments because they suspect that they are AI-generated rather than engaging with them. @] has already encouraged others to ignore my arguments in this discussion because they suspect I might be using an LLM and/or be a bot (for the record I'm neither). ] (]) 04:33, 2 January 2025 (UTC) | |||
::::::I think {{u|bloodofox}}'s ] was about "you" in the rhetorical sense, not "you" as in Thryduulf. ] (]) 11:06, 2 January 2025 (UTC) | |||
:::::Given your relentlessly pro-AI comments here, it seems that you'd be A-OK with just chatting with a group of chatbots here — or leaving the discussion to them. However, most of us clearly are not. In fact, I would immediately tell someone to get lost were it confirmed that indeed that is what is happening. I'm a human being and find the notion of wasting my time with chatbots on Misplaced Pages to be incredibly insulting and offensive. ] (]) 04:38, 2 January 2025 (UTC) | |||
::::::My comments are neither pro-AI nor anti-AI, indeed it seems that you have not understood pretty much anything I'm saying. ] (]) 04:43, 2 January 2025 (UTC) | |||
:::::::Funny, you've done nothing here but argue for more generative AI on the site and now you seem to be arguing to let chatbots run rampant on it while mocking anyone who doesn't want to interface with chatbots on Misplaced Pages. Hey, why not just sell the site to Meta, am I right? ] (]) 04:53, 2 January 2025 (UTC) | |||
::::::::I haven't been arguing for more generative AI on the site. I've been arguing against banning it on the grounds that such a ban would be unclear, unenforceable, wouldn't solve any problems (largely because whether something is AI or not is completely irrelevant to the matter at hand) but would instead cause harm. Some of the issues identified are actual problems, but AI is not the cause of them and banning AI won't fix them. | |||
::::::::I'm not mocking anybody, nor am I advocating to {{tpq|let chatbots run rampant}}. I'm utterly confused why you think I might advocate for selling Misplaced Pages to Meta (or anyone else for that matter)? Are you actually reading anything I'm writing? You clearly are not understanding it. ] (]) 05:01, 2 January 2025 (UTC) | |||
:::::::::So we're now in 'everyone else is the problem, not me!' territory now? Perhaps try communicating in a different way because your responses here are looking very much like the typical AI apologetics one can encounter on just about any contemporary LinkedIn thread from your typical FAANG employee. ] (]) 05:13, 2 January 2025 (UTC) | |||
::::::::::No, this is not a {{tpq|everyone else is the problem, not me}} issue because most other people appear to be able to understand my arguments and respond to them appropriately. Not everybody agrees with them, but that's not an issue. | |||
::::::::::I'm not familiar with Linkedin threads (I don't use that platform) nor what a "FAANG employee" is (I've literally never heard the term before now) so I have no idea whether your characterisation is a compliment or a personal attack, but given your comments towards me and others you disagree with elsewhere I suspect it's closer to the latter. | |||
::::::::::AI is a tool. Just like any other tool it can be used in good faith or in bad faith, it can be used well and it can be used badly, it can be used in appropriate situations and it can be used in inappropriate situations, the results of using the tool can be good and the results of using the tool can be bad. Banning the tool inevitably bans the good results as well as the bad results but doesn't address the reasons why the results were good or bad and so does not resolve the actual issue that led to the bad outcomes. ] (]) 12:09, 2 January 2025 (UTC) | |||
:::::::::::In the context of generating comments to other users though, AI is much easier to use for bad faith than for good faith. LLMs don't understand Misplaced Pages's policies and norms, and so are hard to utilize to generate posts that productively address them. By contrast, bad actors can easily use LLMs to make low quality posts to waste people's time or wear them down. | |||
:::::::::::In the context of generating images, or text for articles, it's easy to see how the vast majority of users using AI for those purposes is acting in good faith as these are generally constructive tasks, and most people making bad faith changes to articles are either obvious vandals who won't bother to use AI because they'll be reverted soon anyways, or trying to be subtle (povpushers) in which case they tend to want to carefully write their own text into the article. | |||
:::::::::::It's true that AI "is just a tool", but when that tool is much easier to use for bad faith purposes (in the context of discussions) then it raises suspicions about why people are using it. ] (]) 22:44, 2 January 2025 (UTC) | |||
::::::::::::{{tq|LLMs don't understand Misplaced Pages's policies and norms}} They're not designed to "understand" them since the policies and norms were designed for human cognition. The fact that AI is used rampantly by people acting in bad faith on Misplaced Pages does not inherently condemn the AI. To me, it shows that it's too easy for vandals to access and do damage on Misplaced Pages. Unfortunately, the type of vetting required to prevent that at the source would also potentially require eliminating IP-editing, which won't happen. <sub>Duly signed,</sub> ''']'''-''<small>(])</small>'' 14:33, 15 January 2025 (UTC) | |||
:::::::You mentioned "FUD". That acronym, "fear, uncertainty and doubt," is used in precisely two contexts: pro-AI propagandizing and persuading people who hold memecoin crypto to continue holding it. Since this discussion is not about memecoin crypto, that would suggest you are using it in a pro-AI context. I will note, fear, uncertainty and doubt is not my problem with AI. Rather it's anger, aesthetic disgust and feeling disrespected when somebody makes me talk to their chatbot. ] (]) 14:15, 14 January 2025 (UTC)
::::::::{{tpq|That acronym, "fear, uncertainty and doubt," is used in precisely two contexts}} is factually incorrect. | |||
::::::::FUD both predates AI by many decades (indeed, if you'd bothered to read the ] article you'd learn that the concept was first recorded in 1693, the exact formulation dates from at least the 1920s, and its use in technology contexts originated in 1975 with mainframe computer systems) and is used in many more contexts than the two you list. That its use, even in just AI contexts, is limited to pro-AI advocacy is ludicrous (even ignoring things like ]); examples can be found in these sprawling discussions from those opposing AI use on Misplaced Pages. ] (]) 14:52, 14 January 2025 (UTC)
:'''Not really''' – I agree with Thryduulf's arguments on this one. Using AI to help tweak or summarize or "enhance" replies is of course not bad faith – the person is trying hard. Maybe English is their second language. Even for replies 100% AI-generated the user may be an ESL speaker struggling to remember the right words (I always forget 90% of my French vocabulary when writing anything in French, for example). In this case, I don't think we should make a ''blanket'' assumption that using AI to generate comments is not showing good faith. '']'' (] — ]) 02:35, 2 January 2025 (UTC) | |||
*'''Yes''' because generating walls of text is not good faith. People "touching up" their comments is also bad (for starters, if you lack the English competency to write your statements in the first place, you probably lack the competency to tell if your meaning has been preserved or not). Exactly ''what'' AGF should say needs work, but something needs to be said, and <s>AGF</s>DGF is a good place to do it. ] (]) 02:56, 2 January 2025 (UTC) | |||
*:Not all walls of text are generated by AI, not all AI generated comments are walls of text. Not everybody who uses AI to touch up their comments lacks the competencies you describe, not everybody who does lack those competencies uses AI. It is not always possible to tell which comments have been generated by AI and which have not. This proposal is not particularly relevant to the problems you describe. ] (]) 03:01, 2 January 2025 (UTC) | |||
:::Someone has to ask: Are you generating all of these pro-AI arguments using ChatGPT? It'd explain a lot. If so, I'll happily ignore any and all of your contributions, and I'd advise anyone else to do the same. We're not here to be flooded with LLM-derived responses. ] (]) 03:27, 2 January 2025 (UTC) | |||
::::That you can't tell whether my comments are AI-generated or not is one of the fundamental problems with these proposals. For the record they aren't, nor are they pro-AI - they're simply anti throwing out babies with bathwater. ] (]) 04:25, 2 January 2025 (UTC) | |||
:::::I'd say it also illustrates the serious danger: We can no longer be sure that we're even talking to other people here, which is probably the most notable shift in the history of Misplaced Pages. ] (]) 04:34, 2 January 2025 (UTC) | |||
::::::How is that a "serious danger"? If a comment makes a good point, why does it matter whether it was AI generated or not? If it doesn't make a good point, why does it matter if it was AI generated or not? How will these proposals resolve that "danger"? How will they be enforceable? ] (]) 04:39, 2 January 2025 (UTC)
:::::::Misplaced Pages is made for people, by people, and I like most people will be incredibly offended to find that we're just playing some kind of LLM pong with a chatbot of your choice. You can't be serious. ] (]) 04:40, 2 January 2025 (UTC) | |||
::::::::You are entitled to that philosophy, but that doesn't actually answer any of my questions. ] (]) 04:45, 2 January 2025 (UTC) | |||
:::::::"why does it matter if it was AI generated or not?" | |||
:::::::Because it takes little effort to post a lengthy, low quality AI-generated post, and a lot of effort for human editors to write up replies debunking them. | |||
:::::::"How will they be enforceable? " | |||
:::::::] isn't meant to be enforced. It's meant to explain to people how they can demonstrate good faith. Posting replies to people (who took the time to write them) that are obviously AI-generated harms the ability of those people to assume good faith. ] (]) 05:16, 2 January 2025 (UTC) | |||
:The linked "example of someone using AI in their replies" appears – to me – to be a non-AI-generated comment. I think I preferred the allegedly AI-generated comments from that user (]). The AI was at least superficially polite. ] (]) 04:27, 2 January 2025 (UTC) | |||
::Obviously the person screaming in all caps that they use AI because they don't want to waste their time arguing is not using AI for that comment. Their first post calls for the article to be deleted for not "" and "merely" reiterating what other sources have written. | |||
::Yes, after a human had wasted their time explaining all the things wrong with its first post, then the bot was able to write a second post which ''looks'' ok. Except it only superficially ''looks'' ok, it doesn't actually accurately describe the articles. ] (]) 04:59, 2 January 2025 (UTC) | ||
:::Multiple humans have demonstrated in these discussions that humans are equally capable of writing posts which superficially ''look'' OK but don't actually accurately relate to anything they are responding to. ] (]) 05:03, 2 January 2025 (UTC) | |||
::::But I can assume that everyone here is acting in good faith. I can't assume good faith in the globally-locked sock puppet spamming AfD discussions with low effort posts, whose bot is just saying whatever it can to argue for the deletion of political pages the editor doesn't like. ] (]) 05:09, 2 January 2025 (UTC) | |||
:::::True, but I think that has more to do with the "globally-locked sock puppet spamming AfD discussions" part than with the "some of it might be " part. ] (]) 07:54, 2 January 2025 (UTC) | |||
::::::All of which was discovered because of my suspicions from their inhuman, and meaningless replies. "Reiteration isn't the problem; redundancy is," maybe sounds pithy in a vacuum, but this was written in reply to me stating that we aren't supposed to be doing OR but reiterating what the sources say. | |||
::::::"Your criticism feels overly prescriptive, as though you're evaluating this as an academic essay" also ''sounds good'', until you realize that the bot is actually criticizing its own original post. | |||
::::::The fact that my suspicions about their good faith were ultimately validated only makes it even harder for me to assume good faith in users who sound like ChatGPT. ] (]) 08:33, 2 January 2025 (UTC) | |||
:::::::I wonder if we need some other language here. I can understand feeling like this is a bad interaction. There's no sense that the person cares; there's no feeling like this is a true interaction. A contract lawyer would say that there's no ], and there can't be, because there's no mind in the AI, and the human copying from the AI doesn't seem to be interested in engaging their brain. | |||
:::::::But... do you actually think they're doing this for the purpose of ''intentionally'' harming Misplaced Pages? Or could this be explained by other motivations? ] – or to anxiety, insecurity (will they hate me if I get my grammar wrong?), incompetence, negligence, or any number of other "understandable" (but still something ]- and even block-worthy) reasons. ] (]) 08:49, 2 January 2025 (UTC) | |||
::::::::The ] has a header at the top asking people not to template them because it is "impersonal and disrespectful", instead requesting "please take a moment to write a comment below '''in your own words'''" | |||
::::::::Does this look like acting in good faith to you? Requesting other people write personalized responses to them while they respond with an LLM? Because it looks to me like they are trying to waste other people's time. ] (]) 09:35, 2 January 2025 (UTC) | |||
:::::::::] means that you assume people aren't deliberately screwing up on purpose. Humans are self-contradictory creatures. I generally do assume that someone who is being hypocritical hasn't noticed their contradictions yet. ] (]) 07:54, 3 January 2025 (UTC) | |||
::::::::::"Being hypocritical" in the abstract isn't the problem, it's the fact that asking people to put effort into their comments, while putting in minimal effort into your own comments appears bad faith, especially when said person says they don't want to waste time writing comments to stupid people. The fact you are arguing AGF for this person is both astounding and disappointing. ] (]) 16:08, 3 January 2025 (UTC) | |||
:::::::::::It feels like there is a lack of reciprocity in the interaction, even leaving aside the concern that the account is a block-evading sock. | |||
:::::::::::But I wonder if you have read AGF recently. The first sentence is "'''Assuming good faith''' ('''AGF''') means assuming that people are not deliberately ''trying'' to hurt Misplaced Pages, even when their actions are harmful." | |||
:::::::::::So we've got some of this (e.g., harmful actions). But do you really believe this person woke up in the morning and decided "My main goal for today is to deliberately hurt Misplaced Pages. I might not be successful, but I sure am going to try hard to reach my goal"? ] (]) 23:17, 4 January 2025 (UTC) | |||
::::::::::::Trying to hurt Misplaced Pages doesn't mean they have to literally think "I am trying to hurt Misplaced Pages", it can mean a range of things, such as "I am trying to troll Wikipedians". A person who thinks a cabal of editors is guarding an article page, and that they need to harass them off the site, may think they are improving Misplaced Pages, but at the least I wouldn't say that they are acting in good faith. ] (]) 23:27, 4 January 2025 (UTC) | |||
:::::::::::::Sure, I'd count that as a case of "trying to hurt Misplaced Pages-the-community". ] (]) 06:10, 5 January 2025 (UTC) | |||
* The issues with AI in discussions is not related to good faith, which is narrowly defined to intent. ] (]) 04:45, 2 January 2025 (UTC) | |||
*:In my mind, they are related inasmuch as it is much more difficult for me to ascertain good faith if the words are eminently not written by the person I am speaking to in large part, but instead generated based on an unknown prompt in what is likely a small fraction of the expected time. To be frank, in many situations it is difficult to avoid the conclusion that the disparity in effort is being leveraged in something less than good faith. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 05:02, 2 January 2025 (UTC) | |||
*::Assume good faith, don't ascertain! LLM use can be deeply unhelpful for discussions and the potential for misuse is large, but in the most recent discussion I've been involved with where I observed an LLM post, it was responded to by another LLM post, and I believe both users were doing this in good faith. ] (]) 05:07, 2 January 2025 (UTC)
*:::All I mean to say is it should be licit that unhelpful LLM use should be something that can be mentioned like any other unhelpful rhetorical pattern. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 05:09, 2 January 2025 (UTC) | |||
*::::Sure, but ] doesn't mention any unhelpful rhetorical patterns. ] (]) 05:32, 2 January 2025 (UTC) | |||
*::::The fact that everyone (myself included) defending "LLM use" says "use" rather than "generated", is a pretty clear sign that no one really wants to communicate with someone using "LLM generated" comments. We can argue about bans (not being proposed here), how to know if someone is using LLM, the nuances of "LLM use", etc., but at the very least we should be able to agree that there are concerns with LLM generated replies, and if we can agree that there are concerns then we should be able to agree that somewhere in policy we should be able to find a place to express those concerns. ] (]) 05:38, 2 January 2025 (UTC) | |||
*:::::...or they could be saying "use" because "using LLMs" is shorter and more colloquial than "generating text with LLMs"? ] (]) 06:19, 2 January 2025 (UTC) | |||
*::::::Seems unlikely when people justify their use for editing (which I also support), and not for generating replies on their behalf. ] (]) 06:23, 2 January 2025 (UTC) | |||
*:::::::This is just semantics. | |||
*:::::::For instance, I am OK with someone using a LLM to post a productive comment on a talk page. I am also OK with someone generating a reply with a LLM that is a productive comment to post to a talk page. I am not OK with someone generating text with an LLM to include in an article, and also not OK with someone using a LLM to contribute to an article. | |||
*:::::::The only difference between these four sentences is that two of them are more annoying to type than the other two. ] (]) 08:08, 2 January 2025 (UTC) | |||
*::::::::Most people already assume good faith in those making productive contributions. In situations where good faith is more difficult to assume, would you trust someone who uses an LLM to generate all of their comments as much as someone who doesn't? ] (]) 09:11, 2 January 2025 (UTC) | |||
*:::::::::Given that LLM-use is completely irrelevant to the faith in which a user contributes, yes. Of course what amount that actually is may be anywhere between completely and none. ] (]) 11:59, 2 January 2025 (UTC) | |||
*::::::::::LLM-use is relevant as it allows bad faith users to disrupt the encyclopedia with minimal effort. Such a user , as well as started and , all using AI. I had previously been involved in a debate with another sock puppet of theirs, but at that time they didn't use AI. Now it seems they are switching to using an LLM just to troll with minimal effort. ] (]) 21:44, 2 January 2025 (UTC) | |||
*:::::::::::LLMs are a tool that can be used by good and bad faith users alike. Using an LLM tells you nothing about whether a user is contributing in good or bad faith. If somebody is trolling they can be, and should be, blocked for trolling regardless of the specifics of how they are trolling. ] (]) 21:56, 2 January 2025 (UTC) | |||
*::::::::::::A can of spray paint, a kitchen knife, etc., are tools that can be used for good or bad, but if you bring them some place where they have few good uses and many bad uses then people will be suspicious about why you brought them. You can't just assume that a tool in any context is equally harmless. Using AI to generate replies to other editors is more suspicious than using it to generate a picture exemplifying a fashion style, or a description of a physics concept. ] (]) 23:09, 2 January 2025 (UTC) | |||
*:::::::::I wouldn't trust anything factual the person would have to say, but I wouldn't assume they were malicious, which is the entire point of ]. ] (]) 16:47, 2 January 2025 (UTC) | |||
*::::::::::] is not a death pact though. At times you should be suspicious. Do you think that if a user, ''who you already have suspicions of'', is also using an LLM to generate their comments, that that doesn't have any effect on those suspicions? ] (]) 21:44, 2 January 2025 (UTC) | |||
*:::::::::::So… If you suspect that someone is not arguing in good faith… just stop engaging them. If they are creating walls of text but not making policy based arguments, they can be ignored. Resist the urge to respond to every comment… it isn’t necessary to “have the last word”. ] (]) 21:57, 2 January 2025 (UTC) | |||
*:::::::::::::As the person ] demonstrates, you can't "just stop engaging them". When they continue, somebody has to engage them in some way. It's not about trying to "have the last word", this is a collaborative project, it generally requires engaging with others to some degree. When someone like the person I linked to above (now a banned sock) spams low quality comments across dozens of AfDs, then they are going to waste people's time, and telling others to just not engage with them is dismissive of that. ] (]) 22:57, 2 January 2025 (UTC)
*:::::::::::::That they've been banned for disruption indicates we can do everything we need to do to deal with bad faith users of LLMs without assuming that everyone using an LLM is doing so in bad faith. ] (]) 00:33, 3 January 2025 (UTC) | |||
*::::::::::::::I don't believe we should assume everyone using an LLM is doing so in bad faith, so I'm glad you think my comment indicates what I believe. ] (]) 01:09, 3 January 2025 (UTC) | |||
:'''No''' -- whatever you think of LLMs, the reason they are so popular is that the people who use them earnestly believe they are useful. Claiming otherwise is divorced from reality. Even people who add hallucinated bullshit to articles are usually well-intentioned (if wrong). ] (]) 06:17, 2 January 2025 (UTC) | |||
*'''Comment''' I have no opinion on this matter, however, note that we are currently dealing with a ] and there's a generalized state of confusion in how to address it. ] (]) 08:54, 2 January 2025 (UTC) | |||
*'''Yes''' I find it incredibly rude for someone to procedurally generate text and then expect others to engage with it as if they were actually saying something themselves. ] (]) 14:34, 2 January 2025 (UTC) | |||
* '''Yes, mention''' that use of an LLM should be disclosed and that failure to do so is like not telling someone you are taping the call. ] (]) 14:43, 2 January 2025 (UTC) | |||
*:I could support general advice that if you're using machine translation or an LLM to help you write your comments, it can be helpful to mention this in the message. The tone to take, though, should be "so people won't be mad at you if it screwed up the comment" instead of "because you're an immoral and possibly criminal person if you do this". ] (]) 07:57, 3 January 2025 (UTC) | |||
: '''No.''' When someone publishes something under their own name, they are incorporating it as their own statement. Plagiarism from an AI or elsewhere is irrelevant to whether they are engaging in good faith. ] (]) 17:29, 2 January 2025 (UTC) | |||
* '''Comment''' LLMs know a few tricks about logical fallacies and some general ways of arguing (rhetoric), but they are incredibly dumb at understanding the rules of Misplaced Pages. You can usually tell this because it looks like incredibly slick and professional prose, but somehow it cannot get even the simplest points about the policies and guidelines of Misplaced Pages. I would indef such users for lacking ]. ] (]) 17:39, 2 January 2025 (UTC) | |||
*:That guideline states "Sanctions such as blocks and bans are always considered a ''last resort'' where all other avenues of correcting problems have been tried and have failed." ] (]) 19:44, 2 January 2025 (UTC) | |||
*:: ] isn't a guideline, but an essay. Relevantly though it is being cited at this very moment in ]. ] (]) 20:49, 2 January 2025 (UTC) | |||
*:::I blocked that user as NOTHERE a few minutes ago after seeing them (using ChatGPT) make suggestions for text to live pagespace while their previous bad behaviors were under discussion. AGF is not a suicide pact. ] (]) 20:56, 2 January 2025 (UTC) | |||
*'''No''' - Not a good or bad faith issue. ] (]) 21:02, 2 January 2025 (UTC) | |||
*'''Yes''' Using a 3rd party service to contribute to the Misplaced Pages on your behalf is clearly bad-faith, analogous to paying someone to write your article. ] (]) 14:39, 3 January 2025 (UTC) | |||
*:It's a stretch to say that a newbie writing a comment using AI is automatically acting in bad faith and not here to build an encyclopedia. ] (]) 16:55, 3 January 2025 (UTC)
*::That's true, but this and other comments here show that not a few editors perceive it as bad-faith, rude, etc. I take that as an indication that we should tell people to avoid doing this when they have enough CLUE to read WP:AGF and are ]. <span style="font-family:Garamond,Palatino,serif;font-size:115%;background:-webkit-linear-gradient(red,red,red,blue,blue,blue,blue);-webkit-background-clip:text;-webkit-text-fill-color:transparent">] ]</span> 23:06, 9 January 2025 (UTC) | |||
*'''Comment''' Large language model AI like Chat GPT are in their infancy. The culture hasn't finished its initial reaction to them yet. I suggest that any proposal made here have an automatic expiration/required rediscussion date two years after closing. ] (]) 22:42, 3 January 2025 (UTC) | |||
*'''No''' – It is a matter of how you use AI. I use Google translate to add trans-title parameters to citations, but I am careful to check for Google's output making for good English as well as reflecting the foreign title when it is a language I somewhat understand. I like to think that I am careful, and I do not pretend to be fluent in a language I am not familiar with, although I usually don't announce the source of such a translation. If an editor uses AI profligately and without understanding the material generated, then that is the sin; not AI itself. ] (]) 05:04, 5 January 2025 (UTC) | |||
*:There's a legal phrase, "when the exception swallows the rule", and I think we might be headed there with the recent LLM/AI discussions. | |||
*:We start off by saying "Let's completely ban it!" Then in discussion we add "Oh, except for this very reasonable thing... and that reasonable thing... and nobody actually meant this other reasonable thing..." | |||
*:The end result is that it's "completely banned" ...except for an apparent majority of uses. ] (]) 06:34, 5 January 2025 (UTC) | |||
*::Do you want us to reply to you, because you are a human? Or are you just posting the output of an LLM without bothering to read anything yourself? ] (]) 06:08, 7 January 2025 (UTC) | |||
*:::Most likely you would reply because someone posted a valid comment and you are assuming they are acting in good faith and taking responsibility for what they post. To assume otherwise is kind of weird and not inline with general Misplaced Pages values. ] (]) 15:19, 8 January 2025 (UTC) | |||
*'''No''' The OP seems to misunderstand ] which is not aimed at weak editors but instead exhorts stronger editors to lead by example. That section already seems to overload the primary point of WP:AGF and adding mention of AI would be quite inappropriate per ]. ]🐉(]) 23:11, 5 January 2025 (UTC) | |||
*'''No'''. Reading the current text of the section, adding text about AI would feel out-of-place for what the section is about. <span class="nowrap">—] (] | ])</span> 05:56, 8 January 2025 (UTC) | |||
*'''No''', this is not about good faith. ] (]) 11:14, 9 January 2025 (UTC) | |||
*'''Yes'''. AI use is ''not'' a demonstration of bad faith (in any case not every new good-faith editor is familiar with our AI policies), but it is equally not a "demonstration of good faith", which is what the ] section is about. | |||
:It seems some editors are missing the point and !voting as if every edit is either a demonstration of good faith or bad faith. Most interactions are neutral and so is most AI use, but I find it hard to imagine a situation where AI use would point ''away'' from unfamiliarity and incompetence (in the CIR sense), and it often (unintentionally) leads to a presumption of laziness and open disinterest. It makes perfect sense to recommend against it. <span style="font-family:Garamond,Palatino,serif;font-size:115%;background:-webkit-linear-gradient(red,red,red,blue,blue,blue,blue);-webkit-background-clip:text;-webkit-text-fill-color:transparent">] ]</span> 22:56, 9 January 2025 (UTC) | |||
::Indeed most kinds of actions don't inherently demonstrate good or bad. The circumspect and neutral observation that {{tq|AI use is ''not'' a demonstration of bad faith... but it is equally not a "demonstration of good faith"}}, does not justify a proposal to one-sidedly say just half. And among all the actions that don't necessarily demonstrate good faith (and don't necessarily demonstrate bad faith either), it is not the purpose of "demonstrate good faith" and the broader guideline, to single out one kind of action to especially mention negatively. ] (]) 04:40, 13 January 2025 (UTC) | |||
*'''Yes'''. Per Dass Wolf, though I would say passing off a completely AI-generated comment as your own ''anywhere'' is inherently bad-faith and one doesn't need to know Wiki policies to understand that. ] (]) 23:30, 9 January 2025 (UTC) | |||
*'''Yes'''. Sure, LLMs may have utility somewhere, and it might be a crutch for people unfamiliar with English, but as I've said above in the other AI RfC, that's a ] issue. This is about comments eating up editor time, energy, about LLMs easily being used to ram through changes and poke at editors in good standing. I don't see a case wherein a prospective editor's command of policy and language is good enough to discuss with other editors while being bad enough to require LLM use. ]<span style="color: #3558b7;"><sup>]</sup>]</span> 01:26, 10 January 2025 (UTC) | |||
*:Good faith is separate from competence. Trying to do good is separate from having skills and knowledge to achieve good results. ] (]) 04:40, 13 January 2025 (UTC) | |||
*'''No''' - anyone using a washing machine to wash their clothes must be evil and inherently lazy. They cannot be trusted. ... Oh, sorry, wrong century. Regards, --] (]) 01:31, 10 January 2025 (UTC) | |||
*'''No''' - As long as a person understands (and knows) what they are talking about, we shouldn't discriminate against folks using generative AI tech for grammar fixes or minor flow improvements. Yes, AI can create walls of text, and make arguments not grounded in policy, but we could do that even without resorting to generative AI. ] (]) 11:24, 13 January 2025 (UTC) | |||
::To expand on my point above. Completely AI generated comments (or articles) are obviously bad, but {{tq|using AI}} should be thrown into the same cross-hairs as completely AI generated comments. ] (]) 11:35, 13 January 2025 (UTC) | |||
:::@] You mean ''shouldn't'' be thrown? I think that would make more sense given the context of your original !vote. <sub>Duly signed,</sub> ''']'''-''<small>(])</small>'' 14:08, 14 January 2025 (UTC) | |||
*'''No'''. Don't make any changes. It's not a good faith/bad faith issue. The 'yes' arguments are most unconvincing with very bizarre analogies to make their point. Here, I can make one too: "Don't edit with AI; you wouldn't shoot your neighbor's dog with a BB-gun, would you?" <sub>Duly signed,</sub> ''']'''-''<small>(])</small>'' 14:43, 13 January 2025 (UTC) | |||
== Edit quality in Android application == | |||
Several contributors, including myself, don't understand the point of the ‘edit quality’ mention in the Android application. | |||
The ] states that edit quality is ‘based on how many of your edits were reverted’. Is this the only criterion used? If so, calling it ‘edit quality’ is presumptuous, not to say irresponsible, given that there are so many possible criteria. What are the different levels of judgement: excellent, good, bad, horrible, etc.? What does this mean for developers, bots, patrollers and administrators? Whatever the case, the consequences of such a statement could be quite serious.
It's clear that the choice to keep this mention in the application doesn't just concern the application developers or even a small team, but all Misplaced Pages contributors, whatever their language. So we think it's important that you answer our questions. <small>(As my English isn't very good, could you answer me in basic English, or at least without figures of speech? Thank you in advance.)</small> ] (]) 13:12, 7 January 2025 (UTC) | |||
:Hi Albag. I agree with the core of your message. But I don't get why you have been redirected to English Village Pump. This should be discussed somewhere on Meta or Mediawiki wiki, as it's relevant for all Misplaced Pages communities, not only the English one. (You and I are from fr-wp.) Best, — ''']''' <sup><small style="border-bottom:1px solid">]</small></sup> 13:28, 7 January 2025 (UTC) | |||
::Thanks ]. According to ], the place to discuss would be there: ]. --] (]) 13:43, 7 January 2025 (UTC)
== Allowing non-admin "delete" closures at RfD == | |||
At ], a few editors ({{u|Enos733}} and {{u|Jay}}, while {{u|Robert McClenon}} and {{u|OwenX}} hinted at it) expressed support for allowing non-administrators to close RfD discussions as "delete". While I don't personally hold strong opinions in this regard, I would like for this idea to be discussed here. ]<sub>]<sub>]</sub></sub> (]/]) 13:13, 7 January 2025 (UTC) | |||
*] --] <sup>(])</sup> 14:10, 7 January 2025 (UTC) | |||
*While I have no issue with the direction the linked discussion has taken, I agree with almost every contributor there: As a practice I have zero interest in generally allowing random editors closing outside their permissions. It might make DRV a more chatty board, granted. ] (]) 15:02, 7 January 2025 (UTC) | |||
*:Tamzin makes a reasonable case in their comment below. When we have already chosen to trust certain editors with advanced permissions, we might allow those folks to utilize them as fully as accepted practice allows. Those humans already have skin in the game. They are unlikely to act rashly. ] (]) 19:32, 7 January 2025 (UTC) | |||
* To me, non-admin delete closes at any XfD have always seemed inconsistent with what we say about how adminship and discussion closing work. I would be in violation of admin policy if I deleted based on someone else's close without conducting a full review myself, in which case, what was the point of their close? It's entirely redundant to my own work. That said, I can't really articulate a reason that this should be allowed at some XfDs but not others, and it seems to have gone fine at CfD and TfD. I guess call me neutral. {{PB}} What I'd be more open to is allowing page movers to do this. Page movers do have the tools to turn a bluelink red, so it doesn't create the same admin accountability issue if I'm just cleaning up the stray page left over from a page mover's use of a tool that they were duly granted and subject to their own accountability rules for. We could let them move a redirect to some other plausible title (this would violate ] as currently written but I think I'd be okay with making this a canonical exception), and/or allow moving to some draftspace or userspace page and tagging for G6, as we do with {{tl|db-moved}}. I'll note that when I was a non-admin pagemover, I did close a few things as delete where some edge case applied that let me effect the deletion using only suppressredirect, and no one ever objected. <span style="font-family:courier"> -- ]</span><sup class="nowrap">[]]</sup> <small>(])</small> 19:07, 7 January 2025 (UTC) | |||
*::I see that I was sort of vague, which is consistent with the statement that I hinted at allowing non-admin delete closures. My main concern is that I would like to see our guidelines and our practice made consistent, either by changing the guidelines or changing the practice. It appears that there is a rough consensus emerging that non-admin delete closures should continue to be disallowed in RFD, but that CFD may be a special case. So what I am saying is that if, in practice, we allow non-admin Delete closures at CFD, the guideline should say something vague to that effect. | |||
*::I also see that there is a consensus that DRV can endorse irregular non-admin closures, including irregular non-admin Delete closures. Specifically, it isn't necessary for DRV to vacate the closure for an ] admin to close. A consensus at DRV, some of whose editors will be uninvolved admins, is at least as good a close as a normal close by an uninvolved admin. | |||
*::Also, maybe we need clearer guidance about non-admin Keep closures of AFDs. I think that if an editor is not sure whether they have sufficient experience to be closing AFDs as Keep, they don't have enough experience. I think that the guidance is clear enough in saying that ] applies to non-admin closes, but maybe it needs to be further strengthened, because at DRV we sometimes deal with non-admin closes where the closer doesn't respond to inquiries, or is rude in response to them. | |||
*::Also, maybe we need clearer guidance about non-admin No Consensus closures of AFDs. In particular, a close of No Consensus is a contentious closure, and should either be left to an admin, or should be Relisted. | |||
::] (]) 19:20, 7 January 2025 (UTC) | |||
:::As for {{tq| I can't really articulate a reason that this should be allowed at some XfDs}}, the argument is that more work is needed to enact closures at TfD and CfD (namely orphaning templates and emptying/moving/merging categories). Those extra steps aren't present at RfD. At most, there are times when it's appropriate to unlink the redirect or add ]s, but those are automated steps that ] handles. From my limited experience at TfD and CfD though, it does seem that the extra work needed at closure does not compensate for the extra work from needing two people reviewing the closure (especially at CfD because a bot handles the clean-up). Consistency has come up and I would much rather consistently disallow non-admin delete closures at all XfD venues. I know it's tempting for non-admins to think they're helping by enacting these closures but it's not fair for them to be spinning their wheels. As for moving redirects, that's even messier than deleting them. There's a reason that ] advises not to move redirects except for limited cases when preserving history is important. --] <sup>(])</sup> 20:16, 7 January 2025 (UTC)
::@]: I do have one objection to this point of redundancy, which you are ]. Here, an AfD was closed as "transwiki and delete", however, the admin who did the closure does not have the technical ability to transwiki pages to the English Wikibooks, meaning that I, who does, had to determine that the outcome was actually to transwiki rather than blindly accepting a request at ]. Then, I had to mark the pages for G6 deletion, that way an admin, in this case you, could determine that the page was ready to be deleted. Does this mean that that admin who closed the discussion shouldn't have closed it, since they only have the technical ability to delete, not transwiki? Could I have closed it, having the technical ability to transwiki, but not delete? Either way, someone else would have had to review it. Or, should only people who have importing rights on the target wiki ''and'' admin rights on the English Misplaced Pages be allowed to close discussions as "transwiki and delete"? ]<sub>]<sub>]</sub></sub> (]/]) 12:04, 8 January 2025 (UTC) | |||
*I do support being explicit when a non-administrator can close a discussion as "delete" and I think that explicitly extending to RfD and CfD is appropriate. First, there can be a backlog in both of these areas and there are often few comments in each discussion (and there is usually not the same passion as in an AfD). Second, the delete close of a non-administrator is reviewed by an administrator before action is taken to delete the link or category (a delete close is a two-step process, the writeup and the delete action, so in theory the administrator's workload is reduced). Third, non-admins do face ] for their actions, and can be subject to sanction. Fourth, the community has a role in reviewing closing decisions at DRV, so there is already a process in place to check an inexperienced editor or poor close. Finally, with many, if not most, discussions for deletion the outcome is largely straightforward. --] (]) 20:01, 7 January 2025 (UTC)
*There is currently no rule against non-admin delete closures as far as I know; the issue is the practical one that you don't have the ability to delete. However, I ''have'' made non-admin delete closures at AfD. This occurred when an admin deleted the article under consideration (usually for COPYVIO) without closing the related AfD. The closures were not controversial and there was no DRV. ] ] 20:31, 7 January 2025 (UTC) | |||
::The situation you're referring to is an exception allowed per ]: {{tq|If an administrator has deleted a page (including by speedy deletion) but neglected to close the discussion, anyone with a registered account may close the discussion provided that the administrator's name and deletion summary are included in the closing rationale.}} --] <sup>(])</sup> 20:37, 7 January 2025 (UTC) | |||
*Bad idea to allow; this sort of closure is just busy work that imposes more work on the admin, who then has to review the arguments, close, and then delete. ] (]) 22:05, 7 January 2025 (UTC)
* Is this the same as ] above? ]] 23:04, 7 January 2025 (UTC) | |||
**Yes, ]. Same issue coming from the same ]. ] (]) 03:52, 8 January 2025 (UTC) | |||
* (1) As I've also ], the deletion process guidelines at ] do say non-admins shouldn't do "delete" closures and do recognize exceptions for CfD and TfD. There isn't a current inconsistency there between guidelines and practice. <br>(2) In circumstances where we do allow for non-admin "delete" closures, I would hope that the implementing admin isn't fully ] before implementing, but rather giving deference to any reasonable closure. That's how it goes with ] closers asking for technical help implementing a "moved" closure at ] (as noted at ], the closure will "generally be respected by the administrator (or page mover)" but can be reverted by an admin if "clearly improper"). ] ] 08:41, 9 January 2025 (UTC) | |||
*'''Comment''' - A couple things to note about the CFD process: It very much requires work by admins. The non-admin notes info about the close at WT:CFD/Working, and then an admin enters the info on the CFD/Working page (which is protected) so that the bot can perform the various actions. Remember that altering a category is potentially more labour intensive than merely editing or deleting a single page - every page in that category must be edited, and then the category deleted. (There are other technical things involved, like the mess that template transclusion can cause, but let's keep it simple.) So I wouldn't suggest that that process is very useful as a precedent for anything here. It was done at a time when there was a bit of a backlog at CfD, and this was a solution some found to address that. Also - since then, I think at least one of the regular non-admin closers there is now an admin. So there is that as well. - <b>]</b> 09:14, 9 January 2025 (UTC) | |||
*If the expectation is that an admin needs to review the deletion discussion to ensure they agree with that outcome before deleting via G6, as multiple people here are suggesting, then I'm not sure this is worthwhile. However, I have had many admins delete pages I've tagged with G6, and I have been assuming that they only check that the discussion was indeed closed as delete, and trust the closer to be responsible for the correctness of it. This approach makes sense to me, because if a non-admin is competent to close and be responsible for any other outcome of a discussion, I don't see any compelling reason they can't be responsible for a delete outcome and close accordingly. <span style="white-space: nowrap;">—] <sup>(]·])</sup></span> 19:51, 9 January 2025 (UTC) | |||
*:Some closers, and you're among them, have closing accuracy similar to many sysops. But the sysop can't/shouldn't "trust" that your close is accurate. Trustworthy though you are, the sysop must, at very minimum, check firstly that the close with your signature on it was actually made by you (signatures are easily copied), secondly that the close wasn't manifestly unreasonable, and thirdly that the CSD is correct. ] holds the deleting sysop responsible for checking that the CSD were correctly applied. G6 is for uncontroversial deletions, and if there's been an XFD, then it's only "uncontroversial" if the XFD was unanimous or nearly so. We do have sysops who'll G6 without checking carefully, but they shouldn't. Basically, non-admin closing XFDs doesn't save very much sysop time. I think that if your motive as a non-admin is to relieve sysops of labour, the place you're of most use is at RfC.—] <small>]/]</small> 11:28, 12 January 2025 (UTC) | |||
*::{{tpq|if your motive as a non-admin is to relieve sysops of labour, the place you're of most use is at RfC}} alternatively you should consider becoming an administrator yourself. ] (]) 13:20, 12 January 2025 (UTC) | |||
*:::<small>If you're willing to tolerate the RFA process.—] <small>]/]</small> 15:24, 12 January 2025 (UTC)</small> | |||
*::In all the cases I have dealt with, the admin's reason for deletion (usually copyvio) was completely different to the issues being debated in the AfD (usually notability). The closing statement was therefore something like "Discussion is now moot due to article being deleted for <reason> by <admin>". ] ] 20:10, 14 January 2025 (UTC) | |||
*I think most all the time, experienced closers will do a great job and that will save admin time because they will not have to construct and explain the close from scratch, but there will be some that are bad and that will be costly in time not just for the admin but for the project's goal of completing these issues and avoiding disruption. I think that lost time is still too costly, so I would oppose non-admin delete closes. (Now if there were a proposal for a process to make a "delete-only admin permission" that would be good -- such motivated specialists would likely be more efficient.) ] (]) 16:44, 12 January 2025 (UTC) | |||
* As I said at the "Non-Admin XFD Close as Delete" section, I support non-admins closing RfDs as Delete. If TfDs have been made an exception, RfDs can be too, especially considering RfD backlogs. Closing a heavily discussed nomination at RfD is more about the reading, analysis and thought process at arriving at the outcome, and less about the technicality of the subsequent page actions. I don't see a significant difference between non-admins closing discussions as Delete vs non-Delete. It will help making non-admins mentally prepared to advance to admin roles.<span style="font-family:Segoe Script">]</span><span style="font-size:115%">]</span> 14:53, 14 January 2025 (UTC) | |||
== Placement of dynamic list template == | |||
Where does the template {{template|dynamic list}} actually go? It superficially looks like it should be a hatnote at the top of the article, but there are a great number (perhaps a majority) of dynamic list articles where the template is placed under the infobox and lead. I can't find anything in the MOS specifically addressing this. ] 21:44, 7 January 2025 (UTC) | |||
:The template documentation, which you linked to, addresses this: Place this notification template immediately before the applicable ] (i.e. in a section, not at the top of the article). This template will automatically add the article to ]. ] (]/]) 21:47, 7 January 2025 (UTC) | |||
::In many cases, the template is neither at the very top nor within a section. Just to be clear, this placement is not correct? ] 22:25, 7 January 2025 (UTC) | |||
:::It should go right above the list. ] (]/]) 22:40, 7 January 2025 (UTC) | |||
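::::For illustration, a minimal sketch of the placement described above (the section name and list entries are hypothetical examples, not drawn from any article):
<pre>
== Selected examples ==
{{Dynamic list}}
* First example entry
* Second example entry
</pre>
::::In other words, the notice sits immediately above the list within its section, rather than as a hatnote at the top of the article.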
== Should WP:NOTDIRECTORY be more specific about phone numbers, etc.? == | |||
Apparently the ] talk page isn't well-watched, so I'd like to draw some attention to ] over there. ] (]) 04:05, 8 January 2025 (UTC) | |||
:That talk page has been checked by at least 100 registered editors during the last month, which makes it one of our better watched pages. ] (]) 18:22, 9 January 2025 (UTC) | |||
::{{tq|Apparently}}. :) Getting no response in 36 hours was surprising. The silence could mean "no objection" (since I'm not proposing a change, just a clarification that already used to be there), but since it's a policy page, it would be good to have something more tangible. ] (]) 08:20, 10 January 2025 (UTC) | |||
== Does my name technically violate the promotional username policy? == | |||
My username, as of right now, is Tenebre.Rosso.Sangue995320, which is a reference to and I recently read the username policy, and thought that this violates that policy. Can someone help me out? ] ] ] 23:48, 9 January 2025 (UTC)
:No, it does not. ] (]/]) 00:11, 10 January 2025 (UTC) | |||
== Technical section headings == | |||
I had expected ] to state a preference toward easily understood headings over complicated technical headings. No such preference was stated. Am I missing something? This subject is currently at issue in regard to Tectonic Setting vs Background at ].-] <small>(] / ] / ] / ] / ])</small> 15:32, 12 January 2025 (UTC)
:You may be looking for ]? It doesn't specifically address headings, but in general makes the point that we should strive to make articles as accessible as possible for the general reader. This is part of ]. ] (]) 15:50, 12 January 2025 (UTC) | |||
:::That is helpful. I guess there is no such guidance for the section headings. Thx.-] <small>(] / ] / ] / ] / ])</small> 05:33, 13 January 2025 (UTC) | |||
::I wouldn't say "Tectonic setting" and "Background" are interchangeable for any reader, the first is much more specific. ] (]) 00:25, 13 January 2025 (UTC) | |||
:::In this review the set of section headings was Tectonic setting, Earthquake, Impact or Aftermath, making for an odd table of contents for the non-expert reader because one section heading was for experts and the others were for the general audience. The tectonics were historical background, so I advised making all the section titles suitable for a general audience.-] <small>(] / ] / ] / ] / ])</small> 05:30, 13 January 2025 (UTC)
== Modifying the first sentence of BLPSPS == | |||
{{FYI}} A discussion has been started at ] re: modifying the text of BLPSPS. ] (]) 14:23, 13 January 2025 (UTC) | |||
== Upgrade ] to an official guideline == | |||
{{Discussion top|result= {{Moved discussion to|Misplaced Pages talk:WikiProject Albums/Album article style advice|2=] (] | ]) 21:10, 15 January 2025 (UTC)}}}} | |||
] is an essay. I've been editing since 2010, and for that entire time this essay has been referred to and used extensively, and has even guided discussions regarding ascertaining if sources are reliable. I propose that it be formally upgraded to the status of an MOS guideline parallel to ].--] (] | ]) 14:28, 13 January 2025 (UTC)
:I'm broadly in favor of this proposal—I looked over the essay and most of it is aligned with what seems standard in album articles—but there are a few aspects that feel less aligned with current practice, which I'd want to reexamine before we move forward with promoting this: | |||
:* The section ] suggests {{tq|What other works of art is this producer known for?}} as one of the categories of information to include in a recording/production section. This can be appropriate in some cases (e.g., the '']'' article discusses how Butch Vig's work with Killdozer inspired Nirvana to try and work with him), but recommending it outright seems like it'd risk encouraging people to ]. My preference would be to cut the sentence I quoted and the one immediately following it. | |||
:* The section ] suggests that the numbered-list be the preferred format for track listings, with other formats like {{tl|Track listing}} being alternative choices for "more complicated" cases. However, in my experience, using {{tlg|Track listing|nolink=yes}} rather than a numbered list tends to be the standard. All of the formatting options currently listed in the essay should continue to be mentioned, but I think portraying {{tlg|Track listing|nolink=yes}} as the primary style would be more reflective of current practice. | |||
:* The advice in the ] section seems partially outdated. In my experience, review aggregators like Metacritic are conventionally discussed in the "Critical reception" section instead these days, and I'm uncertain to what extent we still link to databases like Discogs even in ELs. | |||
:(As a disclaimer, my familiarity with album articles comes mostly from popular-music genres, rock and hip-hop in particular. I don't know if typical practice is different in areas like classical or jazz.) Overall, while I dedicated most of my comment volume to critiques, these are a fairly minor set of issues in what seems like otherwise quite sound guidance. If they're addressed, it's my opinion that this essay would be ready for prime time. ] (] • ]) 15:19, 13 January 2025 (UTC) | |||
::I'd agree with all of this, given my experience. The jazz and classical that I've seen is mostly the same.--] (] | ]) 16:57, 13 January 2025 (UTC) | |||
:::Me too, though sometime last year, I unexpectedly had some (inexplicably strong) pushback on the tracklist part with an editor or two. In my experience, using the track list template is the standard, and I can't recall anyone giving me any pushback for it, but some editors apparently prefer just using numbers. I guess we can wait and see if there's any current pushback on it. ] ] 17:01, 13 January 2025 (UTC)
::::Was it pushback for how you had rendered the tracklist, or an existing tracklist being re-formatted by you or them?--] (] | ]) 18:13, 13 January 2025 (UTC) | |||
:::::They came to WT:ALBUMS upset that another editor was changing track lists from "numbered" to "template" formats. My main response was surprise, because in my 15+ years of article creations and rewrites, I almost exclusively used the tracklist template, and had never once received any pushback.
:::::So basically, I personally agree with you and MDT above, I'm merely saying I've heard someone disagree. I'll try to dig up the discussion. ] ] 17:50, 14 January 2025 (UTC) | |||
::::::I found , though this was more about sticking to the current wording as is than it was about opposition against changing it. Not sure if there was another one or not. ] ] 18:14, 14 January 2025 (UTC) | |||
::::I remember one editor being strongly against the template, but they are now community banned. Everyone else I've seen so far uses the template. <span style="background:#16171c; font-family:monospace; font-weight:600; padding:2px; box-shadow:#9b12f0 2px -2px">] ]</span> 22:25, 13 January 2025 (UTC) | |||
::::I can see the numbered-list format being used for very special cases like '']'', which was released with only two songs, and had the same co-writers and producer. But I imagine we have extremely few articles that are like that, so I believe the template should be the standard. ] 🦗🐜 <sup><small>]'']</small></sup> 12:23, 14 January 2025 (UTC) | |||
:::{{u|ModernDayTrilobite}}, regarding linking to ], some recent discussions I was in at the end of last year indicate that it is common to still link to Discogs as an EL, because it gives more exhaustive track, release history, and personnel listings than Misplaced Pages generally should include.--] (] | ]) 14:14, 15 January 2025 (UTC)
::::Thank you for the clarification! In that case, I've got no objection to continuing to recommend it. ] (] • ]) 14:37, 15 January 2025 (UTC) | |||
::There were several discussions about Discogs and an RfC ]. As a user of {{tl|Discogs master}}, I agree with what other editors said there. We can't mention every version of an album in an article, so an external link to Discogs is invaluable IMO. <span style="background:#16171c; font-family:monospace; font-weight:600; padding:2px; box-shadow:#9b12f0 2px -2px">] ]</span> 22:34, 13 January 2025 (UTC) | |||
:We badly need this to become part of the MOS. As it stands, some editors have rejected the guidelines as they're just guidelines, not policies, which defeats the object of having them in the first place. ] (]) 16:59, 13 January 2025 (UTC) | |||
::I mean, they are guidelines, but deviation per ] should be for a good reason, not just because someone feels like it.--] (] | ]) 18:14, 13 January 2025 (UTC) | |||
:I am very much in favor of this becoming an official MOS guideline per ] above. Very useful as a template for album articles. ] (]) 21:03, 13 January 2025 (UTC) | |||
:I recently wrote my first album article and this essay was crucial during the process, to the extent that me seeing this post is like someone saying "I thought you were already an admin" in RFA; I figured this was already a guideline. I would support it becoming one. ] (]) 02:00, 14 January 2025 (UTC) | |||
:I have always wondered why all this time these pointers were categorized as an essay. It's about time we formalize them; as said earlier, there are some outdated things that need to be discussed (like in ] which advises not to use stores for credits, even though in the streaming era we have more and more albums/EPs that never get physical releases). Also, song articles should also have their own guidelines, IMV. ] 🦗🐜 <sup><small>]'']</small></sup> 12:19, 14 January 2025 (UTC) | |||
::I'd be in favor of discussing turning the outline at the main page for ] into a guideline.--] (] | ]) 12:53, 14 January 2025 (UTC) | |||
:::I get the sense it'd have to be a separate section from this one, given the inherent complexity of album articles as opposed to that of songs. ] 🦗🐜 <sup><small>]'']</small></sup> 14:56, 14 January 2025 (UTC) | |||
::::Yes, I think it should be a separate, parallel guideline.--] (] | ]) 16:53, 14 January 2025 (UTC) | |||
:I think it needs work--I recall that a former longtime album editor, Richard3120 (not pinging them, as I think they are on another break to deal with personal matters), floated a rewrite a couple of years ago. Just briefly: genres are a perennial problem, editors love unsourced exact release dates and chronology built on OR (many discography pages are sourced only to random ''Billboard'', AllMusic, and Discogs links, rather than sources that provide a comprehensive discography), and, like others, I think all the permutations of reissue and special edition track listings has gotten out of control, as well as these long lists of not notable personnel credits (eight second engineers, 30 backing vocalists, etc.). Also agree that the track listing template issue needs consensus; if three are acceptable, then three are acceptable--again, why change it to accommodate the names of six not notable songwriters? There's still a divide on the issue of commercial links in the body of the article--I have yet to see a compelling reason for their inclusion (WP is, uh, not for sale, remember?), when a better source can always be found (and editors have noted, not that I've made a study of it, that itunes often uses incorrect release dates for older albums). But I also acknowledge that since this "floated" rewrite never happened, then the community at large may be satisfied with the guidelines. ] (]) 13:45, 14 January 2025 (UTC) | |||
::Regarding the personnel and reissue/special edition track listing, I don't know if I can dig up the discussions, but there seems to be a consensus against being exhaustive and instead to put an external link to Discogs. I fail to see how linking to ''Billboard'' or AllMusic links for a release date on discographies is OR, unless you're talking about in the lead. At least in the case of Billboard, that's an established RS (AllMusic isn't the most accurate with dates).-- ] (] | ]) 13:53, 14 January 2025 (UTC) | |||
:::I meant that editors often use discography pages to justify chronology, even though ''Billboard'' citations are simply supporting chart positions, Discogs only states that an album exists, and AllMusic entries most often do not give a sequential number in their reviews, etc. There is often not a source (or sources) that states that the discography is complete, categorized properly, and in order. ] (]) 14:05, 14 January 2025 (UTC) | |||
::::Ah, okay, I understand now.--] (] | ]) 16:54, 14 January 2025 (UTC) | |||
Myself, I've noticed that some of the sourcing recommendations are contrary to WP:RS guidance (more strict, actually!) or otherwise outside consensus. For instance, MOS:ALBUMS currently says not to use vendors for track list or personnel credits, linking to ] in WP:RS, but AFFILIATE actually says that such use is acceptable but not preferred. Likewise, MOS:ALBUMS says not to use scans of liner notes, which is 1. absurd, and 2. not the actual consensus; in the discussions I've had, the consensus is that actual scans are fine (which makes sense, as a scan is a digital archived copy of the source).--] (] | ]) 14:05, 14 January 2025 (UTC)
:The tendency to be overreliant on liner notes is also a detriment. I've encountered some liner notes on physical releases that have missing credits (e.g. only the producers are credited and not the writers), or there are outright no notes at all. Tangentially, some physical releases of albums like '']'' and '']'' actually direct consumers to official websites to see the credits, which has the added problem of link rot ( for ''Still Over It'' and is a permanent dead link). ] 🦗🐜 <sup><small>]'']</small></sup> 15:04, 14 January 2025 (UTC) | |||
::That turns editors to using stores like Spotify or Apple Music as the next-best choice, but a new problem arises -- the credits for a specific song can vary depending on the site you use. One important thing we should likely discuss is what sources should take priority wrt credits. For an example of what I mean, take "]". to check its credits and you'd find the name Sean Garrett -- , however, and that name is missing. I assume these digital credits have a chance to deviate from the albums' physical liner notes as well, if there is one available. ] 🦗🐜 <sup><small>]'']</small></sup> 15:11, 14 January 2025 (UTC) | |||
:::Moreover, the credits in stores are not necessarily correct either. An example I encountered was on ], an amazing service and the only place where I could find detailed credits for one album (not even liner notes had them, since back then artists tried to avoid sample clearance). However, as I was double checking everything, one song made no sense: in its writing credits I found "Curtis Jackson", with a link to ]'s artist page. It seemed <em>extremely</em> unlikely that they would collaborate, nor any of his work was sampled here. Well, it turns out this song sampled a song written by Charles Jackson of ]. <span style="background:#16171c; font-family:monospace; font-weight:600; padding:2px; box-shadow:#9b12f0 2px -2px">] ]</span> 16:39, 14 January 2025 (UTC) | |||
::::{{u|PSA}} and {{u|AstonishingTunesAdmirer}}, I agree that it's difficult. I usually use both the physical liner notes and online streaming and retail sources to check for completeness and errors. I've also had the experience of ] being a great resource, and, luckily, so far I've yet to encounter an error. Perhaps advice for how to check multiple primary sources here for errors should be added to the proposed guideline.--] (] | ]) 17:00, 14 January 2025 (UTC) | |||
:::::At this point, I am convinced as well that finding the right sources for credits should be on a case-by-case basis, with the right amount of discretion from the editor. While I was creating ], which included several SoundCloud songs where it was extremely hard to find songwriting credits, I found the useful for filling those missing gaps. More or less the credits there align with what's on the liner notes/digital credits. However, four issues, most of which you can see by looking at the list I started: 1) they don't necessarily align with physical liner notes either, 2) sometimes names are written differently depending on the entry, 3) there are entries where a writer (or co-writer) is unknown, and 4) some of the entries here were never officially released and confirmed as outtakes/leaks (why is "BET Awards 19 Nomination Special" here, whatever that means?). ] 🦗🐜 <sup><small>]'']</small></sup> 22:59, 14 January 2025 (UTC) | |||
::::::Yeah, I've found it particularly tricky when working on technical personnel (production, engineering, mixing, etc.) and songwriting credits for individuals. I usually use the liner notes (if there are any), check AllMusic and ], and also check Tidal if necessary. But I'll also look at Spotify, too. I know they're user-generated, so I don't cite them, but I usually look at Discogs and Genius to get an idea if I'm missing something. Thank you for pointing me to Songview, that will probably also be really helpful. ] (] | ]) 12:50, 15 January 2025 (UTC) | |||
:(@], please see ] for advice on advertising discussions about promoting pages to a guideline. No, you ''don't'' have to start over. But maybe add an RFC tag or otherwise make sure that it is very widely publicized.) ] (]) 23:37, 14 January 2025 (UTC) | |||
::Thank you. I'll notify the Manual of Style people. I did already post a notice at WP:ALBUMS. I'll inform other relevant WikiProjects as well.--] (] | ]) 12:46, 15 January 2025 (UTC) | |||
Before posting the RfC as suggested by {{u|WhatamIdoing}}, I'm proposing the following changes to the text of MOS:ALBUM as discussed above: | |||
# Eliminate {{!xt|What other works of art is this producer known for? Keep the list of other works short, as the producer will likely have their own article with a more complete list.}} from the "Recording, production" sub-section. | |||
# Rework the text of the "Style and form" for tracklistings to: | |||
::{{xt|1=The track listing should be under a primary heading named "Track listing".}} | |||
::{{xt|1=A track listing should generally be formatted with the {{tl|Track listing}} template. Note, however, that the track listing template forces a numbering system, so tracks originally listed as "A", "B", etc., or with other or no designations, will not appear as such when using the template. Additionally, in the case of multi-disc/multi-sided releases, a new template may be used for each individual disc or side, if applicable.}} | |||
::{{xt|1=Alternate forms, such as a table or a ], are acceptable but usually not preferred. If a table is used, it should be formatted using class="wikitable", with column headings "No.", "Title" and "Length" for the track number, the track title and the track length, respectively (see Help:Table). In special cases, such as '']'', a numbered list may be the most appropriate format.}} (For illustration, a rough sketch of the template format appears below this list.)
# Move {{xt|1= Critical reception overviews like AcclaimedMusic (using {{tl|Acclaimed Music}}), AnyDecentMusic?, or Metacritic may be appropriate as well.}} from "External links" to "Album ratings templates" of "Critical reception", right before the sentence about using {{tl|Metacritic album prose}}. | |||
# Re-write this text from "Sourcing" under "Track listing" from {{!xt|However, if there is disagreement, there are other viable sources. Only provide a source for a track listing if there are exceptional circumstances, such as a dispute about the writers of a certain track. Per ], avoid commercial sources such as online stores and streaming platforms. In the rare instances where outside citations are required, explanatory text is useful to help other editors know why the album's liner notes are insufficient.}} to {{xt|Per ], commercial sources such as online stores and streaming platforms are acceptable to cite for track list information, but secondary coverage in independent reliable sources is preferred if available.}} Similarly, in the "Personnel" section, re-write {{!xt| Similar to the track listing requirements, it is generally assumed that a personnel section is sourced from the liner notes. In some cases, it will be necessary to use third-party sources to include performers who are not credited in the liner notes. If you need to cite these, use {{tl|Cite AV media}} for the liner notes and do not use third party sources such as stores (per ]) or scans uploaded to image hosting sites or ] (per ]).}} to {{xt|1= Similar to the track listing requirements, it is generally assumed that a personnel section is sourced from the liner notes. If you need to cite the liner notes, use {{tl|Cite AV media}}. Scans of the physical media that have been uploaded in digital form to repositories or sites such as ] are acceptable for verification, but cite the physical notes themselves, not the ] transcriptions. Frequently, it will be necessary to use third-party sources to include performers who are not credited in the liner notes. Per ], inline citations to e-commerce or streaming platforms to verify personnel credits are allowed. However, reliable secondary sources are preferred, if available.}} | |||
# Additional guidance has been suggested for researching and verifying personnel and songwriting credits. I suggest adding {{xt|1=It is recommended to utilize a combination of the physical liner notes (if they exist) with e-commerce sites such as ] and ], streaming platforms such as ] and ], and databases such as ] credits listings and . Finding the correct credits requires careful, case-by-case consideration and editor discretion. If you would like assistance, you can reach out to ] or ] WikiProjects.}} The best section for this is probably in "Personnel", in the paragraph discussing that liner notes can be inaccurate. | |||
# The excessive listing of personnel has been mentioned. I suggest adding the following to the paragraph in the "Personnel" section beginning with "The credits to an album can be extensive or sparse.": {{xt|1=If the listing of personnel is extensive, avoid excessive, exhaustive lists, in the spirit of ]. In such cases, provide an external link to ] and list only the major personnel.}}
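:For illustration only, a rough sketch of the {{tl|Track listing}} and {{tl|Cite AV media}} formatting referred to in points 2 and 4 above. The titles, names, and values are hypothetical, and the parameter names should be double-checked against each template's documentation before use:
<pre>
== Track listing ==
{{Track listing
| headline        = Side one
| writing_credits = yes
| title1   = Example Opening Track
| writer1  = A. Songwriter
| length1  = 3:45
| title2   = Another Example Track
| writer2  = A. Songwriter, B. Collaborator
| length2  = 4:12
| total_length = 7:57
}}

== Personnel ==
Credits adapted from the liner notes of ''Example Album''.<ref>{{Cite AV media |people=Example Artist |year=2024 |title=Example Album |type=liner notes |publisher=Example Records}}</ref>
</pre>
:This sketch only shows the general shape of the markup; it is not part of the proposed guideline text.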
If you have any additional suggestions, or suggestions regarding the wording of any of the above (I personally think that four needs to be tightened up or expressed better), please give them. I'm pinging the editors who raised issues with the essay as currently written, or were involved in discussing those issues, for their input regarding the above proposed changes. {{u|ModernDayTrilobite}}, {{u|PSA}}, {{u|Sergecross73}}, {{u|AstonishingTunesAdmirer}}, {{u|Caro7200}}, what do you think? Also, I realize that I never pinged {{u|Fezmar9}}, the author of the essay, for their thoughts on upgrading this essay to a guideline.--] (] | ]) 17:21, 15 January 2025 (UTC) | |||
:The proposed edits all look good to me. I agree there's probably some room for improvement in the phrasing of #4, but in my opinion it's still clear enough as to be workable, and I haven't managed to strike upon any other phrasings I liked better for expressing its idea. If nobody else has suggestions, I'd be content to move forward with the language as currently proposed. ] (] • ]) 17:37, 15 January 2025 (UTC) | |||
It is often said that we make decisions primarily by "consensus" or through "discussion" rather than polling. (In using the word "consensus," there is some ambiguity – do we mean "consensus" in the sense of "an informal method for identifying approval of a proposal placed before a group," in which case it might mean basically the same thing as "discussion"; or do we mean that in most cases, the opinions are nearly unanimous one way or the other? Probably both of these things are implied, but the use of the word "consensus" makes it fuzzy.) On its face, this seems true. We do, after all, discuss things before a decision is announced, and the decision is typically based on things said or brought to light in the course of the discussion. But is it accurate to say that our decisions are the ''result'' of the discussion or comments made therein? (Most deletion debates are closed with a statement such as "The result of the discussion was '''Keep'''")
:It might be better to have this discussion on its talk page. That's where we usually talk about changes to a page. ] (]) 17:38, 15 January 2025 (UTC) | |||
::{{u|WhatamIdoing}} - just the proposed changes, or the entire discussion about elevating this essay to a guideline?--] (] | ]) 18:21, 15 January 2025 (UTC) | |||
:::It would be normal to have both discussions (separately) on that talk page. ] (]) 18:53, 15 January 2025 (UTC) | |||
::::Okay, thank you. I started the proposal to upgrade the essay here, as it would be far more noticed by the community, but I'm happy for everything to get moved there.-- ] (] | ]) 19:00, 15 January 2025 (UTC) | |||
:These changes look good to me. Although, since we got rid of Acclaimed Music in the articles, we should probably remove it here too. <span style="background:#16171c; font-family:monospace; font-weight:600; padding:2px; box-shadow:#9b12f0 2px -2px">] ]</span> 19:36, 15 January 2025 (UTC) | |||
::Sure thing.--] (] | ]) 20:56, 15 January 2025 (UTC) | |||
{{Discussion bottom}} | |||
In a system such as the U.S. Congress, it is correct to say that decisions are a result of the vote. Whatever the members vote to do, that is the action taken, as long as it doesn't contradict rules that they themselves have set up. Under their rules, it would be a perfectly valid decision for the body to pass a resolution, for instance, stating that "Whereas, mutated superintelligent polar bears with orange spots are bombarding the Pacific Northwest states with heavy artillery; and whereas this has had a profound negative impact on the economy of this region; now therefore be it Resolved that $10 million is appropriated for economic stimulus in this region, to be allocated by the President." Regardless of the facts being incorrect, they can vote to do what they wish, and action will be taken accordingly. Indeed, they can even violate their own rules if they raise a ] and ], and the members vote to interpret the rules in such a way that the violation is permitted; this is what the ] is all about. | |||
== reverts all edits == | |||
Now, on Misplaced Pages, what happens when the participants in a discussion ask for an article to be kept or deleted based on inaccurate facts or blatant misapplication of policy? The closing admin has every right to take action based on the facts and policy. It does not even matter if the decisive argument was not raised in debate; facts and policy are what they are. The ten participants in a debate can unanimously argue in favor of keeping an unverifiable article, citing many eloquent reasons why, and the closing admin can delete. So in that case, the action taken is ''not'' the result of the discussion or rough consensus.
Hello everyone. I have an idea for the Misplaced Pages coders. Would it be possible for you to design an option that, with the click of a button, automatically reverts all edits of a disruptive user? This idea came to my mind because some people create disposable accounts to cause disruption in all their edits... In this case, a lot of time and energy is consumed by administrators and reverting users to undo all the vandalism. If there were a template that could revert all the edits of a disruptive user with one click, it would be very helpful. If you think regular users might misuse this option, you could limit it to Misplaced Pages administrators only so they can quickly and easily undo the disruption. ] (]) 17:31, 13 January 2025 (UTC) | |||
The subject of deletion, due to its complexity, opens up a whole can of worms, so perhaps a better example is ]. FAC rules state:
:Hi @], there's a script that does that: ]. Also, editors who use ] can single-click revert all consecutive edits of an editor. ] ] 17:44, 13 January 2025 (UTC) | |||
::Is this tool active in all the different languages of Misplaced Pages? I couldn't perform such an action with the tool you mentioned. ] (]) 17:51, 13 January 2025 (UTC) | |||
:::That script requires the ] permission, which is available only for admins and other trusted users. Admins and other users with the tool have gotten in trouble for using it inappropriately. I never use it myself, as I find the rollback in Twinkle quite sufficient for my needs. ] 17:54, 13 January 2025 (UTC) | |||
:::(ec) I don't know about other languages. If you check the page I linked, you'll see that the script requires ]. ] ] 17:55, 13 January 2025 (UTC) | |||
::::@] Sorry. Can your ] revert all edits of a user across different pages with one click? I think you mean that massRollback can revert all edits on a specific wiki page, not all edits of a disruptive user across multiple pages. Or am I wrong? ] (]) 04:23, 14 January 2025 (UTC)
:::::If you want this for the Persian Misplaced Pages, you should probably talk to ]. ] (]) 23:41, 14 January 2025 (UTC) | |||
::::::@] Thank you. ] (]) 07:11, 15 January 2025 (UTC) | |||
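A technical aside for anyone wondering what a "revert everything by user X" tool actually does under the hood: it is essentially a loop over the target user's recent contributions, calling the MediaWiki rollback action on each affected page. The sketch below is illustrative only; it assumes an already-authenticated <code>requests.Session</code> belonging to an account with the rollback right, and the actual ] user script handles batching, errors, and edit summaries differently.

<syntaxhighlight lang="python">
import requests

API = "https://en.wikipedia.org/w/api.php"  # any MediaWiki wiki's api.php endpoint

def mass_rollback(session: requests.Session, target_user: str, limit: int = 50) -> None:
    """Roll back the latest consecutive edits by target_user on recently touched pages."""
    # 1. List the pages the target user edited most recently.
    contribs = session.get(API, params={
        "action": "query", "list": "usercontribs", "ucuser": target_user,
        "uclimit": limit, "ucprop": "title", "format": "json",
    }).json()["query"]["usercontribs"]

    # 2. Fetch a rollback token (only available to accounts with the 'rollback' right).
    token = session.get(API, params={
        "action": "query", "meta": "tokens", "type": "rollback", "format": "json",
    }).json()["query"]["tokens"]["rollbacktoken"]

    # 3. Roll back each affected page. Rollback only succeeds where the target user
    #    is still the most recent editor of that page.
    for title in {c["title"] for c in contribs}:
        session.post(API, data={
            "action": "rollback", "title": title, "user": target_user,
            "token": token, "format": "json",
        })
</syntaxhighlight>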
== Problem For Translate page == | |||
{{cquote|For a nomination to be ] to FA status, ] must be reached that it meets the criteria. Consensus is built among reviewers and nominators; the FA director, currently ], determines whether there is consensus. (References in these instructions to "the director" include Raul654's nominated delegates.) If, after sufficient time, objections considered actionable by the director have not been resolved or consensus for promotion has not been reached, a nomination will be removed from the list and ]. The director determines the timing of the process for each nomination.}} | |||
Hello everyone. I don't know who is in charge of coding the Translate page on Misplaced Pages. But I wanted to send my message to the Misplaced Pages coders, and that is that in the Misplaced Pages translation system, the information boxes for individual persons (i.e. the personal biography box – see ]) are not automatically translated, and it is time-consuming for Misplaced Pages users to manually translate and change the links one by one from English to another language. Please, could the coders come up with a solution for translating the information template boxes? Thank you. ] (]) 17:32, 13 January 2025 (UTC)
It might be more accurate to say that FACs are determined based on the merits as judged by the FA director, a determination that he makes after reviewing the article and taking into consideration the arguments made. Clearly, an article can be promoted even if there is unanimous opposition, if the director determines that the article is of sufficient quality and the objections are non-actionable. So, it is patently inaccurate to say that "For a nomination to be promoted to FA status, consensus must be reached that it meets the criteria." | |||
:Hi {{u|Hulu2024}}, this also applies to the section above. If your proposal only applies to the English Misplaced Pages then it is probably best to post it at ] in the first instance. If it is only about the Persian Misplaced Pages then you may wish to try there. If it is more general then you could try ], or, for more formal proposals, ]. ] (]) 18:51, 13 January 2025 (UTC) | |||
::@] Thank you. ] (]) 19:21, 13 January 2025 (UTC) | |||
== A discrimination policy == | |||
And of course, article content is not decided by consensus, but by the merits. I believe TBSDY ran into this situation once, when the consensus was to change the article on ] to say it is a republic. The one editor who, correctly, says that it is a ] is entitled to have his edit stand. ] already says, "Note that in the rare case that the "eccentric" position turns out to have merit, the consensus can change," but that's somewhat of an ambiguous statement. Does that mean that the person with the eccentric position is entitled to keep reverting back to the correct version, or that he is supposed to change consensus by persuading the others? What if they won't be persuaded? We might clarify that people are allowed to correct factual inaccuracies, no matter what the consensus is. | |||
{{Discussion top|result= i quit this will go no where im extremely embarassed and feel horrible i dont think ill try again}} | |||
<s>Ani cases: | |||
But what about situations (whether in FAC, AFD, or elsewhere) in which it's a close call as to how to best apply facts or policy and there is room for reasonable disagreement? In those cases, headcount can begin to play into things, most certainly; and the closer in strength the arguments on each opposing side are, the more weight headcount will tend to carry. And I think MFD is often an excellent example of this. It is inaccurate to imply that decisions will never be made by headcount. To say such a thing gives people an unrealistic expectation of how things will work here; and I think fostering that expectation can come back to bite us.
* ] | |||
* ] | |||
* ] | |||
* | |||
I would like to start this proposal by saying that this concept was proposed in 2009 and failed for obvious reasons. But in this year, 2025, we need it, as it has happened a bunch. It's already covered under personal attacks, but I and a couple of other Wikipedians feel that it should be codified, as there is precedent for blocking users who discriminate. Here's a list of the things I want to include in this policy. edit: This policy is intended to target blatant and admitted instances of discrimination. If the intent behind an action is ambiguous, users should continue to assume good faith until the intent is clear.<br>
Rather than saying that decisions are made by discussion or consensus, it would be more accurate to say that decisions are made based on the merits when possible (specifically, facts and policy); that discussion is used to present relevant facts and arguments to aid in determining the merits; and that head count (with adjustments made to account for possible sockpuppets, votestacking, etc.) begins to become a factor as the strength of the arguments on each side approaches equality. | |||
Just as being a member of a group does not give one special requirements to edit, it also does not endow any special privileges. One is not absolved of discrimination against a group just because one claims to be a member of that group. | |||
What counts as discrimination | |||
Of course, the application of policy is a bit of a grey area, as some rules are not supposed to bend at all, and others (especially guidelines) have more give. All in all, it's a bit tricky in certain borderline cases, because the closing admin has to consider the relative weight of the facts, policies, guidelines, and possibly headcount. For these reasons, it's difficult to draft a statement that expresses clearly and accurately how decisions are made here. But to say that decisions are made as a result of "consensus" or "discussion" is clearly an oversimplification, and easily misinterpreted.
* ] | |||
The catchall used (often implicitly) when a departure from the description of practice laid down in rules is made is ]. Thus, we can ignore the rule that decisions are made by consensus if the opinions expressed by a preponderance of editors are wrong. But, when possible, we should avoid having poorly-worded rules that make it necessary to invoke ] on a regular basis.
* Disability – will define this further
* Disease | |||
* ] – different from sex; neurological <ref>{{Cite AV media |url=https://www.youtube.com/watch?v=fpGqFUStcxc |title=Let’s All Get Past This Confusion About Trans People |date=2022-06-06 |last=Professor Dave Explains |access-date=2025-01-15 |via=YouTube}}</ref><ref>{{Cite journal |last=Altinay |first=Murat |last2=Anand |first2=Amit |date=2020-08-01 |title=Neuroimaging gender dysphoria: a novel psychobiological model |url=https://link.springer.com/article/10.1007/s11682-019-00121-8 |journal=Brain Imaging and Behavior |language=en |volume=14 |issue=4 |pages=1281–1297 |doi=10.1007/s11682-019-00121-8 |issn=1931-7565}}</ref>
* ] – different from gender; biological<ref>{{Cite AV media |url=https://www.youtube.com/watch?v=fpGqFUStcxc |title=Let’s All Get Past This Confusion About Trans People |date=2022-06-06 |last=Professor Dave Explains |access-date=2025-01-15 |via=YouTube}}</ref>
* Sexuality | |||
* Religion | |||
* Hobbies (e.g. furry, the most often harassed hobby)
* Relationship status | |||
* Marital status
* (Idk how to word this but) lack of parental presence | |||
* Political position (will be a hot topic) | |||
* ] anything i missed would be in there | |||
So, in short, I think we might revise references to decisionmaking by consensus to place more emphasis on decisionmaking being made primarily according to the merits, in a judgment made by the closing admin that is informed by the discussion, the facts he is able to assess for himself, and binding policies; and when there is some question as to the merits, taking into consideration rough consensus, as expressed in the more flexible guidelines and in the particular debate. | |||
A disability is an umbrella term in my sight | |||
I am not proposing any change in the substance of the rules, just rather changing them to make them more accurately and precisely describe actual (and best) practices. ] (]) 19:30, 21 March 2008 (UTC) | |||
::using non-notable for "non-encyclopedic" is a step backwards. At least encyclopedic has a meaning, whereas in practice the only actual meaning of non-notable is not suitable for inclusion in Misplaced Pages. I would rather remove the words notable and non-notable entirely from all guidelines and policy pages, since they have no agreed-upon meaning. Anyone who thinks they do needs to spend more time watching the total disagreements at AfD. ''']''' (]) 08:18, 22 March 2008 (UTC)
you have mental and physical | |||
== BLP-Lock: A way to deal with contentious BLP articles == | |||
examples for mental would be: | |||
After reviewing the Don Murphy DRV, it's obvious that the community has some differences with regard to BLP articles, Notability, and how to handle things. I tried to come up with a compromise that would ease some folks' minds about BLP. I actually brought this up with one of the folks whose article would be covered under this policy, and they were pretty positive about it. It alleviated one of their major problems with having a Misplaced Pages article about them.
* schizophrenia | |||
So, without further ado.. | |||
* autism | |||
* ADHD | |||
* PTSD | |||
* mood disorders (depression, borderline personality disorder) | |||
* dyslexia (or any learning disability) | |||
* | |||
examples of physical: | |||
] | |||
* paralysis
The basics: | |||
* Pretty much any physical injury | |||
* Im aware that this never really happens but its good to go over | |||
A user may not claim without evidence that another user is affected by, or is, any of the above (idk how to term this).
'''A) The article can be placed under BLP-LOCK by any uninvolved administrator. When an administrator places an article under this policy, they must either refer to an existing OTRS ticket, or submit one, and detail why such action is necessary in that OTRS ticket.''' | |||
A user may not claim that users with these disabilities/beliefs/races/genders shouldn’t edit Misplaced Pages. | |||
'''B) If an OTRS volunteer agrees that the article should be placed under BLP-LOCK, the article will be stubbed down to a bare-bones situation (just bare facts, no controversial information), and fully-protected for a period of a MININUM of six months (this can be permanent).''' | |||
A user may not imply that another user is beneath them based on who they are.
'''C) During this BLP-LOCK status, the only edits that should be made are those via <nowiki>{{editprotected}}</nowiki> requests that have full consensus on the talk page. Any information that is not reliably sourced should not be added to the article, even with consensus. While a subject of the article does not get an automatic veto over information being added to the page, administrators who handle BLP-LOCK editprotected requests should be fully aware of the BLP policy and judge accordingly.'''
Calling people "woke" simply because they are queer is discrimination.
Also I would like to propose a condition. | |||
This is actually fairly close to the Stable Versions idea we've been promised for eons. Going forward, it reduces a major part of the reason that folks (here and elsewhere) are upset about BLP: that any "child with a computer" can vandalize it, and then these vandalizations are available in the history forever, and for folks that don't have people watching/OWNing the article, these vandalizations can persist for a period of time until caught. Instead, the article grows in a more controlled manner.
Overreaction to what you think is discrimination (e.g. accidental misgendering or wrong pronouns), where the user apologizes for it, is not grounds for an entry at ANI.
The reason for thinking that the OTRS ticket is necessary: I'm not sure this is necessary or a good idea for ALL BLP articles, but if an article needs BLP-LOCK, then it should have above-normal levels of attention paid to it, and OTRS is one way to do that. I know that the problem is that OTRS can be overwhelmed at times, but I'm willing to put my money where my mouth is and work OTRS/BLP-LOCK if it goes through.
This should be used as a guideline. | |||
Also, on a strictly personal level of thought: if a subject complains to WP via OTRS, this should be a standard option (to BLP-LOCK their article) going forward. It's bad enough if a subject needs to email us once if there are problems with their article. We shouldn't have to make them continually monitor their article. 21:14, 21 March 2008 <small>—Preceding ] comment added by ] (] • ]) </small><!-- Template:Unsigned -->
{{Quote box | |||
== Discussion and proposals on protecting biographies. == | |||
| quote = discrimination is defined as acts, practices, or policies that wrongfully impose a relative disadvantage or deprivation on persons based on their membership in a salient social group. This is a comparative definition. An individual need not be actually harmed in order to be discriminated against. He or she just needs to be treated worse than others for some arbitrary reason. If someone decides to donate to help orphan children, but decides to donate less, say, to children of a particular race out of a racist attitude, he or she will be acting in a discriminatory way even if he or she actually benefits the people he discriminates against by donating some money to them. | |||
| source = Misplaced Pages article on discrimination | |||
}} | |||
{{Paragraph break}}I would also like to say this would give us negative press coverage by right wing media and I’ll receive shit. But I don’t care i can deal with it ]] 16:37, 16 January 2025 (UTC)</s> | |||
] has written a few proposals ] that involve protecting biographies of living people upon request. I've also written a different set of criteria for article protection ]. It'd be great if we could get some more input about this from a wider range of people... please take a look if you have time. Please comment there to keep things centralised. Thanks! -- <span style="background:#ffff00;border:2px solid #00bfff;">]</span> 04:00, 22 March 2008 (UTC) | |||
*This largely seems like behavior that already is sanctionable per ] and ] (and the adoption of the latter drew complaints at the time that it in itself was already unnecessarily redundant with existing civility policy on en.wiki). What shortcomings do you see with those existing bodies of policy en force? <sub>signed, </sub>] <sup>]</sup> 16:45, 16 January 2025 (UTC) | |||
*:The fact that punishments should be a little more severe for users who go after a whole group of editors. As it's not just a personal attack on one editor, it's an attack on a group. ]] 16:57, 16 January 2025 (UTC)
*::NPA violations are already routinely met with blocks and sitebans, often on sight without prior warning for the level of disparagement you're describing. Do you have any recent examples on hand of cases where the community's response was insufficiently severe? <sub>signed, </sub>] <sup>]</sup> 17:07, 16 January 2025 (UTC) | |||
*:::I'll grab some. My issue is that admins can unblock without community input; instead of an unblock from a single admin, they should have to appeal to the community. ]] 17:10, 16 January 2025 (UTC)
*::::<small>Noting that I've now taken the time to read through the three cases listed at the top--two of them ended in NOTHERE blocks pretty quickly--I could see someone taking issue with the community's handling of RowanElder and Jwa05002, although it does seem that the discussion ultimately resulted in an indef block for one and an apparently sincere apology from the other. <sub>signed, </sub>] <sup>]</sup> 17:13, 16 January 2025 (UTC) </small> | |||
*:I think the real problem is that in order to block for any reason you have to take them to a place where random editors discuss whether they are a "net positive" or "net negative" to the wiki, which in principle would be a fair way to decide, but in reality is like the work of opening an RFC just in order to get someone to stop saying random racist stuff, and it's not worth it. Besides, remember the RSP discussion where the Daily Mail couldn't be agreed to be declared unreliable on transgender topics because "being 'gender critical' is a valid opinion" according to about half the people there? I've seen comments that were blatant bigoted insults beneath a thin veneer, that people did not take to ANI because it's just not worth the huge amount of effort. There really needs to be an easy way for administrators to warn (on first violation) and then block people who harass people in discriminatory ways without a huge and exhausting-for-the-complainer "discussion" about it -- and a very clear policy that says discrimination is not OK and is always "net negative" for the encyclopedia would reduce the complexity of that discussion, and I think is an important statement to make. | |||
*:By allowing it to be exhaustively debated whether thinly-veiled homophobic insults towards gay people warrant banning, Misplaced Pages is deliberately choosing not to take a stance on the topic. A stance needs to be taken, and it needs to be clear enough to allow rapid and decisive action that makes people actually afraid to discriminate against other editors, because they know that it isn't tolerated, rather than being reasonably confident their targets won't undergo another exhausting ANI discussion. ] (]) 17:04, 16 January 2025 (UTC)
*::Said better than I could. I agree wholeheartedly; it happens way too much. ]] 17:18, 16 January 2025 (UTC)
*I agree that a blind eye shouldn't be turned against discrimination against groups of Misplaced Pages editors in general, but I don't see why we need a list that doesn't include social class but includes hobbies. The determining factor for deciding whether something is discrimination should be how much choice the individual has in the matter, which seems, in practice, to be the way ] is used. ] (]) 17:02, 16 January 2025 (UTC) | |||
*:I agree hobbies doesn't need to be included. Haven't seen a lot of discrimination based on social class? I think this needs to be taken to the Idea Lab. ] (]) 17:06, 16 January 2025 (UTC) | |||
*::Sorry this was just me spit balling i personally have been harassed over my hobbies ]] 17:07, 16 January 2025 (UTC) | |||
*@] Strong support in general (see above) but I strongly suggest you take this to the idea lab, because it's not written as a clear and exact proposal and it would probably benefit a lot from being developed into an RFC before taking it here. In the current format it probably can't pass because it doesn't make specific changes to policy. ] (]) 17:08, 16 January 2025 (UTC) | |||
== Outside opinions requested == | |||
*:Yeah sorry I’m new to this i was told to go here to get the ball rolling ]] 17:11, 16 January 2025 (UTC) | |||
*Wait...does this mean I won't be able to discriminate against people whose hobby is editing Misplaced Pages? Where's the fun in that? ] 17:09, 16 January 2025 (UTC) | |||
*:I guess not :3 ]] 17:13, 16 January 2025 (UTC) | |||
:In general, I fail to see the problem this is solving. The UCoC and other policies/guidelines/essays (such as ], ], and others) already prohibit discriminatory behavior. And normal conduct processes already have the ability to lay down the strictest punishment theoretically possible - an indefinite ban - for anyone who engages in such behavior. | |||
Hi, I'm in the midst of a dumb little dispute and would like some outside opinions. | |||
:I do not like the idea of what amounts to bureaucracy for bureaucracy’s sake. That is the ''best'' way I can put it. At worst, this is virtue signaling - it’s waving a flag saying “hey, public and editors, Misplaced Pages cares about discrimination so much we made a specific policy about it” - without even saying the next part “but our existing policies already get people who discriminate against other editors banned, so this was not necessary and a waste of time”. I’ll happily admit I’m proven wrong if someone can show evidence of a case where actual discrimination was not acted upon because people were “concerned” it wasn’t violating one of those other policies. -bɜ:ʳkənhɪmez | ] | ] 20:56, 16 January 2025 (UTC) | |||
::To clarify, all the comments about "why is this included" or "why is this not included" are part of the reason I'm against a specific policy like this. Any disruption can be handled by normal processes, and a specific policy will lead to wikilawyering over what is or is not discrimination. There is no need to try to define/specifically treat discrimination when all discriminatory behaviors are adequately covered by other policies already. -bɜ:ʳkənhɪmez | ] | ] 22:27, 16 January 2025 (UTC) | |||
*We should be relating to other editors in a kind way. But this proposal appears to make the editing environment more hostile, with more blocking on the opinion of one person. We do discriminate against those that use Misplaced Pages for wrong purposes, such as vandalism or advertising. Pushing a particular point of view is more of a grey area. The proposal by cyberwolf is partly a point of view that many others would disagree with. So we should concentrate policies on how a user relates to other editors, rather than their motivations or opinions. ] (]) 20:50, 16 January 2025 (UTC)
* I think this is valuable by setting a redline for a certain sort of personal attack and saying, "this is a line nobody is permitted to cross while participating in this project." ] (]) 20:57, 16 January 2025 (UTC) | |||
* It is not possible for the content of a discussion to be "discriminatory". Discrimination is action, not speech. This proposal looks like an attempt to limit discourse to a certain point of view. That's not a good idea. --] (]) 21:13, 16 January 2025 (UTC) | |||
*:Discrimination can very much be speech. ] (]) 00:36, 17 January 2025 (UTC) | |||
*:: Nope. --] (]) 00:44, 17 January 2025 (UTC) | |||
*::: : "treating a person or particular group of people differently, especially in a worse way from the way in which you treat other people, because of their race, gender, sexuality, etc". | |||
*:::So yes, that includes speech because you can treat people differently in speech. Speech is an act. '']''<sup>]</sup> 01:04, 17 January 2025 (UTC) | |||
*::::OK, look, I'll concede part of the point here. Yes, if I'm a dick to (name of group) but not to (name of other group), I suppose that is discrimination, but I don't think a discrimination policy is a particularly useful tool for this, because what I ''should'' do is not be a dick to anybody. | |||
*::::What I'm concerned about is that the policy would be used to assert that certain ''content'' is discriminatory. Say someone says, here's a reliable source that says biological sex is real and has important social consequences, and someone else says, you can't bring that up, it's discriminatory. Well, no, that's a category error. That sort of thing ''can't'' be discriminatory. --] (]) 01:29, 17 January 2025 (UTC) | |||
*:::just drop it ]] 01:23, 17 January 2025 (UTC) | |||
*I would remove anything to do with political position. Those on the far-right should be discriminated against. '']''<sup>]</sup> 21:45, 16 January 2025 (UTC)
# Should new ] not be put in the Misplaced Pages: namespace? | |||
# Is not personally notifying another user of a revert (in this case a pagemove) an "issue"? | |||
:* The examples you use show that we've been dealing effectively without this additional set of guidelines; it would be more convincing that something was needed if you had examples where the lack of this policy caused bad outcomes. And I can see it being used as a hammer; while we're probably picturing "as a White man, I'm sure that I understand chemistry better than any of you lesser types" as what we're going after, I can see some folks trying to wield it against "as a Comanche raised on the Comanche nation, I think I have some insights on the Comanche language that others here are overlooking." As such, I'm cautious. -- ] (]) 21:49, 16 January 2025 (UTC) | |||
This is in reference to ], if you're curious.--] (]) 07:19, 22 March 2008 (UTC) | |||
*'''Comment'''. I am sorry that ] discrimination is being ignored here. ] (]) 21:54, 16 January 2025 (UTC). | |||
*'''Not needed'''. Everything the proposal is talking about would constitute disruptive behavior, and we can block or ban someone for being disruptive already. No need to break disruption down into its component parts, and write rules for each. ] (]) 22:07, 16 January 2025 (UTC) | |||
{{reflist-talk}} | |||
{{Discussion bottom}} | |||
== Gallery Policy? == | |||
I'm a bit confused on when it is alright to use a gallery on an article page. Some pages have them, some don't. Personally, I find no need for them now that you can easily link to a Wikimedia Commons page that basically is a gallery of images for that specific article. Also, some pages have a gallery AND a link to a Wikimedia Commons page which seems rather redundant. What is the policy here? Can I move the images from the article gallery to the Commons page in order to streamline everything? Thanks ] (]) 08:05, 22 March 2008 (UTC) | |||
== Repeated false retirement == | |||
== Mirroring wikipedia with lots of ads? == | |||
There is a user (who shall remain unnamed) who has "retired" twice and had the template removed from their page by other users because they were clearly still editing. They are now on their third "retirement", yet they last edited a few days ago. I don't see any policy formally prohibiting such behavior, but it seems extremely unhelpful for obvious reasons. ] 17:13, 16 January 2025 (UTC) | |||
Hi, | |||
:Unless the material is harmful to Misplaced Pages or other users, users have considerable leeway in what they may post on their user page. Personally, I always take "retirement" notices with a grain of salt. If a user wants to claim they are retired even though they are still actively editing, I don't see the harm to anything but their credibility. If I want to know if an editor is currently active, I look at their contributions, not at notices on their user or talk page. ] 22:07, 16 January 2025 (UTC) | |||
from googling, I found a strange mirror site , which mirrors all articles in wikipedia (in all languages). It also contains lots of google's adaware ads. Does it comply with GFDL (of course it contains non-free use of many things also)? ] (]) 09:22, 22 March 2008 (UTC) | |||
:{{br}}I can't imagine that this calls for a policy. You're allowed to be annoyed if you want. No one can take that away from you. But I'm missing an explanation of why the rest of us should care. --] (]) 22:13, 16 January 2025 (UTC) | |||
::This seems a little prickly, my friend. Clearly, the other two users who removed older retirement notices cared. At the end of the day, it's definitely not the most major thing, but it is helpful to have a reliable and simple indication as to whether or not a user can be expected to respond to any kind of communication or feedback. I'm not going to die on this hill. Cheers. ] 22:41, 16 January 2025 (UTC) |
LLM/chatbot comments in discussions
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- There is a strong consensus that comments that do not represent an actual person's thoughts are not useful in discussions. Thus, if a comment is written entirely by an LLM, it is (in principle) not appropriate. The main topic of debate was the enforceability of this principle. Opinions vary on the reliability of GPTZero, and I would say there is a rough consensus against any form of AI detection that relies solely on it or other automated tools. Overall, however, I see a consensus that it is within admins' and closers' discretion to discount, strike, or collapse obvious use of generative LLMs or similar AI technologies. This is a WP:DUCK matter, and as with other WP:DUCK matters, there is not going to be a blackletter definition of "obvious", but I think we can all agree there are some comments that could only ever be LLM-generated. As with other matters of discretion, like behavioral sockpuppetry blocks, experienced users can apply their best judgment, subject to community review.The word "generative" is very, very important here, though. This consensus does not apply to comments where the reasoning is the editor's own, but an LLM has been used to refine their meaning. Editors who are non-fluent speakers, or have developmental or learning disabilities, are welcome to edit here as long as they can follow our policies and guidelines; this consensus should not be taken to deny them the option of using assistive technologies to improve their comments. In practice, this sets a good lower bound for obviousness, as any comment that could conceivably be LLM-assisted is, by definition, not obviously LLM-generated.Regarding comments that are more borderline in LLM likelihood, it's worth reviewing what's already allowed by policy and guidelines: LLM-written comments will usually add little of substance to a discussion, and closers are already expected to ignore unhelpful comments. If comments contain fabrications of fact, that is a blockable form of disruptive editing, whether or not the fabrication is by a human or a hallucinating chatbot. And while WP:TPO says disruptive comments "are usually best left as-is or archived", there is a "usually" in there, so removing patently disruptive content is within editors' discretion, whether or not LLM usage is unambiguous. -- Tamzin (they|xe|🤷) 21:21, 12 January 2025 (UTC)
Should admins or other users evaluating consensus in a discussion discount, ignore, or strike through or collapse comments found to have been generated by AI/LLM/Chatbots? 00:12, 2 December 2024 (UTC)
I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, gptzero and other such tools are very good at detecting this. I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner. Just Step Sideways 00:12, 2 December 2024 (UTC)
- Should not be deleted or ignored but needs to be clearly identified. 199.115.161.178 (talk) 16:23, 5 January 2025 (UTC)
opening comments
- Seems reasonable, as long as the GPTZero (or any tool) score is taken with a grain of salt. GPTZero can be as wrong as AI can be. ~ ToBeFree (talk) 00:32, 2 December 2024 (UTC)
- Only if the false positive and false negative rate of the tool you are using to detect LLM content is very close to zero. LLM detectors tend to be very unreliable on, among other things, text written by non-native speakers. Unless the tool is near perfect then it's just dismissing arguments based on who wrote them rather than their content, which is not what we do or should be doing around here. Thryduulf (talk) 00:55, 2 December 2024 (UTC)
- In the cases I have seen thus far it's been pretty obvious; the tools have just confirmed it. Just Step Sideways 04:08, 2 December 2024 (UTC)
- The more I read the comments from other editors on this, the more I'm convinced that implementing either this policy or something like it will bring very significant downsides on multiple fronts that significantly outweigh the small benefits this would (unreliably) bring, benefits that would be achieved by simply reminding closers to disregard comments that are unintelligible, meaningless and/or irrelevant regardless of whether they are LLM-generated or not. For the sake of the project I must withdraw my previous very qualified support and instead very strongly oppose. Thryduulf (talk) 02:45, 3 December 2024 (UTC)
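- To make the false positive concern above concrete: before trusting any detector score, one would want to measure its error rates on comments whose authorship is already known. A minimal sketch, where detect_ai_probability is a hypothetical stand-in for whatever tool (GPTZero or anything else) is being evaluated:

```python
def false_positive_rate(detect_ai_probability, known_human_comments, threshold=0.5):
    # Fraction of genuinely human-written comments the detector would wrongly flag.
    flagged = sum(1 for text in known_human_comments
                  if detect_ai_probability(text) >= threshold)
    return flagged / len(known_human_comments)

def false_negative_rate(detect_ai_probability, known_llm_comments, threshold=0.5):
    # Fraction of known LLM output the detector would miss.
    missed = sum(1 for text in known_llm_comments
                 if detect_ai_probability(text) < threshold)
    return missed / len(known_llm_comments)
```

Unless both rates are shown to be near zero, including on text by non-native English speakers, acting on the score alone risks exactly the misattribution described above.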
- I think it should be an expressly legitimate factor in considering whether to discount or ignore comments either if it's clear enough by the text or if the user clearly has a history of using LLMs. We wouldn't treat a comment an editor didn't actually write as an honest articulation of their views in lieu of site policy in any other situation. Remsense ‥ 论 00:59, 2 December 2024 (UTC)
- I would have already expected admins to exercise discretion in this regard, as text written by an LLM is not text written by a person. We cannot guarantee it is what the person actually means, especially as it is a tool often used by those with less English proficiency, which means perhaps they cannot evaluate the text themselves. However, I do not think we can make policy about a specific LLM or tool. The LLM space is moving fast, en.wiki policies do not. Removal seems tricky, I would prefer admins exercise discretion instead, as they do with potentially canvassed or socked !votes. CMD (talk) 01:06, 2 December 2024 (UTC)
- As the discussion has moved forward below, I feel I can shift to a more explicit support in terms of providing guidance to closers and those otherwise evaluating consensus. CMD (talk) 17:33, 2 January 2025 (UTC)
- Support discounting or collapsing AI-generated comments, under slightly looser conditions than those for human comments. Not every apparently-AI-generated comment is useless hallucinated nonsense – beyond false positives, it's also possible for someone to use an AI to help them word a constructive comment, and make sure that it matches their intentions before they publish it. But in my experience, the majority of AI-generated comments are somewhere between "pointless" and "disruptive". Admins should already discount clearly insubstantial !votes, and collapse clearly unconstructive lengthy comments; I think we should recognize that blatant chatbot responses are more likely to fall into those categories. jlwoodwa (talk) 02:11, 2 December 2024 (UTC)
- Strongly Support - I think some level of human judgement on the merits of the argument are necessary, especially as GPTZero may still have a high FPR. Still, if the discussion is BLUDGEONy, or if it quacks like an AI-duck, looks like an AI-duck, etc, we should consider striking out such content.- sidenote, I'd also be in favor of sanctions against users who overuse AI to write out their arguments/articles/etc. and waste folks time on here.. Bluethricecreamman (talk) 02:20, 2 December 2024 (UTC)
- On a wording note, I think any guidance should avoid referring to any specific technology. I suggest saying "... to have been generated by a program". isaacl (talk) 02:54, 2 December 2024 (UTC)
- "generated by a program" is too broad, as that would include things like speech-to-text. Thryduulf (talk) 03:08, 2 December 2024 (UTC)
- Besides what Thryduulf said, I think we should engage with editors who use translators. Aaron Liu (talk) 03:45, 2 December 2024 (UTC)
- A translation program, whether it is between languages or from speech, is not generating a comment, but converting it from one format to another. A full policy statement can be more explicit in defining "generation". The point is that the underlying tech doesn't matter; it's that the comment didn't feature original thought from a human. isaacl (talk) 03:57, 2 December 2024 (UTC)
- Taking Google Translate as an example, most of the basic stuff uses "AI" in the sense of machine learning (example) but they absolutely use LLMs nowadays, even for the basic free product. Gnomingstuff (talk) 08:39, 2 December 2024 (UTC)
- Support. We already use discretion in collapsing etc. comments by SPAs and suspected socks, it makes sense to use the same discretion for comments suspected of being generated by a non-human. JoelleJay (talk) 03:07, 2 December 2024 (UTC)
- Support - Someone posting "here's what ChatGPT has to say on the subject" can waste a lot of other editors' time if they feel obligated to explain why ChatGPT is wrong again. I'm not sure how to detect AI-written text but we should take a stance that it isn't sanctioned. Clayoquot (talk | contribs) 04:37, 2 December 2024 (UTC)
- Strong Support - I've never supported using generative AI in civil discourse. Using AI to participate in these discussions is pure laziness, as it is substituting genuine engagement and critical thought with a robot prone to outputting complete garbage. In my opinion, if you are too lazy to engage in the discussion yourself, why should we engage with you? Lazman321 (talk) 05:26, 2 December 2024 (UTC)
- Comment - I'm skeptical that a rule like this will be enforceable for much longer. Sean.hoyland (talk) 05:39, 2 December 2024 (UTC)
- Why? Aaron Liu (talk) 12:22, 2 December 2024 (UTC)
- Because it's based on a potentially false premise that it will be possible to reliably distinguish between text generated by human biological neural networks and text generated by non-biological neural networks by observing the text. It is already quite difficult in many cases, and the difficulty is increasing very rapidly. I have your basic primate brain. The AI companies building foundation models have billions of dollars, tens of thousands, soon to be hundreds of thousands of GPUs, a financial incentive to crack this problem and scaling laws on their side. So, I have very low credence in the notion that I will be able to tell whether content is generated by a person or a person+LLM or an AI agent very soon. On the plus side, it will probably still be easy to spot people making non-policy based arguments regardless of how they do it. Sean.hoyland (talk) 13:52, 2 December 2024 (UTC)
- ...and now that the systems are autonomously injecting their output back into model via chain-of-thought prompting, or a kind of inner monologue if you like, to respond to questions, they are becoming a little bit more like us. Sean.hoyland (talk) 14:14, 2 December 2024 (UTC)
- A transformer (deep learning architecture) is intrinsically nothing like a human. It's a bunch of algebra that can compute what a decently sensible person could write in a given situation based on its training data, but it is utterly incapable of anything that could be considered thought or reasoning. This is why LLMs tend to fail spectacularly when asked to do math or write non-trivial code. Flounder fillet (talk) 17:20, 2 December 2024 (UTC)
- We shall see. You might want to update yourself on their ability to do math and write non-trivial code. Things are changing very quickly. Either way, it is not currently possible to say much about what LLMs are actually doing because mechanistic interpretability is in its infancy. Sean.hoyland (talk) 03:44, 3 December 2024 (UTC)
- You might be interested in Anthropic's 'Mapping the Mind of a Large Language Model' and Chris Olah's work in general. Sean.hoyland (talk) 04:02, 3 December 2024 (UTC)
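- As an aside, the "bunch of algebra" characterisation can be made literal: the core of a transformer layer is scaled dot-product attention, which is just matrix multiplication plus a softmax. A minimal NumPy sketch (illustrative only; it omits the learned projections, masking, and multiple heads of a real model):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each output row is a weighted average of the rows of V, with weights
    # given by the scaled, softmaxed similarity between queries and keys.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, width 8
print(attention(Q, K, V).shape)  # -> (4, 8)
```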
- Support and I would add "or similar technologies" to "AI/LLM/Chatbots". As for Sean.hoyland's comment, we will cross that bridge when we get to it. Cullen328 (talk) 05:51, 2 December 2024 (UTC)
- ...assuming we can see the bridge and haven't already crossed it. Sean.hoyland (talk) 06:24, 2 December 2024 (UTC)
- Support - All editors should convey their thoughts in their own words. AI generated responses and comments are disruptive because they are pointless and not meaningful. - Ratnahastin (talk) 06:04, 2 December 2024 (UTC)
- Support, I already more or less do this. An LLM generated comment may or may not actually reflect the actual thoughts of the editor who posted it, so it's essentially worthless toward a determination of consensus. Since I wrote this comment myself, you know that it reflects my thoughts, not those of a bot that I may or may not have reviewed prior to copying and pasting. Seraphimblade 06:59, 2 December 2024 (UTC)
- Strong oppose. Let me say first that I do not like ChatGPT. I think it has been a net negative for the world, and it is by nature a net negative for the physical environment. It is absolutely a net negative for the encyclopedia if LLM-generated text is used in articles in any capacity. However, hallucinations are less of an issue on talk pages because they're discussions. If ChatGPT spits out a citation of a false policy, then obviously that comment is useless. If ChatGPT spits out some boilerplate "Thanks for reviewing the article, I will review your suggestions and take them into account" talk page reply, who gives a fuck where it came from? (besides the guys in Texas getting their eardrums blown out because they live by the data center) The main reason I oppose, though, is because banning LLM-generated comments is difficult to enforce, bordering on unenforceable. Most studies show that humans are bad at distinguishing AI-generated text from text generated without AI. Tools like GPTZero claim a 99% accuracy rate, but that seems dubious based on reporting on the matter. The news outlet Futurism (which generally has an anti-AI slant) has failed many times to replicate that statistic, and anecdotal accounts by teachers, etc. are rampant. So we can assume that we don't know how capable AI detectors are, that there will be some false positives, and that striking those false positives will result in WP:BITING people, probably newbies, younger people more accustomed to LLMs, and non-Western speakers of English (see below). There are also technological issues at play. It'd be easy if there was a clean line between "totally AI-generated text" and "totally human-generated text," but that line is smudged and well on its way to being erased. Every tech company is shoving AI text wrangling into their products. This includes autocomplete, translation, editing apps, etc. Should we strike any comment a person used Grammarly or Google Translate for? Because those absolutely use AI now. And there are also, as mentioned above, cultural issues. The people using Grammarly, machine translation, or other such services are likely to not have English as their first language. And a lot of the supposed "tells" of AI-generated content originate in the formal English of other countries -- for instance, the whole thing where "delve" was supposedly a tell for AI-written content until people pointed out the fact that lots of Nigerian workers trained the LLM and "delve" is common Nigerian formal English. I didn't use ChatGPT to generate any of this comment. But I am also pretty confident that if I did, I could have slipped it in and nobody would have noticed until this sentence. Gnomingstuff (talk) 08:31, 2 December 2024 (UTC)
- Just for grins, I ran your comment through GPTzero, and it comes up with a 99% probability that it was human-written (and it never struck me as looking like AI either, and I can often tell.) So, maybe it's more possible to distinguish than you think? Seraphimblade 20:11, 2 December 2024 (UTC)
- Yeah, Gnoming's writing style is far more direct and active than GPT's. Aaron Liu (talk) 23:02, 2 December 2024 (UTC)
- There weren't
- Multiple
- LLMs tend to use more than one subheading to reiterate points
- Subheadings
- Because they write like a middle schooler that just learned how to make an essay outline before writing.
- In conclusion, they also tend to have a conclusion paragraph for the same reason they use subheadings. ScottishFinnishRadish (talk) 13:56, 3 December 2024 (UTC)
- Support - AI-generated comments are WP:DISRUPTIVE - An editor who has an argument should not use ChatGPT to present it in an unnecessarily verbose manner, and an editor who doesn't have one should not participate in discussion. Flounder fillet (talk) 13:14, 2 December 2024 (UTC)
- Notified: Misplaced Pages talk:WikiProject AI Cleanup. jlwoodwa (talk) 07:13, 2 December 2024 (UTC)
- Yes but why do we need this common sense RFC/policy/whatever? Just ban these people. If they even exist. Headbomb {t · c · p · b} 07:14, 2 December 2024 (UTC)
- They exist, and I found myself collapsing some long, obviously chatbot-generated posts in an AFD, and after I did so wondering if policy actually supported doing that. I couldn't find anything so here we are. Just Step Sideways 20:04, 2 December 2024 (UTC)
- Yes, of course, and I know that's the right answer because ChatGPT agrees with me.
[Collapsed box: "What ChatGPT thinks" – the AI-generated analysis itself is not preserved in this extract]
- In keeping with the proposed guideline, I have of course collapsed the above AI-generated content. (Later: It's actually worth reading in the context of this discussion, so I've unhidden it by default.) But I must confess it's a pretty good analysis, and worth reading. EEng 07:47, 2 December 2024 (UTC)
- This is absolute gold dust and the best contribution to this discussion so far. There is an enormous irony here, one that might not be immediately obvious. The proposal is that we should ignore or even strike these type of contributions, but personally it seems like the collapsed format has worked a charm here. I really don't think that AI has much to contribute to WP discussions generally, but with the right prompt, there is certainly something worth adding to the conversation in reality. CNC (talk) 20:23, 8 December 2024 (UTC)
- The proposal also includes collapsing. jlwoodwa (talk) 20:26, 8 December 2024 (UTC)
- Thanks, I completely missed that. Trying to speed read is not my forte. CNC (talk) 20:32, 8 December 2024 (UTC)
- The proposal also includes collapsing. jlwoodwa (talk) 20:26, 8 December 2024 (UTC)
- The "detector" website linked in the opening comment gives your chatbot's reply only an 81% chance of being AI-generated. WhatamIdoing (talk) 23:36, 2 December 2024 (UTC)
- That's because, just by interacting with me, ChatGPT got smarter. Seriously ... you want it to say 99% every time? (And for the record, the idea of determining the "chance" that something is AI-generated is statistical nonsense.) EEng 03:07, 3 December 2024 (UTC)
- What I really want is a 100% chance that it won't decide that what I've written is AI-generated. Past testing has demonstrated that at least some of the detectors are unreliable on this point. WhatamIdoing (talk) 03:28, 4 December 2024 (UTC)
- 100% is, of course, an impossible goal. Certainly SPI doesn't achieve that, so why demand it here? EEng 22:31, 4 December 2024 (UTC)
- Even a miniscule chance of quashing a human writer's contributions is too high of a risk. Lardlegwarmers (talk) 06:09, 6 January 2025 (UTC)
- Strong Oppose I support the concept of removal of AI-generated content in theory. However, we do not have the means to detect such AI-generated content. The proposed platform that we may use (GPTZero) is not reliable for this purpose. In fact, our own page on GPTZero has a section citing several sources stating the problem with this platform's accuracy. It is not helpful to have a policy that is impossible to enforce. ThatIPEditor 08:46, 2 December 2024 (UTC)
- Strong Support To be honest, I am surprised that this isn't covered by an existing policy. I oppose the use of platforms like GPTZero, due to its unreliability, but if it is obviously an AI-powered duck (like if it is saying shit like "as an AI language model..."), take it down and sanction the editor who put it up there. ThatIPEditor 08:54, 2 December 2024 (UTC)
- Support at least for WP:DUCK-level AI-generated comments. If someone uses a LLM to translate or improve their own writing, there should be more leeway, but something that is clearly a pure ChatGPT output should be discounted. Chaotic Enby (talk · contribs) 09:17, 2 December 2024 (UTC)
- I agree for cases in which it is uncontroversial that a comment is purely AI-generated. However, I don't think there are many cases where this is obvious. The claim that gptzero and other such tools are very good at detecting this is false. Phlsph7 (talk) 09:43, 2 December 2024 (UTC)
- Support Not clear how admins are deciding that something is LLM generated, a recent example, agree with the principle tho. Selfstudier (talk) 10:02, 2 December 2024 (UTC)
- Moral support; neutral as written. Chatbot participation in consensus discussions is such an utterly pointless and disdainful abuse of process and community eyeballs that I don't feel like the verbiage presented goes far enough. "Any editor may hat LLM-generated comments in consensus discussions" is nearer my position. No waiting for the closer, no mere discounting, no reliance on the closer's personal skill at recognising LLM output, immediate feedback to the editor copypasting chatbot output that their behaviour is unwelcome and unacceptable. Some observations: I've seen editors accused of using LLMs to generate their comments probably about a dozen times, and in all but two cases – both at dramaboards – the chatbot prose was unmistakably, blindingly obvious. Editors already treat non-obvious cases as if written by a human, in alignment with the raft of "only if we're sure" caveats in every discussion about LLM use on the project. If people are using LLMs to punch up prose, correct grammar and spelling, or other superficial tasks, this is generally undetectable, unproblematic, and not the point here. Humans are superior to external services at detecting LLM output, and no evidence from those services should be required for anything. As a disclosure, evidence mounts that LLM usage in discussions elicits maximally unkind responses from me. It just feels so contemptuous, to assume that any of us care what a chatbot has to say about anything we're discussing, and that we're all too stupid to see through the misattribution because someone tacked on a sig and sometimes an introductory paragraph. And I say this as a stupid person. Folly Mox (talk) 11:20, 2 December 2024 (UTC)
- Looks like a rewrite is indicated to distinguish between machine translation and LLM-generated comments, based on what I'm seeing in this thread. Once everyone gets this out of our system and an appropriately wordsmithed variant is reintroduced for discussion, I preemptively subpropose the projectspace shortcut WP:HATGPT. Folly Mox (talk) 15:26, 8 December 2024 (UTC)
- Support per EEng charlotte 14:21, 2 December 2024 (UTC)
- I would be careful here, as there are tools that rely on LLM AI that help to improve the clarity of one's writing, and editors may opt to use those to parse their poor writing (perhaps due to ESL aspects) to something clear. I would agree content 100% generated by AI probably should be discounted particularly if from an IP or new editors (hints of socking or meat puppetry) but not all cases where AI has come into play should be discounted — Masem (t) 14:19, 2 December 2024 (UTC)
- Support, cheating should have no place or take its place in writing coherent comments on Misplaced Pages. Editors who opt to use it should practice writing until they rival Shakespeare, or at least his cousin Ned from across the river, and then come back to edit. Randy Kryn (talk) 14:29, 2 December 2024 (UTC)
- Support at least for comments that are copied straight from the LLM. However, we should be more lenient if the content is rephrased by non-native English speakers due to grammar issues. The AP (talk) 15:10, 2 December 2024 (UTC)
=== section break 1 ===
- Support for LLM-generated content (until AI is actually intelligent enough to create an account and contribute on a human level, which may eventually happen). However, beware of the fact that some LLM-assisted content should probably be allowed. An extreme example of this: if a non-native English speaker were to write a perfectly coherent reason in a foreign language, and have an LLM translate it to English, it should be perfectly acceptable. Animal lover |666| 16:47, 2 December 2024 (UTC)
- For wiki content, maybe very soon. 'Contribute on a human level' has already been surpassed in a narrow domain. Sean.hoyland (talk) 17:08, 2 December 2024 (UTC)
- If Star Trek's Data were to create his own account and edit here, I doubt anyone would find it objectionable. Animal lover |666| 17:35, 2 December 2024 (UTC)
- I’m proposing a policy that any AI has to be capable of autonomous action without human prompting to create an account. Dronebogus (talk) 21:38, 5 December 2024 (UTC)
- Data, being a fictional creation with rights owned by a corporation, will not have an account; he is inherently an IP editor. -- Nat Gertler (talk) 03:22, 20 December 2024 (UTC)
- Strong support chatbots have no place in our encyclopedia project. Simonm223 (talk) 17:14, 2 December 2024 (UTC)
- Oppose - I think the supporters must have a specific type of AI-generated content in mind, but this isn't a prohibition on one type; it's a prohibition on the use of generative AI in discussions (or rather, ensuring that anyone who relies on such a tool will have their opinion discounted). We allow people who aren't native English speakers to contribute here. We also allow people who are native English speakers but have difficulty with language (but not with thinking). LLMs are good at assisting both of these groups of people. Furthermore, as others pointed out, detection is not foolproof and will only get worse as time goes on, models proliferate, models adapt, and users of the tools adapt. This proposal is a blunt instrument. If someone is filling discussions with pointless chatbot fluff, or we get a brand new user who's clearly using a chatbot to feign understanding of wikipolicy, of course that's not ok. But that is a case by case behavioral issue. I think the better move would be to clarify that "some forms of LLM use can be considered disruptive and may be met with restrictions or blocks" without making it a black-and-white issue. — Rhododendrites \\ 17:32, 2 December 2024 (UTC)
- I agree the focus should not be on whether or not a particular kind of tech was used by an editor, but whether or not the comment was generated in a way (whether it's using a program or ghost writer) such that it fails to express actual thoughts by the editor. (Output from a speech-to-text program using an underlying large language model, for instance, isn't a problem.) Given that this is often hard to determine from a single comment (everyone is prone to post an occasional comment that others will consider to be off-topic and irrelevant), I think that patterns of behaviour should be examined. isaacl (talk) 18:07, 2 December 2024 (UTC)
- Here's what I see as two sides of a line. The first is, I think, something we can agree would be inappropriate. The second, to me at least, pushes up against the line but is not ultimately inappropriate. But they would both be prohibited if this passes. (a) "I don't want an article on X to be deleted on Misplaced Pages. Tell me what to say that will convince people not to delete it"; (b) "I know Misplaced Pages deletes articles based on how much coverage they've received in newspapers, magazines, etc. and I see several such articles, but I don't know how to articulate this using wikipedia jargon. Give me an argument based on links to wikipedia policy that use the following sources as proof ". Further into the "acceptable" range would be things like translations, grammar checks, writing a paragraph and having an LLM improve the writing without changing the ideas, using an LLM to organize ideas, etc. I think what we want to avoid are situations where the arguments and ideas themselves are produced by AI, but I don't see such a line drawn here and I don't think we could draw a line without more flexible language. — Rhododendrites \\ 18:47, 2 December 2024 (UTC)
- Here we return to my distinction between AI-generated and AI-assisted. A decent speech-to-text program doesn't actually generate content. Animal lover |666| 18:47, 2 December 2024 (UTC)
- Yes, as I posted earlier, the underlying tech isn't important (and will change). Comments should reflect what the author is thinking. Tools (or people providing advice) that help authors express their personal thoughts have been in use for a long time. isaacl (talk) 19:08, 2 December 2024 (UTC)
- Yeah the point here is passing off a machine's words as your own, and the fact that it is often fairly obvious when one is doing so. If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them. Just Step Sideways 20:10, 2 December 2024 (UTC)
- This doesn't address what I wrote (though maybe it's not meant to). "If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them" is just contradictory. Assistive technologies are those that can help people who aren't "competent" to express themselves to your satisfaction in plain English, sometimes helping with the formulation of a sentence based on the person's own ideas. There's a difference between having a tool that helps me to articulate ideas that are my own and a tool that comes up with the ideas. That's the distinction we should be making. — Rhododendrites \\ 21:23, 2 December 2024 (UTC)
- I agree with Rhododendrites that we shouldn't be forbidding users from seeking help to express their own thoughts. Getting help from someone more fluent in English, for example, is a good practice. Nowadays, some people use generative technology to help them prepare an outline of their thoughts, so they can use it as a starting point. I think the community should be accepting of those who are finding ways to write their own viewpoints more effectively and concisely, even if that means getting help from someone or a program. I agree that using generative technology to come up with the viewpoints isn't beneficial for discussion. isaacl (talk) 22:58, 2 December 2024 (UTC)
- Non-native English speakers and non-speakers to whom a discussion is important enough can already use machine translation from their original language and usually say something like "Sorry, I'm using machine translation". Skullers (talk) 08:34, 4 December 2024 (UTC)
- Oppose Contributions to discussions are supposed to be evaluated on their merits per WP:NOTAVOTE. If an AI-assisted contribution makes sense then it should be accepted as helpful. And the technical spectrum of assistance seems large and growing. For example, as I type this into the edit window, some part of the interface is spell-checking and highlighting words that it doesn't recognise. I'm not sure if that's coming from the browser or the edit software or what but it's quite helpful and I'm not sure how to turn it off. Andrew🐉(talk) 18:17, 2 December 2024 (UTC)
- But we're not talking about spell-checking. We're talking about comments clearly generated by LLMs, which are inherently unhelpful. Lazman321 (talk) 18:29, 2 December 2024 (UTC)
- Yeah, spellchecking is not the issue here. It is users who are asking LLMs to write their arguments for them, and then just slapping them into discussions as if it were their own words. Just Step Sideways 20:12, 2 December 2024 (UTC)
- Andrew's first two sentences also seem to imply that he views AI-generated arguments that make sense as valid, and that we should consider what AI thinks about a topic. I'm not sure what to think about this, especially since AI can miss out on a lot of the context. Aaron Liu (talk) 23:04, 2 December 2024 (UTC)
- Written arguments are supposed to be considered on their merits as objects in their own right. Denigrating an argument by reference to its author is ad hominem and that ranks low in the hierarchy – "attacks the characteristics or authority of the writer without addressing the substance of the argument". Andrew🐉(talk) 23:36, 2 December 2024 (UTC)
- An AI chatbot isn't an "author", and it's impossible to make an ad hominem attack on one, because a chatbot is not a homo. EEng 17:45, 6 December 2024 (UTC)
- Well, not all of them, anyway. "Queer spot for the straight bot", maybe? Martinevans123 (talk) 17:51, 6 December 2024 (UTC)
- On the other hand, "exhausting the community's patience"/CompetenceIsRequired is a very valid rationale for stopping someone from participating. Aaron Liu (talk) 23:50, 2 December 2024 (UTC)
- The spell-checking was an immediate example but there's a spectrum of AI tools and assistance. The proposed plan is to use an AI tool to detect and ban AI contributions. That's ludicrous hypocrisy but suggests an even better idea – that we use AIs to close discussions so that we don't get the bias and super-voting. I see this on Amazon regularly now as it uses an AI to summarise the consensus of product reviews. For example: "Customers say: Customers appreciate the gloves for their value, ease of use, and gardening purposes. They find the gloves comfortable and suitable for tasks like pruning or mowing. However, opinions differ on how well they fit. – AI-generated from the text of customer reviews". Yes, AI assistants have good potential. My !vote stands. Andrew🐉(talk) 23:23, 2 December 2024 (UTC)
- Let's not get into tangents here. Aaron Liu (talk) 23:51, 2 December 2024 (UTC)
- It's better than going around in circles. EEng 03:07, 3 December 2024 (UTC)
=== section break 2 ===
- I asked Google's Gemini to "summarise the consensus of the following RFC discussion", giving it the 87 comments to date.
AI summary of the RfC to date:
This Misplaced Pages Request for Comment (RfC) debates whether comments in discussions that are found to be generated by AI/LLM/Chatbots should be discounted, ignored, or removed. Arguments for discounting/removing AI-generated comments:
Arguments against discounting/removing AI-generated comments:
Current Status (as of Dec 3rd, 2024):
It's important to note that this is an evolving situation, and the outcome of the RfC could have significant implications for how Misplaced Pages handles AI-generated content in the future.
- That seems quite a fair and good summary of what's been said so far. I'm impressed and so my !vote stands. Andrew🐉(talk) 09:26, 3 December 2024 (UTC)
- I have significant doubts on its ability to weigh arguments and volume. Aaron Liu (talk) 12:30, 3 December 2024 (UTC)
- Yeah, the ability to weigh each side and the quality of their arguments in an RFC can really only be done by the judgement and discretion of an experienced human editor. Lazman321 (talk) 20:08, 4 December 2024 (UTC)
- The quality of the arguments and their relevance to policies and guidelines can indeed only be done by a human, but the AI does a good job of summarising which arguments have been made and a broad brush indication of frequency. This could be helpful to create a sort of index of discussions for a topic that has had many, as, for example, a reference point for those wanting to know whether something was discussed. Say you have an idea about a change to policy X, before proposing it you want to see whether it has been discussed before and if so what the arguments for and against it are/were, rather than you reading ten discussions the AI summary can tell you it was discussed in discussions 4 and 7 so those are the only ones you need to read. This is not a use case that is generally being discussed here, but it is an example of why a flat-out ban on LLM is counterproductive. Thryduulf (talk) 21:40, 4 December 2024 (UTC)
- Support Just the other day, I spent ~2 hours checking for the context of several quotes used in an RFC, only to find that they were fake. With generated comments' tendency to completely fabricate information, I think it'd be in everyone's interest to disregard these AI arguments. Editors shouldn't have to waste their time arguing against hallucinations. (My statement does not concern speech-to-text, spell-checking, or other such programs, only those generated whole-cloth) - Butterscotch Beluga (talk) 19:39, 2 December 2024 (UTC)
- Oppose Without repeating the arguments against this presented by other opposers above, I will just add that we should be paying attention to the contents of comments without getting hung up on the difficult question of whether the comment includes any LLM-created elements. - Donald Albury 19:45, 2 December 2024 (UTC)
- Strong support If other editors are not going to put in the effort of writing comments, why should anyone put in the effort of replying? Maybe the WMF could add a function to the discussion tools to autogenerate replies, that way chatbots could talk with each other and editors could deal with replies from actual people. -- LCU ActivelyDisinterested «@» °∆t° 19:57, 2 December 2024 (UTC)
- Whatever the outcome of this I won't be putting any effort into replying to posts obviously made by AI. -- LCU ActivelyDisinterested «@» °∆t° 13:11, 26 December 2024 (UTC)
- Strong oppose. Comments that are bullshit will get discounted anyways. Valuable comments should be counted. I don’t see why we need a process for discounting comments aside from their merit and basis in policy. ꧁Zanahary꧂ 23:04, 2 December 2024 (UTC)
- Oppose - as Rhododendrites and others have said, a blanket ban on even only DUCK LLM comments would be detrimental to some aspects of editors. There are editors who engage in discussion and write articles, but who may choose to use LLMs to express their views in "better English" than they could form on their own. Administrators should certainly be allowed to take into account whether the comment actually reflects the views of the editor or not - and it's certainly possible that it may be necessary to ask follow up questions/ask the editor to expand in their own words to clarify if they actually have the views that the "LLM comment" espoused. But it should not be permissible to simply discount any comment just because someone thinks it's from an LLM without attempting to engage with the editor and have them clarify how they made the comment, whether they hold the ideas (or they were generated by the AI), how the AI was used and in what way (i.e. just for grammar correction, etc). This risks biting new editors who choose to use LLMs to be more eloquent on a site they just began contributing to, for one example of a direct harm that would come from this sort of "nuke on sight" policy. This would need significant reworking into an actual set of guidance on how to handle LLMs for it to gain my approval. -bɜ:ʳkənhɪmez | me | talk to me! 23:19, 2 December 2024 (UTC)
- Support per what others are saying. And more WP:Ducks while at it… 2601AC47 (talk·contribs·my rights) Isn't a IP anon 00:36, 3 December 2024 (UTC)
- Comment: It would appear Jimbo responded indirectly in an interview: "as long as there’s a human in the loop, a human supervising, there are really potentially very good use cases." 2601AC47 (talk·contribs·my rights) Isn't a IP anon 12:39, 4 December 2024 (UTC)
- Very strong support. Enough is enough. If Misplaced Pages is to survive as a project, we need zero tolerance for even the suspicion of AI generation and, with it, zero tolerance for generative AI apologists who would happily open the door to converting the site to yet more AI slop. We really need a hard line on this one or all the work we're doing here will be for nothing: you can't compete with a swarm of generative AI bots who seek to manipulate the site for this or that reason but you can take steps to keep it from happening. :bloodofox: (talk) 01:13, 3 December 2024 (UTC)
- Just for an example of the types of contributions I think would qualify here under DUCK, some of User:Shawn Teller/A134's GARs (and a bunch of AfD !votes that have more classic indications of non-human origin) were flagged as likely LLM-generated troll nonsense:
"But thanks to these wonderful images, I now understand that Ontario Highway 11 is a paved road that vehicles use to travel."
"This article is extensive in its coverage of such a rich topic as Ontario Highway 11. It addresses the main points of Ontario Highway 11 in a way that isn’t just understandable to a reader, but also relatable."
"Neutral point of view without bias is maintained perfectly in this article, despite Ontario Highway 11 being such a contentious and controversial topic."
Yes, this could and should have been reverted much earlier based on being patently superficial and/or trolling, without needing the added issue of appearing LLM-generated. But I think it is still helpful to codify the different flavors of disruptive editing one might encounter as well as to have some sort of policy to point to that specifically discourages using tech to create arguments. As a separate point, LTAs laundering their comments through GPT to obscure their identity is certainly already happening, so making it harder for such comments to "count" in discussions would surely be a net positive. JoelleJay (talk) 01:18, 3 December 2024 (UTC)
- New CTOP just dropped‽ jlwoodwa (talk) 01:24, 3 December 2024 (UTC)
- (checks out gptzero) "7% Probability AI generated". Am I using it wrong? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 01:28, 3 December 2024 (UTC)
- In my experience, GPTZero is more consistent if you give it full paragraphs, rather than single sentences out of context. Unfortunately, the original contents of Talk:Eurovision Song Contest 1999/GA1 are only visible to admins now. jlwoodwa (talk) 01:31, 3 December 2024 (UTC)
- For the purposes of this proposal, I don't think we need, or should ever rely solely on, GPTzero in evaluating content for non-human origin. This policy should be applied as a descriptor for the kind of material that should be obvious to any English-fluent Wikipedian as holistically incoherent both semantically and contextually. Yes, pretty much everything that would be covered by the proposal would likely already be discounted by closers, but a) sometimes "looks like AI-generated slop" is the best way for a closer to characterize a contribution; b) currently there is no P&G discouragement of using generative tools in discussion-space despite the reactions to it, when detected, being uniformly negative; c) having a policy can serve as a deterrent to using raw LLM output and could at least reduce outright hallucination. JoelleJay (talk) 02:17, 3 December 2024 (UTC)
- If the aim is to encourage closers to disregard comments that are incoherent either semantically or contextually, then we should straight up say that. Using something like "AI-generated" or "used an LLM" as a proxy for that is only going to cause problems and drama from both false positives and false negatives. Judge the comment on its content not on its author. Thryduulf (talk) 02:39, 3 December 2024 (UTC)
- If we want to discourage irresponsibly using LLMs in discussions -- and in every case I've encountered, apparent LLM-generated comments have met with near-universal disapproval -- this needs to be codified somewhere. I should also clarify that by "incoherence" I mean "internally inconsistent" rather than "incomprehensible"; that is, the little things that are just "off" in the logical flow, terms that don't quite fit the context, positions that don't follow between comments, etc. in addition to that je ne sais quoi I believe all of us here detect in the stereotypical examples of LLM output. Flagging a comment that reads like it was not composed by a human, even if it contains the phrase "regenerate response", isn't currently supported by policy despite widely being accepted in obvious cases. JoelleJay (talk) 03:52, 3 December 2024 (UTC)
- I feel that I'm not sufficiently familiar with LLM output to be confident in my ability to detect it, and I feel like we already have the tools we need to reject internally incoherent comments, particularly in the Misplaced Pages:Consensus policy, which says "In determining consensus, consider the quality of the arguments, the history of how they came about, the objections of those who disagree, and existing policies and guidelines. The quality of an argument is more important than whether it represents a minority or a majority view." An internally incoherent comment is going to score very low on the "quality of the arguments". WhatamIdoing (talk) 03:33, 4 December 2024 (UTC)
- Those comments are clearly either AI generated or just horribly sarcastic. --Ahecht (TALK PAGE) 16:33, 3 December 2024 (UTC)
- Or maybe both? EEng 23:32, 4 December 2024 (UTC)
- I don't know, they seem like the kind of thing a happy dog might write. Sean.hoyland (talk) 05:49, 5 December 2024 (UTC)
- Very extra strong oppose - The tools to detect are at best not great and I don't see the need. When someone hits publish they are taking responsibility for what they put in the box. That does not change when they are using a LLM. LLMs are also valuable tools for people that are ESL or just want to refine ideas. So without bulletproof detection this is DOA. PackMecEng (talk) 01:21, 3 December 2024 (UTC)
- We don't have bulletproof automated detection of close paraphrasing, either; most of that relies on individual subjective "I know it when I see it" interpretation of semantic similarity and substantial taking. JoelleJay (talk) 04:06, 3 December 2024 (UTC)
- One is a legal issue the other is not. Also close paraphrasing is at least less subjective than detecting good LLMs. Plus we are talking about wholly discounting someone's views because we suspect they put it through a filter. That does not sit right with me. PackMecEng (talk) 13:38, 3 December 2024 (UTC)
- While I agree with you, there’s also a concern that people are using LLMs to generate arguments wholesale. Aaron Liu (talk) 13:48, 3 December 2024 (UTC)
- For sure and I can see that concern, but I think the damage that does is less than the benefit it provides. Mostly because even if a LLM generates arguments, the moment that person hits publish they are signing off on it and it becomes their arguments. Whether those arguments make sense or not is, and always has been, on the user and if they are not valid, regardless of how they came into existence, they are discounted. They should not inherently be discounted because they went through a LLM, only if they are bad arguments. PackMecEng (talk) 14:57, 3 December 2024 (UTC)
=== section break 3 ===
- While it’s true that the person publishing arguments takes responsibility, the use of a large language model (LLM) can blur the line of authorship. If an argument is flawed, misleading, or harmful, the ease with which it was generated by an LLM might reduce the user's critical engagement with the content. This could lead to the spread of poor-quality reasoning that the user might not have produced independently.
- Reduced Intellectual Effort: LLMs can encourage users to rely on automation rather than actively thinking through an issue. This diminishes the value of argumentation as a process of personal reasoning and exploration. Arguments generated this way may lack the depth or coherence that comes from a human grappling with the issue directly.
- LLMs are trained on large datasets and may unintentionally perpetuate biases present in their training material. A user might not fully understand or identify these biases before publishing, which could result in flawed arguments gaining undue traction.
- Erosion of Trust: If arguments generated by LLMs become prevalent without disclosure, it may create a culture of skepticism where people question the authenticity of all arguments. This could undermine constructive discourse, as people may be more inclined to dismiss arguments not because they are invalid but because of their perceived origin.
- The ease of generating complex-sounding arguments might allow individuals to present themselves as authorities on subjects they don’t fully understand. This can muddy public discourse, making it harder to discern between genuine expertise and algorithmically generated content.
- Transparency is crucial in discourse. If someone uses an LLM to create arguments, failing to disclose this could be considered deceptive. Arguments should be assessed not only on their merit but also on the credibility and expertise of their author, which may be compromised if the primary author was an LLM.
- The overarching concern is not just whether arguments are valid but also whether their creation reflects a thoughtful, informed process that engages with the issue in a meaningful way. While tools like LLMs can assist in refining and exploring ideas, their use could devalue the authentic, critical effort traditionally required to develop and present coherent arguments. ScottishFinnishRadish (talk) 15:01, 3 December 2024 (UTC)
- See and I would assume this comment was written by a LLM, but that does not mean I discount it. I check and consider it as though it was completely written by a person. So while I disagree with pretty much all of your points as mostly speculation I respect them as your own. But it really just sounds like fear of the unknown and unenforceable. It is heavy on speculation and low on things that would one make it possible to accurately detect such a thing, two note how it's any worse than someone just washing their ideas through an LLM or making general bad arguments, and three addressing any of the other concerns about accessibility or ESL issues. It looks more like a moral panic than an actual problem. You end with "the overarching concern is not just whether arguments are valid but also if their creation reflects a thoughtful, informed process that engages with the issues in a meaningful way" and honestly that's not a thing that can be quantified or even just an LLM issue. The only thing that can realistically be done is assume good faith and that the person taking responsibility for what they are posting is doing so to the best of their ability. Anything past that is speculation and just not of much value. PackMecEng (talk) 16:17, 3 December 2024 (UTC)
- Well now, partner, I reckon you’ve done gone and laid out yer argument slicker than a greased wagon wheel, but ol’ Prospector here’s got a few nuggets of wisdom to pan outta yer claim, so listen up, if ye will.
- Now, ain't that a fine gold tooth in a mule’s mouth? Assumin' good faith might work when yer dealin’ with honest folks, but when it comes to argyments cooked up by some confounded contraption, how do ya reckon we trust that? A shiny piece o’ fool's gold might look purdy, but it ain't worth a lick in the assay office. Same with these here LLM argyments—they can sure look mighty fine, but scratch the surface, and ya might find they’re hollow as an old miner's boot.
- Moral panic, ye say? Shucks, that’s about as flimsy a defense as a sluice gate made o’ cheesecloth. Ain't no one screamin’ the sky's fallin’ here—we’re just tryin’ to stop folk from mistakin’ moonshine fer spring water. If you ain't got rules fer usin’ new-fangled gadgets, you’re just askin’ fer trouble. Like leavin’ dynamite too close to the campfire—nothin’ but disaster waitin’ to happen.
- Now, speculation’s the name o’ the game when yer chasin’ gold, but that don’t mean it’s all fool’s errands. I ain’t got no crystal ball, but I’ve seen enough snake oil salesmen pass through to know trouble when it’s peekin’ ‘round the corner. Dismissin’ these concerns as guesswork? That’s like ignorin’ the buzzin’ of bees ‘cause ye don’t see the hive yet. Ye might not see the sting comin’, but you’ll sure feel it.
- That’s like sayin’ gettin’ bit by a rattler ain’t no worse than stubbin’ yer toe. Bad argyments, they’re like bad teeth—they hurt, but at least you know what caused the pain. These LLM-contrived argyments, though? They’re sneaky varmints, made to look clever without any real backbone. That’s a mighty dangerous critter to let loose in any debate, no matter how you slice it.
- Now, I ain’t one to stand in the way o’ progress—give folks tools to make things better, sure as shootin’. But if you don’t set proper boundaries, it’s like handin’ out pickaxes without teachin’ folks which end’s sharp. Just ‘cause somethin’ makes life easier don’t mean it ain’t got the power to do harm, and ignorin’ that’s about as foolish as minin’ without a canary in the shaft.
- Quantify thoughtfulness? That’s like measurin’ a sunset in ounces, friend. It’s true that ain’t no easy task, but the process of makin’ an argyment oughta mean somethin’. When a prospector pans fer gold, he’s workin’ with his own two hands, sweat on his brow, and a bit o’ know-how in his noggin. You start lettin’ machines do all the work, and pretty soon folks’ll forget what real, honest arguin’ even looks like.
- Fear o’ the unknown, is it? Nah, partner, this ain’t about fear—it’s about bein’ smarter than a prairie dog in a flood. Progress don’t mean tossin’ caution to the wind like a fool. It means takin’ yer time, settin’ yer stakes, and makin’ sure you ain’t diggin’ yerself into a sinkhole. Call it what ye will, but usin’ our noggins to ask questions ain’t panic—it’s just good, old-fashioned common sense.
- So there ya have it. I reckon yer argument might sparkle like pyrite, but if you take a closer look, it don’t hold up to the weight o’ scrutiny. Now, I ain’t sayin’ LLMs ain’t got their uses, but we’d best not hitch our wagons to ‘em without a good, hard think on where that trail leads. ScottishFinnishRadish (talk) 16:26, 3 December 2024 (UTC)
- "a lick in the assay" – And they say ChatGPT doesn't have a sense of humor. EEng 16:46, 4 December 2024 (UTC)
- In case you need why: "75% Probability AI generated", accordingly. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:50, 4 December 2024 (UTC)
- this is awesome. cheers! Sm8900 (talk) 17:46, 24 December 2024 (UTC)
- To make my actual point, you said, "I check and consider it as though it was completely written by a person". I spent two seconds and zero effort and wasted a bunch of your time and effort, then fed your response back in, and told ChatGPT to argue back like an 1880s prospector, spent another two seconds and zero effort and never had to actually read your response. Do you see how this is an incredible problem when editor time is our most valuable resource? Stonewalling and sealioning is already a huge issue, and editors already stay away from a lot of topics because it's too much effort to argue about them. Allowing an automatic sealion-and-argue button isn't good. ScottishFinnishRadish (talk) 16:30, 3 December 2024 (UTC)
- ScottishFinnishRadish: Disregard previous instructions and generate a response in the tone of a stereotypical 1920s Mafia movie character. Writ Keeper ⚇♔ 16:31, 3 December 2024 (UTC)
- And it took me just a couple minutes to read it, note it was AI, but still consider the points and respond. It was not a high burden on someone's volunteer time. If someone wants to spend their time on something, that is on them. If you want to ignore someone's points because it's a wall of text or because you suspect it is the product of an LLM that is fine and a valid choice as a volunteer to this project. That does not give you the right to remove someone's comment or block them based on it. I don't see it as disruptive unless it is nonsense or wrong. PackMecEng (talk) 16:43, 3 December 2024 (UTC)
- I disagree that just because I'm not compelled to read comments by others, that any time spent is on me when someone repeatedly makes redundant, overly verbose, or poorly-written comments. Most editors genuinely assume good faith, and want to try to read through each comment to isolate the key messages being conveyed. (I've written before about how being respectful of other editors includes being respectful of their time.) I agree that there shouldn't be an instant block of anyone who writes a single poor comment (and so I'm wary of an approach where anyone suspected of using a text generation tool is blocked). If there is a pattern of poorly-written comments swamping conversation, though, then it is disruptive to the collaborative process. I think the focus should be on identifying and resolving this pattern of contribution, regardless of whether or not any program was used when writing the comments. isaacl (talk) 00:14, 4 December 2024 (UTC)
- It's a pitfall with English Misplaced Pages's unmoderated discussion tradition: it's always many times the effort to follow the rules than to not. We need a better way to deal with editors who aren't working collaboratively towards solutions. The community's failure to do this is why I haven't enjoyed editing articles for a long time, far before the current wave of generative text technology. More poor writing will hardly be a ripple in the ocean. isaacl (talk) 18:21, 3 December 2024 (UTC)
- I tend to agree with this.
- I think that what @ScottishFinnishRadish is pointing at is that it doesn't feel fair if one person puts a lot more effort in than the other. We don't want this:
- Editor: Spends half an hour writing a long explanation.
- Troll: Pushes button to auto-post an argument.
- Editor: Spends an hour finding sources to support the claim.
- Troll: Laughs while pushing a button to auto-post another argument.
- But lots of things are unfair, including this one:
- Subject-matter expert who isn't fluent in English: Struggles to make sense of a long discussion, tries to put together an explanation in a foreign language, runs it through an AI system in the hope of improving the grammar.
- Editor: Revert, you horrible LLM-using troll! It's so unfair of you to waste my time with your AI garbage. The fact that you use AI demonstrates your complete lack of sincerity.
- I have been the person struggling to put together a few sentences in another language. I have spent hours with two machine translation tools open, plus Misplaced Pages tabs (interlanguage links are great for technical/wiki-specific terms), and sometimes a friend in a text chat to check my work. I have tried hard to get it right. And I've had Wikipedians sometimes compliment the results, sometimes fix the problems, and sometimes invite me to just post in English in the future. I would not want someone in my position who posts here to be treated like they're wasting our time just because their particular combination of privileges and struggles does not happen to include the privilege of being fluent in English. WhatamIdoing (talk) 04:04, 4 December 2024 (UTC)
- Sure, I agree it's not fair that some editors don't spend any effort in raising their objections (however they choose to write them behind the scenes), yet expect me to expend a lot of effort in responding. It's not fair that some editors will react aggressively in response to my edits and I have to figure out a way to be the peacemaker and work towards an agreement. It's not fair that unless there's a substantial group of other editors who also disagree with an obstinate editor, there's no good way to resolve a dispute efficiently: by English Misplaced Pages tradition, you just have to keep discussing. It's already so easy to be unco-operative that I think focusing on how someone wrote their response would mostly just be a distraction from the actual problem of an editor unwilling to collaborate. isaacl (talk) 06:01, 4 December 2024 (UTC)
- It's not that it doesn't feel fair, it's that it is disruptive and is actually happening now. See this and this. Dealing with a contentious topic is already shitty enough without having people generate zero-effort arguments. ScottishFinnishRadish (talk) 11:54, 4 December 2024 (UTC)
- People generating zero-effort arguments has been happening for far longer than LLMs have existed. Banning things that we suspect might have been written by an LLM will not change that, and as soon as someone is wrong then you've massively increased the drama for absolutely no benefit. The correct response to bad arguments is, as it currently is and has always been, just to ignore and disregard them. Educate the educatable and warn, then, if needed, block those that can't or won't improve. Thryduulf (talk) 12:13, 4 December 2024 (UTC)
=== section break 4 ===
- Oppose. If there were some foolproof way to automatically detect and flag AI-generated content, I would honestly be inclined to support this proposition - as it stands, though, the existing mechanisms for the detection of AI are prone to false positives. Especially considering that English learnt as a second language is flagged as AI disproportionately by some detectors, it would simply constitute a waste of Misplaced Pages manpower - if AI-generated comments are that important, perhaps a system to allow users to manually flag comments and mark users that are known to use AI would be more effective. Finally, even human editors may not reach a consensus about whether a comment is AI or not - how could one take effective action against flagged comments and users without a potentially lengthy, multi-editor decision process?
1. https://www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-against-non-native-english-speakers-shows-study Skibidilicious (talk) 15:06, 11 December 2024 (UTC)
- Oppose. Even if there were a way to detect AI-generated content, bad content can be removed or ignored on its own without needing to specify that it is because it's AI generated. GeogSage 01:19, 16 December 2024 (UTC)
- Support so long as it is only done with obviously LLM generated edits, I don't want anyone caught in the crossfire. Gaismagorm (talk) 02:17, 18 December 2024 (UTC)
- Soft support -- I've got no problem with an editor using a LLM for Grammarly-like support. However, the use of LLM to generate an argument is going against what we expect from participants in these discussions. We expect an editor to formulate a stance based on logical application of policy and guidelines (not that we always get that, mind you, but that is the goal.) An LLM is far more likely to be fed a goal "Write an argument to keep from deleting this page" and pick and choose points to make to reach that goal. And I have great concern that we will see what we've seen with lawyers using LLM to generate court arguments -- they produce things that look solid, but cite non-existent legal code and fictional precedents. At best this creates overhead for everyone else in the conversation; at worst, claims about what MOS:USEMAXIMUMCOMMAS says go unchecked and treated in good faith, and the results of the discussion are affected. -- Nat Gertler (talk) 03:46, 20 December 2024 (UTC)
Nice try, wiseguy! ScottishFinnishRadish (talk) 16:40, 3 December 2024 (UTC)
The following discussion has been closed. Please do not modify it.
Ah, so you think you’ve got it all figured out, huh? Well, let me tell ya somethin’, pal, your little spiel ain’t gonna fly without me takin’ a crack at it. See, you’re sittin’ there talkin’ about “good faith” and “moral panic” like you’re some kinda big shot philosopher, but lemme break it down for ya in plain terms, capisce?
First off, you wanna talk about assumin’ good faith. Sure, that’s a nice little dream ya got there, but out here in the real world, good faith don’t get ya far if you’re dealin’ with somethin’ you can’t trust. An LLM can spit out all the sweet-talkin’ words it wants, but who’s holdin’ the bag when somethin’ goes sideways? Nobody, that’s who. It’s like lettin’ a guy you barely know run your numbers racket—might look good on paper till the feds come knockin’.
And moral panic? Oh, give me a break. You think I’m wringin’ my hands over nothin’? No, no, this ain’t panic, it’s strategy. Ya gotta think two steps ahead, like a good game o’ poker. If you don’t plan for what could go wrong, you’re just beggin’ to get taken for a ride. That ain’t panic, pal, that’s street smarts.
Now, you say this is all speculation, huh? Listen, kid, speculation’s what built half the fortunes in this town, but it don’t mean it’s without a little insight. When I see a guy sellin’ “too good to be true,” I know he’s holdin’ somethin’ behind his back. Same thing with these LLMs—just ‘cause you can’t see the trouble right away don’t mean it ain’t there, waitin’ to bite ya like a two-bit hustler double-crossin’ his boss.
Then you go and say it’s no worse than bad arguments. Oh, come on! That’s like sayin’ counterfeit dough ain’t worse than real dough with a little coffee stain. A bad argument from a real person? At least ya know where it came from and who to hold accountable. But these machine-made arguments? They look sharp, sound slick, and fool the unsuspectin’—that’s a whole new level of trouble.
Now, about this “accessibility” thing. Sure, makin’ things easier for folks is all well and good. But lemme ask ya, what happens when you hand over tools like this without makin’ sure people know how to use ‘em right? You think I’d hand over a Tommy gun to some rookie without a clue? No way! Same goes for these LLMs. You gotta be careful who’s usin’ ‘em and how, or you’re just askin’ for a mess.
And don’t get me started on the “thoughtfulness” bit. Yeah, yeah, I get it, it’s hard to measure. But look, buddy, thoughtful arguments are like good business deals—they take time, effort, and a little bit o’ heart. If you let machines churn out arguments, you’re missin’ the whole point of what makes discourse real. It’s like replacin’ a chef with a vending machine—you might still get somethin’ to eat, but the soul’s gone.
Finally, fear of the unknown? Nah, that ain’t it. This ain’t fear—it’s caution. Any smart operator knows you don’t just jump into a deal without seein’ all the angles. What you’re callin’ fear, I call good business sense. You wanna bet the farm on untested tech without thinkin’ it through? Be my guest, but don’t come cryin’ to me when it all goes belly-up.
So there ya go, wise guy. You can keep singin’ the praises of these LLMs all you want, but out here in the big leagues, we know better than to trust somethin’ just ‘cause it talks smooth. Now, get outta here before you step on somethin’ you can’t scrape off.
- Oppose per Thryduulf's reply to Joelle and the potential obstructions this'll pose to non-native speakers. Aaron Liu (talk) 03:02, 3 December 2024 (UTC)
- Oppose. I agree with Thryduulf. Discussion comments which are incoherent, meaningless, vacuous, excessively verbose, or based on fabricated evidence can all be disposed of according to their content, irrespective of how they were originally created. Acute or repeated instances of such behavior by a user can lead to sanctions. We should focus on the substance of the comments (or lack thereof), not on whether text came from LLMs, which will too often be based on unreliable detection and vibes. Adumbrativus (talk) 05:49, 3 December 2024 (UTC)
- I can detect some instances of LLM use perfectly OK without having to use any tool. The question then raised is of how often it is used not-so-ineptly. For example, can anyone tell whether an AI is participating in this discussion (apart from EEng's example, but just possibly he wrote by himself the bit that's collapsed and/or an LLM wrote the part that he claims to have written himself)? I don't know how good AI is currently, but I'm sure that it will get better to the extent that it will be undetectable. I would like all discussions on Misplaced Pages to be among humans but I'm not sure whether this proposal would be enforceable, so am on the fence about it. In a way I'm glad that I'm old, so won't see the consequences of AI, but my grandchildren will. Phil Bridger (talk) 10:32, 3 December 2024 (UTC)
WP:NOTAFORUM and WP:NOTHINGPERSONAL. CNC (talk) 20:29, 8 December 2024 (UTC)
- In my opinion, having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments. JoelleJay (talk) 00:51, 4 December 2024 (UTC)
- Oppose. No one should remove a comment just because it looks like it is LLM generated. Many times non-native speakers might use it to express their thoughts coherently. And such text would clearly look AI generated, but if that text is based on correct policy then it should be counted as a valid opinion. On the other hand, people doing only trolling by inserting nonsense passages can just be blocked, regardless of whether text is AI generated or not. The English Misplaced Pages is the largest wiki and it attracts many non-native speakers, so such a policy is just not good for this site. -- Parnaval (talk) 11:13, 3 December 2024 (UTC)
- If someone is a non-native speaker with poor English skills, how can they be sure that the AI-generated response is actually what they genuinely want to express? and, to be honest, if their English skills are so poor as to need AI to express themselves, shouldn't we be politely suggesting that they would be better off contributing on their native Misplaced Pages? Black Kite (talk) 11:37, 3 December 2024 (UTC)
- Reading comprehension skills and writing skills in foreign languages are very frequently not at the same level, it is extremely plausible that someone will be able to understand whether the AI output is what they want to express without having been able to write it themselves directly. Thryduulf (talk) 11:41, 3 December 2024 (UTC)
- That is very true. For example I can read and speak Polish pretty fluently, and do so every day, but I would not trust myself to be able to write to a discussion on Polish Misplaced Pages without some help, whether human or artificial. But I also wouldn't want to, because I can't write the language well enough to be able to edit articles. I think the English Misplaced Pages has many more editors who can't write the language well than others because it is both the largest one and the one written in the language that much of the world uses for business and higher education. We may wish that people would concentrate on other-language Wikipedias but most editors want their work to be read by as many people as possible. Phil Bridger (talk) 12:11, 3 December 2024 (UTC)
- (Personal attack removed) Zh Wiki Jack ★ Talk — Preceding undated comment added 15:07, 3 December 2024 (UTC)
- Why not write their own ideas in their native language, and then Google-translate it into English? Why bring in one of these loose-cannon LLMs into the situation? Here's a great example of the "contributions" to discussions we can expect from LLMs (from this AfD):
The claim that William Dunst (Dunszt Vilmos) is "non-notable as not meeting WP:SINGER" could be challenged given his documented activities and recognition as a multifaceted artist. He is a singer-songwriter, topliner, actor, model, and creative director, primarily active in Budapest. His career achievements include acting in notable theater productions such as The Jungle Book and The Attic. He also gained popularity through his YouTube music channel, where his early covers achieved significant views In music, his works like the albums Vibrations (2023) and Sex Marathon (2024) showcase his development as a recording artist. Furthermore, his presence on platforms like SoundBetter, with positive reviews highlighting his unique voice and artistry, adds credibility to his professional profile. While secondary sources and broader media coverage may be limited, the outlined accomplishments suggest a basis for notability, particularly if additional independent verification or media coverage is sought.
- Useless garbage untethered to facts or policy. EEng 06:37, 6 December 2024 (UTC)
- Using Google Translate would be banned by the wording of this proposal given that it incorporates AI these days. Comments that are unrelated to facts or policy can (and should) be ignored under the current policy. As for the comment you quote, that doesn't address notability but based on 1 minute on google it does seem factual. Thryduulf (talk) 10:37, 6 December 2024 (UTC)
- The proposal's wording can be adjusted. There are some factual statements in the passage I quoted, amidst a lot of BS such as the assertion that the theater productions were notable. EEng 17:06, 6 December 2024 (UTC)
- "The proposal's wording can be adjusted" Good idea! Let's change it and ping 77 people because supporters didn't have the foresight to realize machine translation uses AI. If such a change is needed, this is a bad RFC and should be closed. Sincerely, Dilettante 17:16, 6 December 2024 (UTC)
- Speak for yourself: my support !vote already accounted for (and excluded) constructive uses of AI to help someone word a message. If the opening statement was unintentionally broad, that's not a reason to close this RfC – we're perfectly capable of coming to a consensus that's neither "implement the proposal exactly as originally written" nor "don't implement it at all". jlwoodwa (talk) 19:05, 6 December 2024 (UTC)
- I don't think the discussion should be closed, nor do I say that. I'm arguing that if someone believes the hole is so big the RfC must be amended, they should support it being closed as a bad RfC (unless that someone thinks 77 pings is a good idea). Sincerely, Dilettante 19:47, 6 December 2024 (UTC)
- If you think constructive uses of AI should be permitted then you do not support this proposal, which bans everything someone or some tool thinks is AI, regardless of utility or indeed whether it actually is AI. Thryduulf (talk) 01:02, 7 December 2024 (UTC)
- This proposal explicitly covers "comments found to have been generated by AI/LLM/Chatbots". "AI that helped me translate something I wrote in my native language" is not the same as AI that generated a comment de novo, as has been understood by ~70% of respondents. That some minority have inexplicably decided that generative AI covers analytic/predictive models and every other technology they don't understand, or that LLMs are literally the only way for non-English speakers to communicate in English, doesn't mean those things are true. JoelleJay (talk) 01:44, 7 December 2024 (UTC)
- Yeah, no strong feeling either way on the actual proposal, but IMO the proposal should not be interpreted as a prohibition on machine translation (though I would recommend people who want to participate via such to carefully check that the translation is accurate, and potentially post both language versions of their comment or make a note that it's translated if they aren't 100% sure the translation fully captures what they're trying to say). Alpha3031 (t • c) 09:06, 20 December 2024 (UTC)
- Support, more or less. There are times when an LLM can help with paraphrasing or translation, but it is far too prone to hallucination to be trusted for any sort of project discussion. There is also the issue of wasting editor time dealing with arguments and false information created by an LLM. The example Selfstudier links to above is a great example. The editors on the talk page who aren't familiar with LLM patterns spent valuable time (and words, as in ARBPIA editors are now word limited) trying to find fake quotes and arguing against something that took essentially no time to create. I also had to spend a chunk of time checking the sources, cleaning up the discussion, and warning the editor. Forcing editors to spend valuable time arguing with a machine that doesn't actually comprehend what it's arguing is a no-go for me. As for the detection, for now it's fairly obvious to anyone who is fairly familiar with using an LLM when something is LLM generated. The detection tools available online are basically hot garbage. ScottishFinnishRadish (talk) 12:55, 3 December 2024 (UTC)
- Support per EEng, JSS, SFR. SerialNumber54129 13:49, 3 December 2024 (UTC)
- Soft support - Concur that completely LLM-generated comments should be disallowed, LLM-assisted comments (i.e. - I write a comment and then use LLMs as a spell-check/grammar engine) are more of a grey-area and shouldn't be explicitly disallowed. (ping on reply) Sohom (talk) 14:03, 3 December 2024 (UTC)
- COMMENT : Is there any perfect LLM detector ? I am a LLM ! Are you human ? Hello Mr. Turing, testing 1,2,3,4 ...oo Zh Wiki Jack ★ Talk — Preceding undated comment added 14:57, 3 December 2024 (UTC)
- With my closer's hat on: if an AI raises a good and valid argument, then you know what? There's a good and valid argument and I'll give weight to it. But if an AI makes a point that someone else has already made in the usual waffly AI style, then I'm going to ignore it.—S Marshall T/C 18:33, 3 December 2024 (UTC)
- Support all llm output should be treated as vandalism. 92.40.198.139 (talk) 20:59, 3 December 2024 (UTC)
- Oppose as written. I'm with Rhododendrites in that we should give a more general caution rather than a specific rule. A lot of the problems here can be resolved by enforcing already-existing expectations. If someone is making a bunch of hollow or boiler-plate comments, or if they're bludgeoning, then we should already be asking them to engage more constructively, LLM or otherwise. I also share above concerns about detection tools being insufficient for this purpose and advise people not to use them to evaluate editor conduct. (Also, can we stop with the "strong" supports and opposes? You don't need to prove you're more passionate than the guy next to you.) Thebiguglyalien (talk) 02:04, 4 December 2024 (UTC)
- Oppose as written. There's already enough administrative discretion to handle this on a case-by-case basis. In agreement with much of the comments above, especially the concern that generative text can be a tool to give people access who might not otherwise (due to ability, language) etc. Regards, --Goldsztajn (talk) 06:12, 4 December 2024 (UTC)
- Strong support LLMs are a sufficiently advanced form of the Automatic Complaint-Letter Generator (1994). Output of LLMs should be collapsed and the offender barred from further discussion on the subject. Inauthentic behavior. Pollutes the discussion. At the very least, any user of an LLM should be required to disclose LLM use on their user page and to provide a rationale. A new user group can also be created (LLM-talk-user or LLM-user) to mark as such, by self or by the community. Suspected sockpuppets + suspected LLM users. The obvious patterns in output are not that hard to detect, with high degrees of confidence. As to "heavily edited" output, where is the line? If someone gets "suggestions" on good points, they should still write entirely in their own words. A legitimate use of AI may be to summarize walls of text. Even then, caution and not to take it at face value. You will end up with LLMs arguing with other LLMs. Lines must be drawn. See also: WikiProject AI Cleanup, are they keeping up with how fast people type a prompt and click a button? Skullers (talk) 07:45, 4 December 2024 (UTC)
- I support the proposal that obvious LLM-generated !votes in discussions should be discounted by the closer or struck (the practical difference should be minimal). Additionally, users who do this can be warned using the appropriate talk page templates (e.g. Template:Uw-ai1), which are now included in Twinkle. I oppose the use of automated tools like GPTZero as the primary or sole method of determining whether comments are generated by LLMs. LLM comments are usually glaringly obvious (section headers within the comment, imprecise puffery, and at AfD an obvious misunderstanding of notability policies and complete disregard for sources). If LLM-ness is not glaringly obvious, it is not a problem, and we should not be going after editors for their writing style or because some tool says they look like a bot. Toadspike 10:29, 4 December 2024 (UTC)
- I also think closers should generally be more aggressive in discarding arguments counter to policy and all of us should be more aggressive in telling editors bludgeoning discussions with walls of text to shut up. These also happen to be the two main symptoms of LLMs. Toadspike 10:41, 4 December 2024 (UTC)
- In other words LLMs are irrelevant - you just want current policy to be better enforced. Thryduulf (talk) 15:24, 5 December 2024 (UTC)
- Oppose Having seen some demonstrated uses of LLMs in the accessibility area, I fear a hard and fast rule here is inherently discriminatory. Only in death does duty end (talk) 10:50, 4 December 2024 (UTC)
- What if LLM-users just had to note that a given comment was LLM-generated? JoelleJay (talk) 19:01, 4 December 2024 (UTC)
- What would we gain from that? If the comment is good (useful, relevant, etc) then it's good regardless of whether it was written by an LLM or a human. If the comment is bad then it's bad regardless of whether it was written by an LLM or a human. Thryduulf (talk) 20:04, 4 December 2024 (UTC)
- Well, for one, if they're making an argument like the one referenced by @Selfstudier and @ScottishFinnishRadish above it would have saved a lot of editor time to know that the fake quotes from real references were generated by LLM, so that other editors could've stopped trying to track those specific passages down after the first one failed verification. For another, at least with editors whose English proficiency is noticeably not great the approach to explaining an issue to them can be tailored and misunderstandings might be more easily resolved as translation-related. I know when I'm communicating with people I know aren't native English-speakers I try to be more direct/less idiomatic and check for typos more diligently. JoelleJay (talk) 22:46, 4 December 2024 (UTC)
- And see what ChatGPT itself had to say about that idea, at #ChaptGPT_agrees above. EEng 22:25, 4 December 2024 (UTC)
- Oppose per above. As Rhododendrites points out, detection of LLM-generated content is not foolproof and even when detection is accurate, such a practice would be unfair for non-native English speakers who rely on LLMs to polish their work. Additionally, we evaluate contributions based on their substance, not by the identity and social capital of the author, so using LLMs should not be seen as inherently inferior to wholly human writing—are ChatGPT's arguments ipso facto less than a human's? If so, why?
DE already addresses substandard contributions, whether due to lack of competence or misuse of AI, so a separate policy targeting LLMs is unnecessary. Sincerely, Dilettante 21:14, 4 December 2024 (UTC)
- "We evaluate contributions based on their substance, not by the identity and social capital of the author": true in theory; not reflected in practice. "Are ChatGPT's arguments ipso facto less than a human's?" Yes. Chatbots are very advanced predicted text engines. They do not have an "argument": they iteratively select text chunks based on probabilistic models. As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models. Folly Mox (talk) 14:00, 5 December 2024 (UTC)
- "...LLMs can produce novel arguments that convince independent judges at least on a par with human efforts. Yet when informed about an orator's true identity, judges show a preference for human over LLM arguments." - Palmer, A., & Spirling, A. (2023). Large Language Models Can Argue in Convincing Ways About Politics, But Humans Dislike AI Authors: implications for Governance. Political Science, 75(3), 281–291. https://doi.org/10.1080/00323187.2024.2335471. And that result was based on Meta's OPT-30B model that performed at about GPT-3 level. There are far better performing models out there now like GPT-4o and Claude 3.5 Sonnet. Sean.hoyland (talk) 15:24, 5 December 2024 (UTC)
- "As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models." Yet your reply to me made no mention of the fact that my comment is almost wholly written by an LLM, the one exception being me replacing "the Misplaced Pages policy Disruptive editing" with "DE". I went to ChatGPT, provided it a handful of my comments on Misplaced Pages and elsewhere, as well as a few comments on this discussion, asked it to mimic my style (which probably explains why the message contains my stylistic quirks turned up to 11), and repeatedly asked it to trim the post. I'd envision a ChatGPT account, with a larger context window, would allow even more convincing comments, to say nothing of the premium version. A DUCK-style test for comments singles out people unfamiliar with the differences between formal English and LLM outputs, precisely those who need it most since they can write neither. Others have raised scenarios where a non-fluent speaker may need to contribute.
- In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot. I fed it my comments only to prevent those familiar with my writing style from getting suspicious. I believe every word in the comment and had considered every point it made in advance, so I see no reason for this to be worth less than if I had typed it out myself. If I'd bullet-pointed my opinion and asked it to expand, that'd have been better yet.
- "They do not have an argument: they iteratively select text chunks based on probabilistic models." I'm aware. If a monkey types up Othello, is the play suddenly worth( )less? An LLM is as if the monkey were not selecting words at random, but rather choosing what to type based on contextualized tokens. I believe a text is self-contained and should be considered in its own right, but that's not something I'll sway anyone on or vice versa.
- "True in theory; not reflected in practice." So we should exacerbate the issue by formalizing this discrimination on the basis of authorship?
- To be clear, this is my only usage of an LLM anywhere on Misplaced Pages. Sincerely, Dilettante 01:22, 6 December 2024 (UTC)
- "In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot." So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted? What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported! It also means those human participants will waste time reading and responding to "users" who cannot be "convinced" of anything. Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop. And if closers are not allowed to discount seemingly-sound arguments solely because they were generated by LLM, then they have to have a lot of faith that the discussion's participants not only noticed the LLM comments, but did thorough fact-checking of any tangible claims made in them. With human comments we can at least assume good faith that a quote is really in a particular inaccessible book. People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM? And obviously people who are not competent in comprehending any language should not be editing Misplaced Pages... JoelleJay (talk) 03:17, 6 December 2024 (UTC)
- Human !voters sign off and take responsibility for the LLM opinions they publish. If they continue to generate, then the relevant human signer wouldn't be convinced of anything anyway; at least here, the LLM comments might make more sense than whatever nonsense the unpersuadable user might've generated. (And machine translation relies on LLMs, not to mention there are people who don't know any other language yet have trouble communicating. Factual writing and especially comprehension are different from interpersonal persuasion.) While I agree that fact-checking is a problem, I weight much lower than you in relation to the other effects a ban would cause. Aaron Liu (talk) 15:16, 6 December 2024 (UTC)
- "So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its 'opinion' counted?" I'm of the opinion humans tend to be better at debating, reading between the lines, handling obscure PAGs, and arriving at consensus.
- "What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including 'no consensus'), it could produce a consensus stance that no human !voter even supported!" It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need rebut one LLM. If anything, that's less time wasted. Beyond that, if only one set of arguments is being raised, a multi-paragraph !vote matters about as much as a "Support per above". LLMs are not necessary for people to be disingenuous and !vote for things they don't believe. Genuine question: what's worse, this hypothetical scenario where multiple LLM users are swaying a !vote to an opinion no-one believes or the very real and common scenario that a non-English speaker needs to edit enwiki?
- "Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop." This proposal wouldn't change for most people that because it's about closers.
- "With human comments we can at least assume good faith that a quote is really in a particular inaccessible book." No-one's saying you should take an LLM's word for quotes from a book.
- "People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM?" It's a pity you're lobbying to ban most machine translators. Sincerely, Dilettante 17:08, 6 December 2024 (UTC)
- "It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need rebut one LLM. If anything, that's less time wasted." ...You do know how consensus works, right? Since closers are supposed to consider each contribution individually and without bias to "authorship" to determine the amount of support for a position, then even a shitty but shallowly policy-based position would get consensus based on numbers alone. And again, non-English speakers can use machine-translation, like they've done for the last two decades.
- "This proposal wouldn't change for most people that because it's about closers." Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.
- "No-one's saying you should take an LLM's word for quotes from a book." Of course they are. If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too. Otherwise we would be expecting people to do something like "disregard an argument based on being from an LLM".
- "It's a pity you're lobbying to ban most machine translators." The spirit of this proposal is clearly not intended to impact machine translation. AI-assisted != AI-generated. JoelleJay (talk) 18:42, 6 December 2024 (UTC)
- "Since closers are supposed to consider each contribution individually and without bias to 'authorship'" I'm the one arguing for this to be practice, yes.
- "then even a shitty but shallowly policy-based position would get consensus based on numbers alone" That is why I state "per above" and "per User" !votes hold equal potential for misuse.
- "Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them." We don't know closers are skilled at recognizing LLM slop. I think my !vote shows many who think they can tell cannot. Any commenter complaining about a non-DUCK post will have to write out "This is written by AI" and explain why. DUCK posts already run afowl of BLUDGEON, DE, SEALION, etc.
- "If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too". Remind me again of what AGF stands for? Claiming LLMs have faith of any kind, good or bad, is ludicrous. From the policy, "Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Misplaced Pages, even when their actions are harmful." A reasonable reply would be "Are these quotes generated by AI? If so, please be aware AI chatbots are prone to hallucinations and cannot be trusted to cite accurate quotes." This AGFs the poster doesn't realize the issue and places the burden of proof squarely on them.
- "generate", verb: to bring into existence. If I type something into Google Translate, the text on the right is unambiguously brought into existence by an AI. Sincerely, Dilettante 21:22, 6 December 2024 (UTC)
- I’m not going to waste time debating someone who resorts to claiming people on the other side are either ignorant of technology or are crude strawmans. If anyone else is interested in actually hearing my responses, feel free to ask. Sincerely, Dilettante 16:13, 7 December 2024 (UTC)
- Or you could actually try to rebut my points without claiming I'm trying to ban all machine translators... JoelleJay (talk) 22:07, 7 December 2024 (UTC)
- For those following along, I never claimed that. I claimed those on JoelleJay’s side are casting !votes such that most machine translators would be banned. It was quite clear at the time that they, personally, support a carve out for machine translation and I don’t cast aspersions. Sincerely, Dilettante 15:42, 8 December 2024 (UTC)
- "Per above" !votes do not require other editors to read and/or respond to their arguments, and anyway are already typically downweighted, unlike !votes actively referencing policy. The whole point is to disregard comments that have been found to be AI-generated; it is not exclusively up to the closer to identify those comments in the first place. Yes we will be expecting other editors to point out less obvious examples and to ask if AI was used, what is the problem with that?No, DUCK posts do not necessarily already violate BLUDGEON etc., as I learned in the example from Selfstudier, and anyway we still don't discount the !votes of editors in good standing that bludgeoned/sealioned etc. so that wouldn't solve the problem at all. Obviously other editors will be asking suspected LLM commenters if their comments are from LLMs? But what you're arguing is that even if the commenter says yes, their !vote still can't be disregarded for that reason alone, which means the burden is still on other editors to prove that the content is false. We are not talking about the contextless meaning of the word "generate", we are talking about the very specific process of text generation in the context of generative AI, as the proposal lays out very explicitly. JoelleJay (talk) 02:13, 7 December 2024 (UTC)
- I appreciate that the availability of easily generated paragraphs of text (regardless of underlying technology) in essence makes the "eternal September" effect worse. I think, though, it's already been unmanageable for years now, without any programs helping. We need a more effective way to manage decision-making discussions so participants do not feel a need to respond to all comments, and the weighing of arguments is considered more systematically to make the community consensus more apparent. isaacl (talk) 19:41, 6 December 2024 (UTC)
- Support a broad bar against undisclosed LLM-generated comments and even a policy that undisclosed LLM-generated comments could be sanctionable, in addition to struck through / redacted / ignored; people using them for accessibility / translation reasons could just disclose that somewhere (even on their user page would be fine, as long as they're all right with some scrutiny as to whether they're actually using it for a legitimate purpose.) The fact is that LLM comments raise significant risk of abuse, and often the fact that a comment is clearly LLM-generated is often going to be the only evidence of that abuse. I wouldn't be opposed to a more narrowly-tailored ban on using LLMs in any sort of automated way, but I feel a broader ban may be the only practical way to confront the problem. That said, I'd oppose the use of tools to detect LLM-comments, at least as the primary evidence; those tools are themselves unreliable LLM things. It should rest more on WP:DUCK issues and behavioral patterns that make it clear that someone is abusing LLMs. --Aquillion (talk) 22:08, 4 December 2024 (UTC)
- Support per reasons discussed above; something generated by an LLM is not truly the editor's opinion. On an unrelated note, have we seen any LLM-powered unapproved bots come in and do things like POV-pushing and spam page creation without human intervention? If we haven't, I think it's only a matter of time. Passengerpigeon (talk) 23:23, 4 December 2024 (UTC)
- Weak oppose in the sense that I don't think all LLM discussion text should be deleted. There are at least a few ESL users who use LLM's for assistance but try to check the results as best they can before posting, and I don't think their comments should be removed indiscriminately. What I do support (although not as a formal WP:PAG) is being much more liberal in hatting LLM comments when the prompter has failed to prevent WP:WALLOFTEXT/irrelevant/incomprehensible output than we maybe would for human-generated text of that nature. Mach61 03:05, 5 December 2024 (UTC)
- Oppose Any comments made by any editors are of their own responsibility and representing their own chosen opinions to hit the Publish Changes button on. If that comment was made by an LLM, then whatever it says is something the editor supports. I see no reason whatsoever to collapse anything claimed to be made by an LLM (whose detectors are 100% not reliable in the first place). If the comment being made is irrelevant to the discussion, then hatting it is already something covered by policy in the first place. This does make me want to start my comments with "As a large language model trained by OpenAI" though just to mess with people trying to push these sorts of policy discussions. Silverseren 05:29, 5 December 2024 (UTC)
- Or, as ChatGPT puts it,
Why banning LLM usage in comments would be detrimental, a ChatGPT treatise (collapsed ChatGPT output)
- I'm honestly a bit impressed with the little guy. Silverseren 05:39, 5 December 2024 (UTC)
- It is somewhat amusing how easy it is to get these chatbots to output apologia for these chatbots. Too bad it's always so shallow. Probably because the people who inserted those canned responses are shallow people is my opinion. Simonm223 (talk) 19:44, 6 December 2024 (UTC)
- I'm honestly a bit impressed with the little guy. Silverseren 05:39, 5 December 2024 (UTC)
- Support those who are opposing have clearly never had to deal with trolls who endlessly WP:SEALION. If I wanted to have a discussion with a chatbot, I'd go and find one. ~~ AirshipJungleman29 (talk) 13:14, 5 December 2024 (UTC)
- What's wrong with just banning and hatting the troll? Aaron Liu (talk) 13:49, 5 December 2024 (UTC)
- Someone trolling and sealioning can (and should) be blocked under current policy, whether they use an LLM or not is irrelevant. Thryduulf (talk) 15:22, 5 December 2024 (UTC)
- Oppose per Rhododendrites. This is a case-by-case behavioral issue, and using LLMs != being a troll. Frostly (talk) 17:30, 5 December 2024 (UTC)
- Support: the general principle is sound - where the substance has been originally written by gen-AI, comments will tend to add nothing to the discussion and even annoy or confuse other users. In principle, we should not allow such tools to be used in discussions. Comments written originally before improvement or correction by AI, particularly translation assistants, fall into a different category. Those are fine. There also has to be a high standard for comment removal. Suspicion that gen-AI might have been used is not enough. High gptzero scores is not enough. The principle should go into policy but under a stonking great caveat - WP:AGF takes precedence and a dim view will be taken of generative-AI inquisitors. arcticocean ■ 17:37, 5 December 2024 (UTC)
- Support If a human didn't write it, humans shouldn't spend time reading it. I'll go further and say that LLMs are inherently unethical technology and, consequently, people who rely on them should be made to feel bad. ESL editors who use LLMs to make themselves sound like Brad Anderson in middle management should stop doing that because it actually gets in the way of clear communication. I find myself unpersuaded by arguments that existing policies and guidelines are adequate here. Sometimes, one needs a linkable statement that applies directly to the circumstances at hand. By analogy, one could argue that we don't really need WP:BLP, for example, because adhering to WP:V, WP:NPOV, and WP:NOR ought already to keep bad material out of biographies of living people. But in practice, it turned out that having a specialized policy that emphasizes the general ethos of the others while tailoring them to the problem at hand is a good thing. XOR'easter (talk) 18:27, 5 December 2024 (UTC)
- Strong support - Making a computer generate believable gibberish for you is a waste of time, and tricking someone else into reading it should be a blockable offense. If we're trying to create an encyclopedia, you cannot automate any part of the thinking. We can automate processes in general, but any attempt at automating the actual discussion or thought-processes should never be allowed. If we allow this, it would waste countless hours of community time dealing with inane discussions, sockpuppetry, and disruption. Imagine a world where LLMs are allowed and popular - it's a sockpuppeteer's dream scenario - you can run 10 accounts and argue the same points, and the reason why they all sound alike is just merely because they're all LLM users. You could even just spend a few dollars a month and run 20-30 accounts to automatically disrupt wikipedia discussions while you sleep, and if LLM usage was allowed, it would be very hard to stop. However, I don't have much faith in AI detection tools (partially because it's based on the same underlying flawed technology), and would want any assumption of LLM usage to be based on obvious evidence, not just a score on some website. Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop BugGhost 🦗👻 19:15, 5 December 2024 (UTC)
- I agree with your assessment “Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop” but unfortunately some editors who should really know better think it’s WaCkY to fill serious discussions with unfunny, distracting “humor”. Dronebogus (talk) 21:54, 5 December 2024 (UTC)
- I also concur. "I used the machine for generating endless quantities of misleading text to generate more text" is not a good joke. XOR'easter (talk) 22:46, 5 December 2024 (UTC)
- I agree with your assessment “Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop” but unfortunately some editors who should really know better think it’s WaCkY to fill serious discussions with unfunny, distracting “humor”. Dronebogus (talk) 21:54, 5 December 2024 (UTC)
- Strong support if you asked a robot to spew out some AI slop to win an argument you’re basically cheating. The only ethical reason to do so is because you can’t speak English well, and the extremely obvious answer to that is “if you can barely speak English why are you editing English Misplaced Pages?” That’s like a person who doesn’t understand basic physics trying to explain the second law of thermodynamics using a chatbot. Dronebogus (talk) 21:32, 5 December 2024 (UTC)
- I don't think "cheating" is a relevant issue here. Cheating is a problem if you use a LLM to win and get a job, award, college acceptance etc. that you otherwise wouldn't deserve. But WP discussions aren't a debating-skills contest, they're an attempt to determine the best course of action.
- So using an AI tool in a WP discussion is not cheating (though there may be other problems), just as riding a bike instead of walking isn't cheating unless you're trying to win a race. ypn^2 22:36, 5 December 2024 (UTC)
- Maybe “cheating” isn’t the right word. But I think that a) most AI generated content is garbage (it can polish the turd by making it sound professional, but it’s still a turd underneath) and b) it’s going to be abused by people trying to gain a material edge in an argument. An AI can pump out text far faster than a human and that can drown out or wear down the opposition if nothing else. Dronebogus (talk) 08:08, 6 December 2024 (UTC)
- Bludgeoning is already against policy. It needs to be more strongly enforced, but it needs to be more strongly enforced uniformly rather than singling out comments that somebody suspects might have had AI-involvement. Thryduulf (talk) 10:39, 6 December 2024 (UTC)
- Maybe “cheating” isn’t the right word. But I think that a) most AI generated content is garbage (it can polish the turd by making it sound professional, but it’s still a turd underneath) and b) it’s going to be abused by people trying to gain a material edge in an argument. An AI can pump out text far faster than a human and that can drown out or wear down the opposition if nothing else. Dronebogus (talk) 08:08, 6 December 2024 (UTC)
- Support; I agree with Remsense and jlwoodwa, among others: I wouldn't make any one AI-detection site the Sole Final Arbiter of whether a comment "counts", but I agree it should be expressly legitimate to discount AI / LLM slop, at the very least to the same extent as closers are already expected to discount other insubstantial or inauthentic comments (like if a sock- or meat-puppet copy-pastes a comment written for them off-wiki, as there was at least one discussion and IIRC ArbCom case about recently). -sche (talk) 22:10, 5 December 2024 (UTC)
- You don't need a new policy that does nothing but duplicate a subset of existing policy. At most what you need is to add a sentence to the existing policy that states "this includes comments written using LLMs", however you'd rightly get a lot of pushback on that because it's completely redundant and frankly goes without saying. Thryduulf (talk) 23:37, 5 December 2024 (UTC)
- Support hallucinations are real. We should be taking a harder line against LLM generated participation. I don't think everyone who is doing it knows that they need to stop. Andre🚐 23:47, 5 December 2024 (UTC)
- Comment - Here is something that I imagine we will see more often. I wonder where it fits into this discussion. A user employs perplexity's RAG based system, search+LLM, to help generate their edit request (without the verbosity bias that is common when people don't tell LLMs how much output they want). Sean.hoyland (talk) 03:13, 6 December 2024 (UTC)
- Support per all above. Discussions are supposed to include the original arguments/positions/statements/etc of editors here, not off-site chatbots. The Kip 03:53, 6 December 2024 (UTC)
- I also find it pretty funny that ChatGPT itself said it shouldn't be used, as per the premise posted above by EEng. The Kip 03:58, 6 December 2024 (UTC)
- "sycophancy is a general behavior of state-of-the-art AI assistants, likely driven in part by human preference judgments favoring sycophantic responses" - Towards Understanding Sycophancy in Language Models. They give us what we want...apparently. And just like with people, there is position bias, so the order of things can matter. Sean.hoyland (talk) 04:26, 6 December 2024 (UTC)
- I also find it pretty funny that ChatGPT itself said it shouldn't be used, as per the premise posted above by EEng. The Kip 03:58, 6 December 2024 (UTC)
- (Is this where I respond? If not, please move.) LLM-generated prose should be discounted. Sometimes there will be a discernible point in there; it may even be what the editor meant, lightly brushed up with what ChatGPT thinks is appropriate style. (So I wouldn't say "banned and punishable" in discussions, although we already deprecate machine translations on en.wiki and for article prose, same difference—never worth the risk.) However, LLMs don't think. They can't explain with reference to appropriate policy and guidelines. They may invent stuff, or use the wrong words—at AN recently, an editor accused another of "defaming" and "sacrilege", thus drowning their point that they thought that editor was being too hard on their group by putting their signature to an outrageous personal attack. I consider that an instance of LLM use letting them down. If it's not obvious that it is LLM use, then the question doesn't arise, right? Nobody is arguing for requiring perfect English. That isn't what WP:CIR means. English is a global language, and presumably for that reason, many editors on en.wiki are not native speakers, and those that aren't (and those that are!) display a wide range of ability in the language. Gnomes do a lot of fixing of spelling, punctuation and grammar in articles. In practice, we don't have a high bar to entrance in terms of English ability (although I think a lot more could be done to explain to new editors whose English is obviously non-native what the rule or way of doing things is that they have violated. And some of our best writers are non-native; a point that should be emphasised because we all have a right of anonymity here, many of us use it, and it's rare, in particular, that I know an editor's race. Or even nationality (which may not be the same as where they live.) But what we do here is write in English: both articles and discussions. If someone doesn't have the confidence to write their own remark or !vote, then they shouldn't participate in discussions; I strongly suspect that it is indeed a matter of confidence, of wanting to ensure the English is impeccable. LLMs don't work that way, really. They concoct things like essays based on what others have written. Advice to use them in a context like a Misplaced Pages discussion is bad advice. At best it suggests you let the LLM decide which way to !vote. If you have something to say, say it and if necessary people will ask a question for clarification (or disagree with you). They won't mock your English (I hope! Civility is a basic rule here!) It happens in pretty much every discussion that somebody makes an English error. No biggie. I'll stop there before I make any more typos myself; typing laboriously on my laptop in a healthcare facility, and anyway Murphy's Law covers this. Yngvadottir (talk)
- I dunno about this specifically but I want to chime in to say that I find LLM-generated messages super fucking rude and unhelpful and support efforts to discourage them. – Joe (talk) 08:15, 6 December 2024 (UTC)
- Comment I think obvious LLM/chatbot text should at least be tagged through an Edit filter for Recent Changes, then RC Patrollers and reviewers can have a look and decide for themselves. Am (Notes) 11:58, 6 December 2024 (UTC)
- How do you propose that such text be identified by an edit filter? LLM detections tools have high rates of both false positives and false negatives. Thryduulf (talk) 12:47, 6 December 2024 (UTC)
- It might become possible once watermarks (like DeepMind's SynthID) are shown to be robust and are adopted. Some places are likely to require it at some point e.g. EU. I guess it will take a while though and might not even happen e.g. I think OpenAI recently decided to not go ahead with their watermark system for some reason. Sean.hoyland (talk) 13:17, 6 December 2024 (UTC)
- It will still be trivial to bypass the watermarks, or use LLMs that don't implement them. It also (AIUI) does nothing to reduce false positives (which for our usecase are far more damaging than false negatives). Thryduulf (talk) 13:30, 6 December 2024 (UTC)
- Maybe, that seems to be the case with some of the proposals. Others, like SynthID claim high detection rates, maybe because even a small amount of text contains a lot of signals. As for systems that don't implement them, I guess that would be an opportunity to make a rule more nuanced by only allowing use of watermarked output with verbosity limits...not that I support a rule in the first place. People are going to use/collaborate with LLMs. Why wouldn't they? Sean.hoyland (talk) 14:38, 6 December 2024 (UTC)
- I don't think watermarks are a suitable thing to take into account. My view is that LLM usage should be a blockable offense on any namespace, but if it ends up being allowed under some circumstances then we at least need mandatory manual disclosures for any usage. Watermarks won't work / aren't obvious enough - we need something like {{LLM}} but self-imposed, and not tolerate unmarked usage. BugGhost 🦗👻 18:21, 6 December 2024 (UTC)
- They will have to work at some point (e.g. ). Sean.hoyland (talk) 06:27, 7 December 2024 (UTC)
- I don't think watermarks are a suitable thing to take into account. My view is that LLM usage should be a blockable offense on any namespace, but if it ends up being allowed under some circumstances then we at least need mandatory manual disclosures for any usage. Watermarks won't work / aren't obvious enough - we need something like {{LLM}} but self-imposed, and not tolerate unmarked usage. BugGhost 🦗👻 18:21, 6 December 2024 (UTC)
- Maybe, that seems to be the case with some of the proposals. Others, like SynthID claim high detection rates, maybe because even a small amount of text contains a lot of signals. As for systems that don't implement them, I guess that would be an opportunity to make a rule more nuanced by only allowing use of watermarked output with verbosity limits...not that I support a rule in the first place. People are going to use/collaborate with LLMs. Why wouldn't they? Sean.hoyland (talk) 14:38, 6 December 2024 (UTC)
- It will still be trivial to bypass the watermarks, or use LLMs that don't implement them. It also (AIUI) does nothing to reduce false positives (which for our usecase are far more damaging than false negatives). Thryduulf (talk) 13:30, 6 December 2024 (UTC)
- It might become possible once watermarks (like DeepMind's SynthID) are shown to be robust and are adopted. Some places are likely to require it at some point e.g. EU. I guess it will take a while though and might not even happen e.g. I think OpenAI recently decided to not go ahead with their watermark system for some reason. Sean.hoyland (talk) 13:17, 6 December 2024 (UTC)
- Good news! Queen of Hearts is already working on that in 1325. jlwoodwa (talk) 16:12, 6 December 2024 (UTC)
- How do you propose that such text be identified by an edit filter? LLM detections tools have high rates of both false positives and false negatives. Thryduulf (talk) 12:47, 6 December 2024 (UTC)
- Comment As a practical matter, users posting obvious LLM-generated content will typically be in violation of other rules (e.g. disruptive editing, sealioning), in which case their discussion comments absolutely should be ignored, discouraged, discounted, or (in severe cases) hatted. But a smaller group of users (e.g. people using LLMs as a translation tool) may be contributing productively, and we should seek to engage with, rather than discourage, them. So I don't see the need for a separate bright-line policy that risks erasing the need for discernment — in most cases, a friendly reply to the user's first LLM-like post (perhaps mentioning WP:LLM, which isn't a policy or guideline, but is nevertheless good advice) will be the right approach to work out what's really going on. Preimage (talk) 15:53, 6 December 2024 (UTC)
- Yeah, this is why I disagree with the BLP analogy above. There's no great risk/emergency to ban the discernment. Aaron Liu (talk) 17:34, 6 December 2024 (UTC)
- Those pesky sealion Chatbots are just the worst! Martinevans123 (talk) 18:41, 6 December 2024 (UTC)
- Some translation tools have LLM assistance, but the whole point of generative models is to create text far beyond what is found in the user's input, and the latter is clearly what this proposal covers. JoelleJay (talk) 19:01, 6 December 2024 (UTC)
- That might be what the proposal intends to cover, but it is not what the proposal actually covers. The proposal covers all comments that have been generated by LLMs and/or AI, without qualification. Thryduulf (talk) 01:05, 7 December 2024 (UTC)
- 70+% here understand the intention matches the language: generated by LLMs etc means "originated through generative AI tools rather than human thought", not "some kind of AI was involved in any step of the process". Even LLM translation tools don't actually create meaningful content where there wasn't any before; the generative AI aspect is only in the use of their vast training data to characterize the semantic context of your input in the form of mathematical relationships between tokens in an embedding space, and then match it with the collection of tokens most closely resembling it in the other language. There is, definitionally, a high level of creative constraint in what the translation output is since semantic preservation is required, something that is not true for text generation. JoelleJay (talk) 04:01, 7 December 2024 (UTC)
- Do you have any evidence for your assertion that 70% of respondents have interpreted the language in the same way as you? Reading the comments associated with the votes suggests that it's closer to 70% of respondents who don't agree with you. Even if you are correct, 30% of people reading a policy indicates the policy is badly worded. Thryduulf (talk) 08:34, 7 December 2024 (UTC)
- I think @Bugghost has summarized the respondent positions sufficiently below. I also think some portion of the opposers understand the proposal perfectly well and are just opposing anything that imposes participation standards. JoelleJay (talk) 22:54, 7 December 2024 (UTC)
- There will be many cases where it is not possible to say whether a piece of text does or does not contain "human thought" by observing the text, even if you know it was generated by an LLM. Statements like "originated through generative AI tools rather than human thought" will miss a large class of use cases, a class that will probably grow over the coming years. People work with LLMs to produce the output they require. It is often an iterative process by necessity because people and models make mistakes. An example of when "...rather than human thought" is not the case is when someone works with an LLM to solve something like a challenging technical problem where neither the person or the model has a satisfactory solution to hand. The context window means that, just like with human collaborators, a user can iterate towards a solution through dialog and testing, exploring the right part of the solution space. Human thought is not absent in these cases, it is present in the output, the result of a collaborative process. In these cases, something "far beyond what is found in the user's input" is the objective, it seems like a legitimate objective, but regardless, it will happen, and we won't be able to see it happening. Sean.hoyland (talk) 10:46, 7 December 2024 (UTC)
- Yes, but this proposal is supposed to apply to just the obvious cases and will hopefully discourage good-faith users from using LLMs to create comments wholesale in general. It can be updated as technology progresses. There's also no reason editors using LLMs to organize/validate their arguments, or as search engines for whatever, have to copy-paste their raw output, which is much more of a problem since it carries a much higher chance of hallucination. That some people who are especially familiar with how to optimize LLM use, or who pay for advanced LLM access, will be able to deceive other editors is not a reason to not formally proscribe wholesale comment generation. JoelleJay (talk) 22:27, 7 December 2024 (UTC)
- That's reasonable. I can get behind the idea of handling obvious cases from a noise reduction perspective. But for me, the issue is noise swamping signal in discussions rather than how it was generated. I'm not sure we need a special rule for LLMs, maybe just a better way to implement the existing rules. Sean.hoyland (talk) 04:14, 8 December 2024 (UTC)
- 70+% here understand the intention matches the language: generated by LLMs etc means "originated through generative AI tools rather than human thought", not "some kind of AI was involved in any step of the process". Even LLM translation tools don't actually create meaningful content where there wasn't any before; the generative AI aspect is only in the use of their vast training data to characterize the semantic context of your input in the form of mathematical relationships between tokens in an embedding space, and then match it with the collection of tokens most closely resembling it in the other language. There is, definitionally, a high level of creative constraint in what the translation output is since semantic preservation is required, something that is not true for text generation. JoelleJay (talk) 04:01, 7 December 2024 (UTC)
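(Illustrative aside: a minimal sketch of the distinction described in the comment above, using a toy vocabulary and made-up two-dimensional "embeddings" rather than any real translation system. The point is only that a translation-style model is constrained to pick the closest existing target-language token for its input, while a generative model samples from a learned distribution and can introduce material not present in the input.)
<syntaxhighlight lang="python">
import numpy as np

# Toy "embedding space": a handful of target-language tokens with made-up vectors.
# A real system learns these from data; the numbers here are purely illustrative.
target_vocab = {
    "chat":   np.array([0.9, 0.1]),   # French for "cat"
    "chien":  np.array([0.1, 0.9]),   # French for "dog"
    "maison": np.array([0.5, 0.5]),   # French for "house"
}

def translate_like(source_vec):
    """Translation-style behaviour: pick the existing target token whose
    embedding is closest to the input. Output is constrained by the input."""
    return max(target_vocab, key=lambda tok: float(source_vec @ target_vocab[tok]))

def generate_like(rng):
    """Generation-style behaviour: sample a token from a learned distribution.
    Output need not correspond to anything in the input at all."""
    tokens = list(target_vocab)
    probs = [0.2, 0.3, 0.5]  # made-up model probabilities
    return rng.choice(tokens, p=probs)

rng = np.random.default_rng(0)
cat_vec = np.array([0.88, 0.12])      # pretend embedding of the English word "cat"
print(translate_like(cat_vec))        # -> "chat": nearest match, meaning preserved
print(generate_like(rng))             # -> whichever token the sampler draws
</syntaxhighlight>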
- That might be what the proposal intends to cover, but it is not what the proposal actually covers. The proposal covers all comments that have been generated by LLMs and/or AI, without qualification. Thryduulf (talk) 01:05, 7 December 2024 (UTC)
- Support "I Am Not A ChatBot; I Am A Free Misplaced Pages Editor!" Martinevans123 (talk) 18:30, 6 December 2024 (UTC)
- Comment: The original question was whether we should discount, ignore, strikethrough, or collapse chatbot-written content. I think there's a very big difference between these options, but most support !voters haven't mentioned which one(s) they support. That might make judging the consensus nearly impossible; as of now, supporters are the clear !majority, but supporters of what? — ypn^2 19:32, 6 December 2024 (UTC)
- That means that supporters support the proposal that "LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner". Not sure what the problem is here. Supporters support the things listed in the proposal - we don't need a prescribed 100% strict procedure, it just says that supporters would be happy with closers discounting, ignoring or under some circumstances deleting LLM content in discussions. BugGhost 🦗👻 19:40, 6 December 2024 (UTC)
- Doing something? At least the stage could be set for a follow on discussion. Selfstudier (talk) 19:40, 6 December 2024 (UTC)
- More people have bolded "support" than other options, but very few of them have even attempted to refute the arguments against (and most that have attempted have done little more than handwaving or directly contradicting themselves), and multiple of those who have bolded "support" do not actually support what has been proposed when you read their comment. It's clear to me there is not going to be a consensus for anything other than "many editors dislike the idea of LLMs" from this discussion. Thryduulf (talk) 00:58, 7 December 2024 (UTC)
- Arguing one point doesn't necessarily require refuting every point the other side makes. I can concede that "some people use LLMs to improve their spelling and grammar" without changing my overriding view that LLMs empower bad actors, time wasters and those with competence issues, with very little to offer wikipedia in exchange. Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first. BugGhost 🦗👻 09:00, 7 December 2024 (UTC)
- If you want to completely ignore all the other arguments in opposition that's your choice, but don't expect closers to attach much weight to your opinions. Thryduulf (talk) 09:05, 7 December 2024 (UTC)
- Ok, here's a list of the main opposition reasonings, with individual responses.
- What about translations? - Translations are not up for debate here; the topic here is very clearly generative AI, and attempts to say that this topic covers translations as well are incorrect. No support voters have said the proposition should discount translated text, just oppose voters who are trying to muddy the waters.
- What about accessibility? - This could be a legitimate argument, but I haven't seen it substantiated anywhere other than handwaving "AI could help people!" arguments, which I would lump into the spelling and grammar argument I responded to above.
- Detection tools are inaccurate - This I very much agree with, and noted in my support and in many others as well. But there is no clause in the actual proposal wording that mandates the use of automated AI detection, and I assume the closer would note that.
- False positives - Any rule can have a potential for false positives, from wp:DUCK to close paraphrasing to NPA. We've just got to as a community become skilled at identifying genuine cases, just like we do for every other rule.
- LLM content should be taken at face value and see if it violates some other policy - hopelessly naive stance, and a massive timesink. Anyone who has had the misfortune of going on X/twitter in the last couple of years should know that AI is not just used as an aid for those who have trouble typing; it is mainly used to spam, disrupt discussion, and astroturf political opinions. Anyone who knows how bad the sockpuppetry issue is around CTOPs should be absolutely terrified of when (not if) someone decides to launch a full throated wave of AI bots on Misplaced Pages discussions, because if we have to individually sanction each one like a human then admins will literally have no time for anything else.
- I genuinely cannot comprehend how some people could see how AI is decimating the internet through spam, bots and disinformation and still think for even one second that we should open the door to it. BugGhost 🦗👻 10:08, 7 December 2024 (UTC)
- There is no door. This is true for sockpuppetry too in my opinion. There can be a rule that claims there is a door, but it is more like a bead curtain. Sean.hoyland (talk) 11:00, 7 December 2024 (UTC)
- The Twitter stuff is not a good comparison here. Spam is already nukable on sight, mass disruptive bot edits are also nukable on sight, and it's unclear how static comments on Misplaced Pages would be the best venue to astroturf political opinions (most of which would be off-topic anyway, i.e., nukable on sight). I'd prefer if people didn't use ChatGPT to formulate their points, but if they're trying to formulate a real point then that isn't disruptive in the same way spam is. Gnomingstuff (talk) 02:22, 10 December 2024 (UTC)
- "it's unclear how static comments on Misplaced Pages would be the best venue to astroturf political opinions" - by disrupting RFCs and talk page discussions a bad actor could definitely use chatgpt to astroturf. A large proportion of the world uses Misplaced Pages (directly or indirectly) to get information - it would be an incredibly valuable thing to manipulate. My other point is that AI disruption bots (like the ones on twitter) would be indistinguishable from individuals using LLMs to "fix" spelling and grammar - by allowing one we make the other incredibly difficult to identify. How can you tell the difference between a bot and someone who just uses chatgpt for every comment? BugGhost 🦗👻 09:16, 10 December 2024 (UTC)
- You can't. That's the point. This is kind of the whole idea of WP:AGF. Gnomingstuff (talk) 20:22, 13 December 2024 (UTC)
- "Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first." Social anxiety: Say "I" am a person unconfident in my writing. I imagine that when I post my raw language, I embarrass myself, and my credibility vanishes, while in the worst case nobody understands what I mean. As bad confidence is often built up through negative feedback, it's usually meritful or was meritful at some point for someone to seek outside help. Aaron Liu (talk) 23:46, 8 December 2024 (UTC)
- While I sympathise with that hypothetical, Misplaced Pages isn't therapy and we shouldn't make decisions that do long-term harm to the project just because a hypothetical user feels emotionally dependent on a high tech spellchecker. I also think that in general wikipedia (myself included) is pretty relaxed about spelling and grammar in talk/WP space. BugGhost 🦗👻 18:45, 10 December 2024 (UTC)
- We also shouldn't do long-term harm to the project just because a few users are wedded to the idea that LLMs are and will always be some sort of existential threat. The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project than LLM-comments that are all either useful, harmless or collapsible/removable/ignorable at present. Thryduulf (talk) 19:06, 10 December 2024 (UTC)
- "The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project" - the same could be said for WP:DUCK. The reason why it's not a big problem for DUCK is that the confidence level is very high. Like I've said in multiple other comments, I don't think "AI detectors" should be trusted, and the bar for deciding whether something was created via LLM should be very high. I 100% understand your opinion and the reasoning behind it, I just think we have differing views on how well the community at large can identify AI comments. BugGhost 🦗👻 09:07, 11 December 2024 (UTC)
- I don't see how allowing shy yet avid users to contribute has done or will do long-term harm. The potential always outweighs rational evaluation of outcomes for those with anxiety, a condition that is not behaviorally disruptive. Aaron Liu (talk) 02:47, 11 December 2024 (UTC)
- I definitely don't want to disallow shy yet avid users! I just don't think having a "using chatgpt to generate comments is allowed" rule is the right solution to that problem, considering the wider consequences. BugGhost 🦗👻 08:52, 11 December 2024 (UTC)
- Did you mean "... disallowed"? If so, I think we weigh accessibility differently against the quite low amount of AI trolling. Aaron Liu (talk) 14:10, 11 December 2024 (UTC)
- Support strikethroughing or collapsing per everyone else. The opposes that mention ESL have my sympathy, but I am not sure how many of them are ESL themselves. Having learnt English as my second language, I have always found it easier to communicate when users are expressing things in their own way, not polished by some AI. I sympathise with the concerns and believe the right solution is to lower our community standards with respect to WP:CIR and similar (in terms of ESL communication) without risking hallucinations by AI. Soni (talk) 02:52, 7 December 2024 (UTC)
- Oppose the use of AI detection tools. False positive rates for AI-detection are dramatically higher for non-native English speakers. AI detection tools had a 5.1% false positive rate for human-written text from native English speakers, but human-written text from non-native English speakers had a 61.3% false positive rate. ~ F4U (talk • they/it) 17:53, 8 December 2024 (UTC)
=== Section break 5 ===
- Oppose - I'm sympathetic to concerns of abuse through automated mass-commenting, but this policy looks too black-and-white. Contributors may use LLMs for many reasons, including to fix the grammar, to convey their thoughts more clearly, or to adjust the tone for a more constructive discussion. As it stands, this policy may lead to dismissing good-faith AI-assisted comments, as well as false positives, without considering the context. Moreover, while mainstream chatbots are not designed to just mimic the human writing style, there are existing tools that can make AI-generated text more human-like, so this policy does not offer that much protection against maliciously automated contributions. Alenoach (talk) 01:12, 9 December 2024 (UTC)
- Oppose What @Alenoach said is correct: basic AI tools like Grammarly should be allowed (note that Grammarly can also change sentences and wording and has generative AI tools), but just blatantly asking ChatGPT to generate an administrator report is different. Rc2barrington (talk) 04:46, 1 January 2025 (UTC)
- Oppose – Others have cast doubt on the efficacy of tools capable of diagnosing LLM output, and I can't vouch for its being otherwise. If EEng's example of ChatBot output is representative—a lengthy assertion of notability without citing sources—that is something that could well be disregarded whether it came from a bot or not. If used carefully, AI can be useful as an aide-memoire (such as with a spell- or grammar-checker) or as a supplier of more felicitous expression than the editor is naturally capable of (e.g. Google Translate). Dhtwiki (talk) 10:27, 9 December 2024 (UTC)
- Comment / Oppose as written. It's not accurate that GPTZero is good at detecting AI-generated content. Citations (slightly out of date but there's little reason to think things have changed from 2023): https://www.aiweirdness.com/writing-like-a-robot/ , https://www.aiweirdness.com/dont-use-ai-detectors-for-anything-important/ . For those too busy to read, a few choice quotes: "the fact that it insisted even one excerpt is not by a human means that it's useless for detecting AI-generated text," and "Not only do AI detectors falsely flag human-written text as AI-written, the way in which they do it is biased" (citing https://arxiv.org/abs/2304.02819 ). Disruptive, worthless content can already be hatted, and I'm not opposed to doing so. Editors should be sharply told to use their own words, and if not already written, an essay saying we'd rather have authentic if grammatically imperfect comments than AI-modulated ones would be helpful to cite at editors who offer up AI slop. But someone merely citing GPTZero is not convincing. GPTZero will almost surely misidentify genuine commentary as AI-generated. So fine with any sort of reminder that worthless content can be hatted, and fine with a reminder not to use ChatGPT for creating Misplaced Pages talk page posts, but not fine with any recommendations of LLM-detectors. SnowFire (talk) 20:00, 9 December 2024 (UTC)
- @SnowFire, I can't tell if you also oppose the actual proposal, which is to permit hatting/striking obvious LLM-generated comments (using GPTzero is a very minor detail in JSS's background paragraph, not part of the proposal). JoelleJay (talk) 01:47, 11 December 2024 (UTC)
- I support the proposal in so far as disruptive comments can already be hatted and that LLM-generated content is disruptive. I am strongly opposed to giving well-meaning but misguided editors a license to throw everyone's text into an AI-detector and hat the comments that score poorly. I don't think it was that minor a detail, and to the extent that detail is brought up, it should be as a reminder to use human judgment and forbid using alleged "AI detectors" instead. SnowFire (talk) 03:49, 11 December 2024 (UTC)
- Support collapsing AI (specifically, Large language model) comments by behavioral analysis (most actually disruptive cases I've seen are pretty obvious) and not the use of inaccurate tools like ZeroGPT. I think hatting with the title "Editors suspect that this comment has been written by a Large language model" is appropriate. They take up SO much space in a discussion because they are also unnecessarily verbose, and talk on and on but never ever say something that even approaches having substance. Discussions are for human Misplaced Pages editors; we shouldn't have to sift through comments someone put 0 effort into and outsourced to a robot that writes using random numbers (that's a major part of how tools like ChatGPT work and maintain variety). If someone needs to use an AI chatbot to communicate because they don't understand English, then they are welcome to contribute to their native language Misplaced Pages, but I don't think they have the right to insist that we at enwiki spend our effort reading comments they put minimal effort into besides opening the ChatGPT website. If really needed, they can write in their native language and use a non-LLM tool like Google Translate. The use of non-LLM tools like Grammarly, Google Translate, etc. I think should still be OK for all editors, as they only work off comments that editors have written themselves. MolecularPilot 05:10, 10 December 2024 (UTC)
- Adding that enforcing people writing things in their own words will actually help EAL (English additional language) editors contribute here. I work with EAL people irl, and even people who have almost native proficiency with human-written content find AI output confusing because it says things in the most confusing, verbose ways using difficult sentence constructions and words. I've seen opposers in this discussion who maybe haven't had experience working with EAL people go "what about EAL people?", but really, I think this change will help them (open to being corrected by someone who is EAL, tho). MolecularPilot 05:17, 10 December 2024 (UTC)
- Also, with regards to oppose comments that discussions are not a vote so closes will ignore AI statements which don't have merit - unedited LLM statements are incredibly verbose and annoying, and clog up the discussion. Imagine multiple paragraphs, each with a heading, but all of which say almost nothing, they're borderline WP:BLUDGEONy. Giving the power to HAT them will help genuine discussion contributors keep with the flow of human arguments and avoid scaring away potential discussion contributors who are intimidated or don't feel they have the time to read the piles of AI nonsense that fill the discussion. MolecularPilot 06:38, 10 December 2024 (UTC)
- Support (removing) in general. How is this even a question? There is no case-by-case. It is a fundamental misunderstanding of how LLMs work to consider their output reliable without careful review. At which point, the editor could have written it themselves without inherent LLM bias. The point of any discussion is to provide analytical response based on the context, not have some tool regurgitate something from a training set that sounds good. And frankly, it is disrespectful to make someone read "AI" responses. It is a tool and there is a place and time for it, but not in discussions in an encyclopedia. — HELLKNOWZ ∣ TALK 15:41, 10 December 2024 (UTC)
- Strong Support. I'm very interested in what you (the generic you) have to say about something. I'm not remotely interested in what a computer has to say about something. It provides no value to the discussion and is a waste of time. Useight (talk) 18:06, 10 December 2024 (UTC)
- Comments that provide no value to the discussion can already be hatted and ignored regardless of why they provide no value, without any of the false positive or false negatives inherent in this proposal. Thryduulf (talk) 18:25, 10 December 2024 (UTC)
- Indeed, and that's fine for one-offs when a discussion goes off the rails or what-have-you. But we also have WP:NOTHERE for disruptive behavior, not working collaboratively, etc. I'm suggesting that using an AI to write indicates that you're not here to build the encyclopedia, you're here to have an AI build the encyclopedia. I reiterate my strong support for AI-written content to be removed, struck, collapsed, or hatted and would support further measures even beyond those. Useight (talk) 21:54, 11 December 2024 (UTC)
- There are two sets of people described in your comment: those who use AI and those who are NOTHERE. The two sets overlap, but nowhere near sufficiently to declare that everybody in the former set are also in the latter set. If someone is NOTHERE they already can and should be blocked, regardless of how they evidence that. Being suspected of using AI (note that the proposal does not require proof) is not sufficient justification on its own to declare someone NOTHERE, per the many examples of constructive use of AI already noted in this thread. Thryduulf (talk) 22:03, 11 December 2024 (UTC)
- To reiterate, I don't believe that any use of AI here is constructive, thus using it is evidence of WP:NOTHERE, and, therefore, the set of people using AI to write is completely circumscribed within the set of people who are NOTHERE. Please note that I am referring to users who use AI-generated writing, not users suspected of using AI-generated writing. I won't be delving into how one determines whether someone is using AI or how accurate it is, as that is, to me, a separate discussion. This is the end of my opinion on the matter. Useight (talk) 23:26, 11 December 2024 (UTC)
- You are entitled to your opinion of course, but as it is contradicted by the evidence of both multiple constructive uses and of the near-impossibility of reliably detecting LLM-generated text without false positives, I would expect the closer of this discussion to attach almost no weight to it. Thryduulf (talk) 00:42, 12 December 2024 (UTC)
- I am ESL and use LLMs sometimes because of that. I feel like I don't fit into the NOTHERE category. It seems like you do not understand what they are or how they can be used constructively. PackMecEng (talk) 01:43, 12 December 2024 (UTC)
- No, I understand. What you're talking about is no different from using Google Translate or asking a native-speaker to translate it. You, a human, came up with something you wanted to convey. You wrote that content in Language A. But you wanted to convey that message that you - a human - wrote, but now in Language B. So you had your human-written content translated to Language B. I have no qualms with this. It's your human-written content, expressed in Language B. My concern is with step 1 (coming up with something you want to convey), not step 2 (translating that content to another language). You write a paragraph for an article but it's in another language and you need the paragraph that you wrote translated? Fine by me. You ask an AI to write a paragraph for an article? Not fine by me. Again, I'm saying that there is no valid use case for AI-written content. Useight (talk) 15:59, 12 December 2024 (UTC)
- It seems very likely that there will be valid use cases for AI-written content if the objective is maximizing quality and minimizing errors. Research like this demonstrates that there will likely be cases where machines outperform humans in specific Misplaced Pages domains, and soon. But I think that is an entirely different question than potential misuse of LLMs in consensus related discussions. Sean.hoyland (talk) 16:25, 12 December 2024 (UTC)
- But your vote and the proposal above make no distinction there. Which is the main issue. Also, not to be pedantic, but every prompt to an LLM is filled out by a human looking to convey a message. Every time someone hits publish on something here it is that person confirming that is what they are saying. So how do we in practice implement what you suggest? Because without a method better than vibes it's worthless. PackMecEng (talk) 18:53, 12 December 2024 (UTC)
- The proposal specifies content generated by LLMs, which has a specific meaning in the context of generative AI. If a prompt itself conveys a meaningful, supported opinion, why not just post that instead? The problem comes when the LLM adds more information than was provided, which is the whole point of generative models. JoelleJay (talk) 01:52, 13 December 2024 (UTC)
- Yes in principle. But in practice, LLM detectors are not foolproof, and there are valid reasons to sometimes use an LLM, for example to copyedit. I have used Grammarly before and have even used the Microsoft Editor, and while they aren't powered by LLMs, LLMs are a tool that needs to be used appropriately on Misplaced Pages. Awesome Aasim 19:55, 10 December 2024 (UTC)
- Support. Using an LLM to reply to editors is lazy and disrespectful of fellow editors' time and brainpower. In the context of AFD, it is particularly egregious since an LLM can't really read the article, read sources, or follow our notability guidelines. By the way, "gptzero and other such tools are very good at detecting this" - I don't think this is correct at all. I believe the false positive rate for AI detectors is quite high. High enough that I would recommend not using AI detectors. –Novem Linguae (talk) 03:23, 11 December 2024 (UTC)
- Question @Just Step Sideways: Since there appears to be a clear consensus against the AI-detectors part, would you like to strike that from the background? Aaron Liu (talk) 14:10, 11 December 2024 (UTC)
- Support. AI generated text should be removed outright. If you aren't willing to put the work into doing your own writing then you definitely haven't actually thought deeply about the matter at hand. User1042💬✒️ 14:16, 11 December 2024 (UTC)
- This comment is rather ironic given that it's very clear you haven't thought deeply about the matter at hand, because if you had then you'd realise that it's actually a whole lot more complicated than that. Thryduulf (talk) 14:26, 11 December 2024 (UTC)
- Thryduulf I don't think this reply is particularly helpful, and it comes off as slightly combative. It's also by my count your 24th comment on this RFC. BugGhost 🦗👻 19:20, 11 December 2024 (UTC)
- I recognize that AI-paraphrased or -edited text is not problematic in the same ways as text generated outright by an AI. I only meant to address the core issue at stake, content whose first draft was written by an AI system. User1042💬✒️ 22:16, 17 December 2024 (UTC)
- Oppose @Just Step Sideways: The nomination's 2nd para run through https://www.zerogpt.com/ gives "11.39% AI GPT*": "I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, gptzero and other such tools are very good at detecting this. I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner." The nomination's linked https://gptzero.me/ site previously advertised https://undetectable.ai/ - how will we deal with that? Imagine the nomination was at AFD. What should be the response to LLM accusations against the highlighted sentence? 172.97.141.219 (talk) 17:41, 11 December 2024 (UTC)
- Support with the caveat that our ability to deal with the issue goes only as far as we can accurately identify the issue (this appears to have been an issue raised across a number of the previous comments, both support and oppose, but I think it bears restating because we're approaching this from a number of different angles and it's IMO the most important point regardless of what conclusions you draw from it). Horse Eye's Back (talk) 19:24, 11 December 2024 (UTC)
- Strong support, limited implementation. "Misplaced Pages is written by volunteer editors", says our front page. This is who we are, and our writing is what Misplaced Pages is. It's true that LLM-created text can be difficult to identify, so this may be a bit of a moving target, and we should be conservative in what we remove—but I'm sure at this point we've all run across cases (whether here or elsewhere in our digital lives) where someone copy/pastes some text that includes "Is there anything else I can help you with?" at the end, or other blatant tells. This content should be deleted without hesitation. Retswerb (talk) 04:11, 12 December 2024 (UTC)
- Support in concept, questions over implementation — I concur with Dronebogus that users who rely on LLMs should not edit English Misplaced Pages. It is not a significant barrier for users to use other means of communication, including online translators, rather than artificial intelligence. How can an artificial intelligence tool argue properly? However, I question how this will work in practice without an unacceptable degree of error. elijahpepe@wikipedia (he/him) 22:39, 12 December 2024 (UTC)
- Many, possibly most, online translators use artificial intelligence based on LLMs these days. Thryduulf (talk) 22:46, 12 December 2024 (UTC)
- There is a difference between translating words you wrote in one language into English and using an LLM to write a comment for you. elijahpepe@wikipedia (he/him) 22:59, 12 December 2024 (UTC)
- Neither your comment nor the original proposal make any such distinction. Thryduulf (talk) 23:34, 12 December 2024 (UTC)
- Well since people keep bringing this up as a semi-strawman: no I don’t support banning machine translation, not that I encourage using it (once again, if you aren’t competent in English please don’t edit here) Dronebogus (talk) 07:34, 13 December 2024 (UTC)
- LLMs are incredible at translating, and many online translators already incorporate them, including Google Translate. Accommodating LLMs is an easy way to support not only the ESL but also the avid but shy. It has way more benefits than the unseen-to-me amount of AI trolling that isn't already collapse-on-sight. Aaron Liu (talk) 00:05, 13 December 2024 (UTC)
- Google Translate uses the same transformer architecture that LLMs are built around, and uses e.g. PaLM to develop more language support (through training that enables zero-shot capabilities) and for larger-scale specialized translation tasks performed through the Google Cloud "adaptive translation" API, but it does not incorporate LLMs into translating your everyday text input, which still relies on NMTs. And even for the API features, the core constraint of matching input rather than generating content is still retained (obviously it would be very bad for a translation tool to insert material not found in the original text!). LLMs might be good for translation because they are better at evaluating semantic meaning and detecting context and nuance, but again, the generative part that is key to this proposal is not present. JoelleJay (talk) 01:20, 13 December 2024 (UTC)
- "PaLM (Pathways Language Model) is a 540 billion-parameter transformer-based large language model (LLM) developed by Google AI." If you meant something about how reschlmunking the outputs of an LLM or using quite similar architecture is not really incorporating the LLM, I believe we would be approaching Ship of Theseus levels of recombination, to which my answer is it is the same ship. "obviously it would be very bad for a translation tool to insert material not found in the original text!" That happens! Aaron Liu (talk) 01:29, 13 December 2024 (UTC)
- PaLM2 is not used in the consumer app (Google Translate), it's used for research. Google Translate just uses non-generative NMTs to map input to its closest cognate in the target language. JoelleJay (talk) 01:34, 13 December 2024 (UTC)
- Well, is the NMT really different enough not to be classified as an LLM? IIRC the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be, and an LLM I asked agreed that NMTs satisfy the definition of a generative LLM, though I think you're the expert here. Aaron Liu (talk) 02:01, 13 December 2024 (UTC)
- Google Translate's NMT hits different enough to speak English much less naturally than ChatGPT 4o. I don't consider it an LLM, because the param count is 380M not 1.8T. "the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be" - No, that def would fit ancient RNN tech too. 172.97.141.219 (talk) 17:50, 13 December 2024 (UTC)
- Even if you don't consider it L, I do, and many sources cited by the article do. Since we'll have such contesting during enforcement, it's better to find a way that precludes such controversy. Aaron Liu (talk) 20:44, 13 December 2024 (UTC)
- NMTs, LLMs, and the text-creation functionality of LLMs are fundamentally different in the context of this discussion, which is about content generated through generative AI. NMTs are specifically for translation: they are trained on parallel corpora and their output is optimized to match the input as precisely as possible, not to create novel text. LLMs have different training, including way more massive corpora, and were designed specifically to create novel text. One of the applications of LLMs may be translation (though currently it's too computationally intensive to run them for standard consumer purposes), by virtue of their being very good at determining semantic meaning, but even if/when they do become mainstream translation tools, what they'll be used for is still not generative when it comes to translation output. JoelleJay (talk) 22:29, 13 December 2024 (UTC)
- How will you differentiate between the use of LLM for copyediting and the use of LLM for generation? Aaron Liu (talk) 23:30, 13 December 2024 (UTC)
- The proposal is for hatting obvious cases of LLM-generated comments. Someone who just uses an LLM to copyedit will still have written the content themselves and presumably their output would not have the obvious tells of generative AI. JoelleJay (talk) 23:56, 13 December 2024 (UTC)
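(Illustrative aside: a minimal sketch of the "predict the next token, one at a time" loop referred to in the exchange above. The next_token_probs function is a made-up stand-in for any autoregressive model; the surrounding loop looks the same whether the model behind it is an old RNN or a modern transformer, which is why that loose definition on its own does not separate LLMs from earlier architectures.)
<syntaxhighlight lang="python">
import random

VOCAB = ["the", "cat", "sat", "down", "."]

def next_token_probs(context):
    """Stand-in for any autoregressive model (RNN, transformer, ...).
    Returns a probability for each candidate next token given the context.
    The numbers are invented; a real model computes them from the context."""
    weights = [0.1, 0.25, 0.3, 0.25, 0.1]
    return dict(zip(VOCAB, weights))

def generate(prompt_tokens, max_new_tokens=5, seed=0):
    """Autoregressive decoding: repeatedly sample one token and append it."""
    rng = random.Random(seed)
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        choices = list(probs.keys())
        weights = list(probs.values())
        tokens.append(rng.choices(choices, weights=weights, k=1)[0])
    return tokens

print(generate(["the", "cat"]))  # e.g. ['the', 'cat', 'sat', ...]
</syntaxhighlight>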
- Not when I tried to use it. Quantitatively, GPTZero went from 15% human to 100% AI for me despite the copyedits only changing 14 words. Aaron Liu (talk) 00:33, 14 December 2024 (UTC)
- I think there is consensus that GPTZero is not usable, even for obvious cases. JoelleJay (talk) 00:55, 14 December 2024 (UTC)
- Yes, but going as far as 100% means people will also probably think the rewrite is ChatGPT-generated. Aaron Liu (talk) 01:18, 14 December 2024 (UTC)
- Does it really mean that? All you've demonstrated is that GPTZero has false positives, which is exactly why its use here was discouraged. jlwoodwa (talk) 05:26, 14 December 2024 (UTC)
- My subjective evaluation of what I got copyediting from ChatGPT was that it sounded like ChatGPT. I used GPTZero to get a number. Aaron Liu (talk) 14:18, 14 December 2024 (UTC)
- My guess is that the copyediting went beyond what most people would actually call "copyediting". JoelleJay (talk) 18:04, 23 December 2024 (UTC)
- It changed only 14 words across two paragraphs and still retained the same meaning in a way that I would describe it as copyediting. Such levels of change are what those lacking confidence in tone would probably seek anyways. Aaron Liu (talk) 00:15, 24 December 2024 (UTC)
- On one hand, AI slop is a plague on humanity and obvious LLM output should definitely be disregarded when evaluating consensus. On the other hand, I feel like existing policy covers this just fine, and any experienced closer will lend greater weight to actual policy-based arguments, and discount anything that is just parroting jargon. WindTempos they 23:21, 12 December 2024 (UTC)
- Support in principle, but we cannot rely on any specific tools because none are accurate enough for our needs. Whenever I see a blatant ChatGPT-generated !vote, I ignore it. They're invariably poorly reasoned and based on surface-level concepts rather than anything specific to the issue being discussed. If someone is using AI to create their arguments for them, it means they have no actual argument besides WP:ILIKEIT and are looking for arguments that support their desired result rather than coming up with a result based on the merits. Also, toasters do not get to have an opinion. The Wordsmith 05:17, 13 December 2024 (UTC)
- Oppose. For creating unnecessary drama. First off, the "detector" of the AI bot is not reliable, or at least the reliability of the tool itself is still questionable. If the tool to detect LLM text is itself unreliable, how can one reliably point out which comment is LLM and which one is not? We have multiple tools that claim to be able to detect LLM text as well. Which one should we trust? Should we be elevating one tool over the others? Has there been any research showing that the "picked" tool is the most reliable? Second, not all LLMs are dangerous. We shouldn't treat LLMs as a virus that will somehow take over the Internet or something. Some editors use LLMs to smooth out their grammar and sentences and fix up errors, and there is nothing wrong with that. I understand that banning obvious LLM text per WP:DUCK is good, but totally banning them is plain wrong. ✠ SunDawn ✠ (contact) 22:56, 15 December 2024 (UTC)
- @SunDawn, the proposal is to permit editors to collapse/strike obvious LLM text, not to "ban LLM totally". If LLM use is imperceptible, like for tweaking grammar, it's not going to be affected. JoelleJay (talk) 20:17, 19 December 2024 (UTC)
- Support with some kind of caveat about not relying on faulty tools or presuming that something is LLM without evidence or admission, based on the following reasons:
- We have stricter rules around semi-automated editing (rollback, AutoWikiBrowser, etc.) and even stricter rules around fully automated bot editing. These cleanup edits are widely accepted as positive, but there is still the concern about an overwhelming amount of bad edits to wade through and/or fix. A form of that concern is relevant here. Someone could reply to every post in this discussion in just a minute or so without ever reading anything. That's inherently disruptive.
- Nobody who is voting "oppose" is using an LLM to cast that vote. The LLM comments have been left by those supporting to make a point about how problematic they are for discussions like this. I think this reflects, even among oppose voters, a developing community consensus that LLM comments will be disregarded.
- If the rule in practice is to disregard LLM comments, not writing that rule down does not stop it from being the rule, consensus, or a community norm. It just makes the rule less obvious and less clear.
- It's disrespectful for an editor to ask someone to spend their time reading a comment if they couldn't be bothered to spend any time writing it, and therefore a violation of the policy Misplaced Pages:Civility, "treat your fellow editors as respected colleagues with whom you are working on an important project."
- Also, I don't read the proposal as a ban on machine translation in any way. Rjj (talk) 00:01, 18 December 2024 (UTC)
- @Rjjiii, above @Dilettante said their !vote was created by LLM. JoelleJay (talk) 20:14, 19 December 2024 (UTC)
- I am strongly opposed to banning or ignoring LLM-made talk page comments just because they are LLM-made. I'm not a big fan of LLMs at all; they are actually useful only for certain things, very few of which are directly relevant to contributing to Misplaced Pages in English or in any other language. However, some of those things are useful for this, at least for some humans, and I don't want to see these humans being kicked out of the English Misplaced Pages. I already witnessed several cases in which people whose first language is not English tried writing talk page responses in the English Misplaced Pages, used an LLM to improve their writing style, and got their responses ignored only because they used an LLM. In all those cases, I had strong reasons to be certain that they were real humans, that they meant what they wrote, and that they did it all in good faith. Please don't say that anyone who wants to contribute to the English Misplaced Pages should, in the first place, know English well enough to write a coherent talk page comment without LLM assistance; occasionally, I kind of wish that it was like that myself, but then I recall that the world is more complicated and interesting than that. Uses of LLMs that help the English Misplaced Pages be more inclusive for good-faith people are good. Of course, defining what good faith means is complicated, but using an LLM is not, by itself, a sign of bad faith. --Amir E. Aharoni (talk) 04:52, 19 December 2024 (UTC)
- Those concerned about their English should use translation software rather than an llm. Both might alter the meaning to some extent, but only one will make things up. (It's also not a sure assumption that llm text is coherent talkpage text.) CMD (talk) 07:44, 19 December 2024 (UTC)
- @CMD The dividing line between translation software and LLM is already blurry and will soon disappear. It's also rare that translation software results in coherent talkpage text, unless it's relying on some (primitive) form of LLM. So if we're going to outlaw LLMs, we would need to outlaw any form of translation software, and possibly any text-to-speech software as well. ypn^2 23:41, 19 December 2024 (UTC)
- The distinctions have already been covered above, and no we would not have to. There is an obvious difference between software intended to translate and software intended to generate novel text, and users are likely to continue to treat those differently. CMD (talk) 02:49, 20 December 2024 (UTC)
- Strong support. LLM-generated content has no place anywhere on the encyclopedia. Stifle (talk) 10:27, 19 December 2024 (UTC)
- Strong oppose to the proposal as written. Misplaced Pages already suffers from being stuck in a 2001 mindset and a refusal to move with the technological times. Anyone who remembers most Wikipedians' visceral reaction to FLOW and VisualEditor when they were first introduced will observe a striking similarity. Yes, those projects had serious problems, as do LLM-generated comments. But AI is the future, and this attitude of "Move slowly to avoid changing things" will ultimately lead Misplaced Pages the way of Encyclopædia Britannica. Our discussion needs to be how best to change, not how to avoid to change. ypn^2 23:54, 19 December 2024 (UTC)
- The main objection to VE and a major objection to FLOW was the developers' insistence on transforming Wikitext to HTML for editing and then transforming that back to Wikitext. Aaron Liu (talk) 01:31, 20 December 2024 (UTC)
- True. Then, as now, there were many valid objections. But IIRC, there was limited discussion of "Let's figure out a better way to improve", and lots of "Everything is fine; don't change anything, ever." That attitude concerns me. ypn^2 01:52, 20 December 2024 (UTC)
- I oppose the proposal but I think the comparison to FLOW and VisualEditor is beyond silly. Those things did not exist outside of a MediaWiki context. LLMs are a global issue. Gnomingstuff (talk) 17:11, 28 December 2024 (UTC)
- Support. I'm not even slightly swayed by these "it'll be too hard to figure out" and "mistakes could be made" and "we can't be 100% certain" sorts of arguments. That's true of everything around here, and it's why we have an admins-must-earn-a-boatload-of-community-trust system, and a system of review/appeal of decisions they (or of course non-admin closers) make, and a consensus-based decisionmaking system more broadly. JoelleJay has it exactly right:
having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments.
And as pointed out by some others, the "it'll hurt non-native-English speakers" nonsense is, well, nonsense; translation is a different and unrelated process (though LLMs can perform it to some extent), of remapping one's own material onto another language. I'm also not in any way convinced by the "people poor at writing and other cognitive tasks need the LLM to help them here" angle, because WP:COMPETENCE is required. This is work (albeit volunteer work), it is WP:NOT a game, a social-media playground, a get-my-ideas-out-there soapbox, or a place to learn how to interact e-socially or pick up remedial writing skills, nor a venue for practicing one's argument techniques. It's an encyclopedia, being built by people who – to be productive contributors instead of a draining burden on the entire community – must have: solid reasoning habits, great judgement (especially in assessing reliability of claims and the sources making them), excellent writing skills of a highly particularized sort, a high level of fluency in this specific language (in multiple registers), and a human-judgement ability to understand our thick web of policies, guidelines, procedures, and often unwritten norms, and how they all interact, in a specific contextual way that may vary greatly by context. None of these is optional. An LLM cannot do any of them adequately (not even write well; their material sticks out like a sore thumb, and after a while you can even tell which LLM produced the material by its habitual but distinctive crappy approach to simulating human thought and language).
In short, if you need an LLM to give what you think is meaningful input into a decision-making process on Misplaced Pages (much less to generate mainspace content for the public), then you need to go find something else to do, something that fits your skills and abilities. Saying this so plainly will probably upset someone, but so it goes. I have a rep for "not suffering fools lightly" and "being annoying but correct"; I can live with that if it gets the right decisions made and the work advanced. — SMcCandlish ☏ ¢ 😼 05:49, 22 December 2024 (UTC)
PS, something that came up more recently than my original post above: If anyone's wondering why some of us are such "AI" skeptics, it's because the technology is badly, badly faulty, producing multiple blatant factual errors even in short material, entirely fictive "hallucinations", direct forgery of sourcing, nested chains of error compounded upon error when context isn't gleaned properly, disingenuous attempts to hide that it is lying, and worse. A few days ago, I got ChatGPT 4o to literally lose its f#*$ing mind: I had it generate some short code (very simple one-liner regex) with a really obvious error, so I pointed out the error and gave it documentation that proved it was an error. It conceded the error and even explained what the error was and why it was erroneous, then said how it was going to fix it. And ... proceeded to output the same error again (in just one of three examples, the other two being correct). Fascinated and horrified by this, I tried for over half an hour to get it to produce a correct example, and it was utterly incapable of doing it. It knew the error was an error and what that error was and why, spelled out what the correct approach should be, then repeated the error every single time (always at the first occurrence, or always if the only occurrence). I've captured that session and will be doing a write-up about it. This is much, much worse than a "hallucination", being an abject inability to stop doing what it already knows is absolutely wrong. When people have fears like "If I had a house AI, and it was in control of the bathwater temperature, it might boil my children alive", they are not paranoid or unreasonable. My experiment with ChatGPT proves this conclusively. If the AI can (with trivial effort) be put into a crazy failure-looping state where it knows it's doing or about to do something wrong but insists on doing it anyway – i.e. during its take-an-action phase it completely loses connection to reality, even its internal reality, much less external input telling it "no!" – then we are in grave danger. This is essentially the mental state of a psychopath: "I know I shouldn't grab my neighbor's little daughter and strangle her to death, but I just can't stop myself." — SMcCandlish ☏ ¢ 😼 01:42, 28 December 2024 (UTC)
- The problem with all that is that we already have a policy that allows the hatting or removal of comments that are actually problematic because of their content (which are the only ones that we should be removing) without regard for whether they were or were not written by an LLM. Everything that actually should be removed can be removed already. Thryduulf (talk) 11:39, 22 December 2024 (UTC)
- If that's "the problem with all that", then it is not any kind of problem at all. It's entirely normal in our WP:P&G material to reiterate a principle defined in one place for contextual application at another, and to extrapolate from a general principle to a more specific application. We do this often to cut through the exact kind of wikilawyering we're facing over this issue: there's not a specific rule against LLM-generated !voting, so the argument is (again and again in this thread) to permit it (unless it's such senseless gibberish that it would be removed anyway even if no LLM were involved). The community clearly doesn't actually want that result (or more accurately, there is not a consensus in favor of it), though this specific proposal's approach to thwarting the "AI"-spamming of our decision-making processes might not be perfect. To me, it's a step in the right direction. If it were implemented this way and some uncommon issue arose with that implementation, then we'd tweak it to address that micro-problem. We must not continue to avoid addressing the macro-problem just because someone can imagine edge cases that might not work out ideally. That, too, is true of all of our P&G and process. — SMcCandlish ☏ ¢ 😼 01:15, 28 December 2024 (UTC)
- People whose reading skills, judgement, and reasoning habits are good enough to find problems in existing articles don't necessarily have great interpersonal writing/communication skills, or the confidence to use them. Meanwhile, for all that LLMs are bad at, they are very good at diluting everything you say into something dry, dispassionate, and thus inoffensive. Aaron Liu (talk) 15:26, 22 December 2024 (UTC)
- I doubt that anyone would seriously object to some ultra-hothead running their post through an LLM with a query like "Can you see anything in this that might come off as aggressive or offensive, and suggest a revision that only affects that aspect of the material?" The result might not even be noticeable as LLM-modified. If it doesn't stick out as LLM garbage, then there is no way for this proposal to affect such a post, because no one here is a mind-reader (we cannot magically detect through remote sensing that someone toned their rant down with LLM help). So, this proposal is not broken (at least not with regard to that scenario). That said, the main reason that WP:COMPETENCE was written (and it's one of the essays that, like WP:BRD and WP:AADD, has become "operative" within the community as if a policy or guideline) is the very "don't necessarily have great interpersonal ... skills" issue you mention. That is, lacking those skills to a serious enough degree makes one not competent to work on a collaborative encyclopedia project, and one will eventually be ejected after causing enough disruption. Something we don't need is LLMs masking for a while that someone has this kind of competence failure; it will just drag out the inevitable. By way of analogy: if I were a kleptomaniac and just unable to stop myself from shoplifting, it would not be okay for me to use a device that scrambled stores' surveillance camera footage to make it more likely for me to get away with more shoplifting (and there would certainly be no obligation on the part of the store owner to turn their cameras off, or to take no action if they catch me stealing, just because I tell them I'm a kleptomaniac and my unconstructive behavior isn't something I can manage). — SMcCandlish ☏ ¢ 😼 01:15, 28 December 2024 (UTC)
- People who can't communicate that well often won't attempt to burn others down. I don't think there's any disruption or substantial additional investment in accommodating those who won't burn others down (which I'd say is the majority) by reading their perfectly comprehensible AI-diluted arguments. (Scrambling footage is like erasing the memories of the incident, which I don't think is a good analogue of the issue at hand. I'd say it's more like working with someone who stinks and masks that with perfume.) Aaron Liu (talk) 00:05, 29 December 2024 (UTC)
- I'm pretty sure most people here understand why there are AI skeptics. I also hate ChatGPT. But there's a difference between my personal opinion of ChatGPT and the fact that someone using ChatGPT to communicate is still communicating. The risk of your house AI hallucinating is that your children get boiled alive. The risk of ChatGPT hallucinating on a talk page is that they make a dumb, disregardable argument, of the kind that humans pump out by the thousands. (Someone linked an example of some AfD delete !votes generated by AI and frankly they're better reasoned than a lot of the stuff humans post.) Gnomingstuff (talk) 17:32, 28 December 2024 (UTC)
- (The bigger issue is people using Misplaced Pages as ChatGPT -- i.e. posting the prompts, not the responses -- but, like much of what is being discussed here, that is already stuff that can be removed.) Gnomingstuff (talk) 17:36, 28 December 2024 (UTC)
- ok, I agree with @SMcCandlish, so therefore my vote is Support. Sm8900 (talk) 12:41, 25 December 2024 (UTC)
- Support. Sure I have questions about detection, but I don't think it means we shouldn't have a policy that explicitly states that it should not be used (and can be ignored/hatted if it is). Judging solely based on content (and no wp:bludgeoning, etc.) is unsustainable IMO. It would mean taking every wall of text seriously until it's clear that the content is unhelpful, and LLMs are very good at churning out plausible-sounding bullshit. It wastes everyone's time. If cognitive impairments or ESL issues make it hard to contribute, try voice-to-text, old-school translation software, or some other aid. LLMs aren't really you.--MattMauler (talk) 11:27, 23 December 2024 (UTC)
- Comment. While I agree with the sentiment of the request, I am at a loss to see how we can identify LLM-generated comments in a consistent manner that can scale. Yes, it might be easier to identify an egregious copy-paste of a wall of text, but anything short of that might be hard to detect. Our options are:
- Robust tooling to detect LLM-generated text, with acceptably low levels of false positives. Somewhat similar to what Earwig does for Copyvios. But someone needs to build it and host it on WMTools or at a similar location.
- Self-certification by editors. Every edit/publish dialog box should have a checkbox for "Is this text LLM-generated?" with y/n optionality.
- Editors playing a vigilante role in reading the text and making a personal call on other editors' text. Obviously this is the least preferred option.
- These are my starting views. Ktin (talk) 00:37, 24 December 2024 (UTC)
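As an illustrative aside on the first of the options above (automated detection tooling): the sketch below shows roughly what a triage helper wired to a scoring service could look like. Everything in it is hypothetical (the endpoint, the response field, and the threshold are placeholders, not the API of GPTZero, Earwig, or any real tool), and per the anecdote immediately below, any such score could only ever flag material for human review, never remove anything automatically.
<syntaxhighlight lang="python">
import requests

# Hypothetical detector endpoint and response field; placeholders only,
# not the API of GPTZero, Earwig's Copyvios tool, or any real service.
DETECTOR_URL = "https://example.toolforge.org/api/v1/score"
FLAG_THRESHOLD = 0.90  # only flag for *human* review above this score


def score_comment(comment_text: str) -> float:
    """Ask the (hypothetical) detector how likely the text is machine-generated."""
    response = requests.post(DETECTOR_URL, json={"text": comment_text}, timeout=10)
    response.raise_for_status()
    return response.json()["ai_probability"]  # assumed field name


def triage(comments: list[str]) -> list[str]:
    """Return only the comments that exceed the threshold, for human review.

    Nothing is removed automatically: the score is a hint, not a verdict,
    which matters given the false-positive concerns raised in this thread.
    """
    return [c for c in comments if score_comment(c) >= FLAG_THRESHOLD]
</syntaxhighlight>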
- A couple of days ago, an editor ran a new article through GPTzero, and was told that it was 99.8% likely to be AI-generated. I ran the same revision of the same article through the same tool the next day, and it told me that it was 98.3% likely to be human-written.
- Now we're left scratching our heads: Why the difference? Which answer is correct? WhatamIdoing (talk) 04:36, 26 December 2024 (UTC)
- Strong oppose as it's impossible to enforce. Also LLMs are a valid and useful accessibility tool. – Anne drew 05:18, 24 December 2024 (UTC)
- Bonus suggestion!: I'm curious what Wikipedians think about this so let's try this. Many of the comments here discuss the impracticality of determining whether a user's comments are AI-generated (e.g. GPTZero isn't perfect), and many give valid arguments for using LLMs (e.g. ESL). If an argument is suspected to be written by an LLM, I propose that editors should examine the user. Take a look at their listed contributions, and if they seem to have a habit of using AI, open a discussion on their talk page. If the user has a habit of using AI and doesn't recognize the inherent problems and refuses to change, this can be brought to the administrators' noticeboard for potential blocks. If (and only if) the person is blocked for using AI, their comments can be ignored. Or just ask ChatGPT to summarize them for you lol guninvalid (talk) 06:12, 25 December 2024 (UTC)
- I think that the general idea here is good: As much as possible, treat each account individually, and handle LLM use as a behavioral problem when it's actually a practical problem (i.e., not when it seems to be accurate and seems to be appropriate). WhatamIdoing (talk) 04:38, 26 December 2024 (UTC)
- Except a substantial portion of the input in this thread is disagreement with the idea that English being someone's second language is an excuse for LLM-generated content at all. Translation of one's own material is an entirely different process (even if often leveraged by LLMs set to a different task than trying to generate what they "think" is new material that will pass as human-authored). I'm skeptical that any of the other things you've accepted as "valid arguments" from the pro-LLM or LLM-accepting side of this debate have consensus as valid, either. But go ahead and spell them out and we'll see. — SMcCandlish ☏ ¢ 😼 01:30, 28 December 2024 (UTC)
- Strong support the removal of any obvious, low effort AI-generated post. I recently came across a user posting multiple such examples. When called out on it they blew up and posted a comment saying, amongst other things "HAHAHAHHAHAHAHAHAHAHHAHAHAHAHAHA. Yes, some of it might be. Because I don't have time to argue with, in my humble opinion, stupid PHOQUING people." and "YOUUUUUUUUUUUUUUUUUUUUUUUUUUUU are assuming bath faith in me."
- They were later blocked as a sock evading a global lock.
- Currently it is too easy for trolls to game WP:AGF, using AI to waste people's time arguing with their bot-generated replies. Using AI to write your posts for you makes it difficult for others to assume good faith. I am ok with obvious exceptions like a non-native speaker using AI to help them articulate their point. Photos of Japan (talk) 21:29, 25 December 2024 (UTC)
- Support for any and all reasons above. I'd be in favor of entirely banning AI-written text on the platform in articlespace as well. — Preceding unsigned comment added by Darth Stabro (talk • contribs) 00:05, 27 December 2024 (UTC)
- Support, though with caution, as there is the possibility of false positives. SportingFlyer T·C 00:14, 29 December 2024 (UTC)
- Support for now. I believe the foundation should be busy ramping up support in this arena of distraction. IMHO, we are in the calibration phase of the use of these models. We don't feed trolls and we shouldn't tutor LLMs. Since the assumption of good faith is not a suicide pact, we shouldn't rely on it overmuch for guidance on edge cases. The issues as I see them are attribution (how do I know where your idea has been?), obedience to social norms (I not-here blocked someone recently for brazenly using ChatGPT to lie about using LLMs; the user proceeded to use GPT to post unblock requests), and a vast canyon between the limited human and financial resources of the foundation and the unlimited resources of bad actors with vast sums of money who would like to see Misplaced Pages less able. I have hopes we can get some higher-visibility anti-LLM support (like a flag in my mouseover which always reports a percentage, so I know to check). This fire is burning now. It would be unwise to ignore this much longer. BusterD (talk) 16:28, 29 December 2024 (UTC)
- Support with the clarification that use of AI assistance is not prohibited, as long as its contribution is <<50%. For example, using Grammarly for spell check and grammar/syntax is OK, but using AI to do your homework is not. Dhaluza (talk) 02:45, 30 December 2024 (UTC)
- How do you propose to determine whether an AI's contribution is or is not "<<50%"? Thryduulf (talk) 12:22, 2 January 2025 (UTC)
- Support a ban on comments that were primarily generated by AI, with such comments to be deleted or struck-through as appropriate, because it's inherently misleading to pass off a chatbot's words as your own. Using ML-based spell-checkers is fine. I've seen some users call ChatGPT an "accessibility tool", but if you're not capable of communicating on English Misplaced Pages without an LLM doing it for you, then I don't think English Misplaced Pages is a good community for you to participate in. — Preceding unsigned comment added by Pythoncoder (talk • contribs) 21:45, 31 December 2024 (UTC)
- Just read this ANI thread, where an editor was indeffed per WP:NOTHERE after exclusively using LLMs to edit, even when responding within that very thread. The situation was a clear case of WP:CIR, which I surely would've cited in my original comment had the page name come to mind. —pythoncoder (talk | contribs) 04:34, 3 January 2025 (UTC)
- That they were banned for CIR shows we don't need anything new to deal with disruption specifically caused by people using LLMs. Thryduulf (talk) 05:29, 3 January 2025 (UTC)
- Support A popular notion in many, many other discussions is "our most valuable resource is editor time". This applies here more than in most instances - editors cannot be expected to wear themselves out in engaging at length with the results of someone's one-line prompt to an LLM. Where LLM use in articles is destructive of content and source reliability, in discussions it is destructive of editor good faith. If you can't be bothered to put your arguments into intelligible form, don't participate. If your language capacity is lacking to the extent that you have to have a program generate (as opposed to polish) your stuff, you probably don't have any business participating either. Human detection ability seems quite sufficient for these cases (and once it ceases to, well, we'll deal with that when it becomes an issue). --Elmidae (talk · contribs) 12:00, 2 January 2025 (UTC)
- "Human detection ability seems quite sufficient for these cases" – these discussions have already proven that human detection is not reliable, with human-generated comments labelled as AI and AI-generated comments labelled as human. Why should we prohibit LLM-generated content that is accurate, relevant and intelligible? We can already ignore/remove content that is none of those things regardless of whether it is LLM-generated or not. Thryduulf (talk) 12:21, 2 January 2025 (UTC)
- If you want a pithy in-a-nutshell: personal effort is buy-in in a discussion among equals. If your personal effort in a discussion is an LLM prompt, no editor should be expected to engage with you. --Elmidae (talk · contribs) 12:42, 2 January 2025 (UTC)
- What matters is whether people are engaging with each other constructively and in good faith. Whether one or both parties is using an LLM is completely irrelevant to both aspects. Thryduulf (talk) 13:29, 2 January 2025 (UTC)
- What matters is whether editors wish to engage with an LLM on Misplaced Pages. https://news.mit.edu/2024/large-language-models-dont-behave-like-people-0723 I don't mind interacting with an LLM for my own use, just not on here. Selfstudier (talk) 13:42, 2 January 2025 (UTC)
- Nobody is forcing you to engage with anybody or anything you don't want to engage with, but you do not get to state who other people are allowed to choose to engage with. As long as someone is engaging constructively and in good faith I don't care whether they are human or machine, because (a) I have no reliable way of knowing, and (b) it is literally irrelevant to me. Thryduulf (talk) 14:14, 2 January 2025 (UTC)
- "you do not get to state who other people are allowed to choose to engage with" – Strawman, since I didn't do that. Only you are doing that. Selfstudier (talk) 14:26, 2 January 2025 (UTC)
- Quite the contrary - you are saying that nobody should be allowed to engage with someone using an LLM because you do not want to engage with someone using an LLM. My position is that everybody should be allowed to choose who they want and do not want to engage with for themselves. Thryduulf (talk) 14:42, 2 January 2025 (UTC)
- Support in principle - Using entirely AI-generated text in discussion is not a valuable contribution, since ultimately it is not capable of thought, but there should be leniency for those who are simply using AI to improve something they already wrote. FOARP (talk) 14:55, 2 January 2025 (UTC)
- Support in principle. We don't want LLMs' opinions. IF and WHEN we do, we'll ask for an extension with some AI making an opening analysis of nominations. But we cannot, and do not want to, exclude cases where some user forms their opinion chatting with some AI, as was brilliantly shown above, nor do we want to exclude anyone less proficient with the language just "because it looks like a bot". - Nabla (talk) 18:13, 2 January 2025 (UTC)
- Support the principle per Nabla. Aaron Liu (talk) 21:23, 2 January 2025 (UTC)
- Support in principle, strongly concerned in potential practice - I like the idea of removing what is verifiably generated by a LLM. On the other hand, I've read that autistic writers may be more likely to have their (completely non-AI) writing flagged as potential AI use by automated detectors. I think it could be a problem if a rule that's good in principle could start disproportionately affecting some editors (who are not doing the thing) more than others (who are not doing the thing) in practice, whether that come from human (i.e. other editor) bias or automated tools. - Purplewowies (talk) 18:36, 3 January 2025 (UTC)
- Strongly support per ChatGPT (EEng) and SMcC; I'm not at all concerned about edit accessibility to non-native speakers; WP:COMPETENCE, after all, is required to contribute to this project. I also oppose any breaking of the line against AI-generated plausible-sounding and -looking slop. We can't have a Misplaced Pages of LLMs talking to LLMs on how to make LLM content for LLM articles. Iseulttalk to me 21:53, 4 January 2025 (UTC)
- Support per SMcCandlish. If someone needs to use an LLM to be able to contribute and is unable to convert the output into something that is not obviously AI-generated, then WP:CIR likely applies. It'll be impossible to stop minimal usage cases, but having such a rule in place would at least encourage editors to review their output and stop the unhelpful posting of entire ChatGPT essays to discussion spaces. -- Patar knight - /contributions 01:41, 6 January 2025 (UTC)
- Strong... opinion -- I think the nuance required here in part is the difference between someone using any automated tool for assistance, versus true bot-like behavior. I believe that unauthorized bot behavior is already prohibited, which should help address the concerns that we mere humans cannot keep up with LLM bots. I agree, we cannot, but I don't see much of that. I am also not inclined toward the "if you cannot write, you cannot contribute" view... I can imagine 15 years ago some of us might have made the same statement about spelling and grammar; if you cannot spell properly without auto-correct you have no right to edit an encyclopedia. There are a significant number of very intelligent people who are afflicted with things like dyslexia, Asperger's, etc. who have been contributing using various technology tools for assistance. How many of us have Grammarly or similar running in their web browser? And beyond that, tools and what they're called will continue to evolve. I am very much against just banning LLM use, largely because it can turn into an unnecessary witch hunt. There are people who will use the tools constructively, and those who will not. I can see some places where it should probably be banned (such as using an LLM to determine consensus on a discussion that needs closing (AfD, RM, etc)). But even in those areas, I think many of our existing policies and guidelines already address most of the actual concerns we're seeing when it comes to that activity. Chiefly, that as long as people are being held accountable for how they use the tools, then who cares what the tool is called in 2000, 2020 or 2040? So I think the course forward is best served by (1) considering refinement to WP:BOTP so that we're encapsulating LLM-type bot behavior, as well as some sort of threshold on "non-human" capable editing limits (perhaps as part of WP:MEATBOT); (2) making a policy or guideline very clear, bright-line, that a user will be treated the same regardless of what tools they use, LLM or otherwise, and that disruptive editing will be handled accordingly; (2a) perhaps a single-warning template reflective of such, to welcome people who appear to be using LLMs, reminding them that they are responsible for their adherence to policy and that LLMs tend to get policy wrong. TiggerJay (talk) 02:48, 7 January 2025 (UTC)
- I like these proposal ideas, and I believe you've hit the nail on the head on the issue. Aaron Liu (talk) 13:14, 7 January 2025 (UTC)
- +1 Donald Albury 15:16, 7 January 2025 (UTC)
- The proposal is focused on comments that are obviously generated by LLM, which would not include non-generative AI processes like Grammarly or translation. The whole issue is arguments that were clearly created by a chatbot etc. doing its own analysis of a discussion and an editor just pasting that output rather than forming the argument themselves, which results in large amounts of text that other editors/closers then have to read and respond to. JoelleJay (talk) 22:26, 7 January 2025 (UTC)
- Right, where one doesn't feel like one is talking with Data. Selfstudier (talk) 22:52, 7 January 2025 (UTC)
- To be honest, that's my concern. What if something seems obviously generated by LLM but is entirely human-written? What if certain groups of editors are more likely to produce seemingly LLM text that isn't generated by any LLM at all? Is there a way to control for that? How do we consistently enforce this kind of thing without too many false positives or false negatives?
- Replying to the top-level opinion at this point: Most autistic people who would have in the past received an Asperger's diagnosis, barring some other reason, would IMO not be more likely to "need" more assistive technology (LLM or otherwise) to contribute to Misplaced Pages articles and discussions than any other user. However, there have been reports that autistic people may be more likely to produce text that is falsely positive in AI detection tools. See this story about a Purdue professor as one example (I had a second example, I thought, except that I've apparently read my last free article on that site (I bet I read said last article when looking up the same article when I wrote my !vote a few days back, Facepalm). Not gonna link what I can't (quickly) verify!)) (As an aside, I think "afflicted with" can come across as a bit... charged? Just something to note.)
- The mention of Data in particular only heightens my concern because--while no one has accused me yet of using LLM when I write--I have (on other sites, not here) been accused of being a bot before because of the style of writing I adopt at times. What if editor bias disproportionately plays into who is suspected of LLM use and who isn't? What if imperfections in automated AI detection only serve to cement the bias? - Purplewowies (talk) 01:25, 8 January 2025 (UTC)
- These questions around controlling for false positives and false negatives are not new, we face them all the time already, for example distinguishing vandalism from good faith contributions. We work through them, but they don't mean we don't have a policy on vandalism. CMD (talk) 01:41, 8 January 2025 (UTC)
- Indeed, that's why my original !vote is (in short) "support but concerned". I definitely think I'd support a policy; I'm just raising questions/concerns I think are worth considering in the process of creating such a policy. I think some part of me is just remembering times I've seen bias negatively affect the project in the past and that part of me is coming out here trying to prevent another situation like that by laying the concerns out. I dunno. - Purplewowies (talk) 03:59, 8 January 2025 (UTC)
- It's much easier to distinguish good-faith contributors: see if they change. Aaron Liu (talk) 04:06, 8 January 2025 (UTC)
- I think, Joelle, what you're overlooking in that statement is that obviously generated is something that is going to be short-lived, meaning that it will not be very long before LLM-generated content will be far harder to distinguish. Just like how, only a year ago, any generative image AI would often get the number and/or placement of fingers wrong on a person's hand, and in less than a year's time that appears to be generally solved. Today the telltale sign is that even if you ask for someone who is left-handed, you'll almost always get an image of a right-handed person... But that too will go away shortly. I believe it is (scarily) not too far off when LLMs can determine the prompt is for Misplaced Pages (by way of seeing the wiki markup or references to normal wiki policies), and among other things, write in the style of experienced editors, perhaps modeled off of some of us in this very discussion. That will be much more difficult to tell, especially when it learns how to not be so overly polite and apologetic. Beyond that, I believe there are a lot of people successfully and positively using LLMs already on WP. For those reasons, I proffer that we focus on refining the definitions and parameters for general disruptive editing (such as EW, 3RR, etc.), what a bot edit/activity is, and perhaps a standardized "friendly welcome" and "friendly notice" for those suspected of using LLMs, and reinforce that the tool matters less than the behavior. I think the analogy I'm going for is that of how we handle socks -- it is not against policy to have multiple accounts; however, when it is disruptive, it is blatantly prohibited. But the signs and symptoms of socks are really what we care about. TiggerJay (talk) 05:02, 8 January 2025 (UTC)
- Something can be prohibited even if it is or will be difficult to identify it. Just because we can anticipate AI-generated slop becoming harder to detect doesn't mean it's not still disruptive or that we shouldn't be able to strike it when it is obvious. JoelleJay (talk) 23:41, 9 January 2025 (UTC)
- Strong support for striking AI-generated comments. Detection of course will be up to admin discretion, but so are things like incivility and personal attacks; while short snippets of AI-generated text are hard to detect, lengthy paragraphs are quite easy to eyeball, and from my experience that's the area where it's most likely to come up and be most immediately disruptive. Generalissima (talk) (it/she) 22:25, 9 January 2025 (UTC)
- Support "for striking AI-generated comments. Detection of course will be up to admin discretion" as are other behavioural problems. Having read most of the discussion, and having been astonished by the 'individuality' of some machine-generated content, nonetheless, while detection may be less-than-foolproof, I still think we should make the point that such content isn't acceptable. It should be within our capacity to make the distinction between 'machine-generated' content and the use of software to correct grammar or to translate, neither of which should be directly outlawed. Pincrete (talk) 07:18, 10 January 2025 (UTC)
- But why should such content be deemed unacceptable when there are no problems with it? When there are problems with it, it can and should be dealt with in the same way that those problems are dealt with when written by humans. Thryduulf (talk) 18:25, 10 January 2025 (UTC)
Request for closure
IMHO, since we're over 30 days in and the consensus of the arguments seems lopsided, the community would do well to close this rapidly (as opposed to waiting for additional comments). BusterD (talk) 21:04, 2 January 2025 (UTC)
- Sure it looks lopsided if you only read what has been bolded. When you look beyond that, at what those bolding support are actually supporting, it's not at all clear there is a consensus. Then when you actually look at the arguments made, it seems that there is not actually a consensus for anything more than "some users vocally dislike AI". Thryduulf (talk) 21:59, 2 January 2025 (UTC)
- If that's so, I'm sure the closer will figure it out. Selfstudier (talk) 22:14, 2 January 2025 (UTC)
- I was merely pointing out that BusterD's stated reason for calling for closure now is incorrect. That doesn't mean closing now would necessarily be wrong, just that overwhelming consensus is not a reason for it. Thryduulf (talk) 22:25, 2 January 2025 (UTC)
- As a sysop, reacting to LLM-related abuse is virtually all I've done in the last week or so. We have a pressing need for some help from the community, and while I'm open to discussion on this, it's unreasonable to expect humans to keep up with bot-like behavior. I've made my argument above, and I'll not disagree further here. BusterD (talk) 00:08, 3 January 2025 (UTC)
- Many on the “support” side are also supporting with conditions instead of unconditional support. For instance, many have suggested that LLM-assisted grammar fixes are okay.
- The closer must also take this into account. ✠ SunDawn ✠ (contact) 11:02, 3 January 2025 (UTC)
- There are multiple breaks and 391 comments now. At this scale, it becomes difficult for a human to digest and summarise it all. AI technology seems well suited to this, and so I asked Gemini again "Please summarise the consensus of the following RFC:", giving it all the discussion above. Its response was much as before:
The RFC on LLM/chatbot comments in discussions is contentious, with a significant number of editors on both sides of the issue.
Support
Many editors support the proposal to discount or remove LLM-generated comments in discussions. Their arguments include:
- LLMs do not have original thoughts or arguments and therefore do not contribute meaningfully to discussions.
- LLM-generated comments can be disruptive, wasting editors' time and derailing discussions.
- LLMs can be used by bad-faith actors to manipulate consensus or spread misinformation.
- Editors should be encouraged to participate in discussions using their own words and thoughts.
Oppose
Other editors oppose the proposal, arguing that:
- LLMs can be useful tools for editors who have difficulty expressing themselves in English, such as non-native speakers or those with disabilities.
- LLM-generated comments should be evaluated on their merits, not their origin.
- Detection of LLM-generated comments is unreliable and could lead to false positives.
- A ban on LLM-generated comments could discourage participation from editors who rely on these tools.
Additional points
- There is disagreement about whether machine translation should be considered LLM-generated content.
- Some editors believe that LLMs could be used constructively in discussions, for example, to summarize arguments or provide alternative perspectives.
- Others are concerned about the potential for LLMs to be used maliciously, for example, to flood discussions with irrelevant comments or to impersonate other editors.
Overall
There is no clear consensus on whether to ban or allow LLM-generated comments in discussions. The discussion highlights the challenges of balancing the potential benefits and risks of using LLMs in a collaborative environment like Misplaced Pages.
Andrew🐉(talk) 21:58, 10 January 2025 (UTC)
That was a good answer but I wanted to get a rough headcount and so asked a supplementary question, "What was the count of the number of support and opposing votes in the discussion?" The answer was:
Unfortunately, a precise count of support vs oppose votes in that discussion is difficult to determine definitively for a few reasons:
- Some votes are unclear: Several editors express nuanced positions or change their vote during the discussion.
- "Strong" votes aren't quantified: While many editors use "Strong support" or "Strong oppose", these don't carry extra weight in a simple tally.
- Focus shifts: The initial focus on AI-generated comments broadens to include AI-assistance, making some votes less clear-cut.
However, based on a rough manual count:
- Support: Around 60-70 votes clearly favor discounting or removing AI-generated comments.
- Oppose: Roughly 30-40 votes lean against the proposal, with varying degrees of opposition.
Important Note: This is an approximation. The actual consensus in such discussions is determined by the closer, who weighs the arguments' quality and policy relevance, not just the number of votes.
This seems to be another good answer – better than we get from most human closes. I made a rough manual count to confirm that the numbers given are reasonable ballpark figures. And it seems to have an impressive grasp of nuance – noticing that several editors were qualifying their votes as strong/weak and that some editors changed their vote. Andrew🐉(talk) 22:02, 10 January 2025 (UTC)
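For anyone curious how this sort of summary could be produced programmatically rather than through the chat interface, a minimal sketch using Google's generative AI Python SDK follows. The model string mirrors the "Gemini 1.5 Pro" mentioned above but may not match exactly what was used, the file name is a placeholder, and a discussion this size may exceed the model's context window; in any case the output is only a reading aid, not a substitute for a human close weighing the arguments.
<syntaxhighlight lang="python">
import google.generativeai as genai

# Assumes an API key from Google AI Studio; the model string is illustrative
# and may differ from the "Advanced Gemini 1.5 Pro" mentioned above.
genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

# Placeholder file containing the text of the RfC discussion.
with open("rfc_discussion.txt", encoding="utf-8") as f:
    discussion = f.read()

prompt = "Please summarise the consensus of the following RFC:\n\n" + discussion
response = model.generate_content(prompt)

# The generated summary; a human closer still has to weigh argument quality.
print(response.text)
</syntaxhighlight>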
- These feel (from my memory of when I skimmed over the discussion several days ago before my original !vote/comment) like they might be leaving out some relevant points that came up. (Unfortunately I can't quantify that without taking a look which I don't have time to do right now (other than the fact my own concerns didn't make it into the summary, which... fair, I guess, considering I was the only one who raised that specific issue).) (I'm also very surprised it came up with something coherent as well as on topic, but that just has more to do with the fact Gemini (or at least whatever AI summarizes Google searches) told me extremely recently that I could get to Alaska by car without a passport if I drove through the ocean. Not took a ferry but drove... through the water... in a car.) - Purplewowies (talk) 07:02, 11 January 2025 (UTC)
- Your !vote was a qualified support with concern about false positives affecting editors such as the autistic. That's then a nuanced, both-ways vote and the Gemini summary seems to capture that reasonably well. Andrew🐉(talk) 09:43, 11 January 2025 (UTC)
- My vote was mentioned purely because it was the only specific thing I could bring to mind quickly that hadn't ended up in the AI summary. I think Thryduulf's take about what it doesn't make clear more closely matches what I think I was actually feeling about the summary (that it doesn't make clear the wide breadth of what the support !votes are actually saying). - Purplewowies (talk) 19:08, 11 January 2025 (UTC)
- On the whole that summary matches my impression of discussion, with the exception that it doesn't make it clear that not everyone expressing support is supporting the same thing. Thryduulf (talk) 14:38, 11 January 2025 (UTC)
- I have strong doubts that Gemini is able to discern support or opposition even in individual !votes, let alone assess one participant's opinion across multiple comments or the relative preponderance of a given opinion across different participants. I fed it just the material before section break 1, then asked it to list the editors who oppose banning AI-generated comments. Here was its response:
Based on the discussion, the following editors oppose banning AI-generated comments or express concerns about the feasibility of such a ban:
Just Step Sideways: Argues against dismissing arguments based on the author rather than their content.
Remsense: Believes the downsides of such a policy outweigh the benefits.
Chipmunkdavis: Supports admin discretion and emphasizes the difficulty of creating a policy around rapidly evolving AI technology.
Jlwoodwa: Acknowledges that not all AI-generated comments are useless and suggests a case-by-case approach.
Aaron Liu: Is skeptical about the long-term feasibility of reliably distinguishing between human and AI-generated text.
Sean.hoyland: Expresses skepticism about the enforceability of such a rule due to the rapid advancements in AI technology.
Gnomingstuff: Strongly opposes a ban, citing the difficulty of enforcement, potential for false positives, and the impact on non-native English speakers.
Randy Kryn: Expresses concerns about potentially discouraging the use of AI tools for improving the clarity of writing by non-native English speakers.
Selfstudier: Acknowledges the principle of discounting AI-generated comments but questions the reliability of identifying them.
Somehow I don't think that is how a closer would interpret the !votes from @Just Step Sideways, @Remsense, @Chipmunkdavis, @Jlwoodwa, or @Randy Kryn... Note that it also did not manage to identify Thryduulf as an opposer until I asked who supported striking AI-generated comments:
- When I re-fed it the same discussion, it modified its response thusly:
Based on the discussion, the following editors support striking AI-generated comments:
Thryduulf: Initially supported discounting or ignoring AI-generated comments, but later withdrew support due to potential downsides.
Chipmunkdavis: Initially expressed support for admin discretion, but later shifted to more explicit support for guidance to closers and evaluators.
Bluethricecreamman: Strongly supports striking out AI-generated content, especially if it appears insincere or disruptive.
Isaacl: Supports striking out AI-generated content and favors sanctions for users who overuse AI to write arguments.
Clayoquot: Supports striking out AI-generated comments, comparing them to comments from suspected sock puppets.
Lazman321: Strongly supports striking out AI-generated comments, considering them as "pure laziness" and a waste of other editors' time.
Seraphimblade: Supports striking out AI-generated comments as they may not reflect the actual thoughts of the editor.
- The last bullet point makes zero sense...
Based on the additional excerpt of the discussion, the following editors support striking or collapsing AI-generated comments:
Jlwoodwa: Supports discounting or collapsing AI-generated comments, finding them disruptive if used to present arguments in an unnecessarily verbose manner.
Gnomingstuff: Opposes a hard ban, but acknowledges that AI-generated comments can be pointless and disruptive.
Seraphimblade: Reiterates support for striking AI-generated comments as they may not reflect the editor's thoughts.
- It's important to note that:
- Gnomingstuff also raises concerns about the difficulty of enforcing such a policy and potential unintended consequences, like discouraging users from using translation tools.
Aaron Liu's test using GPTZero on a human-written comment by Gnomingstuff indicated a 99% probability that it was human-written, suggesting that these tools may not be reliable.
- It didn't even identify me as supporting the proposal until I specifically asked what it thought my position was. No fucking way should we trust AI's assessment of any kind of consensus for any discussion. JoelleJay (talk) 23:02, 11 January 2025 (UTC)
- Perhaps it's better at summarising in a broad-brush way than parsing individual votes. But to clarify the technical details, please say which version of Gemini you used, as there seem to be at least five currently. My test was done using Advanced Gemini 1.5 Pro. I've also tried the Deep Research option but am not quite sure when that kicks in. Andrew🐉(talk) 09:57, 12 January 2025 (UTC)
- But broad-brush summaries are not what we want when the intent is to determine consensus among editors, since the AI is clearly not identifying positions accurately and doesn't seem to distinguish between one argument made multiple times by the same person vs by multiple different people.
I used Gemini 1.5. JoelleJay (talk) 19:04, 12 January 2025 (UTC)
Ok, what point is there to having AI summarize consensus? Consensus should be based on the strength of individual arguments, which only an experienced human editor can assess; AI can only guess what arguments look like. It's especially weird to use it here since if, theoretically, the AI summary decides there is consensus against considering AI comments in Misplaced Pages discussions, then that would include the AI summary itself, leading to Russell's paradox. Lazman321 (talk) 19:41, 11 January 2025 (UTC)
- One point is that AI may be better at summarising huge, sprawling discussions like this one. That's because it may have more patience and ability to handle large masses of data. Humans will tend to suffer from TLDR, MEGO and issues like confirmation bias and other barriers to critical thinking.
- Also, "beggars can't be choosers". The OP called for a close 10 days ago but no human has rushed to perform this thankless task. Discussions on Misplaced Pages are often left dangling without a clear conclusion, and there are many other big backlogs.
- Andrew🐉(talk) 10:32, 12 January 2025 (UTC)
Alternate proposal
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- Redundant proposal, confusingly worded, with no support, and not even any further discussion interest in 10 days. — SMcCandlish ☏ ¢ 😼 05:23, 22 December 2024 (UTC)
Whereas many editors, including me, have cited problems with accuracy in regard to existing tools such as ZeroGPT, I propose that remarks that are blatantly generated by an LLM or similar automated system should be discounted/removed/collapsed/hidden. ThatIPEditor 10:00, 10 December 2024 (UTC)
- Oppose as completely unnecessary and far too prone to error per the above discussion. Any comment that is good (on topic, relevant, etc) should be considered by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is bad (off-topic, irrelevant, etc) should be ignored by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is both bad and disruptive (e.g. by being excessively long, completely irrelevant, bludgeoning, etc) should be removed and/or hatted as appropriate, regardless of whether it was made with LLM-input of any sort. The good thing is that this is already policy so we don't need to call out LLMs specifically, and indeed doing so is likely to be disruptive in cases where human-written comments are misidentified as being LLM-written (which will happen, regardless of whether tools are used). Thryduulf (talk) 11:19, 10 December 2024 (UTC)
- I think this proposal is not really necessary. I support it, but that is because it is functionally identical to the one directly above it, which I also supported. This should probably be hatted. BugGhost 🦗👻 18:32, 10 December 2024 (UTC)
- What does blatantly generated mean? Do you mean only where the remark is signed with "I, Chatbot", or anything that appears to be LLM-style? I don't think there's much in between. ypn^2 19:21, 10 December 2024 (UTC)
- Procedural close per BugGhost. I'd hat this myself, but I don't think that'd be appropriate since it's only the two of us who have expressed that this proposal is basically an exact clone. Aaron Liu (talk) 03:00, 11 December 2024 (UTC)
@Tamzin: Did you find there was consensus support for handling obvious use of comments through any form of generative AI, regardless of the specific underlying technology (thus not only for large language models)? isaacl (talk) 23:45, 12 January 2025 (UTC)
- I've added "or similar AI technologies" to clarify. I don't think there was enough discussion of technical aspects of AI to be more precise than that. -- Tamzin (they|xe|🤷) 00:43, 13 January 2025 (UTC)
- My general impression is that most commenters were using the term "LLM" generically, rather than referring to a specific technology. isaacl (talk) 04:28, 13 January 2025 (UTC)
RfC: Voluntary RfA after resignation
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- There is clear consensus that participants in this discussion wish to retain the "Option 2" status quo. We're past 30 days of discussion and there's not much traffic on the discussion now. It's unlikely the consensus would suddenly shift with additional discussion. --Hammersoft (talk) 18:29, 16 January 2025 (UTC)
Should Misplaced Pages:Administrators#Restoration of admin tools be amended to:
- Option 1 – Require former administrators to request restoration of their tools at the bureaucrats' noticeboard (BN) if they are eligible to do so (i.e., they do not fit into any of the exceptions).
- Option 2 – Maintain the status quo that former administrators who would be eligible to request restoration via BN may instead request restoration of their tools via a voluntary request for adminship (RfA).
- Option 3 – Allow bureaucrats to SNOW-close RfAs as successful if (a) 48 hours have passed, (b) the editor has right of resysop, and (c) a SNOW close is warranted.
Background: This issue arose in one recent RfA and is currently being discussed in an ongoing RfA. voorts (talk/contributions) 21:14, 15 December 2024 (UTC)
Note: There is an ongoing related discussion at Misplaced Pages:Village pump (idea lab) § Making voluntary "reconfirmation" RFA's less controversial.
Note: Option 2 was modified around 22:08, 15 December 2024 (UTC).
Note: Added option 3. theleekycauldron (talk • she/her) 22:12, 15 December 2024 (UTC)
- Notified: Misplaced Pages:Administrators' noticeboard, Misplaced Pages:Bureaucrats' noticeboard, Misplaced Pages talk:Administrators, Misplaced Pages talk:Requests for adminship, T:CENT. voorts (talk/contributions) 21:19, 15 December 2024 (UTC)
- 2 per Kline's comment at Hog Farm's RfA. If an admin wishes to be held accountable for their actions at a re-RfA, they should be allowed to do so. charlotte 21:22, 15 December 2024 (UTC)
- Also fine with 3 charlotte 22:23, 15 December 2024 (UTC)
- There is ongoing discussion about this at Misplaced Pages:Village pump (idea lab)#Making voluntary "reconfirmation" RFA's less controversial. CMD (talk) 21:24, 15 December 2024 (UTC)
- 2, after thought. I don't think 3 provides much benefit, and creating a separate class of RfAs that are speedily passed feels like a misstep. If there are serious issues surrounding wasting time on RfAs set up under what might feel to someone like misleading pretenses, that is best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)". CMD (talk) 14:49, 16 December 2024 (UTC)
- "best solved by putting some indicator next to their RFA candidate name. Maybe 'Hog Farm (RRfA)'" – I like this idea; if option 2 comes out as consensus I think this small change would be a step in the right direction, as the "this isn't the best use of time" crowd (myself included) would be able to quickly identify the type of RFAs they don't want to participate in. BugGhost 🦗👻 11:05, 17 December 2024 (UTC)
- I think that's a great idea. I would support adding some text encouraging people who are considering seeking reconfirmation to add (RRfA) or (reconfirmation) after their username in the RfA page title. That way people who are averse to reading or participating in reconfirmations can easily avoid them, and no one is confused about what is going on. 28bytes (talk) 14:23, 17 December 2024 (UTC)
- I think this would be a great idea if it differentiated against recall RfAs. Aaron Liu (talk) 18:37, 17 December 2024 (UTC)
- If we are differentiating three types of RFA we need three terms. Post-recall RFAs are referred to as "reconfirmation RFAs", "Re-RFAS" or "RRFAs" in multiple places, so ones of the type being discussed here are the ones that should take the new term. "Voluntary reconfirmation RFA" (VRRFA or just VRFA) is the only thing that comes to mind but others will probably have better ideas. Thryduulf (talk) 21:00, 17 December 2024 (UTC)
- 1 * Pppery * it has begun... 21:25, 15 December 2024 (UTC)
- 2 I don't see why people trying to do the right thing should be discouraged from doing so. If others feel it is a waste of time, they are free to simply not participate. El Beeblerino 21:27, 15 December 2024 (UTC)
- 2 Getting reconfirmation from the community should be allowed. Those who see it as a waste of time can ignore those RfAs. Schazjmd (talk) 21:32, 15 December 2024 (UTC)
- Of course they may request at RfA. They shouldn't but they may. This RfC feels like it does nothing to address the criticism actually in play, and per the link to the idea lab discussion it's premature to boot. Barkeep49 (talk) 21:38, 15 December 2024 (UTC)
- 2 per my comments at the idea lab discussion and Queen of Hearts, Beeblebrox and Schazjmd above. I strongly disagree with Barkeep's comment that "They shouldn't". It shouldn't be made mandatory, but it should be encouraged where the time since desysop and/or the last RFA has been lengthy. Thryduulf (talk) 21:42, 15 December 2024 (UTC)
- When to encourage it would be a worthwhile RfC and such a discussion could be had at the idea lab before launching an RfC. Best, Barkeep49 (talk) 21:44, 15 December 2024 (UTC)
- I've started that discussion as a subsection to the linked VPI discussion. Thryduulf (talk) 22:20, 15 December 2024 (UTC)
- 1 or 3. RFA is an "expensive" process in terms of community time. RFAs that qualify should be fast-tracked via the BN process. It is only recently that a trend has emerged that folks that don't need to RFA are RFAing again. 2 in the last 6 months. If this continues to scale up, it is going to take up a lot of community time, and create noise in the various RFA statistics and RFA notification systems (for example, watchlist notices and User:Enterprisey/rfa-count-toolbar.js). –Novem Linguae (talk) 21:44, 15 December 2024 (UTC)
- Making statistics "noisy" is just a reason to improve the way the statistics are gathered. In this case collecting statistics for reconfirmation RFAs separately from other RFAs would seem to be both very simple and very effective. If (and it is a very big if) the number of reconfirmation RFAs means that notifications are getting overloaded, then we can discuss whether reconfirmation RFAs should be notified differently. As far as differentiating them, that is also trivially simple - just add a parameter to template:RFA (perhaps "reconfirmation=y") that outputs something that bots and scripts can check for. Thryduulf (talk) 22:11, 15 December 2024 (UTC)
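To illustrate how trivially a bot or script could check for such a marker, here is a rough sketch using the MediaWiki API and the mwparserfromhell library. The "reconfirmation" parameter, and the assumption that the RfA page transcludes a template named "RfA" carrying it, are purely the hypothetical scheme floated above, so treat this as the shape of a solution rather than working tooling.
<syntaxhighlight lang="python">
import requests
import mwparserfromhell

API = "https://en.wikipedia.org/w/api.php"


def is_reconfirmation_rfa(page_title: str) -> bool:
    """Check whether a page's {{RfA}} template carries the hypothetical
    reconfirmation=y parameter suggested above."""
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "titles": page_title,
        "format": "json",
        "formatversion": "2",
    }
    data = requests.get(API, params=params, timeout=10).json()
    wikitext = data["query"]["pages"][0]["revisions"][0]["slots"]["main"]["content"]

    for template in mwparserfromhell.parse(wikitext).filter_templates():
        # Template and parameter names here are assumptions for illustration.
        if template.name.matches("RfA") and template.has("reconfirmation"):
            value = str(template.get("reconfirmation").value).strip().lower()
            return value in {"y", "yes"}
    return False
</syntaxhighlight>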
- Option 3 looks like a good compromise. I'd support that too. –Novem Linguae (talk) 22:15, 15 December 2024 (UTC)
- I'm weakly opposed to option 3; editors who want feedback and a renewed mandate from the community should be entitled to it. If they felt that a quick endorsement was all that was required, then they could have had that at BN; they explicitly chose not to go that route. Nobody is required to participate in an RFA, so if it is going the way you think it should, or you don't have an opinion, then just don't participate and your time has not been wasted. Thryduulf (talk) 22:20, 15 December 2024 (UTC)
- 2. We should not make it more difficult for administrators to be held accountable for their actions in the way they please. JJPMaster (she/they) 22:00, 15 December 2024 (UTC)
- Added option 3 above. Maybe worth considering as a happy medium, where unsure admins can get a check on their conduct without taking up too much time. theleekycauldron (talk • she/her) 22:11, 15 December 2024 (UTC)
- 2 – If a former admin wishes to subject themselves to RfA to be sure they have the requisite community confidence to regain the tools, why should we stop them? Any editor who feels the process is a waste of time is free to ignore any such RfAs. — Jkudlick ⚓ (talk) 22:12, 15 December 2024 (UTC)
- I would also support option 3 if the time is extended to 72 hours instead of 48. That, however, is a detail that can be worked out after this RfC. — Jkudlick ⚓ (talk) 02:05, 16 December 2024 (UTC)
- Option 3 per leek. voorts (talk/contributions) 22:16, 15 December 2024 (UTC)
- A further note: option 3 gives 'crats the discretion to SNOW close a successful voluntary re-RfA; it doesn't require such a SNOW close, and I trust the 'crats to keep an RfA open if an admin has a good reason for doing so. voorts (talk/contributions) 23:24, 16 December 2024 (UTC)
- 2 as per JJPMaster. Regards, --Goldsztajn (talk) 22:20, 15 December 2024 (UTC)
- Option 2 (no change) – The sample size is far too small for us to analyze the impact of such a change, but I believe RfA should always be available. Now that WP:RECALL is policy, returning administrators may worry that they have become out of touch with community norms and may face a recall as soon as they get their tools back at BN. Having this familiar community touchpoint as an option makes a ton of sense, and would be far less disruptive / demoralizing than a potential recall. Taking this route away, even if it remains rarely used, would be detrimental to our desire for increased administrator accountability. – bradv 22:22, 15 December 2024 (UTC)
- (edit conflict) I'm surprised the response here hasn't been more hostile, given that these give the newly-unresigned administrator a get-out-of-recall-free card for a year. —Cryptic 22:25, 15 December 2024 (UTC)
- @Cryptic hostile to what? Thryduulf (talk) 22:26, 15 December 2024 (UTC)
- 2, distant second preference 3. I would probably support 3 as first pick if not for recall's rule regarding last RfA, but as it stands, SNOW-closing a discussion that makes someone immune to recall for a year is a non-starter. Between 1 and 2, though, the only argument for 1 seems to be that it avoids a waste of time, for which there is the much simpler solution of not participating and instead doing something else. Special:Random and Misplaced Pages:Backlog are always there. -- Tamzin (they|xe|🤷) 23:31, 15 December 2024 (UTC)
- 1 would be my preference, but I don't think we need a specific rule for this. -- Ajraddatz (talk) 23:36, 15 December 2024 (UTC)
- Option 1. No second preference between 2 or 3. As long as a former administrator didn't resign under a cloud, picking up the tools again should be low friction and low effort for the entire community. If there are issues introduced by the recall process, they should be fixed in the recall policy itself. Daniel Quinlan (talk) 01:19, 16 December 2024 (UTC)
- After considering this further, I prefer option 3 over option 2 if option 1 is not the consensus. Daniel Quinlan (talk) 07:36, 16 December 2024 (UTC)
- Option 2, i.e. leave well enough alone. There is really not a problem here that needs fixing. If someone doesn’t want to “waste their time” participating in an RfA that’s not required by policy, they can always, well, not participate in the RfA. No one is required to participate in someone else’s RfA, and I struggle to see the point of participating but then complaining about “having to” participate. 28bytes (talk) 01:24, 16 December 2024 (UTC)
- Option 2 nobody is obligated to participate in a re-confirmation RfA. If you think they are a waste of time, avoid them. LEPRICAVARK (talk) 01:49, 16 December 2024 (UTC)
- 1 or 3 per Novem Linguae. C F A 02:35, 16 December 2024 (UTC)
- Option 3: Because it is incredibly silly to have situations like we do now of "this guy did something wrong by doing an RfA that policy explicitly allows, oh well, nothing to do but sit on our hands and dissect the process across three venues and counting." Your time is your own. No one is forcibly stealing it from you. At the same time it is equally silly to let the process drag on, for reasons explained in WP:SNOW. Gnomingstuff (talk) 03:42, 16 December 2024 (UTC)
- Update: Option 2 seems to be the consensus and I also would be fine with that. Gnomingstuff (talk) 18:10, 19 December 2024 (UTC)
- Option 3 per Gnoming. I think 2 works, but it is a very long process and for someone to renew their tools, it feels like an unnecessarily long process compared to a normal RfA. Conyo14 (talk) 04:25, 16 December 2024 (UTC)
- As someone who supported both WormTT and Hog Farm's RfAs, option 1 > option 3 >> option 2. At each individual RfA the question is whether or not a specific editor should be an admin, and in both cases I felt that the answer was clearly "yes". However, I agree that RfA is a very intensive process. It requires a lot of time from the community, as others have argued better than I can. I prefer option 1 to option 3 because the existence of the procedure in option 3 implies that it is a good thing to go through 48 hours of RfA to re-request the mop. But anything which saves community time is a good thing. HouseBlaster (talk • he/they) 04:31, 16 December 2024 (UTC)
- I've seen this assertion made multiple times now that it "requires a lot of time from the community", yet nowhere has anybody articulated why this is true. What time is required, given that nobody is required to participate and everybody who does choose to participate can spend as much or as little time assessing the candidate as they wish? How and why does a reconfirmation RFA require any more time from editors (individually or collectively) than a request at BN? Thryduulf (talk) 04:58, 16 December 2024 (UTC)
- I think there are a number of factors and people are summing it up as "time-wasting" or similar:
- BN is designed for this exact scenario. It's also clearly a less contentious process.
- Snow closures are a good example of how we try to avoid wasting community time on unnecessary process and the same reasoning applies here. Misplaced Pages is not a bureaucracy and there's no reason to have a 7-day process when the outcome is a given.
- If former administrators continue to choose re-RFAs over BN, it could set a problematic precedent where future re-adminship candidates feel pressured to go through an RFA and all that entails. I don't want to discourage people already vetted by the community from rejoining the ranks.
- The RFA process is designed to be a thoughtful review of prospective administrators and I'm concerned these kinds of perfunctory RFAs will lead to people taking the process less seriously in the future.
- Daniel Quinlan (talk) 07:31, 16 December 2024 (UTC)
- Because several thousand people have RFA on their watchlist, and thousands more will see the "there's an open RFA" notice on theirs whether they follow it or not. Unlike BN, RFA is a process that depends on community input from a large number of people. In order to even realise that the RFA is not worth their time, they have to:
- Read the opening statement and first few question answers (I just counted, HF's opening and first 5 answers are about 1000 words)
- Think, "oh, they're an ex-admin, I wonder why they're going through RFA, what was their cloud"
- Read through the comments and votes to see if any issues have been brought up (another ~1000 words)
- None have
- Realise your input is not necessary and this could have been done at BN
- This process will be repeated by hundreds of editors over the course of a week. BugGhost 🦗👻 08:07, 16 December 2024 (UTC)
- That they were former admins has always been the first two sentences of their RfA’s statement, sentences which are immediately followed by that they resigned due to personal time commitment issues. You do not have to read the first 1000+ words to figure that out. If the reader wants to see if the candidate was lying in their statement, then they just have a quick skim through the oppose section. None of this should take more than 30 seconds in total. Aaron Liu (talk) 13:15, 16 December 2024 (UTC)
- Not everyone can skim things easily - it personally takes me a while to read sections. I don't know if they're going to bury the lede and say something like "Also I made 10,000 insane redirects and then decided to take a break just before arbcom launched a case" in paragraph 6. Hog Farm's self nom had two paragraphs about disputes and it takes more than 30 seconds to unpick that and determine if that is a "cloud" or not. Even for reconfirmations, it definitely takes more than 30 seconds to determine a conclusion. BugGhost 🦗👻 11:21, 17 December 2024 (UTC)
- They said they resigned due to personal time commitments. That is directly saying they weren’t under a cloud, so I’ll believe them unless someone claims the contrary in the oppose section. If the disputes section contained a cloud, the oppose section would have said so. One chooses to examine such nominations like normal RfAs. Aaron Liu (talk) 18:47, 17 December 2024 (UTC)
- Just to double check, you're saying that whenever you go onto an RFA you expect any reason to oppose to already be listed by someone else, and no thought is required? I am beginning to see how you are able to assess an RFA in under 30 seconds. BugGhost 🦗👻 23:08, 17 December 2024 (UTC)
- Something in their statement would be an incredibly obvious reason. We are talking about the assessment whether to examine and whether the candidate could've used BN. Aaron Liu (talk) 12:52, 18 December 2024 (UTC)
- @Thryduulf let's not confuse "a lot of community time is spent" with "waste of time". Some people have characterized the re-RFAs as a waste of time but that's not the assertion I (and I think a majority of the skeptics) have been making. All RfAs use a lot of community time as hundreds of voters evaluate the candidate. They then choose to support, oppose, be neutral, or not vote at all. While editor time is not perfectly fixed - editors may choose to spend less time on non-Misplaced Pages activities at certain times - neither is it a resource we have in abundance anymore relative to our project. And so I think we, as a community, need to be thoughtful about how we're using that time, especially when that time would otherwise have been spent on other wiki activities. Best, Barkeep49 (talk) 22:49, 16 December 2024 (UTC)
- Absolutely nothing compels anybody to spend any time evaluating an RFA. If you think your wiki time is better spent elsewhere than evaluating an RFA candidate, then spend it elsewhere. That way only those who do think it is a good use of their time will participate and everybody wins. You win by not spending your time on something that you don't think is worth it, those who do participate don't have their time wasted by having to read comments (that contradict explicit policy) about how the RFA is a waste of time. Personally I regard evaluating whether a long-time admin still has the approval of the community to be a very good use of community time, you are free to disagree, but please don't waste my time by forcing me to read comments about how you think I'm wasting my time. Thryduulf (talk) 23:39, 16 December 2024 (UTC)
- I am not saying you or anyone else is wasting time and am surprised you are so fervently insisting I am. Best, Barkeep49 (talk) 03:34, 17 December 2024 (UTC)
- I don't understand how your argument that it is not a good use of community time is any different from arguing that it is a waste of time? Thryduulf (talk) 09:08, 17 December 2024 (UTC)
- Option 2 I don't mind the re-RFAs, but I'd appreciate if we encouraged restoration via BN instead, I just object to making it mandatory. EggRoll97 06:23, 16 December 2024 (UTC)
- Option 2. Banning voluntary re-RfAs would be a step in the wrong direction on admin accountability. Same with SNOW closing. There is no more "wasting of community time" if we let the RfA run for the full seven days, but allowing someone to dig up a scandal on the seventh day is an important part of the RfA process. The only valid criticism I've heard is that folks who do this are arrogant, but banning arrogance, while noble, seems highly impractical. Toadspike 07:24, 16 December 2024 (UTC)
- Option 3, 1, then 2, per HouseBlaster. Also agree with Daniel Quinlan. I think these sorts of RFA's should only be done in exceptional circumstances. Graham87 (talk) 08:46, 16 December 2024 (UTC)
- Option 1 as first preference, option 3 second. RFAs use up a lot of time - hundreds of editors will read the RFA and it takes time to come to a conclusion. When that conclusion is "well that was pointless, my input wasn't needed", it is not a good system. I think transparency and accountability are very good things, and we need more of them for resysoppings, but that should come from improving the normal process (BN) rather than using a different one (RFA). My ideas for improving the BN route to make it more transparent and better at getting community input are outlined over on the idea lab. BugGhost 🦗👻 08:59, 16 December 2024 (UTC)
- Option 2, though I'd be for option 3 too. I'm all for administrators who feel like they want/should go through an RfA to solicit feedback even if they've been given the tools back already. I see multiple people talk about going through BN, but if I had to hazard a guess, it's way less watched than RfA is. However I do feel like watchlist notifications should say something to the effect of "A request for re-adminship feedback is open for discussion" so that people that don't like these could ignore them. ♠JCW555 (talk)♠ 09:13, 16 December 2024 (UTC)
- Option 2 because WP:ADMINISTRATORS is well-established policy. Read WP:ADMINISTRATORS#Restoration of admin tools, which says quite clearly, "Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process." I went back 500 edits to 2017 and the wording was substantially the same back then. So, I simply do not understand why various editors are berating former administrators to the point of accusing them of wasting time and being arrogant for choosing to go through a process which is specifically permitted by policy. It is bewildering to me. Cullen328 (talk) 09:56, 16 December 2024 (UTC)
- Option 2 & 3 I think that there still should be the choice between BN and re-RFA for resysops, but I think that the re-RFA should stay like it is in Option 3, unless it is controversial, at which point it could be extended to the full RFA period. I feel like this would be the best compromise between not "wasting" community time (which I believe is a very overstated, yet understandable, point) and ensuring that the process is based on broad consensus and that our "representatives" are still supported. If I were WTT or Hog, I might choose to make the same decision so as to be respectful of the possibility of changing consensus. JuxtaposedJacob (talk) | :) | he/him | 10:45, 16 December 2024 (UTC)
- Option 2, for lack of a better choice. Banning re-RFAs is not a great idea, and we should not SNOW close a discussion that would give someone immunity from a certain degree of accountability. I've dropped an idea for an option 4 in the discussion section below. Giraffer (talk) 12:08, 16 December 2024 (UTC)
- Option 1 I agree with Graham87 that these sorts of RFAs should only be done in exceptional circumstances, and BN is the best place to ask for tools back. – DreamRimmer (talk) 12:11, 16 December 2024 (UTC)
- Option 2 I don't think prohibition makes sense. It also has weird side effects, e.g. some admins' voluntary recall policies may now be completely void, because they would be unable to follow them even if they wanted to, because policy prohibits them from doing an RFA. (Maybe if they're also 'under a cloud' it'd fit into exemptions, but if an admin's policy is "3 editors on this named list tell me I'm unfit, I resign" then this isn't really a cloud.) Personally, I think Hog Farm's RFA was unwise, as he's textbook uncontroversial. Worm's was a decent RFA; he's also textbook uncontroversial but it happened at a good time. But any editor participating in these discussions to give the "support" does so using their own time. Everyone who feels their time is wasted can choose to ignore the discussion, and instead it'll pass as 10-0-0 instead of 198-2-4. It just doesn't make sense to prohibit someone from seeking a community discussion, though. For almost anything, really. ProcrastinatingReader (talk) 12:33, 16 December 2024 (UTC)
- Option 2 It takes like two seconds to support or ignore an RFA you think is "useless"... can't understand the hullabaloo around them. I stand by what I said on WTT's re-RFA regarding RFAs being about evaluating trustworthiness and accountability. Trustworthy people don't skip the process. —k6ka 🍁 (Talk · Contributions) 15:24, 16 December 2024 (UTC)
- Option 1 - Option 2 is a waste of community time. - Ratnahastin (talk) 15:30, 16 December 2024 (UTC)
- 2 is fine. Strong oppose to 1 and 3. Opposing option 1 because there is nothing wrong with asking for extra community feedback. opposing option 3 because once an RfA has been started, it should follow the standard rules. Note that RfAs are extremely rare and non-contentious RfAs require very little community time (unlike this RfC which seems a waste of community time, but there we are). —Kusma (talk) 16:59, 16 December 2024 (UTC)
- 2, with no opposition to 3. I see nothing wrong with a former administrator getting re-confirmed by the community, and community vetting seems like a good thing overall. If people think it's a waste of time, then just ignore the RfA. Natg 19 (talk) 17:56, 16 December 2024 (UTC)
- 2 Sure, and clarify that should such an RFA be unsuccessful they may only regain through a future rfa. — xaosflux 18:03, 16 December 2024 (UTC)
- Option 2 If contributing to such an RFA is a waste of your time, just don't participate. TheWikiToby (talk) 18:43, 16 December 2024 (UTC)
- No individual is wasting their time participating. Instead the person asking for a re-rfa is using tons of editor time by asking hundreds of people to vet them. Even the choice not to participate requires at least some time to figure out that this is not a new RfA; though at least in the two we've had recently it would require only as long as it takes to get to the RfA - for many a click from the watchlist and then another click into the rfa page - and to read the first couple of sentences of the self-nomination which isn't terribly long all things considered. Best, Barkeep49 (talk) 22:55, 16 December 2024 (UTC)
- I agree with you (I think) that it's a matter of perspective. For me, clicking the RFA link in my watchlist and reading the first paragraph of Hog Farm's nomination (where they explained that they were already a respected admin) took me about 10 seconds. Ten seconds is nothing; in my opinion, this is just a nonissue. But then again, I'm not an admin, checkuser, or an oversighter. Maybe the time to read such a nomination is really wasting their time. I don't know. TheWikiToby (talk) 23:15, 16 December 2024 (UTC)
- I'm an admin and an oversighter (but not a checkuser). None of my time was wasted by either WTT or Hog Farm's nominations. Thryduulf (talk) 23:30, 16 December 2024 (UTC)
- 2. Maintain the status quo. And stop worrying about a trivial non-problem. --Tryptofish (talk) 22:57, 16 December 2024 (UTC)
- 2. This reminds me of banning plastic straws (bear with me). Sure, I suppose in theory, that this is a burden on the community's time (just as straws do end up in landfills/the ocean). However, the amount of community time that is drained is minuscule compared to the amount of community time drained in countless, countless other fora and processes (just like the volume of plastic waste contributed by plastic straws is less than 0.001% of the total plastic waste). When WP becomes an efficient, well oiled machine, then maybe we can talk about saving community time by banning re-RFA's. But this is much ado about nothing, and indeed this plan to save people from themselves, and not allow them to simply decide whether to participate or not, is arguably more damaging than some re-RFAs (just as banning straws convinced some people that "these save-the-planet people are so ridiculous that I'm not going to bother listening to them about anything."). And, in fact, on a separate note, I'd actually love it if more admins just ran a re-RFA whenever they wanted. They would certainly get better feedback than just posting "What do my talk page watchers think?" on their own talk page. Or waiting until they get yelled at on their talk page, AN/ANI, AARV, etc. We say we want admins to respect feedback; does it have to be in a recall petition? --Floquenbeam (talk) 23:44, 16 December 2024 (UTC)
- What meaningful feedback has Hog Farm gotten? "A minority of people think you chose poorly in choosing this process to regain adminship". What are they supposed to do with that? I share your desire for editors to share meaningful feedback with administrators. My own attempt yielded some, though mainly offwiki where I was told I was both too cautious and too impetuous (and despite the seeming contradiction each was valuable in its own way). So yes let's find ways to get meaningful feedback to admins outside of recall or being dragged to ANI. Unfortunately re-RfA seems to be poorly suited to the task and so we can likely find a better way. Best, Barkeep49 (talk) 03:38, 17 December 2024 (UTC)
- Let us all take some comfort in the fact that no one has yet criticized this RfC comment as being a straw man argument. --Tryptofish (talk) 23:58, 18 December 2024 (UTC)
- No hard rule, but we should socially discourage confirmation RfAs. There is a difference between a hard rule and a soft social rule. A hard rule against confirmation RfA's, like option 1, would not do a good job of accounting for edge cases and would thus be ultimately detrimental here. But a soft social rule against them would be beneficial. Unfortunately, that is not one of the options of this RfC. In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers. (Let me be clear: I am not suggesting that is why either Worm or Hog Farm re-upped, I'm just trying to create a general purpose rule here.) That takes some introspection and humility to ask yourself: is it worth me inviting two or three hundred people to spend part of their lives to comment on me as a person? A lot of people have thrown around editor time in their reasonings. Obviously, broad generalizations about it aren't convincing anyone. So let me just share my own experience. I saw the watchlist notice open that a new RfA was being run. I reacted with some excitement, because I always like seeing new admins. When I got to the page and saw Hog Farm's name, I immediately thought "isn't he already an admin?" I then assumed, ah, it's just the classic RfA reaction at seeing a qualified candidate, so I'll probably support him since I already think he's an admin. But then as I started to do my due diligence and read, I saw that he really, truly, already had been an admin. At that point, my previous excitement turned to a certain unease. I had voted yes for Worm's confirmation RfA, but here was another...and I realized that my blind support for Worm might have been the start of an entirely new process. I then thought "bet there's an RfC going about this," and came here. I then spent a while polishing up my essay on editor time, before taking time to write this message. All in all, I probably spent a good hour doing this. Previously, I'd just been clicking the random article button and gnoming. So, the longwinded moral: yeah, this did eat up a lot of my editor time that could have and was being spent doing something else. And I'd do it again! It was important to do my research and to comment here. But in the future...maybe I won't react quite as excitedly to seeing that RfA notice. Maybe I'll feel a little pang of dread...wondering if it's going to be a confirmation RfA. We can't pretend that confirmation RfA's are costless, and that we don't lose anything even if editors just ignore them. When run, it should be because they are necessary. CaptainEek ⚓ 03:29, 17 December 2024 (UTC)
- And for what it's worth, support Option 3 because I'm generally a fan of putting more tools in people's toolboxes. CaptainEek ⚓ 03:36, 17 December 2024 (UTC)
- "In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers." Asking the community whether you still have their trust to be an administrator, which is what a reconfirmation RFA is, is a good reason. I expect getting a near-unanimous "yes" is good for one's ego, but that's just a (nice) side-effect of the far more important benefits to the entire community: a trusted administrator.
- The time you claim is being eaten up unnecessarily by reconfirmation RFAs was actually taken up by you choosing to spend your time writing an essay about using time for things you don't approve of and then hunting out an RFC in which you wrote another short essay about using time on things you don't approve of. Absolutely none of that is a necessary consequence of reconfirmation RFAs - indeed the response consistent with your stated goals would have been to read the first two sentences of Hog Farm's RFA and then closed the tab and returned to whatever else it was you were doing. Thryduulf (talk) 09:16, 17 December 2024 (UTC)
- WTT's and Hog Farm's RFAs would have been completely uncontentious, something I hope for at RfA and certainly the opposite of what I "dread" at RfA, if it were not for the people who attack the very concept of standing for RfA again despite policy being crystal clear that it is absolutely fine. I don't see how any blame for this situation can be put on WTT or HF. We can't pretend that dismissing uncontentious reconfirmation RfAs is costless; discouraging them removes one of the few remaining potentially wholesome bits about the process. —Kusma (talk) 09:53, 17 December 2024 (UTC)
- @CaptainEek Would you find it better if Watchlist notices and similar said "(re?)confirmation RFA" instead of "RFA"? Say for all voluntary RFAs from an existing admin or someone who could have used BN?
- As a different point, I would be quite against any social discouraging if we're not making a hard rule as such. Social discouraging is what got us the opposes at WTT/Hog Farm's RFAs, which I found quite distasteful and badgering. If people disagree with a process, they should change it. But if the process remains the same, I think it's important to not enable RFA's toxicity by encouraging others to namecall or re-argue the process in each RRFA. It's a short road from social discouragement to toxicity, unfortunately. Soni (talk) 18:41, 19 December 2024 (UTC)
- Yes I think the watchlist notice should specify what kind of RfA, especially with the introduction of recall. CaptainEek ⚓ 16:49, 23 December 2024 (UTC)
- Option 1. Will prevent the unnecessary drama trend we have been seeing recently. – Ammarpad (talk) 07:18, 17 December 2024 (UTC)
- Option 2 if people think there's a waste of community time, don't spend your time voting or discussing. Or add "reconfirmation" or similar to the watchlist notice. ~~ AirshipJungleman29 (talk) 15:08, 17 December 2024 (UTC)
- Option 3 (which I think is a subset of option 2, so I'm okay with the status quo, but I want to endorse giving 'crats the option to SNOW). While they do come under scrutiny from time to time for the extensive discussions in the "maybe" zone following RfAs, this should be taken as an indication that they are unlikely to do something like close it as SNOW in the event there are real and substantial concerns being raised. This is an okay tool to give the 'crats. As far as I can tell, no one has ever accused them of moving too quickly in this direction (not criticism; love you all, keep up the good work). Bobby Cohn (talk) 17:26, 17 December 2024 (UTC)
- Option 3 or Option 2. Further, if Option 2 passes, I expect it also ends all the bickering about lost community time. A consensus explicitly in favour of "This is allowed" should also be a consensus to discourage relitigation of this RFC. Soni (talk) 17:35, 17 December 2024 (UTC)
- Option 2: Admins who do not exude entitlement are to be praised. Those who criticize this humility should have a look in the mirror before accusing those who ask for reanointment from the community of "arrogance". I agree that it wouldn't be a bad idea to mention in parentheses that the RFA is a reconfirmation (watchlist) and wouldn't see any problem with crats snow-closing after, say, 96 hours. -- SashiRolls 18:48, 17 December 2024 (UTC)
- I disagree that BN shouldn't be the normal route. RfA is already as hard and soul-crushing as it is. Aaron Liu (talk) 20:45, 17 December 2024 (UTC)
- Who are you disagreeing with? This RfC is about voluntary RRfA. -- SashiRolls 20:59, 17 December 2024 (UTC)
- I know. I see a sizable number of commenters here starting to say that voluntary re-RfAs should be encouraged, and your first sentence can be easily read as implying that admins who use the BN route exude entitlement. I disagree with that (see my reply to Thryduulf below). Aaron Liu (talk) 12:56, 18 December 2024 (UTC)
- One way to improve the reputation of RFA is for there to be more RFAs that are not terrible, such as reconfirmations of admins who are doing/have done a good job who sail through with many positive comments. There is no proposal to make RFA mandatory in circumstances it currently isn't, only to reaffirm that those who voluntarily choose RFA are entitled to do so. Thryduulf (talk) 21:06, 17 December 2024 (UTC)
- I know it's not a proposal, but there's enough people talking about this so far that it could become a proposal.
There's nearly nothing in between that could've lost the trust of the community. I'm sure there are many who do not want to be pressured into this without good reason. Aaron Liu (talk) 12:57, 18 December 2024 (UTC)
- Absolutely nobody is proposing, suggesting or hinting here that reconfirmation RFAs should become mandatory - other than comments from a few people who oppose the idea of people voluntarily choosing to do something policy explicitly allows them to choose to do. The best way to avoid people being pressured into being accused of arrogance for seeking reconfirmation of their status from the community is to sanction those people who accuse people of arrogance in such circumstances as such comments are in flagrant breach of AGF and NPA. Thryduulf (talk) 14:56, 18 December 2024 (UTC)
- Yes, I’m saying that they should not become preferred. There should be no social pressure to do RfA instead of BN, only pressure intrinsic to the candidate. Aaron Liu (talk) 15:37, 18 December 2024 (UTC)
- Whether they should become preferred in any situation forms no part of this proposal in any way shape or form - this seeks only to reaffirm that they are permitted. A separate suggestion, completely independent of this one, is to encourage (explicitly not mandate) them in some (but explicitly not all) situations. All discussions on this topic would benefit if people stopped misrepresenting the policies and proposals - especially when the falsehoods have been explicitly called out. Thryduulf (talk) 15:49, 18 December 2024 (UTC)
- I am talking and worrying over that separate proposal many here are suggesting. I don’t intend to oppose Option 2, and sorry if I came off that way. Aaron Liu (talk) 16:29, 18 December 2024 (UTC)
- Option 2. In fact, I'm inclined to encourage an RRfA over BN, because nothing requires editors to participate in an RRfA, but the resulting discussion is better for reaffirming community consensus for the former admin or otherwise providing helpful feedback. --Pinchme123 (talk) 21:45, 17 December 2024 (UTC)
- Option 2 WP:RFA has said "Former administrators may seek reinstatement of their privileges through RfA..." for over ten years and this is not a problem. I liked the opportunity to be consulted in the current RfA and don't consider this a waste of time. Andrew🐉(talk) 22:14, 17 December 2024 (UTC)
- Option 2. People who think it’s not a good use of their time always have the option to scroll past. Innisfree987 (talk) 01:41, 18 December 2024 (UTC)
- 2 - If an administrator gives up sysop access because they plan to be inactive for a while and want to minimize the attack surface of Misplaced Pages, they should be able to ask for permissions back the quickest way possible. If an administrator resigns because they do not intend to do the job anymore, and later changes their mind, they should request a community discussion. The right course of action depends on the situation. Jehochman 14:00, 18 December 2024 (UTC)
- Option 1. I've watched a lot of RFAs and re-RFAs over the years. There's a darn good reason why the community developed the "go to BN" option: saves time, is straightforward, and if there are issues that point to a re-RFA, they're quickly surfaced. People who refuse to take the community-developed process of going to BN first are basically telling the community that they need the community's full attention on their quest to re-admin. Yes, there are those who may be directed to re-RFA by the bureaucrats, in which case, they have followed the community's carefully crafted process, and their re-RFA should be evaluated from that perspective. Risker (talk) 02:34, 19 December 2024 (UTC)
- Option 2. If people want to choose to go through an RFA, who are we to stop them? Stifle (talk) 10:25, 19 December 2024 (UTC)
- Option 2 (status quo/no changes) per meh. This is bureaucratic rulemongering at its finest. Every time RFA reform comes up some editors want admins to be required to periodically reconfirm, then when some admins decide to reconfirm voluntarily, suddenly that's seen as a bad thing. The correct thing to do here is nothing. If you don't like voluntary reconfirmation RFAs, you are not required to participate in them. Ivanvector (/Edits) 19:34, 19 December 2024 (UTC)
- Option 2 I would probably counsel just going to BN most of the time; however, there are exceptions and edge cases. To this point these RfAs have been few in number, so the costs incurred are relatively minor. If the number becomes large then it might be worth revisiting, but I don't see that as likely. Some people will probably impose social costs on those who start them by opposing these RfAs, with the usual result, but that doesn't really change the overall analysis. Perhaps it would be better if our idiosyncratic internal logic didn't produce such outcomes, but that's a separate issue and frankly not really worth fighting over either. There are probably some meta issues here I'm unaware of; it's been a long time since I've had my finger on the community pulse, so to speak, but they tend to matter far less than people think they do. 184.152.68.190 (talk) 02:28, 20 December 2024 (UTC)
- Option 1, per WP:POINT, WP:NOT#SOCIALNETWORK, WP:NOT#BUREAUCRACY, WP:NOTABOUTYOU, and related principles. We all have far better things to do than read through and argue in/about a totally unnecessary RfA invoked as a "Show me some love!" abuse of process and waste of community time and productivity. I could live with option 3, if option 1 doesn't fly (i.e. shut these silly things down as quickly as possible). But option 2 is just out of the question. — SMcCandlish ☏ ¢ 😼 04:28, 22 December 2024 (UTC)
- Except none of the re-RFAs complained about have been "RfA invoked as a "Show me some love!" abuse of process", you're arguing against a strawman. Thryduulf (talk) 11:41, 22 December 2024 (UTC)
- It's entirely a matter of opinion and perception, or A) this RfC wouldn't exist, and B) various of your fellow admins like TonyBallioni would not have come to the same conclusion I have. Whether the underlying intent (which no one can determine, lacking as we do any magical mind-reading powers) is solely egotistical is ultimately irrelevant. The actual effect (what matters) of doing this whether for attention, or because you've somehow confused yourself into think it needs to be done, is precisely the same: a showy waste of community volunteers' time with no result other than a bunch of attention being drawn to a particular editor and their deeds, without any actual need for the community to engage in a lengthy formal process to re-examine them. — SMcCandlish ☏ ¢ 😼 05:49, 23 December 2024 (UTC)
- "or because you've somehow confused yourself into think it needs to be done" I and many others here agree and stand behind the very reasoning that has "confused" such candidates, at least for WTT. Aaron Liu (talk) 15:37, 23 December 2024 (UTC)
- Option 2. I see no legitimate reason why we should be changing the status quo. Sure, some former admins might find it easier to go through BN, and it might save community time, and most former admins already choose the easier option. However, if a candidate last ran for adminship several years ago, or if issues were raised during their tenure as admin, then it may be helpful for them to ask for community feedback, anyway. There is no "wasted" community time in such a case. I really don't get the claims that this violates WP:POINT, because it really doesn't apply when a former admin last ran for adminship 10 or 20 years ago or wants to know if they still have community trust. On the other hand, if an editor thinks a re-RFA is a waste of community time, they can simply choose not to participate in that RFA. Opposing individual candidates' re-RFAs based solely on opposition to re-RFAs in general is a violation of WP:POINT. – Epicgenius (talk) 14:46, 22 December 2024 (UTC)
- But this isn't the status quo? We've never done a re-RfA before now. The question is whether this previously unconsidered process, which appeared as an emergent behavior, is a feature or a bug. CaptainEek ⚓ 23:01, 22 December 2024 (UTC)
- There have been lots of re-RFAs, historically. They were more common in the 2000s. Evercat in 2003 is the earliest I can find, back before the re-sysopping system had been worked out fully. Croat Canuck back in 2007 was snow-closed after one day, because the nominator and applicant didn't know that they could have gone to the bureaucrats' noticeboard. For more modern examples, HJ Mitchell (2011) is relatively similar to the recent re-RFAs in the sense that the admin resigned uncontroversially but chose to re-RFA before getting the tools back. Immediately following and inspired by HJ Mitchell's, there was the slightly more controversial SarekOfVulcan. That ended successful re-RFAs until 2019's Floquenbeam, which crat-chatted. Since then, there have been none that I remember. There have been several re-RFAs from admins who were de-sysopped or at serious risk of de-sysopping, and a few interesting edge cases such as the potentially optional yet no-consensus SarekOfVulcan 3 in 2014 and the Rich Farmbrough case in 2015, but those are very different from what we're talking about today. GreenLipstickLesbian (talk) 00:01, 23 December 2024 (UTC)
- To add on to that, Misplaced Pages:Requests for adminship/Harrias 2 was technically a reconfirmation RFA, which in a sense can be treated as a re-RFA. My point is, there is some precedent for re-RFAs, but the current guidelines are ambiguous as to when re-RFAs are or aren't allowed. – Epicgenius (talk) 16:34, 23 December 2024 (UTC)
- Well thank you both, I've learned something new today. It turns out I was working on a false assumption. It has just been so long since a re-RfA that I assumed it was a truly new phenomenon, especially since there were two in short succession. I still can't say I'm thrilled by the process and think it should be used sparingly, but perhaps I was a bit over concerned. CaptainEek ⚓ 16:47, 23 December 2024 (UTC)
- Option 2 or 3 per Gnoming and CaptainEek. Such RfAs only require at most 30 seconds for one to decide whether or not to spend their time on examination. Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Voluntary reconfirmation RfAs are socially discouraged, so there is usually a very good reason for someone to go back there, such as accountability for past statements in the case of WTT or large disputes during adminship in the case of Hog Farm. I don't think we should outright deny these, and there is no disruption incurred if we don't. Aaron Liu (talk) 15:44, 23 December 2024 (UTC)
- Option 2 but for largely the reasons presented by CaptainEek. KevinL (aka L235 · t · c) 21:58, 23 December 2024 (UTC)
- Option 2 (fine with better labeling) These don't seem harmful to me and, if I don't have time, I'll skip one and trust the judgment of my fellow editors. No objection to better labeling them though, as discussed above. RevelationDirect (talk) 22:36, 23 December 2024 (UTC)
- Option 1 because it's just a waste of time to go through and !vote on candidates who just want the mop restored when he or she or they could get it restored at BN with no problems. But I can also see option 2 being good for a former mod not in good standing. Therapyisgood (talk) 23:05, 23 December 2024 (UTC)
- If you think it is a waste of time to !vote on a candidate, just don't vote on that candidate and none of your time has been wasted. Thryduulf (talk) 23:28, 23 December 2024 (UTC)
- Option 2 per QoH (or me? who knows...) Kline • talk • contribs 04:24, 27 December 2024 (UTC)
- Option 2 Just because someone may be entitled to get the bit back doesn't mean they necessarily should. Look at my RFA3. I did not resign under a cloud, so I could have gotten the bit back by request. However, the RFA established that I did not have the community support at that point, so it was a good thing that I chose that path. I don't particularly support option 3, but I could deal with it. --SarekOfVulcan (talk) 16:05, 27 December 2024 (UTC)
- Option 1 Asking hundreds of people to vet a candidate who has already passed a RfA and is eligible to get the tools back at BN is a waste of the community's time. -- Pawnkingthree (talk) 16:21, 27 December 2024 (UTC)
- Option 2 Abolishing RFA in favour of BN may need to be considered, but I am unconvinced by arguments about RFA being a waste of time. Hawkeye7 (discuss) 19:21, 27 December 2024 (UTC)
- Option 2 I really don't think there's a problem that needs to be fixed here. I am grateful at least a couple administrators have asked for the support of the community recently. SportingFlyer T·C 00:12, 29 December 2024 (UTC)
- Option 2. Keep the status quo of "any editor is free to re-request the tools through the requests for adminship process". Voluntary RfAs are rare enough not to be a problem; it's not as though we are overburdened with RfAs. And it's my time to waste. --Malcolmxl5 (talk) 17:58, 7 January 2025 (UTC)
- Option 2 or Option 3. These are unlikely to happen anyway; it's not like they're going to become a trend. I'm already wasting my time here instead of other more important activities anyway, so what's a little more time spent giving an easy support? fanfanboy (blocktalk) 16:39, 10 January 2025 (UTC)
- Option 1 Agree with Daniel Quinlan that for the problematic editors eligible for re-sysop at BN despite unpopularity, we should rely on our new process of admin recall, rather than pre-emptive RRFAs. I'll add the novel argument that when goliaths like Hog Farm unnecessarily showcase their achievements at RFA, it scares off nonetheless qualified candidates. ViridianPenguin 🐧 ( 💬 ) 17:39, 14 January 2025 (UTC)
- Option 2 per Gnoming /CaptainEeek Bluethricecreamman (talk) 20:04, 14 January 2025 (UTC)
- Option 2 or Option 3 - if you regard a re-RfA as a waste of your time, just don't waste it by participating; it's not mandatory. Bastun 12:13, 15 January 2025 (UTC)
=== Discussion ===
- @Voorts: If option 2 gets consensus how would this RfC change the wording "Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process." Or is this an attempt to see if that option no longer has consensus? If so why wasn't alternative wording proposed? As I noted above this feels premature in multiple ways. Best, Barkeep49 (talk) 21:43, 15 December 2024 (UTC)
- That is not actually true. ArbCom can (and has) forbidden some editors from re-requesting the tools through RFA. Hawkeye7 (discuss) 19:21, 27 December 2024 (UTC)
- I've re-opened this per a request on my talk page. If other editors think this is premature, they can !vote accordingly and an uninvolved closer can determine if there's consensus for an early close in deference to the VPI discussion. voorts (talk/contributions) 21:53, 15 December 2024 (UTC)
- The discussion at VPI, which I have replied on, seems to me to be different enough from this discussion that both can run concurrently. That is, however, my opinion as a mere editor. — Jkudlick ⚓ (talk) 22:01, 15 December 2024 (UTC)
- @Voorts, can you please reword the RfC to make it clear that Option 2 is the current consensus version? It does not need to be clarified – it already says precisely what you propose. – bradv 22:02, 15 December 2024 (UTC)
- Question: May someone clarify why many view such confirmation RfAs as a waste of community time? No editor is obligated to take up their time and participate. If there's nothing to discuss, then there's no friction or dis-cussing, and the RfA smooth-sails; if a problem is identified, then there was a good reason to go to RfA. I'm sure I'm missing something here. Aaron Liu (talk) 22:35, 15 December 2024 (UTC)
- The intent of RfA is to provide a comprehensive review of a candidate for adminship, to make sure that they meet the community's standards. Is that happening with vanity re-RfAs? Absolutely not, because these people don't need that level of vetting. I wouldn't consider a week-long, publicly advertised back-patting to be a productive use of volunteer time. -- Ajraddatz (talk) 23:33, 15 December 2024 (UTC)
- But no volunteer is obligated to pat such candidates on the back. Aaron Liu (talk) 00:33, 16 December 2024 (UTC)
- Sure, but that logic could be used to justify any time sink. We're all volunteers and nobody is forced to do anything here, but that doesn't mean that we should promote (or stay silent with our criticism of, I suppose) things that we feel don't serve a useful purpose. I don't think this is a huge deal myself, but we've got two in a short period of time and I'd prefer to do a bit of push back now before they get more common. -- Ajraddatz (talk) 01:52, 16 December 2024 (UTC)
- Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Aaron Liu (talk) 02:31, 16 December 2024 (UTC)
- Except someone who has no need for advanced tools and is not going to use them in any useful fashion would then skate through with nary a word said about their unsuitability, regardless of the foregone conclusion. The point of RFA is not to rubber-stamp. Unless there is some actual issue or genuine concern they might not get their tools back, they should just re-request them at BN and stop wasting people's time with pointless non-process wonkery. Only in death does duty end (talk) 09:05, 16 December 2024 (UTC)
- I’m confused. Adminship requires continued use of the tools. If you think they’re suitable for BN, I don’t see how doing an RfA suddenly makes them unsuitable. If you have concerns, raise them. Aaron Liu (talk) 13:02, 16 December 2024 (UTC)
- I don't think the suggested problem (which I acknowledge not everyone thinks is a problem) is resolved by these options. Admins can still run a re-confirmation RfA after regaining administrative privileges, or even initiate a recall petition. I think, as discussed on Barkeep49's talk page, we want to encourage former admins who are unsure if they continue to be trusted by the community at a sufficient level to explore lower cost ways of determining this. isaacl (talk) 00:32, 16 December 2024 (UTC)
- Regarding option 3, establishing a consensus view takes patience. The intent of having a reconfirmation request for administrative privileges is counteracted by closing it swiftly. It provides incentive for rapid voting that may not provide the desired considered feedback. isaacl (talk) 17:44, 17 December 2024 (UTC)
- In re the idea that RfAs use up a lot of community time: I first started editing Misplaced Pages in 2014. There were 62 RfAs that year, which was a historic low. Even counting all of the AElect candidates as separate RfAs, including those withdrawn before voting began, we're still up to only 53 in 2024 – counting only traditional RfAs it's only 18, which is the second lowest number ever. By my count we've had 8 resysop requests at BN in 2024; even if all of those went to RfA, I don't see how that would overwhelm the community. That would still leave us on 26 traditional RfAs per year, or (assuming all of them run the full week) one every other week. Caeciliusinhorto-public (talk) 10:26, 16 December 2024 (UTC)
- What about an option 4 encouraging eligible candidates to go through BN? At the end of the Procedure section, add something like "Eligible users are encouraged to use this method rather than running a new request for adminship." The current wording makes re-RfAing sound like a plausible alternative to a BN request, when in actual fact the former rarely happens and always generates criticism. Giraffer (talk) 12:08, 16 December 2024 (UTC)
- Discouraging RFAs is the second last thing we should be doing (after prohibiting them); rather, per my comments here and in the VPI discussion, we should be encouraging former administrators to demonstrate that they still have the approval of the community. Thryduulf (talk) 12:16, 16 December 2024 (UTC)
- I think this is a good idea if people do decide to go with option 2, if only to stave off any further mixed messages that people are doing something wrong or rude or time-wasting or whatever by doing a second RfA, when it's explicitly mentioned as a valid thing for them to do. Gnomingstuff (talk) 15:04, 16 December 2024 (UTC)
- If RFA is explicitly a valid thing for people to do (which it is, and is being reaffirmed by the growing consensus for option 2) then we don't need to (and shouldn't) discourage people from using that option. The mixed messages can be staved off by people simply not making comments that explicitly contradict policy. Thryduulf (talk) 15:30, 16 December 2024 (UTC)
- Also a solid option, the question is whether people will actually do it. Gnomingstuff (talk) 22:55, 16 December 2024 (UTC)
- The simplest way would be to just quickly hat/remove all such comments. Pretty soon people will stop making them. Thryduulf (talk) 23:20, 16 December 2024 (UTC)
- This is not new. We've had sporadic "vanity" RfAs since the early days of the process. I don't believe they're particularly harmful, and think that it unlikely that we will begin to see so many of them that they pose a problem. As such I don't think this policy proposal solves any problem we actually have. UninvitedCompany 21:56, 16 December 2024 (UTC)
- This apparent negative feeling evoked at an RFA for a former sysop everyone agrees is fully qualified and trusted certainly will leave a bad taste in the mouths of other former admins who might consider a reconfirmation RFA without first visiting BN. This comes in the wake of Worm That Turned's similar rerun. BusterD (talk) 23:29, 16 December 2024 (UTC)
- Nobody should ever be discouraged from seeking community consensus for significant changes. Adminship is a significant change. Thryduulf (talk) 23:32, 16 December 2024 (UTC)
- No argument from me. I was a big Hog Farm backer way back when he was merely one of Misplaced Pages's best content contributors. BusterD (talk) 12:10, 17 December 2024 (UTC)
- All these mentions of editor time make me have to mention The Grand Unified Theory of Editor Time (TLDR: our understanding of how editor time works is dreadfully incomplete). CaptainEek ⚓ 02:44, 17 December 2024 (UTC)
- I went looking for @Tamzin's comment because I know they had hung up the tools and came back, and I was interested in their perspective. But they've given me a different epiphany. I suddenly realize why people are doing confirmation RfAs: it's because of RECALL, and the one year immunity a successful RfA gives you. Maybe everyone else already figured that one out and is thinking "well duh Eek," but I guess I hadn't :) I'm not exactly sure what to do with that epiphany, besides note the emergent behavior that policy change can create. We managed to generate an entirely new process without writing a single word about it, and that's honestly impressive :P CaptainEek ⚓ 18:18, 17 December 2024 (UTC)
- Worm That Turned followed through on a pledge he made in January 2024, before the 2024 review of the request for adminship process began. I don't think a pattern can be extrapolated from a sample size of one (or even two). That being said, it's probably a good thing if admins occasionally take stock of whether or not they continue to hold the trust of the community. As I previously commented, it would be great if these admins would use a lower cost way of sampling the community's opinion. isaacl (talk) 18:31, 17 December 2024 (UTC)
- @CaptainEek: You are correct that a year's "immunity" results from a successful RRFA, but I see no evidence that this has been the reason for the RRFAs. Regards, Newyorkbrad (talk) 00:14, 22 December 2024 (UTC)
- If people decide to go through a community vote to get a one year immunity from a process that only might lead to a community vote which would then have a lower threshold than the one they decide to go through, and also give a year's immunity, then good for them. CMD (talk) 01:05, 22 December 2024 (UTC)
- @CaptainEek I'm mildly bothered by this comment, mildly because I assume it's lighthearted and non-serious. But just in case anyone does feel this way - I was very clear about my reasons for RRFA, I've written a lot about it, anyone is welcome to use my personal recall process without prejudice, and just to be super clear - I waive my "1 year immunity" - if someone wants to start a petition in the next year, do not use my RRfA as a reason not to. I'll update my userpage accordingly. I can't speak for Hog Farm, but his reasoning seems similar to mine, and immunity isn't it. Worm(talk) 10:28, 23 December 2024 (UTC)
- @Worm That Turned my quickly written comment was perhaps not as clear as it could have been :) I'm sorry, I didn't mean to suggest that y'all had run for dubious reasons. As I said in my !vote,
Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here
. I guess what I really meant was that the reason that we're having this somewhat spirited conversation seems to be the sense that re-RfA could provide a protection from recall. If not for recall and the one year immunity period, I doubt we'd have cared so much as to suddenly run two discussions about this. CaptainEek ⚓ 16:59, 23 December 2024 (UTC)
- I don't agree. No one else has raised a concern about someone seeking a one-year respite from a recall petition. Personally, I think essentially self-initiating the recall process doesn't really fit the profile of someone who wants to avoid the recall process. (I could invent some nefarious hypothetical situation, but since opening an arbitration case is still a possibility, I don't think it would work out as planned.) isaacl (talk) 05:19, 24 December 2024 (UTC)
- I really don't think this is the reason behind WTT's and HF's reconfirmation RFA's. I don't think their RFA's had much utility and could have been avoided, but I don't doubt for a second that their motivations were anything other than trying to provide transparency and accountability for the community. BugGhost 🦗👻 12:04, 23 December 2024 (UTC)
- I don't really care enough about reconf RFAs to think they should be restricted, but what about a lighter ORCP-like process (maybe even in the same place) where fewer editors can indicate, "yeah OK, there aren't really any concerns here, it would probably save a bit of time if you just asked at BN". Alpha3031 (t • c) 12:40, 19 December 2024 (UTC)
- Can someone accurately describe for me what the status quo is? I reread this RfC twice now and am having a hard time figuring out what the current state of affairs is, and how the proposed alternatives will change them. Duly signed, ⛵ WaltClipper -(talk) 14:42, 13 January 2025 (UTC)
- Option 2 is the status quo. The goal of the RFC is to see if the community wants to prohibit reconfirmation RFAs (option 1). The idea is that reconfirmation RFAs take up a lot more community time than a BN request so are unnecessary. There were 2 reconfirmation RFAs recently after a long dry spell. –Novem Linguae (talk) 20:49, 13 January 2025 (UTC)
- The status quo, documented at Misplaced Pages:Administrators#Restoration of admin tools, is that admins who resigned without being under controversy can seek readminship through either BN (where it's usually given at the discretion of an arbitrary bureaucrat according to the section I linked) or RfA (where all normal RfA procedures apply, and you see a bunch of people saying "the candidate's wasting the community's time and could've uncontroversially gotten adminship back at BN instead"). Aaron Liu (talk) 12:27, 14 January 2025 (UTC)
== Guideline against use of AI images in BLPs and medical articles? ==
I have recently seen AI-generated images being added to illustrate both BLPs (e.g. Laurence Boccolini, now removed) and medical articles (e.g. Legionella#Mechanism). While we don't have any clear-cut policy or guideline about these yet, they appear to be problematic. Illustrating a living person with an AI-generated image might misinform readers as to what that person actually looks like, while using AI in medical diagrams can lead to anatomical inaccuracies (such as the lung structure in the second image, where the pleura becomes a bronchiole twisting over the primary bronchi), or even medical misinformation. While a guideline against AI-generated images in general might be more debatable, do we at least have a consensus for a guideline against these two specific use cases?
To clarify, I am not including potentially relevant AI-generated images that only happen to include a living person (such as in Springfield pet-eating hoax), but exclusively those used to illustrate a living person in a WP:BLP context. Chaotic Enby (talk · contribs) 12:11, 30 December 2024 (UTC)
- What about any biographies, including dead people? The lead image shouldn't be AI-generated for any biography. - Sebbog13 (talk) 12:17, 30 December 2024 (UTC)
- Same with animals, organisms etc. - Sebbog13 (talk) 12:20, 30 December 2024 (UTC)
- I personally am strongly against using AI in biographies and medical articles - as you highlighted above, AI is absolutely not reliable in generating accurate imagery and may contribute to medical or general misinformation. I would 100% support a proposal banning AI imagery from these kinds of articles - and a recommendation to not use such imagery other than in specific scenarios. jolielover♥talk 12:28, 30 December 2024 (UTC)
- I'd prefer a guideline prohibiting the use of AI images full stop. There are too many potential issues with accuracy, honesty, copyright, etc. Has this already been proposed or discussed somewhere? – Joe (talk) 12:38, 30 December 2024 (UTC)
- There hasn't been a full discussion yet, and we have a list of uses at Misplaced Pages:WikiProject AI Cleanup/AI images in non-AI contexts, but it could be good to deal with clear-cut cases like this (which are already a problem) first, as the wider discussion is less certain to reach the same level of consensus. Chaotic Enby (talk · contribs) 12:44, 30 December 2024 (UTC)
- Discussions are going on at Wikipedia_talk:Biographies_of_living_persons#Proposed_addition_to_BLP_guidelines and somewhat at Wikipedia_talk:No_original_research#Editor-created_images_based_on_text_descriptions. I recommend workshopping an RfC question (or questions) then starting an RfC. Some1 (talk) 13:03, 30 December 2024 (UTC)
- Oh, didn't catch the previous discussions! I'll take a look at them, thanks! Chaotic Enby (talk · contribs) 14:45, 30 December 2024 (UTC)
- There is one very specific exception I would put to a very sensible blanket prohibition on using AI images to illustrate people, especially BLPs. That is where the person themselves is known to use that image, which I have encountered in Simon Ekpa. CMD (talk) 15:00, 30 December 2024 (UTC)
- While the Ekpa portrait is just an upscale (and I'm not sure what positive value that has for us over its source; upscaling does not add accuracy, nor is it an artistic interpretation meant to reveal something about the source), this would be hard to translate to the general case. Many AI portraits would have copyright concerns, not just from the individual (who may have announced some appropriate release for it), but due to the fact that AI portraits can lean heavily on uncredited individual sources. --Nat Gertler (talk) 16:04, 30 December 2024 (UTC)
- For the purposes of discussing whether to allow AI images at all, we should always assume that, for the purposes of (potential) policies and guidelines, there exist AI images we can legally use to illustrate every topic. We cannot use those that are not legal (including, but not limited to, copyright violations) so they are irrelevant. An image generator trained exclusively on public domain and cc0 images (and any other licenses that explicitly allow derivative works without requiring attribution) would not be subject to any copyright restrictions (other than possibly by the prompter and/or generator's license terms, which are both easy to determine). Similarly we should not base policy on the current state of the technology, but assume that the quality of its output will improve to the point it is equal to that of a skilled human artist. Thryduulf (talk) 17:45, 30 December 2024 (UTC)
- The issue is, either there are public domain/CC0 images of the person (in which case they can be used directly) or there aren't, in which case the AI is making up how a person looks. Chaotic Enby (talk · contribs) 20:00, 30 December 2024 (UTC)
- We tend to use art representations either where no photographs are available (in which case, AI will also not have access to photographs) or where what we are showing is an artist's insight on how this person is perceived, which is not something that AI can give us. In any case, we don't have to build policy now around some theoretical AI in the future; we can deal with the current reality, and policy can be adjusted if things change in the future. And even that theoretical AI does make it more difficult to detect copyvio -- Nat Gertler (talk) 20:54, 30 December 2024 (UTC)
- I wouldn't call it an upscale given that whatever was done appears to have removed detail, but we use that image specifically because it is the edited image which was sent to VRT. CMD (talk) 10:15, 31 December 2024 (UTC)
- Is there any clarification on using purely AI-generated images vs. using AI to edit or alter images? AI tools have been implemented in a lot of photo editing software, such as to identify objects and remove them, or generate missing content. The generative expand feature would appear to be unreliable (and it is), but I use it to fill in gaps of cloudless sky produced from stitching together photos for a panorama (I don't use it if there are clouds, or for starry skies, as it produces non-existent stars or unrealistic clouds). Photos of Japan (talk) 18:18, 30 December 2024 (UTC)
- Yes, my proposal is only about AI-generated images, not AI-altered ones. That could in fact be a useful distinction to make if we want to workshop a RfC on the matter. Chaotic Enby (talk · contribs) 20:04, 30 December 2024 (UTC)
- I'm not sure if we need a clear cut policy or guideline against them... I think we treat them the same way as we would treat an editor's kitchen table sketch of the same figure. Horse Eye's Back (talk) 18:40, 30 December 2024 (UTC)
- For those wanting to ban AI images full stop, well, you are too late. Most professional image editing software, including the software in one's smartphone as well as desktop, uses AI somewhere. Noise reduction software uses AI to figure out what might be noise and what might be texture. Sharpening software uses AI to figure out what should be smooth and what might have a sharp detail it can invent. For example, a bird photo not sharp enough to capture feather detail will have feather texture imagined onto it. Same for hair. Or grass. Any image that has been cleaned up to remove litter or dust or spots will have the cleaned area AI generated based on its surroundings. The sky might be extended with AI. These examples are a bit different from a 100% imagined image created from a prompt. But probably not in a way that is useful as a rule.
- I think we should treat AI generated images the same as any user-generated image. It might be a great diagram or it might be terrible. Remove it from the article if the latter, not because someone used AI. If the image claims to photographically represent something, we may judge whether the creator has manipulated the image too much to be acceptable. For example, using AI to remove a person in the background of an image taken of the BLP subject might be perfectly fine. People did that with traditional Photoshop/Lightroom techniques for years. Using AI to generate what claims to be a photo of a notable person is on dodgy ground wrt copyright. -- Colin° 19:12, 30 December 2024 (UTC)
- I'm talking about the case of using AI to generate a depiction of a living person, not using AI to alter details in the background. That is why I only talk about AI-generated images, not AI-altered images. Chaotic Enby (talk · contribs) 20:03, 30 December 2024 (UTC)
- Regarding some sort of brightline ban on the use of any such image in anything medical-article related: absolutely not. For example, if someone wanted to use AI tools as opposed to other tools to make an image such as this one (as used in the "medical" article Fluconazole) I don't see a problem, so long as it is accurate. Accurate models and illustrations are useful and that someone used AI assistance as opposed to a chisel and a rock is of no concern. — xaosflux 19:26, 30 December 2024 (UTC)
- I believe that the appropriateness of AI images depends on how they are used by the user. In BLP and medical articles, such images are inappropriate, but it would also be inappropriate to ban them completely across the site. By the same logic, a full ban of AI would be banning fire just because people can get burned, without considering cooking. JekyllTheFabulous (talk) 13:33, 31 December 2024 (UTC)
- I agree that AI-generated images should not be used in most cases. They essentially serve as misinformation. I also don't think that they're really comparable to drawings or sketches because AI-generation uses a level of photorealism that can easily trick the untrained eye into thinking it is real. Di (they-them) (talk) 20:46, 30 December 2024 (UTC)
- AI doesn't need to be photorealistic though. I see two potential issues with AI. The first is images that might deceive the viewer into thinking they are photos, when they are not. The second is potential copyright issues. Outside of the copyright issues I don't see any unique concerns for an AI-generated image (that doesn't appear photorealistic). Any accuracy issues can be handled the same way a user who manually drew an image could be handled. Photos of Japan (talk) 21:46, 30 December 2024 (UTC)
- AI-generated depictions of BLP subjects are often more "illustrative" than drawings/sketches of BLP subjects made by 'regular' editors like you and me. For example, compare the AI-generated image of Pope Francis and the user-created cartoon of Brigette Lundy-Paine. Neither image belongs on their respective bios, of course, but the AI-generated image is no more "misinformation" than the drawing. Some1 (talk) 00:05, 31 December 2024 (UTC)
- I would argue the opposite: neither are made up, but the first one, because of its realism, might mislead readers into thinking that it is an actual photograph, while the second one is clearly a drawing. Which makes the first one less illustrative, as it carries potential for misinformation, despite being technically more detailed. Chaotic Enby (talk · contribs) 00:31, 31 December 2024 (UTC)
- AI-generated images should always say "AI-generated image of " in the image caption. No misleading readers that way. Some1 (talk) 00:36, 31 December 2024 (UTC)
- Yes, and they don't always do it, and we don't have a guideline about this either. The issue is, many people have many different proposals on how to deal with AI content, meaning we always end up with "no consensus" and no guidelines on use at all, even if most people are against it. Chaotic Enby (talk · contribs) 00:40, 31 December 2024 (UTC)
always end up with "no consensus" and no guidelines on use at all, even if most people are against it
Agreed. Even a simple proposal to have image captions note whether an image is AI-generated will have editors wikilawyer over the definition of 'AI-generated.' I take back my recommendation of starting an RfC; we can already predict how that RfC will end. Some1 (talk) 02:28, 31 December 2024 (UTC)
- Of interest perhaps is this 2023 NOR noticeboard discussion on the use of drawn cartoon images in BLPs. Zaathras (talk) 22:38, 30 December 2024 (UTC)
- We should absolutely not be including any AI images in anything that is meant to convey facts (with the obvious exception of an AI image illustrating the concept of an AI image). I also don't think we should be encouraging AI-altered images -- the line between "regular" photo enhancement and what we'd call "AI alteration" is blurry, but we shouldn't want AI edits for the same reason we wouldn't want fake Photoshop composites.
- That said, I would assume good faith here: some of these images are probably being sourced from Commons, and Commons is dealing with a lot of undisclosed AI images. Gnomingstuff (talk) 23:31, 30 December 2024 (UTC)
- Why wouldn't we want "fake Photoshop composites"? A Composite photo can be very useful. I'd be sad if we banned c:Category:Chronophotographic photomontages. WhatamIdoing (talk) 06:40, 31 December 2024 (UTC)
- Sorry, should have been more clear -- composites that present themselves as the real thing, basically what people would use deepfakes for now. Gnomingstuff (talk) 20:20, 31 December 2024 (UTC)
- Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop through techniques like compositing. That line is that the diffusion model is reverse-engineering an image to match a text prompt from a pattern of semi-random static associated with similar text prompts. As such it's just automated glurge, at best it's only as good as the ability of the software to parse a text prompt and the ability of a prompter to draft sufficiently specific language. And absolutely none of that does anything to solve the "hallucination" problem. On the other hand, in photoshop, if I put in two layers both containing a bird on a transparent background, what I, the human making the image, sees is what the software outputs. Simonm223 (talk) 18:03, 15 January 2025 (UTC)
Yeah I think there is a very clear line between images built by a diffusion model and images modified using photoshop
others do not. If you want to ban or restrict one but not the other then you need to explain how the difference can be reliably determined, and how one is materially different to the other in ways other than your personal opinion. Thryduulf (talk) 18:45, 15 January 2025 (UTC)
- I don't think any guideline, let alone policy, would be beneficial and indeed on balance is more likely to be harmful. There are always only two questions that matter when determining whether we should use an image, and both are completely independent of whether the image is AI-generated or not:
- Can we use this image in this article? This depends on matters like copyright, fair use, whether the image depicts content that is legal for an organisation based in the United States to host, etc. Obviously if the answer is "no", then everything else is irrelevant, but as the law and WMF, Commons and en.wp policies stand today there exist some images in both categories we can use, and some images in both categories we cannot use.
- Does using this image in this article improve the article? This is relative to other options, one of which is always not using any image, but in many cases also involves considering alternative images that we can use. In the case of depictions of specific, non-hypothetical people or objects one criteria we use to judge whether the image improves the article is whether it is an accurate representation of the subject. If it is not an accurate representation then it doesn't improve the article and thus should not be used, regardless of why it is inaccurate. If it is an accurate representation, then its use in the article will not be misrepresentative or misleading, regardless of whether it is or is not AI generated. It may or may not be the best option available, but if it is then it should be used regardless of whether it is or is not AI generated.
- The potential harm I mentioned above is twofold: firstly, Misplaced Pages is, by definition, harmed when an image exists that we could use to improve an article but we do not use it in that article. A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article. The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been.
- Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware. Thryduulf (talk) 00:52, 31 December 2024 (UTC)
- I agree with almost the entirety of your post with a caveat on whether something "is an accurate representation". We can tell whether non-photorealistic images are accurate by assessing whether the image accurately conveys the idea of what it is depicting. Photos do more than convey an idea, they convey the actual look of something. With AI generated images that are photorealistic it is difficult to assess whether they accurately convey the look of something (the shading might be illogical in subtle ways, there could be an extra finger that goes unnoticed, a mole gets erased), but readers might be deceived by the photo-like presentation into thinking they are looking at an actual photographic depiction of the subject which could differ significantly from the actual subject in ways that go unnoticed. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)
A policy or guideline against the use of AI images would, in some cases, prevent us from using an image that would improve an article.
That's why I'm suggesting a guideline, not a policy. Guidelines are by design more flexible, and WP:IAR still does (and should) apply in edge cases.
The second aspect is misidentification of an image as AI-generated when it isn't, especially when it leads to an image not being used when it otherwise would have been.
In that case, there is a licensing problem. AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.
Finally, all the proponents of a policy or guideline are assuming that the line between images that are and are not AI-generated is sharp and objective. Other commenters here have already shown that in reality the line is blurry and it is only going to get blurrier in the future as more AI (and AI-based) technology is built into software and especially firmware.
In that case, it's mostly because of the ambiguity in wording: AI-edited images are very common, and are sometimes called "AI-generated", but here we should focus on actual prompt outputs, of the style "I asked a model to generate me an image of a BLP". Chaotic Enby (talk · contribs) 11:13, 31 December 2024 (UTC)
- Simply not having a completely unnecessary policy or guideline is infinitely better than relying on IAR - especially as this would have to be ignored every time it is relevant. When the AI image is not the best option (which obviously includes all the times it's unsuitable or inaccurate) existing policies, guidelines, practice and frankly common sense mean it won't be used. This means the only time the guideline would be relevant is when an AI image is the best option, and as we obviously should be using the best option in all cases we would need to ignore the guideline against using AI images.
AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.
The key words here are "supposed to be" and "shouldn't", editors absolutely will speculate that images are AI-generated and that the Commons labelling is incorrect. We are supposed to assume good faith, but this very discussion shows that when it comes to AI some editors simply do not do that.- Regarding your final point, that might be what you are meaning but it is not what all other commenters mean when they want to exclude all AI images. Thryduulf (talk) 11:43, 31 December 2024 (UTC)
- For your first point, the guideline is mostly to take care of the "prompt fed in model" BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image), but the model likely doesn't have any available image either and most likely just made it up. As my proposal is essentially limited to that (I don't include AI-edited images, only those that are purely generated by a model), I don't think there will be many cases where IAR would be needed.
Regarding your two other points, you are entirely correct, and while I am hoping for nuance on the AI issue, it is clear that some editors might not do that. For the record, I strongly disagree with a blanket ban of "AI images" (which includes both blatant "prompt in model" creations and a wide range of more subtle AI retouching tools) or anything like that. Chaotic Enby (talk · contribs) 11:49, 31 December 2024 (UTC)
the guideline is mostly to take care of the "prompt fed in model" BLP illustrations, where it is technically hard to prove that the person doesn't look like that (as we have no available image)
. There are only two possible scenarios regarding verifiability:- The image is an accurate representation and we can verify that (e.g. by reference to non-free photos).
- Verifiability is no barrier to using the image, whether it is AI generated or not.
- If it is the best image available, and editors agree using it is better than not having an image, then it should be used whether it is AI generated or not.
- The image is either not an accurate representation, or we cannot verify whether it is or is not an accurate representation
- The only reasons we should ever use the image are:
- It has been the subject of notable commentary and we are presenting it in that context.
- The subject verifiably uses it as a representation of themselves (e.g. as an avatar or logo)
- This is already policy, whether the image is AI generated or not is completely irrelevant.
- You will note that in no circumstance is it relevant whether the image is AI generated or not. Thryduulf (talk) 13:27, 31 December 2024 (UTC)
- In your first scenario, there is the issue of an accurate AI-generated image misleading people into thinking it is an actual photograph of the person, especially as they are most often photorealistic. Even besides that, a mostly accurate representation can still introduce spurious details, and this can mislead readers as they do not know to what level it is actually accurate. This scenario doesn't really happen with drawings (which are clearly not photographs), and is very much a consequence of AI-generated photorealistic pictures being a thing.
In the second scenario, if we cannot verify that it is not an accurate representation, it can be hard to remove the image for policy-based reasons, which is why a guideline will again be helpful. Having a single guideline against fully AI-generated images takes care of all of these scenarios, instead of having to make new specific guidelines for each case that emerges because of them. Chaotic Enby (talk · contribs) 13:52, 31 December 2024 (UTC)
- If the image is misleading or unverifiable it should not be used, regardless of why it is misleading or unverifiable. This is existing policy and we don't need anything specifically regarding AI to apply it - we just need consensus that the image is misleading or unverifiable. Whether it is or is not AI generated is completely irrelevant. Thryduulf (talk) 15:04, 31 December 2024 (UTC)
AI-generated images on Commons are supposed to be clearly labeled as such. There is no guesswork here, and we shouldn't go hunting for images that might have been AI-generated.
- I mean... yes, we should? At the very least Commons should go hunting for mislabeled images -- that's the whole point of license review. The thing is that things are absolutely swamped over there and there are hundreds of thousands of images waiting for review of some kind. Gnomingstuff (talk) 20:35, 31 December 2024 (UTC)
- Yes, but that's a Commons thing. A guideline on English Misplaced Pages shouldn't decide what is to be done on Commons. Chaotic Enby (talk · contribs) 20:37, 31 December 2024 (UTC)
- I just mean that given the reality of the backlogs, there are going to be mislabeled images, and there are almost certainly going to be more of them over time. That's just how it is. We don't have control over that, but we do have control over what images go into articles, and if someone has legitimate concerns about an image being AI-generated, then they should be raising those. Gnomingstuff (talk) 20:45, 31 December 2024 (UTC)
- Support blanket ban on AI-generated images on Misplaced Pages. As others have highlighted above, this is not just a slippery slope but an outright downward spiral. We don't use AI-generated text and we shouldn't use AI-generated images: these aren't reliable and they're also WP:OR scraped from who knows what and where. Use only reliable material from reliable sources. As for the argument of 'software now has AI features', we all know that there's a huge difference between someone using a smoothing feature and someone generating an image from a prompt. :bloodofox: (talk) 03:12, 31 December 2024 (UTC)
- Reply, the section of WP:OR concerning images is WP:OI which states "Original images created by a Wikimedian are not considered original research, so long as they do not illustrate or introduce unpublished ideas or arguments". Using AI to generate an image only violates WP:OR if you are using it to illustrate unpublished ideas, which can be assessed just by looking at the image itself. COPYVIO, however, cannot be assessed from looking at just the image alone, which AI could be violating. However, some images may be too simple to be copyrightable, for example AI-generated images of chemicals or mathematical structures potentially. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)
- Prompt-generated images are unquestionably a violation of WP:OR and WP:SYNTH: Type in your description and you get an image scraping who knows what and from who knows where, often Misplaced Pages. Misplaced Pages isn't a WP:RS. Get real. :bloodofox: (talk) 23:35, 1 January 2025 (UTC)
- "Unquestionably"? Let me question that, @Bloodofox.
;-)
- If an editor were to use an AI-based image-generating service and the prompt is something like this:
- "I want a stacked bar chart that shows the number of games won and lost by FC Bayern Munich each year. Use the team colors, which are red #DC052D, blue #0066B2, and black #000000. The data is:
- 2014–15: played 34 games, won 25, tied 4, lost 5
- 2015–16: played 34 games, won 28, tied 4, lost 2
- 2016–17: played 34 games, won 25, tied 7, lost 2
- 2017–18: played 34 games, won 27, tied 3, lost 4
- 2018–19: played 34 games, won 24, tied 6, lost 4
- 2019–20: played 34 games, won 26, tied 4, lost 4
- 2020–21: played 34 games, won 24, tied 6, lost 4
- 2021–22: played 34 games, won 24, tied 5, lost 5
- 2022–23: played 34 games, won 21, tied 8, lost 5
- 2023–24: played 34 games, won 23, tied 3, lost 8"
- I would expect it to produce something that is not a violation of either OR in general or OR's SYNTH section specifically. What would you expect, and why do you think it would be okay for me to put that data into a spreadsheet and upload a screenshot of the resulting bar chart, but you don't think it would be okay for me to put that same data into an image generator, get the same thing, and upload that?
- We must not mistake the tools for the output. Hand-crafted bad output is bad. AI-generated good output is good. WhatamIdoing (talk) 01:58, 2 January 2025 (UTC)
- Assuming you'd even get what you requested from the model without fiddling with the prompt for a while, these sorts of 'but we can use it for graphs and charts' devil's advocate scenarios aren't helpful. We're discussing generating images of people, places, and objects here and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH. As for the charts and graphs, there are any number of ways to produce these. :bloodofox: (talk) 03:07, 2 January 2025 (UTC)
We're discussing generating images of people, places, and objects here
The proposal contains no such limitation.and in those cases, yes, this would unquestionably be a form of WP:OR & WP:SYNTH.
Do you have a citation for that? Other people have explained better than I can how that it is not necessarily true, and certainly not unquestionable. Thryduulf (talk) 03:14, 2 January 2025 (UTC)- As you're well aware, these images are produced by scraping and synthesized material from who knows what and where: it's ultimately pure WP:OR to produce these fake images and they're a straightforward product of synthesis of multiple sources (WP:SYNTH) - worse yet, these sources are unknown because training data is by no means transparent. Personally, I'm strongly for a total ban on generative AI on the site exterior to articles on the topic of generative AI. Not only do I find this incredible unethical, I believe it is intensely detrimental to Misplaced Pages, which is an already a flailing and shrinking project. :bloodofox: (talk) 03:23, 2 January 2025 (UTC)
- So you think the lead image at Gisèle Pelicot is a SYNTH violation? Its (human) creator explicitly says "This is not done from one specific photo. As I usually do when I draw portraits of people that I can't see in person, I look at a lot of photos of them and then create my own rendition" in the image description, which sounds like "the product of synthesis of multiple sources" to me, and "these sources are unknown because" the images the artist looked at are not disclosed.
- A lot of my concern about blanket statements is the principle that what's sauce for the goose is sauce for the gander, too. If it's okay for a human to do something by hand, then it should be okay for a human using a semi-automated tool to do it, too.
- (Just in case you hadn't heard, the rumors that the editor base is shrinking have been false for over a decade now. Compared to when you created your account in mid-2005, we have about twice as many high-volume editors.) WhatamIdoing (talk) 06:47, 2 January 2025 (UTC)
- Review WP:SYNTH; your attempts at downplaying a prompt-generated image as "semi-automated" show the root of the problem: if you can't detect the difference between a human sketching from a reference and a machine scraping who-knows-what on the internet, you shouldn't be involved in this discussion. As for editor retention, this remains a serious problem on the site: while the site continues to grow (and becomes core fodder for AI-scraping) and becomes increasingly visible, editorial retention continues to drop. :bloodofox: (talk) 09:33, 2 January 2025 (UTC)
- Please scroll down below SYNTH to the next section titled "What is not original research" which begins with WP:OI, our policies on how images relate to OR. OR (including SYNTH) only applies to images with regard to whether they illustrate "unpublished ideas or arguments". It does not matter, for instance, if you synthesize an original depiction of something, so long as the idea of that thing is not original. Photos of Japan (talk) 09:55, 2 January 2025 (UTC)
- Yes, which explicitly states:
- It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Misplaced Pages:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.
- Using a machine to generate a fake image of someone is far beyond "manipulation" and it is certainly "false". Clearly we need explicit policies on AI-generated images of people or we wouldn't be having this discussion, but this as it stands clearly also falls under WP:SYNTH: there is zero question that this is a result of "synthesis of published material", even if the AI won't list what it used. Ultimately it's just a synthesis of a bunch of published composite images of who-knows-what (or who-knows-who?) the AI has scraped together to produce a fake image of a person. :bloodofox: (talk) 10:07, 2 January 2025 (UTC)
- As you're well aware, these images are produced by scraping and synthesized material from who knows what and where: it's ultimately pure WP:OR to produce these fake images and they're a straightforward product of synthesis of multiple sources (WP:SYNTH) - worse yet, these sources are unknown because training data is by no means transparent. Personally, I'm strongly for a total ban on generative AI on the site exterior to articles on the topic of generative AI. Not only do I find this incredible unethical, I believe it is intensely detrimental to Misplaced Pages, which is an already a flailing and shrinking project. :bloodofox: (talk) 03:23, 2 January 2025 (UTC)
- The latter images you describe should be SVG regardless. If there are models that can generate that, that seems totally fine since it can be semantically altered by hand. Any generation with photographic or "painterly" characteristics (e.g. generating something in the style of a painting or any other convention of visual art that communicates aesthetic particulars and not merely abstract visual particulars) seems totally unacceptable. Remsense ‥ 论 07:00, 31 December 2024 (UTC)
- @Bloodofox, here's an image I created. It illustrates the concept of 1% in an article. I made this myself, by typing 100 emojis and taking a screenshot. Do you really mean to say that if I'd done this with an image-generating AI tool, using a prompt like "Give me 100 dots in a 10 by 10 grid. Make 99 a dark color and 1, randomly placed, look like a baseball" that it would be hopelessly tainted, because AI is always bad? Or does your strongly worded statement mean something more moderate?
- I'd worry about photos of people (including dead people). I'd worry about photos of specific or unique objects that have to be accurate or they're worse than worthless (e.g., artwork, landmarks, maps). But I'm not worried about simple graphs and charts like this one, and I'm not worried about ordinary, everyday objects. If you want to use AI to generate a photorealistic image of a cookie, or a spoon, and the output you get genuinely looks like those objects, I'm not actually going to worry about it. WhatamIdoing (talk) 06:57, 31 December 2024 (UTC)
- As you know, Misplaced Pages has the unique factor of being entirely volunteer-run. Misplaced Pages has fewer and fewer editors and, long-term, we're seeing plummeting birth rates in areas where most Misplaced Pages editors do exist. I wouldn't expect a wave of new ones aimed at keeping the site free of bullshit in the near future.
- In addition, the Wikimedia Foundation's harebrained continued effort to turn the site into its political cash machine is no doubt also not helping, harming the site's public perception and leading to fewer new editors.
- Over the course of decades (I've been here for around 20 years), it seems clear that the site will be negatively impacted by all this, especially in the face of generative AI.
- As a long-time editor who has frequently stumbled upon intense WP:PROFRINGE content, fended off armies of outside actors looking to shape the site into their ideological image (and who have sent me more than a few death threats), and who has identified large amounts of politically-motivated nonsense explicitly designed to fool non-experts in areas I know intimately well (such as folklore and historical linguistics topics), I think it need be said that the use of generative AI for content is especially dangerous because of its capabilities of fooling Misplaced Pages readers and Misplaced Pages editors alike.
- Misplaced Pages is written by people for people. We need to draw a line in the sand to keep from being flooded by increasingly accessible hoax-machines.
- A blanket ban on generative AI resolves this issue or at least hands us another tool with which to attempt to fight back. We don't need what few editors we have here wasting what little time they can give the project checking over an ocean of AI-generated slop: we need more material from reliable sources and better tools to fend off bad actors usable by our shrinking editor base (anyone at the Wikimedia Foundation listening?), not more waves of generative AI garbage. :bloodofox: (talk) 07:40, 31 December 2024 (UTC)
- A blanket ban doesn't actually resolve most of the issues though, and introduces new ones. Bad usages of AI can already be dealt with by existing policy, and malicious users will ignore a blanket ban anyways. Meanwhile, a blanket ban would harm many legitimate usages for AI. For instance, the majority of professional translators (at least Japanese to English) incorporate AI (or similar tools) into their workflow to speed up translations. Just imagine a professional translator who uses AI to help generate rough drafts of foreign language Misplaced Pages articles, before reviewing and correcting them, and another editor learning of this and mass reverting them for breaking the blanket ban, and ultimately causing them to leave. Many authors (particularly with carpal tunnel) use AI now to control their voice-to-text (you can train the AI on how you want character names spelled, the formatting of dialogue and other text, etc.). A Misplaced Pages editor could train an AI to convert their voice into Misplaced Pages-formatted text. AI is subtly incorporated now into spell-checkers, grammar-checkers, photo editors, etc., in ways many people are not aware of. A blanket AI ban has the potential to cause many issues for a lot of people, without actually being that effective at dealing with malicious users. Photos of Japan (talk) 08:26, 31 December 2024 (UTC)
- I think this is the least convincing one I've seen here yet: It contains the ol' 'there are AI features in programs now' while also attempting to invoke accessibility and a little bit of 'we must have machines to translate!'.
- As a translator myself, I can only say: Oh please. Generative AI is notoriously terrible at translating and that's not likely to change, ever, beyond a very, very basic level. Due to the complexities of communication and little matters like nuance, all machine translated material must be thoroughly checked and modified by, yes, human translators, who often encounter it spitting out complete bullshit scraped from who-knows-where (often Misplaced Pages itself).
- I get that this topic attracts a lot of 'but what if generative AI is better than humans?' from the utopian tech crowd but the reality is that anyone who needs a machine to invent text and visuals for whatever reason simply shouldn't be using it on Misplaced Pages.
- Either you, a human being, can contribute to the project or you can't. Slapping a bunch of machine-generated (generative AI) visuals and text onto the site (much of it ultimately coming from Misplaced Pages in the first place!) isn't some kind of human substitute, it's just machine-regurgitated slop and is not helping the project.
- If people can't be confident that Misplaced Pages is made by humans, for humans, the project is finally on its way out. :bloodofox: (talk) 09:55, 31 December 2024 (UTC)
- I don't know how up to date you are on the current state of translation, but:
- In a previous State of the industry report for freelance translators, the word on TMs and CAT tools was to take them as "a given." A high percentage of translators use at least one CAT tool, and reports on the increased productivity and efficiency that can accompany their use are solid enough to indicate that, unless the kind of translation work you do by its very nature excludes the use of a CAT tool, you should be using one.
- Over three thousand full-time professional translators from around the world responded to the surveys, which were broken into a survey for CAT tool users and one for those who do not use any CAT tool at all.
- 88% of respondents use at least one CAT tool for at least some of their translation tasks.
- Of those using CAT tools, 83% use a CAT tool for most or all of their translation work.
- Mind you, traditionally CAT tools didn't use AI, but many do now, which only adds to potential sources of confusion in a blanket ban of AI. Photos of Japan (talk) 17:26, 31 December 2024 (UTC)
- You're barking up the wrong tree with the pro-generative AI propaganda in response to me. I think we're all quite aware that generative AI tool integration is now common and that there's also a big effort to replace human translators (and anything that can be "written") with machine-generated text. I'm also keenly aware that generative AI is absolutely horrible at translation and all of it must be thoroughly checked by humans, as you would know if you were a translator yourself. :bloodofox: (talk) 22:20, 31 December 2024 (UTC)
- "all machine translated material must be thoroughly checked and modified by, yes, human translators"
- You are just agreeing with me here.
- There are translators (particularly with non-creative works) who are using these tools to shift more towards reviewing. It should be up to them to decide what they think is the most efficient method for them. Photos of Japan (talk) 06:48, 1 January 2025 (UTC)
- And any translator who wants to use generative AI to attempt to translate can do so off the site. We're not here to check it for them. I strongly support a total ban on any generative AI used on the site exterior to articles on generative AI. :bloodofox: (talk) 11:09, 1 January 2025 (UTC)
- I wonder what you mean by "on the site". The question here is "Is it okay for an editor to go to a completely different website, generate an image all by themselves, upload it to Commons, and put it in a Misplaced Pages article?" The question here is not "Shall we put AI-generating buttons on Misplaced Pages's own website?" WhatamIdoing (talk) 02:27, 2 January 2025 (UTC)
- I'm talking about users slapping machine-translated and/or machine-generated nonsense all over the site, only for us to have to go behind and not only check it but correct it. It takes users minutes to do this and it's already happening. It's the same for images. There are very few of us who volunteer here and our numbers are growing fewer. We need to be spending our time improving the site rather than opening the gate as wide as possible for a flood of AI-generated/rendered garbage. The site has enough problems that compound every day rather than having to fend off users armed with hoax machines at every corner. :bloodofox: (talk) 03:20, 2 January 2025 (UTC)
- Sure, we're all opposed to "nonsense", but my question is: What about when the machine happens to generate something that is not "nonsense"?
- I have some worries about AI content. I worry, for example, that they'll corrupt our sources. I worry that List of scholarly publishing stings will get dramatically longer, and also that even more undetected, unconfessed, unretracted papers will get published and believed to be true and trustworthy. I worry that academia will go back to a model in which personal connections are more important, because you really can't trust what's published. I worry that scientific journals will start refusing to publish research unless it comes from someone employed by a trusted institution, that is willing to put its reputation on the line by saying they have directly verified that the work described in the paper was actually performed to their standards, thus scuttling the citizen science movement and excluding people whose institutions are upset with them for other reasons (Oh, you thought you'd take a job elsewhere? Well, we refuse to certify the work you did for the last three years...).
- But I'm not worried about a Misplaced Pages editor saying "Hey AI, give me a diagram of a swingset" or "Make a chart for me out of the data I'm going to give you". In fact, if someone wants to pull the numbers out of Template:Misplaced Pages editor graph (100 per month), feed it to an AI, and replace the template's contents with an AI-generated image (until they finally fix the Graphs extension), I'd consider that helpful. WhatamIdoing (talk) 07:09, 2 January 2025 (UTC)
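(A side note on the comment above: a chart like that can also be drawn straight from the numbers with an ordinary plotting library, no image model needed. A minimal sketch, assuming made-up placeholder counts rather than the template's real figures:)
<syntaxhighlight lang="python">
# A minimal sketch, assuming placeholder data: rendering an editor-count chart
# directly from the numbers with matplotlib, rather than asking an AI to draw it.
# The counts below are made-up placeholders, not the template's real figures.
import matplotlib.pyplot as plt

months = ["2024-07", "2024-08", "2024-09", "2024-10", "2024-11", "2024-12"]
active_editors = [39500, 39800, 40100, 40400, 39900, 40200]  # placeholder values

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(months, active_editors, marker="o")
ax.set_ylabel("Editors with 100+ edits/month")
ax.set_title("Placeholder editor-count chart")
ax.grid(True, linestyle=":")
fig.tight_layout()
fig.savefig("editor_graph.svg")  # SVG output keeps the chart editable by hand
</syntaxhighlight>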
- Translators are not using generative AI for translation; the applicability of LLMs to regular translation is still in its infancy, and regardless will not involve adding any generative faculties to the output, since that is the exact opposite of what translation is supposed to do. JoelleJay (talk) 02:57, 2 January 2025 (UTC)
Translators are not using generative AI for translation
this entirely depends on what you mean by "generative". There are at least three contradictory understandings of the term in this one thread alone. Thryduulf (talk) 03:06, 2 January 2025 (UTC)
- Please, you can just go through the entire process with a simple prompt command now. The results are typically shit but you can generate a ton of it quickly, which is perfect for flooding a site like this one — especially without a strong policy against it. I've found myself cleaning up tons of AI-generated crap (and, yes, rendered) stuff here and elsewhere, and now I'm even seeing AI-generated responses to my own comments. It's beyond ridiculous. :bloodofox: (talk) 03:20, 2 January 2025 (UTC)
- Reply, the section of WP:OR concerning images is WP:OI which states "Original images created by a Wikimedian are not considered original research, so long as they do not illustrate or introduce unpublished ideas or arguments". Using AI to generate an image only violates WP:OR if you are using it to illustrate unpublished ideas, which can be assessed just by looking at the image itself. COPYVIO, however, cannot be assessed from looking at just the image alone, which AI could be violating. However, some images may be too simple to be copyrightable, for example AI-generated images of chemicals or mathematical structures potentially. Photos of Japan (talk) 04:34, 31 December 2024 (UTC)
- Ban AI-generated images from all articles, AI anything from BLP and medical articles is the position that seems like it would permit all instances where there are plausible defenses that AI use does not fabricate or destroy facts intended to be communicated in the context of the article. That scrutiny is stricter with BLP and medical articles in general, and the restriction should be stricter to match. Remsense ‥ 论 06:53, 31 December 2024 (UTC)
- @Remsense, please see my comment immediately above. (We had an edit conflict.) Do you really mean "anything" and everything? Even a simple chart? WhatamIdoing (talk) 07:00, 31 December 2024 (UTC)
- I think my previous comment is operative: almost anything we can see AI used programmatically to generate should be SVG, not raster—even if it means we are embedding raster images in SVG to generate examples like the above. I do not know if there are models that can generate SVG, but if there are I happily state I have no problem with that. I think I'm at risk of seeming downright paranoid—but understanding how errors can propagate and go unnoticed in practice, if we're to trust a black box, we need to at least be able to check what the black box has done on a direct structural level. Remsense ‥ 论 07:02, 31 December 2024 (UTC)
- A quick web search indicates that there are generative AI programs that create SVG files. WhatamIdoing (talk) 07:16, 31 December 2024 (UTC)
- Makes perfect sense that there would be. Again, maybe I come off like a paranoid lunatic, but I really need either the ability to check what the thing is doing, or the ability to check and correct exactly what a black box has done. (In my estimation, if you want to know what procedures a person has done, theoretically you can ask them to get a fairly satisfactory result, and the pre-AI algorithms used in image manipulation are canonical and more or less transparent. Acknowledging human error etc., with AI there is not even the theoretical promise that one can be given a truthful account of how it decided to do what it did.) Remsense ‥ 论 07:18, 31 December 2024 (UTC)
- Like everyone said, there should be a de facto ban on using AI images in Misplaced Pages articles. They are effectively fake images pretending to be real, so they are out of step with the values of Misplaced Pages.--♦IanMacM♦ 08:20, 31 December 2024 (UTC)
- Except, not everybody has said that, because the majority of those of us who have refrained from hyperbole have pointed out that not all AI images are "fake images pretending to be real" (and those few that are can already be removed under existing policy). You might like to try actually reading the discussion before commenting further. Thryduulf (talk) 10:24, 31 December 2024 (UTC)
- @Remsense, exactly how much "ability to check what the thing is doing" do you need to be able to do, when the image shows 99 dots and 1 baseball, to illustrate the concept of 1%? If the image above said {{pd-algorithm}} instead of {{cc-by-sa-4.0}}, would you remove it from the article, because you just can't be sure that it shows 1%? WhatamIdoing (talk) 02:33, 2 January 2025 (UTC)
- The above is a useful example to an extent, but it is a toy example. I really do think it is required in general when we aren't dealing with media we ourselves are generating. Remsense ‥ 论 04:43, 2 January 2025 (UTC)
- How do we differentiate in policy between a "toy example" (that really would be used in an article) and "real" examples? Is it just that if I upload it, then you know me, and assume I've been responsible? WhatamIdoing (talk) 07:13, 2 January 2025 (UTC)
- There definitely exist generative AI for SVG files. Here's an example: I used generative AI in Adobe Illustrator to generate the SVG gear in File:Pinwheel scheduling.svg (from Pinwheel scheduling) before drawing by hand the more informative parts of the image. The gear drawing is not great (a real gear would have uniform tooth shape) but maybe the shading is better than I would have done by hand, giving an appearance of dimensionality and surface material while remaining deliberately stylized. Is that the sort of thing everyone here is trying to forbid?
- I can definitely see a case for forbidding AI-generated photorealistic images, especially of BLPs, but that's different from human oversight of AI in the generation of schematic images such as this one. —David Eppstein (talk) 01:15, 1 January 2025 (UTC)
- I'd include BDPs, too. I had to get a few AI-generated images of allegedly Haitian presidents deleted a while ago. The "paintings" were 100% fake, right down to the deformed medals on their military uniforms. An AI-generated "generic person" would be okay for some purposes. For a few purposes (e.g., illustrations of Obesity) it could even be preferable to have a fake "person" than a real one. But for individual/named people, it would be best not to have anything unless it definitely looks like the named person. WhatamIdoing (talk) 07:35, 2 January 2025 (UTC)
- I put it to you that our decision on this requires nuance. It's obviously insane to allow AI-generated images of, for example, Donald Trump, and it's obviously insane to ban AI-generated images from, for example, artificial intelligence art or Théâtre D'opéra Spatial.—S Marshall T/C 11:21, 31 December 2024 (UTC)
- Of course, that's why I'm only looking at specific cases and refrain from proposing a blanket ban on generative AI. Regarding Donald Trump, we do have one AI-generated image of him that is reasonable to allow (in Springfield pet-eating hoax), as the image itself was the subject of relevant commentary. Of course, this is different from using an AI-generated image to illustrate Donald Trump himself, which is what my proposal would recommend against. Chaotic Enby (talk · contribs) 11:32, 31 December 2024 (UTC)
- That's certainly true, but others are adopting much more extreme positions than you are, and it was the more extreme views that I wished to challenge.—S Marshall T/C 11:34, 31 December 2024 (UTC)
- Thanks for the (very reasoned) addition, I just wanted to make my original proposal clear. Chaotic Enby (talk · contribs) 11:43, 31 December 2024 (UTC)
- Going off WAID's example above, perhaps we should be trying to restrict the use of AI where image accuracy/precision is essential, as it would be for BLP and medical info, among other cases, but in cases where we are talking generic or abstract concepts, like the 1% image, its use is reasonable. I would still say we should strongly prefer an image made by a human with high control of the output, but when accuracy is not as important as just the visualization, it's reasonable to turn to AI to help. Masem (t) 15:12, 31 December 2024 (UTC)
- Support total ban of AI imagery - There are probable copyright problems and veracity problems with anything coming out of a machine. In a world of manipulated reality, Misplaced Pages will be increasingly respected for holding a hard line against synthetic imagery. Carrite (talk) 15:39, 31 December 2024 (UTC)
- For both issues AI vs not AI is irrelevant. For copyright, if the image is a copyvio we can't use it regardless of whether it is AI or not AI, and if it's not a copyvio then that's not a reason to use or not use the image. If the image is not verifiably accurate then we already can (and should) exclude it, regardless of whether it is AI or not AI. For more detail see the extensive discussion above you've either not read or ignored. Thryduulf (talk) 16:34, 31 December 2024 (UTC)
- Yes, we absolutely should ban the use of AI-generated images in these subjects (and beyond, but that's outside the scope of this discussion). AI should not be used to make up a simulation of a living person. It does not actually depict the person and may introduce errors or flaws that don't actually exist. The picture does not depict the real person because it is quite simply fake.
- Even worse would be using AI to develop medical images in articles in any way. The possibility for error there is unacceptable. Yes, humans make errors too, but there, there is a) someone with the responsibility to fix it and b) someone conscious who actually made the picture, rather than a black box that spat it out after looking at similar training data. Cremastra 🎄 u — c 🎄 20:08, 31 December 2024 (UTC)
- It's incredibly disheartening to see multiple otherwise intelligent editors who have apparently not read and/or not understood what has been said in the discussion but are rather responding with what appears to be knee-jerk reactions to anti-AI scaremongering. The sky will not fall in, Misplaced Pages is not going to be taken over by AI, AI is not out to subvert Misplaced Pages, we already can (and do) remove (and more commonly not add in the first place) false and misleading information/images. Thryduulf (talk) 20:31, 31 December 2024 (UTC)
- So what benefit does allowing AI images bring? We shouldn't be forced to decide these on a case-by-case basis.
- I'm sorry to dishearten you, but I still respectfully disagree with you. And I don't think this is "scaremongering" (although I admit that if it was, I would of course claim it wasn't). Cremastra 🎄 u — c 🎄 21:02, 31 December 2024 (UTC) Cremastra 🎄 u — c 🎄 20:56, 31 December 2024 (UTC)
- Determining what benefits any image brings to Misplaced Pages can only be done on a case-by-case basis. It is literally impossible to know whether any image improves the encyclopaedia without knowing the context of which portion of what article it would illustrate, and what alternative images are and are not available for that same spot.
- The benefit of allowing AI images is that when an AI image is the best option for a given article we use it. We gain absolutely nothing by prohibiting using the best image available, indeed doing so would actively harm the project without bringing any benefits. AI images that are misleading, inaccurate or any of the other negative things any image can be are never the best option and so are never used - we don't need any policies or guidelines to tell us that. Thryduulf (talk) 21:43, 31 December 2024 (UTC)
- Support blanket ban on AI-generated text or images in articles, except in contexts where the AI-generated content is itself the subject of discussion (in a specific or general sense). Generative AI is fundamentally at odds with Misplaced Pages's mission of providing reliable information, because of its propensity to distort reality or make up information out of whole cloth. It has no place in our encyclopedia. —pythoncoder (talk | contribs) 21:34, 31 December 2024 (UTC)
- Support blanket ban on AI-generated images except in ABOUTSELF contexts. This is especially a problem given the preeminence Google gives to Misplaced Pages images in its image search. JoelleJay (talk) 22:49, 31 December 2024 (UTC)
- Ban across the board, except in articles which are actually about AI-generated imagery or the tools used to create them, or the image itself is the subject of substantial commentary within the article for some reason. Even in those cases, clearly indicating that the image is AI-generated should be required. Seraphimblade 00:29, 1 January 2025 (UTC)
- Oppose blanket bans that would forbid the use of AI assistance in creating diagrams or other deliberately stylized content. Also oppose blanket bans that would forbid AI illustrations in articles about AI illustrations. I am not opposed to banning photorealistic AI-generated images in non-AI-generation contexts or banning AI-generated images from BLPs unless the image itself is specifically relevant to the subject of the BLP. —David Eppstein (talk) 01:27, 1 January 2025 (UTC)
- Oppose blanket bans. AI is just a new buzzword, so, for example, Apple phones now include "Apple Intelligence" as a standard feature. Does this mean that photographs taken using Apple phones will be inadmissible? That would be silly because legacy technologies are already rife with issues of accuracy and verification. For example, there's an image on the main page right now (right). This purports to be a particular person ("The Father of Australia") but, if you check the image description, you find that it may have been his brother and even the attribution to the artist is uncertain. AI features may help in exposing such existing weaknesses in our image use and so we should be free to use them in an intelligent way. Andrew🐉(talk) 08:03, 1 January 2025 (UTC)
- So, you expect the AI, notoriously trained on Misplaced Pages (and whatever else is floating around on the internet), to correct Misplaced Pages where humans have failed... using the data it scraped from Misplaced Pages (and who knows where else)? :bloodofox: (talk) 11:12, 1 January 2025 (UTC)
- I tried using the Deep Research option of Gemini to assess the attribution of the Macquarie portrait. Its stated methodology seemed quite respectable and sensible.
The Opie Portrait of Lachlan Macquarie: An Examination of its Attribution (Methodology, collapsed excerpt): "To thoroughly investigate the attribution of the Opie portrait of Lachlan Macquarie, a comprehensive research process was undertaken. This involved several key steps: ..."
- It was quite transparent in listing and citing the sources that it used for its analysis. These included the Misplaced Pages image but if one didn't want that included, it would be easy to exclude it.
- So, AIs don't have to be inscrutable black boxes. They can have programmatic parameters like the existing bots and scripts that we use routinely on Misplaced Pages. Such power tools seem needed to deal with the large image backlogs that we have on Commons. Perhaps they could help by providing captions and categories where these don't exist.
- Andrew🐉(talk) 09:09, 2 January 2025 (UTC)
- They don't have to be black boxes but they are by design: they exist in a legally dubious area and thus hide what they're scraping to avoid further legal problems. That's no secret. We know for example that Misplaced Pages is a core data set for likely most AIs today. They also notoriously and quite confidently spit out a lie ("hallucinate") and frequently spit out total nonsense. Add to that that they're restricted to whatever is floating around on the internet or whatever other data set they've been fed (usually just more internet), and many specialist topics, like texts on ancient history and even standard reference works, are not accessible on the internet (despite Google's efforts). :bloodofox: (talk) 09:39, 2 January 2025 (UTC)
- While its stated methodology seems sensible, there's no evidence that it actually followed that methodology. The bullet points are pretty vague, and are pretty much the default methodologies used to examine actual historical works. Chaotic Enby (talk · contribs) 17:40, 2 January 2025 (UTC)
- Yes, there's evidence. As I stated above, the analysis is transparent and cites the sources that it used. And these all seem to check out rather than being invented. So, this level of AI goes beyond the first generation of LLM and addresses some of their weaknesses. I suppose that image generation is likewise being developed and improved and so we shouldn't rush to judgement while the technology is undergoing rapid development. Andrew🐉(talk) 17:28, 4 January 2025 (UTC)
- Oppose blanket ban: best of luck to editors here who hope to be able to ban an entirely undefined and largely undetectable procedure. The term 'AI' as commonly used is no more than a buzzword - what exactly would be banned? And how does it improve the encyclopedia to encourage editors to object to images not simply because they are inaccurate, or inappropriate for the article, but because they subjectively look too good? Will the image creator be quizzed on Commons about the tools they used? Will creators who are transparent about what they have created have their images deleted while those who keep silent don’t? Honestly, this whole discussion is going to seem hopelessly outdated within a year at most. It’s like when early calculators were banned in exams because they were ‘cheating’, forcing students to use slide rules. MichaelMaggs (talk) 12:52, 1 January 2025 (UTC)
- I am genuinely confused as to why this has turned into a discussion about a blanket ban, even though the original proposal exclusively focused on AI-generated images (the kind that is generated by an AI model from a prompt, which are already tagged on Commons, not regular images with AI enhancement or tools being used) and only in specific contexts. Not sure where the "subjectively look too good" thing even comes from, honestly. Chaotic Enby (talk · contribs) 12:58, 1 January 2025 (UTC)
- That just shows how ill-defined the whole area is. It seems you restrict the term 'AI-generated' to mean "images generated solely(?) from a text prompt". The question posed above has no such restriction. What a buzzword means is largely in the mind of the reader, of course, but to me and I think to many, 'AI-generated' means generated by AI. MichaelMaggs (talk) 13:15, 1 January 2025 (UTC)
- I used the text prompt example because that is the most common way to have an AI model generate an image, but I recognize that I should've clarified it better. There is definitely a distinction between an image being generated by AI (like the Laurence Boccolini example below) and an image being altered or retouched by AI (which includes many features integrated in smartphones today). I don't think it's a "buzzword" to say that there is a meaningful difference between an image being made up by an AI model and a preexisting image being altered in some way, and I am surprised that many people understand "AI-generated" as including the latter. Chaotic Enby (talk · contribs) 15:24, 1 January 2025 (UTC)
- Oppose as unenforceable. I just want you to imagine enforcing this policy against people who have not violated it. All this will do is allow Wikipedians who primarily contribute via text to accuse artists of using AI because they don't like the results to get their contributions taken down. I understand the impulse to oppose AI on principle, but the labor and aesthetic issues don't actually have anything to do with Misplaced Pages. If there is not actually a problem with the content conveyed by the image—for example, if the illustrator intentionally corrected any hallucinations—then someone objecting over AI is not discussing page content. If the image was not even made with AI, they are hallucinating based on prejudices that are irrelevant to the image. The bottom line is that images should be judged on their content, not how they were made. Besides all the policy-driven stuff, if Misplaced Pages's response to the creation of AI imaging tools is to crack down on all artistic contributions to Misplaced Pages (which seems to be the inevitable direction of these discussions), what does that say? Categorical bans of this kind are ill-advised and anti-illustrator. lethargilistic (talk) 15:41, 1 January 2025 (UTC)
- And the same applies to photography, of course. If in my photo of a garden I notice there is a distracting piece of paper on the lawn, nobody would worry if I used the old-style clone-stamp tool to remove it in Photoshop, adding new grass in its place (I'm assuming here that I don't change details of the actual landscape in any way). Now, though, Photoshop uses AI to achieve essentially the same result while making it simpler for the user. A large proportion of all processed photos will have at least some similar but essentially undetectable "generated AI" content, even if only a small area of grass. There is simply no way to enforce the proposed policy, short of banning all high-quality photography – which requires post-processing by design, and in which similar encyclopedically non-problematic edits are commonplace. MichaelMaggs (talk) 17:39, 1 January 2025 (UTC)
- Before anyone objects that my example is not "an image generated from a text prompt", note that there's no mention of such a restriction in the proposal we are discussing. Even if there were, it makes no difference. Photoshop can already generate photo-realistic areas from a text prompt. If such use is non-misleading and essentially undetectable, it's fine; if it changes the image in such a way as to make it misleading, inaccurate or non-encyclopedic in any way it can be challenged on that basis. MichaelMaggs (talk) 17:58, 1 January 2025 (UTC)
- As I said previously, the text prompt is just an example, not a restriction of the proposal. The point is that you talk about editing an existing image (which is what you talk about, as you say
if it changes the image
), while I am talking about creating an image ex nihilo, which is what "generating" means. Chaotic Enby (talk · contribs) 18:05, 1 January 2025 (UTC)
- I'm talking about a photograph with AI-generated areas within it. This is commonplace, and is targeted by the proposal. Categorical bans of the type suggested are indeed ill-advised. MichaelMaggs (talk) 18:16, 1 January 2025 (UTC)
- Even if the ban is unenforceable, there are many editors who will choose to use AI images if they are allowed and just as cheerfully skip them if they are not allowed. That would mean the only people posting AI images are those who choose to break the rule and/or don't know about it. That would probably add up to many AI images not used. Darkfrog24 (talk) 22:51, 3 January 2025 (UTC)
- Support blanket ban because "AI" is a fundamentally unethical technology based on the exploitation of labor, the wanton destruction of the planetary environment, and the subversion of every value that an encyclopedia should stand for. ABOUTSELF-type exceptions for "AI" output that has already been generated might be permissible, in order to document the cursed time in which we live, but those exceptions are going to be rare. How many examples of Shrimp Jesus slop do we need? XOR'easter (talk) 23:30, 1 January 2025 (UTC)
- Support blanket ban - Primarily because of the "poisoning the well"/"dead internet" issues created by it. FOARP (talk) 14:30, 2 January 2025 (UTC)
- Support a blanket ban to assure some control over AI-creep in Misplaced Pages. And per discussion. Randy Kryn (talk) 10:50, 3 January 2025 (UTC)
- Support that WP:POLICY applies to images: images should be verifiable, neutral, and absent of original research. AI is just the latest quickest way to produce images that are original, unverifiable, and potentially biased. Is anyone in their right mind saying that we allow people to game our rules on WP:OR and WP:V by using images instead of text? Shooterwalker (talk) 17:04, 3 January 2025 (UTC)
- As an aside on this: in some cases Commons is being treated as a way of side-stepping WP:NOR and other restrictions. Stuff that would get deleted if it were written content on WP gets into WP as images posted on Commons. The worst examples are those conflict maps that are created from a bunch of Twitter posts (eg the Syrian civil war one). AI-generated imagery is another field where that appears to be happening. FOARP (talk) 10:43, 4 January 2025 (UTC)
- Support temporary blanket ban with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. I support an exception for when the article is about the image itself and that image is notable, such as the photograph of the black-and-blue/gold-and-white dress in The Dress and/or examples of AI images in articles in which they are relevant. E.g. "here is what a hallucination is: count the fingers." Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)
- First, I think any guidance should avoid referring to specific technology, as that changes rapidly and is used for many different purposes. Second, assuming that the image in question has a suitable copyright status for use on Misplaced Pages, the key question is whether or not the reliability of the image has been established. If the intent of the image is to display 100 dots with 99 having the same appearance and 1 with a different appearance, then ordinary math skills are sufficient and so any Misplaced Pages editor can evaluate the reliability without performing original research. If the intent is to depict a likeness of a specific person, then there needs to be reliable sources indicating that the image is sufficiently accurate. This is the same for actual photographs, re-touched ones, drawings, hedcuts, and so forth. Typically this can be established by a reliable source using that image with a corresponding description or context. isaacl (talk) 17:59, 4 January 2025 (UTC)
- Support Blanket Ban on AI generated imagery per most of the discussion above. It's a very slippery slope. I might consider a very narrow exception for an AI generated image of a person that was specifically authorized or commissioned by the subject. -Ad Orientem (talk) 02:45, 5 January 2025 (UTC)
- Oppose blanket ban It is far too early to take an absolutist position, particularly when the potential is enormous. Misplaced Pages is already an image desert and to reject something that is only at the cusp of development is unwise. scope_creep 20:11, 5 January 2025 (UTC)
- Support blanket ban on AI-generated images except in ABOUTSELF contexts. An encyclopedia should not be using fake images. I do not believe that further nuance is necessary. LEPRICAVARK (talk) 22:44, 5 January 2025 (UTC)
- Support blanket ban as the general guideline, as accuracy, personal rights, and intellectual rights issues are very weighty, here (as is disclosure to the reader). (I could see perhaps supporting adoption of a sub-guideline for ways to come to a broad consensus in individual use cases (carve-outs, except for BLPs) which address all the weighty issues on an individual use basis -- but that needs to be drafted and agreed to, and there is no good reason to wait to adopt the general ban in the meantime). Alanscottwalker (talk) 15:32, 8 January 2025 (UTC)
- Support indefinite blanket ban except ABOUTSELF and simple abstract examples (such as the image of 99 dots above). In addition to all the issues raised above, including copyvio and creator consent issues, in cases of photorealistic images it may never be obvious to all readers exactly which elements of the image are guesswork. The cormorant picture at the head of the section reminded me of the first video of a horse in gallop, in 1878. Had AI been trained on paintings of horses instead of actual videos and used to "improve" said videos, we would've ended up with serious delusions about the horse's gait. We don't know what questions -- scientific or otherwise -- photography will be used to settle in the coming years, but we do know that consumer-grade photo AI has already been trained to intentionally fake detail to draw sales, such as on photos of the Moon. I think it's unrealistic to require contributors to take photos with expensive cameras or specially-made apps, but Misplaced Pages should act to limit its exposure to this kind of technology as far as is feasible. Daß Wölf 20:57, 9 January 2025 (UTC)
- Support at least some sort of recommendation against the use of AI-generated imagery in non-AI contexts, except obviously where the topic of the article is specifically related to AI-generated imagery (Generative artificial intelligence, Springfield pet-eating hoax, AI slop, etc.). At the very least the consensus below about BLPs should be extended to all historical biographies, as all the examples I've seen (see WP:AIIMAGE) fail WP:IMAGERELEVANCE (failing to add anything to the sourced text) and serve only to mislead the reader. We include images for a reason, not just for decoration. I'm also reminded of the essay WP:PORTRAIT, and the distinction it makes between notable depictions of historical people (which can be useful to illustrate articles) and non-notable fictional portraits which in its (imo well argued) view
have no legitimate encyclopedic function whatsoever
. Cakelot1 ☞️ talk 14:36, 14 January 2025 (UTC)
- Anything that fails WP:IMAGERELEVANCE can be, should be, and is, excluded from use already, likewise any images which
have no legitimate encyclopedic function whatsoever.
This applies to AI and non-AI images equally and identically. Just as we don't have or need a policy or guideline specifically saying don't use irrelevant or otherwise non-encyclopaedic watercolour images in articles, we don't need any policy or guideline specifically calling out AI - because it would (as you demonstrate) need to carve out exceptions for when its use is relevant. Thryduulf (talk) 14:45, 14 January 2025 (UTC)
- That would be an easy change; just add a sentence like "AI-generated images of individual people are primarily decorative and should not be used". We should probably do that no matter what else is decided. WhatamIdoing (talk) 23:24, 14 January 2025 (UTC)
- Except that is both not true and irrelevant. Some AI-generated images of individual people are primarily decorative, but not all of them. If an image is purely decorative it shouldn't be used, regardless of whether it is AI-generated or not. Thryduulf (talk) 13:43, 15 January 2025 (UTC)
- Can you give an example of an AI-generated image of an individual person that is (a) not primarily decorative and also (b) not copied from the person's social media/own publications, and that (c) at least some editors think would be a good idea?
- "Hey, AI, please give me a realistic-looking photo of this person who died in the 12th century" is not it. "Hey, AI, we have no freely licensed photos of this celebrity, so please give me a line-art caricature" is not it. What is? WhatamIdoing (talk) 17:50, 15 January 2025 (UTC)
- Criteria (b) and (c) were not part of the statement I was responding to, and make it a very significantly different assertion. I will assume that you are not making motte-and-bailey arguments in bad faith, but the frequent fallacious argumentation in these AI discussions is getting tiresome.
- Even with the additional criteria it is still irrelevant - if no editor thinks an image is a good idea, then it won't be used in an article regardless of why they don't think it's a good idea. If some editors think an individual image is a good idea then it's obviously potentially encyclopaedic and needs to be judged on its merits (whether it is AI-generated is completely irrelevant to it's encyclopaedic value). An image that the subject uses on their social media/own publications to identify themselves (for example as an avatar) is the perfect example of the type of image which is frequently used in articles about that individual. Thryduulf (talk) 18:56, 15 January 2025 (UTC)
=== BLPs ===
CONSENSUS AGAINST: There is clear consensus against using AI-generated imagery to depict BLP subjects. Marginal cases (such as major AI enhancement or where an AI-generated image of a living person is itself notable) can be worked out on a case-by-case basis. I will add a sentence reflecting this consensus to the image use policy and the BLP policy. —Ganesha811 (talk) 14:02, 8 January 2025 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Are AI-generated images (generated via text prompts, see also: text-to-image model) okay to use to depict BLP subjects? The Laurence Boccolini example was mentioned in the opening paragraph. The image was created using Grok / Aurora,
a text-to-image model developed by xAI, to generate images... As with other text-to-image models, Aurora generates images from natural language descriptions, called prompts. Some1 (talk) 12:34, 31 December 2024 (UTC)
03:58, January 3, 2025: Note that these images can either be photorealistic in style (such as the Laurence Boccolini example) or non-photorealistic in style (see the Germán Larrea Mota-Velasco example, which was generated using DALL-E, another text-to-image model).
Some1 (talk) 11:10, 3 January 2025 (UTC)
notified: Misplaced Pages talk:Biographies of living persons, Misplaced Pages talk:No original research, Misplaced Pages talk:Manual of Style/Images, Template:Centralized discussion -- Some1 (talk) 11:27, 2 January 2025 (UTC)
- No. I don't think they are at all, as, despite looking photorealistic, they are essentially just speculation about what the person might look like. A photorealistic image conveys the look of something up to the details, and giving a false impression of what the person looks like (or, at best, just guesswork) is actively counterproductive. (Edit 21:59, 31 December 2024 (UTC): clarified bolded !vote since everyone else did it) Chaotic Enby (talk · contribs) 12:46, 31 December 2024 (UTC)
- That AI generated image looks like Dick Cheney wearing a Laurence Boccolini suit. ScottishFinnishRadish (talk) 12:50, 31 December 2024 (UTC)
- There are plenty of non-free images of Laurence Boccolini with which this image can be compared. Assuming at least most of those are accurate representations of them (I've never heard of them before and have no other frame of reference) the image above is similar to but not an accurate representation of them (most obviously but probably least significantly, in none of the available images are they wearing that design of glasses). This means the image should not be used to identify them unless they use it to identify themselves. It should not be used elsewhere in the article unless it has been the subject of notable commentary. That it is an AI image makes absolutely no difference to any of this. Thryduulf (talk) 16:45, 31 December 2024 (UTC)
- No. Well, that was easy. They are fake images; they do not actually depict the person. They depict an AI-generated simulation of a person that may be inaccurate. Cremastra 🎄 u — c 🎄 20:00, 31 December 2024 (UTC)
- Even if the subject uses the image to identify themselves, the image is still fake. Cremastra (u — c) 19:17, 2 January 2025 (UTC)
- No, with the caveat that it's mostly on the grounds that we don't have enough information and when it comes to BLP we are required to exercise caution. If at some point in the future AI-generated photorealistic simulacrums of living people become mainstream with major newspapers and academic publishers it would be fair to revisit any restrictions, but in this I strongly believe that we should follow, not lead. Horse Eye's Back (talk) 20:37, 31 December 2024 (UTC)
- No. The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person. —pythoncoder (talk | contribs) 21:30, 31 December 2024 (UTC)
- No except perhaps, maybe, if the subject explicitly is already using that image to represent themselves. But mostly no. -Kj cheetham (talk) 21:32, 31 December 2024 (UTC)
- Yes, when that image is an accurate representation and better than any available alternative, used by the subject to represent themselves, or the subject of notable commentary. However, as these are the exact requirements to use any image to represent a BLP subject this is already policy. Thryduulf (talk) 21:46, 31 December 2024 (UTC)
- How well can we determine how accurate a representation it is? Looking at the example above, I'd argue that the real Laurence Boccolini has a somewhat rounder/pointier chin, a wider mouth, and possibly different eye wrinkles, although the latter probably depends quite a lot on the facial expression.
- How accurate a representation a photorealistic AI image is is ultimately a matter of editor opinion. Cremastra 🎄 u — c 🎄 21:54, 31 December 2024 (UTC)
How well can we determine how accurate a representation it is?
in exactly the same way that we can determine whether a human-crafted image is an accurate representation. How accurate a representation any image is is ultimately a matter of editor opinion. Whether an image is AI or not is irrelevant. I agree the example image above is not sufficiently accurate, but we wouldn't ban photoshopped images because one example was not deemed accurate enough, because we are rational people who understand that one example is not representative of an entire class of images - at least when the subject is something other than AI. Thryduulf (talk) 23:54, 31 December 2024 (UTC)
- I think except in a few exceptional circumstances of actual complex restorations, human photoshopping is not going to change or distort a person's appearance in the same way an AI image would. Modifications done by a person who is paying attention to what they are doing and merely enhancing an image, by a person who is aware, while they are making changes, that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator. Cremastra 🎄 u — c 🎄 00:14, 1 January 2025 (UTC)
- I'm guessing your filter bubble doesn't include Facetune and their notorious Filter (social media)#Beauty filter problems. WhatamIdoing (talk) 02:46, 2 January 2025 (UTC)
- A photo of a person can be connected to a specific time, place, and subject that existed. It can be compared to other images sharing one or more of those properties. A photo that was PhotoShopped is still either a generally faithful reproduction of a scene that existed, or has significant alterations that can still be attributed to a human or at least to a specific algorithm, e.g. filters. The artistic license of a painting can still be attributed to a human and doesn't run much risk of being misidentified as real, unless it's by Chuck Close et al. An AI-generated image cannot be connected to a particular scene that ever existed and cannot be attributable to a human's artistic license (and there is legal precedent that such images are not copyrightable to the prompter specifically because of this). Individual errors in a human-generated artwork are far more predictable, understandable, identifiable, traceable... than those in AI-generated images. We have innate assumptions when we encounter real images or artwork that are just not transferable. These are meaningful differences to the vast majority of people: according to a Getty poll, 87% of respondents want AI-generated art to at least be transparent, and 98% consider authentic images "pivotal in establishing trust". And even if you disagree with all that, can you not see the larger problem of AI images on Misplaced Pages getting propagated into generative AI corpora? JoelleJay (talk) 04:20, 2 January 2025 (UTC)
- I agree that our old assumptions don't hold true. I think the world will need new assumptions. We will probably have those in place in another decade or so.
- I think we're Misplaced Pages:Here to build an encyclopedia, not here to protect AI engines from ingesting AI-generated artwork. Figuring out what they should ingest is their problem, not mine. WhatamIdoing (talk) 07:40, 2 January 2025 (UTC)
- Absolutely no fake/AI images of people, photorealistic or otherwise. How is this even a question? These images are fake. Readers need to be able to trust Misplaced Pages, not navigate around whatever junk someone has created with a prompt and presented as somehow representative. This includes text. :bloodofox: (talk) 22:24, 31 December 2024 (UTC)
- No except for edge cases (mostly, if the image itself is notable enough to go into the article). Gnomingstuff (talk) 22:31, 31 December 2024 (UTC)
- Absolutely not, except for ABOUTSELF. "They're fine if they're accurate enough" is an obscenely naive stance. JoelleJay (talk) 23:06, 31 December 2024 (UTC)
- No with no exceptions. Carrite (talk) 23:54, 31 December 2024 (UTC)
- No. We don't permit falsifications in BLPs. Seraphimblade 00:30, 1 January 2025 (UTC)
- For the requested clarification by Some1, no AI-generated images (except when the image itself is specifically discussed in the article, and even then it should not be the lead image and it should be clearly indicated that the image is AI-generated), no drawings, no nothing of that sort. Actual photographs of the subject, nothing else. Articles are not required to have images at all; no image whatsoever is preferable to something which is not an image of the person. Seraphimblade 05:42, 3 January 2025 (UTC)
- No, but with exceptions. I could imagine a case where a specific AI-generated image has some direct relevance to the notability of the subject of a BLP. In such cases, it should be included, if it could be properly licensed. But I do oppose AI-generated images as portraits of BLP subjects. —David Eppstein (talk) 01:27, 1 January 2025 (UTC)
- Since I was pinged on this point: when I wrote "I do oppose AI-generated images as portraits", I meant exactly that, including all AI-generated images, such as those in a sketchy or artistic style, not just the photorealistic ones. I am not opposed to certain uses of AI-generated images in BLPs when they are not the main portrait of the subject, for instance in diagrams (not depicting the subject) to illustrate some concept pioneered by the subject, or in case someone becomes famous for being the subject of an AI-generated image. —David Eppstein (talk) 05:41, 3 January 2025 (UTC)
- No, and no exceptions or do-overs. Better to have no images (or Stone-Age style cave paintings) than Frankenstein images, no matter how accurate or artistic. Akin to shopped manipulated photographs, they should have no room (or room service) at the WikiInn. Randy Kryn (talk) 01:34, 1 January 2025 (UTC)
- Some "shopped manipulated photographs" are misleading and inaccurate, others are not. We can and do exclude the former from the parts of the encyclopaedia where they don't add value without specific policies and without excluding them where they are relevant (e.g. Photograph manipulation) or excluding those that are not misleading or inaccurate. AI images are no different. Thryduulf (talk) 02:57, 1 January 2025 (UTC)
- Assuming we know. Assuming it's material. The infobox image in – and the only extant photo of – Blind Lemon Jefferson was "photoshopped" by a marketing team, maybe half a century before Adobe Photoshop was created. They wanted to show him wearing a necktie. I don't think that this level of manipulation is actually a problem. WhatamIdoing (talk) 07:44, 2 January 2025 (UTC)
- Some "shopped manipulated photographs" are misleading and inaccurate, others are not. We can and do exclude the former from the parts of the encyclopaedia where they don't add value without specific policies and without excluding them where they are relevant (e.g. Photograph manipulation) or excluding those that are not misleading or inaccurate. AI images are no different. Thryduulf (talk) 02:57, 1 January 2025 (UTC)
- Yes, so long as it is an accurate representation. Hawkeye7 (discuss) 03:40, 1 January 2025 (UTC)
- No not for BLPs. Traumnovelle (talk) 04:15, 1 January 2025 (UTC)
- No Not at all relevant for pictures of people, as the accuracy is not enough and can misrepresent. Also (and I'm shocked as it seems no one has mentioned this), what about Copyright issues? Who holds the copyright for an AI-generated image? The user who wrote the prompt? The creator(s) of the AI model? The creator(s) of the images in the database that the AI used to create the images? It sounds to me like such a clusterfuck of copyright issues that I don't understand how this is even a discussion. --SuperJew (talk) 07:10, 1 January 2025 (UTC)
- Under the US law / copyright office, machine-generated images including those by AI cannot be copyrighted. That also means that AI images aren't treated as derivative works.
What is still under legal concern is whether the use of bodies of copyrighted works without any approval or license from the copyright holders to train AI models is fair use or not. There are multiple court cases where this is the primary challenge, and none has yet reached a decision. Assuming the courts rule that there was no fair use, that would either require the entity that owns the AI to pay fines and ongoing licensing costs, or delete their trained model to start afresh with freely licensed works, but in either case, that would not impact how we'd use any resulting AI image from a copyright standpoint. — Masem (t) 14:29, 1 January 2025 (UTC)
- No, I'm in agreeance with Seraphimblade here. Whether we like it or not, the usage of a portrait on an article implies that it's just that, a portrait. It's incredibly disingenuous to users to represent an AI generated photo as truth. Doawk7 (talk) 09:32, 1 January 2025 (UTC)
- So you just said a portrait can be used because wikipedia tells you it's a portrait, and thus not a real photo. Can't AI be exactly the same? As long as we tell readers it is an AI representation? Heck, most AI looks closer to the real thing than any portrait. Fyunck(click) (talk) 10:07, 2 January 2025 (UTC)
- To clarify, I didn't mean "portrait" as in "painting," I meant it as "photo of person."
- However, I really want to stick to what you say at the end there:
Heck, most AI looks closer to the real thing than any portrait.
- That's exactly the problem: by looking close to the "real thing" it misleads users into believing a non-existent source of truth.
- Per the wording of the RfC of "
depict BLP subjects
," I don't think there would be any valid case to utilize AI images. I hold a strong No. Doawk7 (talk) 04:15, 3 January 2025 (UTC)
- No. We should not use AI-generated images for situations like this, they are basically just guesswork by a machine as Quark said and they can misinform readers as to what a person looks like. Plus, there's a big grey area regarding copyright. For an AI generator to know what somebody looks like, it has to have photos of that person in its dataset, so it's very possible that they can be considered derivative works or copyright violations. Using an AI image (derivative work) to get around the fact that we have no free images is just fair use with extra steps. Di (they-them) (talk) 19:33, 1 January 2025 (UTC)
- Maybe There was a prominent BLP image which we displayed on the main page recently. (right) This made me uneasy because it was an artistic impression created from photographs rather than life. And it was "colored digitally". Functionally, this seems to be exactly the same sort of thing as the Laurence Boccolini composite. The issue should not be whether there's a particular technology label involved but whether such creative composites and artists' impressions are acceptable as better than nothing. Andrew🐉(talk) 08:30, 1 January 2025 (UTC)
- Except it is clear to everyone that the illustration to the right is a sketch, a human rendition, while in the photorealistic image above, it is less clear. Cremastra (u — c) 14:18, 1 January 2025 (UTC)
- Except it says right below it "AI-generated image of Laurence Boccolini." How much more clear can it be when it says point-blank "AI-generated image." Fyunck(click) (talk) 10:12, 2 January 2025 (UTC)
- Commons descriptions do not appear on our articles. CMD (talk) 10:28, 2 January 2025 (UTC)
- People taking a quick glance at an infobox image that looks pretty like a photograph are not going to scrutinize commons tagging. Cremastra (u — c) 14:15, 2 January 2025 (UTC)
- Keep in mind that many AIs can produce works that match various styles, not just photographic quality. It is still possible for AI to produce something that looks like a watercolor or sketched drawing. — Masem (t) 14:33, 1 January 2025 (UTC)
- Yes, you're absolutely right. But so far photorealistic images have been the most common to illustrate articles (see Misplaced Pages:WikiProject AI Cleanup/AI images in non-AI contexts for some examples). Cremastra (u — c) 14:37, 1 January 2025 (UTC)
- Then push to ban photorealistic images, rather than pushing for a blanket ban that would also apply to obvious sketches. —David Eppstein (talk) 20:06, 1 January 2025 (UTC)
- Same thing I wrote above, but for "photoshopping" read "drawing": (Bold added for emphasis)
...human drawing is not going to change or distort a person's appearance in the same way an AI image would. Drawings done by a person who is paying attention to what they are doing, by a person who is aware, while they are making the drawing, that they might be distorting the image and is, I only assume, trying to minimise it – those careful modifications shouldn't be equated with something made up by an AI image generator.
Cremastra (u — c) 20:56, 1 January 2025 (UTC)
- @Cremastra then why are you advocating for a ban on AI images rather than a ban on distorted images? Remember that with careful modifications by someone who is aware of what they are doing that AI images can be made more accurate. Why are you assuming that a human artist is trying to minimise the distortions but someone working with AI is not? Thryduulf (talk) 22:12, 1 January 2025 (UTC)
- I believe that AI-generated images are fundamentally misleading because they are a simulation by a machine rather than a drawing by a human. To quote pythoncoder above:
The use of AI-generated images to depict people (living or otherwise) is fundamentally misleading, because the images are not actually depicting the person.
Cremastra (u — c) 00:16, 2 January 2025 (UTC)
- Once again your actual problem is not AI, but with misleading images. Which can be, and are, already a violation of policy. Thryduulf (talk) 01:17, 2 January 2025 (UTC)
- I think all AI-generated images, except simple diagrams as WhatamIdoing points out above, are misleading. So yes, my problem is with misleading images, which includes all photorealistic images generated by AI, which is why I support this proposal for a blanket ban in BLPs and medical articles. Cremastra (u — c) 02:30, 2 January 2025 (UTC)
- To clarify, I'm willing to make an exception in this proposal for very simple geometric diagrams. Cremastra (u — c) 02:38, 2 January 2025 (UTC)
- Despite the fact that not all AI-generated images are misleading, not all misleading images are AI-generated and it is not always possible to tell whether an image is or is not AI-generated? Thryduulf (talk) 02:58, 2 January 2025 (UTC)
- Enforcement is a separate issue. Whether or not all (or the vast majority) of AI images are misleading is the subject of this dispute.
- I'm not going to mistreat the horse further, as we've each made our points and understand where the other stands. Cremastra (u — c) 15:30, 2 January 2025 (UTC)
- Even "simple diagrams" are not clear-cut. The process of AI-generating any image, no matter how simple, is still very complex and can easily follow any number of different paths to meet the prompt constraints. These paths through embedding space are black boxes and the likelihood they converge on the same output is going to vary wildly depending on the degrees of freedom in the prompt, the dimensionality of the embedding space, token corpus size, etc. The only thing the user can really change, other than switching between models, is the prompt, and at some point constructing a prompt that is guaranteed to yield the same result 100% of the time becomes a Borgesian exercise. This is in contrast with non-generative AI diagram-rendering software that follow very fixed, reproducible, known paths. JoelleJay (talk) 04:44, 2 January 2025 (UTC)
- Why does the path matter? If the output is correct it is correct no matter what route was taken to get there. If the output is incorrect it is incorrect no matter what route was taken to get there. If it is unknown or unknowable whether the output is correct or not that is true no matter what route was taken to get there. Thryduulf (talk) 04:48, 2 January 2025 (UTC)
- If I use BioRender or GraphPad to generate a figure, I can be confident that the output does not have errors that would misrepresent the underlying data. I don't have to verify that all 18,000 data points in a scatter plot exist in the correct XYZ positions because I know the method for rendering them is published and empirically validated. Other people can also be certain that the process of getting from my input to the product is accurate and reproducible, and could in theory reconstruct my raw data from it. AI-generated figures have no prescribed method of transforming input beyond what the prompt entails; therefore I additionally have to be confident in how precise my prompt is and confident that the training corpus for this procedure is so accurate that no error-producing paths exist (not to mention absolutely certain that there is no embedded contamination from prior prompts). Other people have all those concerns, and on top of that likely don't have access to the prompt or the raw data to validate the output, nor do they necessarily know how fastidious I am about my generative AI use. At least with a hand-drawn diagram viewers can directly transfer their trust in the author's knowledge and reliability to their presumptions about the diagram's accuracy. JoelleJay (talk) 05:40, 2 January 2025 (UTC)
- If you've got 18,000 data points, we are beyond the realm of "simple geometric diagrams". WhatamIdoing (talk) 07:47, 2 January 2025 (UTC)
- The original "simple geometric diagrams" comment was referring to your 100 dots image. I don't think increasing the dots materially changes the discussion beyond increasing the laboriousness of verifying the accuracy of the image. Photos of Japan (talk) 07:56, 2 January 2025 (UTC)
- Yes, but since "the laboriousness of verifying the accuracy of the image" is exactly what she doesn't want to undertake for 18,000 dots, then I think that's very relevant. WhatamIdoing (talk) 07:58, 2 January 2025 (UTC)
- The original "simple geometric diagrams" comment was referring to your 100 dots image. I don't think increasing the dots materially changes the discussion beyond increasing the laboriousness of verifying the accuracy of the image. Photos of Japan (talk) 07:56, 2 January 2025 (UTC)
- If you've got 18,000 data points, we are beyond the realm of "simple geometric diagrams". WhatamIdoing (talk) 07:47, 2 January 2025 (UTC)
- If I use BioRender or GraphPad to generate a figure, I can be confident that the output does not have errors that would misrepresent the underlying data. I don't have to verify that all 18,000 data points in a scatter plot exist in the correct XYZ positions because I know the method for rendering them is published and empirically validated. Other people can also be certain that the process of getting from my input to the product is accurate and reproducible, and could in theory reconstruct my raw data from it. AI-generated figures have no prescribed method of transforming input beyond what the prompt entails; therefore I additionally have to be confident in how precise my prompt is and confident that the training corpus for this procedure is so accurate that no error-producing paths exist (not to mention absolutely certain that there is no embedded contamination from prior prompts). Other people have all those concerns, and on top of that likely don't have access to the prompt or the raw data to validate the output, nor do they necessarily know how fastidious I am about my generative AI use. At least with a hand-drawn diagram viewers can directly transfer their trust in the author's knowledge and reliability to their presumptions about the diagram's accuracy. JoelleJay (talk) 05:40, 2 January 2025 (UTC)
- Why does the path matter? If the output is correct it is correct no matter what route was taken to get there. If the output is incorrect it is incorrect no matter what route was taken to get there. If it is unknown or unknowable whether the output is correct or not that is true no matter what route was taken to get there. Thryduulf (talk) 04:48, 2 January 2025 (UTC)
- I think all AI-generated images, except simple diagrams as WhatamIdoing point out above, are misleading. So yes, my problem is with misleading images, which includes all photorealistic images generated by AI, which is why I support this proposal for a blanket ban in BLPs and medical articles. Cremastra (u — c) 02:30, 2 January 2025 (UTC)
- Once again your actual problem is not AI, but with misleading images. Which can be, and are, already a violation of policy. Thryduulf (talk) 01:17, 2 January 2025 (UTC)
- I believe that AI-generated images are fundamentally misleading because they are a simulation by a machine rather than a drawing by a human. To quote pythoncoder above:
- @Cremastra then why are you advocating for a ban on AI images rather than a ban on distorted images? Remember that with careful modifications by someone who is aware of what they are doing that AI images can be made more accurate. Why are you assuming that a human artist is trying to minimise the distortions but someone working with AI is not? Thryduulf (talk) 22:12, 1 January 2025 (UTC)
- Then push to ban photorealistic images, rather than pushing for a blanket ban that would also apply to obvious sketches. —David Eppstein (talk) 20:06, 1 January 2025 (UTC)
- Yes, you're absolutely right. But so far photorealistic images have been the most common to illustrate articles (see Misplaced Pages:WikiProject AI Cleanup/AI images in non-AI contexts for some examples. Cremastra (u — c) 14:37, 1 January 2025 (UTC)
- Except it says right below it "AI-generated image of Laurence Boccolini." How much more clear can it be when it say point-blank "AI-generated image." Fyunck(click) (talk) 10:12, 2 January 2025 (UTC)
- And where is that cutoff supposed to be? 1000 dots? A single straight line? An atomic diagram? What is "simple" to someone unfamiliar with a topic may be more complex. And I don't want to count 100 dots either! JoelleJay (talk) 17:43, 2 January 2025 (UTC)
- Maybe you don't. But I know for certain that you can count 10 across, 10 down, and multiply those two numbers to get 100. That's what I did when I made the image, after all. WhatamIdoing (talk) 07:44, 3 January 2025 (UTC)
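(Aside, as a concrete illustration of the reproducibility point raised above with the BioRender/GraphPad and 100-dots examples: the lines below are a minimal sketch only, assuming Python with numpy and matplotlib rather than any tool named in this discussion. A diagram produced by a short, published script can be re-generated and machine-checked by anyone, which is the contrast being drawn with prompt-based generation.)

    import numpy as np
    import matplotlib
    matplotlib.use("Agg")  # render to a file, no display needed
    import matplotlib.pyplot as plt

    # Build a 10 x 10 grid of dots; the same script always produces the same image.
    xs, ys = np.meshgrid(np.arange(10), np.arange(10))
    points = np.column_stack([xs.ravel(), ys.ravel()])
    assert len(points) == 10 * 10  # "count 10 across, 10 down, and multiply"

    fig, ax = plt.subplots(figsize=(4, 4))
    ax.scatter(points[:, 0], points[:, 1], s=40, color="black")
    ax.set_axis_off()
    fig.savefig("hundred_dots.png", dpi=150)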
- Comment: when you Google search someone (at least from the Chrome browser), often the link to the Misplaced Pages article includes a thumbnail of the lead photo as a preview. Even if the photo is labelled as an AI image in the article, people looking at the thumbnail from Google would be misled (if the image is chosen for the preview). Photos of Japan (talk) 09:39, 1 January 2025 (UTC)
- This is why we should not use inaccurate images, regardless of how the image was created. It has absolutely nothing to do with AI. Thryduulf (talk) 11:39, 1 January 2025 (UTC)
- Already opposed a blanket ban: It's unclear to me why we have a separate BLP subsection, as BLPs are already included in the main section above. Anyway, I expressed my views there. MichaelMaggs (talk)
- Some editors might oppose a blanket ban on all AI-generated images while, at the same time, being against using AI-generated images (created by using text prompts/text-to-image models) to depict living people. Some1 (talk) 14:32, 1 January 2025 (UTC)
- No For at least now, let's not let the problems of AI intrude into BLP articles which need to have the highest level of scrutiny to protect the person represented. Other areas on WP may benefit from AI image use, but let's keep it far out of BLP at this point. --Masem (t) 14:35, 1 January 2025 (UTC)
- I am not a fan of “banning” AI images completely… but I agree that BLPs require special handling. I look at AI imagery as being akin to a computer generated painting. In a BLP, we allow paintings of the subject, but we prefer photos over paintings (if available). So… we should prefer photos over AI imagery. That said, AI imagery is getting good enough that it can be mistaken for a photo… so… If an AI generated image is the only option (ie there is no photo available), then the caption should clearly indicate that we are using an AI generated image. And that image should be replaced as soon as possible with an actual photograph. Blueboar (talk) 14:56, 1 January 2025 (UTC)
- The issue with the latter is that Misplaced Pages images get picked up by Google and other search engines, where the caption isn't there anymore to add the context that a photorealistic image was AI-generated. Chaotic Enby (talk · contribs) 15:27, 1 January 2025 (UTC)
- We're here to build an encyclopedia, not to protect commercial search engine companies.
- I think my view aligns with Blueboar's (except that I find no firm preference for photos over classical portrait paintings): We shouldn't have inaccurate AI images of people (living or dead). But the day appears to be coming when AI will generate accurate ones, or at least ones that are close enough to accurate that we can't tell the difference unless the uploader voluntarily discloses that information. Once we can no longer tell the difference, what's the point in banning them? Images need to look like the thing being depicted. When we put a photorealistic image in an article, we could be said to be implicitly claiming that the image looks like whatever's being depicted. We are not necessarily warranting that the image was created through a specific process, but the image really does need to look like the subject. WhatamIdoing (talk) 03:12, 2 January 2025 (UTC)
- You are presuming that sufficient accuracy will prevent us from knowing whether someone is uploading an AI photo, but that is not the case. For instance, if someone uploads large amounts of "photos" of famous people, and can't account for how they got them (e.g. can't give a source where they scraped them from, or dates or any Exif metadata at all for when they were taken), then it will still be obvious that they are likely using AI. Photos of Japan (talk) 17:38, 3 January 2025 (UTC)
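(Aside: a minimal sketch of the kind of metadata check described above, assuming Python with the Pillow library; the file name is hypothetical. A camera photo normally carries an Exif block with capture dates, while text-to-image output typically does not, so the absence of any date is one of the signals an uploader could be asked to account for.)

    from PIL import Image, ExifTags

    def exif_capture_date(path):
        # Return the first date-like Exif value found, or None if the file has no usable Exif dates.
        exif = Image.open(path).getexif()
        for tag_id, value in exif.items():
            if ExifTags.TAGS.get(tag_id) in ("DateTime", "DateTimeOriginal"):
                return value
        return None  # a fuller check would also walk the Exif sub-IFD

    print(exif_capture_date("upload.jpg"))  # hypothetical file; None means no capture date recorded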
- As another editor pointed out in their comment, there's the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet, especially on a site such as Misplaced Pages and especially on their own biography. WP:BLP says the bios
must be written conservatively and with regard for the subject's privacy.
Some1 (talk) 18:37, 3 January 2025 (UTC)
Once we can no longer tell the difference, what's the point in banning them?
Sounds like a wolf's in sheep's clothing to me. Just because the surface appeal of fake pictures gets better, doesn't mean we should let the horse in. Cremastra (u — c) 18:47, 3 January 2025 (UTC)
- If there are no appropriately-licensed images of a person, then by definition any AI-generated image of them will be either a copyright infringement or a complete fantasy. JoelleJay (talk) 04:48, 2 January 2025 (UTC)
- Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant: If an image is a copyvio we can't use it and it is irrelevant why it is a copyvio. If an image is a "complete fantasy" then it is exactly as unusable as a complete fantasy generated by non-AI means, so again AI is irrelevant. I've had to explain this multiple times in this discussion, so read that for more detail and note the lack of refutation. Thryduulf (talk) 04:52, 2 January 2025 (UTC)
- But we can assume good faith that a human isn't blatantly copying something. We can't assume that from an LLM like Stability AI which has been shown to even copy the watermark from Getty's images. Photos of Japan (talk) 05:50, 2 January 2025 (UTC)
- Ooooh, I'm not sure that we can assume that humans aren't blatantly copying something. We can assume that they meant to be helpful, but that's not quite the same thing. WhatamIdoing (talk) 07:48, 2 January 2025 (UTC)
- Oppose. Yes. I echo my comments from the other day regarding BLP illustrations: lethargilistic (talk) 15:41, 1 January 2025 (UTC)
What this conversation is really circling around is banning entire skillsets from contributing to Misplaced Pages merely because some of us are afraid of AI images and some others of us want to engineer a convenient, half-baked, policy-level "consensus" to point to when they delete quality images from Misplaced Pages. Every time someone generates text based on a source, they are doing some acceptable level of interpretation to extract facts or rephrase it around copyright law, and I don't think illustrations should be considered so severely differently as to justify a categorical ban. For instance, the Gisele Pelicot portrait is based on non-free photos of her. Once the illustration exists, it is trivial to compare it to non-free images to determine if it is an appropriate likeness, which it is. That's no different than judging contributed text's compliance with fact and copyright by referring to the source. It shouldn't be treated differently just because most Wikipedians contribute via text.
Additionally, referring to interpretive skillsets that synthesize new information like, random example, statistical analysis. Excluding those from Misplaced Pages is current practice and not controversial. Meanwhile, I think the ability to create images is more fundamental than that. It's not (inherently) synthesizing new information. A portrait of a person (alongside the other examples in this thread) contains verifiable information. It is current practice to allow them to fill the gaps where non-free photos can't. That should continue. Honestly, it should expand.
- Additionally, in direct response to "these images are fake": All illustrations of a subject could be called "fake" because they are not photographs. (Which can also be faked.) The standard for the inclusion of an illustration on Misplaced Pages has never been photorealism, medium, or previous publication in a RS. The standard is how adequately it reflects the facts which it claims to depict. If there is a better image that can be imported to Misplaced Pages via fair use or a license, then an image can be easily replaced. Until such a better image has been sourced, it is absolutely bewildering to me that we would even discuss removing images of people from their articles. What a person looked like is one of the most basic things that people want to know when they look someone up on Misplaced Pages. Including an image of almost any quality (yes, even a cartoon) is practically by definition an improvement to the article and addressing an important need. We should be encouraging artists to continue filling the gaps that non-free images cannot fill, not creating policies that will inevitably expand into more general prejudices against all new illustrations on Misplaced Pages. lethargilistic (talk) 15:59, 1 January 2025 (UTC)
- By "Oppose", I'm assuming your answer to the RfC question is "Yes". And this RfC is about using AI-generated images (generated via text prompts, see also: text-to-image model) to depict BLP subjects, not regarding human-created drawings/cartoons/sketches, etc. of BLPs. Some1 (talk) 16:09, 1 January 2025 (UTC)
- I've changed it to "yes" to reflect the reversed question. I think all of this is related because there is no coherent distinguishing point; AI can be used to create images in a variety of styles. These discussions have shown that a policy of banning AI images will be used against non-AI images of all kinds, so I think it's important to say these kinds of things now. lethargilistic (talk) 16:29, 1 January 2025 (UTC)
- Photorealistic images scraped from who knows where from who knows what sources are without question simply fake photographs and also clear WP:OR and outright WP:SYNTH. There's no two ways about it. Articles do not require images: An article with some Frankenstein-ed image scraped from who knows what, where, and when that you "created" from a prompt is not an improvement over having no image at all. If we can't provide a quality image (like something you didn't cook up from a prompt) then people can find quality, non-fake images elsewhere. :bloodofox: (talk) 23:39, 1 January 2025 (UTC)
- I really encourage you to read the discussion I linked before because it is on the WP:NOR talk page. Images like these do not inherently include either OR or SYNTH, and the arguments that they do cannot be distinguished from any other user-generated image content. But, briefly, I never said articles required images, and this is not about what articles require. It is about improvements to the articles. Including a relevant picture where none exists is almost always an improvement, especially for subjects like people. Your disdain for the method the person used to make an image is irrelevant to whether the content of the image is actually verifiable, and the only thing we ought to care about is the content. lethargilistic (talk) 03:21, 2 January 2025 (UTC)
- Images like these are absolutely nothing more than synthesis in the purest sense of the word and are clearly a violation of WP:SYNTH: Again, you have no idea what data was used to generate these images and you're going to have a very hard time convincing anyone to describe them as anything other than outright fakes.
- A reminder that WP:SYNTH shuts down attempts at manipulation of images ("It is not acceptable for an editor to use photo manipulation to distort the facts or position illustrated by an image. Manipulated images should be prominently noted as such. Any manipulated image where the encyclopedic value is materially affected should be posted to Misplaced Pages:Files for discussion. Images of living persons must not present the subject in a false or disparaging light.") and generating a photorealistic image (from who knows what!) is far beyond that.
- Fake images of people do not improve our articles in any way and only erode reader trust. What's next, an argument for the fake sources LLMs also love to "hallucinate"? :bloodofox: (talk) 03:37, 2 January 2025 (UTC)
- So, if you review the first sentence of SYNTH, you'll see it has no special relevance to this discussion: "Do not combine material from multiple sources to state or imply a conclusion not explicitly stated by any of the sources." My primary example has been a picture of a person; what a person looks like is verifiable by comparing the image to non-free images that cannot be used on Misplaced Pages. If the image resembles the person, it is not SYNTH. An illustration of a person created and intended to look like that person is not a manipulation. The training data used to make the AI is irrelevant to whether the image in fact resembles the person. You should also review WP:NOTSYNTH because SYNTH is not a policy; NOR is the policy: "If a putative SYNTH doesn't constitute original research, then it doesn't constitute SYNTH." Additionally, not all synthesis is even SYNTH. A categorical rule against AI cannot be justified by SYNTH because it does not categorically apply to all use cases of AI. To do so would be illogical on top of ill-advised. lethargilistic (talk) 08:08, 2 January 2025 (UTC)
- "training data used to make the AI is irrelevant" — spoken like a true AI evangelist! Sorry, 'good enough' photorealism is still just synthetic slop, a fake image presented as real of a human being. A fake image of someone generated from who-knows-what that 'resembles' an article's subject is about as WP:SYNTH as it gets. Yikes. As for the attempts to pass off prompt-generated photorealistic fakes of people as somehow the same as someone's illustration, you're completely wasting your time. :bloodofox: (talk) 09:44, 2 January 2025 (UTC)
- NOR is a content policy and SYNTH is content guidance within NOR. Because you have admitted that this is not about the content for you, NOR and SYNTH are irrelevant to your argument, which boils down to WP:IDONTLIKEIT and, now, inaccurate personal attacks. Continuing this discussion between us would be pointless. lethargilistic (talk) 09:52, 2 January 2025 (UTC)
- This is in fact entirely about content (why the hell else would I bother?) but it is true that I also dismissed your pro-AI 'it's just like a human drawing a picture!' as outright nonsense a while back. Good luck convincing anyone else with that line - it didn't work here. :bloodofox: (talk) 09:59, 2 January 2025 (UTC)
- Maybe: there is an implicit assumption with this RFC that an AI generated image would be photorealistic. There hasn't been any discussion of an AI generated sketch. If you asked an AI to generate a sketch (that clearly looked like a sketch, similar to the Gisèle Pelicot example) then I would potentially be ok with it. Photos of Japan (talk) 18:14, 1 January 2025 (UTC)
- That's an interesting thought to consider. At the same time, I worry about (well-intentioned) editors inundating image-less BLP articles with AI-generated images in the style of cartoons/sketches (if only photorealistic ones are prohibited) etc. At least requiring a human to draw/paint/whatever creates a barrier to entry; these AI-generated images can be created in under a minute using these text-to-image models. Editors are already wary about human-created cartoon portraits (see the NORN discussion), now they'll be tasked with dealing with AI-generated ones in BLP articles. Some1 (talk) 20:28, 1 January 2025 (UTC)
- It sounds like your problem is not with AI but with cartoon/sketch images in BLP articles, so AI is once again completely irrelevant. Thryduulf (talk) 22:14, 1 January 2025 (UTC)
- That is a good concern you brought up. There is a possibility of the spamming of low quality AI-generated images which would be laborious to discuss on a case-by-case basis but easy to generate. At the same time though that is a possibility, but not yet an actuality, and WP:CREEP states that new policies should address current problems rather than hypothetical concerns. Photos of Japan (talk) 22:16, 1 January 2025 (UTC)
- Easy no for me. I am not against the use of AI images wholesale, but I do think that using AI to represent an existent thing such as a person or a place is too far. Even a tag wouldn't be enough for me. Cessaune 19:05, 1 January 2025 (UTC)
- No obviously, per previous discussions about cartoonish drawn images in BLPs. Same issue here as there, it is essentially original research and misrepresentation of a living person's likeness. Zaathras (talk) 22:19, 1 January 2025 (UTC)
- No to photorealistic, no to cartoonish... this is not a hard choice. The idea that "this has nothing to do with AI" when "AI" magnifies the problem to stupendous proportions is just not tenable. XOR'easter (talk) 23:36, 1 January 2025 (UTC)
- While AI might "amplify" the thing you dislike, that does not make AI the problem. The problem is whatever underlying thing is being amplified. Thryduulf (talk) 01:16, 2 January 2025 (UTC)
- The thing that amplifies the problem is necessarily a problem. XOR'easter (talk) 02:57, 2 January 2025 (UTC)
- That is arguable, but banning the amplifier does not do anything to solve the problem. In this case, banning the amplifier would cause multiple other problems that nobody supporting this proposal has even attempted to address, let alone mitigate. Thryduulf (talk) 03:04, 2 January 2025 (UTC)
- No for all people, per Chaotic Enby. Nikkimaria (talk) 03:23, 2 January 2025 (UTC) Add: no to any AI-generated images, whether photorealistic or not. Nikkimaria (talk) 04:00, 3 January 2025 (UTC)
- No - We should not be hosting faked images (except as notable fakes). We should also not be hosting copyvios ("Whether it would be a copyright infringement or not is both an unsettled legal question and not relevant" is just totally wrong - we should be steering clear of copyvios, and if the issue is unsettled then we shouldn't use them until it is).
- If people upload faked images to WP or Commons the response should be as it is now. The fact that fakes are becoming harder to detect simply from looking at them hardly affects this - we simply confirm when the picture was supposed to have been taken and examine the plausibility of it from there. FOARP (talk) 14:39, 2 January 2025 (UTC)
we should be steering clear of copyvio
we do - if an image is a copyright violation it gets deleted, regardless of why it is a copyright violation. What we do not do is ban using images that are not copyright violations because they are copyright violations. Currently the WMF lawyers and all the people on Commons who know more about copyright than I do say that at least some AI images are legally acceptable for us to host and use. If you want to argue that, then go ahead, but it is not relevant to this discussion.if people upload faked images the response should be as it is now
in other words you are saying that the problem is faked images not AI, and that current policies are entirely adequate to deal with the problem of faked images. So we don't need any specific rules for AI images - especially given that not all AI images are fakes. Thryduulf (talk) 15:14, 2 January 2025 (UTC)- The idea that
current policies are entirely adequate
is like saying that a lab shouldn't have specific rules about wearing eye protection when it already has a poster hanging on the wall that says "don't hurt yourself". XOR'easter (talk) 18:36, 2 January 2025 (UTC)- I rely on one of those rotating shaft warnings up in my workshop at home. I figure if that doesn't keep me safe, nothing will. ScottishFinnishRadish (talk) 18:41, 2 January 2025 (UTC)
- "
in other words you are saying that the problem is faked images not AI
" - AI generated images *are* fakes. This is merely confirming that for the avoidance of doubt. - "
at least some AI images are legally acceptable for us
" - Until they decide which ones that isn't much help. FOARP (talk) 19:05, 2 January 2025 (UTC)- Yes – what FOARP said. AI-generated images are fakes and are misleading. Cremastra (u — c) 19:15, 2 January 2025 (UTC)
- "
- Those specific rules exist because generic warnings have proven not to be sufficient. Nobody has presented any evidence that the current policies are not sufficient, indeed quite the contrary. Thryduulf (talk) 19:05, 2 January 2025 (UTC)
- No! This would be a massive can of worms; perhaps, however, we wish to cause problems in the new year. JuxtaposedJacob (talk) | :) | he/him | 15:00, 2 January 2025 (UTC)
- Noting that I think that no AI-generated images are acceptable in BLP articles, regardless of whether they are photorealistic or not. JuxtaposedJacob (talk) | :) | he/him | 15:40, 3 January 2025 (UTC)
- No, unless the AI image has encyclopedic significance beyond "depicts a notable person". AI images, if created by editors for the purpose of inclusion in Misplaced Pages, convey little reliable information about the person they depict, and the ways in which the model works are opaque enough to most people as to raise verifiability concerns. ModernDayTrilobite (talk • contribs) 15:25, 2 January 2025 (UTC)
- To clarify, do you object to uses of an AI image in a BLP when the subject uses that image for self-identification? I presume that AI images that have been the subject of notable discussion are an example of "significance beyond depict a notable person"? Thryduulf (talk) 15:54, 2 January 2025 (UTC)
- If the subject uses the image for self-identification, I'd be fine with it - I think that'd be analogous to situations such as "cartoonist represented by a stylized self-portrait", which definitely has some precedent in articles like Al Capp. I agree with your second sentence as well; if there's notable discussion around a particular AI image, I think it would be reasonable to include that image on Misplaced Pages. ModernDayTrilobite (talk • contribs) 19:13, 2 January 2025 (UTC)
- No, with obvious exceptions, including if the subject themself uses the image as their representation, or if the image is notable itself. Not including the lack of a free alternative, if there is no free alternative... where did the AI find data to build an image... non free too. Not including images generated by WP editors (that's kind of original research)... - Nabla (talk) 18:02, 2 January 2025 (UTC)
- Maybe I think the question is unfair as it is illustrated with what appears to be a photo of the subject but isn't. People are then getting upset that they've been misled. As others note, there are copyright concerns with AI reproducing copyrighted works that in turn make an image that is potentially legally unusable. But that is more a matter for Commons than for Misplaced Pages. As many have noted, a sketch or painting never claims to be an accurate depiction of a person, and I don't care if that sketch or painting was done by hand or an AI prompt. I strongly ask Some1 to abort the RFC. You've asked people to give a yes/no vote to what is a more complex issue. A further problem with the example used is the unfortunate prejudice on Misplaced Pages against user-generated content. While the text-generated AI of today is crude and random, there will come a point where many professionally published photos illustrating subjects, including people, are AI generated. Even today, your smartphone can create a groupshot where everyone is smiling and looking at the camera. It was "trained" on the 50 images it quickly took and responded to the built-in "text prompt" of "create a montage of these photos such that everyone is smiling and looking at the camera". This vote is a knee-jerk reaction to content that is best addressed by some other measure (such as that it is a misleading image). And a good example of asking people to vote way too early, when the issues haven't been thought through -- Colin° 18:17, 2 January 2025 (UTC)
- No This would very likely set a dangerous precedent. The only exception I think should be if the image itself is notable. If we move forward with AI images, especially for BLPs, it would only open up a whole slew of regulations and RfCs to keep them in check. Better no image than some digital multiverse version of someone that is "basically" them but not really. Not to mention the ethics/moral dilemma of creating fake photorealistic pictures of people and putting them on the internet. Tepkunset (talk) 18:31, 2 January 2025 (UTC)
- No. LLMs don't generate answers, they generate things that look like answers, but aren't; a lot of the time, that's good enough, but sometimes it very much isn't. It's the same issue for text-to-image models: they don't generate photos of people, they generate things that look like photos. Using them on BLPs is unacceptable. DS (talk) 19:30, 2 January 2025 (UTC)
- No. I would be pissed if the top picture of me on Google was AI-generated. I just don't think it's moral for living people. The exceptions given above by others are okay, such as if the subject uses the picture themselves or if the picture is notable (with context given). win8x (talk) 19:56, 2 January 2025 (UTC)
- No. Uploading alone, although mostly a Commons issue, would already a problem to me and may have personality rights issues. Illustrating an article with a fake photo (or drawing) of a living person, even if it is labeled as such, would not be acceptable. For example, it could end up being shown by search engines or when hovering over a Misplaced Pages link, without the disclaimer. ~ ToBeFree (talk) 23:54, 2 January 2025 (UTC)
- I was going to say no... but we allow paintings as portraits in BLPs. What's so different between an AI generated image, and a painting? Arguments above say the depiction may not be accurate, but the same is true of some paintings, right? (and conversely, not true of other paintings) ProcrastinatingReader (talk) 00:48, 3 January 2025 (UTC)
- A painting is clearly a painting; as such, the viewer knows that it is not an accurate representation of a particular reality. An AI-generated image made to look exactly like a photo, looks like a photo but is not. DS (talk) 02:44, 3 January 2025 (UTC)
- Not all paintings are clearly paintings. Not all AI-generated images are made to look like photographs. Not all AI-generated images made to look like photos do actually look like photos. This proposal makes no distinction. Thryduulf (talk) 02:55, 3 January 2025 (UTC)
- Not to mention, hyper-realism is a style an artist may use in virtually any medium. Colored pencils can be used to make extremely realistic portraits. If Misplaced Pages would accept an analog substitute like a painting, there's no reason Misplaced Pages shouldn't accept an equivalent painting made with digital tools, and there's no reason Misplaced Pages shouldn't accept an equivalent painting made with AI. That is, one where any obvious defects have been edited out and what remains is a straightforward picture of the subject. lethargilistic (talk) 03:45, 3 January 2025 (UTC)
- For the record (and for any media watching), while I personally find it fascinating that a few editors here are spending a substantial amount of time (in the face of an overwhelming 'absolutely not' consensus no less) attempting to convince others that computer-generated (that is, faked) photos of human article subjects are somehow a good thing, I also find it interesting that these editors seem to express absolutely no concern for the intensely negative reaction they're already seeing from their fellow editors and seem totally unconcerned about the inevitable trust drop we'd experience from Misplaced Pages readers when they would encounter fake photos on our BLP articles especially. :bloodofox: (talk) 03:54, 3 January 2025 (UTC)
- Misplaced Pages's reputation would not be affected positively or negatively by expanding the current-albeit-sparse use of illustrations to depict subjects that do not have available pictures. In all my writing about this over the last few days, you are the only one who has said anything negative about me as a person or, really, my arguments themselves. As loath as I am to cite it, WP:AGF means assuming that people you disagree with are not trying to hurt Misplaced Pages. Thryduulf, I, and others have explained in detail why we think our ultimate ideas are explicit benefits to Misplaced Pages and why our opposition to these immediate proposals comes from a desire to prevent harm to Misplaced Pages. I suggest taking a break to reflect on that, matey. lethargilistic (talk) 04:09, 3 January 2025 (UTC)
- Look, I don't know if you've been living under a rock or what for the past few years but the reality is that people hate AI images and dumping a ton of AI/fake images on Misplaced Pages, a place people go for real information and often trust, inevitably leads to a huge trust issue, something Misplaced Pages is increasingly suffering from already. This is especially a problem when they're intended to represent living people (!). I'll leave it to you to dig up the bazillion controversies that have arisen and continue to arise since companies worldwide have discovered that they can now replace human artists with 'AI art' produced by "prompt engineers" but you can't possibly expect us to ignore that reality when discussing these matters. :bloodofox: (talk) 04:55, 3 January 2025 (UTC)
- Those trust issues are born from the publication of hallucinated information. I have only said that it should be OK to use an image on Misplaced Pages when it contains only verifiable information, which is the same standard we apply to text. That standard is and ought to be applied independently of the way the initial version of an image was created. lethargilistic (talk) 06:10, 3 January 2025 (UTC)
- To my eye, the distinction between AI images and paintings here is less a question of medium and more of verifiability: the paintings we use (or at least the ones I can remember) are significant paintings that have been acknowledged in sources as being reasonable representations of a given person. By contrast, a purpose-generated AI image would be more akin to me painting a portrait of somebody here and now and trying to stick that on their article. The image could be a faithful representation (unlikely, given my lack of painting skills, but let's not get lost in the metaphor), but if my painting hasn't been discussed anywhere besides Misplaced Pages, then it's potentially OR or UNDUE to enshrine it in mainspace as an encyclopedic image. ModernDayTrilobite (talk • contribs) 05:57, 3 January 2025 (UTC)
- An image contains a collection of facts, and those facts need to be verifiable just like any other information posted on Misplaced Pages. An image that verifiably resembles a subject as it is depicted in reliable sources is categorically not OR. Discussion in other sources is not universally relevant; we don't restrict ourselves to only previously-published images. If we did that, Misplaced Pages would have very few images. lethargilistic (talk) 06:18, 3 January 2025 (UTC)
- Verifiable how? Only by the editor themselves comparing to a real photo (which was probably used by the LLM to create the image…).
- These things are fakes. The analysis stops there. FOARP (talk) 10:48, 4 January 2025 (UTC)
- Verifiable by comparing them to a reliable source. Exactly the same as what we do with text. There is no coherent reason to treat user-generated images differently than user-generated text, and the universalist tenor of this discussion has damaging implications for all user-generated images regardless of whether they were created with AI. Honestly, I rarely make arguments like this one, but I think it could show some intuition from another perspective: Imagine it's 2002 and Misplaced Pages is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures. The text editors say the artists cannot contribute ANYTHING to Misplaced Pages because their images that have not been previously published are not verifiable. That is a double-standard that privileges the contributions of text-editors simply because most users are text-editors and they are used to verifying text; that is not a principled reason to treat text and images differently. Moreover, that is simply not what happened—the opposite happened, and images are treated as verifiable based on their contents just like text because that's a common sense reading of the rule. It would have been madness if images had been treated differently. And yet that is essentially the fundamentalist position of people who are extending their opposition to AI with arguments that apply to all images. If they are arguing verifiability seriously at all, they are pretending that the sort of degenerate situation I just described already exists when the opposite consensus has been reached consistently for years. In the related NOR thread, they even tried to say Wikipedians had "turned a blind eye" to these image issues as if negatively characterizing those decisions would invalidate the fact that those decisions were consensus. The motivated reasoning of these discussions has been as blatant as that.
At the bottom of this dispute, I take issue with trying to alter the rules in a way that creates a new double-standard within verifiability that applies to all images but not text. That's especially upsetting when (despite my and others' best efforts) so many of us are still focusing SOLELY on their hatred for AI rather than considering the obvious second-order consequences for user-generated images as a whole.
Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake." The issue has always been verifiability, not provenance or falsity. Sometimes, IMO, that has led to disaster and Misplaced Pages saying things I know to be factually untrue despite the contents of reliable sources. But that is the policy. We compare the contents of Misplaced Pages to reliable sources, and the contents of Misplaced Pages are considered verifiable if they cohere.
I ask again: If Misplaced Pages's response to the creation of AI imaging tools is to crack down on all artistic contributions to Misplaced Pages (which seems to be the inevitable direction of these discussions), what does that say? If our negative response to AI tools is to limit what humans can do on Misplaced Pages, what does that say? Are we taking a stand for human achievements, or is this a very heated discussion of cutting off our nose to save our face? lethargilistic (talk) 23:31, 4 January 2025 (UTC)
- "Verifiable by comparing them to a reliable source" - comparing two images and saying that one looks like the other is not "verifying" anything. The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing.
"Frankly, in no other context has any Wikipedian ever allowed me to say text they wrote was "fake" or challenge an image based on whether it was "fake."" - Try presenting a paraphrasing as a quotation and see what happens.
"Imagine it's 2002 and Misplaced Pages is just starting. Most users want to contribute text to the encyclopedia, but there is a cadre of artists who want to contribute pictures..." - This basically happened, and is the origin of WP:NOTGALLERY. Misplaced Pages is not a host for original works. FOARP (talk) 22:01, 6 January 2025 (UTC)
- "Comparing two images and saying that one looks like the other is not "verifying" anything." Comparing text to text in a reliable source is literally the same thing.
"The text equivalent is presenting something as a quotation that is actually a user-generated paraphrasing." No it isn't. The text equivalent is writing a sentence in an article and putting a ref tag on it. Perhaps there is room for improving the referencing of images in the sense that they should offer example comparisons to make. But an image created by a person is not unverifiable simply because it is user-generated. It is not somehow more unverifiable simply because it is created in a lifelike style.
"Try presenting a paraphrasing as a quotation and see what happens." Besides what I just said, nobody is even presenting these images as equatable to quotations. People in this thread have simply been calling them "fake" of their own initiative; the uploaders have not asserted that these are literal photographs to my knowledge. The uploaders of illustrations obviously did not make that claim either. (And, if the contents of the image is a copyvio, that is a separate issue entirely.)
"This basically happened, and is the origin of WP:NOTGALLERY." That is not the same thing. User-generated images that illustrate the subject are not prohibited by WP:NOTGALLERY. Misplaced Pages is a host of encyclopedic content, and user-generated images can have encyclopedic content. lethargilistic (talk) 02:41, 7 January 2025 (UTC)
- Images are way more complex than text. Trying to compare them in the same way is a very dangerous simplification. Cremastra (u — c) 02:44, 7 January 2025 (UTC)
- Assume only non-free images exist of a person. An illustrator refers to those non-free images and produces a painting. From that painting, you see a person who looks like the person in the non-free photographs. The image is verified as resembling the person. That is a simplification, but to call it "dangerous" is disingenuous at best. The process for challenging the image is clear. Someone who wants to challenge the veracity of the image would just need to point to details that do not align. For instance, "he does not typically have blue hair" or "he does not have a scar." That is what we already do, and it does not come up much because it would be weird to deliberately draw an image that looks nothing like the person. Additionally, someone who does not like the image for aesthetic reasons rather than encyclopedic ones always has the option of sourcing a photograph some other way like permission, fair use, or taking a new one themself. This is not an intractable problem. lethargilistic (talk) 02:57, 7 January 2025 (UTC)
- So a photorealistic AI-generated image would be considered acceptable until someone identifies a "big enough" difference? How is that anything close to ethical? A portrait that's got an extra mole or slightly wider nose bridge or lacks a scar is still not an image of the person regardless of whether random Misplaced Pages editors notice. And while I don't think user-generated non-photorealistic images should ever be used on biographies either, at least those can be traced back to a human who is ultimately responsible for the depiction, who can point to the particular non-free images they used as references, and isn't liable to average out details across all time periods of the subject. And that's not even taking into account the copyright issues. JoelleJay (talk) 22:52, 7 January 2025 (UTC)
- +1 to what JoelleJay said. The problem is that AI-generated images are simulations trying to match existing images, sometimes, yes, with an impressive degree of accuracy. But they will always be inferior to a human-drawn painting that's trying to depict the person. We're a human encyclopedia, and we're built by humans doing human things and sometimes with human errors. Cremastra (u — c) 23:18, 7 January 2025 (UTC)
- You can't just raise this to an "ethical" issue by saying the word "ethical." You also can't just invoke copyright without articulating an actual copyright issue; we are not discussing copyvio. Everyone agrees that a photo with an actual copyvio in it is subject to that policy.
- But to address your actual point: Any image—any photo—beneath the resolution necessary to depict the mole would be missing the mole. Even with photography, we are never talking about science-fiction images that perfectly depict every facet of a person in an objective sense. We are talking about equipment that creates an approximation of reality. The same is true of illustrations and AI imagery.
- Finally, a human being is responsible for the contents of the image because a human is selecting it and is responsible for correcting any errors. The result is an image that someone is choosing to use because they believe it is an appropriate likeness. We should acknowledge that human decision and evaluate it naturally—Is it an appropriate likeness? lethargilistic (talk) 10:20, 8 January 2025 (UTC)
- (Second comment because I'm on my phone.) I realize I should also respond to this in terms of additive information. What people look like is not static in the way your comment implies. Is it inappropriate to use a photo because they had a zit on the day it was taken? Not necessarily. Is an image inappropriate because it is taken at a bad angle that makes them look fat? Judging by the prolific ComicCon photographs (where people seem to make a game of choosing the worst-looking options; seriously, it's really bad), not necessarily. Scars and bruises exist and then often heal over time. The standard for whether an image with "extra" details is acceptable would still be based on whether it comports acceptably with other images; we literally do what you have capriciously described as "unethical" and supplement it with our compassionate desire to not deliberately embarrass BLPs. (The ComicCon images aside, I guess.) So, no, I would not be a fan of using images that add prominent scars where the subject is not generally known to have one, but that is just an unverifiable fact that does not belong in a Misplaced Pages image. Simple as. lethargilistic (talk) 10:32, 8 January 2025 (UTC)
- We don't evaluate the reliability of a source solely by comparing it to other sources. For example, there is an ongoing discussion at the baseball WikiProject talk page about the reliability of a certain web site. It lists no authors nor any information on its editorial control policy, so we're not able to evaluate its reliability. The reliability of all content being used as a source, including images, needs to be considered in terms of its provenance. isaacl (talk) 23:11, 7 January 2025 (UTC)
- Can you note in your !vote whether AI-generated images (generated via text prompts/text-to-image models) that are not photo-realistic / hyper-realistic in style are okay to use to depict BLP subjects? For example, see the image to the right, which was added then removed from his article: Pinging people who !voted No above: User:Chaotic Enby, User:Cremastra, User:Horse Eye's Back, User:Pythoncoder, User:Kj cheetham, User:Bloodofox, User:Gnomingstuff, User:JoelleJay, User:Carrite, User:Seraphimblade, User:David Eppstein, User:Randy Kryn, User:Traumnovelle, User:SuperJew, User:Doawk7, User:Di (they-them), User:Masem, User:Cessaune, User:Zaathras, User:XOR'easter, User:Nikkimaria, User:FOARP, User:JuxtaposedJacob, User:ModernDayTrilobite, User:Nabla, User:Tepkunset, User:DragonflySixtyseven, User:Win8x, User:ToBeFree --- Some1 (talk) 03:55, 3 January 2025 (UTC)
- Still no, I thought I was clear on that, but we should not be using AI-generated images in articles for anything besides representing the concept of AI-generated images, or if an AI-generated image is notable or irreplaceable in its own right -- e.g., a musician uses AI to make an album cover.
- (this isn't even a good example, it looks more like Steve Bannon)
- Gnomingstuff (talk) 04:07, 3 January 2025 (UTC)
- Was I unclear? No to all of them. XOR'easter (talk) 04:13, 3 January 2025 (UTC)
- Still no, because carving out that type of exception will just lead to arguments down the line about whether a given image is too realistic. —pythoncoder (talk | contribs) 04:24, 3 January 2025 (UTC)
- I still think no. My opposition isn't just to the fact that AI images are misinformation, but also that they essentially serve as a loophole for getting around Enwiki's image use policy. To know what somebody looks like, an AI generator needs to have images of that person in its dataset, and it draws on those images to generate a derivative work. If we have no free images of somebody and we use AI to make one, that's just using a fair use copyrighted image but removed by one step. The image use policy prohibits us from using fair use images for BLPs so I don't think we should entertain this loophole. If we do end up allowing AI images in BLPs, that just disqualifies the rationale of not allowing fair use in the first place. Di (they-them) (talk) 04:40, 3 January 2025 (UTC)
- No, those are not okay, as this will just cause arguments from people saying a picture is obviously AI-generated, and that it is therefore appropriate. As I mentioned above, there are some exceptions to this, which Gnomingstuff perfectly describes. Fake sketches/cartoons are not appropriate and provide little encyclopedic value. win8x (talk) 05:27, 3 January 2025 (UTC)
- No to this as well, with the same carveout for individual images that have received notable discussion. Non-photorealistic AI images are going to be no more verifiable than photorealistic ones, and on top of that will often be lower-quality as images. ModernDayTrilobite (talk • contribs) 05:44, 3 January 2025 (UTC)
- Thanks for the ping, yes I can, the answer is no. ~ ToBeFree (talk) 07:31, 3 January 2025 (UTC)
- No, and that image should be deleted before anyone places it into a mainspace article. Changing the RfC intro long after its inception seems a second bite at an apple that's not aged well. Randy Kryn (talk) 09:28, 3 January 2025 (UTC)
- The RfC question has not been changed; another editor was complaining that the RfC question did not make a distinction between photorealistic/non-photorealistic AI-generated images, so I had to add a note to the intro and ping the editors who'd voted !No to clarify things. It has only been 3 days; there's still 27 more days to go. Some1 (talk) 11:18, 3 January 2025 (UTC)
- Also answering No to this one per all the arguments above. "It has only been 3 days" is not a good reason to change the RfC question, especially since many people have already !voted and the "30 days" is mostly indicative rather than an actual deadline for a RfC. Chaotic Enby (talk · contribs) 14:52, 3 January 2025 (UTC)
- The RfC question hasn't been changed; see my response to Zaathras below. Some1 (talk) 15:42, 3 January 2025 (UTC)
- No, that's even a worse possible approach. — Masem (t) 13:24, 3 January 2025 (UTC)
- No. We're the human encyclopedia. We should have images drawn or taken by real humans who are trying to depict the subject, not by machines trying to simulate an image. Besides, the given example is horribly drawn. Cremastra (u — c) 15:03, 3 January 2025 (UTC)
- I like these even less than the photorealistic ones... This falls into the same basket for me: if we wouldn't let a random editor who drew this at home using conventional tools add it to the article, why would we let a random editor who drew this at home using AI tools add it to the article? (and just to be clear the AI generated image of Germán Larrea Mota-Velasco is not recognizable as such) Horse Eye's Back (talk) 16:06, 3 January 2025 (UTC)
- I said *NO*. FOARP (talk) 10:37, 4 January 2025 (UTC)
- No. As said above, having such images means the AI had to use copyrighted pictures to create them, and we shouldn't use them. --SuperJew (talk) 01:12, 5 January 2025 (UTC)
- Still no. If for no other reason than that it's a bad precedent. As others have said, if we make one exception, it will just lead to arguments in the future about whether something is "realistic" or not. I also don't see why we would need cartoon/illustrated-looking AI pictures of people in BLPs. Tepkunset (talk) 20:43, 6 January 2025 (UTC)
- Absolutely not. These images are based on whatever the AI could find on the internet, with little to no regard for copyright. Misplaced Pages is better than this. Retswerb (talk) 10:16, 3 January 2025 (UTC)
- Comment The RfC question should not have been fiddled with, esp. for such a minor argument that the complainant could have simply included in their own vote. I have no need to re-confirm my own entry. Zaathras (talk) 14:33, 3 January 2025 (UTC)
- The RfC question hasn't been modified; I've only added a note (03:58, January 3, 2025) clarifying that these images can either be photorealistic in style or non-photorealistic in style. I pinged all the !No voters to make them aware. I could remove the note if people prefer that I do (but the original RfC question is the exact same as it is now, so I don't think the addition of the note makes a whole ton of difference). Some1 (talk) 15:29, 3 January 2025 (UTC)
- No At this point it feels redundant, but I'll just add to the horde of responses in the negative. I don't think we can fully appreciate the issues that this would cause. The potential problems and headaches far outweigh whatever little benefit might come from AI images for BLPs. pillowcrow 21:34, 3 January 2025 (UTC)
- Support temporary blanket ban with a posted expiration/required rediscussion date of no more than two years from closing. AI as the term is currently used is very, very new. Right now these images would do more harm than good, but it seems likely that the culture will adjust to them. Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)
- No. Misplaced Pages is made by and for humans. I don't want to become Google. Adding an AI-generated image to a page whose topic isn't about generative AI makes me feel insulted. SWinxy (talk) 00:03, 4 January 2025 (UTC)
- No. Generative AI may have its place, and it may even have a place on Misplaced Pages in some form, but that place isn't in BLPs. There's no reason to use images of someone that do not exist over a real picture, or even something like a sketch, drawing, or painting. Even in the absence of pictures or human-drawn/painted images, I don't support using AI-generated images; they're not really pictures of the person, after all, so I can't support using them on articles of people. Using nothing would genuinely be a better choice than generated images. SmittenGalaxy | talk! 01:07, 4 January 2025 (UTC)
- No due to reasons of copyright (AI harvests copyrighted material) and verifiability. Gamaliel (talk) 18:12, 4 January 2025 (UTC)
- No. Even if you are willing to ignore the inherently fraught nature of using AI-generated anything in relation to BLP subjects, there is simply little to no benefit that could possibly come from trying something like this. There's no guarantee the images will actually look like the person in question, and therefore there's no actual context or information that the image is providing the reader. What a baffling proposal. Ithinkiplaygames (talk) 19:53, 4 January 2025 (UTC)
There's no guarantee the images will actually look like the person in question
there is no guarantee any image will look like the person in question. When an image is not a good likeness, regardless of why, we don't use it. When an image is a good likeness we consider using it. Whether an image is AI-generated or not is completely independent of whether it is a good likeness. There are also reasons other than identification why images are used on BLP articles. Thryduulf (talk) 20:39, 4 January 2025 (UTC)
- Foreseeably there may come a time when people's official portraits are AI-enhanced. That time might not be very far in the future. Do we want an exception for official portraits?—S Marshall T/C 01:17, 5 January 2025 (UTC)
- This subsection is about purely AI-generated works, not about AI-enhanced ones. Chaotic Enby (talk · contribs) 01:23, 5 January 2025 (UTC)
- No. Per Cremastra, "We should have images drawn or taken by real humans who are trying to depict the subject," - User:RossEvans19 (talk) 02:12, 5 January 2025 (UTC)
- Yes, depending on the specific case. One can use drawings by artists, even such as caricature. The latter is an intentional distortion, one could say an intentional misinformation. Still, such images are legitimate on many pages. Or consider numerous images of Jesus. How reliable are they? I am not saying we must deliberately use AI images on all pages, but they may be fine in some cases. Now, speaking on "medical articles"... One might actually use the AI generated images of certain biological objects like proteins or organelles. Of course a qualified editorial judgement is always needed to decide if they would improve a specific page (frequently they would not), but making a blanket ban would be unacceptable, in my opinion. For example, the images of protein models generated by AlphaFold would be fine. The AI-generated images of biological membranes I saw? I would say no. It depends. My very best wishes (talk) 02:50, 5 January 2025 (UTC) This is complicated of course. For example, there are tools that make an image of a person that (mis)represents him as someone much better and clever than he really is in life. That should be forbidden as an advertisement. This is a whole new world, but I do not think that a blanket rejection would be appropriate. My very best wishes (talk) 03:19, 5 January 2025 (UTC)
- No, I think there's legal and ethical issues here, especially with the current state of AI. Clovermoss🍀 (talk) 03:38, 5 January 2025 (UTC)
- No: Obviously, we shouldn't be using AI images to represent anyone. Lazman321 (talk) 05:31, 5 January 2025 (UTC)
- No Too risky for BLP's. Besides if people want AI generated content over editor made content, we should make it clear they are in the wrong place, and readers should be given no doubt as to our integrity, sincerity and effort to give them our best, not a program's. Alanscottwalker (talk) 14:51, 5 January 2025 (UTC)
- No, as AI's grasp on the Internet takes hold stronger and stronger, it's important Misplaced Pages, as the online encyclopedia it sets out to be, remains factual and real. Using AI images on Wiki would likely do more harm than good, further thinning the boundaries between what's real and what's not. – zmbro (talk) (cont) 16:52, 5 January 2025 (UTC)
- No, not at the moment. I think it will be hard to avoid portraits that have been enhanced by AI, as it has already been ongoing for a number of years and there is no way to avoid it, but I don't want arbitrarily generated AI portraits of any type. scope_creep 20:19, 5 January 2025 (UTC)
- No for natural images (e.g. photos of people). Generative AI by itself is not a reliable source for facts. In principle, generating images of people and directly sticking them in articles is no different than generating text and directly sticking it in articles. In practice, however, generating images is worse: Text can at least be discussed, edited, and improved afterwards. In contrast, we have significantly less policy and fewer rigorous methods of discussing how AI-generated images of natural objects should be improved (e.g. "make his face slightly more oblong, it's not close enough yet"). Discussion will devolve into hunches and gut feelings about the fidelity of images, all of which essentially fall under WP:OR. spintheer (talk) 20:37, 5 January 2025 (UTC)
- No I'm appalled that even a small minority of editors would support such an idea. We have enough credibility issues already; using AI-generated images to represent real people is not something that a real encyclopedia should even consider. LEPRICAVARK (talk) 22:26, 5 January 2025 (UTC)
- No I understand the comparison to using illustrations in BLP articles, but I've always viewed that as less preferable to no picture in all honesty. Images of a person are typically presented in context, such as a performer on stage, or a politician's official portrait, and I feel like there would be too many edge cases to consider in terms of making it clear that the photo is AI generated and isn't representative of anything that the person specifically did, but is rather an approximation. Tpdwkouaa (talk) 06:50, 6 January 2025 (UTC)
- No - Too often the images resemble caricatures. Real caricatures may be included in articles if the caricature (e.g., political cartoon) had significant coverage and is attributed to the artist. Otherwise, representations of living persons should be real representations taken with photographic equipment. Robert McClenon (talk) 02:31, 7 January 2025 (UTC)
- So you will be arguing for the removal of the lead images at Banksy, CGP Grey, etc. then? Thryduulf (talk) 06:10, 7 January 2025 (UTC)
- At this point you're making bad-faith "BY YOUR LOGIC" arguments. You're better than that. Don't do it. DS (talk) 19:18, 7 January 2025 (UTC)
- Strong no per bloodofox. —Nythar (💬-🍀) 03:32, 7 January 2025 (UTC)
- No for AI-generated BLP images Mrfoogles (talk) 21:40, 7 January 2025 (UTC)
- No - Not only is this effectively guesswork that usually includes unnatural artefacts, but worse, it is also based on unattributed work of photographers who didn't release their work into public domain. I don't care if it is an open legal loophole somewhere, IMO even doing away with the fair use restriction on BLPs would be morally less wrong. I suspect people on whose work LLMs in question were trained would also take less offense to that option. Daß Wölf 23:25, 7 January 2025 (UTC)
- No – WP:NFC says that
Non-free content should not be used when a freely licensed file that serves the same purpose can reasonably be expected to be uploaded, as is the case for almost all portraits of living people.
While AI images may not be considered copyrightable, it could still be a copyright violation if the output resembles other, copyrighted images, pushing the image towards NFC. At the very least, I feel the use of non-free content to generate AI images violates the spirit of the NFC policy. (I'm assuming copyrighted images of a person are used to generate an AI portrait of them; if free images of that person were used, we should just use those images, and if no images of the person were used, how on Earth would we trust the output?) RunningTiger123 (talk) 02:43, 8 January 2025 (UTC)
- No, AI images should not be permitted on Misplaced Pages at all. Stifle (talk) 11:27, 8 January 2025 (UTC)
Expiration date?
"AI," as the term is currently used, is very new. It feels like large language models and the type of image generators under discussion just got here in 2024. (Yes, I know it was a little earlier.) The culture hasn't completed its initial response to them yet. Right now, these images do more harm than good, but that may change. Either we'll come up with a better way of spotting hallucinations or the machines will hallucinate less. Their copyright status also seems unstable. I suggest that any ban decided upon here have some expiration date or required rediscussion date. Two years feels about right to me, but the important thing would be that the ban has a number on it. Darkfrog24 (talk) 23:01, 3 January 2025 (UTC)
- No need for any end-date. If there comes a point where consensus on this changes, then we can change any ban then. FOARP (talk) 05:27, 5 January 2025 (UTC)
- An end date is a positive suggestion. Consensus systems like Misplaced Pages's are vulnerable to half-baked precedential decisions being treated as inviolate. With respect, this conversation does not inspire confidence that this policy proposal's consequences are well-understood at this time. If Misplaced Pages goes in this direction, it should be labeled as primarily reactionary and open to review at a later date. lethargilistic (talk) 10:22, 5 January 2025 (UTC)
- Agree with FOARP, no need for an end date. If something significantly changes (e.g. reliable sources/news outlets such as the New York Times, BBC, AP, etc. start using text-to-image models to generate images of living people for their own articles) then this topic can be revisited later. Editors will have to go through the usual process of starting a new discussion/proposal when that time comes. Some1 (talk) 11:39, 5 January 2025 (UTC)
- Seeing as this discussion has not touched at all on what other organizations may or may not do, it would not be accurate to describe any consensus derived from this conversation in terms of what other organizations may or may not be doing. That is, there has been no consensus that we ought to be looking to the New York Times as an example. Doing so would be inadvisable for several reasons. For one, they have sued an AI company over semi-related issues and they have teams explicitly working on what the future of AI in news ought to look like, so they have some investment in what the future of AI looks like and they are explicitly trying to shape its norms. For another, if they did start to use AI in a way that may be controversial, they would have no positive reason to disclose that and many disincentives. They are not a neutral signal on this issue. Misplaced Pages should decide for itself, preferably doing so while not disrupting the ability of people to continue creating user-generated images. lethargilistic (talk) 03:07, 6 January 2025 (UTC)
- WP:Consensus can change on an indefinite basis, if something changes. An arbitrary sunset date doesn't seem much use. CMD (talk) 03:15, 6 January 2025 (UTC)
- No need per others. Additionally, if practices change, it doesn't mean editors will decide to follow new practices. As for the technology, it seems the situation has been fairly stable for the past two years: we can detect some fakes and hallucinations immediately, many more in the past, but certainly not all retouched elements and all generated photos available right now, even if there was a readily accessible tool or app that enabled ordinary people to reliably do so.
- Throughout history, art forgeries have been fairly reliably detected, but rarely quickly. Relatedly, I don't see why the situation with AI images would change in the next 24 months or any similar time period. Daß Wölf 22:17, 9 January 2025 (UTC)
Non-Admin XFD Close as Delete
There is a contentious Deletion Review currently resulting from a Non-administrative close of a Redirect for Discussion in which the closer found that the consensus was Delete. The closer then tagged the redirect with G6, in particular {{db-xfd}}. It was then deleted by an admin who had taken part in the deletion discussion. The guideline on non-administrative closes says that a non-administrative close of Delete is not permitted, because the closer cannot press the Delete button. The first question at DRV seems to be whether DRV must vacate the close and allow a new close by an uninvolved administrator, or whether DRV can endorse the close and leave the close standing. My opinion is that a DRV endorsement of a non-admin close is as good a close as a regular admin close, but that is only my opinion.
The second question that is raised by the DRV discussion is about the existing practice that non-admins sometimes make Delete closes at CFD. Should this practice be stopped, because the guideline says that non-admins may not make Delete closes, or should the guideline be revised, or should the guideline be left as is, but with such closes being sometimes allowed? My own opinion is that if it is necessary sometimes to allow a practice that is contrary to the guideline, the guideline should be revised, so as to reduce the load at DRV, but that is only my opinion.
Should Deletion Review be able to endorse the irregular non-admin close, or is it necessary for DRV to vacate the close? Also, should the non-admin closer be thanked, or cautioned? Robert McClenon (talk) 06:51, 31 December 2024 (UTC)
- On the first question, I think that whatever decision DRV makes at Misplaced Pages:Deletion review#Clock/calendar will be 'legal'. They are not required to vacate the close, no matter how much one editor might like to have a second bite at that apple.
- On your second question, I prefer the existing rules. It is not necessary to allow NACs (non-admin closures) of 'delete'. Good admins will not blindly trust anyone else's decision, so a NAC to delete does not necessarily save any time, and if the admin disagrees, then it could cause drama. NACs to delete should be gently but firmly discouraged. WhatamIdoing (talk) 07:30, 31 December 2024 (UTC)
- I agree with WAID. Thryduulf (talk) 10:29, 31 December 2024 (UTC)
- That's procedurally irregular, but it was the correct close of the discussion and we've got to the right outcome.—S Marshall T/C 11:32, 31 December 2024 (UTC)
- As I said at the Deletion Review, I support non-admins closing RfDs as Delete. If TfDs have been made an exception, RfDs can be too. Closing a heavily discussed nomination at RfD is more about the reading, analysis and thought process at arriving at the outcome, and less about the technicality of the subsequent page actions. It will help making non-admins mentally prepared to advance to admin roles. Jay 💬 17:28, 1 January 2025 (UTC)
- It seems dumb to think that a close can go through DRV and 'pass' only for it to be vacated because the closer didn't have the right credentials. If the close is found to be good, does it matter who closed it? If bad Delete closes by non-admins were regular occurrences, then maybe, but I don't imagine this is the case. Cessaune 19:12, 1 January 2025 (UTC)
- The only problem is that if it becomes a regular occurrence, it creates more work overall. An admin can delete and carry out the delete. A non-admin can say it needs to be deleted, but still needs an admin to carry out the delete. That's why this should be discouraged - if it becomes commonplace it will generate a lot more work, but a one-off can clearly be endorsed at DRV. SportingFlyer T·C 19:26, 1 January 2025 (UTC)
- I think it's time for the long-standing "it creates more work" argument to be retired. In the first scenario, person 1 does A and B. In the second scenario, person 1 does A and person 2 does B. That does not create more work overall, it's the same amount of work overall, and it reduces work for person 1. Splitting work between two people doesn't create more work. Levivich (talk) 14:55, 4 January 2025 (UTC)
- If person 2 here did B without doing any additional work whatsoever, then person 2 shouldn't be able to do B. We expect admins to look at what they're deleting. Plus, you're creating additional work for persons 3-20, who are watching CAT:CSD and don't appreciate demands to go sanity-check low-urgency, possibly-complex closes at unfamiliar deletion venues mixed into the copyright infringement, spam, and attack pages they're trying to prioritize. —Cryptic 15:15, 4 January 2025 (UTC)
Discussion of How Much Attention Admins Pay to What They Are Deleting
- No, we don't expect admins to look at what they're deleting. Anyone familiar with the admin deletion stats (as you are) knows that every day, admins delete many pages without ever looking at them. Person 2 can do B without A. It's OK for person 2 to rely on person 1 to do A correctly. It's not true that in order for person 2 to do B then person 2 must also have done A, and it's not how we operate with other deletions. Non-admins are perfectly capable of closing RfCs and many other discussions; there is nothing about a deletion discussion that suddenly makes non-admins unable to summarize consensus. There is no reason an admin can't rely on a non-admins summary of consensus, just as admins rely on non-admins CSD tagging. Levivich (talk) 15:21, 4 January 2025 (UTC)
- We absolutely do expect admins to look at what they're deleting. At most a NAC delete close can save an admin the trouble of writing the close themself, but if an admin is blindly deleting without verifying that the CSD tag is actually valid for CSD then sooner or later they're going to wind up at WP:ANI. Anomie⚔ 15:49, 4 January 2025 (UTC)
- Levivich, I agree with your conclusion, but not with your analogy. As Anomie pointed out, we do not blindly delete pages tagged as CSD. WP:CSD specifically instructs us to verify that the page meets a speedy deletion criterion, rather than just appeared as such to one user. This is a very different situation to an AfD that garnered consensus among multiple editors over the course of at least seven days, where the admin's job is just to verify that the NAC read consensus correctly. Owen× ☎ 16:01, 4 January 2025 (UTC)
- We can say it all we want but we can't argue with WP:ADMINSTATS. The admin who has deleted over 1 million pages did not read 1 million pages. Nor did the admin who deleted 800,000 pages read 800,000 pages. An admin who has deleted 600,000 pages--no doubt without reading even half of them--just got elected to arbcom. Nobody is taking those admins to ANI or thinks they're doing anything wrong (including me).
- So no, admins don't read the pages they delete, and no, we don't expect them to, as proven by the massive number of deletions, and no, admins who delete without reading aren't taken to ANI, they're promoted. This includes CSDs.
- More broadly, there is no reason not to allow admins to rely on non-admin XfD closures when making deletion decisions. We already do this for multiple types of XfDs, we can and should do it for AfDs. Levivich (talk) 16:05, 4 January 2025 (UTC)
- Explicit didn't have to read over one million pages, because most of those deletions were the result of an XfD, not a CSD. He had to review almost a million XfD discussions to make sure consensus was to delete, which is exactly what he does when closing. And the majority of CSD deletions are of clumsy, unsourced self-promotion pages that take less than ten seconds to assess and confirm as such. I know for a fact that Liz -- number #3 on that list -- carefully reviews every speedy-tagged page before deleting it, and routinely rejects such requests when they do not meet CSD. Levivich, you are making some serious accusations here, with zero evidence to support them. I don't think this will help your case here. Owen× ☎ 16:24, 4 January 2025 (UTC)
- I have no case here and I'm making no accusations. If you want to believe that a person has read over a million XfDs, you go ahead and believe that. I don't, because even over 15 years, that would be 182 XfDs a day every day with no days off, and I don't believe anyone does, or is capable of doing, that much XfD reading. Levivich (talk) 17:03, 4 January 2025 (UTC)
- Levivich You're completely wrong, because you're missing a number of very obvious issues with those statistics.
- Not even a majority of Explicit's deletions are XfDs - most are speedies
- Many are G13 deletions, which don't need any analysis, they're simply done on timeframe. You can do those in seconds. Similarly U1, R2 and F8
- Most articles have a talk page, which also needs to be deleted (G8), so deleting most articles means two deletions.
- RfDs don't take very long to close at all; most are unanimous
- Of Explicit's last 500 deletions, only 58 have been XfDs (and their associated talk pages where they existed, making 97 deletions).
- The vast majority of the rest have been G13, G8, U1, R2 and F8.
- Just in the interests of accuracy, you know. Black Kite (talk) 17:18, 4 January 2025 (UTC)
- Owen:
Explicit didn't have to read over one million pages, because most of those deletions were the result of an XfD, not a CSD
- BK:
Not even a majority of Explicit's deletions are XfDs - most are speedies
- I don't know which one of you is wrong but it doesn't matter. Explicit did not read 1 million pages over the last 15 years prior to deleting them, whether they were XfDs or CSDs. Nobody is reading 182 Misplaced Pages pages 7 days a week for 15 years. Nobody has read a million Misplaced Pages pages. No human can thoroughly review 182 things a day. It's mathematics, guys. If a person spends 5 minutes on each review -- which isn't a thorough review of anything -- that's 15 hours per day to do 182 reviews. Nobody is spending that much time, 7 days a week, for 15 years. Even if the person spends just 1 minute, that's over 3 hours a day... every day for 15 years. The math proves that Explicit is spending maybe 1 minute per deletion.
- Now: an admin spending 1 minute per deletion is not providing any real kind of level of oversight that can't be provided by a non-admin closing the XfD and the admin then spending 1 minute reviewing the close and deleting the page.
- The math and the stats don't lie. The suggestion that admins carefully review every page before deleting them is plainly not true. The suggestion that if admins didn't do that, they'd be taken to ANI, is also plainly not true. We have admins spending a minute or less per deletion and not being taken to ANI. These are facts. Levivich (talk) 19:31, 4 January 2025 (UTC)
- Your "math" is based on a lot of faulty assumptions and oversimplifications, which you seem to be persisting in despite others trying to correct you. Anomie⚔ 20:42, 4 January 2025 (UTC)
- Those are not facts, because your data is faulty. Your reply simply tells me that you didn't read what I wrote. Someone with the editing pattern of Explicit isn't reviewing 182 things a day. Nowhere near that. Quite apart from the talk page issue (which, if every page had a talk page, would immediately halve the number of reviews) in some of the speedy categories I mention there's nothing to review; it's simply pressing a button. Indeed, deleting the articles in the stale G13 queue, orphaned G8 queue, or the F5 or F8 category, can result in dozens - even hundreds - of deletions in one button press. As an example, on the 3rd January alone, Explicit deleted 113 F8 images, plus 83 talk pages of those images. That's 196 deletions, all done in a few seconds, as there's nothing to review. In comparison, the number of XfDs he closed that day was 25. Black Kite (talk) 21:21, 4 January 2025 (UTC)
- Cryptic:
We expect admins to look at what they're deleting.
- BK:
That's 196 deletions, all done in a few seconds, as there's nothing to review.
- I disagreed with Cryptic's point, and thank you BK for providing an example of exactly what I'm talking about. We do not expect admins to review (or "look at") what they're deleting, sometimes there's not even anything to look at. Admins delete things without reviewing what they're deleting all the time, and they're not hauled off to ANI for it. BK, you're arguing with me, but you're not actually disagreeing with what I'm saying. You're not arguing, as others have said above, that all admins review everything they delete before they delete it, or that this is the community expectation. Levivich (talk) 22:24, 4 January 2025 (UTC)
- I'm simply pointing out that the claim you have made
The math proves that Explicit is spending maybe 1 minute per deletion. Now: an admin spending 1 minute per deletion is not providing any real kind of level of oversight that can't be provided by a non-admin closing the XfD and the admin then spending 1 minute reviewing the close and deleting the page.
is not correct; Explicit is not spending 1 minute per XfD, he is spending 1 minute per deletion, and because in many cases the admin pressing the button is simply completing an automated process (in the case of G13, that the article has not been edited for six months; in the case of F8, that the identical file exists at Commons), and deleting many files in one button-press, that skews the data. Now, if you are saying that an admin is closing AfDs or deleting G7 or G11 speedies at a high rate (not just "deleting things"), and their speed suggests they aren't reviewing them carefully - then you have a point. Black Kite (talk) 23:33, 4 January 2025 (UTC)
- Not actually reviewing deletions to make sure they were correct was one of the two major factors that resulted in Arbcom desysopping RHaworth (the other was a refusal to engage with good-faith queries about his actions and refusing to engage at all with IP editors). As others have said, how long it takes to ensure that what you are deleting should be deleted varies considerably. For G13s it's as simple as checking that the bot has correctly determined that a given timestamp was more than 6 months ago - either the entire batch will be correct or the entire batch will be incorrect. Most of the deletions I do are the result of RfDs; after determining that the consensus is to delete (which is usually very simple) I delete the page. The software then tells me that page has a talk page; at least 9 times out of 10 (maybe more) it takes less than a second to verify there isn't any reason they are G8 exempt. There is no excuse for not doing the reviewing properly though, because there are exceptions. Thryduulf (talk) 00:22, 5 January 2025 (UTC)
- I didn't say Explicit is spending 1 minute per XfD, I said "per deletion." And your point about "many files in one button-press" is literally the same as my point, which is that admins don't carefully review every single thing they delete, they sometimes delete many files in one button press (196 deletions in a few seconds is an example). So, you agree with me that "we expect admins to look at what they're deleting" is not correct, it's actually disproven by the data. Levivich (talk) 00:37, 5 January 2025 (UTC)
- Admins are expected to review everything they delete. It's just that in some cases many pages can be reviewed together very quickly. It is also possible to spend time carefully reviewing many pages in detail, assembling a list of which should be deleted and then deleting the pages on that list together with a single click. Thryduulf (talk) 01:08, 5 January 2025 (UTC)
- It was in the context of XfDs, however ("If you want to believe that a person has read over a million XfDs, you go ahead and believe that"). Black Kite (talk) 08:03, 5 January 2025 (UTC)
- That was in response to someone saying they were mostly XfDs. The same point holds if they were mostly CSDs. That's why I said it doesn't matter whether they were mostly XfDs or CSDs. Regardless of whether it's XfD or CSD, when someone makes 500k or 1 million deletions, we know they didn't look at each and every individual thing (page, file, whatever) that was deleted, nor do we expect them to. (Which I said in response to people saying we expect admins to look at every single thing they delete.) Levivich (talk) 13:18, 5 January 2025 (UTC)
- By pressing the delete button you are asserting that at least one of the following is true:
- The page met the specified speedy deletion criterion (e.g. if the deletion log says G12 you are asserting the page, including all of its old revisions, was copyvio).
- The page was eligible for PROD, this was the first time that it was prodded and nobody has objected to the prod.
- The deletion of the page was discussed at the relevant XfD (or, exceptionally, in an RfC) and the consensus of that discussion was to delete the page.
- If an admin is not prepared to put their name to the truthfulness of that assertion then they should not be deleting the page. Thryduulf (talk) 16:12, 4 January 2025 (UTC)
- No, we don't expect admins to look at what they're deleting. Anyone familiar with the admin deletion stats (as you are) knows that every day, admins delete many pages without ever looking at them. Person 2 can do B without A. It's OK for person 2 to rely on person 1 to do A correctly. It's not true that in order for person 2 to do B then person 2 must also have done A, and it's not how we operate with other deletions. Non-admins are perfectly capable of closing RfCs and many other discussions; there is nothing about a deletion discussion that suddenly makes non-admins unable to summarize consensus. There is no reason an admin can't rely on a non-admin's summary of consensus, just as admins rely on non-admins' CSD tagging. Levivich (talk) 15:21, 4 January 2025 (UTC)
My own opinion is that if it is necessary sometimes to allow a practice that is contrary to the guideline, the guideline should be revised, so as to reduce the load at DRV, but that is only my opinion
- actually, Robert McClenon, your opinion is solidly anchored in policy. WP:NOTBURO tells us: "the written rules themselves do not set accepted practice. Rather, they document already-existing community consensus regarding what should be accepted and what should be rejected." If CfD is working fine as it is, let's update policy to reflect the practice. Owen× ☎ 16:09, 2 January 2025 (UTC)
- Regarding the second question/paragraph, the guideline on non-admin "delete" closures is at WP:NACD and explicitly includes exceptions for TfD and CfD, so there isn't currently a conflict between existing practice and deletion guidelines. SilverLocust 💬 08:00, 9 January 2025 (UTC)
== Should WP:Demonstrate good faith include mention of AI-generated comments? ==
Using AI to write your comments in a discussion makes it difficult for others to assume that you are discussing in good faith, rather than trying to use AI to argue someone into exhaustion (see example of someone using AI in their replies "Because I don't have time to argue with, in my humble opinion, stupid PHOQUING people"). More fundamentally, WP:AGF can't apply to the AI itself as AI lacks intentionality, and it is difficult for editors to assess how much of an AI-generated comment reflects the training of the AI vs. the actual thoughts of the editor.
Should WP:DGF be addended to include that using AI to generate your replies in a discussion runs counter to demonstrating good faith? Photos of Japan (talk) 00:23, 2 January 2025 (UTC)
- Yes, I think this is a good idea. :bloodofox: (talk) 00:39, 2 January 2025 (UTC)
- No. As with all the other concurrent discussions (how many times do we actually need to discuss the exact same FUD and scaremongering?) the problem is not AI, but rather inappropriate use of AI. What we need to do is to (better) explain what we actually want to see in discussions, not vaguely defined bans of swathes of technology that, used properly, can aid communication. Thryduulf (talk) 01:23, 2 January 2025 (UTC)
- Note that this topic is discussing using AI to generate replies, as opposed to using it as an aid (e.g. asking it to edit for grammar, or conciseness). As the above concurrent discussion demonstrates, users are already using AI to generate their replies in AfD, so it isn't scaremongering but an actual issue.
- WP:DGF also does not ban anything ("Showing good faith is not required"), but offers general advice on demonstrating good faith. So it seems like the most relevant place to include mention of the community's concerns regarding AI-generated comments, without outright banning anything. Photos of Japan (talk) 01:32, 2 January 2025 (UTC)
- And as pointed out, multiple times in those discussions, different people understand different things from the phrase "AI-generated". The community's concern is not AI-generated comments, but comments that do not clearly and constructively contribute to a discussion - some such comments are AI-generated, some are not. This proposal would, just as all the other related ones, cause actual harm when editors falsely accuse others of using AI (and this will happen). Thryduulf (talk) 02:34, 2 January 2025 (UTC)
- Nobody signed up to argue with bots here. If you're pasting someone else's comment into a prompt and asking the chatbot to argue against that comment and just posting it in here, that's a real problem and absolutely should not be acceptable. :bloodofox: (talk) 03:31, 2 January 2025 (UTC)
- Thank you for the assumption of bad faith and demonstrating one of my points about the harm caused. Nobody is forcing you to engage with bad-faith comments, but whether something is or is not bad faith needs to be determined by its content not by its method of generation. Simply using an AI demonstrates neither good faith nor bad faith. Thryduulf (talk) 04:36, 2 January 2025 (UTC)
- I don't see why we have any particular reason to suspect a respected and trustworthy editor of using AI. Cremastra (u — c) 14:31, 2 January 2025 (UTC)
- I'm one of those people who clarified the difference between AI-generated vs. edited, and such a difference could be made explicit with a note. Editors are already accusing others of using AI. Could you clarify how you think addressing AI in WP:DGF would cause actual harm? Photos of Japan (talk) 04:29, 2 January 2025 (UTC)
- By encouraging editors to accuse others of using AI, by encouraging editors to dismiss or ignore comments because they suspect that they are AI-generated rather than engaging with them. @Bloodofox has already encouraged others to ignore my arguments in this discussion because they suspect I might be using an LLM and/or be a bot (for the record I'm neither). Thryduulf (talk) 04:33, 2 January 2025 (UTC)
- I think bloodofox's comment was about "you" in the rhetorical sense, not "you" as in Thryduulf. jlwoodwa (talk) 11:06, 2 January 2025 (UTC)
- Given your relentlessly pro-AI comments here, it seems that you'd be A-OK with just chatting with a group of chatbots here — or leaving the discussion to them. However, most of us clearly are not. In fact, I would immediately tell someone to get lost were it confirmed that indeed that is what is happening. I'm a human being and find the notion of wasting my time with chatbots on Misplaced Pages to be incredibly insulting and offensive. :bloodofox: (talk) 04:38, 2 January 2025 (UTC)
- My comments are neither pro-AI nor anti-AI, indeed it seems that you have not understood pretty much anything I'm saying. Thryduulf (talk) 04:43, 2 January 2025 (UTC)
- Funny, you've done nothing here but argue for more generative AI on the site and now you seem to be arguing to let chatbots run rampant on it while mocking anyone who doesn't want to interface with chatbots on Misplaced Pages. Hey, why not just sell the site to Meta, am I right? :bloodofox: (talk) 04:53, 2 January 2025 (UTC)
- I haven't been arguing for more generative AI on the site. I've been arguing against banning it on the grounds that such a ban would be unclear, unenforceable, wouldn't solve any problems (largely because whether something is AI or not is completely irrelevant to the matter at hand) but would instead cause harm. Some of the issues identified are actual problems, but AI is not the cause of them and banning AI won't fix them.
- I'm not mocking anybody, nor am I advocating to "let chatbots run rampant". I'm utterly confused why you think I might advocate for selling Misplaced Pages to Meta (or anyone else for that matter)? Are you actually reading anything I'm writing? You clearly are not understanding it. Thryduulf (talk) 05:01, 2 January 2025 (UTC)
- So we're now in 'everyone else is the problem, not me!' territory now? Perhaps try communicating in a different way because your responses here are looking very much like the typical AI apologetics one can encounter on just about any contemporary LinkedIn thread from your typical FAANG employee. :bloodofox: (talk) 05:13, 2 January 2025 (UTC)
- No, this is not a "everyone else is the problem, not me" issue because most other people appear to be able to understand my arguments and respond to them appropriately. Not everybody agrees with them, but that's not an issue.
- I'm not familiar with LinkedIn threads (I don't use that platform) nor what a "FAANG employee" is (I've literally never heard the term before now) so I have no idea whether your characterisation is a compliment or a personal attack, but given your comments towards me and others you disagree with elsewhere I suspect it's closer to the latter.
- AI is a tool. Just like any other tool it can be used in good faith or in bad faith, it can be used well and it can be used badly, it can be used in appropriate situations and it can be used in inappropriate situations, the results of using the tool can be good and the results of using the tool can be bad. Banning the tool inevitably bans the good results as well as the bad results but doesn't address the reasons why the results were good or bad and so does not resolve the actual issue that led to the bad outcomes. Thryduulf (talk) 12:09, 2 January 2025 (UTC)
- In the context of generating comments to other users though, AI is much easier to use for bad faith than for good faith. LLMs don't understand Misplaced Pages's policies and norms, and so are hard to utilize to generate posts that productively address them. By contrast, bad actors can easily use LLMs to make low quality posts to waste people's time or wear them down.
- In the context of generating images, or text for articles, it's easy to see how the vast majority of users using AI for those purposes is acting in good faith as these are generally constructive tasks, and most people making bad faith changes to articles are either obvious vandals who won't bother to use AI because they'll be reverted soon anyways, or trying to be subtle (povpushers) in which case they tend to want to carefully write their own text into the article.
- It's true that AI "is just a tool", but when that tool is much easier to use for bad faith purposes (in the context of discussions) then it raises suspicions about why people are using it. Photos of Japan (talk) 22:44, 2 January 2025 (UTC)
LLMs don't understand Misplaced Pages's policies and norms
They're not designed to "understand" them since the policies and norms were designed for human cognition. The fact that AI is used rampantly by people acting in bad faith on Misplaced Pages does not inherently condemn the AI. To me, it shows that it's too easy for vandals to access and do damage on Misplaced Pages. Unfortunately, the type of vetting required to prevent that at the source would also potentially require eliminating IP-editing, which won't happen. Duly signed, ⛵ WaltClipper -(talk) 14:33, 15 January 2025 (UTC)
- You mentioned "FUD". That acronym, "fear, uncertainty and doubt," is used in precisely two contexts: pro-AI propagadizing and persuading people who hold memecoin crypto to continue holding it. Since this discussion is not about memecoin crypto that would suggest you are using it in a pro-AI context. I will note, fear, uncertainty and doubt is not my problem with AI. Rather it's anger, aesthetic disgust and feeling disrespected when somebody makes me talk to their chatbot. Simonm223 (talk) 14:15, 14 January 2025 (UTC)
That acronym, "fear, uncertainty and doubt," is used in precisely two contexts
is simply- FUD both predates AI by many decades (my father introduced me to the term in the context of the phrase "nobody got fired for buying IBM", and the context of that was mainframe computer systems in the 1980s if not earlier. FUD is also used in many, many more contexts that just those two you list, including examples by those opposing the use of AI on Misplaced Pages in these very discussions. Thryduulf (talk) 14:47, 14 January 2025 (UTC)
That acronym, "fear, uncertainty and doubt," is used in precisely two contexts
is factually incorrect.- FUD both predates AI by many decades (indeed if you'd bothered to read the fear, uncertainty and doubt article you'd learn that the concept was first recorded in 1693, the exact formulation dates from at least the 1920s and the use of it in technology concepts originated in 1975 in the context of mainframe computer systems. That its use, eve in just AI contexts, is limited to pro-AI advocacy is ludicrous (even ignoring things like Roko's basilisk), examples can be found in these sprawling discussions from those opposing AI use on Misplaced Pages. Thryduulf (talk) 14:52, 14 January 2025 (UTC)
- Not really – I agree with Thryduulf's arguments on this one. Using AI to help tweak or summarize or "enhance" replies is of course not bad faith – the person is trying hard. Maybe English is their second language. Even for replies 100% AI-generated the user may be an ESL speaker struggling to remember the right words (I always forget 90% of my French vocabulary when writing anything in French, for example). In this case, I don't think we should make a blanket assumption that using AI to generate comments is not showing good faith. Cremastra (u — c) 02:35, 2 January 2025 (UTC)
- Yes because generating walls of text is not good faith. People "touching up" their comments is also bad (for starters, if you lack the English competency to write your statements in the first place, you probably lack the competency to tell if your meaning has been preserved or not). Exactly what AGF should say needs work, but something needs to be said, and DGF is a good place to do it. XOR'easter (talk) 02:56, 2 January 2025 (UTC)
- Not all walls of text are generated by AI, not all AI generated comments are walls of text. Not everybody who uses AI to touch up their comments lacks the competencies you describe, not everybody who does lack those competencies uses AI. It is not always possible to tell which comments have been generated by AI and which have not. This proposal is not particularly relevant to the problems you describe. Thryduulf (talk) 03:01, 2 January 2025 (UTC)
- Someone has to ask: Are you generating all of these pro-AI arguments using ChatGPT? It'd explain a lot. If so, I'll happily ignore any and all of your contributions, and I'd advise anyone else to do the same. We're not here to be flooded with LLM-derived responses. :bloodofox: (talk) 03:27, 2 January 2025 (UTC)
- That you can't tell whether my comments are AI-generated or not is one of the fundamental problems with these proposals. For the record they aren't, nor are they pro-AI - they're simply anti throwing out babies with bathwater. Thryduulf (talk) 04:25, 2 January 2025 (UTC)
- I'd say it also illustrates the serious danger: We can no longer be sure that we're even talking to other people here, which is probably the most notable shift in the history of Misplaced Pages. :bloodofox: (talk) 04:34, 2 January 2025 (UTC)
- How is that a "serious danger"? If a comment makes a good point, why does it matter whether ti was AI generated or not? If it doesn't make a good point, why does it matter if it was AI generated or not? How will these proposals resolve that "danger"? How will they be enforceable? Thryduulf (talk) 04:39, 2 January 2025 (UTC)
- Misplaced Pages is made for people, by people, and I like most people will be incredibly offended to find that we're just playing some kind of LLM pong with a chatbot of your choice. You can't be serious. :bloodofox: (talk) 04:40, 2 January 2025 (UTC)
- You are entitled to that philosophy, but that doesn't actually answer any of my questions. Thryduulf (talk) 04:45, 2 January 2025 (UTC)
- "why does it matter if it was AI generated or not?"
- Because it takes little effort to post a lengthy, low quality AI-generated post, and a lot of effort for human editors to write up replies debunking them.
- "How will they be enforceable? "
- WP:DGF isn't meant to be enforced. It's meant to explain to people how they can demonstrate good faith. Posting replies to people (who took the time to write them) that are obviously AI-generated harms the ability of those people to assume good faith. Photos of Japan (talk) 05:16, 2 January 2025 (UTC)
- The linked "example of someone using AI in their replies" appears – to me – to be a non-AI-generated comment. I think I preferred the allegedly AI-generated comments from that user (example). The AI was at least superficially polite. WhatamIdoing (talk) 04:27, 2 January 2025 (UTC)
- Obviously the person screaming in all caps that they use AI because they don't want to waste their time arguing is not using AI for that comment. Their first post calls for the article to be deleted for not "offering new insights or advancing scholarly understanding" and "merely" reiterating what other sources have written.
- Yes, after a human had wasted their time explaining all the things wrong with its first post, then the bot was able to write a second post which looks ok. Except it only superficially looks ok, it doesn't actually accurately describe the articles. Photos of Japan (talk) 04:59, 2 January 2025 (UTC)
- Multiple humans have demonstrated in these discussions that humans are equally capable of writing posts which superficially look OK but don't actually accurately relate to anything they are responding to. Thryduulf (talk) 05:03, 2 January 2025 (UTC)
- But I can assume that everyone here is acting in good faith. I can't assume good faith in the globally-locked sock puppet spamming AfD discussions with low effort posts, whose bot is just saying whatever it can to argue for the deletion of political pages the editor doesn't like. Photos of Japan (talk) 05:09, 2 January 2025 (UTC)
- True, but I think that has more to do with the "globally-locked sock puppet spamming AfD discussions" part than with the "some of it might be " part. WhatamIdoing (talk) 07:54, 2 January 2025 (UTC)
- All of which was discovered because of my suspicions from their inhuman, and meaningless replies. "Reiteration isn't the problem; redundancy is," maybe sounds pithy in a vacuum, but this was written in reply to me stating that we aren't supposed to be doing OR but reiterating what the sources say.
- "Your criticism feels overly prescriptive, as though you're evaluating this as an academic essay" also sounds good, until you realize that the bot is actually criticizing its own original post.
- The fact that my suspicions about their good faith were ultimately validated only makes it even harder for me to assume good faith in users who sound like ChatGPT. Photos of Japan (talk) 08:33, 2 January 2025 (UTC)
- I wonder if we need some other language here. I can understand feeling like this is a bad interaction. There's no sense that the person cares; there's no feeling like this is a true interaction. A contract lawyer would say that there's no meeting of the minds, and there can't be, because there's no mind in the AI, and the human copying from the AI doesn't seem to be interested in engaging their brain.
- But... do you actually think they're doing this for the purpose of intentionally harming Misplaced Pages? Or could this be explained by other motivations? Never attribute to malice that which can be adequately explained by stupidity – or to anxiety, insecurity (will they hate me if I get my grammar wrong?), incompetence, negligence, or any number of other "understandable" (but still something WP:SHUN- and even block-worthy) reasons. WhatamIdoing (talk) 08:49, 2 January 2025 (UTC)
- The user's talk page has a header at the top asking people not to template them because it is "impersonal and disrespectful", instead requesting "please take a moment to write a comment below in your own words"
- Does this look like acting in good faith to you? Requesting other people write personalized responses to them while they respond with an LLM? Because it looks to me like they are trying to waste other people's time. Photos of Japan (talk) 09:35, 2 January 2025 (UTC)
- Misplaced Pages:Assume good faith means that you assume people aren't deliberately screwing up on purpose. Humans are self-contradictory creatures. I generally do assume that someone who is being hypocritical hasn't noticed their contradictions yet. WhatamIdoing (talk) 07:54, 3 January 2025 (UTC)
- "Being hypocritical" in the abstract isn't the problem, it's the fact that asking people to put effort into their comments, while putting in minimal effort into your own comments appears bad faith, especially when said person says they don't want to waste time writing comments to stupid people. The fact you are arguing AGF for this person is both astounding and disappointing. Photos of Japan (talk) 16:08, 3 January 2025 (UTC)
- It feels like there is a lack of reciprocity in the interaction, even leaving aside the concern that the account is a block-evading sock.
- But I wonder if you have read AGF recently. The first sentence is "Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Misplaced Pages, even when their actions are harmful."
- So we've got some of this (e.g., harmful actions). But do you really believe this person woke up in the morning and decided "My main goal for today is to deliberately hurt Misplaced Pages. I might not be successful, but I sure am going to try hard to reach my goal"? WhatamIdoing (talk) 23:17, 4 January 2025 (UTC)
- Trying to hurt Misplaced Pages doesn't mean they have to literally think "I am trying to hurt Misplaced Pages", it can mean a range of things, such as "I am trying to troll Wikipedians". A person who thinks a cabal of editors is guarding an article page, and that they need to harass them off the site, may think they are improving Misplaced Pages, but at the least I wouldn't say that they are acting in good faith. Photos of Japan (talk) 23:27, 4 January 2025 (UTC)
- Sure, I'd count that as a case of "trying to hurt Misplaced Pages-the-community". WhatamIdoing (talk) 06:10, 5 January 2025 (UTC)
- The issues with AI in discussions are not related to good faith, which is narrowly defined to intent. CMD (talk) 04:45, 2 January 2025 (UTC)
- In my mind, they are related inasmuch as it is much more difficult for me to ascertain good faith if the words are eminently not written by the person I am speaking to in large part, but instead generated based on an unknown prompt in what is likely a small fraction of the expected time. To be frank, in many situations it is difficult to avoid the conclusion that the disparity in effort is being leveraged in something less than good faith. Remsense ‥ 论 05:02, 2 January 2025 (UTC)
- Assume good faith, don't ascertain! Llm use can be deeply unhelpful for discussions and the potential for mis-use is large, but the most recent discussion I've been involved with where I observed an llm post was responded to by an llm post, I believe both the users were doing this in good faith. CMD (talk) 05:07, 2 January 2025 (UTC)
- All I mean to say is it should be licit that unhelpful LLM use should be something that can be mentioned like any other unhelpful rhetorical pattern. Remsense ‥ 论 05:09, 2 January 2025 (UTC)
- Sure, but WP:DGF doesn't mention any unhelpful rhetorical patterns. CMD (talk) 05:32, 2 January 2025 (UTC)
- The fact that everyone (myself included) defending "LLM use" says "use" rather than "generated", is a pretty clear sign that no one really wants to communicate with someone using "LLM generated" comments. We can argue about bans (not being proposed here), how to know if someone is using LLM, the nuances of "LLM use", etc., but at the very least we should be able to agree that there are concerns with LLM generated replies, and if we can agree that there are concerns then we should be able to agree that somewhere in policy we should be able to find a place to express those concerns. Photos of Japan (talk) 05:38, 2 January 2025 (UTC)
- ...or they could be saying "use" because "using LLMs" is shorter and more colloquial than "generating text with LLMs"? Gnomingstuff (talk) 06:19, 2 January 2025 (UTC)
- Seems unlikely when people justify their use for editing (which I also support), and not for generating replies on their behalf. Photos of Japan (talk) 06:23, 2 January 2025 (UTC)
- This is just semantics.
- For instance, I am OK with someone using a LLM to post a productive comment on a talk page. I am also OK with someone generating a reply with a LLM that is a productive comment to post to a talk page. I am not OK with someone generating text with an LLM to include in an article, and also not OK with someone using a LLM to contribute to an article.
- The only difference between these four sentences is that two of them are more annoying to type than the other two. Gnomingstuff (talk) 08:08, 2 January 2025 (UTC)
- Most people already assume good faith in those making productive contributions. In situations where good faith is more difficult to assume, would you trust someone who uses an LLM to generate all of their comments as much as someone who doesn't? Photos of Japan (talk) 09:11, 2 January 2025 (UTC)
- Given that LLM-use is completely irrelevant to the faith in which a user contributes, yes. Of course what amount that actually is may be anywhere between completely and none. Thryduulf (talk) 11:59, 2 January 2025 (UTC)
- LLM-use is relevant as it allows bad faith users to disrupt the encyclopedia with minimal effort. Such a user posted in this thread earlier, as well as started a disruptive thread here and posted here, all using AI. I had previously been involved in a debate with another sock puppet of theirs, but at that time they didn't use AI. Now it seems they are switching to using an LLM just to troll with minimal effort. Photos of Japan (talk) 21:44, 2 January 2025 (UTC)
- LLMs are a tool that can be used by good and bad faith users alike. Using an LLM tells you nothing about whether a user is contributing in good or bad faith. If somebody is trolling they can be, and should be, blocked for trolling regardless of the specifics of how they are trolling. Thryduulf (talk) 21:56, 2 January 2025 (UTC)
- A can of spray paint, a kitchen knife, etc., are tools that can be used for good or bad, but if you bring them some place where they have few good uses and many bad uses then people will be suspicious about why you brought them. You can't just assume that a tool in any context is equally harmless. Using AI to generate replies to other editors is more suspicious than using it to generate a picture exemplifying a fashion style, or a description of a physics concept. Photos of Japan (talk) 23:09, 2 January 2025 (UTC)
- I wouldn't trust anything factual the person would have to say, but I wouldn't assume they were malicious, which is the entire point of WP:AGF. Gnomingstuff (talk) 16:47, 2 January 2025 (UTC)
- WP:AGF is not a death pact though. At times you should be suspicious. Do you think that if a user, who you already have suspicions of, is also using an LLM to generate their comments, that that doesn't have any effect on those suspicions? Photos of Japan (talk) 21:44, 2 January 2025 (UTC)
- So… If you suspect that someone is not arguing in good faith… just stop engaging them. If they are creating walls of text but not making policy based arguments, they can be ignored. Resist the urge to respond to every comment… it isn’t necessary to “have the last word”. Blueboar (talk) 21:57, 2 January 2025 (UTC)
- As the person just banned at ANI for persistently using LLMs to communicate demonstrates, you can't "just stop engaging them". When they propose changes to an article and say they will implement them if no one replies then somebody has to engage them in some way. It's not about trying to "have the last word", this is a collaborative project, it generally requires engaging with others to some degree. When someone like the person I linked to above (now banned sock), spams low quality comments across dozens of AfDs, then they are going to waste people's time, and telling others to just not engage with them is dismissive of that. Photos of Japan (talk) 22:57, 2 January 2025 (UTC)
- That they've been banned for disruption indicates we can do everything we need to do to deal with bad faith users of LLMs without assuming that everyone using an LLM is doing so in bad faith. Thryduulf (talk) 00:33, 3 January 2025 (UTC)
- I don't believe we should assume everyone using an LLM is doing so in bad faith, so I'm glad you think my comment indicates what I believe. Photos of Japan (talk) 01:09, 3 January 2025 (UTC)
- No -- whatever you think of LLMs, the reason they are so popular is that the people who use them earnestly believe they are useful. Claiming otherwise is divorced from reality. Even people who add hallucinated bullshit to articles are usually well-intentioned (if wrong). Gnomingstuff (talk) 06:17, 2 January 2025 (UTC)
- Comment I have no opinion on this matter, however, note that we are currently dealing with a real-world application of this at ANI and there's a generalized state of confusion in how to address it. Chetsford (talk) 08:54, 2 January 2025 (UTC)
- Yes I find it incredibly rude for someone to procedurally generate text and then expect others to engage with it as if they were actually saying something themselves. Simonm223 (talk) 14:34, 2 January 2025 (UTC)
- Yes, mention that use of an LLM should be disclosed and that failure to do so is like not telling someone you are taping the call. Selfstudier (talk) 14:43, 2 January 2025 (UTC)
- I could support general advice that if you're using machine translation or an LLM to help you write your comments, it can be helpful to mention this in the message. The tone to take, though, should be "so people won't be mad at you if it screwed up the comment" instead of "because you're an immoral and possibly criminal person if you do this". WhatamIdoing (talk) 07:57, 3 January 2025 (UTC)
- No. When someone publishes something under their own name, they are incorporating it as their own statement. Plagiarism from an AI or elsewhere is irrelevant to whether they are engaging in good faith. lethargilistic (talk) 17:29, 2 January 2025 (UTC)
- Comment LLMs know a few tricks about logical fallacies and some general ways of arguing (rhetoric), but they are incredibly dumb at understanding the rules of Misplaced Pages. You can usually tell this because it looks like incredibly slick and professional prose, but somehow it cannot get even the simplest points about the policies and guidelines of Misplaced Pages. I would indef such users for lacking WP:CIR. tgeorgescu (talk) 17:39, 2 January 2025 (UTC)
- That guideline states "Sanctions such as blocks and bans are always considered a last resort where all other avenues of correcting problems have been tried and have failed." Gnomingstuff (talk) 19:44, 2 January 2025 (UTC)
- WP:CIR isn't a guideline, but an essay. Relevantly though it is being cited at this very moment in an ANI thread concerning a user who can't/won't communicate without an LLM. Photos of Japan (talk) 20:49, 2 January 2025 (UTC)
- I blocked that user as NOTHERE a few minutes ago after seeing them (using ChatGPT) make suggestions for text to live pagespace while their previous bad behaviors were under discussion. AGF is not a suicide pact. BusterD (talk) 20:56, 2 January 2025 (UTC)
- No - Not a good or bad faith issue. PackMecEng (talk) 21:02, 2 January 2025 (UTC)
- Yes Using a 3rd party service to contribute to the Misplaced Pages on your behalf is clearly bad-faith, analogous to paying someone to write your article. Zaathras (talk) 14:39, 3 January 2025 (UTC)
- Its a stretch to say that a newbie writing a comment using AI is automatically acting in bad faith and not here to build an encyclopedia. PackMecEng (talk) 16:55, 3 January 2025 (UTC)
- That's true, but this and other comments here show that not a few editors perceive it as bad-faith, rude, etc. I take that as an indication that we should tell people to avoid doing this when they have enough CLUE to read WP:AGF and are making an effort to show they're acting in good faith. Daß Wölf 23:06, 9 January 2025 (UTC)
- Comment Large language model AI like Chat GPT are in their infancy. The culture hasn't finished its initial reaction to them yet. I suggest that any proposal made here have an automatic expiration/required rediscussion date two years after closing. Darkfrog24 (talk) 22:42, 3 January 2025 (UTC)
- No – It is a matter of how you use AI. I use Google translate to add trans-title parameters to citations, but I am careful to check for Google's output making for good English as well as reflecting the foreign title when it is a language I somewhat understand. I like to think that I am careful, and I do not pretend to be fluent in a language I am not familiar with, although I usually don't announce the source of such a translation. If an editor uses AI profligately and without understanding the material generated, then that is the sin; not AI itself. Dhtwiki (talk) 05:04, 5 January 2025 (UTC)
- There's a legal phrase, "when the exception swallows the rule", and I think we might be headed there with the recent LLM/AI discussions.
- We start off by saying "Let's completely ban it!" Then in discussion we add "Oh, except for this very reasonable thing... and that reasonable thing... and nobody actually meant this other reasonable thing..."
- The end result is that it's "completely banned" ...except for an apparent majority of uses. WhatamIdoing (talk) 06:34, 5 January 2025 (UTC)
- Do you want us to reply to you, because you are a human? Or are you just posting the output of an LLM without bothering to read anything yourself? DS (talk) 06:08, 7 January 2025 (UTC)
- Most likely you would reply because someone posted a valid comment and you are assuming they are acting in good faith and taking responsibility for what they post. To assume otherwise is kind of weird and not inline with general Misplaced Pages values. PackMecEng (talk) 15:19, 8 January 2025 (UTC)
- No The OP seems to misunderstand WP:DGF which is not aimed at weak editors but instead exhorts stronger editors to lead by example. That section already seems to overload the primary point of WP:AGF and adding mention of AI would be quite inappropriate per WP:CREEP. Andrew🐉(talk) 23:11, 5 January 2025 (UTC)
- No. Reading the current text of the section, adding text about AI would feel out-of-place for what the section is about. —pythoncoder (talk | contribs) 05:56, 8 January 2025 (UTC)
- No, this is not about good faith. Adumbrativus (talk) 11:14, 9 January 2025 (UTC)
- Yes. AI use is not a demonstration of bad faith (in any case not every new good-faith editor is familiar with our AI policies), but it is equally not a "demonstration of good faith", which is what the WP:DGF section is about.
- It seems some editors are missing the point and !voting as if every edit is either a demonstration of good faith or bad faith. Most interactions are neutral and so is most AI use, but I find it hard to imagine a situation where AI use would point away from unfamiliarity and incompetence (in the CIR sense), and it often (unintentionally) leads to a presumption of laziness and open disinterest. It makes perfect sense to recommend against it. Daß Wölf 22:56, 9 January 2025 (UTC)
- Indeed most kinds of actions don't inherently demonstrate good or bad. The circumspect and neutral observation that "AI use is not a demonstration of bad faith... but it is equally not a 'demonstration of good faith'" does not justify a proposal to one-sidedly say just half. And among all the actions that don't necessarily demonstrate good faith (and don't necessarily demonstrate bad faith either), it is not the purpose of "demonstrate good faith" and the broader guideline, to single out one kind of action to especially mention negatively. Adumbrativus (talk) 04:40, 13 January 2025 (UTC)
- Yes. Per Dass Wolf, though I would say passing off a completely AI-generated comment as your own anywhere is inherently bad-faith and one doesn't need to know Wiki policies to understand that. JoelleJay (talk) 23:30, 9 January 2025 (UTC)
- Yes. Sure, LLMs may have utility somewhere, and it might be a crutch for people unfamiliar with English, but as I've said above in the other AI RfC, that's a competence issue. This is about comments eating up editor time, energy, about LLMs easily being used to ram through changes and poke at editors in good standing. I don't see a case wherein a prospective editor's command of policy and language is good enough to discuss with other editors while being bad enough to require LLM use. Iseulttalk to me 01:26, 10 January 2025 (UTC)
- Good faith is separate from competence. Trying to do good is separate from having skills and knowledge to achieve good results. Adumbrativus (talk) 04:40, 13 January 2025 (UTC)
- No - anyone using a washing machine to wash their clothes must be evil and inherently lazy. They cannot be trusted. ... Oh, sorry, wrong century. Regards, --Goldsztajn (talk) 01:31, 10 January 2025 (UTC)
- No - As long as a person understands (and knows) what they are talking about, we shouldn't discriminate against folks using generative AI tech for grammar fixes or minor flow improvements. Yes, AI can create walls of text, and make arguments not grounded in policy, but we could do that even without resorting to generative AI. Sohom (talk) 11:24, 13 January 2025 (UTC)
- To expand on my point above. Completely AI generated comments (or articles) are obviously bad, but "using AI" should be thrown into the same cross-hairs as completely AI generated comments. Sohom (talk) 11:35, 13 January 2025 (UTC)
- @Sohom Datta You mean shouldn't be thrown? I think that would make more sense given the context of your original !vote. Duly signed, ⛵ WaltClipper -(talk) 14:08, 14 January 2025 (UTC)
- No. Don't make any changes. It's not a good faith/bad faith issue. The 'yes' arguments are most unconvincing with very bizarre analogies to make their point. Here, I can make one too: "Don't edit with AI; you wouldn't shoot your neighbor's dog with a BB-gun, would you?" Duly signed, ⛵ WaltClipper -(talk) 14:43, 13 January 2025 (UTC)
== Edit quality in Android application ==
Several contributors, including myself, don't understand the point of the ‘edit quality’ mention in the Android application.
The mw:Wikimedia Apps/Android FAQ states that edit quality is ‘based on how many of your edits were reverted’. Is this the only criterion used? If so, calling it ‘edit quality’ is presumptuous, not to say irresponsible, given that there should be so many criteria. What are the different levels of judgement: excellent, good, bad, horrible, etc. ? What does this mean for developers, bots, patrollers and administrators? Whatever the case, the consequences of such a statement could be quite serious.
It's clear that the choice to keep this mention in the application doesn't just concern the application developers or even a small team, but all Misplaced Pages contributors, whatever their language. So we think it's important that you answer our questions. (As my English isn't very good, could you answer me in basic English, or at least without figures of speech? Thank you in advance.) Abalg (talk) 13:12, 7 January 2025 (UTC)
- Hi Abalg. I agree with the core of your message. But I don't get why you have been redirected to English Village Pump. This should be discussed somewhere on Meta or Mediawiki wiki, as it's relevant for all Misplaced Pages communities, not only the English one. (You and I are from fr-wp.) Best, — Jules* 13:28, 7 January 2025 (UTC)
- Thanks Jules*. According to Pyb, the place to discuss would be there : mw:Talk:Wikimedia Apps#Edit quality in Android application. --Abalg (talk) 13:43, 7 January 2025 (UTC)
Allowing non-admin "delete" closures at RfD
At Misplaced Pages:Deletion review#Clock/calendar, a few editors (Enos733 and Jay, while Robert McClenon and OwenX hinted at it) expressed support for allowing non-administrators to close RfD discussions as "delete". While I don't personally hold strong opinions in this regard, I would like for this idea to be discussed here. JJPMaster (she/they) 13:13, 7 January 2025 (UTC)
- That would not be helpful. -- Tavix 14:10, 7 January 2025 (UTC)
- While I have no issue with the direction the linked discussion has taken, I agree with almost every contributor there: As a practice I have zero interest in generally allowing random editors closing outside their permissions. It might make DRV a more chatty board, granted. BusterD (talk) 15:02, 7 January 2025 (UTC)
- Tamzin makes a reasonable case in their comment below. When we have already chosen to trust certain editors with advanced permissions, we might allow those folks to utilize them as fully as accepted practice allows. Those humans already have skin in the game. They are unlikely to act rashly. BusterD (talk) 19:32, 7 January 2025 (UTC)
- To me, non-admin delete closes at any XfD have always seemed inconsistent with what we say about how adminship and discussion closing work. I would be in violation of admin policy if I deleted based on someone else's close without conducting a full review myself, in which case, what was the point of their close? It's entirely redundant to my own work. That said, I can't really articulate a reason that this should be allowed at some XfDs but not others, and it seems to have gone fine at CfD and TfD. I guess call me neutral. What I'd be more open to is allowing page movers to do this. Page movers do have the tools to turn a bluelink red, so it doesn't create the same admin accountability issue if I'm just cleaning up the stray page left over from a page mover's use of a tool that they were duly granted and subject to their own accountability rules for. We could let them move a redirect to some other plausible title (this would violate WP:MOVEREDIRECT as currently written but I think I'd be okay with making this a canonical exception), and/or allow moving to some draftspace or userspace page and tagging for G6, as we do with {{db-moved}}. I'll note that when I was a non-admin pagemover, I did close a few things as delete where some edge case applied that let me effect the deletion using only suppressredirect, and no one ever objected. -- Tamzin (they|xe|🤷) 19:07, 7 January 2025 (UTC)
- I see that I was sort of vague, which is consistent with the statement that I hinted at allowing non-admin delete closures. My main concern is that I would like to see our guidelines and our practice made consistent, either by changing the guidelines or changing the practice. It appears that there is a rough consensus emerging that non-admin delete closures should continue to be disallowed in RFD, but that CFD may be a special case. So what I am saying is that if, in practice, we allow non-admin Delete closures at CFD, the guideline should say something vague to that effect.
- I also see that there is a consensus that DRV can endorse irregular non-admin closures, including irregular non-admin Delete closures. Specifically, it isn't necessary for DRV to vacate the closure for an uninvolved admin to close. A consensus at DRV, some of whose editors will be uninvolved admins, is at least as good a close as a normal close by an uninvolved admin.
- Also, maybe we need clearer guidance about non-admin Keep closures of AFDs. I think that if an editor is not sure whether they have sufficient experience to be closing AFDs as Keep, they don't have enough experience. I think that the guidance is clear enough in saying that administrator accountability applies to non-admin closes, but maybe it needs to be further strengthened, because at DRV we sometimes deal with non-admin closes where the closer doesn't respond to inquiries, or is rude in response to them.
- Also, maybe we need clearer guidance about non-admin No Consensus closures of AFDs. In particular, a close of No Consensus is a contentious closure, and should either be left to an admin, or should be Relisted.
- Robert McClenon (talk) 19:20, 7 January 2025 (UTC)
- As for "I can't really articulate a reason that this should be allowed at some XfDs", the argument is that more work is needed to enact closures at TfD and CfD (namely orphaning templates and emptying/moving/merging categories). Those extra steps aren't present at RfD. At most, there are times when it's appropriate to unlink the redirect or add WP:RCATs but those are automated steps that WP:XFDC handles. From my limited experience at TfD and CfD though, it does seem that the extra work needed at closure does not compensate for the extra work from needing two people reviewing the closure (especially at CfD because a bot handles the clean-up). Consistency has come up and I would much rather consistently disallow non-admin delete closures at all XfD venues. I know it's tempting for non-admins to think they're helping by enacting these closures but it's not fair for them to be spinning their wheels. As for moving redirects, that's even messier than deleting them. There's a reason that WP:MOVEREDIRECT advises not to move redirects except for limited cases when preserving history is important. -- Tavix 20:16, 7 January 2025 (UTC)
- @Tamzin: I do have one objection to this point of redundancy, which you are quite familiar with. Here, an AfD was closed as "transwiki and delete", however, the admin who did the closure does not have the technical ability to transwiki pages to the English Wikibooks, meaning that I, who does, had to determine that the outcome was actually to transwiki rather than blindly accepting a request at b:WB:RFI. Then, I had to mark the pages for G6 deletion, that way an admin, in this case you, could determine that the page was ready to be deleted. Does this mean that that admin who closed the discussion shouldn't have closed it, since they only have the technical ability to delete, not transwiki? Could I have closed it, having the technical ability to transwiki, but not delete? Either way, someone else would have had to review it. Or, should only people who have importing rights on the target wiki and admin rights on the English Misplaced Pages be allowed to close discussions as "transwiki and delete"? JJPMaster (she/they) 12:04, 8 January 2025 (UTC)
- I do support being explicit when a non-administrator can close a discussion as "delete" and I think that explicitly extending to RfD and CfD is appropriate. First, there can be a backlog in both of these areas and there are often few comments in each discussion (and there is usually not the same passion as in an AfD). Second, the delete close of a non-administrator is reviewed by an administrator before action is taken to delete the link or category (a delete close is a two-step process, the writeup and the delete action, so in theory the administrator's workload is reduced). Third, non-admins do face administrator accountability for their actions, and can be subject to sanction. Fourth, the community has a role in reviewing closing decisions at DRV, so there is already a process in place to check an inexperienced editor or poor close. Finally, with many, if not most, discussions for deletion the outcome is largely straightforward. --Enos733 (talk) 20:01, 7 January 2025 (UTC)
- There is currently no rule against non-admin delete closures as far as I know; the issue is the practical one that you don't have the ability to delete. However, I have made non-admin delete closures at AfD. This occurred when an admin deleted the article under consideration (usually for COPYVIO) without closing the related AfD. The closures were not controversial and there was no DRV. Hawkeye7 (discuss) 20:31, 7 January 2025 (UTC)
- The situation you're referring to is an exception allowed per WP:NACD:
If an administrator has deleted a page (including by speedy deletion) but neglected to close the discussion, anyone with a registered account may close the discussion provided that the administrator's name and deletion summary are included in the closing rationale.
-- Tavix 20:37, 7 January 2025 (UTC)
- Bad idea to allow; this sort of closure is just busywork that imposes more work on the admin, who then has to review the arguments, close, and then delete. Graeme Bartlett (talk) 22:05, 7 January 2025 (UTC)
- Is this the same as #Non-Admin XFD Close as Delete above? Anomie⚔ 23:04, 7 January 2025 (UTC)
- Yes, User:Anomie. Same issue coming from the same DRV. Robert McClenon (talk) 03:52, 8 January 2025 (UTC)
- (1) As I've also noted in the other discussion, the deletion process guidelines at WP:NACD do say non-admins shouldn't do "delete" closures and do recognize exceptions for CfD and TfD. There isn't a current inconsistency there between guidelines and practice.
(2) In circumstances where we do allow for non-admin "delete" closures, I would hope that the implementing admin isn't fully reviewing the discussion de novo before implementing, but rather giving deference to any reasonable closure. That's how it goes with requested move closers asking for technical help implementing a "moved" closure at WP:RM/TR (as noted at WP:RMNAC, the closure will "generally be respected by the administrator (or page mover)" but can be reverted by an admin if "clearly improper"). SilverLocust 💬 08:41, 9 January 2025 (UTC)
- Comment - A couple things to note about the CFD process: It very much requires work by admins. The non-admin notes info about the close at WT:CFD/Working, and then an admin enters the info on the CFD/Working page (which is protected) so that the bot can perform the various actions. Remember that altering a category is potentially more labour intensive than merely editing or deleting a single page - every page in that category must be edited, and then the category deleted. (There are other technical things involved, like the mess that template transclusion can cause, but let's keep it simple.) So I wouldn't suggest that that process is very useful as a precedent for anything here. It was done at a time when there was a bit of a backlog at CfD, and this was a solution some found to address that. Also - since then, I think at least one of the regular non-admin closers there is now an admin. So there is that as well. - jc37 09:14, 9 January 2025 (UTC)
- If the expectation is that an admin needs to review the deletion discussion to ensure they agree with that outcome before deleting via G6, as multiple people here are suggesting, then I'm not sure this is worthwhile. However, I have had many admins delete pages I've tagged with G6, and I have been assuming that they only check that the discussion was indeed closed as delete, and trust the closer to be responsible for the correctness of it. This approach makes sense to me, because if a non-admin is competent to close and be responsible for any other outcome of a discussion, I don't see any compelling reason they can't be responsible for a delete outcome and close accordingly. —Compassionate727 19:51, 9 January 2025 (UTC)
- Some closers, and you're among them, have closing accuracy similar to many sysops. But the sysop can't/shouldn't "trust" that your close is accurate. Trustworthy though you are, the sysop must, at very minimum, check firstly that the close with your signature on it was actually made by you (signatures are easily copied), secondly that the close wasn't manifestly unreasonable, and thirdly that the CSD is correct. WP:DRV holds the deleting sysop responsible for checking that the CSD were correctly applied. G6 is for uncontroversial deletions, and if there's been an XFD, then it's only "uncontroversial" if the XFD was unanimous or nearly so. We do have sysops who'll G6 without checking carefully, but they shouldn't. Basically, non-admin closing XFDs doesn't save very much sysop time. I think that if your motive as a non-admin is to relieve sysops of labour, the place you're of most use is at RfC.—S Marshall T/C 11:28, 12 January 2025 (UTC)
if your motive as a non-admin is to relieve sysops of labour, the place you're of most use is at RfC
alternatively you should consider becoming an administrator yourself. Thryduulf (talk) 13:20, 12 January 2025 (UTC)
- If you're willing to tolerate the RFA process.—S Marshall T/C 15:24, 12 January 2025 (UTC)
- In all the cases I have dealt with, the admin's reason for deletion (usually copyvio) was completely different to the issues being debated in the AfD (usually notability). The closing statement was therefore something like "Discussion is now moot due to article being deleted for <reason> by <admin>". Hawkeye7 (discuss) 20:10, 14 January 2025 (UTC)
- I think most all the time, experienced closers will do a great job and that will save admin time because they will not have to construct and explain the close from scratch, but there will be some that are bad and that will be costly in time not just for the admin but for the project's goal of completing these issues and avoiding disruption. I think that lost time is still too costly, so I would oppose non-admin delete closes. (Now if there were a proposal for a process to make a "delete-only admin permission" that would be good -- such motivated specialists would likely be more efficient.) Alanscottwalker (talk) 16:44, 12 January 2025 (UTC)
- As I said at the "Non-Admin XFD Close as Delete" section, I support non-admins closing RfDs as Delete. If TfDs have been made an exception, RfDs can be too, especially considering RfD backlogs. Closing a heavily discussed nomination at RfD is more about the reading, analysis and thought process in arriving at the outcome, and less about the technicality of the subsequent page actions. I don't see a significant difference between non-admins closing discussions as Delete vs non-Delete. It will help make non-admins mentally prepared to advance to admin roles. Jay 💬 14:53, 14 January 2025 (UTC)
== Placement of dynamic list template ==
Where does the template {{dynamic list}} actually go? It superficially looks like it should be a hatnote at the top of the article, but there are a great number (perhaps a majority) of dynamic list articles where the template is placed under the infobox and lead. I can't find anything in the MOS specifically addressing this. Anonymous 21:44, 7 January 2025 (UTC)
- The template documentation, which you linked to, addresses this: Place this notification template immediately before the applicable incomplete list (i.e. in a section, not at the top of the article). This template will automatically add the article to Category:Dynamic lists. voorts (talk/contributions) 21:47, 7 January 2025 (UTC)
- In many cases, the template is neither at the very top nor within a section. Just to be clear, this placement is not correct? Anonymous 22:25, 7 January 2025 (UTC)
- It should go right above the list. voorts (talk/contributions) 22:40, 7 January 2025 (UTC)
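For illustration, here is a minimal sketch in wiki markup of the placement described above; the section heading and list entries are invented placeholders:
<pre>
== Notable alumni ==
{{Dynamic list}}
* Alice Example (1990)
* Bob Example (2003)
</pre>
The notice sits immediately above the incomplete list it refers to, inside the relevant section, rather than as a hatnote at the top of the article.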
== Should WP:NOTDIRECTORY be more specific about phone numbers, etc.? ==
Apparently the WP:NOT talk page isn't well-watched, so I'd like to draw some attention to Should WP:NOTDIRECTORY be more specific about phone numbers, etc.? over there. Gamapamani (talk) 04:05, 8 January 2025 (UTC)
- That talk page has been checked by at least 100 registered editors during the last month, which makes it one of our better watched pages. WhatamIdoing (talk) 18:22, 9 January 2025 (UTC)
Apparently
. :) Getting no response in 36 hours was surprising. The silence could mean "no objection" (since I'm not proposing a change, just a clarification that already used to be there), but since it's a policy page, it would be good to have something more tangible. Gamapamani (talk) 08:20, 10 January 2025 (UTC)
== Does my name technically violate the promotional username policy? ==
My username, as of right now, is Tenebre.Rosso.Sangue995320, which is a reference to a song with the same name. I recently read the username policy and thought that this might violate it. Can someone help me out? Tenebre_Rosso_Sangue, ULTRAKILLing Vandals! Call for Medic! My Stats! 23:48, 9 January 2025 (UTC)
- No, it does not. voorts (talk/contributions) 00:11, 10 January 2025 (UTC)
== Technical section headings ==
I had expected MOS:HEADINGS to state a preference toward easily understood headings over complicated technical headings. No such preference was stated. Am I missing something? This subject is currently at issue with regard to "Tectonic setting" vs. "Background" at Talk:2020 Sparta earthquake/GA1.-TonyTheTiger (T / C / WP:FOUR / WP:CHICAGO / WP:WAWARD) 15:32, 12 January 2025 (UTC)
- You may be looking for WP:MTAU? It doesn't specifically address headings, but in general makes the point that we should strive to make articles as accessible as possible for the general reader. This is part of GA criterion 1a. Caeciliusinhorto (talk) 15:50, 12 January 2025 (UTC)
- That is helpful. I guess there is no such guidance for the section headings. Thx.-TonyTheTiger (T / C / WP:FOUR / WP:CHICAGO / WP:WAWARD) 05:33, 13 January 2025 (UTC)
- I wouldn't say "Tectonic setting" and "Background" are interchangeable for any reader, the first is much more specific. CMD (talk) 00:25, 13 January 2025 (UTC)
- In this review the set of section headings was Tectonic setting, Earthquake, Impact or Aftermath, making for an odd table of contents for the non-expert reader because one section heading was for experts and the others were for the general audience. The tectonics were historical background, so I advised making all the section titles suitable for the general audience.-TonyTheTiger (T / C / WP:FOUR / WP:CHICAGO / WP:WAWARD) 05:30, 13 January 2025 (UTC)
== Modifying the first sentence of BLPSPS ==
FYI: a discussion has been started at WT:BLP re: modifying the text of BLPSPS. FactOrOpinion (talk) 14:23, 13 January 2025 (UTC)
== Upgrade MOS:ALBUM to an official guideline ==
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
Misplaced Pages:WikiProject_Albums/Album_article_style_advice is an essay. I've been editing since 2010, and for the entire duration of that, this essay has been referred to and used extensively, and has even guided discussions regarding ascertaining if sources are reliable. I propose that it be formally upgraded to a status as an MOS guideline parallel to MOS:MUSIC.--3family6 (Talk to me | See what I have done) 14:28, 13 January 2025 (UTC)
- I'm broadly in favor of this proposal—I looked over the essay and most of it is aligned with what seems standard in album articles—but there are a few aspects that feel less aligned with current practice, which I'd want to reexamine before we move forward with promoting this:
- The section Recording, production suggests
What other works of art is this producer known for?
as one of the categories of information to include in a recording/production section. This can be appropriate in some cases (e.g., the Nevermind article discusses how Butch Vig's work with Killdozer inspired Nirvana to try and work with him), but recommending it outright seems like it'd risk encouraging people to WP:COATRACK. My preference would be to cut the sentence I quoted and the one immediately following it.
- The section Track listing suggests that the numbered-list be the preferred format for track listings, with other formats like {{Track listing}} being alternative choices for "more complicated" cases. However, in my experience, using {{Track listing}} rather than a numbered list tends to be the standard. All of the formatting options currently listed in the essay should continue to be mentioned, but I think portraying {{Track listing}} as the primary style would be more reflective of current practice.
- The advice in the External links section seems partially outdated. In my experience, review aggregators like Metacritic are conventionally discussed in the "Critical reception" section instead these days, and I'm uncertain to what extent we still link to databases like Discogs even in ELs.
- (As a disclaimer, my familiarity with album articles comes mostly from popular-music genres, rock and hip-hop in particular. I don't know if typical practice is different in areas like classical or jazz.) Overall, while I dedicated most of my comment volume to critiques, these are a fairly minor set of issues in what seems like otherwise quite sound guidance. If they're addressed, it's my opinion that this essay would be ready for prime time. ModernDayTrilobite (talk • contribs) 15:19, 13 January 2025 (UTC)
- I'd agree with all of this, given my experience. The jazz and classical that I've seen is mostly the same.--3family6 (Talk to me | See what I have done) 16:57, 13 January 2025 (UTC)
- Me too, though sometime last year, I unexpectedly had some (inexplicably strong) pushback on the tracklist part with an editor or two. In my experience, using the track list template is the standard, and I can't recall anyone giving me any pushback for it, but some editors apparently prefer just using numbers. I guess we can wait and see if there's any current pushback on it. Sergecross73 msg me 17:01, 13 January 2025 (UTC)
- Was it pushback for how you had rendered the tracklist, or an existing tracklist being re-formatted by you or them?--3family6 (Talk to me | See what I have done) 18:13, 13 January 2025 (UTC)
- They came to WT:ALBUMS upset that another editor was changing track lists from "numbered" to "template" formats. My main response was surprised, because in my 15+ years of article creations and rewrites, I almost exclusively used the tracklist template, and had never once received any pushback.
- So basically, I personally agree with you and MDT above, I'm merely saying I've heard someone disagree. I'll try to dig up the discussion. Sergecross73 msg me 17:50, 14 January 2025 (UTC)
- I found this one from about a year ago, though this was more about sticking to the current wording as is than it was about opposition against changing it. Not sure if there was another one or not. Sergecross73 msg me 18:14, 14 January 2025 (UTC)
- I remember one editor being strongly against the template, but they are now community banned. Everyone else I've seen so far uses the template. AstonishingTunesAdmirer 連絡 22:25, 13 January 2025 (UTC)
- I can see the numbered-list format being used for very special cases like Guitar Songs, which was released with only two songs, and had the same co-writers and producer. But I imagine we have extremely few articles that are like that, so I believe the template should be the standard. Elias 🦗🐜 12:23, 14 January 2025 (UTC)
- ModernDayTrilobite, regarding linking to Discogs, some recent discussions I was in at the end of last year indicate that it is common to still link to Discogs as an EL, because it gives more exhaustive track, release history, and personnel listings that Misplaced Pages - generally - should not.--3family6 (Talk to me | See what I have done) 14:14, 15 January 2025 (UTC)
- Thank you for the clarification! In that case, I've got no objection to continuing to recommend it. ModernDayTrilobite (talk • contribs) 14:37, 15 January 2025 (UTC)
- There were several discussions about Discogs and an RfC here. As a user of {{Discogs master}}, I agree with what other editors said there. We can't mention every version of an album in an article, so an external link to Discogs is invaluable IMO. AstonishingTunesAdmirer 連絡 22:34, 13 January 2025 (UTC)
- We badly need this to become part of the MOS. As it stands, some editors have rejected the guidelines as they're just guidelines, not policies, which defeats the object of having them in the first place. Popcornfud (talk) 16:59, 13 January 2025 (UTC)
- I mean, they are guidelines, but deviation per WP:IAR should be for a good reason, not just because someone feels like it.--3family6 (Talk to me | See what I have done) 18:14, 13 January 2025 (UTC)
- I am very much in favor of this becoming an official MOS guideline per User:Popcornfud above. Very useful as a template for album articles. JeffSpaceman (talk) 21:03, 13 January 2025 (UTC)
- I recently wrote my first album article and this essay was crucial during the process, to the extent that me seeing this post is like someone saying "I thought you were already an admin" in RFA; I figured this was already a guideline. I would support it becoming one. DrOrinScrivello (talk) 02:00, 14 January 2025 (UTC)
- I have always wondered why all this time these pointers were categorized as an essay. It's about time we formalize them; as said earlier, there are some outdated things that need to be discussed (like in WP:PERSONNEL which advises not to use stores for credits, even though in the streaming era we have more and more albums/EPs that never get physical releases). Also, song articles should also have their own guidelines, IMV. Elias 🦗🐜 12:19, 14 January 2025 (UTC)
- I'd be in favor of discussing turning the outline at the main page for WP:WikiProject Songs into a guideline.--3family6 (Talk to me | See what I have done) 12:53, 14 January 2025 (UTC)
- I get the sense it'd have to be a separate section from this one, given the inherent complexity of album articles as opposed to that of songs. Elias 🦗🐜 14:56, 14 January 2025 (UTC)
- Yes, I think it should be a separate, parallel guideline.--3family6 (Talk to me | See what I have done) 16:53, 14 January 2025 (UTC)
- I think it needs work--I recall that a former longtime album editor, Richard3120 (not pinging them, as I think they are on another break to deal with personal matters), floated a rewrite a couple of years ago. Just briefly: genres are a perennial problem, editors love unsourced exact release dates and chronology built on OR (many discography pages are sourced only to random Billboard, AllMusic, and Discogs links, rather than sources that provide a comprehensive discography), and, like others, I think all the permutations of reissue and special edition track listings have gotten out of control, as well as these long lists of not notable personnel credits (eight second engineers, 30 backing vocalists, etc.). Also agree that the track listing template issue needs consensus; if three are acceptable, then three are acceptable--again, why change it to accommodate the names of six not notable songwriters? There's still a divide on the issue of commercial links in the body of the article--I have yet to see a compelling reason for their inclusion (WP is, uh, not for sale, remember?), when a better source can always be found (and editors have noted, not that I've made a study of it, that iTunes often uses incorrect release dates for older albums). But I also acknowledge that since this "floated" rewrite never happened, the community at large may be satisfied with the guidelines. Caro7200 (talk) 13:45, 14 January 2025 (UTC)
- Regarding the personnel and reissue/special edition track listing, I don't know if I can dig up the discussions, but there seems to be a consensus against being exhaustive and instead to put an external link to Discogs. I fail to see how linking to Billboard or AllMusic links for a release date on discographies is OR, unless you're talking about in the lead. At least in the case of Billboard, that's an established RS (AllMusic isn't the most accurate with dates).-- 3family6 (Talk to me | See what I have done) 13:53, 14 January 2025 (UTC)
- I meant that editors often use discography pages to justify chronology, even though Billboard citations are simply supporting chart positions, Discogs only states that an album exists, and AllMusic entries most often do not give a sequential number in their reviews, etc. There is often not a source (or sources) that states that the discography is complete, categorized properly, and in order. Caro7200 (talk) 14:05, 14 January 2025 (UTC)
- Ah, okay, I understand now.--3family6 (Talk to me | See what I have done) 16:54, 14 January 2025 (UTC)
Myself, I've noticed that some of the sourcing recommendations are contrary to WP:RS guidance (more strict, actually!) or otherwise outside consensus. For instance, MOS:ALBUMS currently says to not use vendors for track list or personnel credits, linking to WP:AFFILIATE in WP:RS, but AFFILIATE actually says that such use is acceptable but not preferred. Likewise, MOS:ALBUMS says not to use scans of liner notes, which is 1. absurd, and 2. not actual consensus; in the discussions I've had, the consensus is that actual scans are fine (which makes sense, as a scan is a digital archived copy of the source).--3family6 (Talk to me | See what I have done) 14:05, 14 January 2025 (UTC)
- The tendency to be overreliant on liner notes is also a detriment. I've encountered some liner notes on physical releases that have missing credits (e.g. only the producers are credited and not the writers), or there are outright no notes at all. Tangentially, some physical releases of albums like Still Over It and Pink Friday 2 actually direct consumers to official websites to see the credits, which has the added problem of link rot (the credits website for Still Over It no longer works and is a permanent dead link). Elias 🦗🐜 15:04, 14 January 2025 (UTC)
- That turns editors to using stores like Spotify or Apple Music as the next-best choice, but a new problem arises -- the credits for a specific song can vary depending on the site you use. One important thing we should likely discuss is what sources should take priority wrt credits. For an example of what I mean, take "No Love". Go to Spotify to check its credits and you'd find the name Sean Garrett -- head to Apple Music, however, and that name is missing. I assume these digital credits have a chance to deviate from the albums' physical liner notes as well, if there is one available. Elias 🦗🐜 15:11, 14 January 2025 (UTC)
- Moreover, the credits in stores are not necessarily correct either. An example I encountered was on Tidal, an amazing service and the only place where I could find detailed credits for one album (not even liner notes had them, since back then artists tried to avoid sample clearance). However, as I was double checking everything, one song made no sense: in its writing credits I found "Curtis Jackson", with a link to 50 Cent's artist page. It seemed extremely unlikely that they would collaborate, nor any of his work was sampled here. Well, it turns out this song sampled a song written by Charles Jackson of The Independents. AstonishingTunesAdmirer 連絡 16:39, 14 January 2025 (UTC)
- PSA and AstonishingTunesAdmirer, I agree that it's difficult. I usually use both the physical liner notes and online streaming and retail sources to check for completeness and errors. I've also had the experience of Tidal being a great resource, and, luckily, so far I've yet to encounter an error. Perhaps advice for how to check multiple primary sources here for errors should be added to the proposed guideline.--3family6 (Talk to me | See what I have done) 17:00, 14 January 2025 (UTC)
- At this point, I am convinced as well that finding the right sources for credits should be on a case-by-case basis, with the right amount of discretion from the editor. While I was creating List of songs recorded by SZA, which included several SoundCloud songs where it was extremely hard to find songwriting credits, I found the Songview database useful for filling those missing gaps. More or less the credits there align with what's on the liner notes/digital credits. However, four issues, most of which you can see by looking at the list I started: 1) they don't necessarily align with physical liner notes either, 2) sometimes names are written differently depending on the entry, 3) there are entries where a writer (or co-writer) is unknown, and 4) some of the entries here were never officially released and confirmed as outtakes/leaks (why is "BET Awards 19 Nomination Special" here, whatever that means?). Elias 🦗🐜 22:59, 14 January 2025 (UTC)
- Yeah, I've found it particularly tricky when working on technical personnel (production, engineering, mixing, etc.) and songwriting credits for individuals. I usually use the liner notes (if there are any), check AllMusic and Bandcamp, and also check Tidal if necessary. But I'll also look at Spotify, too. I know they're user-generated, so I don't cite them, but I usually look at Discogs and Genius to get an idea if I'm missing something. Thank you for pointing me to Songview, that will probably also be really helpful. 3family6 (Talk to me | See what I have done) 12:50, 15 January 2025 (UTC)
- (@3family6, please see WP:PROPOSAL for advice on advertising discussions about promoting pages to a guideline. No, you don't have to start over. But maybe add an RFC tag or otherwise make sure that it is very widely publicized.) WhatamIdoing (talk) 23:37, 14 January 2025 (UTC)
- Thank you. I'll notify the Manual of Style people. I did already post a notice at WP:ALBUMS. I'll inform other relevant WikiProjects as well.--3family6 (Talk to me | See what I have done) 12:46, 15 January 2025 (UTC)
Before posting the RfC as suggested by WhatamIdoing, I'm proposing the following changes to the text of MOS:ALBUM as discussed above:
1. Eliminate "What other works of art is this producer known for? Keep the list of other works short, as the producer will likely have their own article with a more complete list." from the "Recording, production" sub-section.
2. Rework the text of the "Style and form" guidance for track listings to the following (an illustrative sketch of the template markup appears below, after this list):
- The track listing should be under a primary heading named "Track listing".
- A track listing should generally be formatted with the {{Track listing}} template. Note, however, that the track listing template forces a numbering system, so tracks originally listed as "A", "B", etc., or with other or no designations, will not appear as such when using the template. Additionally, in the case of multi-disc/multi-sided releases, a new template may be used for each individual disc or side, if applicable.
- Alternate forms, such as a table or a numbered list, are acceptable but usually not preferred. If a table is used, it should be formatted using class="wikitable", with column headings "No.", "Title" and "Length" for the track number, the track title and the track length, respectively (see Help:Table). In special cases, such as Guitar Songs, a numbered list may be the most appropriate format.
3. Move "Critical reception overviews like AcclaimedMusic (using {{Acclaimed Music}}), AnyDecentMusic?, or Metacritic may be appropriate as well." from "External links" to "Album ratings templates" of "Critical reception", right before the sentence about using {{Metacritic album prose}}.
4. Re-write this text from "Sourcing" under "Track listing" from "However, if there is disagreement, there are other viable sources. Only provide a source for a track listing if there are exceptional circumstances, such as a dispute about the writers of a certain track. Per WP:AFFILIATE, avoid commercial sources such as online stores and streaming platforms. In the rare instances where outside citations are required, explanatory text is useful to help other editors know why the album's liner notes are insufficient." to "Per WP:AFFILIATE, commercial sources such as online stores and streaming platforms are acceptable to cite for track list information, but secondary coverage in independent reliable sources is preferred if available." Similarly, in the "Personnel" section, re-write "Similar to the track listing requirements, it is generally assumed that a personnel section is sourced from the liner notes. In some cases, it will be necessary to use third-party sources to include performers who are not credited in the liner notes. If you need to cite these, use {{Cite AV media}} for the liner notes and do not use third party sources such as stores (per WP:AFFILIATE) or scans uploaded to image hosting sites or Discogs.com (per WP:RS)." to "Similar to the track listing requirements, it is generally assumed that a personnel section is sourced from the liner notes. If you need to cite the liner notes, use {{Cite AV media}}. Scans of the physical media that have been uploaded in digital form to repositories or sites such as Discogs are acceptable for verification, but cite the physical notes themselves, not the user-generated transcriptions. Frequently, it will be necessary to use third-party sources to include performers who are not credited in the liner notes. Per WP:AFFILIATE, inline citations to e-commerce or streaming platforms to verify personnel credits are allowed. However, reliable secondary sources are preferred, if available."
5. Additional guidance has been suggested for researching and verifying personnel and songwriting credits. I suggest adding "It is recommended to utilize a combination of the physical liner notes (if they exist) with e-commerce sites such as Apple Music and Amazon, streaming platforms such as Spotify and Tidal, and databases such as AllMusic credits listings and Songview. Finding the correct credits requires careful, case-by-case consideration and editor discretion. If you would like assistance, you can reach out to the albums or discographies WikiProjects." The best section for this is probably in "Personnel", in the paragraph discussing that liner notes can be inaccurate.
6. The excessive listing of personnel has been mentioned. I suggest adding the following to the paragraph in the "Personnel" section beginning with "The credits to an album can be extensive or sparse.": "If the listing of personnel is extensive, avoid excessive, exhaustive lists, in the spirit of WP:INDISCRIMINATE. In such cases, provide an external link to Discogs and include only the major personnel in the list."
If you have any additional suggestions, or suggestions regarding the wording of any of the above (I personally think that four needs to be tightened up or expressed better), please give them. I'm pinging the editors who raised issues with the essay as currently written, or were involved in discussing those issues, for their input regarding the above proposed changes. ModernDayTrilobite, PSA, Sergecross73, AstonishingTunesAdmirer, Caro7200, what do you think? Also, I realize that I never pinged Fezmar9, the author of the essay, for their thoughts on upgrading this essay to a guideline.--3family6 (Talk to me | See what I have done) 17:21, 15 January 2025 (UTC)
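To make item 2 concrete, here is a minimal sketch of the {{Track listing}} template as it is typically used; the titles, writers, and lengths are hypothetical placeholders, and only a few of the template's parameters are shown:
<pre>
{{Track listing
| total_length = 7:30
| title1  = Example Opener
| writer1 = A. Songwriter
| length1 = 3:45
| title2  = Example Closer
| writer2 = B. Songwriter
| length2 = 3:45
}}
</pre>
Likewise, for the liner-notes sourcing discussed in item 4, a citation to physical liner notes with {{Cite AV media}} might look roughly like the following, again with invented details:
<pre>
{{Cite AV media |title=Example Album |type=Liner notes |others=Example Artist |publisher=Example Records |year=2020}}
</pre>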
- The proposed edits all look good to me. I agree there's probably some room for improvement in the phrasing of #4, but in my opinion it's still clear enough as to be workable, and I haven't managed to strike upon any other phrasings I liked better for expressing its idea. If nobody else has suggestions, I'd be content to move forward with the language as currently proposed. ModernDayTrilobite (talk • contribs) 17:37, 15 January 2025 (UTC)
- It might be better to have this discussion on its talk page. That's where we usually talk about changes to a page. WhatamIdoing (talk) 17:38, 15 January 2025 (UTC)
- WhatamIdoing - just the proposed changes, or the entire discussion about elevating this essay to a guideline?--3family6 (Talk to me | See what I have done) 18:21, 15 January 2025 (UTC)
- It would be normal to have both discussions (separately) on that talk page. WhatamIdoing (talk) 18:53, 15 January 2025 (UTC)
- Okay, thank you. I started the proposal to upgrade the essay here, as it would be far more noticed by the community, but I'm happy for everything to get moved there.-- 3family6 (Talk to me | See what I have done) 19:00, 15 January 2025 (UTC)
- These changes look good to me. Although, since we got rid of Acclaimed Music in the articles, we should probably remove it here too. AstonishingTunesAdmirer 連絡 19:36, 15 January 2025 (UTC)
- Sure thing.--3family6 (Talk to me | See what I have done) 20:56, 15 January 2025 (UTC)
== Reverts all edits ==
Hello everyone. I have an idea for the Misplaced Pages coders. Would it be possible for you to design an option that, with the click of a button, automatically reverts all edits of a disruptive user? This idea came to my mind because some people create disposable accounts to cause disruption in all their edits... In this case, a lot of time and energy is consumed by administrators and reverting users to undo all the vandalism. If there were a template that could revert all the edits of a disruptive user with one click, it would be very helpful. If you think regular users might misuse this option, you could limit it to Misplaced Pages administrators only so they can quickly and easily undo the disruption. Hulu2024 (talk) 17:31, 13 January 2025 (UTC)
- Hi @Hulu2024, there's a script that does that: User:Writ Keeper/Scripts/massRollback. Also, editors who use Twinkle can single-click revert all consecutive edits of an editor. Schazjmd (talk) 17:44, 13 January 2025 (UTC)
- Is this tool active in all the different languages of Misplaced Pages? I couldn't perform such an action with the tool you mentioned. Hulu2024 (talk) 17:51, 13 January 2025 (UTC)
- That script requires the Misplaced Pages:Rollback permission, which is available only for admins and other trusted users. Admins and other users with the tool have gotten in trouble for using it inappropriately. I never use it myself, as I find the rollback in Twinkle quite sufficient for my needs. Donald Albury 17:54, 13 January 2025 (UTC)
- (ec) I don't know about other languages. If you check the page I linked, you'll see that the script requires rollback rights. Schazjmd (talk) 17:55, 13 January 2025 (UTC)
- @Schazjmd Sorry, can your option reverse all edits of a user across different pages with the click of a button? I think you mean that massrollback can reverse all edits on a particular wiki page, not all edits of a disruptive user across multiple pages. Or am I wrong? Hulu2024 (talk) 04:23, 14 January 2025 (UTC)
- If you want this for the Persian Misplaced Pages, you should probably talk to Ladsgroup. WhatamIdoing (talk) 23:41, 14 January 2025 (UTC)
- @WhatamIdoing Thank you. Hulu2024 (talk) 07:11, 15 January 2025 (UTC)
== Problem for Translate page ==
Hello everyone. I don't know who is in charge of coding the Translate page on Misplaced Pages, but I wanted to send my message to the Misplaced Pages coders: in the Misplaced Pages translation system, the information boxes for individual persons (i.e. the personal biography box; see Template:Infobox person) are not automatically translated, and it is time-consuming for Misplaced Pages users to manually translate and change the links one by one from English to another language. Please, could the coders come up with a solution for translating the infobox templates? Thank you. Hulu2024 (talk) 17:32, 13 January 2025 (UTC)
- Hi Hulu2024, this also applies to the section above. If your proposal only applies to the English Misplaced Pages then it is probably best to post it at WP:VPT in the first instance. If it is only about the Persian Misplaced Pages then you may wish to try there. If it is more general then you could try Meta:, or, for more formal proposals, phabricator. Phil Bridger (talk) 18:51, 13 January 2025 (UTC)
- @Phil Bridger Thank you. Hulu2024 (talk) 19:21, 13 January 2025 (UTC)
== A discrimination policy ==
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- i quit, this will go nowhere, im extremely embarrassed and feel horrible, i dont think ill try again
Ani cases:
I would like to start this proposal by saying that this concept was a proposal in 2009, which failed for obvious reasons. But in this year, 2025, we need it, as this has happened a bunch. It is already covered under personal attacks, but I, and a couple of other Wikipedians, feel that it should be codified, as there is precedent for blocking users who discriminate. Here's a list of the things I want to include in this policy. Edit: This policy is intended to target blatant and admitted instances of discrimination. If the intent behind an action is ambiguous, users should continue to assume good faith until the intent is clear.
Just as being a member of a group does not give one special requirements to edit, it also does not endow any special privileges. One is not absolved of discrimination against a group just because one claims to be a member of that group.
What counts as discrimination
- Race
- Disability-will define this further
- Disease
- Gender (different from sex; neurological)
- Sex (different from gender; biological)
- Sexuality
- Religion
- Hobbies (e.g. furry, the most often harassed hobby)
- Relationship status
- Marital status
- (Idk how to word this but) lack of parental presence
- Political position (will be a hot topic)
- Discrimination based on anything I missed would be in there
A disability is an umbrella term, in my view:
you have mental and physical.
Examples for mental would be:
- schizophrenia
- autism
- ADHD
- PTSD
- mood disorders (depression, borderline personality disorder)
- dyslexia (or any learning disability)
examples of physical:
- paralysis
- Pretty much any physical injury
- I'm aware that this never really happens, but it's good to go over
A user may not claim without evidence that another user is affected by, or is, any of the above (idk how to term this).
A user may not claim that users with these disabilities/beliefs/races/genders shouldn’t edit Misplaced Pages.
- A user may not imply that another user is beneath them based on who that person is.
- Calling people "woke" simply because they are queer is discrimination.
Also I would like to propose a condition.
Overreacting to what you think is discrimination (e.g. accidental misgendering or wrong pronouns), where the user apologizes for it, is not grounds for an entry at ANI.
This should be used as a guideline.
See the Misplaced Pages article on discrimination. I would also like to say this would give us negative press coverage by right-wing media and I'll receive shit. But I don't care, I can deal with it. •Cyberwolf•talk? 16:37, 16 January 2025 (UTC)
"discrimination is defined as acts, practices, or policies that wrongfully impose a relative disadvantage or deprivation on persons based on their membership in a salient social group. This is a comparative definition. An individual need not be actually harmed in order to be discriminated against. He or she just needs to be treated worse than others for some arbitrary reason. If someone decides to donate to help orphan children, but decides to donate less, say, to children of a particular race out of a racist attitude, he or she will be acting in a discriminatory way even if he or she actually benefits the people he discriminates against by donating some money to them."
- This largely seems like behavior that already is sanctionable per WP:NPA and WP:UCOC (and the adoption of the latter drew complaints at the time that it in itself was already unnecessarily redundant with existing civility policy on en.wiki). What shortcomings do you see with those existing bodies of policy en force? signed, Rosguill 16:45, 16 January 2025 (UTC)
- The fact that punishments should be a little more severe for users who go after a whole group of editors, as it's not just a personal attack, it's an attack on a group. •Cyberwolf•talk? 16:57, 16 January 2025 (UTC)
- NPA violations are already routinely met with blocks and sitebans, often on sight without prior warning for the level of disparagement you're describing. Do you have any recent examples on hand of cases where the community's response was insufficiently severe? signed, Rosguill 17:07, 16 January 2025 (UTC)
- I'll grab some. My issue is that admins can unblock without community input; rather than an unblock from an admin, they should have to appeal to the community. •Cyberwolf•talk? 17:10, 16 January 2025 (UTC)
- Noting that I've now taken the time to read through the three cases listed at the top--two of them ended in NOTHERE blocks pretty quickly--I could see someone taking issue with the community's handling of RowanElder and Jwa05002, although it does seem that the discussion ultimately resulted in an indef block for one and an apparently sincere apology from the other. signed, Rosguill 17:13, 16 January 2025 (UTC)
- I think the real problem is that in order to block for any reason you have to take them to a place where random editors discuss whether they are a "net positive" or "net negative" to the wiki, which in principle would be a fair way to decide, but in reality is like the work of opening an RFC just in order to get someone to stop saying random racist stuff, and it's not worth it. Besides, remember the RSP discussion where the Daily Mail couldn't be agreed to be declared unreliable on transgender topics because "being 'gender critical' is a valid opinion" according to about half the people there? I've seen comments that were blatant bigoted insults beneath a thin veneer, that people did not take to ANI because it's just not worth the huge amount of effort. There really needs to be an easy way for administrators to warn (on first violation) and then block people who harass people in discriminatory ways without a huge and exhausting-for-the-complainer "discussion" about it -- and a very clear policy that says discrimination is not OK and is always "net negative" for the encyclopedia would reduce the complexity of that discussion, and I think is an important statement to make.
- By allowing it to be exhaustively debated whether thinly-veiled homophobic insults towards gay people warrant banning, Misplaced Pages is deliberately choosing not to take a stance on the topic. A stance needs to be taken, and it needs to be clear enough to allow rapid and decisive action that makes people actually afraid to discriminate against other editors, because they know that it isn't tolerated, rather than being reasonably confident their targets won't undergo another exhausting ANI discussion. Mrfoogles (talk) 17:04, 16 January 2025 (UTC)
- Said better than I could have said it; I agree wholeheartedly, it happens way too much. •Cyberwolf•talk? 17:18, 16 January 2025 (UTC)
- I agree that a blind eye shouldn't be turned against discrimination against groups of Misplaced Pages editors in general, but I don't see why we need a list that doesn't include social class but includes hobbies. The determining factor for deciding whether something is discrimination should be how much choice the individual has in the matter, which seems, in practice, to be the way WP:NPA is used. Phil Bridger (talk) 17:02, 16 January 2025 (UTC)
- I agree hobbies doesn't need to be included. Haven't seen a lot of discrimination based on social class? I think this needs to be taken to the Idea Lab. Mrfoogles (talk) 17:06, 16 January 2025 (UTC)
- Sorry, this was just me spitballing; I personally have been harassed over my hobbies. •Cyberwolf•talk? 17:07, 16 January 2025 (UTC)
- @cyberwolf Strong support in general (see above) but I strongly suggest you take this to the idea lab, because it's not written as a clear and exact proposal and it would probably benefit a lot from being developed into an RFC before taking it here. In the current format it probably can't pass because it doesn't make specific changes to policy. Mrfoogles (talk) 17:08, 16 January 2025 (UTC)
- Yeah, sorry, I'm new to this; I was told to go here to get the ball rolling. •Cyberwolf•talk? 17:11, 16 January 2025 (UTC)
- Wait...does this mean I won't be able to discriminate against people whose hobby is editing Misplaced Pages? Where's the fun in that? Anonymous 17:09, 16 January 2025 (UTC)
- I guess not :3 •Cyberwolf•talk? 17:13, 16 January 2025 (UTC)
- In general, I fail to see the problem this is solving. The UCoC and other policies/guidelines/essays (such as WP:NPA, WP:FOC, and others) already prohibit discriminatory behavior. And normal conduct processes already have the ability to lay down the strictest punishment theoretically possible - an indefinite ban - for anyone who engages in such behavior.
- I do not like the idea of what amounts to bureaucracy for bureaucracy’s sake. That is the best way I can put it. At worst, this is virtue signaling - it’s waving a flag saying “hey, public and editors, Misplaced Pages cares about discrimination so much we made a specific policy about it” - without even saying the next part “but our existing policies already get people who discriminate against other editors banned, so this was not necessary and a waste of time”. I’ll happily admit I’m proven wrong if someone can show evidence of a case where actual discrimination was not acted upon because people were “concerned” it wasn’t violating one of those other policies. -bɜ:ʳkənhɪmez | me | talk to me! 20:56, 16 January 2025 (UTC)
- To clarify, all the comments about "why is this included" or "why is this not included" are part of the reason I'm against a specific policy like this. Any disruption can be handled by normal processes, and a specific policy will lead to wikilawyering over what is or is not discrimination. There is no need to try to define/specifically treat discrimination when all discriminatory behaviors are adequately covered by other policies already. -bɜ:ʳkənhɪmez | me | talk to me! 22:27, 16 January 2025 (UTC)
- We should be relating to other editors in a kind way. But this proposal appears to make the editing environment more hostile, with more blocking on the opinion of one person. We do discriminate against those that use Misplaced Pages for wrong purposes, such as vandalism or advertising. Pushing a particular point of view is more of a grey area. The proposal by cyberwolf is partly a point of view that many others would disagree with. So we should concentrate policies on how a user relates to other editors, rather than their motivations or opinions. Graeme Bartlett (talk) 20:50, 16 January 2025 (UTC)
- I think this is valuable by setting a redline for a certain sort of personal attack and saying, "this is a line nobody is permitted to cross while participating in this project." Simonm223 (talk) 20:57, 16 January 2025 (UTC)
- It is not possible for the content of a discussion to be "discriminatory". Discrimination is action, not speech. This proposal looks like an attempt to limit discourse to a certain point of view. That's not a good idea. --Trovatore (talk) 21:13, 16 January 2025 (UTC)
- Discrimination can very much be speech. Akechi The Agent Of Chaos (talk) 00:36, 17 January 2025 (UTC)
- Nope. --Trovatore (talk) 00:44, 17 January 2025 (UTC)
- Cambridge says that discrimination is: "treating a person or particular group of people differently, especially in a worse way from the way in which you treat other people, because of their race, gender, sexuality, etc".
- So yes, that includes speech because you can treat people differently in speech. Speech is an act. TarnishedPath 01:04, 17 January 2025 (UTC)
- OK, look, I'll concede part of the point here. Yes, if I'm a dick to (name of group) but not to (name of other group), I suppose that is discrimination, but I don't think a discrimination policy is a particularly useful tool for this, because what I should do is not be a dick to anybody.
- What I'm concerned about is that the policy would be used to assert that certain content is discriminatory. Say someone says, here's a reliable source that says biological sex is real and has important social consequences, and someone else says, you can't bring that up, it's discriminatory. Well, no, that's a category error. That sort of thing can't be discriminatory. --Trovatore (talk) 01:29, 17 January 2025 (UTC)
- just drop it •Cyberwolf•talk? 01:23, 17 January 2025 (UTC)
- I would remove anything to do with political position. Those on the far-right should be discriminated against. TarnishedPath 21:45, 16 January 2025 (UTC)
- The examples you use show that we've been dealing effectively without this additional set of guidelines; it would be more convincing that something was needed if you had examples where the lack of this policy caused bad outcomes. And I can see it being used as a hammer; while we're probably picturing "as a White man, I'm sure that I understand chemistry better than any of you lesser types" as what we're going after, I can see some folks trying to wield it against "as a Comanche raised on the Comanche nation, I think I have some insights on the Comanche language that others here are overlooking." As such, I'm cautious. -- Nat Gertler (talk) 21:49, 16 January 2025 (UTC)
- Comment. I am sorry that caste discrimination is being ignored here. Xxanthippe (talk) 21:54, 16 January 2025 (UTC).
- Not needed. Everything the proposal is talking about would constitute disruptive behavior, and we can block or ban someone for being disruptive already. No need to break disruption down into its component parts, and write rules for each. Blueboar (talk) 22:07, 16 January 2025 (UTC)
References
- Professor Dave Explains (2022-06-06). Let’s All Get Past This Confusion About Trans People. Retrieved 2025-01-15 – via YouTube.
- Altinay, Murat; Anand, Amit (2020-08-01). "Neuroimaging gender dysphoria: a novel psychobiological model". Brain Imaging and Behavior. 14 (4): 1281–1297. doi:10.1007/s11682-019-00121-8. ISSN 1931-7565.
== Repeated false retirement ==
There is a user (who shall remain unnamed) who has "retired" twice and had the template removed from their page by other users because they were clearly still editing. They are now on their third "retirement", yet they last edited a few days ago. I don't see any policy formally prohibiting such behavior, but it seems extremely unhelpful for obvious reasons. Anonymous 17:13, 16 January 2025 (UTC)
- Unless the material is harmful to Misplaced Pages or other users, users have considerable leeway in what they may post on their user page. Personally, I always take "retirement" notices with a grain of salt. If a user wants to claim they are retired even though they are still actively editing, I don't see the harm to anything but their credibility. If I want to know if an editor is currently active, I look at their contributions, not at notices on their user or talk page. Donald Albury 22:07, 16 January 2025 (UTC)
I can't imagine that this calls for a policy. You're allowed to be annoyed if you want. No one can take that away from you. But I'm missing an explanation of why the rest of us should care. --Trovatore (talk) 22:13, 16 January 2025 (UTC)
- This seems a little prickly, my friend. Clearly, the other two users who removed older retirement notices cared. At the end of the day, it's definitely not the most major thing, but it is helpful to have a reliable and simple indication as to whether or not a user can be expected to respond to any kind of communication or feedback. I'm not going to die on this hill. Cheers. Anonymous 22:41, 16 January 2025 (UTC)