this post was submitted on 23 Mar 2026
360 points (99.5% liked)

Technology

top 49 comments
[–] Mwa@thelemmy.club 9 points 4 hours ago

W Wikipedia. It would be better to remove the exceptions, but it's fine tbh.

[–] yucandu@lemmy.world 20 points 6 hours ago (1 children)

Banned the people who openly admit it, anyway.

[–] aliser@lemmy.world 7 points 4 hours ago

There are AI detectors, although I'm not sure about their accuracy.

[–] infeeeee@lemmy.zip 260 points 10 hours ago (6 children)

Saved you a click:

After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.

First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing assistance tool. The policy says, “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”

The second exemption for LLMs is with translation assistance. Editors can use AI tools for the first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with regular writing refinements, anyone using LLMs also has to check that incorrect information hasn’t been injected.

[–] RIotingPacifist@lemmy.world 162 points 9 hours ago (3 children)

AIbros: we're creating God!!!

AI users: it can do translation & reformatting pretty well but you've got to check it's not chatting shit

[–] halcyoncmdr@piefed.social 49 points 8 hours ago (1 children)

The takeaway from all LLM-based AI is the user needs to be smart enough to do whatever they're asking anyway. All output needs to be verified before being used or relied upon.

The "AI" is just streamlining the process to save time.

Relying on it otherwise is stupid and just proves instantly that you are incompetent.

[–] Zagorath@quokk.au 1 points 3 hours ago (1 children)

the user needs to be smart enough to do whatever they're asking anyway

I'm gonna say that's ideal but not quite necessary. What's needed is that the user is capable of properly verifying the output. Which anyone who could do it themselves definitely can, but it can be done more broadly. It's an easier skill to verify a result than it is to obtain that result. Think: how film critics don't necessarily need to be filmmakers, or the P=NP question in computer science.

[–] Pyro@programming.dev 3 points 2 hours ago (2 children)

But if the output has issues, what're you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI's mistakes yourself.

[–] Zagorath@quokk.au 3 points 1 hour ago (1 children)

At the risk of sounding like an overly obsequious AI… You know what, you're completely right. I'm honestly not sure what use case I was imagining when I wrote that last comment.

[–] Redjard@reddthat.com 3 points 1 hour ago

Making text flow naturally, grouping and ordering information, good writing.

You can verify two texts have the same facts and information, yet one reads way better than the other. But writing a text that reads well is quite hard.

[–] Redjard@reddthat.com 1 points 1 hour ago

If you don't have the ability, then you would do what you would have done 5 years ago: not do it.
Either submit without it, or don't submit at all.

[–] youcantreadthis@quokk.au 6 points 7 hours ago (1 children)

Fucking hate those anti human filth pushing slop into everything. I want to take one apart with power tools.

[–] Paranoidfactoid@lemmy.world 11 points 6 hours ago (1 children)
[–] Scrollone@feddit.it 4 points 4 hours ago (1 children)

Damn that movie was funny. I need to rewatch it.

[–] onlyhalfminotaur@lemmy.world 3 points 4 hours ago

It holds up better than any movie from the late 90s that I can think of.

[–] XLE@piefed.social 2 points 6 hours ago

I don't think AI users would say it does reformatting either (if they're honest): if you tell a chatbot to reformat text without changing it, it will change the text, because it does not understand the concept of not changing text. Getting burned once should be enough for someone to learn that lesson.

Seems pretty reasonable to use it as a grammar checker. As long as it's not changing content, just form or readability, that seems like a pretty decent use for it, at least with a purely educational resource like Wikipedia.
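If you wanted to automate this kind of check rather than eyeball it, here's a minimal sketch (plain Python, no LLM involved; the function names are my own invention) of verifying that a "reformat only" pass didn't silently swap any words. It compares the two texts after stripping punctuation, casing, and whitespace, so it catches substitutions and insertions, though not subtler meaning changes that reuse the same words.

```python
import re
from collections import Counter

def content_words(text: str) -> Counter:
    """Lowercase the text and count its words, ignoring punctuation
    and whitespace, so pure reformatting leaves the result unchanged."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def reformat_preserved_words(original: str, reformatted: str) -> bool:
    """True if the reformatted text uses exactly the same words
    (with the same counts) as the original."""
    return content_words(original) == content_words(reformatted)

original = "The policy,   in effect since  today, bans LLM-generated articles."
reformatted = "The policy, in effect since today, bans LLM-generated articles."
rewritten = "The policy, effective today, forbids LLM-written articles."

print(reformat_preserved_words(original, reformatted))  # True: whitespace only
print(reformat_preserved_words(original, rewritten))    # False: words changed
```

A word-count comparison like this is deliberately strict: any rewording at all fails the check, which is exactly what you want when the instruction was "don't change the text".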

[–] daychilde@lemmy.world 16 points 9 hours ago

Liar. I already read the article before opening the comments. YOU SAVED ME NOTHING.

;-)

[–] ji59@hilariouschaos.com 20 points 9 hours ago

So, it should be used reasonably, as it should have always been.

[–] errer@lemmy.world 6 points 7 hours ago (3 children)

Wikipedia probably wants to sell access to LLMs to train. It’s only valuable if Wikipedia remains a high-quality, slop-free source.

I think even AI zealots think there should be silos of content to train from that are fully human generated. Training slop on slop makes the slop even worse.

[–] Grimy@lemmy.world 14 points 6 hours ago (1 children)

Sell licenses of what? It's already all in the creative commons iirc.

[–] Zagorath@quokk.au 2 points 3 hours ago

The content is CC licensed, but they are trying to block AI scraping because it overloads their servers. They have a paid API that uses a lot less compute for both Wikipedia and the AI companies, as well as being a revenue source for Wikipedia.
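For context, even the free MediaWiki Action API lets a client fetch article text far more cheaply than scraping rendered HTML; the paid Wikimedia Enterprise product layers bulk feeds and guarantees on top of that. A minimal sketch of building such a request with only the standard library (the endpoint and the `extracts` parameters are the standard public ones, but verify the exact fields against the API docs before relying on them):

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def build_extract_url(title: str) -> str:
    """Build an Action API query for a page's plain-text intro extract.
    explaintext strips markup server-side; exintro limits to the lead."""
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,
        "exintro": 1,
        "titles": title,
        "format": "json",
    }
    return f"{API}?{urlencode(params)}"

url = build_extract_url("Wikipedia")
print(url)
```

A single request like this returns pre-rendered plain text, instead of forcing Wikipedia's servers to render a full HTML page per article the way a scraper does.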

[–] SuspciousCarrot78@lemmy.world 11 points 7 hours ago

AI already trains on Wikipedia.

https://commoncrawl.org/

[–] MountingSuspicion@reddthat.com 7 points 6 hours ago

This was only done because the editors pushed to minimize AI involvement. There's a comment here already mentioning that: https://lemmy.world/comment/22826863

[–] FauxPseudo@lemmy.world 0 points 5 hours ago (1 children)

Seems like there should be a third exception. For those occasions where the article is about LLM generated text. They should be able to quote it when it's appropriate for an article.

[–] Zagorath@quokk.au 2 points 3 hours ago

That is a reasonable exception to no-AI policies in research papers and newspaper articles, but not for Wikipedia. As a tertiary source, Wikipedia has a strict "no original research" policy. Using AI to provide examples of AI output would be original research, and should not be done.

Quoting AI output shared in primary and secondary sources should be allowed for that reason, though.

[–] SpaceNoodle@lemmy.world 62 points 8 hours ago* (last edited 8 hours ago) (1 children)

An extremely measured and level-headed response. Kudos to Wikipedia for maintaining high standards.

[–] kazerniel@lemmy.world 87 points 8 hours ago (1 children)

It has to be said, they originally changed their stance due to the considerable editor pushback when they tried to introduce LLM summaries on the top of articles. So kudos to the editor community's resistance! ✊

[–] SpaceNoodle@lemmy.world 31 points 8 hours ago* (last edited 8 hours ago)

Good point. The real strength of Wikipedia truly lies in the editors.

[–] SunlessGameStudios@lemmy.world 36 points 9 hours ago* (last edited 9 hours ago)

I know at least one writing major who won an award from his volunteer work at Wikipedia. He did it as a hobby. They don't really need AI, they need people like him.

[–] hperrin@lemmy.ca 3 points 5 hours ago (1 children)

Good news. Hopefully they’ll get rid of those two exceptions in the future.

[–] JohnEdwa@sopuli.xyz 8 points 4 hours ago* (last edited 4 hours ago) (1 children)

It would be pretty shitty to have to disable any AI-based grammar/spellcheckers (e.g. Grammarly) every time you edit Wikipedia, and to not be allowed to use translation tools.

Because those are the two exceptions.

[–] hperrin@lemmy.ca 3 points 4 hours ago

Why? That’s how they’ve been doing it for 25 years.

[–] davidgro@lemmy.world 6 points 8 hours ago* (last edited 8 hours ago) (1 children)

I hoped the exceptions would be like "Quoted example text of LLM output, when it's clearly labeled and styled separately from the article text."

[–] baltakatei@sopuli.xyz 3 points 5 hours ago (1 children)

That exception probably would be twisted into permission to add an “AI summary” section to each article.

[–] davidgro@lemmy.world 2 points 4 hours ago

Ugh. Yeah, it would have to be worded carefully, you're right

[–] phoenixz@lemmy.ca 5 points 8 hours ago (1 children)

So in other words, when used responsibly as a tool with limitations, AI has its uses? Though very environmentally unfriendly uses?

[–] Slashme@lemmy.world 0 points 5 hours ago
[–] webp@mander.xyz 7 points 9 hours ago (2 children)

Why do they need AI at all? Wikipedia had existed long before it and was doing fine.

[–] AmbitiousProcess@piefed.social 21 points 9 hours ago

You could make that argument about any tool Wikipedia editors use. Why should they need spellcheck? They were typing words just fine before.

...except it just makes it easier to spot errors or get little suggestions on how you could reword something, and thus makes the whole process a little smoother.

It's not strictly necessary, but this could definitely be helpful to people for translation and proofreading. Doesn't have to be something people are wholly reliant on to still be beneficial to their ability to edit Wikipedia.

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 6 points 8 hours ago (2 children)

Why should we use (insert tool) when we did just fine before?

Because when used correctly it can be great for helping you be more productive, and for finding errors and making improvements. The two exceptions are for grammar, which AI does a surprisingly good job with, and translation. Would you have gotten mad if they used Grammarly 5+ years ago? Having it rewrite an entire article is gonna be a bad idea, but asking it to rephrase a sentence, or to check your phrasing for potential issues, is a much safer thing. Not everyone who speaks Spanish uses it the same way. Some words are innocuous in some regions, but offensive in others.

[–] REDACTED 3 points 8 hours ago (1 children)
[–] webp@mander.xyz 2 points 7 hours ago (1 children)

Try using fire in a library.

[–] Luminous5481@anarchist.nexus -1 points 6 hours ago (1 children)

wikipedia isn't a library.

[–] webp@mander.xyz 1 points 5 hours ago (1 children)
[–] Luminous5481@anarchist.nexus -1 points 5 hours ago

You're the one that implied it was.

[–] webp@mander.xyz 1 points 8 hours ago (2 children)

Call me mad, call me crazy. AI shouldn't be altering databases of knowledge, especially when it is so inconsistent. If there is a question about whether certain words are appropriate, why can't you ask another human being? They have forums for a reason, or someone else will come along and fix it. Or look at a dictionary. The amount of energy spent for dubious information, holy. It's not like there is a shortage of human beings on earth.

[–] Qwel@sopuli.xyz 3 points 5 hours ago* (last edited 5 hours ago)

https://en.wikipedia.org/wiki/Wikipedia:Writing_articles_with_large_language_models

https://en.wikipedia.org/wiki/Wikipedia:LLM-assisted_translation

The two related "policies" are rather short; you should read them if you haven't.

AI shouldn’t be altering databases of knowledge, especially when it is so inconsistent

The policy only allows usage as an auto-translator (a task at which LLMs are no worse than the old-style auto-translators that were always allowed) and as a spellcheck/grammar check (where they are also no worse than other allowed options).

None of those tools were previously seen as altering Wikipedia by themselves. The goal is that LLMs should be used and considered like they were.

To be clear, there have always been Articles for Creation submissions of clearly Google-translated text, and they have always been dismissed as slop. To get an auto-translated article accepted, you need to clean it up until all the information is correct and the grammar is good enough. This is a rather standard workflow for translations. The same thing should apply to LLMs.

The new issue here is that LLMs can "organically" change information when asked to translate. When a classic auto-translator changes the information, it often (not always) leaves a notable mess in the grammar. LLMs will insert their errors much more cleanly. This is acknowledged by both texts, and, well, the texts will change if that becomes a recurring issue.

AI isn’t altering databases or knowledge. AI is telling the writer there’s a better way to do this, and the writer has to explicitly change their wording.

You only know to look at a dictionary for alternative wordings if you know there’s a problem. How do you know there’s a problem?

If you ask someone else, what if that someone else uses your regional dialect and not the one that has problems? Your average writer can't review every single word in the dictionary for every single article they edit. But AI can, and that's something it's actually good at. You may only know 5 Spanish speakers, but AI knows everything it was trained on.