this post was submitted on 10 Jan 2026
41 points (100.0% liked)

Ask Lemmy


Is it possible to understand this somehow, for example, with the help of drafts? Or should a person post a draft and edit it in front of the moderators?

all 29 comments
[–] resipsaloquitur@lemmy.world 8 points 1 day ago (1 children)

Ask the author if they’d prefer

A. A fluffy bunny

B. A letter from their sweetie

C. A properly-formatted data file.

[–] deadymouse@lemmy.world 1 points 12 hours ago

AIs are very good at pretending to be people, so I think this method will stop working over time.

[–] Luccus@feddit.org 15 points 1 day ago* (last edited 1 day ago) (1 children)

TLDR: The result of current LLMs will be very bandlimited and one-directional.

I hope that means something to you, because otherwise I'm going to try to explain this very specific thing, and I'm afraid I might not be able to express it in very understandable terms (sorry):

Firstly, one-directionality: when a human wants to write a story, we usually think about the plot twist beforehand and then pave the way by hinting at the upcoming twist without giving too much away. It's just nice when a first-time reader is surprised, but on a second read struggles to see how they missed all the obvious clues.

This process requires a lot of back-and-forth while writing. Humans do this naturally. LLMs and other transformer networks have a huge problem with this. I often hear LLMs referred to as text prediction machines. This is not entirely accurate, but similar enough. And to keep with this analogy: text prediction doesn't really work backwards to suggest a better start to the sentence, does it? LLMs tend to take a path from start to finish, even in great detail, but that's it. There's no setup. It's very flat writing.
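To make the one-directional part concrete, here's a toy sketch. The bigram table and words are made up for illustration, and real LLMs are vastly more complex, but the forward-only flow is the same:

```python
import random

# Toy "text prediction": a bigram table mapping a word to possible next words.
# Hypothetical illustration only, not how a real LLM works internally.
BIGRAMS = {
    "the": ["hero", "twist"],
    "hero": ["wins"],
    "twist": ["surprises"],
    "wins": ["<end>"],
    "surprises": ["<end>"],
}

def generate(start, rng=random.Random(0)):
    words = [start]
    while words[-1] in BIGRAMS:
        nxt = rng.choice(BIGRAMS[words[-1]])
        if nxt == "<end>":
            break
        words.append(nxt)  # only ever appends; never revises earlier words
    return " ".join(words)

print(generate("the"))
```

Notice the generator can only append: it can never go back and rewrite the opening to set up the ending, which is exactly the missing back-and-forth.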

Secondly, bandlimiting: over time, LLMs tend to mush different characterizations and continuity into a smooth paste, leaving little grit to it. I really struggle not to say the word derivative (like in math). But LLMs just write average characters who do average things in an average way. And then they spell out how everything was totally unpredictable, important and meaningful, while using superficially eloquent language. Nothing just *is*; everything *serves as*. It's a poor writing style that often misses the appropriate tone while trying to sound sophisticated.
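One way to see this averaging effect in miniature: sampling from a softmax at low temperature collapses toward the single most likely choice. The scores and words below are invented for illustration:

```python
import math

# Hedged sketch: a softmax over made-up word scores, at two temperatures.
# Low temperature piles nearly all probability onto the top ("most average") choice.
def softmax(scores, temperature):
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

words = ["profound", "weird", "gritty", "mundane"]
scores = [2.0, 1.0, 0.5, 0.2]  # hypothetical model preferences

for t in (1.0, 0.2):
    probs = softmax(scores, t)
    print(t, {w: round(p, 3) for w, p in zip(words, probs)})
```

At temperature 1.0 the distribution still has some spread; at 0.2 nearly all the probability lands on the top word, which is roughly why conservative sampling settings produce such average prose.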

I should point out that there were tons of mediocre writers who wrote just like that before the advent of LLMs. You're describing good writing, but there's nothing unique to LLMs about its absence.

[–] BlameTheAntifa@lemmy.world 13 points 1 day ago (1 children)

You can’t. The best indicators of human authorship at this point are mistakes like misspellings and poor grammar. The common advice — like looking for em-dashes — was garbage to begin with and has only become worse as LLMs evolve.

[–] Nioxic@lemmy.dbzer0.com 1 points 1 day ago

Some people do use dashes though, and when I do, some word processors automatically alter them to em dashes.

Also, it's mostly ChatGPT as far as I've noticed.

I haven't seen them all over in Mistral's output.

[–] Lembot_0006@programming.dev 15 points 1 day ago (2 children)

It isn't possible in the general case. There are bots that check text for specific wordage rarely used by humans, but those bots are unreliable, especially if the text was written by a non-native speaker.

[–] Weirdfish@lemmy.world 15 points 1 day ago

So specific uncommon words like "wordage"

AI bot! We have an AI bot over here!

[–] canihasaccount@lemmy.world 1 points 1 day ago

Recently, a company called Pangram appears to have finally made a breakthrough in this. Some studies by unaffiliated faculty (e.g., at U Chicago) have replicated its claimed false positive and false negative rates. Anecdotally, it's the only AI detector I've ever run my papers through that hasn't said my papers are written by AI.

[–] henfredemars 13 points 1 day ago* (last edited 1 day ago)

If you’re concerned about detecting AI for an assignment or a competition, I suggest something that tracks changes, like a shared Google doc. It shows you how the document was written over time, which requires a lot more effort to fake.

But in general, no, you cannot reliably detect whether text was produced by AI or a human. There are some signs, but it would be difficult to prove.

EDIT: voice typing errors.

[–] Bahnd@lemmy.world 6 points 1 day ago (2 children)

I dont, I just assume you are all robots. Nice robots, who usually say smart or nice things, but unless I can drive to a place an punch you without breaking my hand... Best to just assume yall are to avoid the disappointment later.

As for proving to the rest of the robots that im not a robot... For Lemmy, I completely turn off spellcheck and amy grammar assistance. The frequency of errors, poorly constructed sentences and bad use of frequent AI tells is hopefully enough to validate that im beep boop not a robot.

[–] Apytele@sh.itjust.works 6 points 1 day ago (1 children)

Completely unrelated but "beep boop!" is what I say to patients (along with appropriate gestures) to request that they show me their wristband so that I can scan it to verify dosage etc. when administering medications.

[–] Bahnd@lemmy.world 2 points 1 day ago

Hm... A likely story, a very human thing a human medical professional would say. Carry on beep poop.

[–] spittingimage@lemmy.world 2 points 1 day ago (1 children)

unless I can drive to a place an punch you without breaking my hand…

I feel like there's steps you could take before that one. Maybe jiggle my squishy belly for a second or two.

[–] CombatWombatEsq@lemmy.world 7 points 1 day ago

Everyone in this thread saying it’s impossible is basically correct, but that doesn’t mean you won’t sometimes have to do your best to identify AI-generated text anyway.

If you’re focused on identifying whether a given text is AI-generated, here’s how Wikipedia does it: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

If you’re focused on proving something you’ve written was written without AI assistance, I think your best bet is a screencast of the writing process.

[–] droning_in_my_ears@lemmy.world 6 points 1 day ago (1 children)

Use an AI enough and you won't be able to stop noticing the telltale signs

[–] FinjaminPoach@lemmy.world 1 points 23 hours ago

True, but I still expect this will be 'patched' in 10 years or less, and I'm worried about what will happen to literature by then.

[–] Zwuzelmaus@feddit.org 3 points 1 day ago

I recognize it when it starts to get weird.

When a text has several of these signs (funny introduction phrases, logical mistakes/knots/holes, unnecessary redundancy, unnecessary jumps, a funny summary towards the ending), it's highly suspicious, and I probably stop wasting my time on it.

[–] clean_anion@programming.dev 4 points 1 day ago

There are some generic observations you can use to identify whether a story was AI generated or written by a human. However, there are no definitive criteria for identifying AI generated text except for text directed at the LLM user such as "certainly, here is a story that fits your criteria," or "as a large language model, I cannot..."

There are some signs that can be used to identify AI-generated text, though they might not always be accurate. For instance, there is the observation that AI tends to be superficial: it often puts undue emphasis on emotions that most humans would not focus on, and it tends to be somewhat more ambiguous and abstract than human writing.

A large language model often uses poetic language instead of factual (e.g., saying that something insignificant has "profound beauty"). It tends to focus too much on the overarching themes in the background even when not required (e.g., "this highlights the significance of xyz in revolutionizing the field of ...").

There are some grammatical traits that can be used to identify AI but they are even more ambiguous than judging the quality of the content, especially because someone might not be a native English speaker or they might be a native speaker whose natural grammar sounds like AI.

The only good methods of judging whether text was AI generated are judging the quality of the content (which one should do regardless of whether they want to use content quality to identify AI generated text) and looking for text directed at the AI user.
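That last check, looking for text directed at the AI user, is simple enough to sketch. The phrase list below is illustrative, not exhaustive:

```python
import re

# Phrases addressed to the LLM user that sometimes leak into pasted output.
# Illustrative list: a hit is strong evidence, but a miss proves nothing.
ASSISTANT_PHRASES = [
    r"certainly,? here is",
    r"as a large language model",
    r"i hope this helps",
]

def obvious_llm_leakage(text):
    """Return the assistant-style phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [p for p in ASSISTANT_PHRASES if re.search(p, lowered)]

print(obvious_llm_leakage("Certainly, here is a story that fits your criteria."))
```

A hit means someone pasted raw chatbot output; an empty result tells you nothing either way, which is why judging the quality of the content remains the main method.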

[–] DeathByBigSad@sh.itjust.works 3 points 1 day ago (1 children)

A Faraday-caged room, IRL in-person supervision, test takers required to be fully nude.

Side note: We need a chess championship game where Hans Niemann plays nude and livestreamed to the world. 😏

[–] droning_in_my_ears@lemmy.world 2 points 1 day ago (1 children)

We need a chess championship game where Hans Niemann plays nude and livestreamed to the world. 😏

lol

What happened to that? I heard he was filing a lawsuit

[–] DeathByBigSad@sh.itjust.works 2 points 1 day ago (1 children)

The defamation lawsuit was dismissed

I think he's still allowed to play in championships because there was no evidence that he cheated in tournaments.

He must have very strong butt muscles

[–] higgsboson@piefed.social 3 points 1 day ago (1 children)

I am only a dabbler, but every time I see a new "LLM-detection" tool, I have been able to defeat them with simple changes to parameters and/or prompts.

It is a whack-a-mole game where detection will lag evasion. For situations where authorship really matters, I suspect low-tech countermeasures like proctored exams are going to be necessary.

[–] AmidFuror@fedia.io 1 points 1 day ago

Exactly. You could preface your prompt with something like this:

In answering the following questions, don't use bullet points or emdashes, and be very concise.

And then you just paste the text from the post title and subsequent body below that. Then you make a top-level comment with ChatGPT's output answering the poster's questions.

Somewhere else in the comment section you could reveal your methodology.

[–] Randomgal@lemmy.ca 2 points 1 day ago

They'd have to write it in front of you. All the qualities people are describing here are just good writing, or something you can change by simply telling the LLM 'don't write like an LLM'.

[–] AmidFuror@fedia.io 2 points 1 day ago

There is no reliable way to tell whether a story was written by a human or an AI based on the final text alone. Current detection tools are inconsistent and can be wrong in both directions.

Drafts can offer some context, but they are not proof, since AI-generated text can also be produced in stages or edited to look human. Posting a draft and editing it live may demonstrate effort, but it still cannot conclusively prove authorship.

Because of this, most platforms focus on disclosure and rules about acceptable use rather than trying to definitively verify whether a text was written by a human or an AI.