‘But there is a difference between recognising AI use and proving its use. So I tried an experiment. … I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.’

Article archived: https://web.archive.org/web/20251125225915/https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01

[–] RampantParanoia2365@lemmy.world 2 points 1 day ago* (last edited 1 day ago) (3 children)

I've been using it for a personal project, and it's been wonderful.

It hasn't written a word for me. But it's been really damn helpful as a research assistant. I can have it pull up lists of unexplained events by location, or historical details about specific things, in about 5 seconds.

And it's great for quickly providing editing advice: where to punch up the language, what I can cut, how to communicate more clearly. And I can do that without begging a person for days to read it.

Is it always perfect? Not at all, but it definitely helps overall, when you make it clear to be honest and not sugar-coat things. It's mostly mediocre for creative advice, but good for technical advice.

It's a tool, and it can be used correctly, or it can be used to cheat.

[–] Hoimo@ani.social 6 points 1 day ago (1 children)

Do you then check those historical details against trusted sources? If so, how often do they need correction?

[–] RampantParanoia2365@lemmy.world 1 points 18 hours ago* (last edited 17 hours ago)

I do, but it's for fiction, so if some things slip through the cracks, I think it'll be ok. It gets a few things wrong or confused maybe 10% of the time.

Honestly though, I'm finding ChatGPT way smarter than the Google AI. Google's is a fucking moron. GPT is like Joe from Idiocracy with his coffee.

[–] pumpkin_spice@lemmy.today 5 points 1 day ago (2 children)

when you make it clear to be honest

It has no idea what honesty is. It has no idea what bias is.

It is fancy auto-complete. And it's wrong so often (like 40% of the time) that it should not be used to seek out factual information that the prompter doesn't already know.

[–] RampantParanoia2365@lemmy.world 1 points 17 hours ago* (last edited 17 hours ago)

Well, no, it has no opinions. But it can compare my writing to other work and determine what patterns are considered good or bad, and that's not too different from a designer putting together a board of trends--or being Quentin Tarantino. It can tell by comparison if something is clearly communicated or clunky or funny, and then you can either listen or ignore it.

And it can either treat me like a 12-year-old with Down Syndrome, or like a creative who can take criticism.

I'm 100% certain it would be completely useless to Stephen King, but I'm new to this.

[–] definitemaybe@lemmy.ca 4 points 1 day ago (1 children)

it should not be used to seek out factual information that the prompter doesn't already know.

Eh... Depends on the importance and purpose of the information.

If you're just trying to generate ideas for fiction from historical precedents, it doesn't matter if it's accurate. Or if you're using it as a starting point, then following the links to check the original source (like I do all the time for Linux terminal commands).

Hell, I often use Linux terminal commands from Google's search results AI box. I know enough to parse what the AI is suggesting (and to spot when the proposed commands don't make sense), and enough to undo what I'm doing if it doesn't work. Saves a lot of time.

Copilot fixed some SQL syntax issues I had yesterday, too. 100% accuracy on that, despite it being a massive query with about a dozen nested subqueries. (Granted, I gave a very detailed prompt...) But, again, this was low stakes--who cares if a SELECT query fails to execute?
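
Not that this one needed it, but if you want a cheap sanity check on whatever SQL an AI hands you before you trust it, something like this works. Rough sketch only: the table and query below are made up, not the actual ones. It compiles the statement against a throwaway in-memory SQLite database, which surfaces syntax and column-name mistakes without touching real data.

```python
import sqlite3

# Hypothetical stand-ins -- not the real query or schema.
schema = "CREATE TABLE orders (customer TEXT, amount REAL)"
query = """
SELECT customer, total
FROM (SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer)
WHERE total > 100
"""

# Compile the query against a throwaway in-memory database.
# EXPLAIN prepares the statement, so syntax and name errors surface,
# without actually running it against any real data.
conn = sqlite3.connect(":memory:")
conn.execute(schema)
try:
    conn.execute("EXPLAIN " + query)
    print("query compiles fine")
except sqlite3.OperationalError as err:
    print("problem with the query:", err)
```

Dialect differences matter, of course; SQLite won't swallow every bit of, say, Postgres syntax, but for a plain SELECT it's usually close enough to catch the obvious stuff.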

[–] RampantParanoia2365@lemmy.world 2 points 17 hours ago* (last edited 17 hours ago)

Exactly, and yes, it's for historical fiction, not a history book. So if it gets some details wrong about Spanish colonial forts, or Queen Anne's royal court and the role my witches may have had in it, I think I'll survive.

But if I had to do all this research, or seek out editing help myself, I'd be on page 5, instead of 67, and my story wouldn't be anywhere near as tight as it is.

[–] mo_lave@reddthat.com -1 points 1 day ago

And the issue is that, to the people who call for a Butlerian Jihad, we are part of the problem.