this post was submitted on 25 Nov 2025
582 points (98.0% liked)

Fuck AI

4645 readers
1580 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
MODERATORS
 

‘But there is a difference between recognising AI use and proving its use. So I tried an experiment. … I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.‘

Article archived: https://web.archive.org/web/20251125225915/https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01

top 50 comments
sorted by: hot top controversial new old
[–] vzqq@lemmy.blahaj.zone 1 points 37 minutes ago

You just know there is a guy in class who just wrote the essay from Marxist perspective because he does everything from a Marxist perspective.

Back in 2001 when I went to college they gave us a warning not to plagiarize reports and assignments. Saying they had sophisticated tools at their disposal and sites like cheat.com

Fun fact: at that time (and maybe still now since I checked a while ago) cheat.com was a porn site. So they were full of shit.

But AI detection is being really, really scary since the amount of false positives are staggering.

[–] Aneb@lemmy.world 2 points 2 hours ago

I'm just wondering who and how to contact the dilf in the photo

[–] hark@lemmy.world 21 points 15 hours ago

But I am a historian, so I will close on a historian’s note: History shows us that the right to literacy came at a heavy cost for many Americans, ranging from ostracism to death. Those in power recognised that oppression is best maintained by keeping the masses illiterate, and those oppressed recognised that literacy is liberation.

It's scary how much damage is being done to education, not just from AI but also the persistent attacks on public education in the US over decades, hampering the system with things like No Child Left Behind and diverting funds to private schools with vouchers in the name of "school choice". On top of that there are suggestions that teachers aren't even needed and that students could be taught with AI. It's grim.

[–] Draegur@lemmy.zip 23 points 17 hours ago (2 children)

I heard of something brilliant though: The teacher TELLS the students to have the AI generate an essay on a subject AND THEN the students have to go through the paper and point out all the shit it got WRONG :D

[–] Doctorbllk@slrpnk.net 9 points 15 hours ago

This is discussed in the article

load more comments (1 replies)
[–] Fmstrat@lemmy.world 15 points 18 hours ago

Interesting post, would be good to support the author/publisher with a source link. Especially since it isn't pay walled.

https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01

[–] korazail@lemmy.myserv.one 40 points 1 day ago* (last edited 23 hours ago) (1 children)

From later in the article:

Students are afraid to fail, and AI presents itself as a saviour. But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.

I think this is the big issue with 'ai cheating'. Sure, the LLM can create a convincing appearance of understanding some topic, but if you're doing anything of importance, like making pizza, and don't have the critical thinking you learn in school then you might think that glue is actually a good way to keep the cheese from sliding off.

A cheap meme example for sure, but think about how that would translate to a Senator trying to deal with more complex topics.... actually, on second thought, it might not be any worse. 🤷

Edit: Adding that while critical thinking is a huge part. it's more of the "you don't know what you don't know" that tripped these students up, and is the danger when using LLM in any situation where you can't validate it's output yourself and it's just a shortcut like making some boilerplate prose or code.

[–] FlyingCircus@lemmy.world 4 points 3 hours ago (1 children)

People always talk about how students are afraid to fail, but no one ever mentions that the consequences of failure in our society are far greater than the rewards for success, unless you are already at the top.

[–] jupiter_jazz@lemmy.dbzer0.com 3 points 2 hours ago

For me its the $5k for the class down the drain lol

[–] ThePantser@sh.itjust.works 28 points 23 hours ago (2 children)

It should be treated the same as if another student wrote the paper. If it was used as a research tool where you didn't repeat it word for word then it's cool, it can be treated like a peer that helped you research. But using it to fully write then it's an instant fail because you didn't do anything.

[–] definitemaybe@lemmy.ca 5 points 19 hours ago (7 children)

Okay, sure. But how can you identify its use? You'd better be absolutely confident or there are likely to be professional consequences.

Not to mention completely destroy your relationship with the student (maybe not so relevant to professors, but relationship building is the main job of effective primary and secondary educators.)

load more comments (7 replies)
load more comments (1 replies)
[–] IAmNorRealTakeYourMeds@lemmy.world 48 points 1 day ago* (last edited 1 day ago) (7 children)

I think the only solution is the Cambridge exam system.

The only grade they get is at the final written exam. all other assignments and tests are formative, to see if they are on track or to practice skills... This way it does not matter if a student cheats in those assignments, they only hurt themselves. Sorry for the final exam stress though.

[–] sin_free_for_00_days@sopuli.xyz 5 points 19 hours ago (2 children)

A significant percentage of my classes at University were a midterm and final, or just a final. I thought they worked just fine.

load more comments (2 replies)
load more comments (6 replies)
[–] SoftestSapphic@lemmy.world 30 points 1 day ago (4 children)

Students would want to learn instead of doing less work if there were incentives to learn instead of just get out with a degree.

[–] SaveTheTuaHawk@lemmy.ca 1 points 2 hours ago

Sadly, lockdown era students all got their WFH piece of paper they can safely wipe their asses with. We have never had so many grad students fail out as this cohort. They literally got degrees with no practical knowledge, tehn thought tehy could cost to a career with the same work ethic.

[–] ulterno@programming.dev 15 points 23 hours ago

It seems AI is putting more light on this problem of the academic system not really being learning oriented.
Not that it matters. There was already enough light on it and now it's just blinding.

load more comments (2 replies)
[–] Alaknar@sopuli.xyz 66 points 1 day ago (23 children)

Let me tell you why the Trojan horse worked. It is because students do not know what they do not know. My hidden text asked them to write the paper “from a Marxist perspective”. Since the events in the book had little to do with the later development of Marxism, I thought the resulting essay might raise a red flag with students, but it didn’t.

I had at least eight students come to my office to make their case against the allegations, but not a single one of them could explain to me what Marxism is, how it worked as an analytical lens or how it even made its way into their papers they claimed to have written. The most shocking part was that apparently, when ChatGPT read the prompt, it even directly asked if it should include Marxism, and they all said yes. As one student said to me, “I thought it sounded smart.”

Christ.......

load more comments (23 replies)
[–] rustydrd@sh.itjust.works 101 points 1 day ago* (last edited 1 day ago) (15 children)

In one of my classes, when ChatGPT was still new, I once handed out homework assignments related to programming. Multiple students handed in code that obviously came from ChatGPT (too clean a style, too general for the simple tasks that they were required to do).

Decided to bring one of the most egregious cases to class to discuss, because several people handed in something similar, so at least someone should be able to explain how the code works, right? Nobody could, so we went through it and made sense of it together. The code was also nonfunctional, so we looked at why it failed, too. I then gave them the talk about how their time in university is likely the only time in their lives when they can fully commit themselves to learning, and where each class is a once-in-a-lifetime opportunity to learn something in a way that they will never be able to experience again after they graduate (plus some stuff about fairness) and how they are depriving themselves of these opportunities by using AI in this way.

This seemed to get through, and we then established some ground rules that all students seemed to stick with throughout the rest of the class. I now have an AI policy that explains what kind of AI use I consider acceptable and unacceptable. Doesn't solve the problem completely, but I haven't had any really egregious cases since then. Most students listen once they understand it's really about them and their own "becoming" professional and a more fully developed person.

load more comments (15 replies)
[–] Empricorn@feddit.nl 3 points 17 hours ago

Curious why you're only posting the archived version? This article is not paywalled...

[–] mlg@lemmy.world 24 points 1 day ago (1 children)

I'm guessing 33 people were too lazy to copy data into a box and relied on ChatGPT OCR lol.

This was a great article about the use of AI, but I think this also exposed bad/zero effort cheating.

There's a reason why even the ye olde Wikipedia copy-pasters would rearrange sentences to make sure they can game the plagiarism checker.

load more comments (1 replies)
load more comments
view more: next ›