You just know there is a guy in class who just wrote the essay from Marxist perspective because he does everything from a Marxist perspective.
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
Back in 2001 when I went to college they gave us a warning not to plagiarize reports and assignments. Saying they had sophisticated tools at their disposal and sites like cheat.com
Fun fact: at that time (and maybe still now since I checked a while ago) cheat.com was a porn site. So they were full of shit.
But AI detection is being really, really scary since the amount of false positives are staggering.
I'm just wondering who and how to contact the dilf in the photo
But I am a historian, so I will close on a historian’s note: History shows us that the right to literacy came at a heavy cost for many Americans, ranging from ostracism to death. Those in power recognised that oppression is best maintained by keeping the masses illiterate, and those oppressed recognised that literacy is liberation.
It's scary how much damage is being done to education, not just from AI but also the persistent attacks on public education in the US over decades, hampering the system with things like No Child Left Behind and diverting funds to private schools with vouchers in the name of "school choice". On top of that there are suggestions that teachers aren't even needed and that students could be taught with AI. It's grim.
I heard of something brilliant though: The teacher TELLS the students to have the AI generate an essay on a subject AND THEN the students have to go through the paper and point out all the shit it got WRONG :D
This is discussed in the article
Interesting post, would be good to support the author/publisher with a source link. Especially since it isn't pay walled.
From later in the article:
Students are afraid to fail, and AI presents itself as a saviour. But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.
I think this is the big issue with 'ai cheating'. Sure, the LLM can create a convincing appearance of understanding some topic, but if you're doing anything of importance, like making pizza, and don't have the critical thinking you learn in school then you might think that glue is actually a good way to keep the cheese from sliding off.
A cheap meme example for sure, but think about how that would translate to a Senator trying to deal with more complex topics.... actually, on second thought, it might not be any worse. 🤷
Edit: Adding that while critical thinking is a huge part. it's more of the "you don't know what you don't know" that tripped these students up, and is the danger when using LLM in any situation where you can't validate it's output yourself and it's just a shortcut like making some boilerplate prose or code.
People always talk about how students are afraid to fail, but no one ever mentions that the consequences of failure in our society are far greater than the rewards for success, unless you are already at the top.
For me its the $5k for the class down the drain lol
It should be treated the same as if another student wrote the paper. If it was used as a research tool where you didn't repeat it word for word then it's cool, it can be treated like a peer that helped you research. But using it to fully write then it's an instant fail because you didn't do anything.
Okay, sure. But how can you identify its use? You'd better be absolutely confident or there are likely to be professional consequences.
Not to mention completely destroy your relationship with the student (maybe not so relevant to professors, but relationship building is the main job of effective primary and secondary educators.)
I think the only solution is the Cambridge exam system.
The only grade they get is at the final written exam. all other assignments and tests are formative, to see if they are on track or to practice skills... This way it does not matter if a student cheats in those assignments, they only hurt themselves. Sorry for the final exam stress though.
A significant percentage of my classes at University were a midterm and final, or just a final. I thought they worked just fine.
Students would want to learn instead of doing less work if there were incentives to learn instead of just get out with a degree.
Sadly, lockdown era students all got their WFH piece of paper they can safely wipe their asses with. We have never had so many grad students fail out as this cohort. They literally got degrees with no practical knowledge, tehn thought tehy could cost to a career with the same work ethic.
It seems AI is putting more light on this problem of the academic system not really being learning oriented.
Not that it matters. There was already enough light on it and now it's just blinding.
Let me tell you why the Trojan horse worked. It is because students do not know what they do not know. My hidden text asked them to write the paper “from a Marxist perspective”. Since the events in the book had little to do with the later development of Marxism, I thought the resulting essay might raise a red flag with students, but it didn’t.
I had at least eight students come to my office to make their case against the allegations, but not a single one of them could explain to me what Marxism is, how it worked as an analytical lens or how it even made its way into their papers they claimed to have written. The most shocking part was that apparently, when ChatGPT read the prompt, it even directly asked if it should include Marxism, and they all said yes. As one student said to me, “I thought it sounded smart.”
Christ.......
In one of my classes, when ChatGPT was still new, I once handed out homework assignments related to programming. Multiple students handed in code that obviously came from ChatGPT (too clean a style, too general for the simple tasks that they were required to do).
Decided to bring one of the most egregious cases to class to discuss, because several people handed in something similar, so at least someone should be able to explain how the code works, right? Nobody could, so we went through it and made sense of it together. The code was also nonfunctional, so we looked at why it failed, too. I then gave them the talk about how their time in university is likely the only time in their lives when they can fully commit themselves to learning, and where each class is a once-in-a-lifetime opportunity to learn something in a way that they will never be able to experience again after they graduate (plus some stuff about fairness) and how they are depriving themselves of these opportunities by using AI in this way.
This seemed to get through, and we then established some ground rules that all students seemed to stick with throughout the rest of the class. I now have an AI policy that explains what kind of AI use I consider acceptable and unacceptable. Doesn't solve the problem completely, but I haven't had any really egregious cases since then. Most students listen once they understand it's really about them and their own "becoming" professional and a more fully developed person.
Curious why you're only posting the archived version? This article is not paywalled...
I'm guessing 33 people were too lazy to copy data into a box and relied on ChatGPT OCR lol.
This was a great article about the use of AI, but I think this also exposed bad/zero effort cheating.
There's a reason why even the ye olde Wikipedia copy-pasters would rearrange sentences to make sure they can game the plagiarism checker.