Critical thinking was a problem long before AI. How do you think Trump got elected?
Academia
As someone collaborating with faculty to produce higher education courses, I can say even the professionals are falling to the temptation of using AI to develop their curriculum and learning materials. And the results run the full spectrum from carefully considered to slop.
AI is a pretty great assistant if you already know how to do the thing it’s helping you with.
But it’s also great at looking like it’s coherent from a quick glance while being full of nonsense.
it feels unfair that in a couple of minutes they can churn out what would take several hours to review….
but then if you consider how expensive it is to humanity at large, it’s not cheap at all….
i think the punishment for slop should be greater than entirely plagiarizing something.
———————————-
this comment was generated by a human
I agree with the core of the text; the problem would exist even if you were to rely on another person to think for you, because you are not thinking by yourself. That's kind of obvious.
But there's a second potential problem the article doesn't address, and I think it's considerably more pressing: those large models lower your standards on what's rational enough to be acceptable.
~Half a year ago, I saw on Hacker News a comment highlighting that large models don't do maths correctly; they asked some models to multiply two large natural numbers, and all answers were incorrect. A lot of people replied to that comment with skibidi, TL;DR "y u asking ChatGPT model to multiply? lol. grab calculator lmaaaaooo it thinx it doesn't do maths haha", missing the point completely. (C'mon, it's HN, you know.)
I repeated this test. And shared the results here, in the Fediverse. Here they are:

[screenshots of the models' answers]

All models output wrong answers, just like in the HN test. And yet at least one other user defended the models with bullshit like "it's close to the real result! This shows the model is smart!".
But wait a minute. Multiplication is a deterministic procedure, right? Start with exact input, follow the steps of the procedure correctly, and you'll get exact and correct output, every single time, no matter whether the input contains factors with 2 or 200 digits. This means multiplication is also a damn good test for the ability to follow logical reasoning. (Or to output something that humans would interpret as such.)
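Because the procedure is deterministic, a model's claimed product can be checked exactly: it either matches or it doesn't, and "close" counts for nothing. A minimal sketch of such a check in Python, whose built-in integers are arbitrary-precision (the factors here are made up purely for illustration, not the ones from the original test):

```python
# Hypothetical large factors, chosen only for illustration.
a = 748159263748159
b = 991827364558201

# Python ints have arbitrary precision, so this product is exact.
exact = a * b

def check_claim(claimed: int) -> bool:
    """Return True only if the claimed product is exactly correct."""
    return claimed == exact

# "Close to the real result" still fails: there is no partial credit.
print(check_claim(exact))      # the exact answer passes
print(check_claim(exact + 1))  # off by one fails
```

The point is that the grader costs one line of code, which is exactly why multiplication makes such an unforgiving test of whether a system follows a procedure or merely approximates its output.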
And yet, I saw two instances of people giving that incorrect output a pass. They didn't defend it because of something like "those models don't think" (true); no, they did it because the reasoning in the output is "good enough". Even if a 10yo is supposed to show better reasoning than that.
And it isn't just multiplication. This lack of reasoning is evident in everything you ask a bot, or in the fact that they can't handle a negation (and oopsie, the "agent" suddenly deleted your files). But you're supposed to give it an OK sign to be an irrational agent. And in the end you give yourself a free pass to be irrational too.
[Worth noting that those examples are anecdotal, though, and they back up my conclusion, so you do need to take my conclusion with a grain of salt. I don't think the conclusion is incorrect, though. If anyone has literature on the topic I'd love to see it.]
Saying "professors scramble to save critical thinking in an age of AI" when Critical Thinking is not, at fucking all, taught in American schools is disconcerting.
Oh there was a plan to introduce it in high school and conservatives threw a fit because it would turn their kids against their shitty beliefs. It was scrapped and that was the last I heard about it.
Critical thinking should be taught in kindergarten and by second grade children should be able to use evidence to generate and defend basic beliefs.
They are speaking at the higher education level. But I agree with the sentiment that it should be taught throughout k-12.

Was anybody doing any of that anyway?