this post was submitted on 19 May 2025
1488 points (98.1% liked)

Microblog Memes

[–] PillowTalk420@lemmy.world 22 points 2 days ago* (last edited 2 days ago) (5 children)

Even setting aside all of those things, the whole point of school is that you learn how to do shit yourself, not pass it off to someone or something else to do for you.

If you are just gonna use AI to do your job, why should I hire you instead of using AI myself?

[–] Numuruzero@lemmy.dbzer0.com 23 points 2 days ago (1 children)

The issue as I see it is that college is a barometer for success in life, which for the sake of brevity I'll just say means economic success. It's not just a place of learning, it's the barrier to entry - and any metric that becomes a goal is prone to corruption.

A student won't necessarily think of using AI as cheating themselves out of an education because we don't teach the value of education except as a tool for economic success.

If the tool is education, the barrier to success is college, and the actual goal is to be economically successful, why wouldn't a student start using a tool that breaks open that barrier with as little effort as possible?

[–] Zink@programming.dev 7 points 2 days ago

especially in a world that seems to be repeatedly demonstrating to us that cheating and scumbaggery are the path to the highest echelons of success.

…where “success” means money and power - the stuff that these high-profile scumbags care about, and the stuff that many otherwise decent people are taught should be the priority in their lives.

[–] Zealousideal_Fox_900@lemmy.dbzer0.com 9 points 1 day ago (1 children)

Gotta say, if someone gets through medical school with AI, we're fucked.

[–] vane@lemmy.world 2 points 1 day ago

We have at most 10 years before that happens. I saw a medical AI from Google on Hugging Face today, and at least one more besides.

[–] sin_free_for_00_days@sopuli.xyz 24 points 2 days ago

Students turn in bullshit LLM papers. Instructors run those bullshit LLM papers through LLM grading. Humans need not apply.

[–] Aksamit@slrpnk.net 2 points 1 day ago

And yet once they graduate, if the patients are female and/or not white, all concern for those standards is optional at best, unless the patients bring a (preferably white) man in with them to vouch for their symptoms.

Not pro-AI, just depressed about healthcare.

[–] SoftestSapphic@lemmy.world 76 points 3 days ago

The moment we change school to be about learning instead of making it the requirement for employment, we will see students prioritize learning over "just getting through it to get the degree."

[–] conditional_soup@lemm.ee 88 points 3 days ago (10 children)

Idk, I think we're back to "it depends on how you use it". Once upon a time, the same was said of the internet in general, because people could just go online, copy and paste shit, and share answers, but the internet can also just be a really great educational resource. I think that using LLMs in non-load-bearing, "trust but verify" type roles (study buddies, brainstorming, very high-level information searching) is actually really useful. One of my favorite uses of ChatGPT is when I have a concept so loose that I don't even know the right question to Google; I can just kind of chat with the LLM and refine my way to a narrower, more googleable subject.

[–] takeda@lemm.ee 129 points 3 days ago (11 children)

trust but verify

The thing is that an LLM is a professional bullshitter. It is actually trained to produce text that can fool an ordinary person into thinking it was produced by a human. The facts come second.

[–] conditional_soup@lemm.ee 49 points 3 days ago (1 children)

Yeah, I know. I use it for work in tech. If I encounter a novel (to me) problem and I don't even know where to start attacking it, the LLM can sometimes save me hours of googling: I just describe my problem to it in a chat format, say what I want to do, and ask if there's a commonly accepted approach or library for handling it. Sure, it sometimes hallucinates a library, but that's why I go and verify and read the docs myself instead of just blindly copying and pasting.

[–] lefaucet@slrpnk.net 32 points 3 days ago* (last edited 3 days ago) (2 children)

That last step of verifying is often skipped, and it's getting HARDER to do.

The hallucinations spread like wildfire on the internet. It doesn't matter what's true, only what gets clicks, and that encourages more apparent "citations". An even worse fertilizer of false citations is the desire of power-hungry bastards to push false narratives.

AI rabbit holes are getting too deep to verify. It really is important to keep digital hallucinations out of the academic loop, especially in fields with life-and-death consequences like medical school.

[–] Impleader@lemmy.world 24 points 3 days ago (6 children)

I don’t trust LLMs for anything based on facts or complex reasoning. I’m a lawyer and any time I try asking an LLM a legal question, I get an answer ranging from “technically wrong/incomplete, but I can see how you got there” to “absolute fabrication.”

I actually think the best current use for LLMs is for itinerary planning and organizing thoughts. They’re pretty good at creating coherent, logical schedules based on sets of simple criteria as well as making communications more succinct (although still not perfect).

[–] TowardsTheFuture@lemmy.zip 21 points 3 days ago

And just as back then, the problem is not with people using something to actually learn and deepen their understanding. It is with people blatantly cheating and knowing nothing because they don’t even read the thing they’re copying down.

[–] TheTechnician27@lemmy.world 17 points 3 days ago* (last edited 3 days ago) (2 children)

Something I think you neglect in this comment is that yes, you're using LLMs in a responsible way. However, this doesn't translate well to school. The objective of homework isn't just to reproduce the correct answer. It isn't even to reproduce the steps to the correct answer. It's for you to learn the steps to the correct answer (and possibly the correct answer itself), and the reproduction of those steps is a "proof" to your teacher/professor that you put in the effort to do so. This way you have the foundation to learn other things as they come up in life.

For instance, if I'm in a class learning to read latitude and longitude, the teacher can give me an assignment to find 64° 8′ 55.03″ N, 21° 56′ 8.99″ W on the map and write where it is. If I want, I can just copy-paste that into OpenStreetMap right now and see what horrors await, but to actually learn, I need to manually track down where that is on the map. Because I learned to use latitude and longitude as a kid, I can verify what the computer is telling me, and I can imagine in my head roughly where that coordinate is without a map in front of me.
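(If it helps make "learning the steps" concrete, the conversion those coordinates need is just arithmetic: degrees + minutes/60 + seconds/3600, negated for south/west. Here's a minimal Python sketch; the function name and hemisphere handling are my own illustration, not part of any actual assignment:)

```python
def dms_to_decimal(degrees: int, minutes: int, seconds: float, hemisphere: str) -> float:
    """Convert degrees/minutes/seconds to signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # south and west hemispheres are negative by convention
    return -value if hemisphere in ("S", "W") else value

lat = dms_to_decimal(64, 8, 55.03, "N")   # ~64.1486
lon = dms_to_decimal(21, 56, 8.99, "W")   # ~-21.9358
print(lat, lon)  # lands in Reykjavik, Iceland
```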

Learning without cheating lets you develop a good understanding of what you: 1) need to memorize, 2) don't need to memorize because you can reproduce it from other things you know, and 3) should just rely on an outside reference work for whenever you need it.

There's nuance to this, of course. Say, for example, that you cheat to find an answer because you just don't understand the problem, but afterward, you set aside the time to figure out how that answer came about so you can reproduce it yourself. That's still, in my opinion, a robust way to learn. But that kind of learning also requires very strict discipline.

[–] McDropout@lemmy.world 32 points 2 days ago (3 children)

It’s funny how everyone is against students using AI to get summaries of texts, PDFs, etc., which I totally get.

But during my time in med school, I never got my exam papers back (ever!). The exam was a test where I needed to prove that I have enough knowledge, but the exam should also show me where my weaknesses are so I can work on them. But no, we never got our papers back. And this extends beyond med school: exams like the USMLE are long and tiring, and at the end of the day we just want a pass, another hurdle jumped.

We criticize students a lot (rightfully so), but we don't criticize the system, where students only study because there is an exam, not because they are particularly interested in the topic at hand.

A lot of topics that I found interesting in medicine got dropped because I had to sit for other examinations.

[–] drmoose@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

Dumb take because inaccuracies and lies are not unique to LLMs.

half of what you’ll learn in medical school will be shown to be either dead wrong or out of date within five years of your graduation.

https://retractionwatch.com/2011/07/11/so-how-often-does-medical-consensus-turn-out-to-be-wrong/ and that's from 2011; it's even worse now.

Real studying is knowing that no source is perfect, but being able to craft a true picture of the world using the most efficient tools at hand; and like it or not, LLMs are objectively pretty good already.

[–] Jankatarch@lemmy.world 38 points 3 days ago (1 children)

This is the only topic I am closed-minded and strict about.

If you need to cheat as a high schooler or younger, there is something else going wrong; focus on that.

And if you are an undergrad or higher, you should already be better than AI. Unless you cheated on important stuff before.

[–] sneekee_snek_17@lemmy.world 29 points 3 days ago (7 children)

This is my stance exactly. ChatGPT CANNOT say what I want to say, how I want to say it, in a logical and factually accurate way, without me having to just rewrite the whole thing myself.

There isn't enough research about mercury bioaccumulation in the Great Smoky Mountains National Park for it to actually say anything of substance.

I know being a non-traditional student massively affects my perspective, but like, if you don't want to learn about the precise thing your major is about...... WHY ARE YOU HERE

[–] Dasus@lemmy.world 14 points 2 days ago

Well that disqualifies 95% of the doctors I've had the pleasure of being the patient of in Finland.

It's just that it's not LLMs they're addicted to, it's bureaucracy.

[–] disguy_ovahea@lemmy.world 30 points 3 days ago (1 children)

Even more concerning, their dependence on AI will carry over into their professional lives, effectively training our software replacements.


galileosballs is the last screw holding the house together i swear

[–] TankovayaDiviziya@lemmy.world 14 points 3 days ago

This reasoning applies to everything. For example, the tariff rates the Trump admin imposed on each country and territory were very likely based on a response from ChatGPT.

[–] MystikIncarnate@lemmy.ca 13 points 3 days ago (2 children)

I've said it before and I'll say it again: the only thing AI can, or should, be used for in the current era is templating... I suppose things that don't require truth or accuracy are fine too, but yeah.

You can build the framework of an article, report, story, publication, assignment, etc. using AI to get some words on paper to start from. Every fact, declaration, or reference needs to be treated as false information unless otherwise proven, and most of the work will need to be rewritten. It's there to provide, more or less, a structure to start from, and you do the rest.

When I did essays and the like in school, I didn't have AI to lean on, and the hardest part of doing any essay was... how the fuck do I start this thing? I knew what I wanted to say and how I wanted to say it, but the initial declarations and wording to "break the ice", so to speak, always gave me issues.

It's shit like that where AI can help.

Take everything AI gives you with a gigantic asterisk: any and all information is liable to be false. Do your own research.

Given how fast knowledge and developments in science, technology, medicine, etc. are transforming how we work, now more than ever what you know is less important than what you can figure out. That's what the youth need to be taught: how to figure that shit out for themselves, do the research, and verify their findings. Once you know how to do that, you'll be able to adapt to almost any job you can comprehend at a high level; it's just a matter of time, patience, research, and learning.

With that being said, some occupations have little to no margin for error, which is where my thought process inverts: train long and hard before you start doing the job. Stuff like doctors, who can literally kill patients if they don't know what they don't know... or nuclear power plant techs. Stuff like that.

[–] GoofSchmoofer@lemmy.world 30 points 3 days ago* (last edited 3 days ago) (6 children)

When I did essays and the like in school, I didn’t have AI to lean on, and the hardest part of doing any essay was… How the fuck do I start this thing?

I think that this is a big part of education and learning though. Having to stare at a blank screen (or paper) and wonder "how the fuck do I start?", having to brainstorm, write shit down 50 times, edit, delete, start over. I think that process alone makes you appreciate good writing and how difficult it can be.

My opinion is that when you skip that step you skip a big part of the creative process.
