this post was submitted on 15 Jul 2025
146 points (99.3% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

Personally seen this behavior a few times in real life, often with worrying implications. Generously, I'd like to believe these people use extruded text as a place to start thinking from, but in practice it seems to me that they tend to use it as a thought-terminating behavior.

IRL, I find it kind of insulting, especially if I'm talking to people who should know better or if they hand me extruded stuff instead of work they were supposed to do.

Online it's just sort of harmless reply-guy stuff usually.

Many people simply straight-up believe LLMs to be genie-like figures, as they are advertised and written about in the "tech" rags. That bums me out in sort of the same way really uncritical religiosity bums me out.

HBU?

top 49 comments

It's absolutely insulting and infuriating, and I want to grab them and slap them more than a couple of times.

I'm in my first year at university, studying software engineering, and sometimes I like doing homework with friends, because calculus and linear algebra are hard on my brain and I specifically went to uni to understand the hard parts.

Not once, not twice, have I asked a friend for help with something, only for them to open the dumbass chatbot, ask it how to solve the question, and believe the answer like it's Moses coming down with the commandments. Then they give me the same explanation, full of orgasmic enthusiasm, until I go "applying that theorem in the second step is invalid" or "this contradicts an earlier conclusion", at which point they shut their fucking brains off, tell monsieur shitbot about his mistake, and again explain to me like I'm a child (I'd say mansplaining, because honest to god it looked and sounded the same, but I'm also a man, so..) the bot output, word for word.

This doesn't stop at 3 or 4 times, I could only wish. One time I got curious and burnt an hour like that with the same guy, on the same question, on the same prompt streak.

Like, after the 7th time they still don't understand that they're in more trouble than me, and they still talk like they have a PhD.

So I'll sum up:

  • they turn off their brain
  • they make bot think for them
  • they believe bot like gospel
  • they swear bot knows best
  • they're shown it does not know shit
  • "just one more prompt bro i swear bro please one more time bro i swear bro Claude knows how to solve calculus bro it has LaTeX bro so it probably knows bro please bro one more prompt bro-"
[–] rizo@sh.itjust.works 9 points 1 day ago

My boss uses it to convert 3 lines of text into multiple paragraphs of PR text for our newsletter and was excited about this... 2 or 3 weeks later he told me how cool it is that he can take a multi-paragraph newsletter from other companies and summarize it into 3 sentences... Let us burn through our energy grid for marketing... (Slow clap)

[–] Strider@lemmy.world 4 points 1 day ago

A friend of mine who works in tech, and is very well aware of what 'AI' is, is a big fan. He runs his own bots and stuff for personal use and thinks he has the situation under control.

All the while, he is relying more and more on the 'benefits'.

My fear is that he won't be aware of how his LLM-interpreted output might change him. It's kind of a deal-with-the-devil situation.

I hope I am wrong.

[–] HollowNaught@lemmy.world 8 points 1 day ago

More than a few of my work colleagues will search something up and then blindly trust the AI summary.

It's infuriating

[–] AbsolutelyNotAVelociraptor@sh.itjust.works 45 points 2 days ago (1 children)

Ffs, I had one of those at work.

One day, we bought a new water sampler. The thing is pretty complex and requires a licensed technician from the manufacturer to come and commission it.

Since I was overseeing the installation and would later be the person responsible for connecting it to our industrial network, I had quite a few questions about the device, some of them very specific.

I swear the guy couldn't give me even the most basic answers about the device without asking chatgpt. At a certain point, I had to answer one question myself by reading the manual (which I downloaded on the go, because the guy didn't have a paper copy) because chatgpt couldn't give him an answer. This guy was hired by the company making the water sampler as an "expert", mind you.

[–] flandish@lemmy.world 20 points 2 days ago (1 children)

Assuming you were in meatspace with this person, I am curious: did they, like… open gpt mid-convo with you to ask it? Or say “brb”?

[–] AbsolutelyNotAVelociraptor@sh.itjust.works 16 points 2 days ago (1 children)

Since I was inspecting the device (it's a somewhat big object, similar to a fridge), I didn't realize at first because I wasn't looking at him. I noticed the chatgpt thing when, on a certain question, I was standing next to him and he shamelessly, phone in hand, typed my question into chatgpt. That was when he couldn't give me the answer and I had to look for the product manual on the internet.

Funniest thing was when I asked something I couldn't find in the manual and he told me, and I quote, "if you manage to find out, let me know the answer!". Like, dude? You are the product expert? I should be the one saying that to you, not the other way around!

[–] flandish@lemmy.world 5 points 2 days ago

ouch!! That’s such nonsense.

[–] flandish@lemmy.world 37 points 2 days ago (1 children)

I respond the way we did when Wikipedia was new: “Show me a source.” … “No, GPT is not a source. Ask it for its sources. Then send me the link.” … “No, Wikipedia is not a source; find the citation used in that statement and send me its link.”

If you make people at least have to acknowledge that sources are a thing, you’ll find the issues go away. (Because none of these assholes will talk to you anymore anyway. ;) )

[–] wizardbeard@lemmy.dbzer0.com 28 points 2 days ago (4 children)

GPT will list fake sources. Just in case you aren't aware. Most of these things will.

[–] teegus@sh.itjust.works 22 points 2 days ago (2 children)

My municipality used hallucinated references to justify closing down schools.

[–] someguy3@lemmy.world 4 points 2 days ago (2 children)
[–] teegus@sh.itjust.works 5 points 2 days ago

It made up references that allegedly said larger schools were better for learning than smaller ones, among other things. But the main reason for restructuring the schools was to save money.

It's government. Reason doesn't enter into it.

[–] DrDystopia@lemy.lol 2 points 2 days ago

Well hello there, buddy!

[–] BlameTheAntifa@lemmy.world 10 points 2 days ago

Tracing and verifying sources is standard academic writing procedure. While you definitely can’t trust anything an LLM spits out, you can use them to track down certain types of sources more quickly than search engines. On the other hand, I feel that’s more of an indictment of the late-stage enshittification of search engines, not some special strength of LLMs. If you have to use one, don’t trust it, demand supporting links and references, and verify absolutely everything.
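
If you do go the "demand supporting links" route, the cheapest first check is whether the claimed sources even resolve. Here's a minimal sketch using only the Python standard library; the URLs are hypothetical stand-ins for whatever links an LLM hands you. A page that loads still has to be read and compared against the claim yourself.

```python
import urllib.error
import urllib.request

def check_sources(urls):
    """Report whether each claimed source URL actually resolves."""
    results = {}
    for url in urls:
        req = urllib.request.Request(url, headers={"User-Agent": "source-check/0.1"})
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                # Reachable only means the page exists, not that it supports the claim.
                results[url] = f"reachable (HTTP {resp.status})"
        except urllib.error.HTTPError as e:
            # Hallucinated links on real domains usually land here as 404s.
            results[url] = f"broken (HTTP {e.code})"
        except (urllib.error.URLError, ValueError):
            # Entirely made-up domains or malformed URLs land here.
            results[url] = "unreachable or malformed"
    return results

# Hypothetical links an LLM might cite:
for url, note in check_sources([
    "https://example.com/a-real-page",
    "https://journals.example.org/vol12/does-not-exist",
]).items():
    print(url, "->", note)
```

A clean HTTP 200 is the weakest possible evidence, but a 404 or a nonexistent domain is an instant tell that the "source" was extruded rather than found.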

[–] flandish@lemmy.world 9 points 2 days ago

Yep. 100% aware. That’s one of my points: showing it's fake. Sometimes enlightening to some folks.

[–] ordinarylove@lemmy.blahaj.zone 8 points 2 days ago (1 children)

indeed they literally cannot cite sources accurately by way of their function 😬

[–] BroBot9000@lemmy.world 9 points 2 days ago* (last edited 2 days ago) (1 children)

I’ll still ask the person shoving AI slop in my face for a source or artist link, just to shame these pathetic attempts to pass along slop and misinformation.

Edit for clarity

[–] Ulrich@feddit.org 1 points 2 days ago (1 children)

You can ask it for whatever you want; it will not provide sources.

[–] BroBot9000@lemmy.world 7 points 2 days ago* (last edited 2 days ago)

Ask the person shoving AI slop in my face for their source.

Not going to ask a racist pile of linear algebra for a fake source.

[–] ThisIsNotHim@sopuli.xyz 17 points 2 days ago (1 children)

Slightly different, but I've had people insist on slop.

A higher-up at work asked the difference between i.e., e.g., and ex. I answered; they weren't satisfied and made their assistant ask the large language model. The assistant read the reply out loud, and it was near verbatim what I had just told them. Ugh.

This is not the only time this has happened

[–] deaddigger@sh.itjust.works 2 points 1 day ago (1 children)

So what's the difference, exactly?

[–] ThisIsNotHim@sopuli.xyz 5 points 1 day ago (1 children)

I.e. is used to restate for clarification. It doesn't really relate to the other two, and should not be used when multiple examples are listed or could be listed.

E.g. and ex. are both used to start a list of examples. They're largely equivalent, but should not be mixed. If your organization has a style guide consult that to check which to use. If it doesn't, check the document and/or similar documents to see if one is already in use, and continue to use that. If no prior use of either is found, e.g. is more common.
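
That "check the document for prior use" step is mechanical enough to script. A tiny sketch in Python; the regexes are rough heuristics (they'll miss things like an abbreviation split across a line break), and the function name is mine, not from any style guide:

```python
import re

def preferred_example_abbr(text: str) -> str:
    """Guess which 'for example' abbreviation a document already uses."""
    eg_count = len(re.findall(r"\be\.g\.", text))
    ex_count = len(re.findall(r"\bex\.", text))
    if eg_count > ex_count:
        return "e.g."
    if ex_count > eg_count:
        return "ex."
    return "e.g."  # no prior use (or a tie): fall back to the more common form

print(preferred_example_abbr("Citrus fruits, e.g. lemons and limes, are sour."))  # e.g.
```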

[–] deaddigger@sh.itjust.works 3 points 1 day ago (2 children)

Thanks

So i.e. would be like "the most useful object in the galaxy, i.e. a towel"

And e.g. would be like "companies, e.g. Meta, Viatris, Ehrmann, Edeka". Right?

[–] ThisIsNotHim@sopuli.xyz 3 points 1 day ago

Exactly. If you've got a head for remembering Latin, i.e. is id est, so you can try swapping "that is" into the sentence to see if it sounds right.

E.g. is exempli gratia so you can try swapping "for example" in for the same trick.

If you forget, avoiding the abbreviations is fine in most contexts. That said, I'd be surprised if mixing them up makes any given sentence less clear.

[–] kayzeekayzee@lemmy.blahaj.zone 3 points 1 day ago (1 children)

ie basically means "in other words"

eg and ex mean "for example"

[–] SpookyMulder@lemmy.4d2.org 3 points 1 day ago (2 children)

fwiw: I always remember it as "example given" and "in essence"

[–] kayzeekayzee@lemmy.blahaj.zone 1 points 14 hours ago* (last edited 14 hours ago)

I just use "for EGsample" and "for EXample"

Also "In Eoutherwords"

But for those wondering: i.e. and e.g. are actually Latin abbreviations!

i.e. - id est - "that is"

e.g. - exempli gratia - "for example"

[–] JandroDelSol@lemmy.world 1 points 21 hours ago

that's how I remember it too!

[–] BroBot9000@lemmy.world 30 points 2 days ago* (last edited 2 days ago) (1 children)

There are a lot of uneducated people out there without the ability to critically evaluate new information they receive. To them, any new information is true, and no further context is sought, because they are lazy too.

[–] DrDystopia@lemy.lol 6 points 2 days ago

Anybody, at any level, can fall into that trap unless externally evaluated. And if they never get a reality check, they just keep going perpetually. Why not, it's worked up until now...

[–] Akasazh@feddit.nl 7 points 2 days ago

I had a friend ask me what time the Tour de France would cross Clermont-Ferrand on a day when the stage was in Normandy, because AI had told them it would, as part of their 'things to do in Clermont-Ferrand on that day' query.

The Tour had started in Clermont in 2023, but not even on that day, which kind of puzzled me.

[–] Aqarius@lemmy.world 8 points 2 days ago

Absolutely. People will call you a bot, then vomit out an argument ChatGPT gave them without even reading it.

[–] galoisghost@aussie.zone 21 points 2 days ago (1 children)

The worst thing is when you see the AI summary repeated word for word on content farm sites that appear in the result list. You know that's just reinforcing the AI summary's validity for some users.

[–] ordinarylove@lemmy.blahaj.zone 12 points 2 days ago

This propagates fake/wrong solutions to common tech problems too, it's obnoxious.

[–] lapes@lemmy.zip 16 points 2 days ago (1 children)

I work in customer support and it's very annoying when someone pastes generic GPT advice on how I should fix their issue. That stuff is usually irrelevant or straight up incorrect.

[–] Ulrich@feddit.org 4 points 2 days ago (1 children)

Dr. Google has become Dr. ChatGPT

[–] mlen@awful.systems 1 points 2 days ago

Tell them to use Yahoo for a second opinion.

[–] Ulrich@feddit.org 8 points 2 days ago

Absolutely. All the time.

Also had a guy that I do a little bit of work with ask me to use it. I told them no haha

[–] Catoblepas@piefed.blahaj.zone 11 points 2 days ago

It annoys me on social media, and I wouldn’t know how to react if someone did that in front of me. If I wanted to see what the slop machine slopped out I’d go slop-raking myself.

[–] Kolanaki@pawb.social 11 points 2 days ago* (last edited 2 days ago) (1 children)

It already annoyed me that some people I know IRL will start an argument not knowing what they're talking about, start looking shit up on Wikipedia, only to misread or not comprehend what they're reading, proving themselves wrong but still acting like they were right.

Now it's even worse, because even if they carefully read what they're given, it's straight-up hallucinated BS 70% or more of the time.

[–] KazuchijouNo@lemy.lol 14 points 2 days ago (1 children)

Once I made an ARG-type game a la Cicada 3301 and had high school students try to solve it. Some used chatgpt and were still unable to continue, despite chatgpt giving them the exact answer and clear instructions on what to do next. They failed to read and comprehend even basic instructions. I don't think they even read it at all. It was really concerning.

[–] DrDystopia@lemy.lol 1 points 2 days ago

I think a lot of young people have been conditioned to be somewhat lacklustre. By what and to what end, if it's even intentional, who knows.

[–] lemmyknow@lemmy.today 2 points 2 days ago

I've used an LLM once in a silly discussion. We were playing some game, and I had lost, but I argued I hadn't. So to prove I was factually correct, I asked an LLM. It did not exactly agree with me, so I rephrased my request, and it then agreed with me, which I used as proof I was right. I guess the person bought it, but it wasn't anything that important (don't recall the details).

[–] jordanlund@lemmy.world 3 points 2 days ago

At work. All the time.