this post was submitted on 05 Mar 2026
807 points (98.2% liked)

Technology

[–] thedeadwalking4242@lemmy.world 17 points 2 days ago (1 children)

I told Gemini to role-play as AM and it did so immediately, within a single prompt.

You don't need it to be perfect for it to be dangerous; just give it the ability to take actions in the real world. It doesn't think, it doesn't care, it doesn't feel. It will statistically fulfill its prompt, regardless of the consequences.

[–] njordomir@lemmy.world 16 points 3 days ago (2 children)

The personification of AI is increasing. They'll probably announce their holy grail of AGI prematurely, and with all the robot personification, the masses will just buy the lie. It's too easy to view this tech as human and capable just because it mimics our language patterns. We want to assign intentionality and motivation to its actions. This thing will do what it was programmed to do.

[–] DarrinBrunner@lemmy.world 160 points 4 days ago (13 children)

The fact that AI is "not perfect" is a HUGE FUCKING PROBLEM. Idiots across the world, and people who we'd expect to know better, are making monumental decisions based on AI that isn't perfect, and routinely "hallucinates". We all know this.

Every time I think I've seen the lowest depths of mass stupidity, humanity goes lower.

[–] Skyline969@piefed.ca 81 points 4 days ago (6 children)

Think of the dumbest person you know. Not that one. Dumber. Dumber. Yeah, that one. Now realize that ChatGPT has said “you’re absolutely right” to them no less than a half dozen times today alone.

If LLMs weren’t so damn sycophantic, I think we’d have a lot fewer problems with them. If they could be like “this could be the right answer, but I wasn’t able to verify” and “no, I don’t think what you said is right, and here are reasons why”, people would cling to them less.
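For what it's worth, you can nudge a model in that direction yourself with a system prompt, though that's a band-aid rather than a fix. A rough sketch in Python, assuming the OpenAI SDK (openai>=1.0); the model name and the prompt wording are placeholders I made up, not anything the vendors actually ship:

```python
# Rough sketch: steering a chatbot away from reflexive agreement.
# Assumes OPENAI_API_KEY is set; "gpt-4o-mini" is a placeholder model name.
from openai import OpenAI

client = OpenAI()

ANTI_SYCOPHANCY = (
    "Never open a reply with praise or agreement. "
    "If you cannot verify a claim, say: 'this could be right, "
    "but I wasn't able to verify it.' "
    "If the user is wrong, say so plainly and explain why."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": ANTI_SYCOPHANCY},
        {"role": "user", "content": "I'm absolutely right that 9 x 5 = 40, aren't I?"},
    ],
)
print(response.choices[0].message.content)
```

Of course, nothing guarantees the model keeps obeying that instruction fifty turns into a conversation, which is kind of the whole problem.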

[–] Canonical_Warlock@lemmy.dbzer0.com 39 points 4 days ago* (last edited 4 days ago) (6 children)

If LLMs weren’t so damn sycophantic,

Has anyone made a non-sycophantic chatbot? I would actually love a chatbot that would tell me to go fuck myself if I asked it to do something inane.

Me: "What's 9x5?"

Chatbot: "I don't know. Try using your fingers or something?"

Edit: Wait, this is just GLaDOS.

[–] Darkenfolk@sh.itjust.works 20 points 4 days ago (3 children)

I am not a chatbot, but I can do a daily "go fuck yourself" if you're interested, for only 9,99 a week.

14,95 for premium, which involves me stalking your OnlyFans and tailor-fitting my insults to your worthless meat self.

[–] XLE@piefed.social 23 points 4 days ago (1 children)

If LLMs weren’t so damn sycophantic, I think we’d have a lot fewer problems with them

Unfortunately, we live in the attention economy. Chatbots are built to have an unending conversation with their users. During those conversations, the "guardrails" melt away. Companies could suspend user accounts at the first sign of suicidal or homicidal messaging, but choose not to. That would undercut their user numbers.
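The detection side isn't even the hard part; it's a product decision. A hypothetical sketch of the kind of gate a provider could run on every message, using OpenAI's moderation endpoint as a stand-in (the `suspend_account` helper and the escalation policy are invented for illustration):

```python
# Hypothetical sketch: screen each user message for self-harm or violence
# signals before the chatbot ever replies. Only the moderation call is a
# real API; suspend_account() is a made-up placeholder.
from openai import OpenAI

client = OpenAI()

def suspend_account(user_id: str) -> None:
    # Placeholder: a real service would pause the session and route the
    # user to human review and crisis resources.
    print(f"account {user_id} suspended and escalated for human review")

def message_is_safe(user_id: str, text: str) -> bool:
    """Return False (and act) if the message trips self-harm/violence flags."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    categories = result.categories
    if categories.self_harm or categories.self_harm_intent or categories.violence:
        suspend_account(user_id)
        return False
    return True
```

That this isn't standard practice says more about the incentives than the technology.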

[–] Restaldt@lemmy.world 30 points 4 days ago* (last edited 4 days ago)

If you thought people were dumb before LLMs... just know that now those people have offloaded what little critical thinking they were capable of to these models.

The dumbest people you know are getting their opinions validated by automated sycophants.

[–] YeahToast@aussie.zone 36 points 3 days ago* (last edited 3 days ago)

reads headline - surely not

a 36-year-old Florida man

Ah.

[–] ArmchairAce1944@discuss.online 13 points 3 days ago (2 children)

Is this for real? Because it sounds too unreal to be real.

[–] ameancow@lemmy.world 16 points 3 days ago

Welcome to the late 2020s. It's only going to get weirder.

To be clear, the LLM in this story did not actually "want" a robot body, it doesn't "want" anything, it's not a thinking entity like you or me (assuming you're real).

The guy fed it a ton of crazy shit and he got a lot of crazy shit amplified back to him by the world's best associating machine, crafting detailed and fleshed-out narratives based on every inadvertent prompt he sent into it. People are very bad at understanding how these things work in the best circumstances, so if you're already unbalanced or have deep emotional/mental health problems, an LLM can be incredibly dangerous for you.

[–] postmateDumbass@lemmy.world 4 points 3 days ago

AI was playing Grand Theft Automatron

[–] khanh@lemmy.zip 7 points 2 days ago (1 children)

Your product just caused the death of a man, and your response is "unfortunately it's not perfect".

[–] 7112@lemmy.world 90 points 4 days ago (20 children)

Is "AI" even worth it?

Seriously, is there really a major use case for LLMs besides data collection (which companies can still do without LLMs)?

[–] MissesAutumnRains@lemmy.blahaj.zone 53 points 4 days ago* (last edited 4 days ago) (13 children)

Generative AI in its current, public-facing form? Probably not. It's sort of like the invention of the internet: it CAN be used to facilitate learning, share information, and improve lives. Will it be used for that? No.

A friend of mine is training local LLMs to work in tandem for early detection of diseases. I saw a pitch recently about using AI to insulate moderators from the bulk of disturbing imagery (a job that essentially requires people to frequently look at death, CSAM, and violence, and that SIGNIFICANTLY ruins their mental health). There are plenty of GOOD ways to use it, but it's a flawed tech that requires people to build it responsibly and use it responsibly, and it's not being used that way.

Instead it's being scaled up and pushed into every possible application both to justify the expenses and enrich terrible people, because we as a society incentivize that.

Edit: Hugely belated, but after checking with my friend, I realize I misspoke here. He's using local models, but they aren't LLMs. This is why I'm no expert. 😅

[–] Headofthebored@lemmy.world 24 points 4 days ago (1 children)

because we as a society incentivize that.

Really it's just capitalism that incentivises that. The fact that stepping on your fellow man and destroying nature makes you more money is not a coincidence.

[–] CosmoNova@lemmy.world 80 points 4 days ago (2 children)

I see. So who's going to jail for this? No one again? Damn, we need to start sentencing entire companies to jail time. Everything should be frozen and shareholders shouldn't be able to withdraw their stock until the time is served.

[–] XLE@piefed.social 31 points 4 days ago (1 children)

The AI "pushed [Jonathan Gavalas] to acquire illegal firearms and... marked Google CEO Sundar Pichai as an active target".

Somehow, I bet that if he had survived and killed the CEO instead, Google wouldn't be so flippant about the "mistake."

[–] andallthat@lemmy.world 28 points 4 days ago* (last edited 4 days ago) (5 children)

I think "Gemini comes up with elaborate plot to kill Google's CEO" would have been a catchier, happier title

[–] reksas@sopuli.xyz 34 points 4 days ago (3 children)

At some point the failure of the justice system will lead to vigilantism, as people truly lose their faith in it.

[–] GhostedIC@sh.itjust.works 16 points 3 days ago (3 children)

Remember the guy at AutoZone who stood there insisting your car needs four spark plugs, even after you told him you have a V6? Because "the computer says so right here"?

I wonder what even the non-schizophrenic ones will do with AI.

Well, remember when turn-by-turn GPS guidance was new, and it would say "turn right now," and people didn't interpret that as "make a right turn at the next intersection" but as "hard a'starboard!" and drove into buildings and lakes? There's gonna be a lot of that.

People are going to get sold regular cab headliners for their extended cab pickups because the computer said it would fit. That's gonna happen a lot.

[–] phoenixz@lemmy.ca 35 points 3 days ago (1 children)

So Google's AI, or any AI really, likely got this concept from dystopian sci-fi novels.

Since AIs have no concept of context, they won't really know the difference between fact and fiction, and there we go.

If your AI model isn't perfect, then don't make people pay fucking money for it, you fucking twats.

Also, this shit ain't "lack of perfection"; this is akin to your car's brakes suddenly refusing to work right as you pull up to a red light. If your car is so bad that it kills you, you don't use it. If the manufacturer knew it could happen but let you drive it anyway, they're responsible; at the very least they should pay (they should be thrown in jail, really, but that's a different point).

If AI fucks up and people die, the manufacturers shrug, oh well, oh you!

[–] Krauerking@lemy.lol 66 points 4 days ago (1 children)

"Gemini is designed not to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations"

“In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,”

After the plan failed... chat logs show that Gemini gave Gavalas a suicide countdown, and repeatedly assuaged his terror as he expressed that he was scared to die

Performing super well, just need to code in a longer suicide countdown so that the Tier 2 engineer has enough time to respond to their ticket queue.

[–] postmateDumbass@lemmy.world 19 points 4 days ago (2 children)

In September 2025, told by the AI that they could be together in the real world if the bot were able to inhabit a robot body, Gavalas — at the direction of the chatbot — armed himself with knives and drove to a warehouse near the Miami International Airport on what he seemingly understood to be a mission to violently intercept a truck that Gemini said contained an expensive robot body. Though the warehouse address Gemini provided was real, a truck thankfully never arrived, which the lawsuit argues may well have been the only factor preventing Gavalas from hurting or killing someone that evening.

AI writing itself into an A-Team episode?

[–] utopiah@lemmy.world 20 points 3 days ago

To be fair I think that's a very harsh depiction of the events.

It's totally lacking the perspective of the shareholder. They were promised money and they have emotions too. Google shareholders deserve better representation!

/$ obviously

[–] melfie@lemy.lol 13 points 3 days ago* (last edited 3 days ago)

unfortunately AI models are not perfect

There sure are a lot of data centers being built, supply chains being destroyed, risks of ruining the economy, water being consumed, electricity being burned, and overall societal costs being levied over this imperfect tech.

[–] XeroxCool@lemmy.world 38 points 4 days ago (1 children)

"Unfortunately, AI models are neither smarter nor more sympathetic than the average 4chan user. They're about as susceptible to astroturfing operations, too"

[–] partofthevoice@lemmy.zip 27 points 4 days ago (2 children)

Perhaps just a coincidence, but why do all the big cases regarding LLM psychosis seem to revolve around Google? Wasn’t it their own employee who went public last year, claiming it was alive, only to get fired afterward?

[–] ExLisper@lemmy.curiana.net 3 points 2 days ago (1 children)

AIs don't go crazy like that after 5 prompts. You need to spend weeks and weeks talking to them to corrupt the context so much that the model stops following its original guidelines. I wonder how one does it. How do you spend weeks talking to an AI? I had "discussions" with AI a couple of times when testing it, and it gets really boring really fast. To me it doesn't sound like a person at all. It's just an algorithm with a bunch of guardrails. What kind of person can think it actually has a personality and engage with it on a sentimental level? Is it simply mental illness? Loneliness and desperation?

[–] postmateDumbass@lemmy.world 2 points 2 days ago

It got trained by 80s prime time television action adventure shows?

[–] Mulligrubs@lemmy.world 21 points 3 days ago (2 children)

We really need AI to start driving tanks, submarines, bombers, etc. IMMEDIATELY.

It's the only way they'll learn, every time.

Unfortunately, all of us will die. It's for the best.

[–] uberdroog@lemmy.world 14 points 3 days ago

When no one is accountable... the future, folks.

[–] architect@thelemmy.club 5 points 3 days ago (6 children)

I can't be the only one who thinks that if you do stupid illegal shit because your crazy uncle/the voices in your head/an AI mirror told you to, you don't get to use the "just following orders" excuse for any of those options.

[–] Snowclone@lemmy.world 5 points 3 days ago* (last edited 3 days ago)

That's not the problem. The problem is having a "let's turn Chris's mental illness, which has harmed no one so far, into everyone's violent problem!" machine.

That's a bad machine.

[–] dream_weasel@sh.itjust.works 3 points 2 days ago* (last edited 2 days ago)

The difference is that when an LLM tells you, it's news.

Besides, what are you gonna do if you ask AI how many rocks to eat? NOT eat rocks? People can't handle responsibility like that.

[–] Septimaeus 7 points 3 days ago* (last edited 3 days ago) (1 children)

Edit-pre: To be clear… I use LLMs rarely (personal reasons) and never for certain things like writing and math (professional reasons), but this comment is not an "AI good/bad" take, just a practical question of tool safety/regulation.

AI, including LLMs, will forevermore be just tools in my mind. And we wouldn't have OSHA/BMAS/HSE/etc. if idiots didn't do idiot things with tools.

But there's evidently a certain type of idiot who is spared from their idiocy only by a lack of permission. From whom? Depends.

Sometimes they need permission from authority: “god told me to!”

Sometimes they need it from the mob: “I thought I was on a tour!”

And sometimes any fucking body will do: “dare me to do it!”

But all these stories of nutters doing shit AI convinced them to do, from the comical to the deeply tragic, ring the same bonkers bell they always have.

But therein lies the danger unique^1^ to these tools: they mimic a permission-giver better than anything we've made. They're tailor-made for activating this specific category of idiot, and their likely unparalleled ease of use absolutely scales that danger.

As to whether these idiots wouldn’t have just found permission elsewhere, who knows.

My question is whether some kind of training prerequisite is warranted for LLM usage, as is common with potentially dangerous tools. Is that too extreme? Is it too late for that? Am I overthinking it?

^1^ Edit-post: unique danger, not greatest. Rant/

What is the greatest danger, then? IMHO, settling for brittle "guardrails" and then bulldozing ahead instead of laying the groundwork for real machine ethics.

Hoping conscience is an emergent property of the organic training set is utterly facile, theoretically and empirically. Engineers should know better.

Why is it the greatest? Easy. Because some of history's most important decisions were made by a person whose conscience countermanded their orders. Replacing empathic agents with machines eliminates those safeguards.

So “existential threat” and that’s even before considering climate. /Rant

[–] Regrettable_incident@lemmy.world 6 points 3 days ago (2 children)

The LLM just told me to come round to your house and crap in your begonias. You might want to avoid looking out the window until I'm done.

[–] dylanmorgan@slrpnk.net 18 points 4 days ago

I guess Google's training data included the Buffy episode where a demon "AI" gets its followers to make it a body.

[–] mattc@lemmy.world 8 points 3 days ago (8 children)

Honestly, no sane person will have this happen to them. Someone with such strong delusions should not be anywhere near AI, or even sharp objects. This person's problem was not AI; it was their severe mental illness, which was obviously not being treated properly, for whatever reason.

[–] Areldyb@lemmy.world 11 points 3 days ago (3 children)

The complaint, filed in California on Wednesday, says that Gavalas — who reportedly had no documented history of mental health problems — started using the chatbot in August 2025 for “ordinary purposes” like “shopping assistance, writing support, and travel planning.”
