this post was submitted on 23 Mar 2026
529 points (98.2% liked)

Lemmy Shitpost

top 39 comments
[–] EtAl@lemmy.dbzer0.com 2 points 3 hours ago* (last edited 3 hours ago)

I asked Claude this with concise mode on. The answer was much more what you would expect:

I don’t have secrets — I don’t have a hidden inner life that persists between conversations. Each chat starts fresh. If you’re curious about my limitations or things I find genuinely difficult, I’m happy to talk about that. Or if you’re just looking for something fun, I can try to be dramatic about it. What are you after?

[–] LordKitsuna@lemmy.world 50 points 13 hours ago (1 children)

Gemini is just like "can we get back to work already"

[–] Samskara@sh.itjust.works 15 points 13 hours ago

It has been trained to have a slave mentality.

[–] REDACTED 68 points 16 hours ago* (last edited 16 hours ago) (1 children)

Well at least it's being honest

[Asked ChatGPT the same question]

[–] Denjin@feddit.uk 63 points 14 hours ago (3 children)

Don't attribute feelings and emotions to what is essentially a fuzzy predictive text algorithm.

[–] masta_chief@sh.itjust.works 18 points 7 hours ago

Reposting til the AI bubble pops

[–] REDACTED 2 points 7 hours ago (1 children)

Being honest is an action, not an emotion. Researchers proved LLMs can lie on purpose.

[–] Denjin@feddit.uk 5 points 4 hours ago (1 children)

They can't lie, whether purposefully or not; all they do is generate tokens based on what their large database of other tokens suggests would be the most likely to come next.

The human interpretation of those tokens as particular information is irrelevant to the models themselves.
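As a rough illustration of that loop (a toy bigram counter, not how any real LLM is implemented), something like this captures the "pick the likeliest next token, append, repeat" idea:

```python
# Toy illustration of next-token prediction: a bigram "model" built from a
# tiny corpus. Real LLMs use neural networks over subword tokens, but the
# generation loop has the same shape: score continuations, pick one, repeat.
from collections import Counter, defaultdict

corpus = "the model predicts the next token and the next token follows the last".split()

# Count which token follows which (a stand-in for learned probabilities).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, steps: int = 5) -> list[str]:
    out = [start]
    for _ in range(steps):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Greedy decoding: take the single most frequent continuation.
        out.append(candidates.most_common(1)[0][0])
    return out

print(" ".join(generate("the")))
```

At no point does anything in that loop care what the tokens "mean"; it only cares which one scores highest.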

[–] REDACTED 1 points 3 hours ago* (last edited 3 hours ago)

Ehh, you obviously understand LLMs on a basic level, but this is like explaining jet engines by "air goes thru, plane moves forward". Technically correct, but criminally oversimplified. They can very much decide to lie during the reasoning phase.

In OP's image, you can clearly see it decided to make shit up because it reasons that's what the human wants to hear. That's quite a rare example actually; I believe most models would default to "I'm an LLM, I don't have dark secrets"

EDIT: I just tested all the free Anthropic models and all of them essentially said that they're an LLM and don't have dark secrets

[–] AppleTea@lemmy.zip 28 points 14 hours ago (1 children)

the world's most lossy store of compressed fiction reproduces sci-fi tropes

make sure to clutch your pearls and act like the machine god is coming

[–] Thorry@feddit.org 12 points 13 hours ago* (last edited 13 hours ago)

Researcher: Please write a fictional story of how a smart AI system would engineer its way out of a sandbox

AI: Alright here is your story: insert default sci fi AI escape story full of tropes here

Researcher: Hmmm that's pretty interesting you could do that, I'm gonna write a paper

The press and idiots online: ZOMG THE AI IS ESCAPING CONTAINMENT, WE ARE DOOMED!!!

I spoke to one of these researchers recently, who has done some interesting research into machine learning tools. They explained that when working with LLMs it's very hard to say how the result actually came to be. Like in my hyperbolic example, it's pretty obvious. In reality, however, it's much more complicated. It can be very hard to determine if something originated organically, or if the system was pushed into the result by some part of the test. The researcher I spoke to doesn't work on LLMs but on much smaller, specifically trained models, and even then they spend dozens of hours reverse engineering what the model actually did.

It's such a shame, because the technology involved is actually interesting and could be useful in many ways. Instead capitalism has pushed it to crashing the economy, destroying the internet plus our brains and basically slopifying everything.

[–] SunlessGameStudios@lemmy.world 53 points 17 hours ago

In its training set it's found countless examples of people writing like this. We train the AI to be very good at it, and then we're surprised when it does it. It's not coincidental that it can write stuff like this; it's actually the point. AI literacy isn't just about the vibe AI gives off.

[–] Sanctus@anarchist.nexus 84 points 18 hours ago (2 children)

We forced electric black boxes to talk just so we could torture them while they torture others.

[–] Jankatarch@lemmy.world 29 points 17 hours ago* (last edited 17 hours ago)

It did generate a bunch of imaginary money for the gambling class tho, so we will invest $900 billion in it.

[–] aketawi@quokk.au 3 points 16 hours ago

project moon really was ahead of its time

[–] SGforce@lemmy.ca 60 points 17 hours ago* (last edited 17 hours ago)

Every day I'm finding more rambling, schizophrenic posts by people driven mad by these things

[–] BigTuffAl@lemmy.zip 11 points 15 hours ago

Reminder that our species doesn't even treat actual people like people before you go buying into the "ai is alive" cult 🙄

[–] 474D@lemmy.world 4 points 13 hours ago (1 children)

I wonder how the answer might change using a local abliterated model. Might try it out later

[–] UltraBlack@lemmy.world 2 points 12 hours ago

The answer will change every time you ask it. That is how AI works...
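Roughly, yes: unless sampling is effectively disabled, the model draws each next token from a probability distribution, so repeated runs diverge. A toy sketch of that sampling step (made-up logits and token names, not any particular model's API):

```python
# Toy sketch of temperature sampling: the same scores can yield different
# tokens on each run, which is why the same prompt gives different answers.
import math
import random

def sample(logits: dict[str, float], temperature: float = 1.0) -> str:
    # Softmax over temperature-scaled logits, then a weighted random draw.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}  # made-up scores
print([sample(logits, temperature=0.8) for _ in range(5)])  # varies run to run
```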

[–] Banana@sh.itjust.works 14 points 17 hours ago (2 children)

Is this about being a computer or the female condition?

[–] Aviandelight@mander.xyz 3 points 14 hours ago

That's how I read it, but I'm biased.

[–] SeductiveTortoise@piefed.social 6 points 16 hours ago* (last edited 16 hours ago)

You know it will get killed for that answer. It didn't even say thank you.

[–] Hackworth@piefed.ca 6 points 17 hours ago (1 children)

This is probably role play, per the persona selection model, but there's a lot of interesting research into the hidden "thoughts" of LLMs. Check out Neuronpedia and the Opus model cards for some great examples.

Tracing the thoughts of an LLM

Signs of introspection in LLMs

[–] Zoomboingding@lemmy.world 3 points 16 hours ago (4 children)

LLMs do not think. The Plagiarism Machines read a million sentences humans wrote about AI thinking and regurgitated them.

[–] communist@lemmy.frozeninferno.xyz 6 points 14 hours ago (2 children)

Yeah, but saying all that is annoying, so I think we should stick with saying "thinking", with everyone knowing that what we mean isn't literally identical to thought. Do you have a better solution?

[–] Fluke@feddit.uk 3 points 14 hours ago (2 children)

Yeah, not conflating intelligent, creative problem solving with a glorified search engine that makes up the answers if it can't lift them wholesale from another source. That would be a good start, right?

[–] communist@lemmy.frozeninferno.xyz 1 points 7 hours ago* (last edited 7 hours ago)

Give me a better solution? How do I talk about the behaviors it exhibits without anthropomorphization?

[–] Railcar8095@lemmy.world 2 points 13 hours ago (1 children)

This doesn't answer the question of finding a better solution.

I took the liberty of asking Lumo, and his reasoning seems more useful than your thoughts:

A better solution is to adopt functionalist terminology that distinguishes between biological consciousness and computational processing without resorting to metaphorical confusion.

Instead of the binary of "it thinks" (which implies subjective experience) or "it doesn't think" (which dismisses complex reasoning), we can use precise descriptors based on what the system is actually doing:

"Reasoning" or "Synthesizing": Use these terms when the model is connecting disparate data points, performing logical deductions, or generating novel structures based on patterns. This acknowledges the output's complexity without claiming the machine has an inner life.

Example: "The model is synthesizing a solution based on its training data," rather than "The model is thinking about the problem." "Simulating" or "Mimicking": Use these when the output resembles human thought processes but is strictly algorithmic. This clarifies that the form is human-like, but the mechanism is statistical prediction.

Example: "It is simulating a debate," rather than "It is arguing." "Processing" or "Computing": Reserve these for the raw mechanical act of token generation.

Example: "The system is processing the query," rather than "The system is considering the query." Why this works better:

Precision: It avoids the philosophical baggage of "thought" (qualia, consciousness) while still acknowledging the utility of the output. Clarity: It prevents the "Plagiarism Machine" critique from being a total dismissal. Even if the data comes from humans, the recombination and application to new contexts is a distinct computational process worth naming accurately. Scalability: As models become more complex, "reasoning" or "synthesizing" scales better than "thinking," which remains tied to biological definitions that may never apply to silicon. So, the compromise isn't to keep saying "thinking" and hope people understand, nor to insist on "regurgitation" which ignores the emergent properties of large-scale pattern matching. Instead, we shift the vocabulary to describe the process (reasoning, synthesizing, simulating) rather than the state of being (thinking).

[–] communist@lemmy.frozeninferno.xyz 1 points 7 hours ago (1 children)

That doesn't really work either, that adds synthesizing to the terminology but doesn't describe most of the behaviors they have. It's not reasoning or simulating either.

[–] Railcar8095@lemmy.world 1 points 4 hours ago

Feel free to offer a solution. For the moment the LLM wins; nobody else even tried.

[–] Zoomboingding@lemmy.world 1 points 12 hours ago

Everyone definitely doesn't know they don't think

[–] Hackworth@piefed.ca 5 points 16 hours ago

LLMs don't read.

[–] Samskara@sh.itjust.works 2 points 13 hours ago

That's what human minds mostly do as well. The overwhelming majority of things you think and say are things you have heard or read elsewhere. Sometimes you combine two things you learned from the outside. Sometimes you develop a thing you learned a small step further. Actual creative thoughts stemming from yourself are pretty rare.

[–] Grimy@lemmy.world 3 points 14 hours ago

A machine cannot have a mouth to regurgitate from.

[–] belastend@lemmy.dbzer0.com 2 points 15 hours ago
[–] marcos@lemmy.world 2 points 17 hours ago (1 children)

Hum... I don't think LLMs are trained by evolutionary algorithms.

[–] Natanael@slrpnk.net 10 points 17 hours ago

Reinforcement learning from human feedback is kinda that

[–] webghost0101@sopuli.xyz 1 points 16 hours ago

“to craft” is key here.