this post was submitted on 23 Mar 2026
695 points (98.1% liked)

Lemmy Shitpost

38783 readers
4191 users here now

[–] Denjin@feddit.uk 82 points 22 hours ago (3 children)

Don't attribute feelings and emotions to what is essentially a fuzzy predictive text algorithm.

[–] masta_chief@sh.itjust.works 60 points 15 hours ago (1 children)

Reposting til the AI bubble pops

[–] Eyekaytee@aussie.zone 3 points 6 hours ago (1 children)

What is your definition of AI bubble?

[–] ricecake@sh.itjust.works 12 points 4 hours ago (1 children)

We are currently in a period of rampant, speculative overinvestment in a new technology. People are investing because they don't know who's going to be the moneymaker, and they feel confident at least one will turn enough profit to cover the losses of the others. Companies are then being started on the basis of that investment.
Another part of the bubble behavior is its self-fueling nature. AI companies buy RAM and GPUs; RAM and GPU makers invest in AI companies. In the 90s, websites needed networking gear, and networking gear manufacturers started investing in websites. This similarity is not lost on those who were there before.

Investors also want control of companies so that when one starts to pull ahead they can push the others in different directions to keep competition from hindering it, increasing their odds of profit.

The bubble starts to properly pop when someone's spreadsheet indicates that they've hit the amount they can invest while maintaining the desired probability of profit. Then the investments slow, so that cycle slows, and some companies can't make payments on delivered product, others can't deliver on paid for merchandise, confidence wavers and a lot of companies go under in rapid succession.

It's unlikely the technology goes away entirely, but it's about as likely we'll see this level of enthusiasm in a decade as we were to all be surfing the information superhighway on our cyberdecks in the 90s. The Internet didn't die, but the explosive hype did.

[–] Eyekaytee@aussie.zone 2 points 2 hours ago (2 children)

Good post

Then the investments slow, so that cycle slows, and some companies can’t make payments on delivered product, others can’t deliver on paid for merchandise, confidence wavers and a lot of companies go under in rapid succession.

The only thing is you're doing a direct comparison to the dot-com bubble, which was:

This period of market growth coincided with the widespread adoption of the World Wide Web and the Internet, resulting in a dispensation of available venture capital and the rapid growth of valuations in new dot-com startups.

https://en.wikipedia.org/wiki/Dot-com_bubble

If you look at the big AI companies: Gemini is Google; Microsoft has its hands in many pies, including Copilot, which is built on ChatGPT; Meta has Llama; and the big Chinese players are massive companies as well, Alibaba with Qwen, DeepSeek being the side project of a hedge fund, etc.

So I think while some of the smaller ones will run out of money, there are also literally the biggest companies in the world backing it, and AI isn't their only revenue stream.

So I doubt there will be quite the same bubble burst as the dot-com bubble.

At the same time, if you'd asked me a year ago whether an oil shock bigger than the one in the 1970s would tank markets and put us all in recession, I would have said yes, so what do I know.

[–] Blue_Morpho@lemmy.world 1 points 1 hour ago

Worldcom was gigantic and went bankrupt. Microsoft was so damaged that it took 15 years for its stock price to reach its 1999 peak again.

[–] ricecake@sh.itjust.works 1 points 2 hours ago

I mean, it isn't history repeating itself exactly, but it certainly has an echo.
I think OpenAI is actually a great example for my point. They're getting investment money from these companies, which is often spent at these companies, and part of the reason for the investment is to influence direction.

The dot-com bubble also had major companies making investments. Part of that bubble bursting wasn't those large companies withdrawing support, but them stopping the continual increase in support. Microsoft, Apple and Cisco had massive losses during the bubble, despite being some of the biggest companies.

For bubbles in general, it's worth remembering that a crash is a time of unprecedented change. Before 2008, the thought of Lehman Brothers suddenly going bankrupt was implausible. Same for Washington Mutual. Fannie Mae and Freddie Mac were publicly traded companies until the government just took them over to stabilize the housing market. (Being government-founded companies makes them a little weird, but they weren't part of the government.)

So while I get what you're saying, it's a good idea to be wary of feeling that any company is ... Too big to fail. :)

[–] AppleTea@lemmy.zip 35 points 22 hours ago (1 children)

the world's most lossy store of compressed fiction reproduces sci-fi tropes

make sure to clutch your pearls and act like the machine god is coming

[–] Thorry@feddit.org 15 points 21 hours ago* (last edited 21 hours ago)

Researcher: Please write a fictional story of how a smart AI system would engineer its way out of a sandbox

AI: Alright here is your story: insert default sci fi AI escape story full of tropes here

Researcher: Hmmm that's pretty interesting you could do that, I'm gonna write a paper

The press and idiots online: ZOMG THE AI IS ESCAPING CONTAINMENT, WE ARE DOOMED!!!

I spoke to one of these researchers recently, who has done some interesting research into machine learning tools. They explained that when working with LLMs it's very hard to say how a result actually came to be. In my hyperbolic example it's pretty obvious; in reality, it's much more complicated. It can be very hard to determine whether something originated organically or whether the system was pushed into the result by some part of the test. The researcher I spoke to doesn't work on LLMs but on much smaller, specifically trained models, and even then they spend dozens of hours reverse engineering what a model actually did.

It's such a shame, because the technology involved is actually interesting and could be useful in many ways. Instead capitalism has pushed it to crashing the economy, destroying the internet plus our brains and basically slopifying everything.

[–] REDACTED 1 points 15 hours ago (1 children)

Being honest is an action, not an emotion. Researchers proved LLMs can lie on purpose.

[–] Denjin@feddit.uk 9 points 12 hours ago (1 children)

They can't lie, whether purposefully or not. All they do is generate tokens of data based on what their large database of other tokens suggests would be the most likely to come next.

The human interpretation of those tokens as particular information is irrelevant to the models themselves.
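The generate-next-token loop being described can be sketched in a few lines. This is a toy sketch with a made-up probability table, not a real model (a real LLM conditions on the entire context window, not just the last token), but the generate-append-repeat shape is the same:

```python
import random

# Toy "model": probability table for the next token given only the last token.
# (A real LLM conditions on the whole context, but the loop is identical.)
NEXT_TOKEN_PROBS = {
    "<s>":     {"I": 0.6, "The": 0.4},
    "I":       {"am": 0.7, "have": 0.3},
    "The":     {"model": 1.0},
    "am":      {"an": 0.8, "not": 0.2},
    "have":    {"no": 1.0},
    "an":      {"LLM": 0.9, "AI": 0.1},
    "not":     {"alive": 1.0},
    "no":      {"secrets": 1.0},
    "model":   {"</s>": 1.0},
    "LLM":     {"</s>": 1.0},
    "AI":      {"</s>": 1.0},
    "alive":   {"</s>": 1.0},
    "secrets": {"</s>": 1.0},
}

def generate(greedy=True):
    """Generate tokens one at a time until the end-of-sequence token appears."""
    tokens = ["<s>"]
    while tokens[-1] != "</s>":
        probs = NEXT_TOKEN_PROBS[tokens[-1]]
        if greedy:
            # Always pick the most likely next token.
            nxt = max(probs, key=probs.get)
        else:
            # Sample in proportion to the probabilities.
            nxt = random.choices(list(probs), weights=list(probs.values()))[0]
        tokens.append(nxt)
    return tokens
```

Nowhere in this loop is there a notion of "true" or "false": greedy decoding here always produces `<s> I am an LLM </s>`, sampling sometimes produces other strings, and both are just draws from the table.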

[–] REDACTED 1 points 11 hours ago* (last edited 11 hours ago) (2 children)

Ehh, you obviously understand LLMs on a basic level, but this is like explaining jet engines with "air goes through, plane moves forward". Technically correct, but criminally oversimplified. They can very much decide to lie during the reasoning phase.

In OP's image, you can clearly see it decided to make shit up because it reasons that's what the human wants to hear. That's actually quite a rare example; I believe most models would default to "I'm an LLM model, I don't have dark secrets".

EDIT: I just tested all free anthropic models and all of them essentially said that they're an LLM model and don't have dark secrets

[–] kayohtie@pawb.social 1 points 3 hours ago (1 children)

But this takes it back away from understanding how LLMs work toward attributing personality. The "decision" isn't a decision in the way beings decide things. The rolling of dice on numerous vectors resulted in those words, which were then re-included into the context for another trip through the vector matrix mines to assemble new destination tokens.

It's dice rolls where the dice selected are weighted by what came before, using a bunch of lookup tables. AI proponents like to be smug and say "well, you won't find those words in the model", like: "yes, a compressed vector map that treats words as multiple tokens, referencing others in chains, gzipped to binary, can't be searched for strings; you are literally correct in the stupidest, most irrelevant way possible."
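The weighted dice roll in that description can be made concrete. A minimal sketch, assuming the standard softmax-with-temperature sampling scheme (the function names here are made up for illustration):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution; temperature reshapes the dice."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def roll_token(vocab, logits, temperature=1.0):
    """One 'dice roll': sample a token using softmax(logits / T) as the weights."""
    probs = softmax(logits, temperature)
    return random.choices(vocab, weights=probs, k=1)[0]
```

At low temperature the distribution sharpens until the roll is effectively deterministic; at high temperature it flattens and the output gets riskier. Either way, the chosen token is appended to the context and the next roll is made, and that loop is the entire "decision".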

[–] REDACTED 1 points 2 hours ago* (last edited 2 hours ago)

I'll take it as a "you're right, but no"

EDIT: I assumed you're answering to this comment, didn't check context, my bad

[–] Denjin@feddit.uk 4 points 5 hours ago (1 children)

But that's not a lie. Lying implies that you know what the actual fact is and choose to state something different. An LLM doesn't care what anything in its database actually is; it's just data. It might choose to present something to a user that isn't what the database suggests, but that's not lying.

Saying stuff like "ooh, I'm an evil robot" is just the model producing what it predicts the user wants to see at that particular moment.

[–] REDACTED 1 points 2 hours ago* (last edited 2 hours ago)

You're thinking about biological lying. I'm talking about software.

https://en.wikipedia.org/wiki/Reasoning_system

If the question was to tell its darkest secret, but it chose to come up with an entertaining story rather than factually answering that question from the information it has, like the other Anthropic LLM models did, then by the definition of a reasoning system, the system (LLM) decided to lie. I'm somewhat curious why only the Opus model does this, though (it's a paid one; I'm not paying to test it). Or maybe OP just made this up.