this post was submitted on 23 Mar 2026
65 points (73.4% liked)

Technology

83027 readers
4050 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 50 comments
sorted by: hot top controversial new old
[–] baggachipz@sh.itjust.works 9 points 6 hours ago
[–] rizzothesmall@sh.itjust.works 12 points 10 hours ago

Literally the story above this in my feed is OpenAI shutting down expensive services 😂

You goofy goobers

[–] Modern_medicine_isnt@lemmy.world 9 points 12 hours ago

That man is a verbal slut. He will say anything.

[–] SoloCritical@lemmy.world 16 points 15 hours ago

No.. you haven’t.

[–] NotMyOldRedditName@lemmy.world 4 points 13 hours ago

How many R's are in strawberry?

[–] CeeBee_Eh@lemmy.world 15 points 18 hours ago

This guy has completely lost the plot. I don't think it's possible to be even more disconnected from reality.

[–] ThunderComplex@lemmy.today 14 points 19 hours ago

>You think you've achieved AGI
>I know you haven't

We are not the same

[–] Formfiller@lemmy.world 3 points 14 hours ago
[–] IchNichtenLichten@lemmy.wtf 24 points 23 hours ago (1 children)

If I were an NVDA investor, I'd be worried. This clown is doing nothing but gaslighting and lying these days.

[–] cheat700000007@lemmy.world 4 points 22 hours ago

But you're wrong, you're all wrong!

[–] kewjo@lemmy.world 14 points 21 hours ago (1 children)

if agi then why still jobs?

[–] VindictiveJudge@lemmy.world 12 points 16 hours ago (1 children)

Fun fact: if true AGI were a thing, those AI programs would be people and not paying them for their work would be slavery.

[–] CheeseNoodle@lemmy.world 4 points 4 hours ago

This is honestly one of the scarier parts of the rhetoric: they're basically implying they would happily enslave a sentient being.

[–] duncan_bayne@lemmy.world 2 points 14 hours ago

I'll believe him when he tears off his skin suit.

[–] andallthat@lemmy.world 12 points 22 hours ago

"my chatbot told me so!"

[–] entropiclyclaude@lemmy.wtf 16 points 1 day ago (2 children)

These fuckers will claim whatever nonsense to keep themselves relevant enough to take on more debt before they collapse.

[–] awake@lemmy.wtf 1 points 19 hours ago

Looking at their history, they were always able to create markets for their GPUs, and AI has obviously been incredible for them. There will be a next hot thing after AI and they'll try to have that, too. The alternatives to CUDA are not there yet; ROCm is still lacking and fiddly. I see a lot of things happening, but NVIDIA collapsing, for whatever reason, is not part of that.

[–] fierysparrow89@lemmy.world 1 points 1 day ago

I agree; they're starting to sound desperate to keep their current momentum going. I think the bubble will burst soon. Things look solid until they're not.

[–] MonkderVierte@lemmy.zip 18 points 1 day ago* (last edited 1 day ago) (1 children)

The Turing thing again: how good is a system at mimicking a human? Like, lots of dog owners could swear the dog is smarter than a cat. But dogs are only better at reading their human.

I'll believe him if he lets the LLM do his job.

[–] wewbull@feddit.uk 12 points 1 day ago

Cats may be able to read their human just as well or better, but as they don't give a shit, there's no feedback to base anything on.

[–] Zozano@aussie.zone 64 points 1 day ago (3 children)

LLMs aren't AI, let alone AGI.

They're fucking prediction engines with extra functions.

[–] Onihikage@piefed.social 31 points 1 day ago

The best description I've ever heard of LLMs is "a blurry jpeg of the internet". From the perspective of data compression and retrieval, they're impressive... but they're still a blurry jpeg. The image doesn't change, you can only zoom in on different parts of it and apply extra filters, and there's nothing you can truly do about the compression artifacts (what we call "hallucinations"). It can't think, it can't learn, it just is, and that's all it will ever be.

[–] unnamed1@feddit.org 2 points 1 day ago (1 children)

So are we. Your definition of AI also seems off. It's a field of computer science dealing with seemingly cognitive algorithms: basically everything that is not rule-based programming. I have worked in AI production for over ten years. It is absolutely valid and necessary to hate AI, but not to deny technical functionality. Also, on the other answer to your comment: of course training a neural network is a form of learning, whether it is by reinforcement or by training data. There were many applications of ML for many years before LLMs; it makes no sense to deny that it exists.

[–] BigJohnnyHines@lemmy.ca 2 points 23 hours ago (1 children)

What’s your psychology background?

[–] unnamed1@feddit.org 2 points 19 hours ago

I get that you’re trolling but I don’t understand where you’re coming from. Why psychology?


Oh yes we have achieved AGI! But what we really need is Artificial General Super Intelligence! Just another trillion and it will be useful bro!

[–] Peruvian_Skies@sh.itjust.works 106 points 1 day ago

Sure you do. It's not at all a transparent attempt to prolong the bubble.

[–] Kolanaki@pawb.social 25 points 1 day ago

Average Gaslighting Idiot.

AKA "a CEO."

[–] Technus@lemmy.zip 75 points 1 day ago (10 children)

I only have a rather high level understanding of current AI models, but I don't see any way for the current generation of LLMs to actually be intelligent or conscious.

They're entirely stateless, once-through models: any activity in the model that could be remotely considered "thought" is completely lost the moment the model outputs a token. Then it starts over fresh for the next token with nothing but the previous inputs and outputs (the context window) to work with.

That's why it's so stupid to ask an LLM "what were you thinking", because even it doesn't know! All it's going to do is look at what it spat out last and hallucinate a reasonable-sounding answer.
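The once-through loop described above can be sketched in a few lines of toy Python (`model` is a hypothetical stand-in for a single forward pass that returns one token; no real library is assumed):

```python
def generate(model, prompt_tokens, max_new=32):
    """Toy autoregressive loop: the model is called fresh for every
    token, and the ONLY state carried forward is the growing context."""
    context = list(prompt_tokens)
    for _ in range(max_new):
        # All internal activations from this call are discarded
        # the moment next_token is produced.
        next_token = model(context)
        context.append(next_token)
    return context
```

Nothing outside `context` survives between iterations, which is exactly why "what were you thinking" has no answer: the "thinking" was thrown away with each call.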

[–] Modern_medicine_isnt@lemmy.world 1 points 11 hours ago (1 children)

I agree, but not because of lost state. As mentioned by others, state can be managed; you could also just do a feedback loop. These improve things but don't solve them. The issue is that it doesn't understand. You mention that it is just a word predictor, and that is the heart of it. It predicts based on odds, more or less, not on understanding. That said, it has room to improve. I think having lots and lots of highly specialized agents is probably the key: the more narrow the focus, the closer prediction comes to fact. Then throw in asking 5 versions of the agent the same question and tossing the outliers, and you should get something pretty useful. Not AGI, but useful. The issue is that with current technology, that is simply too expensive. So a breakthrough in the cost of current AI is needed first; then we can get more useful AI. AGI will be a significantly different technology.

[–] Technus@lemmy.zip 2 points 10 hours ago

The conversion of the output to tokens inherently loses a lot of the information extracted by the model and any intermediate state it has synthesized (what it "thinks" of the input).

Until the model is able to retain its own internal state and able to integrate new information into that state as it receives it, all it will ever be able to do is try to fill in the blanks.

[–] PushButton@lemmy.world 8 points 1 day ago

How can we take this idiot seriously? Slop DLSS, then telling us we are wrong about this (the buddy telling me what I prefer), then "we achieved AGI"...

How low can he fall?

[–] mrmaplebar@fedia.io 35 points 1 day ago (1 children)

I think you're a bullshitting con artist.

[–] inari@piefed.zip 9 points 1 day ago

Grifter gonna grift

[–] Almacca@aussie.zone 30 points 1 day ago* (last edited 1 day ago) (1 children)

Geez. You can almost smell the desperation on this guy.

[–] SaveTheTuaHawk@lemmy.ca 4 points 1 day ago

Well, he wears the same leather jacket 24/7 so he can't smell good.

[–] RedFrank24@piefed.social 46 points 1 day ago (2 children)

So why do we need Jensen Huang?

[–] wewbull@feddit.uk 6 points 1 day ago

Why do we need any of them? They've completed the job. All future plans cancelled.

[–] MrVilliam@sh.itjust.works 30 points 1 day ago (2 children)

Exactly. CEO is maybe the easiest job for an AI to take over, so an AGI is possibly the most perfect candidate for that role.

Put up or shut up, tech bro CEOs. Replace yourself if it's so fucking amazing.

[–] meme_historian@lemmy.dbzer0.com 39 points 1 day ago* (last edited 1 day ago)

Fridman, the podcast’s host, defines AGI as an AI system that’s able to “essentially do your job,” as in start, grow, and run a successful tech company worth more than $1 billion. He then asks Huang when he believes AGI will be real — asking if it’s, say, five, 10, 15, or 20 years away — and Huang responds, “I think it’s now. I think we’ve achieved AGI.”

So we've achieved AGI in the sense that it could replace a nonsensical fart-sniffing clown, hyping a horde of morons into valuating a company at orders of magnitude its actual worth?

[–] GottaHaveFaith@fedia.io 17 points 1 day ago

I just dropped an AGI down the toilet AMA

[–] Dindonmasker@sh.itjust.works 15 points 1 day ago

Guys i think i just found AGI in my gramp's old stuff.

[–] acosmichippo@lemmy.world 14 points 1 day ago

fart sniffer
