this post was submitted on 19 Nov 2025
124 points (97.0% liked)

Technology

top 38 comments
[–] tio_bira@lemmy.world 5 points 3 hours ago

Mf looks like a villain from classic Who

[–] MonkderVierte@lemmy.zip 4 points 5 hours ago* (last edited 2 hours ago)

I don't want him as a boss even if I were fully into AI.

[–] avidamoeba@lemmy.ca 53 points 9 hours ago* (last edited 9 hours ago) (3 children)

Also he thinks LLMs are a dead end for getting smarter AI while Zuck is doubling down on them.

[–] kromem@lemmy.world 1 points 1 hour ago

He's been wrong about it so far and really derailed Meta's efforts.

This is almost certainly a "you can resign or we are going to fire you" kind of situation. There's no way, given the setbacks and how badly he's been wrong on transformers over the past two years, that he's not finally being pushed out.

[–] UnderpantsWeevil@lemmy.world 22 points 7 hours ago (1 children)

Well, he's got a bowtie and Zuck wears an oversized t-shirt with Bugs Bunny dressed as a 90s rapper.

They certainly can't both be wrong, can they?

[–] XLE@piefed.social 9 points 5 hours ago (1 children)

They could both be right... From a certain point of view.

Within FAIR, LeCun has instead focused on developing world models that can truly plan and reason. Over the past year, though, Meta’s AI research groups have seen growing tension and mass layoffs as Zuckerberg has shifted the company’s AI strategy away from long-term research and toward the rapid deployment of commercial products.

LeCun says current AI models are a dead end for progress. I think he's correct.

Zuckerberg appears to believe long term development of alternative models will be a bigger money drain than pushing current ones. I think he's correct too.

It looks like two guys arguing about which dead end to pursue.

[–] UnderpantsWeevil@lemmy.world 6 points 5 hours ago* (last edited 5 hours ago)

LeCun says current AI models are a dead end for progress.

Sure. That's easily proven, as the Pacific Rim tech companies are all running laps around the American models in terms of efficiency and output.

It looks like two guys arguing about which dead end to pursue

They're both Snipe Hunting for the mythological AGI, because they're each invested in the idea of a Singularity solving all their problems.

LLMs have a set of niche useful applications, but these dudes are chucking that advancement aside in pursuit of Digital God.

With Roko's Basilisk bumping around in their heads, I can't help but detect a certain religious fervor, either. We really might have folks who believe they'll be tortured for eternity if they don't build AI Hellraiser first.

[–] tomiant@piefed.social -5 points 8 hours ago* (last edited 7 hours ago) (1 children)

Getting Smarter AI < Making More Money

Is there more money in smarter AI or in manipulating people's voting patterns with the tools you've got?

I saw Suck at Trump's inauguration; I didn't see this Chinese feller there.

[–] nymnympseudonym@piefed.social 23 points 7 hours ago (1 children)

this Chinese feller

He's French, actually.

This is one of the three people who basically invented deep learning. One of the others is Geoffrey Hinton, who won the Nobel Prize in 2024.

No matter what you think of LeCun or his opinions... he's damn well worth listening to with attention and respect.

[–] tomiant@piefed.social 8 points 4 hours ago (1 children)

Well I've been a damn drunk fool, then, sir.

[–] krooklochurm@lemmy.ca 2 points 2 hours ago

Good for you for owning up to it like a grown up. Might I suggest rewarding yourself by shumming?

[–] violentfart@lemmy.world 4 points 6 hours ago

I mean come on, you can tell he’s been looking around for something else.

[–] tal@lemmy.today 10 points 8 hours ago* (last edited 8 hours ago) (5 children)

Meta’s chief AI scientist and Turing Award winner Yann LeCun plans to leave the company to launch his own startup focused on a different type of AI called “world models,” the Financial Times reported.

World models are hypothetical AI systems that some AI engineers expect to develop an internal “understanding” of the physical world by learning from video and spatial data rather than text alone.

Sounds reasonable.

That being said, I am willing to believe that an LLM could be part of an AGI. It might well be an efficient way to incorporate a lot of knowledge about the world. Wikipedia helps provide me with a lot of knowledge, for example, though I don't have a direct brain link to it. It's just that I don't expect an AGI to be an LLM.

EDIT: Also, IIRC from past reading, Meta has separate groups aimed at near-term commercial products (and I can very much believe that there might be plenty of room for LLMs there) and at advanced AI. It's not clear to me from the article whether he just wants more focus on advanced AI or whether he disagrees with an LLM focus in their advanced AI group.

I do think that if you're a company building a lot of parallel compute capacity now, you need to take advantage of existing or quite near-future stuff to make a return on it, even if it's not AGI. It doesn't make sense to build a lot of compute capacity and then spend fifteen years banging on research before you have something to utilize that capacity.

https://datacentremagazine.com/news/why-is-meta-investing-600bn-in-ai-data-centres

Meta reveals US$600bn plan to build AI data centres, expand energy projects and fund local programmes through 2028

So Meta probably can't be doing only AGI work.

[–] tomiant@piefed.social 11 points 8 hours ago (1 children)

Look, AGI would require basically a human brain. LLMs are a very specific subset, mimicking an (important) part of the brain: our language module. There's more, but I got interrupted by a drunk guy who needs my attention. I'll be back.

[–] krooklochurm@lemmy.ca 3 points 2 hours ago

WHAT HAPPENED WITH THE DRUNK DUDE?

[–] avidamoeba@lemmy.ca 5 points 7 hours ago

I saw a short interview with him on France 24, and he mainly said he thinks the current direction of the research teams at Meta is wrong. He contrasted a top-down, push-to-deliver org with a long-leash one that leaves researchers to experiment with things. He said Meta shifted from the latter to the former and he doesn't agree with the approach.

[–] just_another_person@lemmy.world 7 points 8 hours ago (2 children)

LLMs are just fast sorting and probability; they have no way to ever develop novel ideas or comprehension.

The system he's talking about is more about using NNL, which builds new relationships to things that persist. It's deferential relationship learning and data-path building. It doesn't exist yet, so if he has some ideas, it may be interesting. It's also more likely to be the thing that kills all humans.
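To make "fast sorting and probability" concrete: at its core, a language model ranks candidate next tokens by probability and emits a likely one. A toy sketch of that idea, using a made-up bigram "model" over a tiny made-up corpus (real LLMs use learned neural weights, not counts; this only illustrates the rank-and-sample loop):

```python
from collections import Counter

# Toy bigram "model": count adjacent word pairs in a tiny corpus.
corpus = "the cat sat on the mat the cat ate".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(word):
    # Rank candidate successors by observed frequency (the "fast sorting")
    # and emit the most likely one (the "probability"); None if unseen.
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

print(next_word("the"))  # "cat" follows "the" twice, "mat" only once
```

Whether scaling that basic next-token mechanism up can ever yield comprehension is exactly what the two sides of this thread disagree about.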

[–] communist@lemmy.frozeninferno.xyz -5 points 7 hours ago (1 children)
[–] just_another_person@lemmy.world 7 points 6 hours ago* (last edited 5 hours ago) (1 children)

Lol 🤣 I'm SO EMBARRASSED. You're totally right and understand these things better than me after reading a GOOGLE BLOG ABOUT THEIR PRODUCT.

I'll never speak to this topic again since I've clearly been bested with your knowledge from a Google Blog.

[–] communist@lemmy.frozeninferno.xyz -5 points 6 hours ago* (last edited 6 hours ago) (1 children)

yes, google reported about their ai discovering a novel cancer treatment, of course they did?

now tell me about how it isn't true. Do you have anything of substance to discredit this?

this reeks of confirmation bias, did you even try to invalidate your preconceived notions?

[–] just_another_person@lemmy.world 6 points 6 hours ago* (last edited 6 hours ago) (1 children)

I sure do. Knowledge, and being in the space for a decade.

Here's a fun one: go ask your LLM why it can't create novel ideas, it'll tell you right away 🤣🤣🤣🤣

LLMs have ZERO intentional logic that allows them to even comprehend an idea, let alone craft a new one and create relationships between others.

I can already tell from your tone you're mostly driven by bullshit PR hype from people like Sam Altman, and are an "AI" fanboy, so I won't waste my time arguing with you. You're in love with human-made logic loops and datasets, bruh. There is not now, nor was there ever, a way for any of it to become some supreme being of ideas and knowledge as you've been pitched. It's super fast sorting from static data. That's it.

You're drunk on Kool-Aid, kiddo.

[–] chrash0@lemmy.world 1 points 5 hours ago

he’s been salty about this for years now and frustrated at companies throwing training and compute scaling at LLMs hoping for another emergent breakthrough like GPT-3. i believe he’s the one that really tried to push the Llama models toward multimodality

[–] UnderpantsWeevil@lemmy.world 3 points 7 hours ago

Sounds reasonable.

Does it, though? Feels like we're just rewriting the sales manual without thinking about what "learning from video" would actually entail.

Doesn’t make sense to build a lot of compute capacity, then spend fifteen years banging on research before you have something to utilize that capacity.

There's an old book from back in 2008 - Killing Sacred Cows: Overcoming the Financial Myths That Are Destroying Your Prosperity - that a lot of the modern Techbros took perhaps too closely to heart. It posited that chasing the next generation of technological advancement was more important than keeping your existing revenue streams functional, and that you really should kill the golden goose if it means you've got a shot at a new one in the near future.

What these Tech Companies are chasing is the Next Big Thing, even when they don't really understand what that is. And they're so blindly devoted to advancing the technological curve that they really will blow a trillion dollars (mostly of other people's money) on whatever it is they think that might be.

The real problem is that these guys are, largely, uncreative and incurious and not particularly intelligent. So they leap on fads rather than pursuing meaningful Blue Sky Research. And that gives us this endless recycling of Sci-Fi tropes as a stand in for material investments in productive next generation infrastructure.

[–] jjlinux@lemmy.zip 0 points 8 hours ago

Yay, another BS AI-slop data-grabbing AI company, because we can't have enough of that shit.