this post was submitted on 19 Nov 2025
130 points (96.4% liked)

Technology

[–] just_another_person@lemmy.world 6 points 9 hours ago* (last edited 8 hours ago) (2 children)

🤦🤦🤦 No...it really isn't:

Teams at Yale are now exploring the mechanism uncovered here and testing additional AI-generated predictions in other immune contexts.

Not only is there no validation, they have only begun even looking at it.

Again: LLMs can't make novel ideas. This is PR, and because you're unfamiliar with how any of it works, you assume MAGIC.

Like every other bullshit PR release of its kind, this is simply a model being fed a ton of data and running through millions of iterative passes, testing outcomes of various combinations of things that would take humans years to do. It's not that it is intelligent or making "discoveries"; it's just moving really fast.

You feed it 10² combinations of amino acids, and it's eventually going to find new chains needed for protein folding. The things you're missing there are:

  1. All the logic programmed by humans
  2. The data collected and sanitized by humans
  3. The task groups set by humans
  4. The output validated by humans

It's a tool for moving fast through data, a.k.a. A REALLY FAST SORTING MECHANISM

Nothing at any stage of development is novel output, or validated by any model, because...they can't do that.
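To put the "really fast sorting mechanism" framing in concrete terms, here's a toy sketch (entirely hypothetical, not the actual pipeline): exhaustive enumeration of short amino-acid chains against a human-written scoring function. The `score` function is a stand-in; in a real system it would be a learned or physics-based model, itself built and validated by humans.

```python
import itertools

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def score(chain):
    # Placeholder objective, purely illustrative; the real objective
    # would be designed, trained, and validated by humans.
    return sum(ord(c) for c in chain) % 7

def best_chain(length):
    # The "discovery" is just enumeration plus a scoring rule:
    # every candidate was already implicit in the search space.
    return max(itertools.product(AMINO_ACIDS, repeat=length),
               key=lambda chain: score("".join(chain)))

print("".join(best_chain(3)))  # prints "AAA"
```

The point of the toy: the machine never leaves the search space or the objective it was handed. It just covers that space faster than a human could.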

[–] BrundleFly2077@sh.itjust.works -5 points 9 hours ago (1 children)

Wow, if you really do know something about this subject, you’re being a real asshole about it 🙄

[–] communist@lemmy.frozeninferno.xyz -4 points 9 hours ago (1 children)

He knows the basics, it's just that they don't lead to any of the conclusions he's claiming they do. He also boldly assumes that everyone who disagrees with him doesn't know anything. He's a beast of confirmation bias.

[–] just_another_person@lemmy.world 6 points 8 hours ago (1 children)

Nah, I'm just not going to write a novel on Lemmy, ma dude.

I'm not even spouting anything that's not readily available information anyway. This is all well known, hence everybody calling out the bubble.

[–] communist@lemmy.frozeninferno.xyz -5 points 8 hours ago* (last edited 8 hours ago) (1 children)

You have not said one thing I did not already know, and none of it has to do with anything.

An AI did something novel; this is an easily verified fact. The only alternative is that somebody else wrote the hypothesis.

[–] just_another_person@lemmy.world 7 points 7 hours ago (2 children)

It most certainly did not...because it can't.

You find me a model that can take multiple disparate pieces of information and combine them into a new idea not fed from a pre-selected pattern, and I'll eat my hat. The very basis of how these models operate is in complete opposition to the idea that they can spontaneously have a new and novel idea. New...that's what novel means.

I can pointlessly link you to papers, to blogs from researchers explaining this, or just tell you to ask one of these things yourself, but you're not going to listen, which is on you for intentionally deciding to remain ignorant of how they function.

Here's Terrence Kim describing how they set it up using GRPO: https://www.terrencekim.net/2025/10/scaling-llms-for-next-generation-single.html

And then another researcher describing what actually took place: https://joshuaberkowitz.us/blog/news-1/googles-cell2sentence-c2s-scale-27b-ai-is-accelerating-cancer-therapy-discovery-1498

So you can obviously see...not novel ideation. They fed it a bunch of training data, and it used pattern alignment to say, "If it works this way elsewhere, it should work this way with this example."

Sure, it's not something humans had gotten to yet, but that's the entire point of the tool. Good for the progress, certainly, but that's its job. It didn't come up with some new idea about anything, because it works from the data it's given and the logic boundaries of the tasks it's set to run. It's not doing anything super special here, just doing it very efficiently.

[–] verdi@feddit.org 3 points 6 hours ago

Pearls to pigs my friend, pearls to pigs.

If there's one bad thing about modern medicine and living in an outsized society, it's that intelligence is no longer evolutionarily beneficial. We are artificially selecting for morons, and the latest PISA results are the canary in the coal mine for the idiocracy we're heading toward.

Thank you for your efforts in demystifying these fucking ads in the form of breakthroughs that have these insufferable morons thinking "AI" can now do research.

[–] communist@lemmy.frozeninferno.xyz -3 points 9 hours ago* (last edited 9 hours ago)

You addressed that they haven't fully tested the hypothesis while completely overlooking the fact that an AI suggested a novel hypothesis... even if it turns out to be wrong, it is still undeniably a novel hypothesis. This is what was validated by Yale...

You have still failed to answer the question. You're also neglecting to include an explanation of temperature in your argument, which may be relevant here.
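For readers unfamiliar with the term: "temperature" is just a scaling factor applied to a model's output logits before sampling. A minimal sketch (generic softmax math, not any specific model's implementation):

```python
import math

def softmax(logits, temperature):
    # Divide logits by temperature, then normalize into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # made-up example logits

# Low temperature concentrates mass on the top logit (near-deterministic);
# high temperature flattens toward uniform, so lower-probability
# continuations get sampled far more often.
print(softmax(logits, 0.1))
print(softmax(logits, 10.0))
```

Higher temperatures are one reason the same model can produce outputs its training distribution would otherwise make very unlikely.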