this post was submitted on 23 Mar 2026
79 points (74.8% liked)

[–] Modern_medicine_isnt@lemmy.world -1 points 1 day ago (1 children)

Not sure what this internal state you are referring to is. Are you talking about all the values that come out of each step of the computations?

As for your second half... integration. That is a tricky one, because the inputs it is getting aren't necessarily correct, so it can do more harm than good. The current loop for integrating new data is too long, though. They need to reduce that down to like an hour so it can at least absorb current events. And ideally they would be able to take a conversation, identify what worked and what didn't, and then integrate what did. This is what was mentioned about claude.md files and such, which essentially keep track of what was learned. There is room for improvement there, as I seem to have to tell the model to go read those or it doesn't.

[–] Technus@lemmy.zip 2 points 1 day ago (1 children)

> Not sure what this internal state you are referring to is. Are you talking about all the values that come out of each step of the computations?

It would need to be able to form memories like real brains do, by creating new connections between neurons and adjusting their weights in real time in response to stimuli, and having those connections persist. I think that's a prerequisite to models that are capable of higher-level reasoning and understanding. But then you would need to store those changes to the model for each user, which would be tens or hundreds of gigabytes.
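One way to picture the storage cost being described: keep the base weights frozen and shared, and persist only a per-user delta that gets adjusted online. This is just a toy sketch of that idea (a single weight matrix and a plain gradient step; a real system would more likely use something like low-rank adapters):

```python
import numpy as np

# Toy "model": one weight matrix, frozen and shared by all users.
BASE_WEIGHTS = np.random.default_rng(0).standard_normal((4, 4))

class PerUserModel:
    """Frozen base weights plus a per-user delta that persists between sessions."""

    def __init__(self):
        # Starts empty; grows as the user's experiences accumulate.
        self.delta = np.zeros_like(BASE_WEIGHTS)

    def forward(self, x):
        return (BASE_WEIGHTS + self.delta) @ x

    def learn(self, x, target, lr=0.1):
        # One online gradient step on squared error -- the "adjust connections
        # in real time in response to stimuli" idea, but the change lands
        # only in this user's delta, never in the shared base weights.
        error = self.forward(x) - target
        self.delta -= lr * np.outer(error, x)

user = PerUserModel()
x = np.ones(4)
target = np.zeros(4)
before = np.linalg.norm(user.forward(x) - target)
for _ in range(50):
    user.learn(x, target)
after = np.linalg.norm(user.forward(x) - target)
```

The per-user delta here is the same shape as the base weights, which is exactly the scaling problem mentioned above: at real model sizes that is tens to hundreds of gigabytes per user unless it's compressed somehow.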

These current once-through LLMs don't have time to properly digest what they're looking at, because they essentially forget everything once they output a token. I don't think you can make up for that by spitting some tokens out to a file and reading them back in, because it still has to be human-readable and coherent. That transformation is inherently lossy.
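A toy illustration of why that text round-trip is lossy (made-up numbers standing in for activations, not anything a real model does):

```python
import numpy as np

rng = np.random.default_rng(1)
hidden_state = rng.standard_normal(512)  # stand-in for a model's internal activations

# "Take notes": keep only the 5 strongest features, rounded, as readable text.
top = np.argsort(-np.abs(hidden_state))[:5]
notes = ", ".join(f"feature {i}: {hidden_state[i]:.1f}" for i in top)

# "Read the notes back in": rebuild a state from the text alone.
restored = np.zeros_like(hidden_state)
for part in notes.split(", "):
    idx, val = part.split(": ")
    restored[int(idx.split()[1])] = float(val)

# Most of the original state is simply gone after the round trip.
recovered_fraction = np.linalg.norm(restored) / np.linalg.norm(hidden_state)
```

Everything that didn't make it into the human-readable summary is unrecoverable, which is the point: notes are a mitigation for forgetting, not a substitute for the state itself.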

This is basically what I'm talking about: https://www.comicagile.net/comic/context-switching/

But for every single token the LLM outputs. The fact that it's allowed to take notes is a mitigation for this context loss, not a silver bullet.

Yeah, I get what you are saying. I'm just not convinced that it needs to be able to update its model in real time to be capable of high-level reasoning. And while human-readable files are inherently lossy, they do still represent tracking an internal state.

They also have vector DBs. My understanding is that those are closer to what you are talking about as far as internal state. But they still don't allow the AI to update the vector DB in real time, mainly because they worry that live updates could go wrong in the same way people are easily manipulated into believing BS. So they are more careful about what they feed it to update. I do wonder how they generate those vector DBs, and whether that is something users could utilize locally.
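Generating a vector DB locally basically comes down to embedding text chunks and storing the vectors for nearest-neighbor lookup. A minimal in-memory sketch (the hash-based "embedding" here is a stand-in; a real setup would use an actual embedding model):

```python
import hashlib
import math

def toy_embed(text, dim=32):
    """Stand-in embedding: hash character trigrams into a fixed-size vector.
    A real vector DB would use a learned embedding model instead."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class ToyVectorDB:
    """In-memory store: ingest chunks once, retrieve nearest by cosine similarity."""

    def __init__(self):
        self.entries = []  # list of (vector, original text)

    def add(self, text):
        self.entries.append((toy_embed(text), text))

    def query(self, text, k=1):
        q = toy_embed(text)
        # Vectors are unit-length, so the dot product is cosine similarity.
        scored = sorted(self.entries,
                        key=lambda e: -sum(a * b for a, b in zip(q, e[0])))
        return [t for _, t in scored[:k]]

db = ToyVectorDB()
db.add("the build failed because of a missing dependency")
db.add("the cat sat on the warm windowsill")
result = db.query("build error caused by dependency")
```

Note the split this makes concrete: retrieval is read-only at query time, and "updating the model" is just deciding which chunks get ingested, which is where the curation concern above comes in.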