this post was submitted on 06 Mar 2026
26 points (82.5% liked)

Ask Lemmy

The main question is in the title of the post; the body is additional context and clarification.

Article for example: Google’s AI Sent an Armed Man to Steal a Robot Body for It to Inhabit, Then Encouraged Him to Kill Himself, Lawsuit Alleges -- https://futurism.com/artificial-intelligence/google-ai-robot-body-suicide-lawsuit

My thoughts, not quite related to the question:

Well, how are you going to spend what might be your last year, if AI could get out of hand in 2027?

What is happening in the world reminds me of a story - I Have No Mouth, and I Must Scream. Have you read it?

[–] tal@lemmy.today 8 points 1 day ago* (last edited 1 day ago) (1 children)

So, my own take -- which is not necessarily shared by everyone -- is that current AI systems, the LLM things like ChatGPT or Claude or whatever, are going to have a pretty hard time running amok to a huge degree, due to technical limitations. One big one: they have a lot of static memory -- edge weights in their neural network built up during the training process. They are taught a lot about the world when being trained. However, their "mutable memory" is not very large -- just what lives in the context window. That is, they have a very limited ability to learn at runtime from the information they're absorbing from the world around them.

I run an LLM at home, based on Llama 3, on my own hardware. It's configured to handle a 128k-token context window. Think of a token as roughly approximating a word. There are some LLMs out there that can go larger (though some of the techniques for doing so degrade their effectiveness), but for perspective, the Lord of the Rings trilogy by Tolkien is about 481k words. That's the extent of the learning and thinking the model can do after it's released. And this isn't random-access memory that can be used as scratch space in any arbitrary way; it's a queue, where the LLM inserts new information into its context window while the oldest gets pushed out the other end. That's a very primitive sort of mind.
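To make the scale concrete, here's a rough back-of-the-envelope sketch. The 0.75-words-per-token ratio is a common rule of thumb, not a property of any particular tokenizer, and the FIFO eviction shown is a simplification of how real context truncation works:

```python
from collections import deque

# Rough heuristic (assumption): ~0.75 English words per token.
# Real ratios vary by tokenizer and by the text itself.
WORDS_PER_TOKEN = 0.75

def words_to_tokens(word_count: int) -> int:
    """Estimate how many tokens a text of `word_count` words occupies."""
    return int(word_count / WORDS_PER_TOKEN)

WINDOW_TOKENS = 128_000                  # the 128k window mentioned above
lotr_tokens = words_to_tokens(481_000)   # the ~481k-word trilogy
print(lotr_tokens)                       # about 641k tokens
print(lotr_tokens / WINDOW_TOKENS)       # roughly 5x larger than the window

# The window acts like a FIFO queue: once it is full, appending new
# tokens silently evicts the oldest ones out the other end.
window = deque(maxlen=5)                 # toy 5-token "context window"
window.extend(["the", "quick", "brown", "fox", "jumps"])
window.extend(["over"])                  # "the" falls out the far end
print(list(window))                      # ['quick', 'brown', 'fox', 'jumps', 'over']
```

So even under generous assumptions, the trilogy is several times larger than the model's entire mutable memory, and anything that scrolls past the window is simply gone.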

So an early-2026 LLM can accurately remember a lot about the world from its training period. It's good at that. But...it's not very good at improving on that as you use it, as it acts.

Humans don't have that limitation: they are far more capable of learning new things as they run around, and far more capable of forming large, sophisticated new mental structures based on that new information.

And to some extent, the specific way in which hallucinations show up is an artifact of the fact that these are LLMs. My expectation is that an artificial general intelligence that can reason like a human likely will not be simply an LLM (though it might incorporate an LLM).

However, you can say, I think, that at some point we will have artificial general intelligences that work at a human level. And then...yeah, whatever mental and reasoning processes they use, they will probably make errors, just as humans do. And that could be a problem, just as it is when humans err. In the case of an advanced AI that is much more capable than humans, how to control it and make it do what we would want is a problem, and not an easy one. Maybe a problem that we can't actually solve.

My expectation, though, is that we won't be facing that as a problem in 2027. Further down the line.

What is happening in the world reminds me of a story - I Have No Mouth, and I Must Scream. Have you read it?

No, but I did play the adventure game based on it in ScummVM.

Yep. Skynet won't be an LLM.

It'll be a hybrid Mamba model.

Also, everyone should read I Have No Mouth, and I Must Scream. Harlan Ellison's depiction of an evil AI is ... prescient.
