this post was submitted on 18 Mar 2026
598 points (92.4% liked)
you are viewing a single comment's thread
What makes you think AI won't be writing libraries or languages in the future?
Assuming that's true, and that's a BIG assumption... what makes you think that would matter? AI has no interiority; it isn't a thinking blob, it's a text generator. Think of it as a fancy Markov chain.
Even if it were true, where in the chain do new principles, new techniques, new concepts enter into it? All these forms of generative AI can do is regurgitate what's been fed into them. The worst thing you can train an AI on is AI-generated output.
They used the word "future" for a reason. The technology is still being developed, so basing future predictions on its current state is silly.
The amount of money dumped into AI can't be recouped. It's already a massive bubble.
Ever heard of skills? You can essentially "teach" it new things that are not directly available in its model. Right now it's still pretty early, but (to me) it feels like quite a leap compared to model-only usage.
It's by no means perfect, but I don't think we're even close to scratching the surface of what can be done with the tech.
I would bet people back at the advent of computers would have dismissed many of the things computers can do now as fantasy.
Edit: Right now, context size is a limiting factor, but you can do things like assign sub-agents to specific tasks/skills and have the overall agent call the sub-agent to complete the task, which reduces the context the original agent needs for that skill; it sort of acts as a mediator. Of course, you still need to document what does and doesn't work, and have that available for future tasks in the same vein, so it doesn't repeat mistakes.
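The mediator pattern described above can be sketched roughly like this, assuming a generic `call_llm` stand-in for a real model call. The point is that each sub-agent gets a fresh, minimal context containing only its own skill and task, and only the short result flows back to the orchestrator:

```python
# Rough sketch of the orchestrator/sub-agent split described above.
# call_llm is a stub standing in for a real model call; everything
# here is illustrative, not a specific framework's API.

def call_llm(context: list[str]) -> str:
    # Stub: a real implementation would send `context` to a model.
    return f"result for: {context[-1]}"

def run_subagent(skill_instructions: str, task: str) -> str:
    # Fresh, minimal context: the sub-agent never sees the
    # orchestrator's full history, only its skill and its task.
    return call_llm([skill_instructions, task])

def orchestrator(tasks: dict[str, str], skills: dict[str, str]) -> dict[str, str]:
    results = {}
    for skill_name, task in tasks.items():
        # Only the short result, not the sub-agent's working context,
        # comes back to the orchestrator, keeping its context small.
        results[skill_name] = run_subagent(skills[skill_name], task)
    return results

out = orchestrator(
    {"tests": "run the unit tests"},
    {"tests": "How to run and interpret pytest output."},
)
print(out)  # → {'tests': 'result for: run the unit tests'}
```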
On your point about the underlying training data: I imagine at some point there will be a breakthrough that makes models more dynamic, and I think skills are a stepping stone to that. Maybe instead of models being gigantic, data gets broken down into individual skills that are called on to inform specific actions, and those skills can already be updated dynamically.