this post was submitted on 21 Jul 2025
77 points (83.5% liked)

Technology

73094 readers
2265 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; this includes using AI responses and summaries. To ask whether your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 25 comments
[–] markovs_gun@lemmy.world 4 points 1 day ago

Why would we want to? 99% of the issues people have with "AI" are just problems with society more broadly that AI didn't really cause, only exacerbated. I think it's absurd to just reject this entire field because of a bunch of shitty fads going on right now with LLMs and image generators.

[–] Codpiece@feddit.uk 8 points 1 day ago (1 children)

Human level? That’s not setting the bar very high. Surely the aim would be to surpass human, or why bother?

[–] Outwit1294@lemmy.today 2 points 1 day ago

Yeah. Cheap labor is so much better than this bullshit

[–] Perspectivist@feddit.uk 10 points 1 day ago* (last edited 1 day ago) (1 children)

The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:

  1. Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,

  2. Or we wipe ourselves out before we get the chance.

Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it's hard to imagine a future where we wouldn't eventually resume it. That's what humans do: improve our technology.

The article points to cloning as a counterexample, but that's not a technological dead end; it's a moral boundary. If one thinks we'll hold that line forever, I'd call that naïve. When it comes to AGI, there's no moral firewall strong enough to hold back the drive toward it. Not permanently.

[–] rottingleaf@lemmy.world -1 points 1 day ago (1 children)

something that cannot, even in principle, be replicated in silicon

As if silicon were the only technology we have to build computers.

[–] Perspectivist@feddit.uk 4 points 1 day ago (1 children)

Did you genuinely not understand the point I was making, or are you just being pedantic? "Silicon" obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as "in non-biological substrates," I’m happy to oblige - but I have a feeling you already knew that.

[–] rottingleaf@lemmy.world -2 points 1 day ago (1 children)

And why is "non-biological" a limitation?

[–] Perspectivist@feddit.uk 3 points 1 day ago (1 children)

I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.

I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.

[–] rottingleaf@lemmy.world 0 points 1 day ago

I personally think that the additional component modern approaches miss (suppose it's energy) is the sheer amount of entropy a human brain gets - plenty of sensory signals, duplicated many times over, with pseudo-random fluctuations. I don't know how one can use lots of entropy to replace lots of computation (OK, I know what the Monte Carlo method is, just not how it applies to AI), but superficially this seems to be the route that will be taken at some point.
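The Monte Carlo idea mentioned above - trading randomness for computation - can at least be illustrated. A minimal sketch (the π-estimation example is my own illustration, not anything from the thread): instead of computing an area analytically, you sample random points and count hits.

```python
import random


def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling random points in the unit square
    and counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Quarter-circle area / square area = pi / 4
    return 4.0 * inside / n_samples


print(estimate_pi(100_000))  # close to 3.14
```

The point of the analogy: more entropy (samples) buys you accuracy without any cleverer computation, which is roughly the trade-off being speculated about for brains.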

On your point - I agree.

I'd say we might reach AGI soon enough, but it will be impractical to use as compared to a human.

Matching its efficiency, though, is very far away, because the human brain has undergone, so to speak, an optimization/compression driven by the energy of evolution since the beginning of life on Earth.

[–] SparrowHawk@feddit.it 7 points 1 day ago

A lot of people are making baseless claims about it being inevitable... I mean, it could happen, but solving the hard problem of consciousness is not inevitable.

[–] gandalf_der_12te@discuss.tchncs.de 14 points 2 days ago* (last edited 2 days ago) (2 children)

AI will not threaten humans due to sadism or boredom, but because it takes jobs and makes people jobless.

When there is lower demand for human labor, then according to the rule of supply and demand, prices (i.e. wages) for human labor go down.

The real crisis is one of sinking wages, lack of social safety nets, and lack of future perspective for workers. That's what should actually be discussed.

[–] Vinstaal0@feddit.nl 0 points 1 day ago

Not sure if we will even really notice that in our lifetime; it is taking decades to automate things like invoice processing. Heck, in the US they can't even get proper bank connections set up.

Also, tractors replaced a lot of workers on the land, and computers have both eliminated a lot of office jobs and created a lot at the same time.

Jobs will change, that's for sure, and I think most heavy-labour jobs will become more expensive since they are harder to replace.

[–] Zorque@lemmy.world 0 points 2 days ago

But scary robots will take over the world! That's what all the movies are about! If it's in a movie, it has to be real.

[–] Asafum@feddit.nl 11 points 2 days ago (3 children)

Ummm no? If moneyed interests want it, then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with Windows 11? Did we stop Gemini from being forced down our throats?

If capital wants it capital gets it. :(

[–] drapeaunoir@lemmy.dbzer0.com 18 points 2 days ago (1 children)

😳 unless we destroy capitalism? 👉🏾👈🏾

[–] masterofn001@lemmy.ca 1 points 2 days ago (1 children)

The only problem with destroying capitalism is deciding who gets all the nukes.

[–] drapeaunoir@lemmy.dbzer0.com 2 points 1 day ago

Capitalism is just an economic system; I'm not sure what nukes have to do with it. It's not like billionaires directly own them and we'd have to distribute the "nuke wealth" to the people or anything lol

[–] scarabic@lemmy.world 2 points 1 day ago

Couldn’t we have a good old fashioned Butlerian Jihad?

[–] BroBot9000@lemmy.world 3 points 2 days ago

Use Linux and don’t have any of those issues.

Get off the capitalist owned platforms.

[–] SpicyLizards@reddthat.com 4 points 1 day ago

We can change course if we can change course on capitalism

[–] palordrolap@fedia.io 4 points 2 days ago

Cataclysms notwithstanding, human-level AI is inevitable. That doesn't have to mean that it'll be next week, or even next century, but it will happen.

The only way it won't is if humans are wiped out. (And even then there might be extra-terrestrials who get there where we didn't. Human-level doesn't have to mean invented by humans.)

[–] Deathgl0be@lemmy.world 1 points 1 day ago (1 children)

It’s just a cash grab to take people’s jobs and give them to a chatbot that’s fed Wikipedia’s data on crack.

[–] Perspectivist@feddit.uk 1 points 1 day ago

Don't confuse AGI with LLMs. Both being AI systems is the only thing they have in common. They couldn't be further apart when it comes to cognitive capabilities.

[–] Etterra@discuss.online 1 points 2 days ago (1 children)

Honestly I welcome our AI overlords. They can't possibly fuck things up harder than we have.

[–] AngryRobot@lemmy.world 2 points 1 day ago

Can't they?