this post was submitted on 30 Oct 2023
495 points (94.4% liked)

  • Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn't want to compete with open source, he added.
[–] echodot@feddit.uk 21 points 2 years ago* (last edited 2 years ago) (3 children)

It won't end the world because AI doesn't work the way that Hollywood portrays it.

No AI has ever been shown to have self-agency; if it's not given instructions, it'll just sit there. Even a human child left alone in a room would at least try to leave it.

So the real risk is not that an AI will decide to destroy humanity; it's that a human will tell an AI to destroy their enemies.

But then you just get back around to mutually assured destruction: if you tell your self-redesigning thinking weapon to attack me, I'll tell my self-redesigning thinking weapon to attack you.
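Deterrence is really just an argument about the ordering of outcomes. A toy sketch of it (the payoff numbers are made up; only their ordering matters):

```python
# Illustrative only: a toy model of the deterrence argument above.
# The payoff numbers are invented; only their ordering matters.
payoff = {
    ("hold", "hold"): 0,          # status quo
    ("attack", "hold"): 10,       # successful first strike, no retaliation
    ("attack", "attack"): -1000,  # mutual destruction
}

def outcome(my_move, retaliation_guaranteed=True):
    """If the other side always retaliates, an attack collapses into mutual destruction."""
    their_move = "attack" if (my_move == "attack" and retaliation_guaranteed) else "hold"
    return payoff[(my_move, their_move)]

print(outcome("hold"))                                  # 0: nothing happens
print(outcome("attack"))                                # -1000: deterred by guaranteed retaliation
print(outcome("attack", retaliation_guaranteed=False))  # 10: deterrence fails if retaliation isn't credible
```

As long as retaliation is credible, "attack" always lands in the mutual-destruction cell, so holding wins.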

[–] afraid_of_zombies@lemmy.world 4 points 2 years ago

Imagine 9/11 with prions. MAD depends on everyone being rational and self-interested, without a very alien value system. It really only works when you've got something like three governments pointing nukes at each other. It doesn't work if the group doesn't care about tomorrow, or thinks they're going to heaven, or is convinced they can't be killed, or has any of the other deranged reasons that motivate people to commit these kinds of acts.

[–] lunarul@lemmy.world 4 points 2 years ago (1 children)

AI doesn't work the way that Hollywood portrays it

AI does, but we haven't developed AI and have no idea how to. The thing everyone calls AI today is just really good ML.

[–] jarfil@lemmy.world 0 points 2 years ago* (last edited 2 years ago)

At some point ML (machine learning) becomes indistinguishable from BL (biological learning).

Whether there is any actual "intelligence" involved in either hasn't been proven yet.

[–] jarfil@lemmy.world 2 points 2 years ago* (last edited 2 years ago)

The real risk is that humans will use AIs to assess the risks and benefits of starting a war... and an AI will give them the "go ahead" without considering mutually assured destruction from everyone else doing exactly the same.

It's not that AIs will get superhuman; it's that humans will blindly trust limited AIs and exterminate each other.
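Roughly that failure mode in a toy sketch (every name and number here is invented, just to illustrate the blind spot):

```python
# Rough sketch of the failure mode above; all names and numbers are invented.
# Each side asks its own limited risk model whether to strike, and by default
# the model prices the other side's identical decision at zero.

def go_ahead(first_strike_gain, retaliation_cost, p_other_strikes=0.0):
    """A 'limited' advisor: expected value of striking, ignoring symmetry unless told otherwise."""
    expected = (1 - p_other_strikes) * first_strike_gain - p_other_strikes * retaliation_cost
    return expected > 0

# Both sides run the same advisor with the same blind spot...
side_a = go_ahead(first_strike_gain=10, retaliation_cost=1000)
side_b = go_ahead(first_strike_gain=10, retaliation_cost=1000)
print(side_a, side_b)  # True True -> both strike and eat the retaliation the model ignored

# Accounting for the other side doing exactly the same flips the answer:
print(go_ahead(10, 1000, p_other_strikes=1.0))  # False
```

Both advisors say "go" because each one assumes the other side will sit still.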