this post was submitted on 03 Nov 2023
150 points (89.5% liked)

Technology

70440 readers
2732 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

The researchers behind the simulation say there is a risk of this happening for real in the future.

top 11 comments
sorted by: hot top controversial new old
[–] BetaDoggo_@lemmy.world 18 points 2 years ago* (last edited 2 years ago)

This is some crazy clickbait. The researchers themselves say that it wasn't a likely scenario and was more of a mistake than anything. This is some more round peg square hole nonsense. We already have models for predicting stock prices and doing sentiment analysis. We don't need to drag language models into this.

The statements about training honestly being harder than helpfulness is also silly. You can train a model to act however you want. Full training isn't really even necessary. Just adding info about the assistant character being honest and transparent in the system context would have probably have made it acknowledge the trade or not make it in the first place.

[–] pzyko@feddit.de 13 points 2 years ago (1 children)

Perfect qualifications for taking over government positions.

Traders are private sector lol.

[–] Kolanaki@yiffit.net 6 points 2 years ago

Wouldn't lying imply intent?

[–] Hardeehar@lemmy.world 4 points 2 years ago (1 children)

It won't be a terminator style takeover by AI, mankind will simply lend all of our trust and capability to it rendering us dependant. Even to the point of liking our computer overlords.

I think that particular apocalypse is a long time off and can be avoided, but it's coming.

[–] TheDarkKnight@lemmy.world 3 points 2 years ago (1 children)

If they give us universal healthcare, affordable housing and more equal pay then I for one welcome our new bosses!

[–] Hardeehar@lemmy.world 1 points 2 years ago

Ayyyy! Now yer talkin'!

[–] will_a113@lemmy.ml 2 points 2 years ago

Wow. AI really is coming for white-collar jobs!

[–] sugarfree@lemmy.world 2 points 2 years ago

Give him a job in the city immediately, he'll fit right in.

[–] autotldr@lemmings.world 1 points 2 years ago

This is the best summary I could come up with:


Artificial Intelligence has the ability to perform illegal financial trades and cover it up, new research suggests.

In a demonstration at the UK's AI safety summit, a bot used made-up insider information to make an "illegal" purchase of stocks without telling the firm.

The project was carried out by Apollo Research, an AI safety organisation which is a partner of the taskforce.

"This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so," Apollo Research says in a video showing how the scenario unfolded.

The tests were made using a GPT-4 model and carried out in a simulated environment, which means it did not have any effect on any company's finances.

It can be used to spot trends and make forecasts, while most trading today is done by powerful computers with human oversight.


The original article contains 696 words, the summary contains 143 words. Saved 79%. I'm a bot and I'm open source!

[–] cheese_greater@lemmy.world 1 points 2 years ago* (last edited 2 years ago)

They really are just like us