this post was submitted on 17 Feb 2024
70 points (98.6% liked)

news


A chatbot on Air Canada's website hallucinated a bereavement discount policy that does not exist, and a customer who booked a flight based on that information took the airline to court over being misled. The small claims court sided with the customer, holding that the chatbot was acting as an official agent of Air Canada and that there was no reason a customer should have to double-check information from one part of the Air Canada website against other parts of the same website.

top 22 comments
[–] Infamousblt@hexbear.net 41 points 1 year ago (2 children)

This was inevitable and will instantly kill AI chatbots. I tried to explain this to my marketing team when they were all excited about an AI chatbot on our company website. Wonder if this will change their tune

[–] viva_la_juche@hexbear.net 29 points 1 year ago (1 children)

will instantly kill AI chatbots

inshallah

[–] 420blazeit69@hexbear.net 8 points 1 year ago

This case may have cost the airline a few grand. Sure, you'll get a few losses like this (and many more situations where the customer just eats it), but if the cost savings of the chatbot are enough...

[–] carpoftruth@hexbear.net 37 points 1 year ago (2 children)

According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot's misleading information because Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions," a court order said.

Prepare for more of that, applied to weaponized drones

[–] LeopardShepherd@hexbear.net 23 points 1 year ago (1 children)

Oh that wedding? The drone just did that sorry

[–] carpoftruth@hexbear.net 15 points 1 year ago

Actually the human AI helper making $1/h has legal responsibility for that strike

[–] Bloobish@hexbear.net 4 points 1 year ago

Is this how we also give AI chatbots personhood by accident?

[–] Frank@hexbear.net 35 points 1 year ago (3 children)

It's not a "hallucination," you dorks, it's a random number generator that you used to replace labor.
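
A minimal sketch of what "random number generator" means here, with a made-up vocabulary and made-up weights standing in for real next-token probabilities: the decoder's only move is a weighted random draw, which is why the same question can produce different answers.

```python
import random

# Toy stand-in for an LLM's decoder. The vocabulary and weights are
# invented for illustration; a real model computes weights from the
# prompt, but generation is still a weighted random draw per token.
vocab = ["refund", "within", "90", "days", "no", "such", "policy"]
weights = [0.30, 0.20, 0.15, 0.15, 0.10, 0.06, 0.04]

def sample_next_token():
    # random.choices picks proportionally to weight, so repeated
    # calls with identical inputs can yield different tokens
    return random.choices(vocab, weights=weights, k=1)[0]

print(" ".join(sample_next_token() for _ in range(6)))
```

Run it twice and you get two different "policies," which is the whole problem.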

[–] sexywheat@hexbear.net 9 points 1 year ago (2 children)

Yeah, why is it a "hallucination" when the AI just makes shit up, but when a person does it they're either lying or just plain wrong?

[–] Frank@hexbear.net 9 points 1 year ago

I didn't lie on my tax returns, it was a hallucination due to poorly curated training material.

[–] SerLava@hexbear.net 5 points 1 year ago

Because the person knows and the AI is dumb as dirt

[–] SerLava@hexbear.net 7 points 1 year ago

I like to call it a hallucination, because while yes the thing isn't smart enough to experience thoughts, it really gets at how absolutely unreliable these things are. People are talking to the thing and taking it seriously, and it's just watching the pink dragons circle around

[–] Beaver@hexbear.net 3 points 1 year ago

Idea: how about we replace all our typists with a bunch of monkeys? They'll eventually type the right thing!
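
For scale, the classic infinite-monkey arithmetic, assuming a uniform random 26-key typist (numbers are illustrative, not a claim about real typists):

```python
# A monkey hitting 26 keys uniformly at random types a specific
# n-letter word with probability 26**-n per attempt, so the expected
# number of attempts is 26**n (mean of a geometric distribution).
target = "refund"
expected_attempts = 26 ** len(target)
print(f"expected attempts to type '{target}': {expected_attempts:,}")
# expected attempts to type 'refund': 308,915,776
```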

[–] PKMKII@hexbear.net 29 points 1 year ago (1 children)

I remember a state recently passed a law barring lawyers from using AI programs to generate legal documents, and this right here is why: it removes the possibility of lawyers appealing to “well, it’s not our fault the document is wrong, the AI did it!”

[–] OutrageousHairdo@hexbear.net 23 points 1 year ago (2 children)

I heard about someone doing that from Leonard French. Some old boomer thought the AI could actually search for court cases, and he ended up getting tricked into citing a bunch of non-existent case law and got into a lot of trouble.

[–] D61@hexbear.net 10 points 1 year ago

I'm wishing so hard it was a Sovereign Citizen...

[–] RyanGosling@hexbear.net 9 points 1 year ago

The OG chatbot

[–] NephewAlphaBravo@hexbear.net 15 points 1 year ago

I extremely approve of describing everything AI does as hallucinations, dreaming, etc

[–] SerLava@hexbear.net 8 points 1 year ago

HAHAHAHAHAH fucking amazing
