this post was submitted on 05 Oct 2025
113 points (98.3% liked)

Fuck AI

4272 readers
639 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago

An imagined town in Peru, an Eiffel tower in Beijing: travellers are increasingly using tools like ChatGPT for itinerary ideas – and being sent to destinations that don't exist.

top 28 comments
[–] WeavingSpider@lemmy.world 39 points 5 days ago (1 children)

I wonder how many just blindly believe the bots without fact checking the destinations. Wasn't there a guy who asked an LLM if he needed a visa to go to Peru, iirc, and it said no? So he flew to Peru and couldn't go any further because he didn't have a visa.

[–] Kirk@startrek.website 15 points 4 days ago

I wonder how many just blindly believe the bots

To be fair, the bots are being explicitly marketed as believable. The customers are being lied to. I personally try to have sympathy for them and direct my frustration at the legislators not enforcing false advertising laws.

[–] ZDL@lazysoci.al 33 points 4 days ago (1 children)

When I was on my cross-Canada trip with SO in 2024, there was a time when we were on a gondola lift with a bunch of the younger generation. They were planning a trip to Ottawa (Ottawa being sort of my stomping grounds of over 20 years). They were asking ChatGPT for things to visit and then commenting on them out loud. Which allowed us to hear that well over half the "interesting sights" they were planning on seeing in Ottawa didn't exist. Some of them were locations in Montreal (Concordia University campus) or elsewhere in Ontario (UWO campus). But the rest just didn't exist anywhere at all as far as I knew.

Even worse, the ones ChatGPT uttered that did exist they put in the wrong sections of the city. Kind of like going to Queens to see Broadway in New York.

[–] Ilovethebomb@sh.itjust.works 4 points 4 days ago (1 children)

I wonder how many times an AI needs to fuck up before people just lose confidence in it?

[–] ZDL@lazysoci.al 5 points 4 days ago

Given that it's basically just a fuck-up generator and always has been, I don't think there's any hope for that course.

The fact it can't be made into money is what kills this round of AI, triggering the fifth? sixth? winter.

[–] SkunkWorkz@lemmy.world 14 points 4 days ago

Imagine going hiking through Peru and not triple checking the information. Even legit sources can become incorrect quickly when it comes to hiking trails. Besides not checking the information, they went without a guide. Yeah, these people were definitely noobs who came unprepared. Bet they didn't even bring a satellite phone. This is how people die. Two girls from my country went hiking in Panama without a guide because the guide they booked didn't arrive on time. They never returned. Search and rescue only found a camera and some parts of their bodies.

[–] Kolanaki@pawb.social 4 points 4 days ago

If I wanted a hallucinated holiday, I'd just drop acid.

[–] Zier@fedia.io 10 points 5 days ago

Who wouldn't want to visit a winery in the middle of the Atlantic Ocean? Or go to a world class water park in the Sahara Desert? I heard there was a new Super McDonalds opening on the Moon on Halloween. What a time to be alive.

[–] monogram@feddit.nl 6 points 4 days ago (1 children)
[–] Kirk@startrek.website 2 points 4 days ago (1 children)

I think it's important not to victim blame here. These people were lied to, by the bots, and by the companies that say the bots are trustworthy. Their government that permits the false advertising is failing them.

[–] ZDL@lazysoci.al 6 points 4 days ago (1 children)

People have a responsibility to themselves. One of the absolute first things I did when I heard about ChatGPT and the ever-increasing coterie of its imitators was test them. I had them talk about things I know and counted the errors and flat-out hallucinated fictions.

Then I said to myself, "you know, they're going to be just as full of shit about things I don't know".

I saw all the lies. I saw all the advertising. I saw the same thing everybody else saw. But I saw it all and then actually tested it.

If people are willing to just believe professional liars—and make no mistake, that's what advertisers are!—without bothering to do a minute's checking, then sorry, that's entirely on them.

[–] Kirk@startrek.website 0 points 3 days ago (1 children)

"What was the person wearing when GPT gave them sightseeing suggestions?"

[–] ZDL@lazysoci.al 3 points 3 days ago

Apparently not their thinking cap.

[–] ragingHungryPanda@piefed.keyboardvagabond.com 5 points 5 days ago* (last edited 5 days ago) (2 children)

probably the only LLM I've liked for searches is Kagi's, because it tries not to answer your question directly. it figures out search terms, then does the search, summarizes or approximates an answer, then gives citation links so you can check it

[–] RiverRabbits@lemmy.blahaj.zone 12 points 5 days ago

liking LLMs? That's whack.

I haven't used theirs for search because I was so annoyed at llms on other search engines. I do love their translation llm because it gives multiple suggestions for the best translation of text. Nuance is refreshing in the world of AI.