this post was submitted on 21 Jul 2025
667 points (98.5% liked)
Technology
“I panicked” had me laughing so hard. Like it's implying that the robot can panic, and that panicking can make it fuck shit up when flustered. Idk why that's so funny to me.
It's interesting that it can "recognize" the actions as clearly illogical afterwards, as if they were made by someone panicking, but will still make them in the first place. Or, a possibly funnier option, it's mimicking all the stories of people panicking in this situation. Either way, it's a good lesson about how AI operates... especially for this company.
Yeah, I don't use LLMs often, but I use ChatGPT occasionally, and sometimes when I ask technical/scientific questions its answers contain glaring contradictions that are just completely wrong for no reason. One time when this happened I told it that it fucked up and to check its work, and it corrected itself immediately. I tried again to see if I could get it to overcorrect or something, but it didn't go for it.
So as weird as it sounds, I think adding "also make sure to always check your replies for logical consistency" to its base prompt would improve things.
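For what it's worth, here's a rough sketch of what that suggestion could look like with the OpenAI Python client. The model name and exact prompt wording are illustrative assumptions, not something from the comment:

```python
# Minimal sketch: bake the self-check instruction into the system prompt.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY
# in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "Also make sure to always check your replies for logical consistency "
    "before answering."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model would do
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Why does ice float on water?"},
    ],
)
print(response.choices[0].message.content)
```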
and just like that we're back to computers doing precisely what we tell them to do, nothing more and nothing less.
one day there's gonna be a sapient LLM and it'll just be a prompt of such length that it qualifies as a full genome
This unironically works; it's basically the same reason chain-of-thought reasoning models produce better outputs.
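The same idea can be made explicit as a second pass, which is roughly what the commenter above did by hand: get a draft answer, then feed it back and ask the model to check its own work. A hedged sketch under the same assumptions (OpenAI Python client; the helper names and prompt wording are made up for illustration):

```python
# Sketch of a two-pass self-check: draft an answer, then ask the model
# to review that draft for logical consistency and correct it if needed.
# Assumes the OpenAI Python client; MODEL and prompts are illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative

def ask(messages):
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    return reply.choices[0].message.content

def answer_with_self_check(question: str) -> str:
    history = [{"role": "user", "content": question}]
    draft = ask(history)  # first pass: normal answer
    history += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": (
            "Check your previous reply for logical consistency. "
            "If you find a contradiction, give a corrected answer; "
            "otherwise repeat the answer unchanged."
        )},
    ]
    return ask(history)  # second pass: self-review

print(answer_with_self_check(
    "Which is heavier, a kilogram of steel or a kilogram of feathers?"
))
```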