this post was submitted on 19 May 2025
Microblog Memes
Idk, I think we're back to "it depends on how you use it". Once upon a time, the same was said of the internet in general, because people could just go online and copy and paste shit and share answers and stuff, but the internet can also just be a really great educational resource in general. I think that using LLMs in non-load-bearing, "trust but verify" type roles (study buddies, brainstorming, very high-level information searching) is actually really useful. One of my favorite uses of ChatGPT is when I have a concept so loose that I don't even know the right question to Google: I can just kind of chat with the LLM and refine it into a narrower, more google-able question.
The thing is that an LLM is a professional bullshitter. It is literally trained to produce text that can fool an ordinary person into thinking it was written by a human. The facts come second.
That's true, but they're also pretty good at verifying stuff as an independent task too.
You can give them a "fact" and ask "is this true, misleading, or false?" and they'll do a good job. ChatGPT 4.0 in particular is excellent at this.
Basically whenever I use it to generate anything factual, I then put the output back into a separate chat instance and ask it to verify each sentence (I ask it to put tags around each sentence so the misleading and false ones are coloured orange and red).
It's a two-pass solution, but it makes it a lot more reliable.
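The two-pass workflow described above can be sketched in code. This is a hypothetical illustration: the prompt wording, the tag names, and the helper function are assumptions, not the commenter's actual prompts.

```python
# Minimal sketch of the two-pass verification idea: the output of a
# generation pass is handed to a *separate* chat instance with a
# checking prompt, instead of continuing the original conversation.
# Prompt text and tag scheme below are illustrative assumptions.

def build_verification_messages(generated_text: str) -> list[dict]:
    """Build the chat messages for a fresh 'verifier' instance."""
    system = (
        "You are a fact checker. For each sentence in the user's text, "
        "classify it as true, misleading, or false. Wrap misleading "
        "sentences in <misleading> tags and false sentences in <false> "
        "tags; leave true sentences untagged."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": generated_text},
    ]

# Usage: pass these messages to any chat-completion API, e.g.
#   client.chat.completions.create(model="gpt-4", messages=messages)
# and render the <misleading>/<false> spans in orange/red in your UI.
messages = build_verification_messages("The Eiffel Tower is in Berlin.")
```

The point of the separate instance is that the verifier sees the text cold, with a checking prompt rather than the generating one.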
So your technique to "make it a lot more reliable" is to ask an LLM a question, then run the LLM's answer through an equally unreliable LLM to "verify" the answer?
We're so doomed.
Give it a try.
The key is in the different prompts. I don't think I should really have to explain this, but different prompts produce different behavior.
Ask it to create something, it creates something.
Ask it to check something, it checks something.
Is it flawless? No. But it's pretty reliable.
It's literally free to try right now with ChatGPT.
Hey, maybe you do.
But I'm not arguing anything contentious here. Everything I've said is easily testable and verifiable.