this post was submitted on 12 Apr 2026
111 points (85.8% liked)
Futurology
4179 readers
founded 2 years ago
The problem is that LLMs sometimes get the right answer, and then you're like, "Wow, this is the best!" The next minute you're thinking, "It must be me not giving enough context? Let me try a different model" — which then also fails.