this post was submitted on 15 Oct 2025
286 points (93.6% liked)
TechTakes
Interesting. Why would more manipulative people and ones with more focus on self-interest use AI more than other people? Because they're more likely to take shortcuts while doing stuff? Or is there any other direct benefit for them?
I imagine there are a few reasons. An LLM is a narcissist's dream: it will remain focused on you, tell you what you want to hear, and is always willing to be corrected.
In addition, LLMs are easy to manipulate, and they mimic a person just enough to give you a sense of power or authority. So if you're the type of person who gets something from that, there's likely a draw for you.
Those are just guesses, though. I don't use LLMs myself, so I don't really know.
Thanks, that sounds reasonable. Especially the focus/attention.
Maybe it's the same as with other games or computer games... some people really get something out of fantasy achievements, out of winning and feeling like the main character... in a weird way...
My completely PIDOOMA take is that if you're self-interested and manipulative, you're already treating most if not all people as lesser, less savvy, less smart than you. So just the fact that you can half-ass shit with a bot and declare yourself an expert in everything that doesn't need things like "collaboration with other people", ew, is like a shot of cocaine into your eyeball.
LLMs' tone is also very bootlicking, so if you're already narcissistic and you get a tool that tells you yes, you are just the smartest boi, well... To quote a classic, it must be like being repeatedly kicked in the head by a horse.
There's an increasing number of social media responses that come across as if the writer thinks they're giving clarifying orders to a chatbot.
As a certified bullshitter myself, I often find myself really annoyed with LLMs because their bullshitting is just so obvious.
I would think it's because AI is basically just a yes-man they can get instant gratification from. Easier to manipulate than a real human; when they're wrong, you can berate them without years of pushback.
For example: https://youtu.be/qhwbUL2mJMs