What is casual about the situation in the screenshots? You keep bringing that up as if it changes anything.
ech
Brett Hankison, who fired 10 shots during the raid but didn’t hit anyone, was the only officer on the scene charged in the Black woman’s death.
WTF is this? A token sacrifice while the killer walks free? Fuck that.
You're not the boss of me, now!
The 40k fascism cosplayers already made "god emperor" a thing. Might as well go the whole nine yards.
No, it doesn't. Would you say a calculator "lied" to you if it output an incorrect answer? Is your watch "lying" to you when it's out of sync? No, obviously not. They're just wrong, not "telling falsehoods".
Their "definition" is wrong. They don't get to redefine words to support their vague (and also wrong) suggestion that llms "might" have consciousness. It's not "difficult to say" - they don't, plain and simple.
Except these algorithms don't "know" anything. They convert their training data into statistical patterns, then use those patterns (plus some sampling randomness) to generate (hopefully) sensible text. At no point in that process is knowledge involved.
but like calling it a lie is the most efficient means to get the point across.
It very much doesn't, because it reinforces the idea that these algorithms know or plan anything. It is entirely inefficient to treat an LLM like a person, as the clown in the screenshots demonstrated.
Correct. Because there is no "pursuit of untruth". There is no pursuit, period. It's putting words together that statistically match up based on the input it receives. The output can be wrong, but it's not ever "lying", even if the words it puts together resemble that.
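To make that concrete, here's a toy sketch of what "putting words together that statistically match up" means. The vocabulary and probabilities are completely made up for illustration; real models are vastly larger, but the core step is the same: pick the next word from a probability distribution, no intent anywhere.

```python
import random

# Toy illustration: generation is just repeated sampling from a
# probability distribution over possible next words. Nothing here
# "knows" or "intends" anything; the table below is invented.
next_word_probs = {
    "the":   {"cat": 0.5, "dog": 0.4, "truth": 0.1},
    "cat":   {"sat": 0.7, "lied": 0.3},  # "lied" can come out with zero intent behind it
    "dog":   {"sat": 0.6, "ran": 0.4},
    "truth": {"is": 1.0},
}

def generate(start, steps=3):
    words = [start]
    for _ in range(steps):
        probs = next_word_probs.get(words[-1])
        if not probs:
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat lied" -- statistically likely, not deceitful
```

If that prints something false, the "falsehood" came out of weighted dice rolls, not a decision to deceive. That's the whole point.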
Demanding the algorithm apologize is off the charts unhinged. It's amazing that people this stupid have achieved enough to fail this badly.
I explained why the word matters in my very first comment, and several since. You're the one that started the argument on semantics, so you tell me.