This is incredibly clearly planet of the apes fanart, and fanart is explicitly allowed.
(Do I need the "/s"?)
This wasn't about Fallout 76, so that's normal.
Apparently it's Sanskrit for "to extend, throw, or send."
Throw a demon, that's pride month.
Sounds awesome. You got yourself a little spy.
Oh dang, time flies when you're having fun exploiting people.
That's a ferret.
You're missing how a bunch of their friends from their new social class already do drugs and how good those drugs feel.
Easy hole to fall into, and money honestly makes it harder to climb out of: you can always afford the drugs.
So it becomes the norm, whereas someone at the poverty line with an addiction can't afford them regularly, has to spend grocery money on them, and therefore might be addicted but also resent them.
Rich people can afford to normalize drugs and consider themselves fine while they're on them, because they're still living within their means.
Old, near the top, but it still flows down. Dunno exact age. Blonde, but not everyone loses hair color.
The difference is, if this were to happen and it was found later that a fabricated case crucial to the defense had been relied on, that's a mistrial. Maybe even dismissal with prejudice.
Courts are bullshit sometimes, it's true, but it would take deliberate judge/lawyer collusion for this to occur, or the incompetence of the judge and the opposing lawyer.
Is that possible? Sure. But the question was "will fictional LLM case law enter the general knowledge?" and my answer is "in a functioning court, no."
If the judge and a lawyer are colluding or if a judge and the opposing lawyer are both so grossly incompetent, then we are far beyond an improper LLM citation.
TL;DR As a general rule, you have to prove facts in court. When that stops being true, liars win, no AI needed.
Right, the internet that's increasingly full of AI material.
Nah, that means you can ask an LLM "is this real?" and get a correct answer.
That defeats the point for a whole category of material:
deepfakes, for instance, plus international espionage, propaganda, and companies that want "real people."
A simple is_ai flag of any kind is undesirable to those actors, so they won't use it, and their unflagged output will end up back in every LLM's training data, even an LLM that was behaving and flagging its own output.
You'd need every LLM to do this, and there are open-source models and foreign ones. And as has already been shown, you can't rely on an LLM to detect generated content without such a flag.
The correct way to do it would be to instead organize a not-AI certification for real content. But that would severely limit training data. It could happen once sheer quantity of data isn't the be-all and end-all for a model, but I don't know when or if that'll be the case.
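Not saying this is how it'd actually get built, but a not-AI certification is basically just a signature scheme: some authority (a camera maker, a news org, whoever you decide to trust) signs the raw content, and anyone can verify the signature later; any edit or regeneration breaks it. A rough Python sketch using the cryptography package, with all the names and the "authority" made up for illustration:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical certifying authority generates a signing keypair once.
# In a real scheme this would be a camera vendor, publisher, etc.
authority_key = Ed25519PrivateKey.generate()
authority_public = authority_key.public_key()

def certify_human_content(content: bytes) -> bytes:
    """Authority attests the content is human-made by signing its bytes."""
    return authority_key.sign(content)

def verify_certificate(content: bytes, signature: bytes) -> bool:
    """Anyone with the authority's public key can check the attestation."""
    try:
        authority_public.verify(signature, content)
        return True
    except InvalidSignature:
        return False

photo = b"raw bytes straight off a camera sensor"
cert = certify_human_content(photo)
print(verify_certificate(photo, cert))         # True
print(verify_certificate(photo + b"x", cert))  # False: any change voids the cert
```

The hard part isn't the crypto, it's deciding who gets to be the authority and stopping people from signing AI output as "real," which is a trust problem, not a code problem.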
Well, you can always produce coal in orbit from asteroids and ship it down