Khanzarate

joined 2 years ago
[–] Khanzarate@lemmy.world 6 points 3 days ago (1 children)

That's a ferret.

[–] Khanzarate@lemmy.world 44 points 4 days ago

You're missing how a bunch of their friends from their new social class already do drugs and how good those drugs feel.

Easy hole to fall into, but money honestly makes it harder to climb out of: you can always afford the drugs.

So it becomes the norm, whereas someone at the poverty line with an addiction can't afford drugs regularly, has to spend grocery money on them, and so might be addicted but also resent them.

Rich people can afford to normalize drugs and consider themselves fine while they're on them, because they're still living within their means.

[–] Khanzarate@lemmy.world 5 points 5 days ago

Old, near the top, but it still flows down. Dunno exact age. Blonde, but not everyone loses hair color.

[–] Khanzarate@lemmy.world 8 points 5 days ago (1 children)

The difference is, if this were to happen and it was found later that a fabricated case crucial to the defense had been cited, that's a mistrial. Maybe even dismissed with prejudice.

Courts are bullshit sometimes, it's true, but it would take deliberate judge/lawyer collusion for this to occur, or the incompetence of the judge and the opposing lawyer.

Is that possible? Sure. But the question was "will fictional LLM case law enter the general knowledge?" and my answer is "in a functioning court, no."

If the judge and a lawyer are colluding or if a judge and the opposing lawyer are both so grossly incompetent, then we are far beyond an improper LLM citation.

TL;DR As a general rule, you have to prove facts in court. When that stops being true, liars win, no AI needed.

[–] Khanzarate@lemmy.world 6 points 5 days ago (1 children)

Right, the internet that's increasingly full of AI material.

[–] Khanzarate@lemmy.world 8 points 5 days ago (1 children)

Nah, that means you can ask an LLM "is this real" and get a correct answer.

That defeats the point of a bunch of kinds of material.

Deepfakes, for instance. International espionage, propaganda, companies who want "real people".

A simple is_ai flag of any kind is undesirable to those sources, but their output will end up back in every LLM, even one that was behaving and flagging its own output.

You'd need every LLM to do this, and there are open-source models and foreign ones. And as has already been proven, you can't rely on an LLM to detect a generated product without such a flag.

The correct way to do it would be to instead organize a not-AI certification for real content. But that would severely limit training data. It could happen once quantity of data isn't the be-all and end-all for a model, but I dunno when or if that'll be the case.
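To make the certification idea concrete: a minimal sketch of how a trusted certifier could attest that a piece of content was verified as human-made, using an HMAC tag over the content. All the names here (`CERTIFIER_KEY`, `certify`, `verify`) are hypothetical, and a real scheme would use public-key signatures so anyone could check a tag without holding the secret; this only illustrates the tamper-evidence property.

```python
import hashlib
import hmac

# Hypothetical certifier secret; a real system would use an asymmetric
# keypair so verification doesn't require the signing secret.
CERTIFIER_KEY = b"certifier-secret-key"

def certify(content: bytes) -> str:
    """Issue an attestation tag for content verified as human-made."""
    return hmac.new(CERTIFIER_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check the tag; any edit to the content invalidates it."""
    expected = hmac.new(CERTIFIER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

article = b"A human wrote this."
tag = certify(article)
print(verify(article, tag))         # True
print(verify(article + b"!", tag))  # False: content was altered
```

The point of the design is that the default state is "uncertified": generated content doesn't need to cooperate to be excluded, which avoids the problem above of relying on every model to flag its own output.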

[–] Khanzarate@lemmy.world 16 points 5 days ago (3 children)

No, because there's still no case.

Law textbooks that taught an imaginary case would just get a lot of lawyers in trouble, because someone eventually will wanna read the whole case and will try to pull the actual case, not just a reference. Those cases aren't susceptible to this because they're essentially a historical record. It's like the difference between a scan of the Declaration of Independence and a high school history book describing it. Only one of those things could be bullshitted by an LLM.

Also applies to law schools. People reference back to cases all the time; there's an opposing lawyer, after all, who'd love a slam-dunk win of "your honor, my opponent is actually full of shit and making everything up". Any lawyer trained on imaginary material as if it were reality will just fail repeatedly.

LLMs can deceive lawyers who don't verify their work. But lawyers are in fact required to verify their work, and the ones who have been caught using LLMs are quite literally not doing their job. If that weren't the case, lawyers would make up cases themselves; they don't need an LLM for that. But it doesn't happen, because it doesn't work.

[–] Khanzarate@lemmy.world 116 points 1 week ago (3 children)

Yes, that's what he's saying.

[–] Khanzarate@lemmy.world 1 point 1 week ago (3 children)

Good luck, wish I could help.

[–] Khanzarate@lemmy.world 3 points 1 week ago (1 children)

10/10 would love to learn from you. I also love your taste in outfits.

[–] Khanzarate@lemmy.world 3 points 1 week ago

I feel you. I used to be the same. I got used to audiobooks in the same way, but only because I had to, when I had my kid, and I couldn't spare the hands to read. I could, however, get some sport headphones, bone-conducting, so I could hear the baby if she cried but could hear my book without disturbing her, and once I was used to that, that became my preferred way to read.

Maybe that made me more adaptable.

Either way, if you don't need to adapt, there's no harm in not adapting. Live your life, you'll adapt when/if you need to.

[–] Khanzarate@lemmy.world 43 points 1 week ago (29 children)

It's no problem, just learn German by translating the memes.
