
Long lists of instructions show how Apple is trying to navigate AI pitfalls.

[–] tacticalsugar@lemmy.blahaj.zone 1 points 11 months ago* (last edited 11 months ago) (1 children)

I'm asking for a source specifically on how commanding an LLM to not hallucinate makes it provide better output.

> Again, I’m not sure what kind of source you’d like to see for this, as it’s a basic consequence of how LLMs work. Maybe you could show me a source that proves you correct instead?

That's not how citations work. You are making the extraordinary claim that somehow, LLMs respond better to "do not hallucinate". I simply don't believe you and there is no evidence that you're correct, aside from you saying that maybe the entirety of reddit had "do not hallucinate" prepended when OpenAI scraped it.

[–] FooBarrington@lemmy.world 6 points 11 months ago* (last edited 11 months ago)

Yeah, that's about what I expected. If you re-read my comments, you might notice that I never stated that "commanding an LLM to not hallucinate makes it provide better output", but I don't think that you're here to have any kind of honest exchange on the topic.

I'll just leave you with one thought - you're making a very specific claim (“doing XYZ can't have a positive effect!”), and I'm just saying “here's a simple and obvious counter-example”. You should either provide a source for your claim or explain why my counter-example is not valid. But again, that would require you to have some interest in actual discussion.

> That’s not how citations work. You are making the extraordinary claim that somehow, LLMs respond better to “do not hallucinate”.

I didn't make an extraordinary claim, you did. You're claiming that the influence of "do not hallucinate" somehow fundamentally differs from the influence of any other phrase (extraordinary). I'm claiming that no, the influence is the same (ordinary).
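For anyone skimming this thread, here's roughly what the disputed technique looks like mechanically (a minimal sketch assuming the OpenAI Python client; the model name and prompt wording are illustrative, not Apple's actual setup). The only point it shows is that an instruction like "Do not hallucinate" enters the model as ordinary text in the prompt, the same way any other phrase would:

```python
# Minimal sketch (assumes the `openai` Python package is installed and
# OPENAI_API_KEY is set; model name and wording are illustrative only).
from openai import OpenAI

client = OpenAI()

# The instruction is just more text in the system prompt. The model sees it as
# ordinary tokens in its context window, exactly like the rest of the prompt.
system_prompt = (
    "You are a helpful assistant that summarizes emails. "
    "Do not hallucinate. Do not make up factual information."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Summarize this email: ..."},
    ],
)

print(response.choices[0].message.content)
```

Whether that line measurably reduces hallucinations is exactly what's being argued above; the sketch just shows there's nothing mechanically special about it.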