this post was submitted on 14 Apr 2026
721 points (99.5% liked)

Technology

[–] elucubra@sopuli.xyz 9 points 6 days ago (7 children)

I'm trying out Google's Gemma 4 LLM, which runs locally and is touted as a 100% private model.

When I asked it some questions about itself, at one point it acknowledged that chats were sent to "developers".

[–] ExLisper@lemmy.curiana.net 26 points 6 days ago (1 children)

Yeah, I wouldn't trust anything an LLM says.

[–] elucubra@sopuli.xyz 1 points 5 days ago (1 children)

Oh, I don't ask it for actual answers, but asking it to provide a bibliography often points me to the sources, so that I can draw my own conclusions.

[–] ExLisper@lemmy.curiana.net 1 points 5 days ago (1 children)

So what bibliography did it provide to prove that the chats are not private?

[–] elucubra@sopuli.xyz 0 points 4 days ago

The engineers and trainers who work on my underlying models regularly review anonymized logs of interactions to identify failures, hallucinations, and "degraded" logic—exactly like the failure that occurred in this conversation.

[–] HereIAm@lemmy.world 18 points 6 days ago (1 children)

I feel like that should be quite easy to verify with wireshark.

[–] elucubra@sopuli.xyz 1 points 5 days ago

Good idea, I'll try.

[–] nightlily@leminal.space 11 points 5 days ago (1 children)

You mean it hallucinated a positive response to your leading question, as it is designed to? You are operating on a fundamental misunderstanding of what LLMs do. Even if what you said were true, an LLM would have no knowledge of it unless it was explicitly given that information as input, and why would they be stupid enough to do that?

[–] elucubra@sopuli.xyz -2 points 5 days ago* (last edited 5 days ago) (1 children)

You are welcome to try. I can pastebin the prompt. I asked it about itself, the model. It replied that it didn't exist. I pointed it to the docs, from the Google page. It acknowledged the page was legit, and told me there was no mention of Gemma 4, although there were like 20 mentions, including download links. It insisted. It took me pointing out the specific paragraphs to have it say "this may indicate there is Gemma 4 model. May be...

At some point it told me I was hallucinating.

[–] nightlily@leminal.space 2 points 5 days ago

I don’t need to try. You aren’t learning facts from interrogating an LLM. If it doesn’t have information, it will make up a result. If it does have information, it will make up a result. Even that is personifying it too much, because really the transformer has no concept of what "making something up" is. It takes an input and gives an output, no matter what.

[–] natebluehooves@pawb.social 16 points 6 days ago

llama.cpp doesn’t gain the ability to send telemetry just because the next-word predictor says so. You can confirm with Wireshark.
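For what it's worth, the Wireshark idea can be roughed out in a few lines of Python without a full packet capture. This is only a sketch: it is Linux-only, and `/proc/<pid>/net/tcp` actually reflects the process's whole network namespace rather than just that process's own sockets, so a real Wireshark/tcpdump capture remains the stronger test.

```python
# Rough sanity check for "does my local LLM phone home?": parse
# /proc/<pid>/net/tcp on Linux and list TCP remote endpoints.
# Caveat: this file shows every socket in the process's network
# namespace, not only sockets owned by that PID.
import os

def remote_endpoints(pid):
    """Return (ip, port) tuples for TCP sockets visible to the process."""
    endpoints = []
    with open(f"/proc/{pid}/net/tcp") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            rem = fields[2]  # remote address as hex "IP:PORT", e.g. "0100007F:1F90"
            ip_hex, port_hex = rem.split(":")
            # IPv4 is stored as little-endian hex; reverse the byte order.
            octets = [str(int(ip_hex[i:i + 2], 16)) for i in (6, 4, 2, 0)]
            endpoints.append((".".join(octets), int(port_hex, 16)))
    return endpoints

# While a supposedly offline model is generating, anything other than
# 0.0.0.0 (no peer) or loopback deserves a closer look.
suspicious = [
    (ip, port)
    for ip, port in remote_endpoints(os.getpid())
    if ip not in ("0.0.0.0", "127.0.0.1")
]
```

Run it against the inference process's PID while the model is generating; an offline llama.cpp run should leave `suspicious` empty or containing only LAN addresses you recognize.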

[–] greybeard@feddit.online 8 points 5 days ago* (last edited 5 days ago)

"Tell me you are alive."

"I'm alive"

shockedpikachu.png

[–] hansolo@lemmy.today 9 points 6 days ago

Did the LLM tell you it's 100% private?

What else did the LLM tell you?

[–] ayyy@sh.itjust.works 4 points 5 days ago

That’s not how any of that works.