minoscopede

joined 11 months ago
[–] minoscopede@lemmy.world 15 points 1 day ago

πŸ’― we should all be very wary of voting machines. If it's not fully open source and cryptographically verifiable, it's not secure.

[–] minoscopede@lemmy.world 5 points 1 day ago* (last edited 1 day ago)

Context: I worked in IAM (computer security) at a past job.

In computer security, we don't wait to get proof that a vulnerability was exploited. We have to operate under the assumption that any vulnerability was immediately exploited, and take immediate action to fix it and limit the impact. Doubly so when the stakes are high.

We need popular support to get real security experts to investigate these claims. If there was even a single path that could have enabled tampering at this scale, we need to completely secure these systems and do an immediate recount/re-vote.

I'll also say, I was surprised to learn that these voting systems and their specs are not fully public and open source. That alone makes me very uncomfortable. Security through obscurity is not security at all.
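For anyone wondering what "cryptographically verifiable" could look like in practice, here's a toy sketch (just an illustration of the idea, not a real voting protocol): a hash-chained public log, where anyone with the ballot data can recompute the final hash and detect tampering.

```python
import hashlib

def chain_hash(prev_hash: str, ballot: str) -> str:
    """Each entry commits to the previous one, so editing any entry
    changes every hash after it."""
    return hashlib.sha256((prev_hash + ballot).encode()).hexdigest()

ballots = ["ballot-001:A", "ballot-002:B", "ballot-003:A"]

# Build the hash chain from a public genesis value.
h = "genesis"
log = []
for b in ballots:
    h = chain_hash(h, b)
    log.append((b, h))

# Anyone with the ballots can independently recompute the chain
# and verify it matches the published final hash.
h2 = "genesis"
for b in ballots:
    h2 = chain_hash(h2, b)
assert h2 == log[-1][1]
```

Real systems (end-to-end verifiable voting) are far more involved, but the core promise is the same: you don't have to trust the machine, because you can check the math yourself.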

[–] minoscopede@lemmy.world 2 points 6 days ago* (last edited 6 days ago)

That write-up is much more than just "don't vote." It's about fully withdrawing from the system and rejecting citizenship, including everything that comes with it, like paying taxes and owning private property.

If someone pays taxes, legitimizes the government, and also doesn't vote... Then that's likely the worst of both worlds from the author's perspective.

[–] minoscopede@lemmy.world 3 points 6 days ago* (last edited 6 days ago) (1 children)

I'd encourage you to research more about this space and learn more.

As it is, the statement "Markov chains are still the basis of inference" doesn't make sense, because Markov chains are a separate thing. You might be thinking of Markov decision processes (MDPs), which are used in training RL agents, but that's also unrelated, because these models aren't RL agents; they're trained with supervised learning. And even if they were RL agents, the MDP describes the training environment, not the model itself, so it's not really used for inference.
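To make the distinction concrete, here's a toy sketch (the states and names are made up for illustration): a Markov chain is just states plus transition probabilities, while an MDP adds actions and rewards on top.

```python
import random

# Markov chain: the next state depends only on the current state.
# No actions, no rewards.
chain = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step_chain(state):
    states, probs = zip(*chain[state])
    return random.choices(states, probs)[0]

# MDP: the transition also depends on a chosen action, and yields a
# reward. This describes an RL *training environment*, not the
# trained model itself.
mdp = {
    ("home", "walk"):  ("park",  1.0),   # (next state, reward)
    ("home", "drive"): ("store", -0.5),
}

def step_mdp(state, action):
    return mdp[(state, action)]

state = "sunny"
for _ in range(3):
    state = step_chain(state)  # no action was ever chosen
```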

I mean this just as an invitation to learn more, and not pushback for raising concerns. Many in the research community would be more than happy to welcome you into it. The world needs more people who are skeptical of AI doing research in this field.

[–] minoscopede@lemmy.world 66 points 1 week ago* (last edited 1 week ago) (22 children)

I see a lot of misunderstandings in the comments 🫀

This is a pretty important finding for researchers, and it's not obvious by any means. It doesn't show a problem with LLMs' abilities in general. The issue they discovered is specific to so-called "reasoning models" that iterate on their answer before replying, and it might indicate that the training process isn't sufficient for true reasoning.

Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that's a flaw that needs to be corrected before models can actually reason.
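To make that concrete, here's a purely illustrative sketch (not how any specific lab actually trains models): an outcome-only reward scores just the final answer, so a flawed intermediate step that happens to land on the right answer still gets full reward.

```python
def outcome_reward(trace, final_answer, correct_answer):
    """Outcome-only reward: intermediate reasoning steps are ignored."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(trace, final_answer, correct_answer, step_is_valid):
    """Process reward: every step must check out, not just the answer."""
    if not all(step_is_valid(s) for s in trace):
        return 0.0
    return 1.0 if final_answer == correct_answer else 0.0

# A trace containing a wrong step that still lands on the right answer:
trace = ["2 + 2 = 5", "5 - 1 = 4"]
valid = lambda s: s != "2 + 2 = 5"

print(outcome_reward(trace, "4", "4"))         # 1.0 -- the flaw goes unpunished
print(process_reward(trace, "4", "4", valid))  # 0.0 -- the flawed step is penalized
```

Under outcome-only training, nothing pushes the model's intermediate "thinking" toward being correct, only the final token sequence.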

[–] minoscopede@lemmy.world 1 points 2 weeks ago

Beautiful! I'll definitely give this a go

[–] minoscopede@lemmy.world 14 points 2 weeks ago (3 children)

python -m http.server is still my media server of choice. It's never let me down.
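And if you ever want the same thing scripted instead of run from the CLI, the standard library gets you there in a few lines (toy example; the directory and file names here are made up):

```python
import functools
import http.server
import pathlib
import tempfile
import threading
import urllib.request

# Same idea as `python -m http.server`, but embedded in a script:
# serve a directory over HTTP with nothing but the standard library.
media_dir = tempfile.mkdtemp()
pathlib.Path(media_dir, "song.txt").write_text("la la la")

handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory=media_dir)
srv = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)  # port 0 = any free port
threading.Thread(target=srv.serve_forever, daemon=True).start()

port = srv.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/song.txt").read()
print(body.decode())  # la la la
srv.shutdown()
```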

[–] minoscopede@lemmy.world 4 points 2 weeks ago

On any site with unverified signups (i.e., all of them), you can't.

If you want to talk to real people, you'd have to use a platform that has in-person ID verification. Like a pub, or a park.

Good luck finding a bot-free place on your phone. It'd have to involve zero-knowledge proofs and biometrics. And even then, you can't really be sure that person isn't using a bot to write without full root access to their system and a live webcam feed.

[–] minoscopede@lemmy.world 1 points 3 weeks ago

> I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.
>
> Every step of any deductive process needs to be citable and traceable.

I mostly agree, but "never" is too high a bar IMO. It's way, way higher than the bar even for humans. Maybe like 0.1% or something would be reasonable?

Even Einstein misremembered things sometimes.

[–] minoscopede@lemmy.world 7 points 1 month ago* (last edited 1 month ago) (1 children)

Eh, certain parts of LA are safe. But LA is actually pretty conservative in other areas, due to a large religious population, and a lot of first-gen immigrants.

[–] minoscopede@lemmy.world 3 points 1 month ago* (last edited 1 month ago) (2 children)

I have to ask: would this story be so popular if they didn't mention that the four people that did this were Chinese?

Racism doesn't disappear just because the article doesn't say the quiet part out loud. We all know the thought process that led to this article's virality.

Let's do better, Lemmy. We all have an opportunity to make the world a more tolerant and empathetic place through what we post and upvote.
