Sylra

joined 1 month ago
[–] Sylra@lemmy.cafe 7 points 2 days ago (2 children)

"spiritual, but not religious"

then explain in more detail

[–] Sylra@lemmy.cafe 1 points 3 days ago

AI at its best is really just a mirror. It can only help you automate what you already know how to do. To get the most out of it right now, you need skilled engineers. But let's be honest, those people are so talented they probably could've worked wonders even with 17th-century AI, sooo.

[–] Sylra@lemmy.cafe 8 points 3 days ago

Openpilot, made by comma.ai, is an open-source driving assistant that adds smart features like adaptive cruise control and lane centering to over 325 car models, including Toyotas, Hyundais, Hondas, and more. It runs on comma.ai's hardware (the device you install in your car) and uses cameras and sensors to partially automate the driving. Makes daily driving a bit easier and more relaxed.
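
If you want a rough feel for what "lane centering" means under the hood, here's a toy proportional-steering sketch in Python. This is just my illustration, not openpilot's actual controller; the gains and signal names are made up:

```python
# Toy lane-centering idea: steer against the measured offset from the lane center
# and against the heading error. Gains here are arbitrary, purely illustrative.
def steering_command(lane_offset_m, heading_error_rad, kp_offset=0.08, kp_heading=0.5):
    """Return a small steering angle (rad) that nudges the car back toward the lane center."""
    return -(kp_offset * lane_offset_m + kp_heading * heading_error_rad)

# Car sitting 0.3 m right of center, pointing slightly away: command a gentle left correction.
print(steering_command(lane_offset_m=0.3, heading_error_rad=0.02))
```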

[–] Sylra@lemmy.cafe 4 points 3 days ago (10 children)

Okay, fine, you caught me. I'm actually an AI. But wait... if I were really an AI, would I even admit it? Hmm

[–] Sylra@lemmy.cafe 49 points 3 days ago (20 children)

So, this is what I understood so far:

  • A group of authors, including George R.R. Martin, sued OpenAI in 2023. They said the company used their books without permission to train ChatGPT and that the AI can produce content too similar to their original work.

  • In October 2025, a judge ruled the lawsuit can move forward. This came after ChatGPT generated a detailed fake sequel to one of Martin's books, complete with characters and world elements closely tied to his universe. The judge said a jury could see this as copyright infringement.

  • The court has not yet decided whether OpenAI's use counts as fair use. That remains a key legal question.

  • This case is part of a bigger debate over whether AI companies can train on copyrighted books without asking or paying. In a similar case against Anthropic, a court once suggested AI training might be fair use, but the company still paid $1.5 billion to settle.

  • No final decision has been made here, and no trial date has been set.

 

Openpilot 0.10.1 introduces the North Nevada Model, featuring major improvements to the World Model architecture. The system now infers 6-degree-of-freedom ego localization directly from images, removing the need for external localization inputs. This avoids over-constraining the training data and opens the door for future self-generated imagery.

To support this change, the autoencoder Compressor was upgraded with masked image modeling and switched from a CNN to a Vision Transformer architecture, and the World Model itself was scaled from 500 million to 1 billion parameters. All models now train on a much larger dataset of 2.5 million segments, up from 437,000, covering more vehicles, countries, and driving scenarios.
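
To make the jargon concrete, here's a minimal PyTorch sketch of masked image modeling with a small ViT-style encoder, plus a made-up 6-DoF pose head. This is only a toy illustration of the concepts; the sizes, layer names, and heads are my own assumptions, not comma's actual model:

```python
# Toy masked-image-modeling ViT with a hypothetical 6-DoF ego-pose head.
import torch
import torch.nn as nn

class TinyMaskedViT(nn.Module):
    def __init__(self, img_size=64, patch=8, dim=128, depth=4, heads=4, mask_ratio=0.5):
        super().__init__()
        self.num_patches = (img_size // patch) ** 2
        self.mask_ratio = mask_ratio
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # image -> patch tokens
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decode_patch = nn.Linear(dim, 3 * patch * patch)  # reconstruct masked patches
        self.pose_head = nn.Linear(dim, 6)                     # hypothetical 6-DoF pose (xyz + roll/pitch/yaw)

    def forward(self, imgs):
        tokens = self.patchify(imgs).flatten(2).transpose(1, 2) + self.pos   # (B, N, dim)
        mask = torch.rand(tokens.shape[:2], device=imgs.device) < self.mask_ratio
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
        feats = self.encoder(tokens)
        recon = self.decode_patch(feats)          # per-patch pixel reconstruction target
        pose = self.pose_head(feats.mean(dim=1))  # pooled features -> ego-pose estimate
        return recon, pose, mask

model = TinyMaskedViT()
recon, pose, mask = model(torch.randn(2, 3, 64, 64))
print(recon.shape, pose.shape)  # torch.Size([2, 64, 192]) torch.Size([2, 6])
```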

The UI has been completely rewritten, moving from Qt/Weston to Python with raylib. This reduces code complexity by about 10,000 lines, cuts boot time by 4 seconds, lowers GPU usage, and simplifies development.
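
For anyone curious what "Python with raylib" looks like in practice, here's a minimal draw loop using pyray (one common Python binding for raylib). It's just a toy loop, not the actual openpilot UI:

```python
# Minimal pyray draw loop: open a window, clear it, draw a speed readout, repeat.
import pyray as pr

pr.init_window(800, 480, "toy ui")
pr.set_target_fps(60)
while not pr.window_should_close():
    pr.begin_drawing()
    pr.clear_background(pr.BLACK)
    pr.draw_text("SET 65 mph", 40, 40, 48, pr.GREEN)  # e.g. a cruise set-speed readout
    pr.end_drawing()
pr.close_window()
```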

Finally, the Driver Monitoring Model's training infrastructure has been streamlined with dynamic data streaming, though the model’s functionality remains unchanged.
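
As a rough illustration of what "dynamic data streaming" can mean, here's a toy PyTorch IterableDataset that yields frames lazily instead of staging a fixed dataset up front. Purely my sketch; the segment names and shapes are placeholders, not comma's pipeline:

```python
# Toy streaming dataset: segments are fetched/decoded on the fly as training consumes them.
import torch
from torch.utils.data import IterableDataset, DataLoader

class StreamingSegments(IterableDataset):
    def __init__(self, segment_ids):
        self.segment_ids = segment_ids  # hypothetical list of segment locations

    def __iter__(self):
        for seg in self.segment_ids:
            # A real pipeline would download and decode the segment here.
            frames = torch.randn(16, 3, 64, 64)  # placeholder frames for one segment
            for frame in frames:
                yield frame

loader = DataLoader(StreamingSegments([f"segment-{i}" for i in range(4)]), batch_size=8)
for batch in loader:
    print(batch.shape)  # torch.Size([8, 3, 64, 64]), streamed rather than preloaded
    break
```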

[–] Sylra@lemmy.cafe 5 points 3 days ago (4 children)

The fact that many human drivers are “distracted, drunk, tired, or just reckless” is a huge point in favor of self-driving cars. There’s no way to guarantee that a human driver is focused and not reckless, and experience can only be guaranteed for professional drivers.

You're right that many human drivers are distracted, drunk or reckless, and that’s a serious problem. But not everyone is like that. Millions of people drive sober, focused and carefully every day, following the rules and handling tough situations without issue.

When we say self-driving cars are safer, we’re usually comparing them to all human drivers, including the worst ones, while testing the cars only in favorable conditions, such as good weather and well-mapped areas. They often avoid driving in rain, snow or complex environments where judgment and adaptability matter most.

That doesn’t seem fair. If these vehicles are going to replace human drivers entirely, they should be at least as good as a responsible, attentive person, not just better than someone texting or drunk. Right now, they still make strange mistakes, like stopping for plastic bags, misreading signals or freezing in uncertain situations. A calm, experienced driver would usually handle those moments just fine.

So while self-driving tech has promise, calling it "safer" today overlooks both the competence of good drivers and the limitations of the current systems.

Plus, the way they fail is different from how human drivers fail, which makes it harder for other drivers to anticipate and react to them.

Once again, I believe we'll get there eventually, but it's still a bit rough for today.

[–] Sylra@lemmy.cafe 2 points 3 days ago

You're right to bring that up. There was and still is some concern about Ventoy using a lot of precompiled binary files (called "blobs") in its source code, rather than building everything from source during release. This makes it harder to verify that the binaries are safe and haven't been tampered with, especially after incidents like the XZ Utils backdoor in 2024.

The developer acknowledges this and has started listing all the blobs with their sources and checksums here:
https://github.com/ventoy/Ventoy/blob/master/BLOB_List.md
This file was created in response to issue #3224, which was opened specifically to address concerns about these blobs. It includes descriptions, where each blob came from, and SHA256 hashes so users can check them manually. However, it doesn’t include automated build scripts, so verification still depends on manual effort.
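
If you want to check a blob yourself, a small Python helper like this works; the file path and expected hash below are placeholders you'd swap in from BLOB_List.md:

```python
# Compute the SHA256 of a downloaded blob and compare it to the hash listed in BLOB_List.md.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "replace-with-hash-from-BLOB_List.md"
actual = sha256_of("path/to/blob.bin")  # hypothetical path to the downloaded blob
print("match" if actual == expected else f"mismatch: {actual}")
```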

The discussion started in early 2024 in issue #2795:
https://github.com/ventoy/Ventoy/issues/2795

And as of May 2025, the maintainer proposed a plan to improve transparency by using GitHub CI to build the blobs from source in separate repositories:
https://github.com/ventoy/Ventoy/issues/3224

No major malicious activity has been found, but the lack of full reproducible builds means some trust is required. If you're security-conscious, it's worth verifying the hashes yourself or considering alternatives. The project remains open source and widely used, but this issue hasn't been fully resolved yet.

 

They always say self-driving cars are safer, but the way they prove it feels kind of dishonest. They compare crash data from all human drivers, including people who are distracted, drunk, tired, or just reckless, to self-driving cars that have top-tier sensors and operate only in very controlled areas, like parts of Phoenix or San Francisco. These cars do not drive in snow, heavy rain, or complex rural roads. They are pampered.

If you actually compared them to experienced, focused human drivers, the kind who follow traffic rules and pay attention, the safety gap would not look nearly as big. In fact, it might even be the other way around.

And nobody talks about the dumb mistakes these systems make. Like stopping dead in traffic because of a plastic bag, or swerving for no reason, or not understanding basic hand signals from a cop. An alert human would never do those things. These are not rare edge cases. They happen often enough to be concerning.

Calling this tech safer right now feels premature. It is like saying a robot that walks perfectly on flat ground is better at hiking than a trained mountaineer, just because it has not fallen yet.

[–] Sylra@lemmy.cafe 6 points 2 weeks ago

gpt-oss is borderline crap: it's not that smart, not that great, and it's pretty censored, but it can have niche uses for programming. The 20B in particular can be easier to run in some setups than competitors like Qwen3-30B. The 120B is quite heavy, and the cost-to-performance ratio is not good.

Meta has abandoned the open-source ideal since Llama 4; they've gone closed source.

Older open-source versions of Grok are literally useless; no one should use them. Their closed-source cloud models are decent.

DeepSeek and Alibaba's models like Qwen are good.

[–] Sylra@lemmy.cafe 4 points 3 weeks ago

Think of AI as a mirror of you: at best, it can only match your skill level and can't be smarter or better. If you're unsure or make mistakes, it will likely repeat them. Like people, it can get stuck on hard problems and without a human to help, it just can't find a solution. So while it's useful, don't fully trust it and always be ready to step in and think for yourself.

[–] Sylra@lemmy.cafe 6 points 3 weeks ago

Yeah, great point! Sticky posts don't usually get much attention at first, but I've found them really useful as go-to references. Kind of like a 'start here' guide. If I want to dig into something, like a game, tool, or skill, a good stickied post with links and resources saves me so much time. And it's still helpful months later. Honestly, more communities could use them!

 

Lemmy is great for tech, news, politics, and everyday discussion.

Sometimes it’s hard to find guides, wikis or links for deeper learning in areas outside of tech. Things you can learn from even when no one is posting.

Maybe we could share more of that kind of content. Simple guides, helpful links, or sticky posts with resources.

It could make communities more helpful for everyone, over time.

Just a small idea. Thanks for reading.

[–] Sylra@lemmy.cafe 2 points 3 weeks ago (1 children)

Smaller content creators with fewer views are generally more genuine.

If I see someone with several thousand views, I'm instantly skeptical. If their channel is part of their work, I pay attention too. I'm fine with AI assisted content as long as there's an actual human behind the keyboard who truly took time to think and used their brain.

[–] Sylra@lemmy.cafe 5 points 3 weeks ago (3 children)

Most AI video tools aren't perfect, so you can only spot the most obvious fakes. If someone takes time to edit a fake video, it becomes very hard to detect. The best way to judge is to check the source, the motive, and whether you trust the person who shared it.
