this post was submitted on 24 May 2025
Technology
A Tesla in FSD mode just randomly veered off the road into a tree. There is video. It makes no sense; it is very difficult to work out why the AI thought that looked like a good move.
The tools this author says we have do not work the way people claim they do.
They only have to work better and more consistently than humans to be a net positive, which I believe most of these systems already do by a wide margin. Psychologically, it's harder to accept a mistake from technology than from a human because of the lack of control, but if the goal is to save lives, these safety systems accomplish that.
Evidence, please.
I have literally been on thousands of drives where a human has not randomly driven into a tree.
You are making a claim here: that these AI systems are safer than humans. There is at least one clear counterexample to that claim in existence (which I cited: https://youtu.be/frGoalySCns if anyone wants to try to figure out what this AI was doing), and there are others, including cases where these systems have driven into the sides of tractor trailers.

I assume you will make an argument about aggregates, but the sample size we have for these AI driving systems is many orders of magnitude smaller than the sample size we have for human drivers. And having now watched years of these incidents continue to pile up, I believe much more rigorous research and testing is needed before anyone can validly claim these systems are somehow safer.
There are six SAE-classified levels of driving automation (levels 0 through 5). For the lower levels of automation, the very article you are responding to quotes this evidence for you. Here is another article that goes deeper into it; I haven't read it all, so feel free to draw your own conclusions, but this data has been available and well reported for many years. https://www.consumeraffairs.com/automotive/autonomous-vehicle-safety-statistics.html