bignose@programming.dev (1 month ago):

When I'm using AI for coding, I find myself constantly making little risk assessments: whether to trust the AI, how much to trust it, and how much work I need to put into verifying the results. The more experience I get with AI, the more honed and intuitive these assessments become.

For a system with such high costs (to the environment, to the vendor, to the end user in the form of a subscription), that's a damningly low level of reliability.

If my traditional code editor's code completion is even 0.001% unreliable (say it emits a name that just isn't in my code base), that feature is broken and needs to be fixed. If I have to doubt whether the feature works every time I invoke it, it's not a tool I can rely on.
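To make the contrast concrete, here's a minimal sketch of why traditional completion can offer that guarantee at all. This is my own toy illustration in Python, not any real editor's implementation; `identifiers_in`, `complete`, and the sample source are all made up:

```python
# Toy completion engine: like a traditional editor, it can only ever
# suggest identifiers that actually occur in the code base. Its
# reliability is structural -- it cannot emit a name that isn't there.
import ast

def identifiers_in(source: str) -> set[str]:
    """Collect every name defined or referenced in the given Python source."""
    names: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):          # variable references and assignments
            names.add(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            names.add(node.name)                # function and class definitions
    return names

def complete(prefix: str, source: str) -> list[str]:
    """Suggest only names present in the code base, never invented ones."""
    return sorted(name for name in identifiers_in(source) if name.startswith(prefix))

code = (
    "def handle_request(request):\n"
    "    response = make_response(request)\n"
    "    return response\n"
)
print(complete("re", code))  # ['request', 'response']: every suggestion really exists
```

An LLM-based assistant offers no such structural guarantee: its suggestions are sampled, not looked up, so the burden of verification falls on me every single time.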

Why would we accept far worse reliability from a tool that consumes gargantuan amounts of power, water, and political effort, and comes with a high subscription fee on top?