this post was submitted on 23 Dec 2025
20 points (95.5% liked)


Last week a request for comments (RFC) was issued around establishing an LLVM AI Tool Use Policy. The proposed policy would allow AI-assisted contributions to the open-source compiler codebase, but would require a "human in the loop": a contributor versed enough in the change to answer questions during code review. Separately, yesterday a proposal was sent out for creating an AI-assisted fixer bot to help with Bazel build system breakage.

Under the proposed policy, AI-assisted contributions would be welcome as long as there is a human in the loop who understands the code and is competent enough to answer any questions during code review. Contributors should also be transparent when a change contains "substantial amounts" of tool-generated content. A pull request adding the AI contribution policy to the LLVM documentation is open on GitHub, and the policy remains under discussion.

1 comment
[–] Telorand@reddthat.com 11 points 1 day ago

The main maintainer of curl recently encountered something similar. Some users had used their own models to find and report hundreds of potential bugs (and were open about using those tools when asked). After review, the maintainers incorporated around 40% of the suggested fixes, some addressing genuine breakage and others being semantic quality-of-life improvements. He was surprised that an AI might actually be useful for something like that.

But in the whole process, there was a human reviewing and checking the work. At no point were these fixes just taken as gospel, and even the reporters were using their own specialized models for this task. I think introducing AI-powered analysis isn't necessarily a bad thing, but relying upon public models and cutting out humans anywhere in the review and application process is a recipe for disaster.