this post was submitted on 13 Apr 2026
62 points (97.0% liked)

Linux

13259 readers

A community for everything relating to the GNU/Linux operating system (except the memes!)

Original icon base courtesy of lewing@isc.tamu.edu and The GIMP

founded 2 years ago
all 15 comments
[–] onlinepersona@programming.dev 36 points 1 day ago (1 children)

AI agents MUST NOT add Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin (DCO). The human submitter is responsible for:

  • Reviewing all AI-generated code
  • Ensuring compliance with licensing requirements
  • Adding their own Signed-off-by tag to certify the DCO
  • Taking full responsibility for the contribution

That's fair. Nobody has to know you wrote it with a bot, since that's impossible to detect, but you have to own it and be ready for the discussions that follow.
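For context, the quoted responsibilities map to something a CI gate could check mechanically. A minimal sketch, assuming a hypothetical `check_signoff` helper and a made-up agent denylist (this is not the kernel's actual tooling, and substring matching like this would false-positive on real names):

```python
import re

# Trailer format used by "git commit -s": Signed-off-by: Name <email>
SIGNOFF_RE = re.compile(r"^Signed-off-by:\s*(?P<name>.+?)\s*<(?P<email>[^>]+)>\s*$")

# Hypothetical denylist of agent identities; a real check would need a
# maintained list or better heuristics.
AI_AGENT_MARKERS = ("bot", "ai-agent", "noreply@llm.example")

def check_signoff(commit_message: str) -> tuple[bool, str]:
    """Return (ok, reason): require a sign-off, reject agent sign-offs."""
    signoffs = [
        m for line in commit_message.splitlines()
        if (m := SIGNOFF_RE.match(line.strip()))
    ]
    if not signoffs:
        return False, "missing Signed-off-by trailer"
    for m in signoffs:
        ident = (m["name"] + " " + m["email"]).lower()
        if any(marker in ident for marker in AI_AGENT_MARKERS):
            return False, f"AI agent sign-off not allowed: {m['email']}"
    return True, "ok"
```

A human sign-off passes; a commit signed off by an identity on the denylist, or with no trailer at all, is rejected.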

[–] peterhorvath@mastodon.de 1 points 1 day ago (1 children)

@onlinepersona @Innerworld And what do you think, will the AI agent be ready for the discussion that follows?

[–] towerful@programming.dev 7 points 18 hours ago (1 children)

Well, no. But then it gets rejected, and further PRs that also fail the check will likely get you banned from contributing.

The human is responsible.
If the code or PR fails, the human has to own that.
If the human fails to own that, the human gets banned.

[–] peterhorvath@mastodon.de 0 points 10 hours ago

@towerful Also, you know, it is only a matter of time before it will be.

[–] g_blob@programming.dev 3 points 23 hours ago

It will comply but will not compile

[–] heliotrope@retrofed.com 18 points 1 day ago* (last edited 1 day ago) (2 children)

Rule-wise, this seems fair.

Regardless, if AI usage continues to increase in this manner, I'll likely be driving NetBSD, AROS, and FreeDOS by the end of the decade.

Maybe even a little TempleOS or ZealOS, for flavour.

[–] Dumhuvud@programming.dev 5 points 1 day ago* (last edited 1 day ago) (1 children)
[–] magikmw@piefed.social 3 points 1 day ago (1 children)

It's just regarding labeling. It's unenforceable to keep a project "clean" of AI.

[–] Dumhuvud@programming.dev 4 points 1 day ago (1 children)
[–] magikmw@piefed.social 4 points 1 day ago (1 children)

OK, I see the BDFL's intent is different, but the linked document only mentions labeling. I can only assume the low-quality (etc.) issues are handled as a judgement call, and in that way I consider the "no AI whatsoever" rule unenforceable.

If I use an LLM to generate code under my supervision, review, quality checks, and testing, up to standard, how would it be detected that I used AI if I don't label it as such? Will they look for em-dashes in comments?

[–] soc@programming.dev 1 points 7 hours ago (1 children)

"Let's not have rules, because some may break them!"

🤡

[–] magikmw@piefed.social 1 points 5 hours ago

Rules without enforcement are just self-deception.

[–] misk@piefed.social 4 points 1 day ago

Given that nobody is able to guarantee that code used for training was used according to its license, this means no hallucinated code in Linux. Nice.

[–] abcdqfr@lemmy.world 2 points 1 day ago

It really has come a long way in a relatively short time, in terms of quality and, well, shitting under the rug.