this post was submitted on 22 May 2025
200 points (95.0% liked)

Programming

top 44 comments
[–] LeFantome@programming.dev 15 points 12 hours ago (1 children)

Can Open Source defend against copyright claims for AI contributions?

If I submit code to ReactOS that was trained on leaked Microsoft Windows code, what are the legal implications?

[–] proton_lynx@lemmy.world 12 points 10 hours ago* (last edited 10 hours ago)

> what are the legal implications?

It would be so fucking nice if we could use AI to bypass copyright claims.

[–] notannpc@lemmy.world 15 points 14 hours ago (1 children)

AI is at its most useful in the early stages of a project. Imagine coming to the fucking ssh project with AI slop thinking it has anything of value to add 😂

[–] HaraldvonBlauzahn@feddit.org 15 points 8 hours ago* (last edited 8 hours ago) (1 children)

The early stages of a project are exactly where you should think long and hard about what exactly you want to achieve, what qualities you want the software to have, what the detailed requirements are, how you will test them, and how the UI should look. And from that, you derive the architecture.

AI is fucking useless at all of that.

In all complex planned activities, laying the right groundwork and foundations is essential for success. Software engineering is no different. You won't order a bricklayer apprentice to draw the plan for a new house.

And if your difficulty is a lack of detailed knowledge of a programming language, the best approach might be (depending on the case!) to write a first prototype in a language you know well, so that your head is free to think about the concerns listed in the first paragraph.

[–] ulterno@programming.dev 1 points 3 hours ago* (last edited 3 hours ago) (2 children)

AI is only good for the stage when...

AI is only good in case you want to...

Can't think of anything. Edit: yes, I really tried.
Playing the Devil's advocate was easier than being AI's advocate.


I might have said it is good in case you are pitching a project and want to show some UI stuff, maybe, without having to code anything.
But you know, there are actually specialised tools for that, which UI/UX designers used to show me what I needed to implement.
And when I am pitching UI, I just use pencil and paper, and it is so much more efficient than anything AI, because I don't need to talk to something to make a mockup that is then used to talk to someone else. I can just draw it in front of the other guy with zero preparation, right as it comes into my mind, and I don't need to pay for any data center usage. And if I need to go paperless, there are whiteboards/blackboards/greenboards and Inkscape.

After having banged my head trying to explain code to a new developer, so that they can hopefully start making meaningful contributions, I don't want to be banging my head on something worse than a new developer, hoping that it will output something that is logically sound.

[–] ChickenLadyLovesLife@lemmy.world 3 points 2 hours ago (1 children)

AI is good for the early stages of a project ... when it's important to create the illusion of rapid progress so that management doesn't cancel the project while there's still time to do so.

[–] ulterno@programming.dev 1 points 1 hour ago

Ahh, so an outsourced con~~man~~computer.

[–] Irelephant@lemm.ee 2 points 3 hours ago (1 children)

Its good as a glorified autocomplete.

[–] ulterno@programming.dev 1 points 3 hours ago (1 children)

Except that an autocomplete with simple, lightweight, appropriate heuristics can actually make your work much easier, and it won't make you read its output again and again before you can be confident about it.
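A deterministic completer like that can be tiny. Here's a toy sketch in Python to show the idea (the indexing heuristic, sample source, and all identifier names are made up for illustration, not taken from any real editor):

```python
import re

def build_index(source: str) -> set[str]:
    """Collect identifiers already present in the source as completion candidates."""
    return set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source))

def complete(prefix: str, index: set[str], limit: int = 5) -> list[str]:
    """Suggest known identifiers starting with the typed prefix, shortest first."""
    matches = [name for name in index if name.startswith(prefix) and name != prefix]
    return sorted(matches, key=lambda n: (len(n), n))[:limit]

# Hypothetical file contents being edited:
source = "def parse_config(path): ...\ndef parse_args(argv): ...\nconfig_cache = {}"
index = build_index(source)
print(complete("par", index))  # → ['parse_args', 'parse_config']
```

Because it only ever suggests names that already exist in the file, there is nothing to double-check afterwards.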

[–] Irelephant@lemm.ee 3 points 2 hours ago

True, and it doesn't boil the oceans and poison people's air.

[–] Prime@lemmy.sdf.org 8 points 16 hours ago

Microsoft is doing this today. I can't link it because I'm on mobile. It is in dotnet. It is not going well :)

[–] oakey66@lemmy.world 22 points 20 hours ago

It’s not good because it has no context on what is correct or not. It’s constantly making up functions that don’t exist or attributing functions to packages that don’t exist. It’s often sloppy in its responses because the source code it parrots is some amalgamation of good coding and terrible coding. If you are using this for your production projects, you will likely not be knowledgeable when it breaks, it’ll likely have security flaws, and will likely have errors in it.
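One cheap defence against that failure mode is to mechanically verify that a suggested call actually exists before trusting it. A minimal sketch in Python (the helper name and the nonexistent `json.serialize` example are hypothetical illustrations, not part of any real tool):

```python
import importlib

def call_exists(module_name: str, func_name: str) -> bool:
    """Check that a suggested module-level function actually exists and is callable."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return callable(getattr(module, func_name, None))

print(call_exists("json", "dumps"))      # real stdlib function → True
print(call_exists("json", "serialize"))  # plausible-sounding but made up → False
```

It won't catch subtle logic errors, but it does catch the "function that doesn't exist" class of hallucination immediately.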

[–] atzanteol@sh.itjust.works 41 points 23 hours ago (2 children)

Have you used AI to code? You don't say "hey, write this file" and then commit it as "AI Bot 123 aibot@company.com".

You start writing a method and get auto-completes that are sometimes helpful. Or you ask the bot to write out an algorithm. Or to copy something and modify it 30 times.

You're not exactly keeping track of everything the bots did.

[–] eager_eagle@lemmy.world 40 points 23 hours ago (1 children)

yeah, that's... one of the points in the article

[–] zqwzzle@lemmy.ca 5 points 20 hours ago

We could see how the copilot PRs went:

[–] Blue_Morpho@lemmy.world 16 points 22 hours ago (1 children)

If humans are so good at coding, how come there are 8100000000 people and only 1500 are able to contribute to the Linux kernel?

I hypothesize that AI has average human coding skills.

[–] HaraldvonBlauzahn@feddit.org 2 points 9 hours ago* (last edited 9 hours ago) (1 children)

The average coder is a junior, due to the explosive growth of the field (much as, in some fast-growing nations, the average age is very young). Thus what is average is far below what good code is.

On top of that, good code cannot be automatically identified by algorithms. Some very good codebases might look bad at a superficial level. For example, the codebase of LMDB is very different from what common style guidelines suggest, but it is actually a masterpiece which is widely used. And vice versa, it is not difficult to make crappy code look pretty.

[–] XM34@feddit.org 0 points 6 hours ago

"Good code" is not well defined, and your example shows this perfectly. LMDB's codebase is absolutely horrendous if your quality criteria for good code are readability and maintainability. But it's a perfect masterpiece if your quality criteria are performance and efficiency.

Most modern software should be written with the first two in mind, but for a DBMS, the latter are way more important.

[–] teije9@lemmy.blahaj.zone 13 points 23 hours ago (1 children)

Who makes a contribution under the name aibot514? No one. People use AI for open source contributions, but more in a "fix this bug" way, not as a fully automated contribution under the name ai123.

[–] lemmyng@lemmy.ca 30 points 22 hours ago (2 children)

Counter-argument: If AI code was good, the owners would create official accounts to create contributions to open source, because they would be openly demonstrating how well it does. Instead all we have is Microsoft employees being forced to use and fight with Copilot on GitHub, publicly demonstrating how terrible AI is at writing code unsupervised.

[–] XM34@feddit.org 0 points 6 hours ago

Yes, that's exactly the point. AI is terrible at writing code unsupervised, but it's amazing as a supportive tool for real devs!

[–] Lucien@mander.xyz 8 points 22 hours ago
[–] 30p87@feddit.org 5 points 20 hours ago

Ask Daniel Stenberg.

[–] andybytes@programming.dev 4 points 20 hours ago

My theory is not a lot of people like this AI crap. They just lean into it for the fear of being left behind. Now you all think it's just gonna fail and it's gonna go bankrupt. But a lot of ideas in America are subsidized. And they don't work well, but they still go forward. It'll be you, the taxpayer, that will be funding these stupid ideas that don't work, that are hostile to our very well-being.

[–] thingsiplay@beehaw.org 8 points 23 hours ago (3 children)

Mostly closed source, because open source rarely accepts them as they are often just slop. Just assuming stuff here, I have no data.

[–] hemko@lemmy.dbzer0.com 7 points 21 hours ago (2 children)

To be fair, if a competent dev used an AI "autocomplete" tool to write their code, I'm not sure it'd be possible to detect those parts as AI code.

I generally dislike those corporate AI tools, but I gave Copilot a try when writing some Terraform scripts, and it actually had about as many good suggestions as bad ones. However, if I didn't know the language and the resources I was deploying that well, it'd probably have led me into a deep hole trying to fix the mess after blindly accepting every suggestion.

[–] HaraldvonBlauzahn@feddit.org 4 points 20 hours ago (1 children)

People seem to think that the development speed of any larger and more complex software depends on the speed at which the wizards can type in code.

Spoiler: this is not the case. Even if a project is a mere 50,000 lines long, one is the solo developer, and one has pretty good or even expert domain knowledge, one spends the major part of the time thinking, perhaps looking up documentation, or talking with people, and the most-used key on the keyboard doesn't need a Dvorak layout, because it is the "delete" key. In fact, you don't need to know touch-typing to be a good programmer; what you need is to think clearly and logically and be able to weigh many different options against a variety of complex goals.

Which LLMs can't.

[–] hemko@lemmy.dbzer0.com 1 points 20 hours ago

I don't think it makes writing code faster, just may reduce the number of key presses required

[–] thingsiplay@beehaw.org 4 points 21 hours ago

They do more than just autocomplete, even in autocomplete mode. These AI tools suggest entire code blocks and logic and fill in multiple lines, compared to a standard autocomplete. And to use one as a standard autocomplete tool, no AI is needed. Using it like that wouldn't be bad anyway, so I have nothing against it.

The problems arise when the AI takes away the thinking and brain work of the actual programmer. Plus you as a user get used to it and basically become "addicted". Independent thinking and programming without AI will become harder and harder if you use it for everything.

[–] magic_lobster_party@fedia.io 6 points 21 hours ago

Creator of curl just made a rant about users submitting AI slop vulnerability reports. It has gotten so bad they will reject any report they deem AI slop.

So there’s some data.

[–] joyjoy@lemm.ee 6 points 22 hours ago

And when they contribute to existing projects, their code quality is so bad, they get banned from creating more PRs.

[–] andybytes@programming.dev 3 points 20 hours ago

AI is just the lack of privacy, an authoritarian dragnet, remote control over other people's computers, web scraping, the complete destruction of America's art scene, the stupidification of America, and copyright infringement, with a sprinkling of baby death.

[–] Suoko@feddit.it -3 points 21 hours ago* (last edited 21 hours ago) (1 children)

I created this entirely using mistral/codestral

https://github.com/suoko/gotosocial-webui

Not real software, but it was done by instructing the AI about the basics of the mother app and the Fediverse protocol.

[–] luciole@beehaw.org 5 points 16 hours ago (1 children)

I think it's established that genAI can spit out straightforward toy examples of a few hundred lines. Bungalows aren't simply big birdhouses, though.

[–] Suoko@feddit.it 0 points 6 hours ago

Still, they're just birdhouses with some more infrastructure, and you can read instructions on how to build it.