We have Copilot as an automatic code reviewer. It mostly catches my bad spelling, occasionally finds a real issue, and is often incorrect.
I'm a slow adopter of new technologies like ~~AI~~ LLMs. My reasoning is that if it turns out to actually be a good product, then it will eventually prove itself, and the early adopters can be the "beta testers" so to speak. But if it turns out to be a bad product, then I won't have wasted my time on something that isn't worthwhile.
Maybe a day comes when I start using these tools, but they clearly just aren't all that useful in their current form. In all honesty, I doubt they will ever be useful enough for me to consider them worth learning, but they definitely aren't today.
They talk like there's training needed, that it's some learned skill. It's just a means to blame the worker instead of the AI for not boosting productivity.
Yeah if anything all the people screeching that you have to adopt now or you'll be replaced by those that do just destroy their credibility.
Agreed. To make it a bit more general, whenever I see people claiming to be able to predict the future with absolute certainty and confidence, that to me is just a sign they are idiots and shouldn't be listened to. Definitely had a lot of those in past companies I have worked in. A lot of the time, they're trying to gaslight people into believing in their version of the future so they can sell us garbage (products, stock price, etc.). They'll always get some fools to believe them of course.
I'm interested to see in 5 years or so, once all the hyper-hype has hopefully subsided, what actual uses remain and what they look like.
The number-one frustration, cited by 45% of respondents, is dealing with "AI solutions that are almost right, but not quite," which often makes debugging more time-consuming. In fact, 66% of developers say they are spending more time fixing "almost-right" AI-generated code.
Not surprising at all. When you write code, you're actually thinking about it. And that's valuable context when you're debugging. When you just blindly follow snippets you got from some random other place, you're not thinking about it and you don't have that context.
So it's easy to see how this could lead to a net productivity loss. Spend more time writing it yourself and less time debugging, or let something else write it for you quickly but spend a lot of time debugging. And on top of it all, edge cases go unconsidered and valuable design-requirement context can get lost too.
I use LLMs mainly for "editing text". Like if I have to refactor 100 lines of code and it can't be easily done with a regexp replace, I will use an LLM to do it. When I have to actually modify some logic, I find it easier and faster to just do it than to explain what needs to be done to an LLM and carefully check its response for subtle bugs.
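As a rough illustration of the threshold described above: a purely mechanical rename is the kind of edit a plain regexp replace already handles. The function names here are invented for illustration.

```python
import re

# A purely mechanical rename: no logic changes, so a regexp is enough.
code = "getUserName(uid)\ngetUserEmail(uid)"

# Backreference \1 preserves whatever followed the "getUser" prefix.
renamed = re.sub(r"getUser(\w+)\(", r"fetchUser\1(", code)
print(renamed)  # fetchUserName(uid)
                # fetchUserEmail(uid)
```

Once the edit stops being this mechanical (say, each call site needs a slightly different rewrite), the regexp approach breaks down, which is where an LLM starts to earn its keep.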
Maybe I'm misunderstanding, but are you saying that you use LLMs as refactoring tools, for things like moving code around, renaming stuff, extracting functions, and making changes that don't alter the logic?
Or is it something else? Because as far as I know, LLMs are pretty bad at not making random changes, even if told to just reorder stuff, plus we have a lot of deterministic tools for that job, so I guess you probably mean something else. Honest question.
It's worth noting that good IDE integrated agents also have access to these deterministic tools. In my experience, they use them quite often. Even for minor parts of their tasks that I would typically just type out.
Sometimes refactoring tools are not enough and you have to make the same change in a couple of places. Boring, repetitive work. For example, the last thing I did was refactor some code where I had to change the way objects used in tests are initialized. Basically a couple hundred lines of just constructors and setters. I knew exactly what needed to be created because the tests were already there, so I fed the expected structure into an LLM and it generated the code. Saved me some boring work, and I didn't have to worry about mistakes because the compiler and tests would pick them up.
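To make the scenario concrete, here is a hedged sketch of the kind of fixture boilerplate being described. The class and field names are invented, since the original comment doesn't name the codebase or language.

```python
# Hypothetical domain class; in the commenter's case the real classes
# already existed and only the test-side initialization had to change.
class Order:
    def __init__(self, order_id, customer):
        self.order_id = order_id
        self.customer = customer
        self.items = []

    def add_item(self, sku, quantity, price):
        self.items.append((sku, quantity, price))

    def total(self):
        return sum(qty * price for _, qty, price in self.items)

# Hundreds of lines in this shape are pure mechanical typing: one worked
# example plus the expected structure is enough for an LLM to generate
# the rest, and the compiler/tests catch any slips.
order_a = Order("A-1", "alice")
order_a.add_item("widget", 2, 9.99)
order_a.add_item("gadget", 1, 24.50)

order_b = Order("B-7", "bob")
order_b.add_item("widget", 5, 9.99)
```

The point isn't the code itself but its regularity: because every block follows the same constructor-then-setters pattern and the existing tests define the expected result, a generation mistake can't slip through silently.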
Developers remain willing but reluctant
Management: "Maybe we're not pushing hard enough"
35% of developers use 6-10 distinct tools to get their work done, highlighting the need for seamless integration.
No? I like my tools being separate. The only "seamless integration" I want is that the tools work in the terminal.
At my job, I have found it useful generating mediocre frontends under extremely tight time constraints. Clients are happy with the outcome and I find it more easily customizable than WordPress.
Looking at the code though, it's not a good idea to use it to build anything complex. Best it can do is "Company X needs ANY website before their presentation tomorrow." or whatever.
In other words, it's OK at covering for poor to nonexistent planning.
I'd like to run a model locally and experiment with it though. Problem is it seems no one discloses how they trained their models, open source or not.
If anyone has any suggestions, I'm open. I see Tabby has a Neovim plugin, but, again, no idea what it's trained on.
GitHub Copilot with Claude Sonnet 4.5 works okay in codebases that have established and obvious patterns. It can continue following those patterns with careful guidance, review, and a healthy dose of manual testing. I generally get a modest productivity boost for code that is pretty straightforward, though you either have to spend time ensuring quality up front or fixing the slop afterwards, and there are times when it just won't get something right that you're better off doing yourself. Don't expect it to come up with any clever or innovative solutions to hard problems: it's just going to make a big bowl of hacky slop soup for you instead. It also does a shit job writing clean, maintainable code using things like the DRY principle, high cohesion / loose coupling, properly naming things, etc.
With all the babysitting required, the productivity increase is modest, but existent.
Use AI to ask it questions and learn. Do not use it to do your work.
I would imagine SO has seen a pretty significant drop in site traffic in the past few years.
It's still very meh. Like, does anyone remember those Visual Studio wizards for generating things? AI tools are roughly that good. But the difference is, you still have to review everything the AI tool produces, because it'll still make mistakes.
I use AI maybe once or twice a month, but it's far from replacing my snippets with Ctrl+H keyword replacement.
I don't think I could ever have an AI actively modify my code. I use one semi-regularly, but just for the first steps of research and brainstorming ideas. It rarely lands exactly on what I need, but the process helps me think a bit better and to avoid the kinds of shortcomings the AI often produces in its responses.