this post was submitted on 23 Oct 2025
61 points (89.6% liked)

Programming


If so, I'd like to know about these questions:

  • Do you use a code-autocomplete AI, or do you type in a chat?
  • Do you consider the environmental damage that the use of AIs can cause?
  • What type of AI do you use?
  • Usually, what do you ask AIs to do?
(page 2) 20 comments
[–] cecilkorik@lemmy.ca 1 points 6 days ago* (last edited 6 days ago)

Sometimes it is helpful for summarizing large unfamiliar codebases relatively quickly: it provides a high-level overview, helps me quickly understand the layout and structure, and helps me locate the particular areas I'm interested in. But I don't really use it to write or modify code directly. It can be good at analyzing logs and data files to find problems, patterns, or areas that need closer (human) investigation. Even the documentation it produces can sometimes be tolerably decent, at least in comparison to my own, which is sometimes intolerably bad or missing completely.

But as far as generating code? I've found the autocomplete largely useless and random. As for chat, where I can direct it more carefully, it might accurately produce a well-known algorithm, but then it will use a mess of variables and inputs that interact with that algorithm in the stupidest ways possible. The more code you ask it to generate, the worse it gets: painfully overengineered in some aspects and horribly lacking in others, if it even compiles and runs at all. Even for relatively simple find-this/replace-it-with-this refactoring, I find I cannot fully trust the results, so I don't rely on them. I'm proficient enough with regex and scripting that walking a generative AI to the result I want, while analyzing the fuzzy logic it uses to get there, is no faster than just writing a perfectly deterministic script to do it instead.
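The kind of deterministic script in question can be tiny. A hypothetical whole-word rename in Python (the names are purely illustrative):

```python
import re

def rename_identifier(source: str, old: str, new: str) -> str:
    """Rename an identifier everywhere, matching whole words only."""
    # \b word boundaries prevent partial matches inside longer names.
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

code = "total = old_total + 1  # not old_totals"
print(rename_identifier(code, "old_total", "new_total"))
```

Unlike an LLM edit, the same input always produces the same output, so the result doesn't require re-reading every changed line.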

As a general rule, I find it is sometimes better at quickly communicating particular things to my manager or other developers than I am, but I am almost always better and quicker at communicating things to computers than it is. That is, after all, my job. Which I happen to think I'm pretty good at.

As for the environmental aspect, that's why I basically don't use it in my personal life at all if I can avoid it. Only at work, and only because they judge my usage of it as part of my performance; I would be just as happy not using it at all for anything. And if I do use it for personal purposes, a point I haven't really reached except for a bit of experimentation and learning, I am never willingly going to use a datacenter-hosted model/service/subscription. I will run it on my own hardware, where I pay the bills, so I am at least aware of the consequences and in control of the choices it's making.

[–] remotelove@lemmy.ca 1 points 6 days ago

When I use it, I use it to create single functions that have known inputs and outputs.

If absolutely needed, I use it to refactor old shitty scripts that need to look better and be used by someone else.

I always do a line-by-line analysis of what the AI is suggesting.

Any time I have leveraged AI to build out a full script with all desired functions all at once, I end up deleting most of the generated code. Context and "reasoning" can actually ruin the result I am trying to achieve. (Some models just love to add command-line switch handling for no reason. That can fundamentally change how an app is structured, and it's not always desired.)
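To illustrate the "known inputs and outputs" point, a hypothetical example of a prompt-sized function in Python: pure logic with no CLI handling, so a line-by-line review plus a quick assertion catches any nonsense.

```python
def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace into single spaces and trim the ends.

    Known input: any string. Known output: a single-spaced, trimmed string.
    No I/O and no command-line switches, so it's easy to review and test.
    """
    return " ".join(text.split())

assert normalize_whitespace("  a \t b\n c  ") == "a b c"
```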

[–] 30p87@feddit.org 1 points 6 days ago* (last edited 6 days ago)

At work, I still use JetBrains' AI, including the inline, local code completion. Though it (or rather the machines at work) is so slow that 99% of the time I've already written everything out before it can suggest something.

[–] Nomad 0 points 5 days ago

Snippets and architecture design ideas

[–] technocrit@lemmy.dbzer0.com 0 points 5 days ago* (last edited 5 days ago)

Nobody uses "AI" because it doesn't exist.

Nobody in this thread is talking about any program that's remotely "intelligent".

As far as technologies falsely hyped as "AI" go, I use Google's search summaries. They're usually quicker than clicking the actual sources, but I have that option as needed.

[–] Kissaki@programming.dev 0 points 5 days ago* (last edited 5 days ago)

Visual Studio provides some kind of AI even without Copilot.

Inline (single-line) completions - I find these quite useful, not always but regularly.

Repeated-edits continuation - I haven't seen them in a while, but I have used them on maybe two or three occasions. I am very selective about these because they're not deterministic like refactorings and quick actions, whose correctness I can be confident in even when applying them across many files and lines. For example, invert-if changes many line indents; if an LLM does that change, you can't be sure it didn't alter any of those lines.
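For anyone unfamiliar with the refactoring: "invert if" turns a nested condition into an early-exit guard and re-indents every line of the body, which is exactly the kind of mechanical, many-line edit where a deterministic tool is verifiable. A sketch in Python (the function is illustrative):

```python
# Before: the happy path is nested one level deep.
def describe_before(n):
    if n is not None:
        label = "even" if n % 2 == 0 else "odd"
        return f"{n} is {label}"
    return "no value"

# After "invert if": a guard clause exits early and every body line
# loses one indent level -- a purely mechanical, behavior-preserving edit.
def describe_after(n):
    if n is None:
        return "no value"
    label = "even" if n % 2 == 0 else "odd"
    return f"{n} is {label}"

assert describe_before(3) == describe_after(3) == "3 is odd"
```

A refactoring engine guarantees both versions behave identically; an LLM rewrite offers no such guarantee on any of the re-indented lines.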

Multi-line completions/suggestions - I disabled those because they offset and move away the code and context I want to see around them, and add noisy movement, for (in my limited experience) marginal usefulness, if any.

In my company we're still in selective testing phase regarding customer agreements and then source code integration into AI providers. My team is not part of that yet. So I don't have practical experience regarding any analysis, generating, or chat functionality with project context. I'm skeptical but somewhat interested.

I did try it on private projects, well, one, I guess: a Nushell plugin in Rust, which is largely unfamiliar to me. I tried to make use of Copilot generating methods for me, etc. It felt very messy and confusing. The generated code was often not correct or sound.

I use Phind and, more recently, ChatGPT for research/search queries. I'm mindful of the type of queries I use and which provider or service I use. In general, I'm a friend of reference docs, which are the only definitive source after all. I'm aware and mindful of the environmental impact of indirectly costly free AI search/chat. Often, AI can respond to my questions more quickly than searching via a search engine and in upstream docs, especially when I am familiar with the tech and can relatively quickly be reminded, can guide the AI when it responds with bullshit or suboptimal or questionable stuff, or can relatively quickly disregard the AI entirely when it doesn't seem capable of responding to what I am looking for.

[–] markz@suppo.fi 0 points 6 days ago

I sometimes use a chatbot as a search engine for poorly documented or otherwise hard-to-find functionality. Asking how to do X in Y usually points me in the right direction.

Do you consider the environmental damage that the use of AIs can cause?

My use is so little that no. For AI bullshit in general, yes.

[–] MagicShel@lemmy.zip -2 points 6 days ago (3 children)

I use autocomplete, asking chat, and full agentic code generation with Cline & Claude.

I don't consider environmental damage by AI because it is a negligible source of such damage compared to other vastly more wasteful industries.

I am primarily interested in text generation. The only use I have for generated pictures, voices, video or music is to fuck around. I think I generated my D&D character portrait. My last portrait was a stick man.

What do I ask it to do? My ChatGPT history is vast and covers everything from "how is this word used" to "what drinks can I mix given what's in my liquor cabinet" to "analyze code for me" to "my doctor ordered some labs and some came back abnormal, what does this mean? Does this test have a high rate of false positives?" to "someone wrote this at me on the internet, help me understand their point" to "someone wrote this at me on the internet, how can I tell them to fuck the fuck off... nicely?" And I write atrocious fiction.

Oh, I also use it a lot to analyze articles: identify bias and reliability, look up other related articles, flag things that sound bad but really don't mean anything, and point out gaps in the journalistic process (i.e. shoddy reporting).

I have also written a Discord dungeon-master bot. It works poorly due to aggressive censorship and slop on OpenAI's side.

[–] UNY0N@lemmy.wtf 0 points 6 days ago

I'm an electrical engineer who has become a proprietary cloud-tool admin. I occasionally use an LLM (ChatGPT web) to write VBA code that makes various API calls and transforms Excel/JSON/XML/CSV data from one format to another for various import/export tasks that would otherwise eat up my time.

I just use the chat, and copy/paste the code.

I spend an hour meticulously describing the process I need the code to perform, then another hour or two testing, debugging, and polishing, and get a result that would take me days to produce by myself. I then document the code (I try to use lots of sub-modules that can be reused) so that I can use the LLM less in the future.
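For a sense of scale, the shape of one such format transform, CSV to JSON, sketched in Python rather than the VBA the tasks above actually use:

```python
import csv
import io
import json

def csv_to_json(csv_text: str) -> str:
    """Transform CSV text into a JSON array of row objects."""
    # DictReader uses the header row as keys for each row object.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

print(csv_to_json("id,name\n1,Alice\n2,Bob"))
```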

I don't feel great about the environmental impact, which is why I try to limit the usage and do debugging and improvements by myself. I'm also trying to push management to invest in a lean LLM that runs on the company's servers. And I'm looking into getting a better PC privately, which I could also run a local LLM on and use for work.

I use the JetBrains AI chat with Claude, and the AI autocomplete. I mostly use the AI as a rubber duck when I need to work through a problem. I don't trust the AI to write my code, but I find it very useful for bouncing ideas off of and getting suggestions on things I might have missed. I've also found it useful for checking my code quality, but it's important not to just accept everything it tells you.

[–] lIlIlIlIlIlIl@lemmy.world 0 points 6 days ago

Being able to interrogate a codebase on how it works is incredible

[–] FizzyOrange@programming.dev 0 points 6 days ago (1 children)

Yeah, I use Claude/ChatGPT sometimes for:

  • Throwaway scripts: "write me a bash script to delete all merged git branches starting with 'foo'"
  • Writing functions that are tedious to look up but I can fairly easily evaluate for correctness: "write a C function to spawn a process and capture stdout and stderr merged"
  • Doing stuff in systems I'm not very familiar with: "write an OCaml function to copy a file"
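The second prompt in that list is a good example of "tedious to look up, easy to check": a Python equivalent (rather than the C version the prompt asks for) might look roughly like this.

```python
import subprocess
import sys

def run_merged(cmd):
    """Spawn a process and capture stdout and stderr merged into one string."""
    result = subprocess.run(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # redirect stderr into the stdout pipe
        text=True,
    )
    return result.returncode, result.stdout

# Example: output from both streams lands in the single merged string.
status, merged = run_merged(
    [sys.executable, "-c", "import sys; print('out'); print('err', file=sys.stderr)"]
)
```

Correctness is trivially verifiable by running it once, which is what makes this category of prompt low-risk.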

I haven't got around to setting up any of that agentic stuff yet. Based on my experience of the chat stuff, I'm a bit skeptical it will be good enough to be useful on anything of the complexity I work on. Fine for CRUD apps, but it's not going to understand niche compiler internals or do stuff with WASM runtimes that nobody has ever done before.

[–] CompactFlax@discuss.tchncs.de 0 points 6 days ago

I’ve tried chat-prompt coding two ways. One, with a language I know well. It didn’t go well: it insisted on using an API that had been deprecated and then removed around 2020, but I didn’t know that, and I lost a lot of time. I also lost a lot of time because the code was generally good, but it wasn’t mine, so I didn’t have a great understanding of how it flowed. I’m not a professional dev, so I’m not really used to reading and expanding on others’ code. The real problem, however, is that it did some stuff that was just not real, and it wasn’t obvious. I got it to write tests (something I have been meaning to learn to do), and every test failed; I’m not sure if it’s the tests or the code, because the priority for me at the time was getting code out, not the tests. I know, I should be better.

I’ve also used it with a language I don’t know well to accomplish a simple task, basically vibe coding. That went OK as far as functionality, but based on my other experience, the result is illegible, questionably written, and not very stable code.

The idea that it’ll replace coders in a meaningful way is not realistic at the current level. My understanding of how LLMs work is incomplete, but I don’t think the hallucinations are easily overcome.

[–] dohpaz42@lemmy.world 0 points 6 days ago

I’ve used it when I’ve found myself completely stumped by a problem and I don’t know exactly how to search for the solution. I’m building a macOS app, and unfortunately a lot of the search results are for iOS, even when I exclude iOS from the results (e.g. how to build a window with tabs, like Safari tabs, but all the results come up for iOS’s TabView).

[–] Vinny_93@lemmy.world 0 points 6 days ago

I use the generator function in Databricks for Python to save myself a lot of typing, but autocomplete functions drive me crazy.

[–] monkeyman512@lemmy.world 0 points 6 days ago

My company has internally hosted AI. I use the web interface to copy/paste info between it and my IDE. So far I have gotten the best results from uploading the official Python documentation and the documentation for the framework I am using. I then specify my requirements, review the output, and either use the code or request a new revision with information on what I want it to correct. I generally focus on requesting smaller, focused bits of code, though that may be for my own benefit, so I can make sure I understand what everything is doing.

[–] 6nk06@sh.itjust.works 0 points 6 days ago* (last edited 6 days ago)

I use the JetBrains AI like a search engine when the web has no obvious answer. Most of the time it gives me a good starting point, and the answer is adjusted to the existing content.

It can also translate snippets from one language or framework to another. For (a fake) example, translating from Unity in Python to Vulkan in C++.

I also use it to analyze shitty code from people who left the company a long time ago. Refactoring and cleaning obscure stuff like deeply hidden variables or things that would take days to analyze can be done in minutes.

I use it once a day at most.
