I don’t use AI when I’m learning a new system, framework or language because I won’t actually learn it.
I don’t use AI when I need to make a small change on a system I know well, because I can make it just as fast and have better insight into how it all works.
I don’t use AI when I’m developing a new system because I want to understand how it works and writing the code helps me refine my ideas.
I don’t use AI when I’m working on something with security or copyright concerns.
Basically, the only time I use AI is when I'm making a quick throwaway script in a language I'm not fluent in.
No; I don't use AI at all for programming currently.
I mostly use it as a code search tool, when dealing with large projects that I'm not very familiar with. Like I can ask "where is this component actually inserted into the web page" and it can sometimes point to a file and function. It doesn't always work of course, but when it does it can save a lot of time.
I don't ever let AI write code for me, though.
Can't you use global search for that? I mean, I do that too, but using global search functionality is way faster and guaranteed.
You can also use the grep command to search for occurrences inside files based on a string/regex.
Of course, but this assumes I know roughly what the text will look like that I'm searching for. If I already know what it will look like, I'll use global search of course, but if all I know is that "at some point this element is put into the document" then I have no idea how that might actually happen. AI is just pretty good (ie succeeds sometimes) at generalising my words into a rough idea and searching for that.
Generating quick programs like "a Python script that calculates the mean value of two hex colours, outputting the result as an HTML file displaying the resulting three-colour gradient"? Yeah, AI is decent at stupid simple tasks like that, and it's much faster than me writing the script or calculating the values myself. I tend to generate things like these when I'm working on something else, don't want to spend time on things outside the project I'm working on, and can't find a website that does the thing I want.
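For illustration, that prompt boils down to something like this rough sketch (assuming #RRGGBB input and taking "mean" as a simple per-channel average):

```python
# Rough sketch: average two hex colours channel-by-channel and write an
# HTML file showing a colour1 -> mean -> colour2 gradient.
def mean_hex(c1: str, c2: str) -> str:
    a = [int(c1.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4)]
    b = [int(c2.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4)]
    mid = [(x + y) // 2 for x, y in zip(a, b)]
    return "#" + "".join(f"{v:02x}" for v in mid)

if __name__ == "__main__":
    c1, c2 = "#ff0000", "#0000ff"  # example inputs
    mid = mean_hex(c1, c2)
    html = (
        '<html><body style="margin:0">'
        f'<div style="height:100vh;background:linear-gradient(to right, {c1}, {mid}, {c2})"></div>'
        "</body></html>"
    )
    with open("gradient.html", "w") as f:
        f.write(html)
    print(mid)
```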
Touching my actual code? Hell no.
Yes, because I can't program.
I ask it to construct small blocks like if statements or for loops, with a very verbose prompt so that all variables are properly named and the code block is small enough that I can debug it myself.
Basically it's like building Lego, where the AI prints every piece.
- It's much more time-consuming than if I knew the language myself, but it's actually a fun way to learn, and it's faster than wading through forums for n amount of time.
- I don't get paid to do it, so I don't see it as problematic. My biggest gripe is that I used to cite the Stack Overflow (etc.) user I got a snippet of code from, and now I can't give credit to the original author.
- It's useful since it has allowed me to automate a lot of tedious tasks that would otherwise be more time consuming, making the activation energy necessary to create the automation much lower.
- I use Mistral exclusively; GPT-4, 4o and 5 are quite useless in comparison. The latest Mistral and Codestral tower above them in my anecdotal experience, at least the way I use it.
- It works well with local models so I don't have to feed the beast.
- I'm an illiterate idiot when it comes to Python, so it has resulted in someone being able to do something they otherwise couldn't.
- I'm not a programmer, and AI hasn't made me one. If I were a programmer, the code completion is so slow I'd probably not use it. I'm unaware of uses other than debugging, but even for its own code, debugging is hit or miss, miss, miss, because of limited context; it really can't debug well.
- It's definitely not worth how many trillions are being poured into it. Especially when one uses it more and becomes painfully aware of the limitations, it becomes quite obvious that the applications lie in increasing industrial and scientific productivity rather than creating a mass market tool.
- Agentic AIs are pure cancer and a security catastrophe waiting to happen. The ease with which one can use prompt injection to exfiltrate basically any kind of data the agent has access to is probably keeping many a cybersecurity expert awake at night. I envision, ironically, Black Hat being invaded by "prompt engineers" specialized in creating injection prompts.
Thank you for coming to my TED talk.
I use AI as a rubber duck, to complement the rubber ducks on my desk when they don't give enough feedback. So its use is mostly conceptual. I find that models that provide "thinking" output are perhaps more useful for that than whatever their actual answer is, because the thinking raises questions about edge cases I might not have considered.
As for code generation, I hate it. It outputs garbage, forgets things, hallucinates, and whatever thing it writes I'll have to rewrite anyway to actually make it compile.
As I'm fairly isolated at work, I think it makes a good pair programming partner, so to speak: offering suggestions that I can take into consideration and research heavily if I think one is good.
I use Copilot, mostly with Claude Sonnet 4.5. I don't use the autocomplete because it's useless and annoying. I mostly chat with it: I give it specific instructions for how to implement small changes, carefully review its code, and make it fix anything I don't like. Then I have it write test scripts (curl calls to APIs and other ways of exercising the system in a staging environment) that output data I can verify manually, to make sure all of its changes work as expected in case I overlooked something in the automated tests.
As far as environmental impact, training is where most of the impact occurs, and inference, RAG, querying vector databases, etc. is fairly minimal AFAIK.
My colleague uses it to generate rambling code, often pointlessly rewriting existing logic to solve all kinds of hallucinated problems. He doesn't understand a bit of it, then dumps it on me and acts offended when asked to explain any of it.
No, I don't. I often have to fix the work of my colleague and my boss, who do use it. I often have to gently point out to my boss that just because the chatbot outputs results for things, doesn't mean those results are accurate or helpful.
No.
I tried to use AI to help me code, it only gave me trash.
I have used AI to give me the syntax and function names I need, then researched those functions and found better ones instead.
I once asked AI to show me how to do something and it gave me a 20 line script. After 2 hours working with it, I finally got it to work. Another 30 minutes of optimizing and got it down to 3 lines. A bit more research and I discovered that what I wanted was actually a language feature, and I just needed to call a single function with a single argument.
AI occasionally saves me time, and usually causes a significant time waste.
Never used it, never will
- Ask for one-line problems; that's mostly syntax.
- Ask for concepts/systems in new knowledge areas.
- Summarise log output / debug an unknown error type.
- Add docs and comments, mostly if I'm in some old code with nothing of the sort.
If it's absolute boilerplate, it can usually give fine output.
The only code-generation assistance I use is in the form of compilers. For fun I tried to use the free version of ChatGPT to replicate an algorithm I recently designed, and after about half an hour I could only get it to produce the same trivial algorithms you find in blog posts, even after feeding it much more sophisticated approaches.
My answer (OP): I use AI for short and small questions, like things I already know but have forgotten, such as "how to sort an array", or about Linux commands, which I can test just in time or check the man page for to make sure they work as intended.
I consider my privacy and the environment, so I use a local AI (16b) for most of my questions, but for more complex things where I really need all the help I can get I use DeepSeek Coder v3.1 (671b) in the cloud via Ollama.
I don't use code autocomplete because it annoys me and doesn't let me think about the code; I like to ask only when I think I need it.
This is basically how I roll as well.
I did have Cursor build an example FastAPI project (which didn't work at first), just to give me a jump start on learning the framework.
I messed around with that, got it to work, and learned enough about how it works that I was then comfortable starting from scratch in a different project.
I kind of treat the local AI as a knowledge base. Short questions with examples. Mostly that just then lets me know what sort of stuff to look for in the real documentation, which is what actually solves my issues.
Can't these questions be answered more easily with an online search?
Maybe 5 years ago. Not anymore.
I use Continue in VSCode hooked up to Ollama or Mistral. Sometimes I just ask a chat to "make a script/config that does <my MVP of the project, maybe even less>".
How much I use it depends on how little I am invested. My rule is that I try to correct a bad output ONCE. I cannot argue it into fucking getting it right.
I prefer net-new code and "add this feature" tasks. Ironically, good refactoring goes a long way: the less it has to adjust, the better, and the less I have to review, the better.
I mostly dislike using AI to code. The one exception I recently ran into was when I was fighting with a Python script and didn't understand why it was behaving the way it did. I used AI to get possible causes and pretty quickly managed to fix it. Sometimes it's just nice to have some possible causes for a bug listed so you can check them out.
Not a programmer but when I need a more complicated powershell script for something I ask copilot first and then I fix whatever it shits out so that it actually works how I want. It usually saves me some time...
Single function text prediction, class boilerplate, some refactoring.
It's decent when you inherit outrageously bad legacy code and you want better comments and variable names than "A, x, i", etc.
You do have to do it within an editor that highlights all changes so you can carefully review, though.
Not so much a productivity boost, but rather a bad intern you can delegate boring, easy tasks to. I'd rather review that kind of code than write it, but if you're the other way around, it's a punishment.
Naming single-letter variables is maybe the one thing I can see being easier to review than to do.
For any other kind of refactoring, though, IDE refactoring tools are instantaneous and deterministic.
When the code you have to deal with is ASP (not .NET) created by apes throwing shit at a wall, the kind of holistic bullshit an AI makes is an improvement.
Architect here, not a programmer. I've taken Python classes but was never good enough to use it regularly. Using Gemini, I've been able to work through creating half a dozen scripts for automating tedious tasks and optimizing models/drawings. I'm hoping to improve so I can eventually make use of it for even more useful things, but as a start it's been awesome. Not perfect, it makes a lot of mistakes, but I've been able to work with it to get things right.
I am still relatively inexperienced, and only in embedded (electronics by trade). I am working on an embedded project with Zephyr now.
If I run into a problem I roughly follow this method (e.g. trying to figure out when to use mutexes vs semaphores vs library header file booleans for checking):
- First, look in the Zephyr docs at mutexes and see if that clears it up.
- Second, search Ecosia/DDG for things like "Zephyr when to use global boolean vs mutex in thread syncing".
- If none of those work, I will ask AI, and it often gives enough context that I can see whether it is logical or not. In this case, it was better to use a semi-global boolean to check if a specific thread had seen the next message in the queue, and to protect the boolean with a mutex to know if that thread was currently busy processing the data (sketched below). But it also gave options like using a "gate check" instead of a mutex, which is dumb because that doesn't exist in Zephyr.
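To make that concrete, here's the shape of the pattern in Python's threading terms, since that's easier to sketch here than Zephyr C. This is only an analogy, and process() is a made-up stand-in for the real message handling:

```python
# Shape of the suggested pattern, in Python threading terms (analogy only;
# the real thing uses Zephyr's k_mutex and a message queue).
import queue
import threading

busy_lock = threading.Lock()  # mutex protecting the flag
busy = False                  # "semi-global" boolean: worker busy on a message?

def process(msg):
    print("handling", msg)    # made-up stand-in for the real handling

def worker_loop(q: queue.Queue):
    global busy
    while True:
        msg = q.get()
        with busy_lock:
            busy = True       # flag is only ever touched under the mutex
        process(msg)
        with busy_lock:
            busy = False

def worker_is_busy() -> bool:
    with busy_lock:           # read the flag under the same mutex
        return busy
```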
For new topics, if I can't find a video or application note that doesn't assume too much knowledge or use jargon I am not yet familiar with, I will use AI to become familiar with the basic concept and its terms so that I can then go on to other, better resources.
In engineering and programming, jargon is constant and makes topic introduction quite difficult if they don't explain it in the beginning.
I never use it for code, with the exception of codebases it has ingested that have no documentation of all the keys available, or, as in Zephyr, where the macro magic is very difficult to navigate to what it actually does and often isn't documented at all.
- Type in chat, mostly for reference, snippets, help debugging, and questions about libraries or something.
- Not as much as I should.
- ChatGPT is my favorite; I have been a user since day one and never tried any other ones.
- As someone who isn't a leet-tier programmer: to just help, and to write code snippets, although I often have to modify them and do things myself as well, because sometimes the AI will fail. Also, after a certain level of complexity it starts to struggle. It's better for snippets and examples, but you often have to integrate them yourself if you are creating something unique that the AI can't more or less just copy, paste, and translate.
Mainly I use it as a documentation search for APIs I'm not familiar with, or when I'm not sure what options there are to approach a problem. I work with Unreal Engine a lot, so I'll get a few pointers from an LLM first, then go read the source code of those APIs and implement the rest myself.
I am a data scientist, and we use Databricks, which has Copilot (I think) installed by default. With this we have an autocomplete, which I use the most because it can do some of the tedious steps of an analysis if I write good comments, which I do anyhow. It's around 50% accurate, doing best on simple mindless things or getting the names of things correct.
There is a code-generating block tool that I never use. There is also something that troubleshoots and diagnoses any error. Those are mostly useless, but they have been good for finding missing commas and other simple things. Their suggestions are sometimes terrible enough that I mostly ignore them.
We have a Copilot bot as part of our GitHub (I don't know, is this standard now?) that I actually enjoy and that has its uses. It writes up great summaries of what code was committed, with a great format, and it seems almost 100% accurate for me. Most importantly, it has a great spellchecker as part of its suggestions. I am a terrible speller and never double-check names, so it can fix them both in the notes and in my code (it fixes them everywhere in the code, which is nice). The rest of the suggestions are okay: some are useful, but some are way off or overengineered for what I am doing. I like this because it just comes in at the end of my process and I can choose to accept or deny.
As for actual coding, I use ChatGPT sometimes to write SDK glue boilerplate or to learn about API semantics. For this kind of stuff it can be much more productive than scanning API docs trying to piece together how to write something simple, like, for example, writing a function to check if an S3 bucket is publicly accessible. That would have taken me a lot longer without ChatGPT.
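To give a sense of what I mean, the result lands somewhere in this ballpark (a simplified sketch with boto3; a thorough check would also inspect the bucket policy and account-level settings):

```python
# Sketch of the kind of SDK glue I mean: is an S3 bucket publicly readable?
import boto3
from botocore.exceptions import ClientError

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def bucket_is_public(bucket: str) -> bool:
    s3 = boto3.client("s3")
    # If the public access block switches everything off, treat as private.
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)[
            "PublicAccessBlockConfiguration"
        ]
        if all(cfg.values()):
            return False
    except ClientError:
        pass  # no public access block configured for this bucket
    # Any ACL grant to the AllUsers group makes the bucket public.
    acl = s3.get_bucket_acl(Bucket=bucket)
    return any(
        g.get("Grantee", {}).get("URI") == ALL_USERS for g in acl["Grants"]
    )
```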
In short: it basically replaced google and stack overflow in my workflow, at least as my first information source. I still have to fall back to a real search engine sometimes.
I do not give LLMs access to my source code tree.
Sometimes I'll use it for ideas on how to write specific SQL queries, but I've found you have to be extremely careful with this use case because ChatGPT hallucinates some pretty bad SQL sometimes.
Just IntelliSense and other language servers. I remember when Microsoft was boasting about how much of their code was generated by IntelliSense. Now whenever I hear them hype how much AI-written code they use, I am reminded of that. It's not an LLM, but it is still a type of AI.
I use a chat interface as a research tool when there's something I don't know how to do, like writing a relationship with custom conditions using SQLAlchemy, or when I want to clarify my understanding of something. First I do a Kagi search; if I don't find what I'm looking for on Stack Overflow or in library docs within a few minutes, then I turn to the AI.
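As a concrete example of the SQLAlchemy case, the answer I was after looks roughly like this (a sketch in SQLAlchemy 2.0 style with made-up models, not my actual code):

```python
# Sketch: a relationship with a custom join condition that only picks up
# active addresses, via primaryjoin. Model names are invented.
from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    # primaryjoin adds the custom condition on top of the FK join
    active_addresses: Mapped[list["Address"]] = relationship(
        primaryjoin="and_(User.id == Address.user_id, Address.active == True)",
        viewonly=True,
    )

class Address(Base):
    __tablename__ = "addresses"
    id: Mapped[int] = mapped_column(primary_key=True)
    user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))
    active: Mapped[bool] = mapped_column(default=True)
```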
I don't use autocompletion - I stick with LSP completions.
I do consider environmental damage. There are a few things I do to try to reduce damage:
- Search first
- Search my chat history for a question I've already asked instead of asking it again.
- Start a new chat thread for each question that doesn't follow a question I've already asked.
On the third point, my understanding is that when you write a message in an LLM chat all previous messages in the thread are processed by the LLM again so it has context to respond to the new message. (It's possible some providers are caching that context instead of replaying chat history, but I'm not counting on that.) My thinking is that by starting new threads I'm saving resources that would have been used replaying a long chat history.
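In code terms, the pattern I'm picturing looks like this (a sketch against an OpenAI-style chat API; I'm assuming other providers' client libraries behave the same way):

```python
# Sketch of why long threads cost more: with a chat-completions-style API,
# the client resends the whole history every turn, and the model
# re-processes all of it as context.
from openai import OpenAI  # assumes the openai package; any similar API applies

client = OpenAI()
history = []

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    resp = client.chat.completions.create(
        model="gpt-4o",       # model name is just an example
        messages=history,     # turn N resends all N-1 earlier messages
    )
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```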
I use Claude 4.5.
I ask general questions about how to do things. It's most helpful with languages and libraries I don't have a lot of experience with. I usually either check docs to verify what the LLM tells me, or verify by testing. Sometimes I ask for narrowly scoped code reviews, like "does this refactored function behave equivalently to the original" or "how could I rewrite this snippet to do this other thing" (with the relevant functions and types pasted into the chat).
My company also uses Code Rabbit AI for code reviews. It doesn't replace human reviewers, and my employer doesn't expect it to, but it is quite helpful, especially with languages and libraries that I don't have a lot of experience with. It probably consumes a lot more tokens than my chat-thread research does, though.
Sparingly. I use ChatGPT to help with syntax and idioms when learning new languages. And sometimes I use it to help determine the best algorithm for a general problem. Other times I feed in working code and ask for improvements, like a mini code review.
The only time I had it code something from scratch for me was when I wanted some Vimscript and I didn't want to learn it. I tried the same thing with jq and it failed and I had to learn me some jq.
I hate popups in editors in general (no IntelliSense for me), so I loathe AI trying to autocomplete my code.
Nope, nothing. I want to keep the ability to pinpoint which part of my code is causing which error, and offloading my work onto AI would make that both less fun and harder.
It's sometimes useful for conceptual questions, but I don't trust code generated by it.
I don't mess with code autocomplete, Cursor, agents or any of that stuff. I've got subscriptions to 2 platforms that give me access to a bunch of different models and I just ask whatever model I need directly, copy/paste the context it needs. On that note, AI search engines like Perplexity genuinely bring zero value to my workflow. I'd rather do the searching myself and feed it the relevant context, feels like it misleads me more often than it helps. I actually have a Perplexity sub (got it free) and haven't touched their web search in like 4 months.
I've thought about the environmental impact and taken steps to minimize my usage. That's actually one reason I avoid Cursor, agents, and AI web search - feels super wasteful and I'm not convinced it's sustainable long-term. I guess I just like being in control, you know? I also try using smaller open source models when I can, even if they're not as powerful.
My go-to models right now for daily use (easiest to hardest tasks): Llama 4 Scout -> DeepSeek v3.1 -> DeepSeek v3.1 (thinking) -> Gemini 2.5 Pro / Claude 4 Sonnet (thinking) -> GPT 5 (thinking). Sometimes I'll throw in other models like Gemini 2.5 Flash but mostly stick to these.
By the way, I would recommend trying out t3.chat (that's one of the platforms that I use). It costs 8 USD/month and is made by Theo; I'm pretty happy with it for the price. The UI is honestly its strongest point.
For how I actually use AI, I wrote a more detailed answer in another thread about AI usage. Have a read.
I use AI as a sort of junior developer: I know the problem domain but am a bit too lazy to write all the code. I develop on a remote Linux VM with tmux, nvim, and opencode. I keep the AI tmux session and my development session on different projects, make sure the git tree is clean, and then detach from my session into the AI session and check the progress.
The AI makes mistakes, so a cautious review of all the code is needed.
I mostly use Claude, and I can NOT recommend any Kimi K2 model. If you need something OK-ish and cheap, use gpt-oss 120 via OpenRouter.
AI is a power tool: if you don't know what you're doing, you get burned.
I use whatever line completion is built into JetBrains out of the box. Other than that, no AI whatsoever.
Only about 10% of my time at work is actually spent writing code. At least double that time is spent reading code, and the rest is documentation, coordination, and communication work that depends on precise understanding of the code I’m responsible for. If I let AI write code, maybe (doubtfully) that would save a little time out of the 10%, but it would cost me dearly in the other two categories. The code I write by hand is minimal, clear, and easy to understand, and I understand it better because I wrote it myself. I understand all the code around it, too.
If you ask me, AI code generation is based entirely on non-programmers’ incorrect understanding of what programming is.
I'm using it for some side projects. I used it as an assistant for setting up services in Kubernetes, and also used it a lot for debugging and for creating kubectl commands.
Another side project is writing a full web app in the F# SAFE stack, which uses Fable to transpile to JavaScript and React, so I'm learning several things at once.
At work I didn't use it as much, but it did get used to generate tests, since no one really cared enough about them. I also did some cool stuff: someone wrote a guide on how to migrate from V1 of a thing to V2 of that thing, I hooked up MCP to link the doc, asked it to migrate one, and it did it perfectly.
I used it a lot to generate Mongo queries, but they put that feature in MongoDB Compass.
We used Claude Sonnet pretty much exclusively. For my side projects I often use Auto, since Sonnet burns through the monthly budget pretty quickly. It definitely isn't as good, but it tends to do fine for template-y things and for debugging why some strange thing is happening in F# or React.
For the side projects, I find I'm using it less as I learn more. It's good for getting over a quick hump if you have a sense of how things generally should be.
I've considered the lakes I've burned because I didn't copy paste those kubectl commands to a file.
I prefer Sonnet. Anything less isn't that great, which is one reason I think people hate it.
I tend to use it for crufty things. And certain front end things. It's been a long time since I've done web UI.
Sometimes. There are some cases where LLMs are better than DuckDuckGo, like when you're looking for something very specific but 100% of the results are about a similar but different topic. In 95% of cases search is still faster and more reliable.
I use it as a search engine but not as my only source. It's really good at regurgitating the most relevant Stack Overflow answer I might find, which may or may not actually be applicable to my situation. As a rule I never copy paste code directly, I always rewrite it "in my own words", even in cases where it's basically the same. If the code it provides is more than 5 lines or so I can almost always think of a better way. I feel like I'd still be better off with a really solid reference manual though, and a recipe book. But they go out of date too fast these days.