this post was submitted on 23 Oct 2025
61 points (89.6% liked)


If so, I'd like to ask these questions:

  • Do you use a code-autocomplete AI, or do you type in a chat?
  • Do you consider the environmental damage that the use of AIs can cause?
  • What type of AI do you use?
  • Usually, what do you ask AIs to do?
[–] hallettj@leminal.space 3 points 1 week ago

I use a chat interface as a research tool when there's something I don't know how to do, like writing a relationship with custom conditions using SQLAlchemy, or when I want to clarify my understanding of something. First I do a Kagi search. If I don't find what I'm looking for on Stack Overflow or in library docs within a few minutes, then I turn to the AI.
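For the kind of SQLAlchemy question I mean, here's a hedged sketch of a relationship with a custom join condition. The models (`User`, `Address`) and the "active addresses only" condition are made up for illustration:

```python
from sqlalchemy import Boolean, Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

    # primaryjoin narrows the relationship beyond the plain FK join:
    # only addresses flagged as active are loaded for this attribute.
    active_addresses = relationship(
        "Address",
        primaryjoin="and_(User.id == Address.user_id, Address.active == True)",
        viewonly=True,
    )

class Address(Base):
    __tablename__ = "addresses"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"))
    email = Column(String)
    active = Column(Boolean, default=True)
```

The string form of `primaryjoin` is evaluated lazily against the mapped classes, and `viewonly=True` keeps SQLAlchemy from trying to write through the filtered relationship.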

I don't use autocompletion - I stick with LSP completions.

I do consider environmental damage. There are a few things I do to try to reduce damage:

  1. Search the web first.
  2. Search my chat history for questions I've already asked instead of asking them again.
  3. Start a new chat thread for each question that doesn't follow from one I've already asked.

On the third point, my understanding is that when you write a message in an LLM chat all previous messages in the thread are processed by the LLM again so it has context to respond to the new message. (It's possible some providers are caching that context instead of replaying chat history, but I'm not counting on that.) My thinking is that by starting new threads I'm saving resources that would have been used replaying a long chat history.
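A back-of-envelope sketch of that reasoning, assuming every new message resends the whole thread as input (the 200-token message size is a made-up assumption, and assistant replies are ignored for simplicity):

```python
def tokens_processed(message_tokens):
    """Total input tokens if each new message replays the full history."""
    total = 0
    history = 0
    for m in message_tokens:
        history += m   # the thread grows by this message
        total += history  # the whole thread so far is processed again
    return total

# Ten 200-token questions in one long thread vs. ten one-question threads.
one_long_thread = tokens_processed([200] * 10)
ten_short_threads = 10 * tokens_processed([200])
print(one_long_thread, ten_short_threads)  # 11000 vs 2000
```

The long thread grows quadratically in total tokens processed, which is the saving I'm hoping for by starting fresh threads (unless the provider's prompt caching already absorbs most of it).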

I use Claude 4.5.

I ask general questions about how to do things. It's most helpful with languages and libraries I don't have a lot of experience with. I usually either check docs to verify what the LLM tells me, or verify by testing. Sometimes I ask for narrowly scoped code reviews, like "does this refactored function behave equivalently to the original" or "how could I rewrite this snippet to do this other thing" (with the relevant functions and types pasted into the chat).

My company also uses Code Rabbit AI for code reviews. It doesn't replace human reviewers, and my employer doesn't expect it to, but it is quite helpful, especially with languages and libraries I don't have a lot of experience with. It probably consumes a lot more tokens than my chat-thread research does, though.