this post was submitted on 17 Apr 2026
105 points (83.4% liked)

Technology

[–] zbyte64@awful.systems 9 points 2 days ago (1 children)

"Properly prompting" is to not prompt. A chat interface is the lowest fidelity interface to use with an LLM.

[–] el_abuelo@programming.dev 2 points 2 days ago (1 children)

Tell me more? It's the only way I'm familiar with interacting with an LLM.

[–] zbyte64@awful.systems 2 points 2 days ago (1 children)

Examples to consider:

An LLM working in a code base with TODOs embedded will make fewer mistakes and spend fewer tokens than if you attempt to direct it with prompting alone.
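To illustrate the idea (a hypothetical sketch, not code from the thread): a TODO placed where the work belongs gives the model the surrounding signature, data shape, and existing behavior for free, with no extra prompt text.

```python
# Hypothetical example: the TODO sits next to the code it concerns,
# so an LLM reading this file already sees the input type, the docstring
# contract, and the behavior it must preserve.

def normalize_scores(scores: list[float]) -> list[float]:
    """Scale scores into the range [0, 1]."""
    # TODO: guard against an empty list and against max == min
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

print(normalize_scores([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```

Pasting the same TODO sentence into a chat window would force you to also describe the function, its types, and its callers by hand.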

A file system gives an LLM more context than a flat file (or one large prompt) with the same contents, because the tree structure makes it less likely the LLM will ingest context it doesn't need and get confused by it.
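As a rough sketch of why the tree helps (hypothetical file names, not from the thread): a tool-using LLM can list paths first, which costs a few tokens each, and then open only the file it actually needs, instead of ingesting one flat dump of everything.

```python
# Hypothetical sketch: directory listing as a cheap "table of contents",
# followed by a targeted read of a single relevant file.
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
(root / "auth").mkdir()
(root / "billing").mkdir()
(root / "auth" / "login.py").write_text("def login(): ...\n")
(root / "billing" / "invoice.py").write_text("def invoice(): ...\n")

# Step 1: cheap overview (a few tokens per path, no file contents yet)
listing = sorted(p.relative_to(root).as_posix() for p in root.rglob("*.py"))
print(listing)  # ['auth/login.py', 'billing/invoice.py']

# Step 2: read only the file relevant to the task, not the whole tree
relevant = (root / "auth" / "login.py").read_text()
print(relevant.strip())  # def login(): ...
```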

Lastly, consider the efficacy of providing it tools versus using agent skills, which are just another form of prompting. Giving an LLM a deterministic feedback loop beats tweaking your prompts every time.
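A deterministic feedback loop can be sketched like this (a hypothetical illustration, with simulated model outputs standing in for real LLM calls): instead of a human rewording the prompt after each failure, a fixed checker runs the candidate code and returns the exact error, which becomes the next input automatically.

```python
# Hypothetical sketch: the checker is deterministic, so the same candidate
# always yields the same feedback; no prompt-tweaking involved.

def check(candidate_src: str) -> str:
    """Run a fixed test against candidate code; return '' on success."""
    scope: dict = {}
    try:
        exec(candidate_src, scope)          # run the candidate code
        assert scope["add"](2, 3) == 5      # deterministic test
    except Exception as e:
        return f"{type(e).__name__}: {e}"   # exact, reproducible feedback
    return ""

# Simulated model outputs: first attempt is buggy, second is fixed.
attempts = ["def add(a, b): return a - b", "def add(a, b): return a + b"]
for src in attempts:
    feedback = check(src)
    if not feedback:
        print("passed")
        break
    print("retry with feedback:", feedback)  # would go back to the model
```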

[–] el_abuelo@programming.dev 3 points 1 day ago (1 children)

OK, so I think I do all of these things and would just describe them as "other ways to prompt an LLM". I think the nuance you're shooting for here is that with these methods you are "pre-preparing" the prompt, rather than thinking about it at prompt time, and thus less likely to miss stuff.

E.g. feeding in a TODO is just the same as copy-pasting that TODO in as a prompt.

Have I understood you correctly?

[–] zbyte64@awful.systems 2 points 1 day ago* (last edited 1 day ago) (1 children)

No, it's not the same as copying and pasting the TODO into a prompt. Embedding the TODO in the code instead of the prompt burns fewer tokens and increases accuracy, because the LLM observes the TODO in context. Sure, you can write more prompting to provide that context, but it still won't be as accurate. The less context you provide via prompting, and the more you provide through automatic deterministic feedback, the better the results.

[–] el_abuelo@programming.dev 2 points 1 day ago (1 children)

Okay, so now I think you're describing the behaviour I take for granted with the harness, i.e. Claude Code.

Having good repo readiness through a good agents/claude.md file, plus tests and docs, means the LLM is able to read more files into its context.

It never occurred to me that anyone would prompt in isolation from their repos, but I guess that's exactly what it was like for me last year when I was just feeding ChatGPT prompts away from the repo.

[–] zbyte64@awful.systems 2 points 1 day ago (1 children)

Yes, code harnesses help by providing deterministic feedback, like a language server does, and reduce the amount of prompting required. I guess I should have led with that example 😅
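A minimal stand-in for that kind of feedback (hypothetical sketch; a real harness would talk to an actual language server rather than just the compiler): the compiler acts as a deterministic oracle that points at the exact failing line, so the model gets a precise error instead of the user rewording the prompt.

```python
# Hypothetical sketch: Python's built-in compile() as a deterministic
# syntax oracle, standing in for richer language-server diagnostics.

def syntax_feedback(src: str) -> str:
    """Return 'ok' or the exact line and message of the first syntax error."""
    try:
        compile(src, "<candidate>", "exec")
    except SyntaxError as e:
        return f"line {e.lineno}: {e.msg}"
    return "ok"

print(syntax_feedback("def f(:\n    pass"))   # points at the exact line
print(syntax_feedback("def f():\n    pass"))  # ok
```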

[–] el_abuelo@programming.dev 2 points 18 hours ago

Gotcha... I've always struggled to pick up the vocabulary around technical concepts. I guess I've just never prioritised it, and now this whole new field has materialised with a whole new vocabulary to go along with it! I get the tech. The words are my Achilles heel.