this post was submitted on 01 Nov 2025
Hacker News
LLMs have plenty of good uses. They are as skilled as the person using them, however, which is the problem.
But it seems that someone who is already proficient in writing ultimately doesn't save time or effort, since both are still spent correcting the output. And if you're curious, there have already been a number of studies on this exact phenomenon.
The real problem is cognitive debt, and the growing number of people lending themselves to the Dunning-Kruger effect by trusting, and living vicariously through, their AI model of choice.
My last employer was pushing hard for LLMs in a field they don't do shit for. One of the project managers was convinced by his AI of choice (Gemini) to actually propose replacing himself with another AI tool. IT wasn't having it because it would screen-read potentially sensitive info. He was laid off with a sheriff escort not two months later. Now he's on LinkedIn posting some truly schizophrenic shit, having otherwise been normal-ish.
There have also been a number of studies showing that if a person knows how to use an LLM and provides it with a good prompt, it can give them something they can use.
The biggest issue I've seen with LLMs is that nobody knows how to write a prompt. Nobody knows how to really use one to their benefit. There is absolutely a benefit for someone who is proficient in writing, just like there is absolutely a benefit for someone who is proficient in writing code.
I’m guessing you belong in the category that cannot write a good prompt?
No, I've done my actual work while people convinced they have "good prompts" weighed my whole team down (and promptly got laid off). We've burned enough OpenAI tokens and probed models on our own hardware to ascertain their utility in my field. Manual automation with simple systems and hard logic is what the industry has run on, and certainly will continue to.
Explain what makes a prompt good. As long as you're using any provided model and not a sandbox, you're stuck with their initiating prompt. Change that, and you're still bound by their parameters. Run an open-source model with your own parameter tunings, and you're still limited by your temperature. What is a good temperature for rigid logic that doesn't produce unexpected behavior but can still adapt to user input well enough? These are questions every AI corp is dealing with.
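Since temperature keeps coming up: here's a minimal sketch (the function name and toy logits are mine, not from any vendor's API) of what temperature actually does — it rescales logits before sampling. As temperature approaches 0, decoding collapses to greedy argmax, which is why near-zero temperatures get used for rigid, deterministic behavior, while higher values flatten the distribution and make outputs less predictable:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Sample a token index from logits scaled by temperature.

    temperature -> 0 approaches greedy argmax; higher temperatures
    flatten the distribution, making low-logit tokens more likely.
    """
    if temperature <= 0:
        # Degenerate case: deterministic greedy decoding.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the temperature-scaled softmax.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(logits) - 1

toy_logits = [2.0, 1.0, 0.5]
print(sample_with_temperature(toy_logits, 0.0))  # 0 (greedy argmax)
```

At temperature 0.01 nearly all probability mass sits on the top token, so behavior is effectively deterministic; at 1.0 and above, the lower-logit tokens start getting picked — that tension is exactly the rigid-logic-vs-adaptability trade-off above.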
For context, all we were trying to do was implement some Copilot/GPT shit into our PMS to handle customer queries, data entry, scheduling information and notifications, and some other open-ended suggestions. The C-suite was giddy, IT not so much, but my team was told to keep an open mind and see what we could achieve. So we evaluated it. As of about six months ago the Cs finally stopped bugging us, since they had bigger fires to put out, and we had worked out a Power Automate routine (without the help of Copilot, which is unfunnily useless even though it's built right into PA), making essentially all the effort put into working the AI from an LLM into an "agentic model" completely moot, despite the tools the company bought into and everything.
I'm guessing you belong in the category who hasn't actually worked somewhere where part of your job is to deploy things like AI, but likes to have an affirmative stance anyway.
Yawn. Let’s do this, it’s even better: You tell me a task that you need to accomplish. Then you tell me the prompt you would give an LLM to accomplish that task.
Clearly heavy LLM usage inhibits reading comprehension; I stated the use case my employer wanted to implement. Sorry normal people aren't as dogmatic as your AI friends lmao
Give me an example and the exact prompt. My reading is very good. You're refusing to do it.