I work in a software house where everyone uses AI. Some of them can't even write a single line of code, let alone analyze it. I was shocked when I saw their workflow is CTRL-A, CTRL-C into Claude and CTRL-V into the IDE without a single neuron being activated. They even ask it to summarize and generate a response for 3 lines of text in a Slack message! (Partially because they don't know what they're doing and partially because they're too lazy to think.)
Well, everyone talks enthusiastically about AI, and some have unrealistic expectations (thinking that it's actually intelligent, when it is not), but what bothers me is that they're indeed faster than me, so sometimes I think "why am I even resisting?". Well, the answer is that I love keeping my brain active and staying in control of what I'm doing. Does anyone else feel kinda similar? Am I in the wrong?
P.S. Also I just want to point out that I've seen with my own eyes the deterioration of cerebral functions in people who heavily rely on AI. I'm not talking about just "forgetting how to code": I see them losing spatial awareness (invading personal space, sitting like a liquid on the chair), self-awareness (loudly burping, hoarding half-drunk bottles of water on the desk), and focus, and they're easily irritable. It's multiple people behaving like that, and they weren't like this before. AI is a drug.
You are not alone. I feel like the world is quickly giving in to LLMs and I'm one of the very rare holdouts. My nephews, my coworkers, my bosses... all of them use a mix of ChatGPT, Gemini, and/or Claude regularly. Hell, even my therapist tells me his wife uses ChatGPT for everything. I remember being worried when kids would immediately answer questions with some obnoxious response akin to "just Google it". I wondered if abandoning the need to remember anything would impact development. Now they instantly go to a chatbot.
I've tried it a few times with difficult problems and always found hallucinations. If I'm looking for something that doesn't exist, the LLM has always made up a convincing answer. It's frightening how so many people trust it blindly.
I work in US Public Education and the adoption of AI in this space scares the living fuck out of me. I understand the argument: "Kids are using it, or are going to use it. We need to get out in front of it." That's fine. Find a provider that "promises" not to use student data. Protect PII. Great.
But some district admins are enthusiastically using it. I literally mentioned in conversation that I needed to check when something was due for the state, and they immediately asked Gemini and assumed the answer was correct. Another one 100% uses it for letters summarizing student performance and freaks out when ChatGPT is down. I can only imagine how horrific it must be in the private sector, where the goal is efficiency and profit over everything.
This shit needs to pop, and fast.
Everything we worried the internet would do to our brains is happening 10000x faster with this LLM shit.