Lemmings, I was hoping you could help me sort this one out: LLMs are often painted as utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, in the same threads here on Lemmy, people argue that they are taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they're hallucinating?

Disclaimer: I'm a full-time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don't see "AI" taking my job, because I think LLMs have already peaked; they're just tweaking minor details now.

Please don't ask me to ignore previous instructions and give you my best cookie recipe; all my recipes are protected by NDAs.

Please don't kill me

[–] technocrit@lemmy.dbzer0.com 2 points 3 days ago* (last edited 3 days ago) (2 children)

Here's how I might resolve this supposed dichotomy:

  • "AI" doesn't actually exist.
    • You might be using technologies that are called "AI" but there is no actual "intelligence" there. For example, as OP mentions, LLMs are extremely limited and not actually "intelligent".
  • Since "AI" doesn't actually exist, since there's no objective test, etc... "AI" can be anything and do anything.
  • So at the extremes we get the "AI" God and "AI" Devil
    • "AI" God - S/he saves the economy, liberates us from drudgery, creates great art, saves us from China (\s), heralds the singularity, etc.
    • "AI" Devil - S/he hallucinates, steals jobs, destroys the environment, is a tool of the MIC, murders artists, is how China will destroy us (\s), wastes of time and resources, is a scam, causes apocalypses, etc.

Since there's no objective meaning from the start, there's no coherence or reason behind the wild conclusions at the bottom. When we talk about "AI", we're talking about a wide variety of technologies with varying value in varying contexts. I think there are some real shitty people/products but also some hopefully useful technologies. So depending on the situation, I might have a different opinion.

[–] BatmanAoD@programming.dev 3 points 3 days ago (1 children)

This seems like it doesn't really answer OP's question, which is specifically about the practical uses or misuses of LLMs, not about whether the "I" in "AI" is really "intelligent" or not.

[–] Randomgal@lemmy.ca 1 points 3 days ago

Bro just wanted to look smart.