this post was submitted on 16 May 2025
28 points (100.0% liked)
Technology
No, you can't. It cannot reason. It's just been fed so much existing text that it appears like it can in some cases. That's an extremely dangerous foundation on which to build anything.
You're not wrong, but I don't think you're 100% correct either. The human mind is able to synthesize reason by using a neural network to make connections and develop a profoundly complex statistical model using neurons. LLMs do the same thing, essentially, and they do it poorly in comparison. They don't have the natural optimizations we have, so they kinda suck at it now, but to dismiss the capabilities they currently have entirely is probably a mistake.
I'm not an apologist, to be clear. There is a ton of ethical and moral baggage tied up with the way they were made and how they're used, and it needs to be addressed. I also think we're only a few clever optimizations away from a threat.
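To make the "complex statistical model" point above concrete, here is a minimal sketch; a bigram counter is standing in for a transformer with billions of parameters, which is a huge simplification, and the tiny corpus is made up purely for illustration. It produces fluent-looking continuations from nothing but co-occurrence counts, with nothing you'd call reasoning underneath:

```python
# Toy illustration: next-word prediction from raw co-occurrence counts.
# A bigram counter stands in for an LLM here; both are trained only to
# model which tokens tend to follow which.
import random
from collections import defaultdict, Counter

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Sample a continuation word by word from the bigram statistics."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the dog chased the cat . the cat sat"
```

The output can look grammatical, but there is no model of cats or dogs behind it; whether scaling the same idea up enormously ever amounts to reasoning is exactly what this thread is arguing about.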
I don't buy the "it's a neural network" argument. We don't really understand consciousness or thinking ... and consciousness is possibly a requirement for actual thinking.
Frankly, I don't think thinking in humans is based on anything like statistical probabilities.
You can of course apply statistics and observe patterns and mimic them, but correlation is not causation (and generally speaking, society is far too willing to accept correlation).
Maybe everything reduces to "neural networks" in the same way LLMs model them ... but that seems like an exceptionally bold claim for humanity to make.
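A quick illustration of the correlation point above, using synthetic data and made-up variable names (this is just a sketch, not a claim about any real dataset): two quantities can track each other closely simply because a hidden third factor drives both.

```python
# Two variables that correlate strongly only because a hidden confounder
# (temperature) drives both -- the classic reason correlation isn't causation.
# Note: statistics.correlation requires Python 3.10+.
import random
import statistics

random.seed(0)

temperature = [random.uniform(10, 35) for _ in range(1000)]       # hidden confounder
ice_cream = [2.0 * t + random.gauss(0, 3) for t in temperature]   # driven by temperature
sunburns = [0.5 * t + random.gauss(0, 1) for t in temperature]    # also driven by temperature

r = statistics.correlation(ice_cream, sunburns)
print(f"ice cream sales vs. sunburns: r = {r:.2f}")  # large r, yet neither causes the other
```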
It makes sense that you don't buy it. LLMs are built on simplified renditions of neural structure. They're totally rudimentary.
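For a sense of just how rudimentary: the basic unit an LLM is built from is roughly the following, a weighted sum and a squashing function (a sketch of the standard textbook artificial neuron, not any particular library's implementation). A biological neuron, with its dendritic trees, neurotransmitters, and spike timing, is doing vastly more than this.

```python
# A single artificial "neuron": a weighted sum of inputs plus a bias,
# squashed through a nonlinearity. An LLM is, at bottom, an enormous,
# carefully arranged pile of units like this one.
import math

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashing function

# Example with three inputs and arbitrary, hand-picked weights.
print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```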