
much more sneerclub than techtakes

[–] zogwarg@awful.systems 10 points 2 days ago (1 children)

Even if true, why couldn’t the electrochemical processes be simulated too?

  • You're missing the argument: even if you simulate the process of digestion perfectly, no actual digestion takes place in the real world.
  • Even if you simulate biological processes perfectly, no actual biology occurs.
  • The main argument from the author is that trying to divorce intelligence from biological imperatives can be very foolish, which is why they highlight that even a cat is smarter than an LLM.

But even if it is, it’s “just” a matter of scale.

  • Fundamentally, what the author is saying is that it's a difference in kind, not a difference in quantity.
  • Nothing actually guarantees that the laws of physics are computable, and nothing guarantees that our best model actually fits reality (aside from being a very good approximation).
  • Even numerically solving the Hamiltonians from quantum mechanics is extremely difficult in practice.

I do know how to write a program that produces indistinguishable results from a real coin for a simulation.

  • The fact that you (or anyone) can't design a statistical test that can detect the difference in a sequence of heads or tails doesn't mean one doesn't exist.
  • Importantly you are also only restricting yourself to the heads or tails sequence, ignoring the coin moving the air, pulling on the planet, and plopping back down in a hand. I challenge you to actually write a program that can achieve these things.
  • Also, decent random-number generation is not, properly speaking, something pure computation can give you [unless, again, you simulate physics, but then you still have to properly choose random starting conditions even if you assume you have a capable simulator]. Modern computers use things like component temperature, execution timing, and user interaction to add "entropy" to random number generation, rather than computing it directly; see the sketch after this list.
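A minimal sketch of that distinction (assuming Python; nothing here is from the essay itself): a seeded pseudo-random generator is pure, replayable computation, while OS-provided randomness mixes in physical entropy:

```python
import os
import random

# Deterministic PRNG: with the same seed, the exact same "random"
# sequence comes out every time -- it is pure computation.
rng = random.Random(42)
print([rng.randint(0, 1) for _ in range(10)])

# OS-provided randomness: the kernel's entropy pool mixes in physical
# measurements (interrupt timings, hardware noise), so these bytes are
# not the output of a computation you could replay.
print(list(os.urandom(10)))
```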

As a summary,

  • When reducing any problem to a "simpler" one, you have to be careful what you ignore.
  • The simulation argument is a bit irrelevant, but as a small aside, it is not guaranteed to be possible in principle, and it is certainly intractable with current physics models/technology.
  • Human intelligence has a lot of externalities and cannot be reduced to pure "functional objects".
    • If it's just about input/output, you could be fooled by a tape recorder and a simple filing system, but I think you'll agree those aren't intelligent. The output has meaning to you, but it doesn't have meaning for the tape recorder; a toy illustration follows.
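And the toy illustration promised above: a "filing system" that maps inputs to canned outputs (the phrases are invented for the example) and produces conversational-looking replies with no model of meaning whatsoever:

```python
# A toy "filing system": canned replies keyed on words in the input.
# It produces conversational-looking output with no model of meaning.
RULES = {
    "hello": "Hello! How are you today?",
    "weather": "I hear it's lovely outside.",
    "consciousness": "That's a deep question. What do you think?",
}

def respond(message: str) -> str:
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return "Tell me more."

print(respond("Do you believe machines can have consciousness?"))
```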
[–] chaos@beehaw.org -2 points 2 days ago (4 children)

(I'm going to say "you" in this response even though you're stating some of these as arguments from the author and not yourself, so feel free to take this as a response to the author and not you personally if you're playing devil's advocate and don't actually think some of these things.)

You're missing the argument: even if you simulate the process of digestion perfectly, no actual digestion takes place in the real world.

But it does take place in the real world. Where do you think the computers are going to be? Computers can and do exist in and interact with the real world, they always have, so that box is already checked. You can imagine the computations as happening in a sort of mathematical void outside of the universe, but that's mostly only useful for reasoning about a system. After you do all that, you move electrons around in a box and see the effects with your own human senses.

The main argument from the author is that trying to divorce intelligence from biological imperatives can be very foolish, which is why they highlight that even a cat is smarter than an LLM.

Well, yeah, current LLMs are tiny and stupid. Something bigger, and probably not an LLM at all, might not be.

Nothing actually guarantees that the laws of physics are computable, and nothing guarantees that our best model actually fits reality (aside from being a very good approximation). Even numerically solving the Hamiltonians from quantum mechanics is extremely difficult in practice.

It doesn't have to actually fit reality perfectly, and it doesn't have to be able to predict reality like a grand unified theory would. It just needs to behave similarly enough to produce the same effects that brains do. It hasn't been shown to be possible, but there's also no reason to think we can never get close enough to reproduce it.

The fact that you (or anyone) can't design a statistical test that can detect the difference in a sequence of heads or tails doesn't mean one doesn't exist.

Yes it does. If they're indistinguishable, there is no difference.

Importantly you are also only restricting yourself to the heads or tails sequence, ignoring the coin moving the air, pulling on the planet, and plopping back down in a hand. I challenge you to actually write a program that can achieve these things.

I don't have any experience writing physics simulators myself, but they definitely exist. Even as a toy example, the iOS app Dice by PCalc does its die rolls by simulating a tossed die in 3D space instead of a random number generator. (Naturally, the parameters of the throw are generated, the simulation is just for fun, but again, it's a distinction without a difference. If the results have the same properties, the mechanism doesn't matter.) If I give you a billion random numbers, do you think you could tell if I used the app or a real die? Even if you could, would using one versus the other be the difference between a physics simulation being accurate or inaccurate enough to produce consciousness?
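To make concrete what a test on the sequence alone can and can't see, here's a minimal sketch (a chi-square check on head/tail frequency, just one of many possible tests; the numbers are illustrative, not from the app):

```python
import random
from collections import Counter

def chi_square_fair_coin(flips):
    """Chi-square statistic vs. a fair coin (1 degree of freedom).
    Values well above ~3.84 reject fairness at the 5% level."""
    counts = Counter(flips)
    expected = len(flips) / 2
    return sum((counts[side] - expected) ** 2 / expected for side in "HT")

# PRNG-generated flips standing in for the "simulated" coin.
simulated = [random.choice("HT") for _ in range(10_000)]
print(chi_square_fair_coin(simulated))   # typically small: looks fair

# A slightly biased source, to show the test noticing a real difference.
biased = ["H" if random.random() < 0.52 else "T" for _ in range(10_000)]
print(chi_square_fair_coin(biased))      # usually well above 3.84
```

A serious battery (runs tests, spectral tests, the NIST suite) probes many more properties than this, but all of them only ever see the sequence itself.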

certainly intractable with current physics models/technology.

Of course. This is addressing an argument made by the post that computers are inherently incapable of intelligence or consciousness, even assuming sufficient computation power, storage space, and knowledge of physics and neurology. And I don't even think that you need to simulate a brain to produce mechanical consciousness, I think there would be other, more efficient means well before we get to that point, but sufficiently detailed simulation is something we have no reason to think is impossible.

Human intelligence has a lot of externalities and cannot be reduced to pure "functional objects".

Why not? And even if so, what's stopping you from bringing in the externalities as well?

If it's just about input/output you could be fooled by a tape recorder, and a simple filing system, but I think you'll agree those aren't intelligent.

What are the rules of the filing system? If they're complex enough, and executed sufficiently quickly that I can converse with it in my lifetime, let me be the judge of whether I think it's intelligent.

[–] scruiser@awful.systems 5 points 1 day ago

even assuming sufficient computation power, storage space, and knowledge of physics and neurology

but sufficiently detailed simulation is something we have no reason to think is impossible.

So, I actually agree broadly with you in the abstract principle but I've increasingly come around to it being computationally intractable for various reasons. But even if functionalism is correct...

  • We don't have the neurology knowledge to do a neural-level simulation, and it would be extremely computationally expensive to actually simulate all the neural features properly in full detail, well beyond the biggest supercomputers we have now. "Moore's law" (scare quotes deliberate) has been slowing down such that I don't think we'll get there.

  • A simulation from the physics level up is even more out of reach in terms of computational power required.

As you say:

I think there would be other, more efficient means well before we get to that point

We really really don't have the neuroscience/cognitive science to find a more efficient way. And it is possible all of the neural features really are that important to overall cognition, so you won't be able to do it that much more "efficiently" in the first place...

Lesswrong actually had someone argue that the brain is within an order of magnitude or two of the thermodynamic limit on computational efficiency: https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know
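For a rough sense of the scale behind that claim, here's a back-of-envelope Landauer-bound calculation (the ~20 W brain power figure is a commonly cited ballpark, assumed here rather than taken from that post):

```python
import math

# Landauer bound: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 310.0                 # roughly body temperature, K
joules_per_bit = k_B * T * math.log(2)

brain_watts = 20.0        # commonly cited ballpark, assumed here
max_bit_erasures_per_sec = brain_watts / joules_per_bit

print(f"{joules_per_bit:.2e} J per bit erasure at the limit")
print(f"{max_bit_erasures_per_sec:.2e} irreversible bit ops/s ceiling")
```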

[–] corbin@awful.systems 6 points 1 day ago (1 children)

I don’t have any experience writing physics simulators myself…

I think that this is your best path forward. Go simulate some rigid-body physics. Simulate genetics with genetic algorithms. Simulate chemistry with Petri nets. Simulate quantum computing. Simulate randomness with random-number generators. You'll learn a lot about the limitations that arise at each step as we idealize the real world into equations that are simple enough to compute. Fundamentally, you're proposing that Boltzmann brains are plausible, and the standard physics retort (quoting Carroll 2017, Why Boltzmann brains are bad) is that they "are cognitively unstable: they cannot simultaneously be true and justifiably believed."
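For instance, even the most minimal rigid-body toy bakes idealizations into nearly every line; a sketch (fixed time step, point mass, no drag, and instantaneous collisions are all assumed):

```python
# A ball bouncing under gravity, integrated with explicit Euler.
# Nearly every line is an idealization: fixed time step, point mass,
# no air drag, collisions treated as instantaneous events.
g = 9.81           # m/s^2
dt = 0.001         # fixed step -- already an approximation of continuity
restitution = 0.9  # idealized fraction of speed kept per bounce

y, v, t = 10.0, 0.0, 0.0   # height (m), velocity (m/s), time (s)
while t < 5.0:
    v -= g * dt
    y += v * dt
    if y < 0.0:
        y = 0.0
        v = -v * restitution
    t += dt
print(f"height after 5 s: {y:.3f} m")
```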

A lesser path would be to keep going with consciousness and neuroscience. In that case, go read Hofstadter 2007, I Am a Strange Loop, to understand what it could possibly mean for a pattern to be substrate-independent.

If they’re complex enough, and executed sufficiently quickly that I can converse with it in my lifetime, let me be the judge of whether I think it’s intelligent.

No, you're likely to suffer the ELIZA Effect. Previously, on Awful, I've explained what's going on in terms of memes. If you want to read a sci-fi story instead, I'd recommend Watts' Blindsight. You are overrating the phenomenon of intelligence.

[–] chaos@beehaw.org 2 points 1 day ago

I'm clearly failing to communicate my thoughts, and doing it in the wrong forum, but I appreciate the links, I'm excited to learn new things from them.

[–] zogwarg@awful.systems 8 points 2 days ago (1 children)

I'll gladly endorse most of what the author is saying.

This isn't really a debate club, and I'm not really trying to change your mind. I will just end on a note that:

I’ll start with the topline findings, as it were: I think the idea of a so-called “Artificial General Intelligence” is a pipe dream that does not realistically or plausibly extend from any currently existent computer technology. Indeed, my strong suspicion is that AGI is wholly impossible for computers as we presently understand them.

Neither the author nor I really suggest that it is impossible for machines to think (indeed, humans are biological machines), only that it is likely, though nothing so stark as inherent, that Turing Machines cannot. "Computable" in the essay means something specific.

Simulation != Simulacrum.

And because I can't resist, I'll just clarify that when I said:

Even if you (or anyone) can’t design a statistical test that can detect the difference of a sequence of heads or tails, doesn’t mean one doesn’t exist.

It means that the test does (or possibly can) exist; it's just not achievable by humans. [Although I will also note that for methods that don't rely on measuring the physical world (pseudo-random-number generators), the tests designed by humans are more than adequate to discriminate the generated list from the real thing.]

[–] chaos@beehaw.org 1 points 2 days ago

Sure, doesn't have to be a debate of course. My read was a pretty explicit belief that there is likely something to biology that is fundamentally unreachable to a computing machine, which I was skeptical of.

[–] dgerard@awful.systems 4 points 2 days ago (1 children)
[–] BioMan@awful.systems -1 points 1 day ago (2 children)

I think the point being made is perfectly reasonable. It's entirely possible to think that there's probably nothing stopping things more like a computer than like a brain from having actual intelligence/consciousness, even if there's no particular pointer from where we are to that state. We are an existence proof that matter can do this, and there's little reason to think it's the only way it can.

[–] swlabr@awful.systems 8 points 1 day ago* (last edited 1 day ago) (1 children)

No, it's not reasonable. This is what you're saying:

  1. Matter can have consciousness (see: humans).
  2. Computers are made of matter.
  • Therefore, it's conceivable that a computer can have consciousness.

This is logically valid but meaningless. There's nothing to be done with this. There's no reason to be had here.

Then we have the banal take of "if we had a magic box with infinite capabilities, it could do X!", where, in this case, X happens to be "have consciousness". Ok! Great. You have fun playing in the sandpit, thinking about your magic box. I'm gonna smoke cigarettes and play slot machines for an hour.

It's only when we start bringing the discussion down to simulations on a Turing machine that this stuff gets interesting. But that's not what y'all are trying to talk about, because you haven't read the goddamn essay.

[–] dgerard@awful.systems 3 points 1 day ago (1 children)

You have fun playing in the sandpit, thinking about your magic box. I’m gonna smoke cigarettes and play slot machines for an hour.

thread winner

[–] swlabr@awful.systems 2 points 23 hours ago

I saw a chance to RP a deadbeat parent and I took it

[–] dgerard@awful.systems 10 points 1 day ago* (last edited 1 day ago)

the existence of a brain doesn't show you can simulate it in a computer. the universe is not necessarily feasible to simulate down to the atoms, as the essay points out. did you read the essay?