this post was submitted on 17 Nov 2025
28 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit


much more sneerclub than techtakes

[–] Amoeba_Girl@awful.systems 6 points 1 day ago (2 children)

Not to be too much of a fujoshi in public but Shakespeare is writing about a boy not a girl!!!!!!!

[–] swlabr@awful.systems 8 points 1 day ago

The real question: are you Shakespeare x Fair Youth or Fair Youth x Shakespeare?

[–] gerikson@awful.systems 1 points 1 day ago (1 children)
[–] gerikson@awful.systems 3 points 1 day ago (1 children)
[–] dgerard@awful.systems 4 points 1 day ago (1 children)

ah, but it was a very defective essay if you don't bother reading it

[–] gerikson@awful.systems 4 points 1 day ago

Schrödinger's essay except the cat is always dead

[–] swlabr@awful.systems 6 points 2 days ago

Man. Using Sonnet 18 here is just utterly brilliant. There’s a lot that could be said but my angle is: you can’t directly compare someone to a summer’s day; it has to be done through poetry and metaphor. Shit, cat. Shaka when the walls fell.

[–] sus@programming.dev 11 points 2 days ago

Sorry for the following somewhat disproportionate aggression

Searle said

I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so unplausible to start with. The idea is that while a person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible

As I was reading this I was screaming silently: YOU invented the Chinese room. It was ENTIRELY YOUR IDEA to come up with a ridiculous, unphysical, implausible thought experiment where a single human somehow does the task of millennia in the span of minutes.
And now you object that it seems implausible???

Millennia is very optimistic, by the way. If you tried to simulate ChatGPT with paper and a pen, it would take much, much longer than that.
AND THE PIECE OF JUNK STILL WOULDN'T EVEN GET THE CHINESE CHARACTERS RIGHT.
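Back of the envelope, just to put numbers on "much, much longer" (both figures are made-up round numbers: ~2e12 floating-point ops per token for a GPT-class model, and one hand-done operation every ten seconds):

```python
# Back-of-envelope: one token of GPT-class output, worked by hand.
# Assumed round numbers (illustrative, not measured):
#   ~1e12 active parameters -> ~2e12 floating-point ops per token,
#   one hand-done multiply-add every 10 seconds, working nonstop.
OPS_PER_TOKEN = 2e12
SECONDS_PER_OP = 10
SECONDS_PER_YEAR = 3.15e7

years_per_token = OPS_PER_TOKEN * SECONDS_PER_OP / SECONDS_PER_YEAR
print(f"~{years_per_token:.0e} years of scribbling per single token")
# -> roughly 6e+05 years, i.e. hundreds of millennia for ONE token
```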

Author echoes my thoughts by calmly stating:

Searle simply puts the cart before the horse. Let the high speed men with paper, pencil, and rubber commence using their rulebook to carry on a conversation, whether in Chinese or any other language, and then we can discuss the metaphysical implications.

Motherfucker, Searle has set the cart on fire and shot the horse, and you are contemplating whether to dance on the remains!
Ok, ok, maybe it metaphysically makes sense. But you're exhaustively drawing a connection between the metaphysical and the practical! Now it can't make sense!

Interrogate our intuitions with one centillion shrimp.
incoherent screeching

[–] corbin@awful.systems 7 points 2 days ago (1 children)

I'm going to be a little indirect and poetic here.

In Turing’s view, if a computer were to pass the Turing Test, the calculations it carried out in doing so would still constitute thought even if carried out by a clerk on a sheet of paper with no knowledge of how a teletype machine would translate them into text, or even by a distributed mass of clerks working in isolation from each other so that nothing resembling a thinking entity even exists.

Yes. In Smullyan's view, the acoustic patterns in the air would still constitute birdsong even if whistled by a human with no beak, or even by a vibrating electromagnetically-driven membrane which is located far from the data that it is playing back, so that nothing resembling a bird even exists. Or, in Aristoteles' view, the syntactic relationship between sentences would still constitute syllogism even if attributed to a long-dead philosopher, or even verified by a distributed mass of mechanical provers so that no single prover ever localizes the entirety of the modus ponens. In all cases, the pattern is the representation; the arrangement which generates the pattern is merely a substrate.

Consider the notion that thought is a biological process. It’s true that, if all of the atoms and cells comprising the organism can be mathematically modeled, a Turing Machine would then be able to simulate them. But it doesn’t follow from this that the Turing Machine would then generate thought. Consider the analogy of digestion. Sure, a Turing Machine could model every single molecule of a steak and calculate the precise ways in which it would move through and be broken down by a human digestive system. But all this could ever accomplish would be running a simulation of eating the steak. If you put an actual ribeye in front of a computer there is no amount of computational power that would allow the computer to actually eat and digest it.

Putting an actual ribeye in front of a human, there is no amount of computational power that would allow the human to actually eat and digest it, either. The act of eating can't be provoked merely by thought; there must be some sort of mechanical linkage between thoughts and the relevant parts of the body. Turing & Champernowne invented a program that plays chess and also were known (apocryphally, apparently) to play "run-around-the-house chess" or "Turing chess" which involved standing up and jogging for a lap in-between chess moves. The ability to play Turing chess is cognitively embodied but the ability to play chess is merely the ability to represent and manipulate certain patterns.

At the end of the day what defines art is the existence of intention behind it — the fact that some consciousness experienced thoughts that it subsequently tried to communicate. Without that there’s simply lines on paper, splotches of color, and noise. At the risk of tautology, meaning exists because people mean things.

Art is about the expression of memes within a medium; it is cultural propagation. Memes are not thoughts, though; the fact that some consciousness experienced and communicated memes is not a product of thought but a product of memetic evolution. The only other thing that art can carry is what carries it: the patterns which emerge from the encoding of the memes upon the medium.

I had the idea that while the supposed mind inside the computer could not eat the physical steak placed in front of it (but I could), at the same time the person in the computer could eat the simulated steak (but I could not). At least an hour later I arrived at the thought that perhaps I would need to simulate a universe so a computer person could eat a steak, and that's a delightful absurdity.

I think there's a distinct disconnect between the idea that a machine can think and that there's a thinking machine that can do all the things that the whole set of humanity can. Even that is far more realistic than what AGI is described to be. It supposes a thinking machine, skilled in all human disciplines, without want or need beyond staggering amounts of electricity, computing machinery, and water. It is available for every conceivable thinking task at any time and perfectly willing to carry it out.

I don't think it's absurd to say a machine is thinking if it performs similar enough processes to a human. I do think it's absurd to think such a machine will be happy to write high school English papers for all eternity.

[–] scruiser@awful.systems 7 points 2 days ago

So one point I have to disagree with.

More to the point, we know that thought is possible with far less processing power than a Microsoft Azure datacenter by dint of the fact that people can do it. Exact estimates on the storage capacity of a human brain vary, and aren’t the most useful measurement anyway, but they’re certainly not on the level of sheer computational firepower that venture capitalist money can throw at trying to nuke a problem from space. The problem simply doesn’t appear to be one of raw power, but rather one of basic capability.

There are a lot of ways to try to quantify the human brain's computational power: storage (as this article focuses on, though I think it's the wrong measure), operations per second, number of neural weights, etc. Obviously it isn't literally a computer, and neuroscience still has a long way to go, so the estimates you can get are spread over like 5 orders of magnitude (I've seen arguments from 10^13 flops to 10^18 or even higher, and flops is of course the wrong way to look at the brain). Datacenter computational power has caught up to the lower estimates, yes, but not the higher ones. The bigger supercomputing clusters, like El Capitan for example, are in the 10^18 range.

My own guess would be at the higher end, like 10^18, with the caveat/clarification that evolution has optimized the brain for what it does really really well, so the compute is being used really really efficiently. Like one talk I went to in grad school that stuck with me: the eyeball's microsaccades are basically acting as a frequency filter on visual input. So literally before the visual signal has even gotten to the brain, the information has already been processed in a clever and efficient way that isn't captured in any naive flop estimate!

AI boosters picked estimates of human brain power that would put it in range of just one more scaling, as part of their marketing. Likewise for number of neurons/synapses. The human brain has 80 billion neurons with an estimated 100 trillion synapses. GPT-4.5, which is believed to have peaked on number of weights (i.e. they gave up on straight scaling up because it is too pricey), is estimated (because of course they keep it secret) at around 10 trillion parameters. Parameters are vaguely analogous to synapses, but synapses are so much more complicated and nuanced. And even accepting that premise, the biggest model was still like 1/10th the size needed to match a human brain (and they may have lacked the data to even train it right).
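To put the orders of magnitude side by side (all the figures below are the contested estimates I just mentioned, not measurements):

```python
# Rough orders of magnitude, using the estimates above (all contested).
brain_flops_low  = 1e13    # low-end estimate of brain "compute"
brain_flops_high = 1e18    # high-end estimate (my guess is up here)
el_capitan_flops = 1.7e18  # El Capitan's peak is roughly here

human_synapses = 1e14      # ~100 trillion synapses
gpt45_params   = 1e13      # rumored ~10 trillion parameters

print(f"datacenter / low brain estimate:  {el_capitan_flops / brain_flops_low:.0e}x")
print(f"datacenter / high brain estimate: {el_capitan_flops / brain_flops_high:.1f}x")
print(f"synapses / GPT-4.5 parameters:    {human_synapses / gpt45_params:.0f}x")
# Pick the low brain estimate and datacenters win by 100,000x;
# pick the high one and they've only just caught up.
```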

So yeah, minor factual issue, overall points are good. I just thought I would point it out, because this particular factual issue is one the AI boosters distort to make it look like they are getting close to human-level.

[–] zogwarg@awful.systems 9 points 3 days ago

That’s because there’s absolutely reams of writing out there about Sonnet 18—it could draw from thousands of student essays and cheap study guides, which allowed it to remain at least vaguely coherent. But when forced away from a topic for which it has ample data to plagiarize, the illusion disintegrates.

Indeed, any intelligence present is that of the pilfered commons, and that of the reader.

I had the same thought about the few times LLMs appear to be successful in translation (where proper translation requires understanding). It's not exactly doing nothing, but a lot of the work is done by the reader striving to make sense of what they read; because humans are clever, they can sometimes glimpse the meaning through the filter of AI mapping one set of words onto another, given enough context. (Until they really can't, or the subtleties of language completely reverse the meaning when not handled with the proper care.)

[–] chaos@beehaw.org -2 points 2 days ago (3 children)

One needn’t go as far as souls anyway. Jefferson’s hypothesis—that there is some electrochemical basis to thought—is sufficient to solve the problem. Were it true, the reason computers seem fundamentally blocked from progress on the Turing Test would amount to the fact that they are wholly mechanical objects, while “thought” is as much a biological function as “digestion” or “copulation.”

Even if true, why couldn't the electrochemical processes be simulated too? I don't think it's necessary to strictly and completely reproduce a biological brain to produce thought in a computer, but even if it is, it's "just" a matter of scale. If you can increase the fidelity of the simulation with effectively infinite computing power, what would it be missing? It would have to be something that can't be predicted, that can't even have its unpredictability described with an equation (I don't know what any coin flip will turn up as, but I do know how to write a program that produces results indistinguishable from a real coin for a simulation). So this missing thing is just changing all the time and follows no rules whatsoever, but also you can't just write a program that does its own "random crap that can't be predicted" simulation, because the real one is somehow also so precise that it's the only thing that makes consciousness work and a mechanical one isn't good enough?

[–] zbyte64@awful.systems 1 points 4 hours ago

I want to take this argument of efficiency in a different direction. First, two key observations: the system doing the simulation will never be as efficient as the system being modeled. Second, a conscious system is aware of its own efficiency. This means even if you simulate a whole human body to create consciousness, it will not have the same quality. It will either be aware of all the extra resources required to create "self", or be fed a simulation of self that hides its own nature and thus cannot be self-aware.

[–] dgerard@awful.systems 10 points 2 days ago (3 children)

Philosophically, right, if you allow me infinite resources, right, to do a thing I don't actually know how to define,

[–] scruiser@awful.systems 3 points 1 day ago

It's not infinite! If you take my cherry picked estimate of the computational power of the human brain, you'll see we're just one more round of scaling to have matched the human brain, and then we're sure to have AGI ~~and make our shareholders immense profits~~! Just one more scaling, bro!

[–] swlabr@awful.systems 9 points 2 days ago

money please!

[–] chaos@beehaw.org -2 points 2 days ago (1 children)

I think you're reading some arguments I'm not making. The author seems to be of the opinion that even with infinite resources it's outright impossible to have a computer that thinks or experiences consciousness, which is obviously a philosophical, not practical, argument, and I don't agree. I'm not saying we should actually try it, or that it's doable with our current or foreseeable resources.

That being said, I am defining it. I'm saying that even if we assume it's utterly impossible to have consciousness any other way, that it's some incredibly unique combination of the things that make us human, and that literally any deviation whatsoever makes it all fall apart, there still seems to be a possible path to a computer with consciousness via simulation of that particular and special process. That's my thing you don't think I can define: a physics simulation of sufficient fidelity to simulate a thing we already know demonstrates consciousness. "There aren't enough resources in the universe" would block that path, sure, but that's not very interesting. Lots of other things could block that path; it's an insane path to an incredibly difficult goal. But saying "artificial consciousness is impossible, period" means something is blocking it, and the idea that it's some law of physics that is both crucial to consciousness and can't be simulated is interesting. I'm struggling to imagine how that would be possible, and if it's a failure of my imagination I'd like to know. The universe doesn't have to be computable or deterministic to make a simulation that imitates it so well that observations of the real and simulated physics yield indistinguishable results, and at that point I don't see anything inherently preventing consciousness.

[–] swlabr@awful.systems 7 points 2 days ago (1 children)

How do you define consciousness?

[–] chaos@beehaw.org 0 points 2 days ago (2 children)

Well, I don't really need to for what I'm saying, which is that I don't see any reason a computer is fundamentally incapable of doing whatever it is that humans do to consider ourselves conscious. Practically incapable, maybe, but not by the nature of what it is. Define it how you like, I don't see why a computer couldn't pull the same trick in the distant future.

Personally, though, I define it as something that exhibits awareness of a separation between itself and the rest of the world, with a large fuzzy spectrum between "has it" and "doesn't have it". Rocks exhibit no awareness whatsoever; they're just things with no independence. Plants interact with the world but don't really seem like they're doing much more than reacting, which doesn't demonstrate much, if any. Animals do a lot more, and the argument that at least some are conscious is quite plausible. An LLM kinda, sorta, momentarily glances off of this sometimes, enough that you can at least ask the question. But outside of training it is literally just an unchanging matrix of numbers, and its only "view" into the world is what you choose to give it, so it can't be aware of itself versus the world. It's at best aware of itself versus its training set and/or context window, a tiny keyhole to a photograph of something in the world. That makes it a barely discernible blip on the scale, on the level of plants, and even that might be generous. An artificial consciousness would, in my opinion, need to be independent, self-modifying, and show some of the same traits and behaviors that we see in animals up to and including ourselves with regard to interacting with the rest of the world as a self-contained being.

[–] self@awful.systems 8 points 2 days ago (1 children)

yud-length posts like these that boil down to “nuh-uh” are why we have the “no debate unless it’s amusing debate” rule, and I can see by the downvotes from local and the reactions that you’ve failed to be amusing

if I were you I’d reconsider this thread

[–] chaos@beehaw.org 9 points 1 day ago

If anything I'm posting conjures that bozo's name for any reason, I'll cut my losses, my apologies.

[–] swlabr@awful.systems 6 points 2 days ago

ah, I see. Ok. Well, based on all that, you haven’t actually engaged with anything from the post, nor have you said anything non-trivial. Was hoping that you’d at least say something wrong instead.

[–] zogwarg@awful.systems 10 points 2 days ago (1 children)

Even if true, why couldn’t the electrochemical processes be simulated too?

  • You're missing the argument: even if you simulate the process of digestion perfectly, no actual digestion takes place in the real world.
  • Even if you simulate biological processes perfectly, no actual biology occurs.
  • The main argument from the author is that trying to divorce intelligence from biological imperatives can be very foolish, which is why they highlight that even a cat is smarter than an LLM.

But even if it is, it’s “just” a matter of scale.

  • Fundamentally what the author is saying, is that it's a difference in kind not a difference in quantity.
  • Nothing actually guarantees that the laws of physics are computable, and nothing guarantees that our best model actually fits reality (aside from being a very good approximation).
  • Even numerically solving the Hamiltonians from quantum mechanics is extremely difficult in practice.

I do know how to write a program that produces indistinguishable results from a real coin for a simulation.

  • Even if you (or anyone) can't design a statistical test that can detect the difference in a sequence of heads or tails, that doesn't mean one doesn't exist.
  • Importantly, you are also restricting yourself to only the heads-or-tails sequence, ignoring the coin moving the air, pulling on the planet, and plopping back down in a hand. I challenge you to actually write a program that can achieve these things.
  • Also, decent random-number generation is not actually, properly speaking, Turing-computable [unless again you simulate physics, but then you have to properly choose random starting conditions even if you assume you have a capable simulator]. Modern computers use stuff like component temperature/execution time/user interaction to add "entropy" to random number generation, not direct computation; see the sketch below.
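A minimal illustration of that last bullet (Python's `random` is a deterministic Mersenne Twister, while `secrets` pulls from the OS entropy pool):

```python
import random
import secrets

# Python's `random` is a Mersenne Twister: pure, deterministic computation.
# Seed it identically and the "coin flips" repeat exactly.
a = random.Random(42)
b = random.Random(42)
print([a.random() < 0.5 for _ in range(8)])
print([b.random() < 0.5 for _ in range(8)])  # identical to the line above

# `secrets` instead draws on the OS entropy pool (hardware timings etc.),
# which is exactly the non-computational ingredient described above.
print([secrets.randbelow(2) for _ in range(8)])  # not reproducible
```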

As a summary,

  • When reducing any problem for a "simpler" one, you have to be careful what you ignore.
  • The simulation argument is a bit irrelevant, but as a small aside, it is not guaranteed to be possible in principle, and it is certainly intractable with current physics models/technology.
  • Human intelligence has a lot of externalities and cannot be reduced to pure "functional objects".
    • If it's just about input/output, you could be fooled by a tape recorder and a simple filing system, but I think you'll agree those aren't intelligent. The output has meaning to you, but it doesn't have meaning for the tape recorder.

[–] chaos@beehaw.org -2 points 2 days ago (4 children)

(I'm going to say "you" in this response even though you're stating some of these as arguments from the author and not yourself, so feel free to take this as a response to the author and not you personally if you're playing devil's advocate and don't actually think some of these things.)

You're missing the argument, that even you can simulate the process of digestion perfectly, no actual digestion takes place in the real world.

But it does take place in the real world. Where do you think the computers are going to be? Computers can and do exist in and interact with the real world, they always have, so that box is already checked. You can imagine the computations as happening in a sort of mathematical void outside of the universe, but that's mostly only useful for reasoning about a system. After you do all that, you move electrons around in a box and see the effects with your own human senses.

The main argument from the author is that trying to divorce intelligence from biological imperatives can be very foolish, which is why they highlight that even a cat is smarter than an LLM.

Well, yeah, current LLMs are tiny and stupid. Something bigger, and probably not an LLM at all, might not be.

Nothing actually guarantees that the laws of physics are computable, and nothing guarantees that our best model actually fits reality (aside from being a very good approximation). Even numerically solving the Hamiltonians from quantum mechanics, is extremely difficult in practice.

It doesn't have to actually fit reality perfectly, and it doesn't have to be able to predict reality like a grand unified theory would. It just needs to behave similarly enough to produce the same effects that brains do. It hasn't been shown to be possible, but there's also no reason to think we can never get close enough to reproduce it.

Even if you (or anyone) can't design a statistical test that can detect the difference of a sequence of heads or tails, doesn't mean one doesn't exist.

Yes it does. If they're indistinguishable, there is no difference.

Importantly you are also only restricting yourself to the heads or tails sequence, ignoring the coin moving the air, pulling on the planet, and plopping back down in a hand. I challenge you to actually write a program that can achieve these things.

I don't have any experience writing physics simulators myself, but they definitely exist. Even as a toy example, the iOS app Dice by PCalc does its die rolls by simulating a tossed die in 3D space instead of a random number generator. (Naturally, the parameters of the throw are generated, the simulation is just for fun, but again, it's a distinction without a difference. If the results have the same properties, the mechanism doesn't matter.) If I give you a billion random numbers, do you think you could tell if I used the app or a real die? Even if you could, would using one versus the other be the difference between a physics simulation being accurate or inaccurate enough to produce consciousness?
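To be concrete about what "indistinguishable" means here, this is the kind of test I have in mind, as a minimal sketch (stdlib only; 11.07 is the chi-square critical value for 5 degrees of freedom at p = 0.05):

```python
import random
from collections import Counter

def chi_square_vs_fair_die(rolls, sides=6):
    """Pearson chi-square statistic of observed rolls against a fair die."""
    expected = len(rolls) / sides
    counts = Counter(rolls)
    return sum((counts[s] - expected) ** 2 / expected
               for s in range(1, sides + 1))

rolls = [random.randint(1, 6) for _ in range(1_000_000)]
stat = chi_square_vs_fair_die(rolls)
# ~11.07 is the critical value for df=5 at p = 0.05; a fair source
# (real die or decent PRNG) should usually land well below it.
print(f"chi-square = {stat:.2f} (suspicious if > 11.07)")
```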

certainly intractable with current physics models/technology.

Of course. This is addressing an argument made by the post that computers are inherently incapable of intelligence or consciousness, even assuming sufficient computation power, storage space, and knowledge of physics and neurology. And I don't even think that you need to simulate a brain to produce mechanical consciousness, I think there would be other, more efficient means well before we get to that point, but sufficiently detailed simulation is something we have no reason to think is impossible.

Human intelligence has a lot of externalities and cannot be reduced to pure "functional objects".

Why not? And even if so, what's stopping you from bringing in the externalities as well?

If it's just about input/output you could be fooled by a tape recorder, and a simple filing system, but I think you'll agree those aren't intelligent.

What are the rules of the filing system? If they're complex enough, and executed sufficiently quickly that I can converse with it in my lifetime, let me be the judge of whether I think it's intelligent.

[–] scruiser@awful.systems 5 points 1 day ago

even assuming sufficient computation power, storage space, and knowledge of physics and neurology

but sufficiently detailed simulation is something we have no reason to think is impossible.

So, I actually agree broadly with you in the abstract principle but I've increasingly come around to it being computationally intractable for various reasons. But even if functionalism is correct...

  • We don't have the neurology knowledge to do a neural-level simulation, and it would be extremely computationally expensive to actually simulate all the neural features properly in full detail, well beyond the biggest supercomputers we have now; "Moore's law" (scare quotes deliberate) has been slowing down such that I don't think we'll get there.

  • A simulation from the physics level up is even more out of reach in terms of computational power required.

As you say:

I think there would be other, more efficient means well before we get to that point

We really really don't have the neuroscience/cognitive science to find a more efficient way. And it is possible all of the neural features really are that important to overall cognition, so you won't be able to do it that much more "efficiently" in the first place...

Lesswrong actually had someone argue that the brain is within an order of magnitude or two of the thermodynamic limit on computational efficiency: https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know
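For scale, the core arithmetic behind that kind of claim is easy to redo. A minimal sketch, assuming the commonly cited ~20 W power draw for the brain and Landauer's bound at body temperature:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310             # roughly body temperature, K
BRAIN_WATTS = 20    # commonly cited power draw of the human brain

# Landauer's principle: erasing one bit costs at least k_B * T * ln(2).
joules_per_bit = K_B * T * math.log(2)
ceiling_bit_ops_per_sec = BRAIN_WATTS / joules_per_bit

print(f"Landauer bound at 310 K: {joules_per_bit:.2e} J per bit erased")
print(f"20 W ceiling: {ceiling_bit_ops_per_sec:.1e} irreversible bit ops/sec")
# ~3e-21 J/bit, so a ceiling of ~7e21 bit erasures per second at 20 W.
```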

[–] corbin@awful.systems 6 points 2 days ago (1 children)

I don’t have any experience writing physics simulators myself…

I think that this is your best path forward. Go simulate some rigid-body physics. Simulate genetics with genetic algorithms. Simulate chemistry with Petri nets. Simulate quantum computing. Simulate randomness with random-number generators. You'll learn a lot about the limitations that arise at each step as we idealize the real world into equations that are simple enough to compute. Fundamentally, you're proposing that Boltzmann brains are plausible, and the standard physics retort (quoting Carroll 2017, Why Boltzmann brains are bad) is that they "are cognitively unstable: they cannot simultaneously be true and justifiably believed."

A lesser path would be to keep going with consciousness and neuroscience. In that case, go read Hofstadter 2007, I Am a Strange Loop, to understand what it could possibly mean for a pattern to be substrate-independent.

If they’re complex enough, and executed sufficiently quickly that I can converse with it in my lifetime, let me be the judge of whether I think it’s intelligent.

No, you're likely to suffer the ELIZA Effect. Previously, on Awful, I've explained what's going on in terms of memes. If you want to read a sci-fi story instead, I'd recommend Watts' Blindsight. You are overrating the phenomenon of intelligence.

[–] chaos@beehaw.org 2 points 1 day ago

I'm clearly failing to communicate my thoughts, and doing it in the wrong forum, but I appreciate the links, I'm excited to learn new things from them.

[–] zogwarg@awful.systems 8 points 2 days ago (1 children)

I'll gladly endorse most of what the author is saying.

This isn't really a debate club, and I'm not really trying to change your mind. I will just end on a note that:

I’ll start with the topline findings, as it were: I think the idea of a so-called “Artificial General Intelligence” is a pipe dream that does not realistically or plausibly extend from any currently existent computer technology. Indeed, my strong suspicion is that AGI is wholly impossible for computers as we presently understand them.

Neither the author nor I really suggest that it is impossible for machines to think (indeed humans are biological machines), only that it is likely—nothing so stark as inherently—that Turing Machines cannot. "Computable" in the essay means something specific.

Simulation != Simulacrum.

And because I can't resist, I'll just clarify that when I said:

Even if you (or anyone) can’t design a statistical test that can detect the difference of a sequence of heads or tails, doesn’t mean one doesn’t exist.

It means that the test does (or could possibly) exist; it's just not achievable by humans. [Although I will also note that for methods that don't rely on measuring the physical world (pseudo-random-number generators), the tests designed by humans are more than adequate to discriminate the generated list from the real thing.]

[–] chaos@beehaw.org 1 points 2 days ago

Sure, doesn't have to be a debate of course. My read was a pretty explicit belief that there is likely something to biology that is fundamentally unreachable to a computing machine, which I was skeptical of.

[–] dgerard@awful.systems 4 points 2 days ago (1 children)

this is an extended "nuh-uh"

[–] BioMan@awful.systems -1 points 2 days ago (2 children)

I think the point being made is perfectly reasonable. It's entirely possible to think that there's probably nothing stopping things more like a computer than like a brain from having actual intelligence/consciousness, even if there's no particular pointer from where we are to that state. We are an existence proof that matter can do this, and there's little reason to think it's the only way it can.

[–] swlabr@awful.systems 8 points 1 day ago* (last edited 1 day ago) (1 children)

No, it's not reasonable. This is what you're saying:

  1. Matter can have consciousness (see: humans).
  2. Computers are made of matter.
  • Therefore, it's conceivable that a computer can have consciousness.

This is logically valid but meaningless. There's nothing to be done with this. There's no reason to be had here.

Then we have the banal take of "if we had a magic box with infinite capabilities, it could do X!", where, in this case, X happens to be "have consciousness". Ok! Great. You have fun playing in the sandpit, thinking about your magic box. I'm gonna smoke cigarettes and play slot machines for an hour.

It's only when we start bringing the discussion down to simulations on a Turing machine that this stuff gets interesting. But that's not what y'all are trying to talk about, because you haven't read the goddamn essay.

[–] dgerard@awful.systems 3 points 1 day ago (1 children)

You have fun playing in the sandpit, thinking about your magic box. I’m gonna smoke cigarettes and play slot machines for an hour.

thread winner

[–] swlabr@awful.systems 2 points 1 day ago

I saw a chance to RP a deadbeat parent and I took it

[–] dgerard@awful.systems 10 points 1 day ago* (last edited 1 day ago)

the existence of a brain doesn't show you can simulate it in a computer. the universe is not necessarily feasible to simulate down to the atoms, as the essay points out. did you read the essay?