dualmindblade

joined 4 years ago
[–] dualmindblade@hexbear.net 17 points 1 year ago (10 children)

I see this talking point that Trump and Biden are essentially equivalent, and I personally think it might be true, but it's gonna be really hard to argue this to libs, and if Trump ends up doing something awful you're stuck holding the bag of I-told-you-sos. But you don't need to have Trump = Biden, because Trump is extremely and uniquely innocuous for a republican. Yeah this is also hard to swallow for libs, but it's almost an indisputable fact, and you need only remind them of what the last 3 republican presidents did and ask them to produce examples that are worse.

So if there was ever a time to hold a Democrat president to account, now would be it, because the next Republican will not only be worse than Trump but probably worse than a Bush or Reagan; they'll be just as evil but will have learned from Trump (and Biden lol) that they don't even have to pretend that hard, in fact not pretending makes them more powerful. And aside from punishing the Dems for completely ignoring their base, we're very likely to get a republican in either 2024 or 2028, but not both years, like 4 years of either Trump or Biden and people are gonna want a change. So likely choices are 1) Biden 2024 / Satan but worse 2028, 2) Trump 2024 / Probably at least not Satan 2028. See, utilitarianism is good actually.

Biden MUST lose, hate to say it but if I lived in a swing state I would literally mark Trump on my ballot

[–] dualmindblade@hexbear.net 29 points 1 year ago

It's like Marvel logic or something, good is just a reflection of evil across some axis of low resistance, fight fire with.. idk, blue fire, pit the misanthropic industrialist against the cool friendly one. Ethno-nationalists a problem? No worries, we've got a bunch of them of a different ethnicity, wheeeee!

[–] dualmindblade@hexbear.net 35 points 1 year ago

If I were homeless in San Francisco I'm sure I'd be doing drugs, also if I were employed and homed in San Francisco, also any combination of employment and housing statuses outside of San Francisco

[–] dualmindblade@hexbear.net 3 points 1 year ago (1 children)

The weird fingernail lines are like tree rings, it just means you were in the human equivalent of a forest fire or drought

[–] dualmindblade@hexbear.net 7 points 1 year ago

If you're claustrophobic: the descent might land you in the hospital. It's also really funny

[–] dualmindblade@hexbear.net 2 points 1 year ago (1 children)

That's a perfectly reasonable position, the question of how complex a human brain is compared with the largest NNs is hard to answer but I think we can agree it's a big gap. I happen to think we'll get to AGI before we get to human brain complexity, parameter wise, but we'll probably also need at least a couple architectural paradigms on top of transformers to compose one. Regardless, we don't need to achieve AGI or even approach it for these things to become a lot more dangerous, and we have seen nothing but accelerating capability gains for more than a decade. I'm very strongly of the opinion that this trend will continue for at least another decade; there are just so many promising but unexplored avenues for progress. The lowest of the low hanging fruit has been, while lacking in nutrients, so delicious that we haven't bothered to do much climbing.

[–] dualmindblade@hexbear.net 8 points 1 year ago

The funny thing is he's right, Norm is maybe the most unpleasant person to have a disagreement with even if he likes you, and if he doesn't he's absolutely ruthless. You don't have to do much digging to learn this, so agreeing to debate him for hours is just really dumb

[–] dualmindblade@hexbear.net 1 points 1 year ago (3 children)

Idk if we can ever see eye to eye here.. if we were to somehow make major advances in scanning and computer hardware, to the point where we could simulate everything that biologists currently consider relevant to neuron behavior, and we used that to simulate a real person's entire brain and body, would you say that A) it wouldn't work at all, the simulation would fail to capture anything about human behavior, B) it would partly work, the brain would do some brain-like stuff but would fail to capture our full intelligence, C) it would capture human behaviors we can measure, such as the ability to converse, but it wouldn't be conscious, or D) something else?

Personally I'm a hardcore materialist and also believe the weak version of the Church-Turing thesis, and I'm quite strongly wedded to this opinion, so the idea that being made of one thing vs another, or being informational vs material, says anything about the nature of a mind is quite foreign to me. I'm aware that this isn't shared by everyone but I do believe it's the most common perspective inside the hard sciences, though not universal; Roger Penrose is a brilliant physicist who doesn't see it this way.

[–] dualmindblade@hexbear.net 2 points 1 year ago

Huh? a human brain is a complex as fuck persistent feedback system

Every time-limited feedback system is entirely equivalent to a feed-forward system, similar to how you can unroll a for loop.
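
To make the unrolling point concrete, here's a minimal toy sketch (the step/run_feedback/run_unrolled names are purely illustrative, not from any library): running a feedback system for a fixed number of steps gives exactly the same answer as a feed-forward chain built from copies of its step function.

```python
# Toy sketch: a feedback system run for a fixed number of steps is
# equivalent to a feed-forward chain of copies of its step function.

def step(state, x):
    # One tick of some feedback system: new state from old state plus input.
    return 0.5 * state + x

def run_feedback(inputs, state=0.0):
    # Feedback form: the state is fed back into the same step each tick.
    for x in inputs:
        state = step(state, x)
    return state

def run_unrolled(inputs, state=0.0):
    # Feed-forward form: the loop unrolled into an explicit chain,
    # written out here for exactly 3 steps, with no feedback edge.
    assert len(inputs) == 3
    return step(step(step(state, inputs[0]), inputs[1]), inputs[2])

xs = [1.0, 2.0, 3.0]
print(run_feedback(xs), run_unrolled(xs))  # identical results
```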

No see this is where we're disagreeing.... It is doing string manipulation which sometimes looks like maths.

String manipulation and computation are equivalent, do you think that not just LLMs but computers themselves cannot in principle do what a brain does?
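
To illustrate that equivalence (a toy example, nothing to do with how an actual LLM or brain works, and the RULES/rewrite names are just made up for this sketch): unary addition done purely by replacing substrings, with no arithmetic anywhere. String rewriting systems of this kind are Turing-complete in general.

```python
# Toy illustration: computation as pure string manipulation.
# Unary addition by rewriting, e.g. "111+11=" ("3 + 2 =") -> "11111=" ("5").

RULES = [
    ("+1", "1+"),  # shuttle one mark from the right of '+' over to the left
    ("+=", "="),   # nothing left to shuttle: erase the '+'
]

def rewrite(s: str) -> str:
    # Keep applying the first matching rule until no rule applies.
    changed = True
    while changed:
        changed = False
        for old, new in RULES:
            if old in s:
                s = s.replace(old, new, 1)
                changed = True
                break
    return s

print(rewrite("111+11="))  # -> "11111=", i.e. 3 + 2 = 5 in unary
```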

..you may as well say human reasoning is a side effect of quark bonding...

No, because that has nothing to do with the issue at hand. Humans and LLMs and rocks all have that in common. What humans and LLMs do have in common is that they are the result of an optimization process and do things that weren't specifically optimized for as side effects. LLMs probably don't understand anything, but certainly it would help them to predict the next token if they did understand; describing them as only token predictors doesn't help us with the question of whether they have understanding.

...but that is not evidence that it's doing the same task...

Again, I am not trying to argue that LLMs are like people or that they are intelligent or that they understand, I am not trying to give evidence of this. I'm trying to show that this reasoning (LLMs merely predict a distribution of next tokens -> LLMs don't understand anything and therefore can't do certain things) is completely invalid

[–] dualmindblade@hexbear.net 1 points 1 year ago (8 children)

The analogy is only there to point out the flaw in your thinking: the lack of persistence applies to both humans (if we shoot them quickly) and LLMs, so your argument applies in both cases. And I can do the very same trick to the clock analogy. You want to say that a clock is designed to keep time and that's all it does, therefore it can't understand time. But I say, look, the clock was designed to keep time, yes, but that is far from all it does, it also transforms electrical energy into mechanical energy and uses it to swing some arms around at constant speed, and we can't see the inside of the clock, who knows what is going on in there, probably nothing that understands the concept of time, but we'd have to look inside and see.

LLMs were designed to predict the next token, and they do actually do so, but clearly they can do more than that, for example they can solve high school level math problems they have never seen before and they can classify emails as being spam or not. Yes, these are side effects of their ability to predict token sequences, just as human reasoning is a side effect of our ancestors' ability to have lots of children. The essence of a task is not necessarily the essence of the tool designed specifically for that task.

If you believe LLMs are not complex enough to have understanding and you say that head on, I won't argue with you, but if you're claiming that their architecture doesn't allow it even in theory, then we have a very fundamental disagreement

[–] dualmindblade@hexbear.net 1 points 1 year ago (10 children)

Not the weights, the activations: these depend on the input and change every time you evaluate the model. They are not fed back into the next iteration, as is done in an RNN, so information doesn't persist for very long, but it is very much persisted and chewed upon by the various layers as it propagates through the network.
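
Roughly what I mean, as a toy numerical sketch rather than anything resembling a real model (all the names here are made up for illustration): the weights stay fixed across calls, the activations are recomputed on every call, and only the RNN-style loop carries state across steps.

```python
# Toy sketch: weights are fixed, activations are recomputed per call.
# An RNN feeds its hidden state back across time steps; a feed-forward
# stack only passes activations from layer to layer within one call.

W_RNN = 0.9                  # fixed weight, identical on every call
W_LAYERS = [0.5, 1.2, 0.8]   # fixed per-layer weights of a feed-forward stack

def rnn(inputs):
    h = 0.0                  # hidden state persists across time steps
    for x in inputs:
        h = W_RNN * h + x    # fed back into the next step
    return h

def feed_forward(x):
    a = x                    # activation exists only during this call
    for w in W_LAYERS:
        a = w * a            # transformed by each layer in turn
    return a                 # nothing carried over to the next call

print(rnn([1.0, 2.0, 3.0]))                  # earlier inputs still matter
print(feed_forward(1.0), feed_forward(1.0))  # identical: no state persists
```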

I am not trying to claim that the current crop of LLMs understand in the sense that a human does, I agree they do not, but nothing you have said actually justifies that conclusion or places any constraints on the abilities of future LLMs. If you ask a human to read a joke and then immediately shoot them in the head, before it's been integrated into their long-term memory, they may or may not have understood the joke.

[–] dualmindblade@hexbear.net 1 points 1 year ago (12 children)

This is just a restatement of the second example argument I gave, trying to assert something about the internals of a model (it doesn't understand) based on the fact that it was optimized to predict the next token
