scruiser

joined 2 years ago
[–] scruiser@awful.systems 8 points 1 week ago

Another ironic point... Lesswrongers actually do care about ML interpretability (to the extent they care about real ML at all, and then mainly as a means of making their God AI serve their whims, not for anything practical). A lack of interpretability is a major problem in ML (a real-world problem, not just a sci-fi Skynet problem): you can have models with racism or other bias buried in them and not be able to tell except by manually probing the model with data from outside the training set. But Sam Altman has turned it from a problem into a humblebrag intended to imply their LLM is so powerful and mysterious it borders on AGI.
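To make that concrete, here's a minimal sketch (mine, not from any real system: synthetic data and a toy scikit-learn classifier) of how a bias can hide behind a proxy feature and only show up when you probe the model with counterfactual inputs:

# Toy example: the classifier never sees the protected attribute directly,
# but learns it through a proxy feature, and headline accuracy looks fine.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, n)              # hypothetical protected attribute (never a model input)
skill = rng.normal(0.0, 1.0, n)                # legitimate feature
proxy = protected + rng.normal(0.0, 0.3, n)    # innocuous-looking feature that leaks the attribute
y = (skill + 0.8 * protected + rng.normal(0.0, 0.5, n) > 0.5).astype(int)  # biased labels

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, y)
print("training accuracy looks fine:", model.score(X, y))

# Counterfactual probe: identical "skill", proxy set as if protected were 0 vs. 1
skill_grid = np.linspace(-2, 2, 9)
X_group0 = np.column_stack([skill_grid, np.zeros_like(skill_grid)])
X_group1 = np.column_stack([skill_grid, np.ones_like(skill_grid)])
gap = model.predict_proba(X_group1)[:, 1] - model.predict_proba(X_group0)[:, 1]
print("mean predicted-probability gap for otherwise-identical inputs:", gap.mean())

Nothing in the feature names or the accuracy number tells you this is happening; you only see it by poking the trained model with inputs constructed from outside the training distribution.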

[–] scruiser@awful.systems 13 points 1 week ago* (last edited 1 week ago)

A lesswronger wrote a blog post about avoiding being overly deferential, using Eliezer as an example of someone who gets overly deferred to. Of course, they can't resist glazing him, even in the context of a post on not being too deferential:

Yudkowsky, being the best strategic thinker on the topic of existential risk from AGI

Another lesswronger pushes back on that and is highly upvoted (even among the doomers who think Eliezer is a genius, most still think he screwed up by inadvertently helping LLM companies get to where they are): https://www.lesswrong.com/posts/jzy5qqRuqA9iY7Jxu/the-problem-of-graceful-deference-1?commentId=MSAkbpgWLsXAiRN6w

The OP gets mad because this is off topic from what they wanted to talk about (they still don't acknowledge the irony).

A few days later they write an entire post, ostensibly about communication norms, but actually aimed at slamming the person who went off topic: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse

And of course the person they are slamming comes back in for another round of drama: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse?commentId=s4GPm9tNmG6AvAAjo

No big point to this, just a microcosm of lesswrongers being blind to irony, sucking up to Eliezer, and using long-winded posts about meta-norms and communication as a means of fighting out their petty forum drama. (At least we sneerclubbers are direct and come out and say what we mean on the rare occasions we have beef among ourselves.)

[–] scruiser@awful.systems 7 points 2 weeks ago

Thanks for the information. I won't speculate further.

[–] scruiser@awful.systems 7 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Thanks!

So it wasn't even their random hot takes, it was reporting someone? (My guess would be reporting froztbyte's criticisms, which I agree have been valid, if a bit harsh in tone.)

[–] scruiser@awful.systems 2 points 2 weeks ago* (last edited 2 weeks ago)

Some legitimate academic papers and essays have served as fuel for the AI hype and for less legitimate follow-up research, but the clearest examples that come to mind would be either "The Bitter Lesson" essay or one of the "scaling law" papers (I guess Chinchilla scaling in particular?), not "Attention is All You Need". (Hyperscaling LLMs, and the bubble fueling it, is motivated by the idea that they can just throw more and more training data at bigger and bigger models.) And I wouldn't blame the author(s) for that alone.

[–] scruiser@awful.systems 6 points 2 weeks ago (5 children)

BlueMonday has had a tendency to go off with a half-assed understanding of the actual facts and details. No individual instance was ban-worthy, but collectively I can see why it merited a temp ban. (I hope/assume it's not a permanent ban; is there a way to see?)

[–] scruiser@awful.systems 11 points 3 weeks ago

I was wondering why Eliezer picked chess of all things in his latest "parable". Even among the lesswrong community, chess-playing as a useful analogy for general intelligence has been picked apart. But this bit of recent half-assed lesswrong research would explain the renewed interest in it.

[–] scruiser@awful.systems 7 points 3 weeks ago

Yud: “Woe is me, a child who was lied to!”

He really can't let that one go; it keeps coming up. It was at least vaguely relevant to a Harry Potter self-insert, but his frustrated-gifted-child vibes keep leaking into other weird places. (Like Project Lawful, which, among its many digressions, had an aside about how dath ilan raises its children to avoid this. It almost made me sympathetic towards the child-abusing devil worshipers who had to put up with these asides to get to the main character's chemistry and math lectures.)

Of course this is a meandering plug for his book!

Yup, now that he has a book out he's going to keep referring back to it, and it's being added to the canon that must be read before anyone is allowed to dare disagree with him. (At least the sequences were free and all online.)

Is that… an incel shape-rotator reference?

I think "shape-rotator" has generally permeated rationalist lingo as a term for a certain kind of math aptitude; I wasn't aware it had ties to the incel community. (But it wouldn't surprise me that much.)

[–] scruiser@awful.systems 6 points 3 weeks ago

I couldn't even make it through this one; he just kept repeating himself with the most absurd parody strawman he could manage.

This isn't the only obnoxiously heavy-handed "parable" he's written recently: https://www.lesswrong.com/posts/dHLdf8SB8oW5L27gg/on-fleshling-safety-a-debate-by-klurl-and-trapaucius

Even the lesswrongers are kind of questioning the point:

https://www.lesswrong.com/posts/dHLdf8SB8oW5L27gg/on-fleshling-safety-a-debate-by-klurl-and-trapaucius?commentId=BhePfCvbGaNauDqfz

I enjoyed this, but don't think there are many people left who can be convinced by Ayn-Rand length explanatory dialogues in a science-fiction guise who aren't already on board with the argument.

A dialogue that references Stanislaw Lem's Cyberiad, no less. But honestly Lem was a lot more terse and concise in making his points. I agree this is probably not very relevant to any discourse at this point (especially here on LW, where everyone would be familiar with the arguments anyway).

And: https://www.lesswrong.com/posts/3q8uu2k6AfaLAupvL/the-tale-of-the-top-tier-intellect?commentId=oHdfZkiKKffqSbTya

Reading this felt like watching someone kick a dead horse for 30 straight minutes, except at the 21st minute the guy forgets for a second that he needs to kick the horse, turns to the camera and makes a couple really good jokes. (The bit where they try and fail to change the topic reminded me of the "who reads this stuff" bit in HPMOR, one of the finest bits you ever wrote in my opinion.) Then the guy remembers himself, resumes kicking the horse and it continues in that manner until the end.

Who does he think he's convincing? Numerous skeptical lesswrong posts have described why general intelligence is not like chess-playing and why world-conquering/optimizing is not like a chess game. Even among his core audience this parable isn't convincing. But instead he's stuck repeating poor analogies (and getting details wrong about the very thing he's using for his analogies: he messed up some details about chess playing!).

[–] scruiser@awful.systems 8 points 3 weeks ago (1 children)

Eh, "cuck" is kind of the right-wingers' word; it's tied to their inceldom and their mix of moral panic about and fetishization of minorities' sexualities.

[–] scruiser@awful.systems 10 points 3 weeks ago

“You don’t understand how Eliezer has programmed half the people in your company to believe in that stuff,” he is reported to have told Altman at a dinner party in late 2023. “You need to take this more seriously.” Altman “tried not to roll his eyes,” according to Wall Street Journal reporter Keach Hagey.

I wonder exactly when this was. The attempted ousting of Sam Altman was November 17, 2023. So either this warning was timely (but about something Sam already had the pieces in place to counter), or a bit too late (as Sam had only just beaten an attempt by the true believers to oust him).

Sam Altman has proved adept at keeping the plates spinning and wheedling his way through various deals, but I agree with the common sentiment here that his underlying product just doesn't work well enough, in a unique/proprietary enough way, for him to build a profitable company on it. Pivot-to-AI and Ed Zitron both guess 2027 for the plates to come crashing down, but with an IPO on the way to infuse more cash into OpenAI I wouldn't be that surprised if he delays the bubble pop all the way to 2030, and personally gets away cleanly, with no legal liability and some stock sales lining his pockets.

[–] scruiser@awful.systems 19 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

“I’m sort of a complex chaotic systems guy, so I have a low estimate that I actually know what the nonlinear dynamic in the memosphere really was,” he said. (Translation: It’s complicated.)

Why do these people have the urge to talk like this? Does it make them feel smarter? Do they think it makes them look smart to other people? Are they so caught up in their field that they can't code-switch to normal-person talk?
