scruiser

joined 2 years ago
[–] scruiser@awful.systems 3 points 2 months ago

It's like a cargo cult version of bootstrapping or Monte Carlo methods.

[–] scruiser@awful.systems 8 points 2 months ago (1 children)

That thread gives me hope. A decade ago, a random internet discussion in which rationalists came up would probably have mentioned "quirky Harry Potter fanfiction" with mixed reviews, whereas all the top comments on that thread are calling out the alt-right pipeline and the racism.

[–] scruiser@awful.systems 7 points 2 months ago

I hadn't heard of Black Lotus. Also, the article fails to mention rationalist/lesswrong ties to that AI-doom-focused Zen Buddhist cult that was discussed on Lesswrong recently (looking it up, the name is Maple), so you can add that to the cult count.

[–] scruiser@awful.systems 5 points 2 months ago

I'm at least enjoying the many comments calling her out, but damn, she just doubles down even after being given many, many examples of him being a far-right nationalist monster who engaged in attempts to outright subvert democracy.

[–] scruiser@awful.systems 10 points 2 months ago (2 children)

The Oracle deal seemed absurd, but I didn't realize how absurd until I saw Ed's compilation of the numbers. Notably, it means that even if OpenAI meets its projected revenue numbers (which are absurdly optimistic: bigger than Netflix, Spotify, and several other services combined), paying Oracle (along with everyone else it has promised to buy compute from) will keep it net negative on revenue until 2030, meaning it has to raise even more money.

I've been assuming Sam Altman has absolutely no real belief that LLMs will lead to AGI and has instead been cynically cashing in on the sci-fi hype, but OpenAI's choices don't make any long-term sense if AGI isn't coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy a few years of personal enrichment. And to even ask what his "real beliefs" are gives him too much credit.

Just to remind everyone: the market can stay irrational longer than you can stay solvent!

[–] scruiser@awful.systems 3 points 2 months ago (1 children)

This feels like a symptom of liberals having a diluted, incomplete understanding of what made past movements that utilized protest succeed or fail.

[–] scruiser@awful.systems 5 points 2 months ago

It is pretty good as a source of science fiction ideas. I mean, lots of their ideas originate from science fiction, but their original ideas would make fun fantasy sci-fi concepts. For example, looking at their current front page... https://www.lesswrong.com/posts/WLFRkm3PhJ3Ty27QH/the-cats-are-on-to-something cats deliberately latching onto humans as the laziest way of advancing their own values into the future seems like a solid piece of fantasy worldbuilding...

[–] scruiser@awful.systems 5 points 2 months ago

To add to blakestacey's answer, his fictional worldbuilding concept, dath ilan (which he treats like rigorous academic work to the point of citing it in tweets), uses prediction markets for basically everything, from setting government policy to healthcare plans to deciding what restaurant to eat at.

[–] scruiser@awful.systems 4 points 2 months ago (2 children)

Every tweet in that thread is sneerable, whether for failing to understand the current scientific process, vastly overestimating how easily cutting-edge research can be turned into cleanly resolvable predictions, or assuming prediction markets are magic.

[–] scruiser@awful.systems 9 points 2 months ago (1 children)

He's the one that used the phrase "silent gentle rape"? Yeah, he's at least as bad as the worst evo-psych pseudoscience misogyny posted on lesswrong, with the added twist that he has a position in academia to lend him more legitimacy.

[–] scruiser@awful.systems 7 points 2 months ago* (last edited 2 months ago) (14 children)

He had me in the first half; I thought he was calling out the rationalists' problems (even if dishonestly disassociating himself from them). But then his recommended solution was prediction markets (a concept which rationalists have in fact been trying to play around with, albeit at a toy-model level with fake money).

[–] scruiser@awful.systems 8 points 2 months ago* (last edited 2 months ago)

The author occasionally posts to slatestarcodex; we kind of tried to explain what was wrong with Scott Alexander, and I think she halfway got it... I also see her around the comments in sneerclub occasionally, so at least she is staying aware of things...
