Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

scruiser@awful.systems 14 points 3 days ago

Rationalist Infighting!

tl;dr: one of the MIRI-aligned rationalists (Rob Bensinger) complained that EA actually increased AI risk in the long run by promoting OpenAI and then Anthropic. Scott Alexander responded aggressively, basically saying they are entirely wrong and also bad at public communication! Various lesswrongers weigh in, seemingly blind to the irony and hypocrisy!

Some highlights from the quotes of the original tweets and the lesswronger comments on them:

  • Scott Alexander tries blaming Eliezer for hyping up AI and thus contributing to OpenAI in the first place. Just a reminder: Scott is one of the AI 2027 authors, so he really doesn’t have room to complain about rationalists creating crit-hype.

  • Scott Alexander tries claiming SBF was a unique one-off in the rationalist/EA community! (Anthropic’s leadership has been called out on the EA forums and lesswrong for a similar pattern of repeated lying.)

  • Rob Bensinger indirectly tries to claim Eliezer/MIRI have been serious, forthright, honest commentators on AI theory and policy, as opposed to Open Phil/EA/Anthropic, which have been “strategic” with their public communication to the point of dishonesty.

  • habryka is apparently on the verge of crashing out? I can't tell if they are planning on just quitting twitter or quitting their attempts at leadership within the rationalist community. Quitting twitter is probably a good call no matter what.

  • Loads of tediously long posts, mired in that long-winded rationalist way of talking, full of rationalist in-group jargon for conversation and conflict resolution.

  • Disagreement on whether Ilya Sutskever’s $50 billion startup is going to contribute to AI safety or just continue the race to AGI.

  • Arguments over who sides with the EAs vs. Open Philanthropy vs. MIRI!

  • Argument over the definition of gaslighting!

To be clear, I agree with the complaints about EA and Anthropic; I just also think MIRI has its own similar set of problems. So they are both right: all of the rationalists are terrible at pursuing their nominal goal of stopping AI Doom.

I did sympathize with one lesswronger's comment:

More than any other group I've been a part of, rationalists love to develop extremely long and complicated social grievances with each other, taking pages and pages of text to articulate. Maybe I'm just too stupid to understand the high level strategic nuances of what's going on -- what are these people even arguing about? The exact flavor of comms presented over the last ten years?

CinnasVerses@awful.systems 9 points 3 days ago (last edited 1 day ago)

Bonus race pseudoscience quoted by No77e!

There is a phenomenon in which rationalists sometimes make predictions about the future, and they seem to completely forget their other belief that we're heading toward a singularity (good or bad) relatively soon. It's ubiquitous, and it kind of drives me insane. Consider these two tweets:

Richard Ngo @RichardMCNgo: Hypothesis: We'll look back on mass migration as being worse for Europe than WW 2 was. ... high-trust and homogeneous ... internal ethno-religious fractures.

Liv Boeree @Liv_Boeree: Would not be surprised if it turns out that everyone outsourcing their writing to LLMs will have a similar or worse effect on IQ as lead piping in the long run

(he shares these tweets as photos; I ain’t working harder to transcribe them or using a chatbot)

scruiser@awful.systems 8 points 2 days ago

No77e correctly notes the discrepancy between the rationalist obsession with eugenics and the belief in a technological singularity that is imminent (or at least within the next 40 years), but fails to realize that the general problem is the eugenics obsession itself. It is kind of frustrating how close and yet how far they are from realizing the problem.

Also, a reminder of the time Eliezer claimed Genesmith’s insane genetic engineering plan was one of the most important projects in the world (after AI, obviously): https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=fxnhSv3n4aRjPQDwQ Apparently Eliezer’s plan, if we aren’t all doomed by LLMs, is to let the genetically engineered geniuses invent friendly AI instead.
