this post was submitted on 05 Apr 2026

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] blakestacey@awful.systems 11 points 16 hours ago* (last edited 16 hours ago)

I aired some Reviewer #2 grievances in the bsky comments:

https://bsky.app/profile/ronanfarrow.bsky.social/post/3mitapp7j2s2c

"Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”"

As a physicist, I have never pressed F to doubt harder.

"In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents."

To the best of my knowledge, those forty thousand suggestions were never evaluated by any other researchers, so the "deadly" part remains unverified.

(The original paper was published as a "comment": https://www.nature.com/articles/s42256-022-00465-9)

Similar claims of AI-facilitated discoveries have turned out to be overblown in other fields.

https://pubs.acs.org/doi/pdf/10.1021/acs.chemmater.4c00643

"In a 2025 study, ChatGPT passed the test more reliably than actual humans did."

If this is referring to Jones and Bergen's "Large Language Models Pass the Turing Test", that's a preprint (arXiv:2503.23674) which has yet to pass peer review more than a year after its posting.

"A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win"

Which researchers?

(Hint: Eliezer Yudkowsky is not a researcher.)

AI: "I will convince you to let me out of this box"

Humanity (wringing hands): "Oh, where is our savior? Who will stand fast in the face of all entreaties?"

Bartleby the Scrivener: hello

"...a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor."

Phrasing like this subtly underplays how the (to put it briefly) weird people were part of EA all along.

https://repository.uantwerpen.be/docman/irua/371b9dmotoM74

"In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” ... one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it’s already happening."

Barrett et al.'s arXiv:2206.08966? AFAIK, that was never peer-reviewed either; "posted" is not the same as "published". And claims in this area are rife with criti-hype:

https://pivot-to-ai.com/2025/09/18/openai-fights-the-evil-scheming-ai-which-doesnt-exist-yet/

Oh, right, the "Future of Life Institute". Pepperidge Farm remembers:

"In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper."

https://en.wikipedia.org/wiki/Future_of_Life_Institute#Activism

"Tegmark also rejected any suggestion that nepotism could have played a part in the grant offer being made, given that his brother, Swedish journalist Per Shapiro ... has written articles for the site in the past."

https://www.vice.com/en/article/future-of-life-institute-max-tegmark-elon-musk/