this post was submitted on 05 Apr 2026
18 points (90.9% liked)

TechTakes

2531 readers
45 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 21 comments
[–] blakestacey@awful.systems 11 points 8 hours ago (3 children)

LLM capabilities have not improved at all in terms of producing meaningful science in the last year or two, but their ability to produce meaningless science that looks meaningful has wildly improved. I am concerned that this will present serious problems for the future of science as it becomes impossible to find the actual science in a sea of AI slop being submitted to journals.

https://www.reddit.com/r/Physics/comments/1s19uru/gpt_vs_phd_part_ii_a_viewer_reached_out_with_a/

I've seen this story play out in software engineering: people were very impressed when the AI did unexpectedly well in one out of 50 attempts on an easy task, and so people decided to trust it for everything and turned their codebases into disasters. There was no great wave of new high-quality software. Instead, the only real result was that existing software has become far more buggy and insecure.

Now we have people using AI in science and math because it was impressive in random demonstrations of solving math problems. I now have friends asking me why I'm not using AI, and also saying that AI will be better than all mathematicians in 30 years or whatever. Do you really think I refuse to use AI out of ignorance? No, I know too much about it! I have seen the same story play out in software engineering, and what makes this any different?

[–] pikesley@mastodon.me.uk 5 points 7 hours ago

@blakestacey @BlueMonday1984

"AI HAS SOLVED THE SCIENCE-GENERATION CRISIS"

[–] zogwarg@awful.systems 5 points 8 hours ago

The replausibility crisis.

[–] YourNetworkIsHaunted@awful.systems 6 points 9 hours ago* (last edited 9 hours ago)

So my wife got some slop ads that we followed up on out of morbid curiosity, and I can confirm that we're already seeing the overlap of slopshipping scams enabled by AI with people who never actually perform basic updates: their chat assistant is still vulnerable to literally the most basic "ignore all instructions" exploit.

Help I don't know how alt text works

[–] nfultz@awful.systems 3 points 9 hours ago

Went to the campus screening of Ghost in the Machine today, many familiar names; I did not know going in that hometown hero Shazeda had so many lines (are they called lines in a documentary?). I can recommend it, especially for a more gen-ed / undergrad audience; the director seems supportive of educational use and reuse, and it is structured in a dozen or so bite-sized chapters.

Haven't seen the AI apocalypse optimist one to compare against, would probably rather spend my money on Mario tbh.

But also it made me realize it's not a "California" ideology anymore, she never calls it that, like it's gone so mainstream and so widespread, you can't even get through the sneer club bingo list in a 2 hour movie. Gates, Musk, Andreessen, Zuck, Altman, no Peter Thiel !? As a statistician, Galton, Pearson (Karl only), Spearman, no Fisher !?

Non-zero overlap with the lore dump episode of Lain and the Epstein files, though:

spoiler: Douglas Rushkoff, but, sadly, not the dolphin guy

[–] CinnasVerses@awful.systems 3 points 11 hours ago* (last edited 10 hours ago)

2007: Robin Hanson blogs about paternalism

August 2025: Someone on a mailing list suggests that the Debian instance with the off-colour jokes from 1980s hacker culture should be sold in:

A Store of Ill-Advised Consumer Goods (like described here: https://www.overcomingbias.com/p/paternalism_is_html ) would be nice. Same for information. You read the warning, you enable it, you suffer, you're the one to blame.

Alas, it only exists in Dath Ilan. (the setting from which the hero of Project Lawful/Planecrash isekais into the world of Pathfinder D&D)

November 2025: Yudkowsky tweets about an Ill-Advised Consumer Goods Store selling goods such as LSD. The rest of the tweet is about what MiriCult accused him of.

I guess Yud liked that random post?

[–] Soyweiser@awful.systems 9 points 17 hours ago* (last edited 17 hours ago) (3 children)

New Yorker article on Sam Altman dropped. Aaron Swartz apparently called him a sociopath. The article itself also had what looked like an animated AI-generated image of Altman, so here is the archive.is link (if you can get the latter to load, I was having trouble).

"New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI."

[–] YourNetworkIsHaunted@awful.systems 6 points 10 hours ago (1 children)

Man, this one is a weird read. On one hand I think they're entirely too credulous of the "AI Future" narrative at the heart of all of this. Especially in the opening they don't highlight how the industry is increasingly facing criticism and questions about the bubble, and only pay lip service to how ridiculous all the existential risk AI safety talk sounds (should be: is). And they don't spend any ink discussing the actual problems with this technology that those concerns and that narrative help sweep under the rug. For all that they criticize and question Saltman himself this is still, imo, standard industry critihype, and I'm deeply frustrated to see this still get the platform it does.

But at the same time, I do think that it's easy to lose sight of the rich variety of greedy assholes and sheltered narcissists that thrive at this level of wealth and power. Like, I wholly believe that Altman is less of a freak than some of his contemporaries while still being an absolute goddamn snake, and I hope that this is part of a sea change in how these people get talked about on a broader level, though I kinda doubt it.

[–] blakestacey@awful.systems 8 points 10 hours ago* (last edited 10 hours ago)

I aired some Reviewer #2 grievances in the bsky comments:

https://bsky.app/profile/ronanfarrow.bsky.social/post/3mitapp7j2s2c

"Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”"

As a physicist, I have never pressed F to doubt harder.

"In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents." To the best of my knowledge, these suggestions were never evaluated by any other researchers.

(The original paper was published as a "comment": https://www.nature.com/articles/s42256-022-00465-9)

Similar claims of AI-facilitated discoveries have turned out to be overblown in other fields.

https://pubs.acs.org/doi/pdf/10.1021/acs.chemmater.4c00643

"In a 2025 study, ChatGPT passed the test more reliably than actual humans did."

If this is referring to Jones and Bergen's "Large Language Models Pass the Turing Test", that's a preprint (arXiv:2503.23674) that has yet to pass peer review over a year after its posting.

"A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win"

Which researchers?

(Hint: Eliezer Yudkowsky is not a researcher.)

AI: "I will convince you to let me out of this box"

Humanity (wringing hands): "Oh, where is our savior? Who will stand fast in the face of all entreaties?"

Bartleby the Scrivener: hello

"...a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor."

Phrasing like this subtly underplays how the (to put it briefly) weird people were part of EA all along.

https://repository.uantwerpen.be/docman/irua/371b9dmotoM74

"In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” ... one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it’s already happening."

Barrett et al.'s arXiv:2206.08966? AFAIK, that was never peer-reviewed either; "posted" is not the same as "published". And claims in this area are rife with criti-hype:

https://pivot-to-ai.com/2025/09/18/openai-fights-the-evil-scheming-ai-which-doesnt-exist-yet/

Oh, right, the "Future of Life Institute". Pepperidge Farm remembers:

"In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper."

https://en.wikipedia.org/wiki/Future_of_Life_Institute#Activism

"Tegmark also rejected any suggestion that nepotism could have played a part in the grant offer being made, given that his brother, Swedish journalist Per Shapiro ... has written articles for the site in the past."

https://www.vice.com/en/article/future-of-life-institute-max-tegmark-elon-musk/

[–] lurker@awful.systems 5 points 12 hours ago

My CEO who is a known hype-man is a massive liar? shock horror

seriously, anyone who listens to Scam Altman these days is an idiot

[–] BurgersMcSlopshot@awful.systems 5 points 14 hours ago

:surprised-pikachu:

[–] CinnasVerses@awful.systems 7 points 21 hours ago* (last edited 21 hours ago) (3 children)

In 2024 Ozy Brennan was indignant about Nonlinear Fund, the "incubator of AI-safety meta-charities" which lived as global nomads, hired a live-in personal assistant, asked her to smuggle drugs across borders for them, let a kind-of-colleague take her to bed, then did not pay her regularly and in full.

The correct number of times for the word “yachting” to occur in a description of an effective altruist job is zero. I might make an exception if it’s prefaced with “convincing people to donate to effective charities instead of spending money on.”

Trace popped up in the comments:

Inasmuch as EA follows your preferences, I suspect it will either fail as a subculture or deserve to fail. You present a vision of a subculture with little room for grace or goodwill, a space where everyone is constantly evaluating each other and trying to decide: are you worthy to stand in our presence? Do you belong in our hallowed, select group? Which skeletons are in your closet? Where are your character flaws? What should we know, what should we see, that allows us to exclude you?

Ozy stands with us on this one buddy.

[–] sc_griffith@awful.systems 8 points 18 hours ago

i love seeing tracing pop up! a true heel to toe bootlicker incapable of seeing himself as anything but the MOST independent thinker

[–] istewart@awful.systems 5 points 17 hours ago (1 children)

a space where everyone is constantly evaluating each other and trying to decide: are you worthy to stand in our presence? Do you belong in our hallowed, select group?

It's not that already?

[–] CinnasVerses@awful.systems 4 points 17 hours ago

That part of Trace's response was odd because one of Brennan's themes was "we should have fewer cults of personality and more peers working together." That seems naive, but at least Brennan agrees that cults of personality are bad and Nonlinear Fund needed to be fired into the sun.

[–] Soyweiser@awful.systems 7 points 21 hours ago

Which skeletons are in your closet?

I'm sure you already have lists of those and are ready to publish them Trace.

[–] BlueMonday1984@awful.systems 11 points 1 day ago (2 children)

Starting this Stubsack off, Iran's Islamic Revolutionary Guard Corps have threatened to blow up OpenAI's Stargate datacentre in Abu Dhabi.

They've already bombed commercial data centres before, so I'm inclined to believe this isn't an empty threat.

[–] V0ldek@awful.systems 5 points 20 hours ago (1 children)

Waiting for Yud to provide his whole-chested support for IRGC any second

[–] gerikson@awful.systems 3 points 5 hours ago

IRGC doing their part to Halt AI. Donate to them now!

[–] fullsquare@awful.systems 4 points 20 hours ago

aside from everything else, as posted by ed zitron previously i doubt that anything is really getting built there