this post was submitted on 27 Oct 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the spooky surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Happy Halloween, everyone!)

[–] sinedpick@awful.systems 16 points 4 days ago (4 children)

Ugh. Hank Green just posted a 1-hour interview with Nate Soares about That Book. I'm halfway through on 2x speed and so far zero skepticism of That Book's ridiculous premises. I know it's not his field but I still expected a bit more from Hank.

A YouTube comment says it better than I could:

Yudkowsky and his ilk are cranks.

I can understand being concerned about the problems with the technology that exist now, but hyper-fixating on an unfalsifiable existential threat is stupid as it often obfuscates from the real problems that exist and are harming people now.

[–] UltimateNoob@programming.dev 8 points 4 days ago* (last edited 4 days ago) (2 children)

There is now a video on SciShow about it too.

This perception of AI as a competent agent that is inching ever closer to godhood is honestly gaining way too much traction for my tastes. There's a guy in the comments of Hank's first video, I checked his channel and he has a video "We Are Not Ready for Superintelligence" and it got a whopping 8 million views! There's another channel I follow for sneers, and their video on Scott's AI 2027 paper has 3.7 million views and a video about AI "attempted murder" has 8.5 million. Damn.

I wonder: when the market finally realises that AI is not actually smart and is not bringing any profits, and the bubble subsequently bursts, will it change this perception, and in what direction? I would wager that crashing the US economy will give a big incentive to change it, but will it be enough?

[–] ShakingMyHead@awful.systems 6 points 3 days ago (1 children)

I could also see the response to the bubble bursting being something like "At least the economy crashing delayed the murderous superintelligence."

[–] o7___o7@awful.systems 3 points 3 days ago* (last edited 3 days ago) (1 children)

I'm betting on a new version of the "stabbed in the back" myth. Fash love that one.

[–] pikesley@mastodon.me.uk 4 points 3 days ago (1 children)

@o7___o7 @ShakingMyHead it's a cult: it can never fail, it can only *be* failed

[–] o7___o7@awful.systems 2 points 2 days ago

"We would have been immortal God-Kings if not for you meddling (woke) kids!"

[–] BlueMonday1984@awful.systems 4 points 3 days ago

I wonder: when the market finally realises that AI is not actually smart and is not bringing any profits, and the bubble subsequently bursts, will it change this perception, and in what direction? I would wager that crashing the US economy will give a big incentive to change it, but will it be enough?

Once the bubble bursts, I expect artificial intelligence as a concept will suffer a swift death, with the many harms and failures of this bubble (hallucinations, plagiarism, the slop-nami, etcetera) coming to be viewed as the ultimate proof that computers are incapable of humanlike intelligence (let alone Superintelligence™). There will likely be a contingent of true believers even after the bubble's burst, but the vast majority of people will respond to the question of "Can machines think?" with a resounding "no".

AI's usefulness to fascists (for propaganda, accountability sinks, misinformation, etcetera) and the actions of CEOs and AI supporters involved in the bubble (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera) will also pound a good few nails into AI's coffin, by giving the public plenty of reason to treat any use of AI as a major red flag.

[–] Architeuthis@awful.systems 11 points 4 days ago

it often obfuscates from the real problems that exist and are harming people now.

I am firmly on the side of it's possible to pay attention to more than one problem at a time, but the AI doomers are in fact actively downplaying stuff like climate change and even nuclear war, so them trying to suck all the oxygen out of the room is a legitimate problem.

Yudkowsky and his ilk are cranks.

That Yud is the Neil Breen of AI is the best thing ever written ~~about rationalism~~ in a YouTube comment.

[–] blakestacey@awful.systems 11 points 4 days ago (1 children)

"I can read HTML but not CSS" —Eliezer Yudkowsky, 2021 (and since apparently scrubbed from the Internet, to live only in the sneers of fond memory)

[–] swlabr@awful.systems 5 points 4 days ago

It’s giving Japanese Mennonite reactionary coding

[–] mii@awful.systems 6 points 4 days ago (1 children)

I made it 30 minutes into this video before closing it.

What I like about Hank is that he usually reacts to community feedback and is willing to change his mind when confronted with new perspectives, so my hope is that enough people will tell him that Yud and friends are cranks and he'll do an update.

[–] Rinn@awful.systems 4 points 2 days ago

I dunno about that; the recent knitting drama took a while to clear up, and I'm not sure AI sceptics are as determined a crowd as pissed-off knitters.

(Tl;dr on the drama: there was a video on SciShow about knitting that many (myself included) felt was not well researched, misrepresented the craft, and had a misogynistic vibe. It took a lot of pressure from the knitting community to get, in order, a bad "apology", a better apology, and the video taken down.)