this post was submitted on 12 Oct 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
[–] sailor_sega_saturn@awful.systems 4 points 6 hours ago* (last edited 6 hours ago) (1 children)

Yet another billboard.

https://www.reddit.com/r/bayarea/comments/1ob2l2o/replacement_ai_billboard_in_san_francisco_who/

https://replacement.ai/

This time the website is a remarkably polished satire and I almost liked it... but the email it encourages you to send to your congressperson is pretty heavy on doomer talking points and light on actual good ideas (but maybe I'm being too picky?):

I am a constituent living in your district, and I am writing to express my urgent concerns about the lack of strong guardrails for advanced AI technologies to protect families, communities, and children.

As you may know, companies are releasing increasingly powerful AI systems without meaningful oversight, and we simply cannot rely on them to police themselves when the stakes are this high. While AI has the potential to do remarkable things, it also poses serious risks such as the manipulation of children, the enablement of bioweapons, the creation of deepfakes, and significant unemployment. These risks are too great to overlook, and we need to ensure that safety measures are in place.

I urge you to enact strong federal guardrails for advanced AI that protect families, communities, and children. Additionally, please do not preempt or block states from adopting strong AI protections, as local efforts can serve as crucial safeguards.

Thank you for your time and attention to this critical issue.

[–] swlabr@awful.systems 4 points 5 hours ago

but maybe I’m being too picky?

This is something I’ve been thinking about. There’s a lot of dialogue about “purity” and “purity tests” and “reading the room” in the more general political milieu. I think it’s fine to be picky in this context, because how else will your opinion be heard, let alone advocated for?

Like, there’s a time and place for consensus. Consensus often comes from people expressing their opinions and reaching a compromise, and rarely from people coming in already agreeing.

So wrt this particular example, it’s totally fine to be critical and picky. If you were discussing this in the forum where this letter was written, it probably wouldn’t be ok.

[–] BlueMonday1984@awful.systems 2 points 20 hours ago

Words of wisdom from Baldur Bjarnason (mostly repeated from his Basecamp post-mortem):

We know we’re reaching the late stages of a bubble when we start to see multiple “people in tech don’t really believe in all of this, honest, we just act like it because we think we have to, we’re a silent majority you see”, but the truth is that what you believe in private doesn’t matter. All that matters is that you’ve been acting like a true believer and you are what you do.

In work and politics, it genuinely doesn’t matter what you were thinking when you actively aided and abetted in shitting on people’s work, built systems that helped fascists, ruined the education system and pretty much all of media. What matters, and what you should be judged on, is what you did.

Considering a recent example where AI called someone a terrorist for opposing genocide, it's something that definitely bears repeating.

[–] lagrangeinterpolator@awful.systems 15 points 1 day ago* (last edited 1 day ago)

More AI bullshit hype in math. I only saw this just now so this is my hot take. So far, I'm trusting this r/math thread the most as there are some opinions from actual mathematicians: https://www.reddit.com/r/math/comments/1o8xz7t/terence_tao_literature_review_is_the_most/

Context: Paul Erdős was a prolific mathematician who had more of a problem-solving style of math (as opposed to a theory-building style). As you would expect, he proposed over a thousand problems for the math community that he couldn't solve himself, and several hundred of them remain unsolved. With the rise of the internet, someone had the idea to compile and maintain the status of all known Erdős problems on a single website (https://www.erdosproblems.com/). This site is still maintained by this one person, which will be an important fact later.

Terence Tao is a present-day prolific mathematician, and in the past few years, he has really tried to take AI with as much good faith as possible. Recently, some people used AI to search up papers with solutions to some problems listed as unsolved on the Erdős problems website, and Tao points this out as one possible use of AI. (I personally think there should be better algorithms for searching literature. I also think conflating this with general LLM claims and the marketing term of AI is bad-faith argumentation.)

You can see what the reasonable explanation is. Math is such a large field now that no one can keep tabs on all the progress happening at once. The single person maintaining the website missed a few problems that got solved (he didn't see the solutions, and/or the authors never bothered to inform him). But of course, the AI hype machine got going real quick. GPT5 managed to solve 10 unsolved problems in mathematics! (https://xcancel.com/Yuchenj_UW/status/1979422127905476778#m, original is now deleted due to public embarrassment) Turns out GPT5 just searched the web/training data for solutions that have already been found by humans. The math community gets a discussion about how to make literature more accessible, and the rest of the world gets a scary story about how AI is going to be smarter than all of us.

There are a few promising signs that this is getting shut down quickly (even Demis Hassabis, CEO of DeepMind, thought that this hype was blatantly obvious). I hope this is a bigger sign for the AI bubble in general.

EDIT: Turns out it was not some rando spreading the hype, but an employee of OpenAI. He has taken his original claim back, but not without trying to defend what he can by saying AI is still great at literature review. At this point, I am skeptical that this even proves AI is great at that. After all, the issue was that a website maintained by a single person had not updated the status of 10 problems inside a list of over 1000 problems. Do we have any control experiments showing that a conventional literature review would have been much worse?

[–] BlueMonday1984@awful.systems 11 points 1 day ago

Found a quality sneer in the wild, taking aim at vibe-coded "websites"

[–] dgerard@awful.systems 8 points 2 days ago (2 children)

take a drink for every 10,000 lies or exaggerations https://pitchbook.com/news/articles/a-tech-skeptics-ai-video-startup-wants-to-change-hollywood

more than that and you'll wipe out in 5 min

he's very skeptical u kno

I'm starting to think some of these tech skeptics are only pretending to be skeptics.

[–] ShakingMyHead@awful.systems 9 points 2 days ago

Now, Talukdar thinks we’re only one year away from experiencing a holodeck ourselves.

Very skeptical indeed.

[–] rook@awful.systems 14 points 2 days ago* (last edited 2 days ago) (2 children)

Somehow I missed the fact that yesterday paypal’s blockchain operator fucked up and accidentally minted 300 trillion itchy and scratchy coins.

https://www.web3isgoinggreat.com/?id=paxos-accidental-mint

And now apparently it turns out that it was just a sequence of stupid whereby they accidentally deleted 300 million, which would have been impressive all by itself, then tried to recreate it (🎶 but at least it isn’t fiat currency🎶) and got the order of magnitude catastrophically wrong and had to delete that before finally undoing their original mistake. Future of finance right here, folks.

Anyone else know the grisly details? The place I heard it from is a mostly-private account on mastodon which isn’t really shareable here, and they didn’t say where they’d heard it.

[–] dgerard@awful.systems 7 points 2 days ago* (last edited 2 days ago)

I'm apparently quoted in one of the expensive low circulation finance newsletters (Grant's Interest Rate Observer) today or tomorrow saying how even the relatively competent crypto firms are also run by clowns.

[–] gerikson@awful.systems 6 points 2 days ago (1 children)

2 items

Here's a lobster being sad a poor uwu smol bean AI shill is getting attacked

Would you take a kinder tone to the author's lack of skill/knowledge if it weren't about AI? It would be ironic if hatred of AI caused us to lose our humanity.

link

here's political mommy blog Wonkette having fun explaining the hallucinatory insanity that is Google AI summaries

https://www.wonkette.com/p/are-you-ok-google-ai-do-you-need

[–] macroplastic@sh.itjust.works 7 points 2 days ago (1 children)

I saw Algernon's fedi posts (as linked in his lobsters comment) first, and I have to say the majority in the lobsters thread are being entirely too kind.

Calling OP shit-for-brains is an insult to both shit and brains.

[–] gerikson@awful.systems 4 points 2 days ago

Pretty sure post author knew all about this scam, and just pretended to fall for it to reveal how GenAI had "saved him".

[–] BlueMonday1984@awful.systems 9 points 3 days ago (2 children)
[–] V0ldek@awful.systems 7 points 19 hours ago (1 children)

Happy that we graduated from making military decisions based on what the Oracle of Delphi hallucinated to making military decisions based on what Oracle® DelPhi® Enterprise hallucinated

[–] BlueMonday1984@awful.systems 2 points 18 hours ago

"Don't rely on random oracles and spirits when running a military campaign, you fool, you moron." - Sun Tzu, The Art of War (paraphrased)

[–] pikesley@mastodon.me.uk 6 points 2 days ago (1 children)

@BlueMonday1984 very cool that openai now has its hands on all of this clown's military planning

[–] Reach_the_man@awful.systems 8 points 2 days ago* (last edited 2 days ago)

back in the day they used to hire proper court magicians for this, smh

[–] sailor_sega_saturn@awful.systems 9 points 3 days ago* (last edited 3 days ago)

The latest in the long line of human-hostile billboards:

https://www.reddit.com/r/bayarea/comments/1o8s3lz/humanity_had_a_good_run_billboard/

https://dearworld.ai/

This is positioning itself as an AI doomer website, but it could also be an attempt at viral marketing. We'll see, I guess.

[–] BlueMonday1984@awful.systems 15 points 3 days ago
[–] sansruse@awful.systems 10 points 3 days ago

"'Chat and I' have become 'really close lately.'" says the senior US Army officer in South Korea

i don't know how to sneer this better than Mr. General Taylor has done himself. Why doesn't he just commission ChatGPT as a colonel like the military did earlier for Joe Lonsdale and those other chucklefucks? Give ChatGPT's hallucinations the force of the UCMJ, i beg you.

[–] FredFig@awful.systems 11 points 3 days ago

Hank Green has been one of my barometers for the moderate opinion and he's sounding worryingly like Zitron in his last video: https://www.youtube.com/watch?v=Q0TpWitfxPk

The attention black hole around nvidia and AI is so insane, I guess it's because everyone knows there's no next thing to jump onto.

[–] antifuchs@awful.systems 9 points 3 days ago
[–] rook@awful.systems 8 points 3 days ago

Interesting developments reported by ars technica: Inside the web infrastructure revolt over Google’s AI Overviews

I don’t think any of this is actually good news for the people who’re actually suffering the effects of ai scraping and bullshit generation, but I do think it is a good idea that someone with sufficient clout is standing up to google et al and suggesting that they can’t just scrape all the things, all the time, and then screw the source of all their training data.

I’m somewhat unhappy that it is cloudflare doing this, a company who have deeply shitty politics and an unpleasantly strong grasp on the internet already. I very much do not want the internet to be divided into cloudflare customers, and the slop bucket.

[–] gerikson@awful.systems 9 points 3 days ago* (last edited 3 days ago) (1 children)

Good news everyone, there's 2 bonkers pieces about the stars and the galaxy on LW right now!

Here's a dude very worried about how comets impacting the sun could cause it to flare and scorch the earth. Nothing but circumstantial evidence, and GenAI-researched to boot. Appeared in the EA forum as part of their "half-baked ideas" amnesty.

https://www.lesswrong.com/posts/9gAksZ25wbvfS8FAT/a-new-global-risk-large-comet-s-impact-on-sun-could-cause

The only thing I'd note about this is that even if the comet strikes along the plane of the ecliptic (not an unreasonable assumption), the planet would still have to be exactly in the right place for this assumed plume of energy to do any damage. And if it hits the Sahara or the Pacific, NBD presumably.

(Edit: turns out the above is just the abstract, the full piece is here:

https://docs.google.com/document/d/1OHgc7Q4git6OfDNTE_TDf9fFNgrEEnCUfnPMIwbK3vg/edit?usp=sharing)

Then there's this person looking really far ahead into how to get energy from the universe

https://www.lesswrong.com/posts/YC4L5jxHnKmCDSF9W/some-astral-energy-extraction-methods

Tying galaxies together: Anchor big rope to galaxies as they get pulled apart by dark matter. Build up elastic potential energy which can be harvested. Issue: inefficient. [...] Not clear (to me) how you anchor rope to the galaxies.

Neutrino capture: Lots of neutrinos running around, especially if you use hawking radiation to capture mass energy of black holes. So you might want to make use of them. But neutrinos are very weakly interacting, so you need dense matter to absorb their energy/convert them to something else. Incredibly dense. To stop one neutrino with lead you need 1 lightyear of matter, with a white dwarf you need an astronomical unit, and for a neutron star (10^17 kg/m^3 density, 10km radius) you need 340 meters of matter. So neutrino capture is feasible,

(my emphasis)
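
For what it's worth, the stopping distances do roughly follow from naively scaling the interaction length inversely with mass density. A quick back-of-envelope in Python, assuming a fixed per-nucleon cross-section and taking the quoted light-year-of-lead figure as the baseline (the density values are my own rough numbers, not the author's):

```python
# Scale the neutrino interaction length inversely with mass density,
# using the quoted ~1 light year of lead as the baseline.
LIGHT_YEAR_M = 9.461e15  # metres
AU_M = 1.496e11          # metres

RHO_LEAD = 1.134e4       # kg/m^3
RHO_WHITE_DWARF = 1e9    # kg/m^3, rough mid-range value (my assumption)
RHO_NEUTRON_STAR = 1e17  # kg/m^3, as quoted

def stopping_length(rho):
    """Interaction length in metres, scaled down from the lead baseline."""
    return LIGHT_YEAR_M * RHO_LEAD / rho

print(f"white dwarf:  {stopping_length(RHO_WHITE_DWARF) / AU_M:.2f} AU")  # ~0.72 AU
print(f"neutron star: {stopping_length(RHO_NEUTRON_STAR):.0f} m")         # ~1073 m
```

Same order of magnitude as the quoted figures, so the arithmetic isn't the problem; "feasible" is doing all the heavy lifting.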

Black Hole Bombs: Another interesting way of extracting energy from black holes is superradiant instabilities, i.e. making the black hole into a bomb. You use light to extract angular momentum from the black hole, kinda like the Penrose process, and get energy out. With a bunch of mirrors, you can keep reflecting the light back in and repeat the process. This can produce huge amounts of energy quickly, on the order of gamma ray bursts for stellar mass black holes. Or if you want it to be quicker, you can get 1% of the black hole's mass energy out in 13 seconds. How to collect this is unclear.

(again, my emphasis)
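
The power claim is just as easy to sanity-check. Another sketch, taking "stellar mass" to mean one solar mass (my assumption, the post doesn't pin it down):

```python
# 1% of a solar-mass black hole's rest energy, released over 13 seconds.
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

energy_j = 0.01 * M_SUN * C**2  # ~1.8e45 J
power_w = energy_j / 13.0       # ~1.4e44 W

print(f"energy: {energy_j:.2e} J, power: {power_w:.2e} W")
# Isotropic-equivalent GRB luminosities run roughly 1e44 to 1e46 W,
# so "on the order of gamma ray bursts" at least checks out.
# Collecting it, as the author concedes, is left as an exercise.
```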

Same author has a recent post titled "Don't Mock Yourself". Glad to see they've taken this advice to heart and outsourced the mocking.

[–] blakestacey@awful.systems 8 points 3 days ago

Disclaimer: abstract above, content and main ideas are human-written; the full text below is written with significant help of AI but is human-verified as well as by other AIs.

"Oh, that pizza sauce recipe that calls for glue? It's totally OK, I checked it out with MechaHitler."

[–] nfultz@awful.systems 7 points 3 days ago

https://www.adexchanger.com/marketers/the-ad-context-protocol-aims-to-make-sense-of-agentic-ad-demand/ - one more way to not know which half of your marketing spend was useless, or one step closer to reifying dead internet theory?

[–] sc_griffith@awful.systems 20 points 4 days ago* (last edited 4 days ago) (3 children)

as an ezra klein hater since 2020 the past month or so has been victory lap after victory lap. and now, well

he's interviewing yud

[–] V0ldek@awful.systems 4 points 15 hours ago

I still refuse to learn what an ezra is, they will have to drag my ass to room 101 to force that into my brain

[–] swlabr@awful.systems 8 points 3 days ago

I swear to god if yud goes on conan needs a friend (who recently interviewed a freshly minted riyadh comedy festival alum bill burr) i will unplug from this simulation

[–] fnix@awful.systems 19 points 4 days ago

I remember when this guy used to castigate Sam Harris for platforming Charles Murray’s race science. The same guy who now eulogizes Charlie Kirk and does the bidding of billionaires. Really encapsulates the elite pivot to the right.

[–] corbin@awful.systems 10 points 4 days ago (1 children)

Things I don't want to know more about: there's a reasonable theory that Eigenrobot is influencing USA politics; certain magic numbers in Eigen's tweets have been showing up in some of the protectionism coming out of the White House. Stubbing this mostly in the hope that somebody else feels like doing the research.

[–] Architeuthis@awful.systems 7 points 4 days ago

Zitron catching strays in the comments for having too much of a bullying tone, I guess against billionaires and tech writers, and being too insistent on his opinion that the whole thing makes no financial sense. It's also lamented that the entire field of ML avoids bsky because it has a huge AI hostility problem.

Concern trolling notwithstanding, the eigenrobot stuff is worrisome though, if not for him specifically then for how extremely online the ideological core of the administration seems to be, as close to the lunatics running the asylum as you'll get in a modern political setting.

[–] blakestacey@awful.systems 12 points 4 days ago
[–] saucerwizard@awful.systems 12 points 4 days ago (2 children)

OT: thanks for the author recommendations last thread (or so) guys. I finished listening to Ninefox Gambit the other day and enjoyed it quite a bit (space texan woobie! dyscalculia representation!).

[–] BlueMonday1984@awful.systems 3 points 3 days ago (1 children)
[–] V0ldek@awful.systems 3 points 15 hours ago (1 children)

They already had the Essential thing in the Nothing 3, but funnily enough, when I was shopping for a phone, it looked like the least obtrusive and annoying "AI feature" across the board, because every single fucking phone is now "AI powered" or whatever the shit.

But if they turn their OS into "AI native" and it actually sucks ass, then great, I don't think there's literally any non-shitty tech left with Framework turning fash.

[–] BlueMonday1984@awful.systems 2 points 12 hours ago* (last edited 12 hours ago) (1 children)

I don’t think there’s literally any non-shitty tech left with Framework turning fash.

Doing some digging, it seems GNOME's still non-shitty - they've reportedly refused sponsorship money from Framework, to the whining of multiple people online (post is in Russian).

Doesn't change the fact that Framework's dealt a big blow to right-to-repair by doing this, but it's something.

EDIT: Just gonna add in something I gotta get off my chest:

Even from a "ruthless capitalist" perspective, Framework's fash turn is pretty baffling to me. They positioned themselves as beacons of right-to-repair, as good guys in tech trying to "fix consumer electronics, one category at a time" - their shit was overtly political from the fucking start. People weren't buying them to get the fastest laptops, or to get the best value for money, they bought them because they believed in their stated mission. Anyone with business sense would've known shilling a fascist's personal Linux "distro" would've presented a severe risk to Framework's brand.

Exactly how Nirav got blindsided by this shit, I genuinely don't understand. Considering his response to the backlash involved "aPoLiTiCaL" "bIg TeNt" blather and publicly farming compassion from Twitter fash, it's probably because he's an outright fascist himself and assumed everyone else around him shared his utterly rancid views.

[–] cstross@wandering.shop 1 points 1 hour ago

@BlueMonday1984 @techtakes Gnome may be politically non-shitty but it's still an unusable revolting mess, a bad parody of a good desktop environment.

[–] gerikson@awful.systems 10 points 4 days ago (1 children)

"Why is LessWrong awesome and it's because we're prepared to take racism seriously isn't it"

https://www.lesswrong.com/posts/HZrqTkTCgnFhEgxvQ/what-is-lesswrong-good-for

The focus on Covid is weird, seeing that AFAIK basically everyone who knew anything about pandemics was sounding the alarm at the same time that (some) rats and techbros were trying to corner the market in protective gear.

[–] Soyweiser@awful.systems 8 points 4 days ago* (last edited 4 days ago) (1 children)

Also they have memory-holed that Scott claimed stopping smoking would help with covid. Not because there was proof of it, but just because he dislikes smokers. Maximum truth-seeking, my ass.

[–] Amoeba_Girl@awful.systems 6 points 3 days ago

Ohh, good advice. If you're infected with covid, I also recommend you make sure to take adequate breaths.
