this post was submitted on 05 Jan 2026
16 points (94.4% liked)

TechTakes

2354 readers
119 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(2026 is off to a great start, isn't it? Credit and/or blame to David Gerard for starting this.)

[–] rook@awful.systems 6 points 6 hours ago (3 children)

Been listening to the latest oxide and friends podcast (predictions 2026), and ugh, so much incoherent ai boosting.

They’re an interesting company doing interesting things with a lot of very capable and clever engineers, but every year the ai enthusiasm ramps up, to the point where it seems like they’re not even listening to the things they’re saying and how they’re a little bit contradictory… “everyone will be a 10x vibe coder” and “everything will be made with some level of llm assistance in the near future” vs “no-one should be letting llms access anything where they could be doing permanent damage” and “there’s so much worthless slop in crates.io”. There’s enthusing over llm law firms, without any awareness of the recent robin ai collapse. Talk of llms generating their own programming language that isn’t readily human readable but is somehow more convenient for llms to extrude, but also talking about the need for more human review of vibe code. Simon Willison is there.

I feel like there’s a certain kind of very smart and capable vibe coder who really cannot imagine how people can and are using these tools to avoid having to think or do anything, and aren’t considering what an absolute disaster this is for everything and everyone.

Anyway, I can recommend skipping this episode and only bothering with the technical or more business oriented ones, which are often pretty good.

[–] istewart@awful.systems 3 points 3 hours ago

I'm sure it's all meant to bolster a sales pitch to corporate clients that "this is YOUR AI, that YOU CONTROL!"

I've been wondering, since Rust has a more complex compiler that can take longer to run, and people are typically farming it out to a build/CI server anyway... are these otherwise accomplished vibe coders like Klabnik and the Oxide bros pursuing an experience similar to the REPL/incremental compilation of Lisp or Smalltalk? We've already discussed how the mechanics are similar to a slot machine, but if you can convince yourself you're getting a "liveness" that you wouldn't otherwise get with a compiled, rigorously type-checked language, you're probably more than willing to ignore all that. I'm curious, but not curious enough to go pin one of these people up against the wall, or start poking the slop machine myself.

[–] BlueMonday1984@awful.systems 3 points 5 hours ago (1 children)

Anyway, I can recommend skipping this episode and only bothering with the technical or more business oriented ones, which are often pretty good.

AI puffery is easy for anyone to see through. If they're regularly mistaking it for something of actual substance, their technical/business sense is likely worthless, too.

[–] rook@awful.systems 3 points 4 hours ago

There’s room for some nuance there. They make some reasonable predictions, like chatbot use seeming likely to enter the dsm as a contributing factor for psychosis, and they’re all experienced systems programmers who immediately shot down Willison when he said that an llm-generated device driver would be fine, because device drivers either obviously work or obviously don’t, but then they fall foul of the old gell-mann amnesia problem.

Certainly, their past episodes have been good, and the back catalogue stretches back quite some time, but I'm not particularly interested in that sort of discussion here.

[–] rook@awful.systems 3 points 5 hours ago (2 children)

Ugh, I carried on listening to the episode in the hopes it might get better, but it didn’t deliver.

I don’t understand how people can say, with a straight face, that ai isn’t coming for your job and it is just going to make everyone more productive. Even if you ignore all the externalities of providing llm services (which is a pretty serious thing to ignore), have they not noticed the vast sweeping layoffs in the tech industry alone, let alone the damage to other sectors? They seem to be aware that the promise of the bubble is that agi will replace human labour, but seem not to think any harder about that.

Also, Willison thinks that a world without work would be awful, and that people need work to give their lives meaning and purpose and bruh. I cannot even.

[–] istewart@awful.systems 2 points 3 hours ago (1 children)

Even if you ignore all the externalities of providing llm services (which is a pretty serious thing to ignore)

Beyond the obvious and well-discussed material externalities, it strikes me that we don't know and can't yet know the true total cost of the LLM-driven development cycle. The manifestation of security holes and rewrites are possibly still years off in the future, maybe decades in the case of lower-level code. And yet, given industry practice and the mentality of most of the management strata, I have little doubt that such future costs will either a) be ignored completely and thus rendered true externalities or b) somebody else's problem, I done got my bag, brah, see ya...

[–] rook@awful.systems 1 points 3 hours ago

I feel like one day that “no guarantee of merchantability or fitness for any particular purpose” thing will have to give.

[–] jonhendry@iosdev.space 2 points 5 hours ago (1 children)

@rook

I figure two things will happen:

a) In a year or two companies will realize that LLMs aren't going to improve enough, and that they need skilled people because AI has turned their software into a shit show, and start hiring desperately.

or

b) In a year or two LLMs will get good enough for code that the software developed is just good enough despite the deskilling effects, and companies can get by with drastically reduced staff.

[–] rook@awful.systems 3 points 5 hours ago

My gloomy prediction is that (b) is the way things will go, at least in part because there are fewer meaningful consequences for producing awful software, and if you started from something that was basically ok it’ll take longer for you to fail.

Startups will be slopcoded and fail quick, or be human coded but will struggle to distinguish themselves well enough to get customers and investment, especially after the ai bubble pops and we get a global recession.

The problems will eventually work themselves out of the system one way or another, because people would like things that aren’t complete garbage and will eventually discover how to make and/or buy them, but it could take years for the current damage to go away.

I don’t like being a doomer, but it is hard to be optimistic about the sector right now.

[–] sc_griffith@awful.systems 7 points 7 hours ago (1 children)

update on the grok csam story: the heat on this was not dying down, so X has taken steps to address the issue.

update update: by restricting the csam generator to paying users

update update update: actually they didn't do that https://www.theverge.com/news/859309/grok-undressing-limit-access-gaslighting

[–] PedestrianError@towns.gay 6 points 6 hours ago (1 children)

@sc_griffith @BlueMonday1984 So just making it official that rich white men can commit all the sex crimes they want but those with less privilege might be stopped or face consequences.

[–] rook@awful.systems 4 points 5 hours ago

If that won’t sell it to governments around the world, I don’t know what will. Elon’s on to a winner with that strategy.

[–] BlueMonday1984@awful.systems 9 points 21 hours ago (1 children)

Found someone showing some well-founded concern over the state of programming, and decided to share it before heading off to bed:

alt text:

Is anyone else experiencing this thing where your fellow senior engineers seem to be lobotomised by AI?

I've had 4 different senior engineers in the last week come up with absolutely insane changes or code, that they were instructed to do by AI. Things that if you used your brain for a few minutes you should realise just don't work.

They also rarely can explain why they make these changes or what the code actually does.

I feel like I'm absolutely going insane, and it also makes me unable to trust anyone's answers or analyses, because I /know/ there is a high chance they just asked AI and passed it off as their own.

I think the effect AI has had on our industry's knowledge is really significant, and it's honestly very scary.

[–] Soyweiser@awful.systems 4 points 8 hours ago

Haha, wow, the reactions to that: 3 levels deep and suddenly people are talking about screws. (I'm being positive here btw, it's funny to see what people have made/learned and how happy they seem with it.)

[–] bitofhope@awful.systems 10 points 1 day ago (2 children)

How to neither downplay the death of Renee Good nor the uncountable number of people, mostly people of color, who were murdered by near equally fascist police forces without the public outrage her murder finally rightly elicited? I am tired and yet I feel bad to even complain about it because look at this shit.

[–] Soyweiser@awful.systems 3 points 8 hours ago

Yeah, just learned a black person was killed in the USA, and the only reason this was getting some attention was because Renee Good was also killed.

With the benefit of a sea of distance between me and the USA this is just really fucked up. Two different Americas.

[–] V0ldek@awful.systems 7 points 11 hours ago* (last edited 11 hours ago) (1 children)

There are two things here in my opinion:

  1. American cops are trained murderers, but they are trained, in particular, to avoid causing massive PR disasters with their murders*. A paramilitary goon with a rifle in a government organisation so opaque we still don't even know his identity is materially worse than a cop. It also looks much worse: the police have some completely undue public trust, while ICE just looks like a military force.
  2. We immediately had video of the full event. When cops kill people of colour there's usually no evidence since, again, they know how to pull a murder off without causing PR disasters. Basically the only reason George Floyd's murder wasn't successfully brushed aside is that we had video of it, and they tried to bury that shit hard. In this case I don't even think the victim being white or a citizen matters; the event itself is so fucking horrifying it'd elicit outrage anyway. I am 100% sure that if there wasn't video, just witness reports, it'd be out of the media cycle already.

* I don't want this to seem like a moral distinction, if anything the decorum granted to police forces is arguably a stepping stone that brought the USA here. Recall Mamdani's recent words: "For too long, those fluent in the good grammar of civility have deployed decorum to mask agendas of cruelty". HOWEVER, to me personally this is a rather chilling escalation. It shows that the PR part doesn't actually matter anymore. America is so far into the fascist pipeline that paramilitary forces can just execute citizens in broad daylight on the street. They don't need to hide it, they don't need to play coy about it, they can just post-facto label the victim as an Enemy of the State and move on. I'm sorry but to me this is like one step away from just rounding people up against a wall for fun. Human life is not only practically worthless to state actors, it's proudly and openly worthless as a matter of policy.

[–] mawhrin@awful.systems 5 points 10 hours ago (1 children)

i'm a bit conflicted here: on the one hand it's true that the american fascists are now escalating, buoyed by the feeling of being virtually untouchable, but on the other hand, this is not a distinct change of behaviour; it's that they basically widened their target group to include white people too.

the blm protests were fueled not by new knowledge or radically changed police behaviour after all, but by the wider availability of documentation (mainly phone videos).

(and on the gripping hand, extending brutal repressions to a majority group is a sign of escalation. but that only means that a large population of u.s. residents, i.e. the non-white ones, live and have always lived in a totalitarian state; the totalitarianism just wasn't evenly distributed until trump.)

[–] V0ldek@awful.systems 7 points 9 hours ago (1 children)

this is not a distinct change of behaviour

This is what I disagree with. The theatrics of justifying police brutality don't change the outcomes of police brutality -- people still die -- but the fact that the theatrics can now be dispensed with in favour of paramilitaries directly using violence to terrorise the people is a distinct change of behaviour towards fascism.

And I think it's important to recognise that because, as many scholars of fascism have warned time and time again, this is not a binary where a switch gets flipped and haha, since today you're in a fascist state. It's a progressive erosion of the social contract. ICE as deployed by the Trump regime right now is a basically textbook run: create a paramilitary force, recruit from existing criminal militias to select for loyalists and violent personalities, normalise them as keepers of order, push out or integrate any other enforcement structures so that the paramilitary becomes dominant. Basically the only difference is that Trump didn't have to create ICE, it was already there just waiting to be pushed through the pipeline.

Does this event fundamentally change how you and I perceive America? No, if you were paying attention you knew the rot inside, and you've been shouting that Trump is a fascist since the very beginning. It is, however, a sign that the situation is much worse than it was months ago, that fascism is progressing, and if this is the point at which someone not paying attention wisens up and goes "shit, we are moving towards a totalitarian nightmare" then good, welcome, grab a pitchfork.

[–] mawhrin@awful.systems 5 points 8 hours ago

It is, however, a sign that the situation is much worse than it was months ago, that fascism is progressing, and if this is the point at which someone not paying attention wisens up and goes “shit, we are moving towards a totalitarian nightmare” then good, welcome, grab a pitchfork.

oh, i'm not a pitchfork purist. anyone is welcome to grab one at any time.

[–] o7___o7@awful.systems 17 points 1 day ago (5 children)

Hey I think I discovered a way to fix America! What if we rewrite the US Constitution in Rust?

[–] EponymousBosh@awful.systems 2 points 3 hours ago

Nearly laughed out loud in a waiting room

[–] Soyweiser@awful.systems 2 points 8 hours ago

Too late, I already put it on the blockchain.

[–] jackr@lemmy.dbzer0.com 6 points 1 day ago

maybe memory safety would help people remember what letting fascists run things tends to do to your country?

[–] froztbyte@awful.systems 4 points 23 hours ago* (last edited 23 hours ago)

P0 blocker: USAian memory management suffers from multiple holes and transactional consistency issues

Prior attempts haven’t worked, closing LOLBUG

[–] BlueMonday1984@awful.systems 2 points 1 day ago (2 children)

I'd personally just overthrow the US government and make it a British colony once more /j

[–] fullsquare@awful.systems 5 points 1 day ago (1 children)

brexit would look wild in that timeline

[–] antifuchs@awful.systems 5 points 21 hours ago

Breconstruction

[–] o7___o7@awful.systems 8 points 1 day ago* (last edited 1 day ago) (1 children)

You should probably read "Monarchy Considered Harmful" (1776)

Edit: sorry I'll behave now

[–] froztbyte@awful.systems 2 points 23 hours ago (1 children)

Edit: sorry I'll behave now

promises promises.. ;p

[–] BlueMonday1984@awful.systems 5 points 1 day ago

The OpenAI Psychosis Suicide Machine now has a medical spinoff, which automates HIPAA violations so OpenAI can commit medical malpractice more efficiently

[–] sinedpick@awful.systems 12 points 1 day ago (1 children)

got my Urbit newsletter for this quarter (or whatever the fuck the cadence is) and what stood out to me this time was nockchain.org. I was going to sit and do a deep dive to come up with sneers for this but I just don't have the executive function right now. @self thoughts?

[–] self@awful.systems 13 points 1 day ago (2 children)

dear fuck just when I think I’m out urbit pulls me back in

analysis coming soon, I’m reviewing some technical sources provided by a world-class cryptocurrency expert (yes it’s @dgerard@awful.systems) and howdy fuck does this ever resemble the cardano grift but with Haskell replaced with a much worse ML with a much less coherent type system

[–] jackr@lemmy.dbzer0.com 6 points 1 day ago* (last edited 1 day ago) (2 children)

Are there any resources(read: sneers) on cardano*? I have a family member who is into that shit.

* Cardano specifically I mean, I am already aware of the extensive literature on crypto in general

[–] V0ldek@awful.systems 4 points 10 hours ago

It's a very elaborate parkour trick in Haskell that, through piles and piles of rigorous category theory, manages to achieve Nothing in a type-safe manner.

[–] dgerard@awful.systems 5 points 12 hours ago* (last edited 10 hours ago)

here's mine on their foray into smart contracts https://davidgerard.co.uk/blockchain/2021/09/06/news-the-eos-ico-was-a-fake-pump-cardano-smart-contracts-fail-cycling-vs-nexthash-microstrategy-de-indexed-el-salvador-live-tomorrow/

cardano is a shitcoin for traders to trade like a shitcoin, and it pretends that being written in Haskell by a mathematician who failed to get his degree, lied about going into a Ph.D program and claimed to have worked for DARPA but did not makes it very interesting and not just a shitcoin. this turns out not to be the case.

tl;dr UTXO was always fucking stupid if you wanted a system that did more than be a basic shitcoin

if a family member is into cardano nothing will get through. charles is a mathematical genius you see.

[–] Evinceo@awful.systems 7 points 1 day ago* (last edited 1 day ago)

but with Haskell replaced with a much worse ML with a much less coherent type system

Urbit moment

[–] nfultz@awful.systems 8 points 1 day ago

PSA - https://consumer.drop.privacy.ca.gov/ - CA residents can now request data deletion to many adtech data brokers.
