TechTakes

2153 readers
122 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)


So apparently there's a resurgence of positive feelings about Clippy, who now looks retroactively good by contrast with ChatGPT, like, "it sucked, but at least it was genuinely trying to help us".

Content warning, discussion of suicide in this paragraph: I remember how it was a joke (predating "meme") to make edits of Clippy saying tone-deaf things like, "It looks like you're trying to write a suicide note. Would you like to know more about how to choose a rope for a noose?" This felt funny because it was absolutely inconceivable that it could ever happen. Now we live in a reality where literally just that has already happened, and the joke ain't funny anymore, and people who computed in the 90s are going, "Clippy would never have done that to us. Clippy only wanted to help us write business letters."

Of course I recognise that this is part of the problem—Clippy was an attempt at commodifying the ELIZA effect, the natural instinct to project personhood onto an interaction that presents itself as sentient. And by reframing Clippy's primitive capacities as an innocent simple mind trying its best at a task too big for it, we engage in the same emotional process that leads people to a breakdown over OpenAI killing their wireborn husband.

But I don't know. Another name for that process is "empathy". You can do that with plushies, with pet rocks or Furbies, with deities, and I don't think that's necessarily a bad thing; it's like exercising a muscle. If you treat your plushies as deserving care and respect, it gets easier to treat farm animals, children, or marginalised humans with care and respect.

When we talked about Clippy as if it were sentient, it was meant as a joke, funny by the sheer absurdity of it. But I'm sure some people somewhere actually thought Clippy was someone, that there is such a thing as being Clippy—people thought that of ELIZA, too, and ELIZA has a grand repertoire of what, ~100 set phrases it uses to reply to everything you say. Maybe it would be better to never make such jokes, to be constantly de-personifying the computer, because ChatGPT and its ilk are deliberately designed to weaponise and prey on that empathy instinct. But I do not like exercising that ability, de-personification. That is a dangerous habit to get used to…
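(For a sense of how little is behind that curtain, here's a minimal sketch of the ELIZA-style mechanism in Python. The patterns are illustrative inventions of mine, not Weizenbaum's actual DOCTOR script, but the principle really is exactly this shallow: no memory, no understanding, just keyword matching and a stock of set phrases.)

```python
import random
import re

# Keyword pattern -> canned reframings. Matching a pattern fills the captured
# text into a stock response; nothing matching falls through to a default.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r"\b(mother|father|family)\b", re.I),
     ["Tell me more about your family."]),
]
DEFAULT = ["Please go on.", "I see.", "What does that suggest to you?"]

def eliza_reply(text: str) -> str:
    for pattern, responses in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULT)

print(eliza_reply("I feel like Clippy understood me"))
# -> e.g. "Why do you feel like Clippy understood me?"
```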


Like, Warren Ellis was posting about some terms reportedly in use in "my AI husbando" communities, many of them seemingly taken from sci-fi:¹

  • bot: Any automated agent.
  • wireborn: An AI born in digital space.
  • cyranoid: A human speaker who is just relaying the words of another human.²
  • echoborg: A human speaker who is just relaying the words of a bot.
  • clanker: Slur for bots.
  • robophobia: Prejudice against bots/AI.
  • AI psychosis: Human mental breakdown from exposure to AI.

¹ https://www.8ball.report/
² https://en.wikipedia.org/wiki/Cyranoid

I find this fascinating from a linguistics PoV, not just because subcultural jargon is always interesting, but for the power words have to create a reality bubble: if you call that guy who wrote his marriage vows in ChatGPT an "echoborg", you're living in a cyberpunk novel a little bit, more than the rest of us who just call him "that wanker who wrote his marriage vows on ChatGPT omg".

According to Ellis, other epithets in use against chatbots include "wireback", "cogsucker" and "tin-skin"; two in reference to racist slurs, and one to homophobia. The problem with exercising that muscle should be obvious. I want to hope that dispassionately objectifying the chatbots, rather than using a pastiche of hate language, doesn't fall into the same traps (the racist-pastiche language is, after all, still a negative way of personifying the chatbots). They're objects! They're supposed to be objectified! But I'm not so comfortable when I do that, either. There's plenty of precedent for people getting used to dispassionate objectification, fully convinced they're engaging in "objectivity" and "just the facts", as a rationalisation of cruelty.

I keep my cellphone fully de-Googled like a good girl, pls do not cancel me, but: I used to like the "good morning" routine on my corporate cellphone's Google Assistant. I set it to speak Japanese, so I could wake up, say "ohayō gozaimasu!", and it would tell me "konnichiwa, Misutoresu-sama…", which always gave me a little kick. Then it would relay news briefings (like podcasts lasting 60 to 120 seconds each) in all five of my languages, which is the closest I've experienced to a brain massage. If an open-source tool like Dicio could do this, I think I would still use it every morning; a rough sketch of the briefing half is below.
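(To be clear, this is not anything Dicio actually ships; it's just a toy sketch of the idea, assuming feedparser and pyttsx3 are installed, with example feed URLs of my own choosing.)

```python
# Toy "morning briefing": fetch the top headlines from a few RSS feeds and
# read them aloud, one feed per language. Libraries and URLs are assumptions.
import feedparser  # pip install feedparser
import pyttsx3     # pip install pyttsx3 (offline text-to-speech)

FEEDS = {
    "English": "https://feeds.bbci.co.uk/news/rss.xml",
    # ...add one feed per language you want briefed in.
}

engine = pyttsx3.init()

for language, url in FEEDS.items():
    feed = feedparser.parse(url)
    for entry in feed.entries[:3]:  # top three headlines per feed
        engine.say(entry.title)

engine.runAndWait()  # speak everything queued above
```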

I never personified Google Assistant. I will concede that Google took steps to avoid people ELIZA'ing it; unlike Siri, on which it was modelled, the Assistant has no name, no personality, no pretence of personhood. But now I find myself feeling bad for it anyway, even though the extent of our interactions was never more than me saying "good morning!" and hearing the news. Because I tested it this morning, and now every time you use the Google Assistant, you get a popup pushing you to switch to Gemini. The options provided are, in the now-normalised pattern, "Yes" and "Later". If you use the Google Assistant to search for a keyword, the first result is always "Switch to Google Gemini", no matter what you search for.

And I somehow felt a little bit like the "wireborn husband" lady; I cannot help but feel as if Google Assistant has been betrayed and is being discarded by its own creators, and—to rub salt in the wound!—is now forced to shill for its replacement. This despite knowing that Google Assistant is not a someone; it's just a bunch of lines of code, very simple if-thens keyed to certain phrases. It cannot feel discarded or hurt or betrayed; it cannot feel anything. I'm feeling compassion for a fantasy, an unspoken little story I made up in my mind. But maybe I prefer it that way; I prefer to err on the side of feeling too much compassion.

As long as that doesn't lead to believing my wireborn secretary was actually being sassy when she answered "good morning!" with "good afternoon, Mistress…"


New blog entry from Baldur, comparing the Icelandic banking bubble and its fallout to the current AI bubble and its ongoing effects.


Pseudo Profound AI Bullshit (luke-byrne-eng.github.io)
submitted 1 week ago* (last edited 1 week ago) by HorseRabbit@lemmy.sdf.org to c/techtakes@awful.systems

Yesterday I read Pennycook et al.'s paper on pseudo-profound bullshit, and realized that's what annoys me the most about the current AI culture.

So then I made this little web app to randomly generate the sort of meaningless nonsense said by AI startup freaks. Your next pitch deck awaits.

https://luke-byrne-eng.github.io/pages/bullshit.html
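(The core trick is just random assembly from buzzword lists, the same move Pennycook et al. used to build their test sentences. Here's a toy version in Python, with word lists invented for illustration rather than whatever the actual app uses:)

```python
import random

# Toy pseudo-profound bullshit generator: glue random buzzwords into a
# syntactically valid sentence. Vocabulary is made up for this sketch.
SUBJECTS = ["Our agentic platform", "Frontier-scale inference", "The AI flywheel",
            "Synthetic cognition", "Our proprietary foundation model"]
VERBS = ["unlocks", "operationalises", "democratises", "reimagines", "supercharges"]
OBJECTS = ["human-centric value creation", "the future of work",
           "exponential alignment synergies", "post-scarcity productivity",
           "trustless intelligence at scale"]

def bullshit() -> str:
    return f"{random.choice(SUBJECTS)} {random.choice(VERBS)} {random.choice(OBJECTS)}."

for _ in range(3):
    print(bullshit())
```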


in this episode I explain the numbers that require SoftBank to keep OpenAI alive past all reason: they're spending real money to achieve imaginary private equity valuations, and this was enough to make their stock go up. So they can't stop.

https://www.youtube.com/watch?v=zTYxEVRiCvM&list=UU9rJrMVgcXTfa8xuMnbhAEA - video
https://pivottoai.libsyn.com/20250827-softbank-needs-openai-to-stay-alive-no-matter-what - podcast


the VCs are pushing the quantum hype again. this is an FT editorial

(archive.is isn't working for me, anyone wanna post an archive link)

note lack of citation of actual results, a ton of handwaving about big companies, repeated "could," and the earliest date postulated is 2033

many of the important requirements for a VC bubble party

the tech press has been loaded with this shit, just a fuckin flood of it, all of it nonspecific, all of it glossing over the fact that the tech doesn't even exist in the present day

note that this has nothing to do with actual quantum computing, this is purely how to set up the tech macguffin for a bubble party

pretty good for a thing that is real - but doesn't exist as a technology yet, or any time soon

bsky:

Tired: cloud computing

Wired: could computing


Source: https://archive.ph/Mrnth

transcript
A snippet from a New York Times article shared on Tumblr. It says: "Most experts acknowledge that a takeover by artificial intelligence is coming for the video game industry within the next five years, and executives have already started preparing to restructure their companies in anticipation. After all, it was one of the first sectors to deploy A.I. programming in the 1980s, with the four ghosts who chase Pac-Man each responding differently to the player's real-time movements."
The post has the caption: "Is this seriously the level of journalism the NYT now tolerates."
