I'm not gonna advocate for it to happen, but I'm pretty sure the world would be overall in a much healthier place geopolitically if someone actually started yeeting missiles into major American cities and landmarks. It's too easy to not really understand the human impact of even a successful precision strike when the last times you were meaningfully on the receiving end of an airstrike were ~20 and ~80 years ago, respectively.
Someone didn't get the memo about nVidia's stock price, and how is Jensen supposed to sign more boobs if suddenly his customers all get missile'd?
You know, I hadn't actually connected the dots before, but the dust speck argument is basically yet another ostensibly-secular reformulation of Pascal's wager. Only instead of Heaven being infinitely good if you convert, there's some infinitely bad thing that happens if you don't do whatever Eliezer asks of you.
The big shift in per-action cost is what always seems to be missing from the conversation. Like, in a lot of my experience the per-request cost is basically negligible compared to the overhead of running the service in general. With LLMs, not only do we see massive increases in overhead costs from the training process needed to build a usable model, but each request that gets sent also carries a much higher marginal cost. That changes the scaling logic in ways that don't appear to be getting priced in or planned for in discussions of the glorious AI technocapital future.
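Just to make the shape of that concrete, here's a toy cost model. Every number below is made up purely to show the scaling behavior, not anyone's real economics:

```python
# Toy cost model: how total monthly cost scales with request volume.
# All figures are invented for illustration only.

def conventional_service_cost(requests: int) -> float:
    fixed_overhead = 10_000.0   # servers, ops, staff: mostly independent of traffic
    marginal_cost = 0.00001     # per-request cost is basically a rounding error
    return fixed_overhead + marginal_cost * requests

def llm_service_cost(requests: int) -> float:
    training_amortized = 1_000_000.0  # slice of the training run charged to this month
    serving_overhead = 50_000.0       # GPU fleet that idles whether or not anyone asks
    marginal_cost = 0.01              # each request burns real GPU time
    return training_amortized + serving_overhead + marginal_cost * requests

for n in (1_000_000, 100_000_000):
    print(f"{n:>11,} requests: conventional ${conventional_service_cost(n):,.0f}"
          f" vs LLM ${llm_service_cost(n):,.0f}")
```

The point isn't the specific numbers; it's that the conventional service can take 100x the traffic without the bill really moving, while for the LLM service the per-request term starts to dominate on top of an already enormous fixed base.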
While I fully expect the conclusion to check out, it's also worth acknowledging that the actual goal for these systems isn't to supplement skilled developers who can operate effectively without them; it's to replace those developers, either with the LLM tools themselves or with cheaper and worse developers who rely on the LLM tools more.
I think it's a better way of framing things than the TESCREALs themselves use, but it still falls into the same kind of science fiction bucket imo. Like, the technology they're playing with is nowhere near the level of full brain emulation or mind-machine interface or whatever that you would need to make the philosophical concerns even relevant. I fully agree with what Torres is saying here, but he doesn't mention that the whole affair is less about building the Torment Nexus and more about deflecting criticism away from the real and demonstrable costs and harms of the way AI systems are being deployed today.
Charles, in addition to being a great fiction author, is also an occasional guest here on awful.systems. This is a great article from him, but I'm pretty sure it's done the rounds already. Not that I'm complaining, given how much these guys bitch about science fiction and adjacent subjects.
I'm not comfortable saying that consciousness and subjectivity can't in principle be created in a computer, but I think one element of what this whole debate exposes is that we have basically no idea what actually makes consciousness happen or how to define and identify it happening. Chatbots have always challenged the Turing test because they showcase how much we tend to project consciousness onto anything that vaguely looks like it (interesting parallel to ancient mythologies explaining the whole world through stories about magic people). The current state of the art still fails at basic coherence over shockingly small amounts of time and complexity, and even when it holds together it shows a complete lack of context and comprehension. It's clear that complete-the-sentence style pattern recognition and reproduction can be done impressively well in a computer, and that it can get you farther than I would have thought in language processing, at least imitatively. But it's equally clear that there's something more there, and just scaling up your pattern-maximizer isn't going to replicate it.
In conjunction with his comments about making it antiwoke by modifying the input data rather than relying on a system prompt after filling it with everything, it's hard not to view this as part of an attempt to ideologically monitor these tutors to make sure they're not going to select against versions of the model that fall within the desired range of "closeted Nazi scumbag."
"We made it more truth-seeking, as determined by our boss, the fascist megalomaniac."
Total fucking Devin move if you ask me.
Copy/pasting a post I made in the DSP driver subreddit that I might expand over at morewrite because it's a case study in how machine learning algorithms can create massive problems even when they actually work pretty well.
It's a machine learning system, not an actual human boss. The system is set up to try to find the breaking point: if you finish your route on time it assumes you can handle a little bit more, and if you don't it backs off.
The real problem is that everything else in the organization is set up so that finishing your routes on time is a minimum standard while the algorithm that creates the routes is designed to make doing so just barely possible. Because it's not fully individualized, this means that doing things like skipping breaks and waiving your lunch (which the system doesn't appear to recognize as options) effectively pushes the edge of what the system thinks is possible out a full extra hour, and then the rest of the organization (including the decision-makers about who gets to keep their job) turns that edge into the standard. And that's how you end up where we are now, where actually taking your legally-protected breaks is at best a luxury for top performers or people who get an easy route for the day, rather than a fundamental part of keeping everyone doing the job sane and healthy.
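For anyone who wants to see why that happens, here's a minimal sketch of the feedback loop as I understand it. The up/down adjustment rule and all the numbers are my guesses for illustration, not Amazon's actual algorithm:

```python
# Minimal sketch of the route-sizing feedback loop described above.
# The adjustment rule and every number here are guesses, not the real system.

def next_route_hours(planned_hours: float, finished_on_time: bool,
                     step: float = 0.25) -> float:
    """Finished on time? Assume the driver can take a bit more next time.
    Didn't finish? Back off a little."""
    return planned_hours + step if finished_on_time else planned_hours - step

actual_capacity = 8.0   # what's genuinely doable with legally required breaks taken
breaks_skipped = 1.0    # slack a driver creates by skipping breaks and waiving lunch
planned = 8.0

for day in range(1, 9):
    finished = planned <= actual_capacity + breaks_skipped
    print(f"day {day}: planned {planned:.2f}h, finished on time: {finished}")
    planned = next_route_hours(planned, finished)
```

The loop settles around nine hours of planned work instead of eight, which is exactly the problem: the only way the system "learned" that nine hours is feasible is that people quietly gave up their breaks, and then the rest of the organization treats that number as the standard.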
Part of that organizational problem is also in the DSP setup itself, since it allows Amazon to avoid taking responsibility or accountability for those decisions. All they have to do is make sure their instructions to the DSP don't explicitly call for anything illegal, and they get to deflect all criticism (or LNI inquiries) away from themselves and toward the individual DSP. If anyone becomes too much of a problem, they can pretend to address it by cutting that DSP.