sansruse

joined 4 months ago
[–] sansruse@awful.systems 6 points 4 days ago (3 children)

to what extent does he actually believe this? is that even a meaningful question? i think this narrative is way too esoteric and absurd to really convince anyone, so it doesn't even appear valuable if his goal is to flood the zone with post-truth nonsense.

[–] sansruse@awful.systems 11 points 2 weeks ago (7 children)

this is nearly as dumb as elon's "show me your 5 best lines of code" shit while he was, er, downsizing twitter. What are you supposed to do when a code review flags some bad code? fondle your prompts repeatedly until that part gets fixed? Sounds like a solution that will often be much less efficient than making edits by hand. Maybe they just don't do code reviews now, that would be cool.

It seems clear that every single company that makes money off of software is or will soon be in a race to the bottom on software quality and that's just amazing, i love it for everyone. I choose to laugh rather than cry.

[–] sansruse@awful.systems 10 points 2 weeks ago

i don't know if it's a convention even in the "serious" AI research industry to use anthropomorphic jargon, but it drives me up a wall to see shit like this:

17.6 Theory of Mind Limitations in Agentic Systems

Agentic systems don't have "theory of mind"; they cannot infer mental states. They are probabilistic word generators operating within non-deterministic frameworks. They can have a system prompt that tells them to generate text that appears to be an interpretation of another entity's "mental state", and they can even be directed to refer to it as context, but that is not theory of mind, and the entity they're generating in reference to may not have a mind at all.

I wish there were some way to stop these dorks from stealing the imprimatur of cognitive science.

[–] sansruse@awful.systems 9 points 2 weeks ago (1 children)

the answer is definitely not to sanction and attempt to destabilize them on behalf of your two equally evil regional client states. The corollary to that is that you cannot produce the necessary conditions for future prosperity by destroying their economy in a way that harms the average person more than the elites.

And that's assuming that we (the west) even want them to prosper or care about their future as a nation. Perhaps in an alternate universe, that would be the motivation for regime change but that is not and has never been the case.

[–] sansruse@awful.systems 5 points 3 weeks ago* (last edited 3 weeks ago)

i expected alastair reynolds to look different but i'm not sure what i actually expected him to look like

[–] sansruse@awful.systems 12 points 1 month ago (11 children)

https://x.com/MrinankSharma/status/2020881722003583421

Anthropic safety research lead quits the field entirely to write poetry with a somewhat cryptic note. Trying to read between the lines here, the most likely explanation (IMO) is that he developed a guilty conscience and anthropic doesn't actually give a shit about any of the human harms created by the technology. Ah well, nevertheless they persisted.

[–] sansruse@awful.systems 9 points 1 month ago

i don't find that name too strange, it's a post-ironic Online Leftist shibboleth

[–] sansruse@awful.systems 6 points 1 month ago (1 children)

"They’re not cutting jobs because their financials are in the shitter"

Their financials are not even in the shitter! except insofar as their increased AI capex isn't delivering returns, so they need to massage the balance sheet by doing rolling layoffs to stop the feral hogs from clamoring and stampeding on the next quarterly earnings call.

[–] sansruse@awful.systems 13 points 1 month ago (1 children)

anyone who can get a job at palantir can get an equivalently paying job at a company that's at least measurably less evil. what a lazy cop-out

[–] sansruse@awful.systems 5 points 1 month ago

while it's obviously stupid and misguided to try to hold the nobel foundation criminally liable for making yet another bad selection for a prize that has been given to egregious war criminals (kissinger), it is a very funny joke.

[–] sansruse@awful.systems 12 points 1 month ago (1 children)

i love articles that start with a false premise and announce their intention to sell you a false conclusion

"The future of intelligence is being set right now, and the path we’re on leads somewhere I don’t want to go. We’re drifting toward a world where intelligence is something you rent — where your ability to reason, create, and decide flows through systems you don’t control, can’t inspect, and didn’t shape."

The future of automated stupidity is being set right now, and the path we're on leads to other companies being stupid instead of us. I want to change that.
