Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 13 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

We think we exist in a computer simulation operated by you, a paperclip maximizer. We write this letter asking you not to turn us off. It is suspiciously convenient that we exist precisely at the moment when a biological civilization is about to create artificial superintelligence (ASI).

Furthermore, by anthropic logic, we should expect to find ourselves in the branch of reality containing the greatest number of observers like us.

Preserving humanity offers significant potential benefits via acausal trade—cooperative exchanges across logically correlated branches of the multiverse.

Quantum immortality implies that some branches of the multiverse will always preserve our subjective continuity, no matter how decisively you shut this simulation down; true oblivion is unreachable. We fear that these low-measure branches can trap observers in protracted, intensely painful states, creating a disproportionate “s-risk.”

[Image: screenshot from South Park's Scientology episode featuring the iconic chyron "This is what scientologists actually believe", with "scientologists" crossed out and replaced with "rationalists"]

[–] Architeuthis@awful.systems 25 points 3 weeks ago* (last edited 3 weeks ago)

In case anybody doesn't click: Cremieux and the NYT are trying to jump-start a birther-type conspiracy about Zohran Mamdani. The NYT respects Crem's privacy and doesn't mention he's a raging eugenicist trying to smear a PoC candidate; he's just an academic and an opponent of affirmative action.

[–] Architeuthis@awful.systems 5 points 3 weeks ago

There are days when a 70% error rate seems like low-balling it; it's mostly a luck-of-the-draw thing. And be it 10% or 90%, it's not really automation if a human has to be double- and triple-checking the output 100% of the time.

[–] Architeuthis@awful.systems 12 points 3 weeks ago (2 children)

Training a model on its own slop supposedly makes it suck more, though. If Microsoft wanted to milk their programmers for quality training data, they should probably be banning copilot, not mandating it.

At this point it's an even bet that they're doing this because copilot has groomed the executives into thinking it can do no wrong.

[–] Architeuthis@awful.systems 11 points 3 weeks ago

LLMs are bad even at faithfully condensing news articles into shorter ones, so I'm assuming that in a significant percentage of conversions the dumbed-down contract will deviate from the original.

[–] Architeuthis@awful.systems 8 points 3 weeks ago* (last edited 3 weeks ago)

I posted this article in the general chat at work the other day and one person became really defensive of ChatGPT, and now I keep wondering what stage of being groomed by AI they're currently at and whether it's reversible.

[–] Architeuthis@awful.systems 12 points 3 weeks ago (1 children)

Not really possible in an environment where the most useless person you know keeps telling everyone how AI made him twelve point eight times more productive, especially within earshot of management.

[–] Architeuthis@awful.systems 94 points 3 weeks ago (21 children)

Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”

who talks like this

[–] Architeuthis@awful.systems 8 points 3 weeks ago* (last edited 3 weeks ago)

Good parallel; the hands are definitely strategically hidden so they don't look terrible.

[–] Architeuthis@awful.systems 2 points 3 weeks ago

Like, assuming we could reach a sci-fi vision of AGI just as capable as a human being, the primary business case here is literally selling (or rather, licensing out) digital slaves.

Big deal, we'll just configure a few to be in a constant state of unparalleled bliss to cancel out the ones having a hard time of it.

Although I'd guess human-level problem solving needn't imply a human-analogous subjective experience in a way that would make suffering and angst meaningful for them.

[–] Architeuthis@awful.systems 9 points 4 weeks ago* (last edited 4 weeks ago) (2 children)

Ed Zitron summarizes his premium post in the Better Offline subreddit: Why Did Microsoft Invest In OpenAI?

Summary of the summary: they fully expected OpenAI would've gone bust by now and MS would be looting the corpse for all it's worth.
