Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 8 points 2 months ago

I feel that strip-mall dojos, where you were ostensibly taught some very mainstream belt-based martial art like karate or TKD (or straight-up make-believe stuff like ninjutsu) but which were essentially glorified daycare, should figure somewhere in the history of the term.

[–] Architeuthis@awful.systems 2 points 2 months ago* (last edited 2 months ago)

190 IQ is when you verb 'asymptote' to avoid saying 'almost'.

[–] Architeuthis@awful.systems 2 points 2 months ago

It's possible someone specifically picked the highest IQ that wouldn't need a second planet Earth to make the statistics work.

[–] Architeuthis@awful.systems 10 points 2 months ago* (last edited 2 months ago) (14 children)

Siskind did a review too; he basically gives it the 'their heart's in the right place, but... [read AI2027 instead]' treatment. Then they go at it a bit with Yud in the comments, where Yud comes off as a bitter dick, but their actual disagreements are just filioque shit. Also, they both seem to agree that a worldwide moratorium on AI research that will give us time to breed/genetically engineer superior-brained humans to fix our shit is the way to go.

https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154920454

https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154927504

Also notable: apparently Siskind thinks nuclear non-proliferation sorta worked because people talked it out and decided to be mature about it, rather than because they were scared shitless of MAD, so AI non-proliferation, presumably via appointing a rationalist Grand Inquisitor in charge of all human scientific progress, is an obvious solution.

[–] Architeuthis@awful.systems 11 points 2 months ago (3 children)

All the stuff about ASI is basically theology, or trying to do armchair psychology on Yog-Sothoth. If autonomous ASI ever happens, it's kind of definitionally impossible to know what it'll do; it's beyond us.

The 'simulating synapses is hard' stuff I can take or leave. To argue by analogy: it's not like getting an artificial feather exactly right was ever a bottleneck to developing air travel once we got the basics of aerodynamics down.

[–] Architeuthis@awful.systems 6 points 2 months ago

Nice. Here's the bluesky account as well.

[–] Architeuthis@awful.systems 9 points 2 months ago (2 children)

Some quality wordsmithing found in the wild:

Transcript:

@MosesSternstein (quote-tweeted): AI-Capex is the everything cycle, now. Just under 50% of GDP growth is attributable to AI Capex.

@bigblackjacobin: Almost certainly the greatest misallocation of capital you or I will ever see. There's no justification for this however you cut it but the beatings will continue until a stillborn god is born.

[–] Architeuthis@awful.systems 11 points 2 months ago (1 children)

Remember, when your code doesn't compile, it might mean you made a mistake in coding, or your code is about to become self-aware.

Good analogy actually.

Don't forget, Yud is also a big compiler understander.

[–] Architeuthis@awful.systems 10 points 2 months ago* (last edited 2 months ago)

The arguments made against the book in the review are that it doesn't make the case for LLMs being capable of independent agency, that it reduces all material concerns of an AI takeover to broad claims of ASI being indistinguishable from magic, and that its proposed solutions are dumb and unenforceable (again with the global GPU prohibition and the unilateral bombing of rogue datacenters).

The note towards the end, that the x-risk framing is a cognitive short-circuit which causes the faithful to ignore more pressing concerns like the impending climate catastrophe in favor of a mostly fictitious problem like AI doom, isn't really part of their core thesis against the book.

[–] Architeuthis@awful.systems 9 points 2 months ago* (last edited 2 months ago) (2 children)

They also seem to broadly agree with the 'hey, humans are pretty shit at thinking too, you know' line of LLM apologetics.

“LLMs and humans are both sentence-producing machines, but they were shaped by different processes to do different work,” say the pair – again, I’m in full agreement.

But judging from the rest of the review I can see how you kind of have to be at least somewhat rationalist-adjacent to have a chance of actually reading the thing to the end.

[–] Architeuthis@awful.systems 16 points 2 months ago (3 children)

The pair also suggest that signs of AI plateauing, as seems to be the case with OpenAI’s latest GPT-5 model, could actually be the result of a clandestine superintelligent AI sabotaging its competitors.

copium-intubation.tiff

Also, this seems like the natural progression of that time Yud embarrassed himself by cautioning actual ML researchers to be wary of 'sudden drops in the loss function during training', which was just an insanely uninformed thing to say out loud.

[–] Architeuthis@awful.systems 2 points 2 months ago

the only people who like prediction markets [...]

Apparently Donald Trump Jr. has found his way onto the payroll of a couple of the bigger prediction markets, so they seem to be doing their darndest to change that.
