scruiser

joined 2 years ago
[–] scruiser@awful.systems 11 points 2 months ago (1 children)

~~Poor historical accuracy in favor of meme potential is why our reality is so comically absurd.~~ You can basically use the simulation hypothesis to justify anything you want by proposing some weird motive or goals of the simulators. It almost makes God-of-the-gaps religious arguments seem sane and well-founded by comparison!

[–] scruiser@awful.systems 9 points 2 months ago

Within the story's world-building, the way the logic is structured makes sense in a ruthless utilitarian way (although Scott's narration and framing is way too sympathetic to the murderously autistic angel that did it), but taken outside the story, in the context of the sort of racism Scott likes to promote, yeah, it is really bad.

We had previous discussion of Unsong on the old site. (Kind of cringing at the fact that I liked the story at one point and only gradually noticed all the problematic content and the poor writing quality.)

[–] scruiser@awful.systems 15 points 2 months ago* (last edited 2 months ago) (6 children)

I've seen this concept mixed with the simulation "hypothesis". The logic goes that if future simulators are running a "rescue simulation" but only care (or at least care more) about the interesting or more agentic people (i.e. rich/white/westerner/lesswronger), they might only fully simulate those people and leave simpler nonsapient scripts/algorithms piloting the other people (i.e. poor/irrational/foreign people).

So they're basically positing a mechanism by which they are the only real people and everyone else is literally an NPC.

[–] scruiser@awful.systems 13 points 2 months ago

Chiming in to agree that your prediction write-ups aren't particularly good. Sure, they spark discussion, but the whole forecasting/prediction game is one we've seen the rationalists play many times, and it is very easy to overlook or at least undercount your misses and overhype your successes.

In general... I think your predictions are too specific and too optimistic...

[–] scruiser@awful.systems 11 points 2 months ago (1 children)

Every time I see a rationalist bring up the term "Moloch" I get a little angrier at Scott Alexander.

[–] scruiser@awful.systems 4 points 2 months ago

I use the term "inspiring" loosely.

[–] scruiser@awful.systems 5 points 2 months ago

Depends what you mean by "steelman". If you take their definition at its word, then they fail to even try all the time; just look at any of their attempts at understanding leftist writing or thought. Of course, it often actually means "entirely rebuild the opposing argument into something different" (because they don't have a basic humanities education or don't want to actually properly read leftist thought), and they can't resist doing that!

[–] scruiser@awful.systems 15 points 2 months ago (4 children)

Putting this into the current context of LLMs... Given how Eliezer still repeats the "diamondoid bacteria" line in his AI-doom scenarios, multiple decades after Drexler's nanotech was thoroughly debunked (even as it slightly contributed to inspiring real science), I bet memes of LLM-AGI doom and utopia will last long after the LLM bubble pops.

[–] scruiser@awful.systems 15 points 2 months ago (4 children)

Lesswronger notices that all of the rationalists' attempts at making an "aligned" AI company keep failing: https://www.lesswrong.com/posts/PBd7xPAh22y66rbme/anthropic-s-leading-researchers-acted-as-moderate

Notably, the author doesn't realize Capitalism is the root problem misaligning the incentives, and it takes a comment directly pointing it out for them to get as far as noticing a link to the cycle of enshittification.

[–] scruiser@awful.systems 6 points 2 months ago

I brought this up right when it came out: https://awful.systems/post/5244605/8335074

(Not demanding credit for keeping better up to date on hate-reading the EA forums, just sharing the previous discussion.)

Highlights from the previous discussion... I had thought Thiel was entirely making up his own wacky theology (because it was a distinctly different flavor of insanity from typical right-wing Fundamentalist/Evangelical thought), but actually there is a "theologian" (I use that term loosely), René Girard, who developed the theology he is describing.

[–] scruiser@awful.systems 6 points 2 months ago

I keep seeing this sort of thinking on /r/singularity: people who are sure LLMs will be great once they have memory/ground-truth factual knowledge/some other feature that the promptfarmers have in fact already tried (and failed) to add via fancier prompting (e.g. RAG) or fine-tuning, and that would require a massive reinvention of the entire paradigm to actually fix. That, or they describe what basically amounts to a reinvention of the concept of expert systems like Cyc.

[–] scruiser@awful.systems 7 points 2 months ago

And we don’t want to introduce all the complexities of solving disagreements on Wikipedia.

What they actually mean is that they don't want them to be solved in favor of the dgerard type of people... like (reviewing the exposé on lesswrong)... demanding quality sources that aren't HBD pseudoscience journals or right-wing rags.
