Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 6 points 2 weeks ago (3 children)

As far as I can tell there's absolutely no ideology in the original transformers paper, what a baffling way to describe it.

James Watson was also a cunt, but calling "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid" one of the founding texts of eugenicist ideology or whatever would be just dumb.

[–] Architeuthis@awful.systems 8 points 2 weeks ago

Hey it's the character.ai guy, a.k.a. first confirmed AI assisted kid suicide guy.

I do not believe G-d puts people in the wrong bodies.

Shazeer also said people who criticized the removal of the AI Principles were anti-Semitic.

Kind of feel the transphobia is barely scratching the surface of all the things wrong with this person.

[–] Architeuthis@awful.systems 3 points 2 weeks ago (2 children)

So if a company does want to use LLM, it is best done using local servers, such as Mac Studios or Nvidia DGX Sparks: relatively low-cost systems with lots of memory and accelerators optimized for processing ML tasks.

Eh, local LLMs don't really scale: you can't do much better than one person per computer unless usage is really sparse, and buying everyone a top-of-the-line GPU only works if they aren't currently on work laptops and VMs.

Spark-type machines will do better eventually, but for now they're supposedly geared more towards training than inference; it says here that running a 70B model on one returns around one word per second (three tokens), which is a snail's pace.
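To put that figure in perspective, here's a back-of-envelope sketch using the numbers above (the ~3 tokens/sec rate and the rough ~3 tokens-per-word ratio are both ballpark claims from the linked piece, not measurements):

```python
# Assumed ballpark figures for a 70B model on a DGX-Spark-class box.
TOKENS_PER_SEC = 3    # claimed generation rate
TOKENS_PER_WORD = 3   # rough ratio used in the comment above

def seconds_for_reply(words: int) -> float:
    """Seconds to generate a reply of `words` words at the assumed rate."""
    return words * TOKENS_PER_WORD / TOKENS_PER_SEC

# A 500-word answer would take on the order of eight minutes.
print(f"{seconds_for_reply(500) / 60:.1f} min for a 500-word answer")
```

Even if the token-per-word ratio is off by a factor of two, that's still minutes per answer, i.e. nowhere near interactive use for a whole team.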

[–] Architeuthis@awful.systems 7 points 2 weeks ago

It definitely feels like the first draft said for the longest time we had to use AI in secret because of Woke.

[–] Architeuthis@awful.systems 7 points 2 weeks ago

only have 12-days of puzzles

Obligatory oh good I might actually get something job-related done this December comment.

[–] Architeuthis@awful.systems 7 points 3 weeks ago* (last edited 3 weeks ago) (7 children)

What's a government backstop, and does it happen often? It sounds like they're asking for a preemptive bail-out.

I checked the rest of Zitron's feed before posting and it's weirder in context:

Interview:

She also hinted at a role for the US government "to backstop the guarantee that allows the financing to happen", but did not elaborate on how this would work.

Later at the jobsite:

I want to clarify my comments earlier today. OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word "backstop" and it muddled the point.

She then proceeds to explain she just meant that the government 'should play its part'.

Zitron says she might have been testing the waters, or it's just the cherry on top of an interview where she said plenty of bizarre shit.

[–] Architeuthis@awful.systems 12 points 3 weeks ago

it often obfuscates from the real problems that exist and are harming people now.

I am firmly on the side of "it's possible to pay attention to more than one problem at a time", but the AI doomers are in fact actively downplaying stuff like climate change and even nuclear war, so them trying to suck all the oxygen out of the room is a legitimate problem.

Yudkowsky and his ilk are cranks.

That Yud is the Neil Breen of AI is the best thing ever written ~~about rationalism~~ in a YouTube comment.

[–] Architeuthis@awful.systems 5 points 1 month ago* (last edited 1 month ago)

This seems counterintuitive, but... comments are the best; "name of the function, but longer" comments are the worst. A plain-text summary of a huge chunk of code that I really should have taken the time to break up instead of writing a novella about it falls somewhere in the middle.

I feel a lot of bad comment practices are downstream of JavaScript relying on JSDoc to act like a real language.

[–] Architeuthis@awful.systems 9 points 1 month ago* (last edited 1 month ago) (1 children)

Managers gonna manage, but having a term for bad code that works that is more palatable than 'amateur hour' isn't inherently bad imo.

Worst I've heard of is some company forbidding LINQ in C#, which in Python terms is like forcing you to always use for-loops in place of filter/map/reduce, comprehensions, and other stuff like pandas.groupby.
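In Python terms, the ban amounts to something like this (a toy sketch of the trade-off, not the actual policy):

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Comprehension / filter-map style — what a LINQ-type ban takes away:
squares_of_evens = [n * n for n in data if n % 2 == 0]

# The mandated equivalent with explicit loops:
squares_of_evens_loop = []
for n in data:
    if n % 2 == 0:
        squares_of_evens_loop.append(n * n)

assert squares_of_evens == squares_of_evens_loop  # [16, 4, 36]
```

Both versions do the same work; the loop form just trades one declarative line for four imperative ones, which is the whole objection.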

[–] Architeuthis@awful.systems 19 points 1 month ago* (last edited 1 month ago)

My impression from reading the stuff posted here is that omarchy is a nothing project that's being aggressively astroturfed so a series of increasingly fashy contributors can gain clout and influence in the foss ecosystem.

[–] Architeuthis@awful.systems 12 points 1 month ago (1 children)

Definitely, it's just code for "I'm OK with nazis" at this point.

[–] Architeuthis@awful.systems 6 points 1 month ago (1 children)

pro-AI but only self hosted

Like being pro-corporatism but only with regard to the breadcrumbs that fall off the oligarchs' tables.

We should start calling so-called open source models trickle-down AI.

 

edited to add tl;dr: Siskind seems ticked off because recent papers on the genetics of schizophrenia are increasingly pointing out that at current minuscule levels of prevalence, even with the commonly accepted 80% heritability, actually developing the disorder is all but impossible unless at least some of the environmental factors are also in play. This is understandably very worrisome, since it indicates that even high-heritability issues might be solvable without immediately employing eugenics.

Also notable because I don't think it's very often that eugenics grievances breach the surface in such an obvious way in a public siskind post, including the claim that the whole thing is just HBD denialists spreading FUD:

People really hate the finding that most diseases are substantially (often primarily) genetic. There’s a whole toolbox that people in denial about this use to sow doubt. Usually it involves misunderstanding polygenicity/omnigenicity, or confusing GWAS’ current inability to detect a gene with the gene not existing. I hope most people are already wise to these tactics.

 

... while at the same time not really worth worrying about, so we should be concentrating on unnamed alleged mid-term risks.

EY tweets are probably the lowest-effort sneerclub content possible, but the birdsite threw this in my face this morning, so it's only fair you suffer too. Transcript follows:

Andrew Ng wrote:

In AI, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack.

Many of the hypothetical forms of harm, like AI "taking over", are based on highly questionable hypotheses about what technology that does not currently exist might do.

Every field should examine both future and current problems. But is there any other engineering discipline where this much attention is on hypothetical problems rather than actual problems?

EY replied:

I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market, and the mid-term harm is the utter extermination of humanity, it makes sense to focus on policies motivated by preventing mid-term harm, if there's even a trade-off.

 

Sam Altman, the recently fired (and rehired) chief executive of Open AI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’

 

original is here, but you aren't missing any context, that's the twit.

I could go on and on about the failings of Shakespear... but really I shouldn't need to: the Bayesian priors are pretty damning. About half the people born since 1600 have been born in the past 100 years, but it gets much worse that that. When Shakespear wrote almost all Europeans were busy farming, and very few people attended university; few people were even literate -- probably as low as ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren't very favorable.

edited to add: this seems to be an excerpt from the fawning book the Big Short/Moneyball guy wrote about him that was recently released.
