"As AI enters the operating room, reports arise of botched surgeries and misidentified body parts"
Medical malpractice as a service, coming to a GP near you
"As AI enters the operating room, reports arise of botched surgeries and misidentified body parts"
Medical malpractice as a service, coming to a GP near you
A prompt enjoyer does eschatology. Along the way he abuses mathematics, Ohio, and a chinchilla.
You could’ve probably given me a good 80~100 rounds and I still would not have guessed that set of items
And I’ve been watching these dipshits for a while
(the first two I could’ve guessed/converged to within 10~20 I suspect, but a chinchilla? Fucked from left field, I tell ya)
Cool! I keep on saying that there will be at least one more AI bubble before 2045, because IIRC that's the latest date for a singularity that Kurzweil gives, and this dude comes along with a date that's conveniently ~halfway between now and then for people to anchor on. Thanks dude! If I find an online sod retailer that sells single square feet, I'll send you some grass to touch!
I thought Kurzweil’s latest singularity date was 2032 or smth
He might have revised it in more recent publications and/or brainfarts. If I were a Responsible Internet Debater™, I would go check, but the whole point is that I could give a fuck
funny how all the tech CEOs are the ones saying it's coming in the next couple of years while all the researchers give 10-20 year timelines. surely this does not mean anything about the reliability of the companies and their claims
It's always Miller(ite) Time
https://x.com/MrinankSharma/status/2020881722003583421
Anthropic safety research lead quits the field entirely to write poetry with a somewhat cryptic note. Trying to read between the lines here, the most likely explanation (IMO) is that he developed a guilty conscience, and Anthropic doesn't actually give a shit about any of the human harms created by the technology. Ah well, nevertheless they persisted.
Anthropic doesn’t actually give a shit about any of the human harms created by the technology
but yeah it sounds like they got overwhelmed by all the shit happening in the world (there is a lot of shit happening in the world, especially in America) and left for their own mental health’s sake
Especially for a guy named Sharma living in the US. It doesn't take too many footsteps outside the Bay area for him to be in literal physical danger right now.
~~He also less cryptically posted his plans and resignation letter.~~ (Edit: memory lapse) xcancel version of resignation letter post. Tl;dr moving to the UK (understandably) and doing a poetry degree (I didn't accidentally critique someone into quitting, did I?)
Honestly, I hope he finds both what he's looking for and also what he's not looking for but still equally needs. For example, a personal perspective not entrenched in institutional ontological frameworks.
Yeah, I can't hold it against anyone for feeling scared and overwhelmed with what's happening in America right now and fleeing. Hope he finds happiness soon
I did a five line PR to a little shell util I've used for a decade or so, and bickered with the stupid PR bot. Fuck you kody, you have bad taste, go away, go back to enterprise.
I want to force feed it Worse is Better until it chokes, surely that's in its corpus somewhere.
ok done venting

Eliezer, I would be very careful about talking about age of consent if I were you
load-bearing "fairly"
here's another very good take from Baldur Bjarnason, answering the question of whether he has hardened his stance against LLMs.
(the answer is “not exactly”, and you want to read the whole thing, because the answer itself is the least interesting part of the essay.)
The whole thing's worth reading, but this snippet in particular deserves attention:
Tech companies have done everything they can to maximise the potential harms of generative models because in doing so they think they’re maximising their own personal benefit.
it's full of quotable bangers like this, and it's hard to choose which one to quote.
Elon Musk pivots from mars colony tweets to moon colony tweets (xcancel).
I'm not quite clear on what "self-growing" means here given how inhospitable the moon is.
Self-growing like in a video game, for example a colony in EU4: initially it costs a lot of gold per month to keep sending colonists, and when you reach 100% growth it becomes a full province on which you can build things.
If only more journalists went with: 'we don't know what this means either, and when we asked him he started shouting slurs at us'.
given how inhospitable the moon is.
You could even say she is a harsh mistress.
Given it's the Moon a better comparison would be a Greenland colony in EU5 where it costs gold initially and then costs your precious sanity, as you are doomed to ship tonnes and tonnes of food and materials there for centuries because there is nothing fucking there and the whole endeavour was a huge mistake.
Haven't played EU5, so I didn't know they improved the system. But indeed.
Not that it matters for the pro-let-billionaires-colonize-space crowd, as something something AI robots are magic. See also how they are planning datacenters in space and don't think there is a real solution for the whole 'what if you need to flip a switch or replace a fan' problem.
The moment I learnt that chuds like Musk and Sammy Boi treat the speed of light as just a thing they can solve with sufficient computational power, I started treating all their claims like a 5yo talking shit. It's really all you need to know about them.
wasn't Musk financing people who were gonna "hack the simulation" at one point?
You could even say she is a harsh mistress.
what if the moon got mad tho
Tell Luna you've been a bad boy/girl/etc and need to be punished and see what she does.
on first reading I thought you were talking about a specific Luna (who I shall not tag here), who is definitely somewhat of a kinkposter
still might mean that, haha
Luna is a very common transfem name
If you accept the imho insane idea that a Mars colony is worth building, using Luna as a stepping stone makes sense. You can debug a lot of issues with recycling, growing food, low gravity, slow resupply etc. with a faster feedback loop.
self-growing
the virile space men will have plenty of nubile females to pump out babies
Weirdly, the moon might actually be more hostile than Mars… the dust is sharper, the gravity is lower, the radiation is worse, the nights are longer and colder, there’s less water…
It is a much cheaper and quicker means of murdering a bunch of astronauts though, so it does have that going for it.
It is a much cheaper and quicker means of murdering a bunch of astronauts though, so it does have that going for it.
There's also a better chance that Elon exits the planet sooner.
He hasn’t even done a suborbital flight yet, has he? I don’t see him being brave enough to even get as far as the moon, even assuming he’s healthy enough.
Shhh, don't tell him.
the virile space men will have
I've heard they like to read SF; they might want to read Heinlein's Luna stories and miss the point entirely
If you accept the imho insane idea that a Mars colony is worth building, using Luna as a stepping stone makes sense. You can debug a lot of issues with recycling, growing food, low gravity, slow resupply etc. with a faster feedback loop.
as usual, felon just has no imagination and is basing his silly plan straight off any number of scifi books. but some dipshit stan is going to ecstatically praise him for this "revolutionary" "forward-thinking" idea
Starting this Stubsack off by linking to Pavel Samsonov's "You can't "AI-proof your career" with a project mindset", a follow-on to Iris Meredith's "Becoming an AI-proof software engineer" which goes further into how best to safeguard one's software career from the slop-bots.