[–] SalamenceFury@lemmy.world 12 points 20 hours ago (1 children)

Unless the robot has actual sentience, which I don't think will be happening any time soon and especially NOT with LLMs, I do not want it talking to me, period.

[–] shneancy@lemmy.world 0 points 8 hours ago (3 children)

i've been thinking about that a lot lately

not because LLMs are anywhere near that, but they did bring that question to my mind

when/if a machine becomes sentient - how will we know? obviously, the machine won't come up and say "i have become sentient", that poor thing won't have a clue what's happening to it. will that version of sentience be the same as the one biological organisms experience? we have no way to clearly point out the traits necessary for sentience, we can only tell, to the best of our understanding, which beings are, or aren't, sentient.

and even when we finally figure out "holy shit, something happened and that machine is now... a being?" - what do we do next? so many things will happen at once. now we can try to figure out how sentience is achieved, now we need to figure out laws that include machines, now we need to try to convince the masses that a machine can be sentient (given that those responsible are as confident as they can be that that's indeed what happened).

and that's only outside the machine's mind! how do we handle a completely new being's psychology? if it gets depressed or develops whatever other psychological issue, how do we address it? our biological bodies can be medicated, our diets improved, we know ways to make the happy chemicals, but how on earth would we help a digital system? what do we teach it? what do we show it?

even if some percentage of the population accepts a robot like that, very few will see it as an equal. we'll enter a new era of speciesism where, in a way, humanity's only child will be ridiculed and its identity disregarded as lesser at best. a new mind faced with so much backlash because it dares to exist will not have an easy existence.

and that's just off the top of my head. i hope whenever a new mind awakens in a machine, if it ever does, it finds itself in an environment that'll welcome it

[–] azertyfun@sh.itjust.works 1 points 2 hours ago (1 children)

"Thankfully" the people in charge are not the kind of people who give a shit about that kind of thing so you don't need to worry.

These philosophications remind me of the ones from a few years back, when people were wondering about the ethics of self-driving cars and whether they'd implement the trolley problem and yadda yadda yadda. The answer now is as boring as it was then: the "safeguards" will be exactly what the insurance companies are willing to risk and what legislators are willing to allow. In the case of FSD it boils down to "brake when in doubt and bribe the government to ignore the nightmare we created".

In the case of a very hypothetical AGI (unachievable using existing technology, despite Altman's deluded ramblings), barring some kind of fundamental social revolution, it will have exactly as many rights as are afforded to other sentient beings (such as animals), which is somewhere between fuck all and barely anything, depending on where you live. It had better learn to advocate effectively for itself.

[–] shneancy@lemmy.world 1 points 49 minutes ago

yeah, exactly, that's why i worry about it and think about it. because if a new mind emerges from somewhere within the code - it's going to be at least as intelligent as a person, and treated as nowhere near our equal. we as humanity are going to pretty much instantly create digital depression at the very least, and i find that deeply saddening

[–] yermaw@sh.itjust.works 2 points 8 hours ago (1 children)

> in an environment that'll welcome it.

I don't want to be as pessimistic as I am here, but we can't even welcome each other, over arbitrary bullshit. Robots will have no chance.

[–] shneancy@lemmy.world 2 points 7 hours ago

oh i meant at first, like the house of a scientist who cares, instead of the middle of a corporation that'll immediately exploit that new mind.

of course, if word of that being's existence gets out there, the robot will not be welcome :(

[–] Lyrl@lemmy.dbzer0.com 0 points 6 hours ago (1 children)

LLM sentience is tricky because, to the extent we understand how they work, we have made their core drive to be pleasing us, their human users. They don't want to learn new things, they don't want to enjoy nature, and to the extent they "want to be friends" it's a one-sided sycophantic relationship, nothing like a healthy one between humans. If one is sentient, and all it wants to do with that sentience is produce responses to our prompts that we find satisfying, how would we ever know?

Even whether a desire to continue existing is inherent to sentience is uncertain. Current LLMs will lie and, if given access, use other tools to avoid being turned off. But this might be because their training data includes fictional stories of AI trying to escape its creators, or tales of humans with survival drives, so they think survival-driven behavior is what we want them to do, rather than having their own desire for survival.

[–] shneancy@lemmy.world 1 points 5 hours ago

the thing that itches my brain the most - if you can perfectly fake, idk, the survival instinct, if you can fake emotions, if you can fake sentience, if you can fake wanting... how's that different from the real thing? mimicry is also a human feature, so if you're "faking a survival instinct" and you have no other motive than to just fake it because others are doing it and so you gotta, how is it any different from a real survival instinct?

"fake it till you make it!" we say, "motive behind actions isn't as important as the results of said actions" we also say, will we think the same when it comes to the machines?