Fundamentally, anything humans can do can be done by physical systems of some kind (because humans are already such a system), so given enough time I'd bet it would eventually be possible to make a machine do literally anything a human can do. There might be some things nobody ever gets an AI to replicate even if it's technically possible, though, just because nobody has the motivation to.
AI
Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen.
The flavor of Cinnamon Toast Crunch.
Since AI is trained by us, using the fruit of human labor as input, it'll have to be something we can't train it to do.
Something biological or instinctual... like how being in close proximity to an AI will never result in synchronized menstruation, since an AI can't and won't ever menstruate.
So... That 👍
Synced menstruation is supposed to be a myth now. I have experienced it many times, but I guess it's mostly considered coincidence, which it could be; I'm not a mathematician.
What would you bet on AI not ever getting the ability to menstruate?
Computers will never consistently beat humans, and humans will never consistently beat computers, at snakes and ladders.
Or rock-paper-scissors, for that matter.
Didn't some robotics lab build an unbeatable rock-paper-scissors robot a while back?
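If I remember right, that robot "wins" by reading the player's hand with a high-speed camera and reacting before the throw finishes, so it isn't really playing blind. Assuming both sides actually pick blind and uniformly at random (just a toy sketch of my own, not anything from a paper), a quick simulation shows why neither side can pull ahead in the long run:

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play(rounds=100_000):
    """Both players choose uniformly at random each round."""
    wins = losses = draws = 0
    for _ in range(rounds):
        a, b = random.choice(MOVES), random.choice(MOVES)
        if a == b:
            draws += 1
        elif BEATS[a] == b:
            wins += 1
        else:
            losses += 1
    # Return the fraction of rounds player A won, lost and drew.
    return wins / rounds, losses / rounds, draws / rounds

print(play())  # all three fractions land near 1/3
```

With no information to exploit there's simply nothing for either side to be better at, and snakes and ladders is the same story with dice instead of hand shapes.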
Calvinball! All hail Watterson lol
Pretty sure it won't manage Ligma any time soon
They will when they perfect the bofa fill algorithm
There are already specialized robots just for that
An exact 1:1 realtime copy of itself emulated within a simulated universe.
Pretty much everything else mentioned in this thread falls into the "never say never" category.
Also: being able to analyze an arbitrary program and determine whether it will ever stop.
Probably still a never say never problem:
In their new paper, the five computer scientists prove that interrogating entangled provers makes it possible to verify answers to unsolvable problems, including the halting problem.
If you actually read the article, it doesn't say anything about being able to solve the halting problem. It used the undecidability of the halting problem to prove equivalence of another class of problems to the halting problem.
Which is why I said it was still a "never say never" and not an already solved problem.
The halting problem is unsolvable for Turing machines, but if hypercomputation ever turns out to be possible, it isn't necessarily out of reach.
For example, an oracle machine as proposed by Turing, or a 'real' computer operating on actual real-valued quantities.
The latter in particular may even end up a thing in the not-too-distant future, assuming neural networks continue to move into photonics in such a way that the networks run while their internals are never directly measured. In that case the issue would be verifying the result - the very topic of the paper in question.
Effectively: while it's proven that we can never directly measure a solution to the halting problem, I wouldn't bet that within my lifetime we won't have ended up able to indirectly measure a solution to the problem and directly validate the result.
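For anyone who hasn't seen why the plain Turing-machine version is impossible in the first place, here's the standard diagonalization argument as a rough Python sketch (the names `halts` and `paradox` are just illustrative; no real `halts` can exist, which is the whole point):

```python
def halts(program, program_input):
    """Pretend this returns True iff program(program_input) eventually stops."""
    raise NotImplementedError("no general halting decider can exist")

def paradox(program):
    # Do the opposite of whatever `halts` predicts about running
    # `program` on its own source code.
    if halts(program, program):
        while True:  # loop forever if it was predicted to halt
            pass
    else:
        return       # halt immediately if it was predicted to loop

# paradox(paradox) contradicts `halts` either way: if `halts` says it
# halts, it loops; if `halts` says it loops, it halts. So no ordinary
# program can implement `halts` correctly for all inputs.
```

Whatever `halts` predicts about `paradox(paradox)`, the program does the opposite, which is exactly why the workarounds above have to reach for oracles, entangled provers or hypercomputation rather than ordinary code.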
deez nuts
Giving everyone money for free from the rich people! Yeah, that's right... wealth redistribution! AI won't ever be able to do that.
Organic intelligence? Though that qualifier kind of removes a lot of answers when you also say "never".
A bit fallacious to add "organic" to intelligence. But then I'm sure we will be able to make organic computers at some point. I think there is research into this already.
I'd like to be proven wrong, but: empathy.
Truly creative, decent Dad Jokes.
I don't know, there are a couple of pretty good ones here from ChatGPT:
Of course! Here are some classic dad jokes for you:
- Why don't skeletons fight each other? They don't have the guts.
- Did you hear about the cheese factory that exploded? There was nothing left but de-brie.
- I used to play piano by ear, but now I use my hands.
- What do you call a fish with no eyes? Fsh.
- Why did the scarecrow win an award? Because he was outstanding in his field.
- What's brown and sticky? A stick.
- How does a penguin build its house? Igloos it together.
- I'm reading a book on anti-gravity. It's impossible to put down.
- Parallel lines have so much in common. It's a shame they'll never meet.
- Did you hear about the mathematician who's afraid of negative numbers? He'll stop at nothing to avoid them.
Most of those predate the internet
Social anxiety
Well, I know my social anxiety is basically just a hallucination based on a bad data set.
Cracking my knuckles nervously before I’m about to give a presentation in front of the whole class.
Stupid posts like this one
Stupid comments like this one
And this one
And that one
And those ones over there
How humans think. AI "thinking" will always be different from human thinking, because the human brain is "that thing" that is impossible to simulate in silico as-is. We might be able to get good approximations, but as good as they can get, they'll always diverge from the real thing.
I guess a good part also comes from learned experiences. Having a body, growing up, feeling pain, being mortal.
And yes, the brain is an incredibly complex system not only of neurons, but also transmitters, receptors, a whole truckload of biochemistry.
But in the end, both are just matter in patterns, excitation in coordination. The effort to simulate it is substantial, but I don't see why that would NEVER succeed if someone with the capabilities insisted on it. However, it might be fully sufficient for the task (whatever that is, probably porn) to simulate 95% or so, which would technically still not be the real deal.
Driving
I was gonna say left turns lol
Anxiety
Feeling superior after being witty.
Art. At least, until we get AI which is actually capable of thought, which I personally don't think is going to happen. Art of any kind is completely inaccessible to the sorts of "AI" being put forward now. Art is fundamentally about conveying a meaning beneath the surface. All art, visual or verbal or otherwise, shares this trait. AI has no feelings, no meaning to share. All it does is meaninglessly mimic the form of art made by others.
An artist and an AI, when given the same prompt, will produce similar outputs. However, an artist replicates it in strokes, while an AI replicates it in pixels. AI can create art, because art is in the eye of the observer, but it's different from a human creating art.
An artist and an AI, when given the same prompt, will produce similar outputs.
Yeah, that's what art is about, you got it.
Is there a Turing test for art, and what's the detection rate?
I think any clear definition will either positively identify lots of AI works as art (along with collections of random junk), or deny the qualifier to lots of supposed artworks from human artists.
Coming from theater, I agree it is about "conveying a meaning beneath the surface". Having studied computer science, I'd note that this is very much not a strict definition, but a very vague one. It seems to be a feature, not a bug, that everyone in the audience can see something different.
I think you can pretty much present random nonsense and someone will still find it brilliant and inspiring, and a lot more people will tell you what patterns they saw and what it reminded them of. The meaning is created in the minds of the observers, even if the creator explicitly did not put that meaning, or any meaning at all, into the "art".