Perspectivist

joined 3 days ago
[–] Perspectivist@feddit.uk 3 points 14 hours ago (1 children)

They're generally just referred to as "deep learning" or "machine learning". The models themselves usually have names of their own, such as AlphaFold, PathAI and Enlitic.

[–] Perspectivist@feddit.uk 7 points 14 hours ago* (last edited 14 hours ago) (1 children)

The term AGI was first used in 1997 by Mark Avrum Gubrud in an article titled ‘Nanotechnology and international security’:

By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.

[–] Perspectivist@feddit.uk -2 points 20 hours ago (2 children)

You’re moving the goalposts. First you claimed understanding requires awareness; now you’re asking whether an AI knows what a molecule is - as if that’s even the standard for functional intelligence.

No, AI doesn’t “know” things the way a human does. But it can still reliably identify ungrammatical sentences or predict molecular interactions based on training data. If your definition of “understanding” requires some kind of inner experience or conscious grasp of meaning, then fine. But that’s a philosophical stance, not a technical one.

The point is: you don’t need subjective awareness to model relationships in data and produce useful results. That’s what modern AI does, and that's enough to call it intelligent in the functional sense - whether or not it “knows” anything in the way you'd like it to.

[–] Perspectivist@feddit.uk 0 points 20 hours ago (1 children)

Most definitions are imperfect - that’s why I said the term AI, at its simplest, refers to a system capable of performing any cognitive task typically done by humans. Doing things faster, or even doing things humans can’t do at all, doesn’t conflict with that definition.

Humans are unarguably generally intelligent, so it’s only natural that we use “human-level intelligence” as the benchmark when talking about general intelligence. But personally, I think that benchmark is a red herring. Even if an AI system isn’t any smarter than we are, its memory and processing capabilities would still be vastly superior. That alone would allow it to immediately surpass the “human-level” threshold and enter the realm of Artificial Superintelligence (ASI).

As for something like making a sandwich - that’s a task for robotics, not AI. We’re talking about cognitive capabilities here.

[–] Perspectivist@feddit.uk -2 points 20 hours ago (5 children)

“Understanding requires awareness” isn’t some settled fact - it’s just something you’ve asserted. There’s plenty of debate around what understanding even is, especially in AI, and awareness or consciousness is not a prerequisite in most definitions. Systems can model, translate, infer, and apply concepts without being “aware” of anything - just like humans often do things without conscious thought.

You don’t need to be self-aware to understand that a sentence is grammatically incorrect or that one molecule binds better than another. It’s fine to critique the hype around AI - a lot of it is overblown - but slipping in homemade definitions like that just muddies the waters.

[–] Perspectivist@feddit.uk 0 points 20 hours ago (2 children)

The issue here is that machine learning also falls under the umbrella of AI.

[–] Perspectivist@feddit.uk -5 points 20 hours ago (4 children)

So… not intelligent.

But they are intelligent - just not in the way people tend to think.

There’s nothing inherently wrong with avoiding certain terminology, but I’d caution against deliberately using incorrect terms, because that only opens the door to more confusion. It might help when explaining something one-on-one in private, but in an online discussion with a broad audience, you should be precise with your choice of words. Otherwise, you end up with what looks like disagreement, when in reality it’s just people talking past each other - using the same terms but with completely different interpretations.

[–] Perspectivist@feddit.uk 9 points 22 hours ago (3 children)

Both that and LLMs fall under the umbrella of machine learning, but they branch in different directions. LLMs are optimized for generating language, while the systems used in drug discovery focus on pattern recognition, prediction, and simulations. Same foundation - different tools for different jobs.
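
Purely as a toy illustration (the data and numbers below are invented, and real systems are vastly larger), here’s that “same foundation, different tools” idea in code: both branches just learn statistics from examples - one uses them to continue text, the other to predict a property.

```python
# Toy sketch, standard library only: two "branches" of machine learning
# built on the same foundation - statistics learned from example data.
from collections import Counter, defaultdict

# --- Generative branch (LLM-like): learn which word tends to follow which ---
corpus = "the cat sat on the mat the cat ate".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_text(word, steps=3):
    out = [word]
    for _ in range(steps):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])  # likeliest next word
    return " ".join(out)

# --- Predictive branch (drug-discovery-like): predict a label from features ---
# Hypothetical "molecules", each described by two made-up numeric features
# and whether it binds (1) or not (0).
training = [((0.2, 0.9), 1), ((0.8, 0.1), 0), ((0.3, 0.8), 1), ((0.9, 0.2), 0)]

def predicted_to_bind(features):
    # 1-nearest-neighbour: reuse the label of the most similar known example.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: dist(ex[0], features))[1]

print(continue_text("the"))             # -> "the cat sat on" (plausible text)
print(predicted_to_bind((0.25, 0.85)))  # -> 1 (a prediction, not prose)
```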

[–] Perspectivist@feddit.uk 5 points 23 hours ago (3 children)

It’s certainly not any task, that’d be AGI.

Any individual task, I mean - not every task.

[–] Perspectivist@feddit.uk 11 points 23 hours ago (1 children)

If you’re talking about LLMs, then you’re judging the tool by the wrong metric. They’re not designed to solve problems or pass CAPTCHAs - they’re designed to generate coherent, natural-sounding text. That’s the task they’re trained for, and that’s where their narrow intelligence lies.

The fact that people expect factual accuracy or problem-solving ability is a mismatch between expectations and design - not a failure of the system itself. You're blaming the hammer for not turning screws.

[–] Perspectivist@feddit.uk 9 points 23 hours ago (4 children)

Consciousness - or “self-awareness” - has never been a requirement for something to qualify as artificial intelligence. It’s an important topic in AI, sure, but it’s a separate discussion entirely. You don’t need self-awareness to solve problems, learn patterns, or outperform humans at specific tasks - and that’s what intelligence, in this context, actually means.

[–] Perspectivist@feddit.uk 14 points 23 hours ago (5 children)

In computer science, the term AI at its simplest just refers to a system capable of performing any cognitive task typically done by humans.

That said, you’re right in the sense that when people say “AI” these days, they almost always mean generative AI - not AI in the broader sense.

 

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
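
To make that concrete, here’s a minimal sketch of “continue the prompt with plausible text”. Everything in it is invented for illustration - the prompt, the tokens, and the probabilities; a real LLM learns distributions like this over a huge vocabulary, one token at a time.

```python
# Minimal sketch of next-token sampling. The probabilities are made up;
# in a real model they come from training on vast amounts of text.
import random

# Suppose the training data made "paris" the overwhelmingly likely
# continuation of this prompt - but not the only plausible one.
next_token_probs = {"paris": 0.90, "beautiful": 0.06, "lyon": 0.04}

def sample_next(probs):
    # Pick a token in proportion to its learned probability.
    r, cumulative = random.random(), 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # floating-point safety net

prompt = "the capital of france is"
print(prompt, sample_next(next_token_probs))
# Usually prints "paris" - not because the model knows the fact, but
# because that continuation dominated the training text. Sometimes it
# prints "beautiful": equally valid language, just no longer a "fact".
```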

 

I was delivering an order for a customer and saw some guy messing with the bikes on a bike rack using a screwdriver. Then another guy showed up, so the first one stopped, slipped the screwdriver into his pocket, and started smoking a cigarette like nothing was going on. I was debating whether to report it or not - but then I noticed his jacket said "Russia" in big letters on the back, and that settled it for me.

That was only the second time in my life I’ve called the emergency number.
