Perspectivist

joined 6 days ago
[–] Perspectivist@feddit.uk 2 points 12 hours ago (2 children)

You mean french fry sauce, because that's all it's good for.

[–] Perspectivist@feddit.uk 3 points 1 day ago (1 children)

I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.

I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.

[–] Perspectivist@feddit.uk 5 points 1 day ago (1 children)

Finland recently passed a law prohibiting people under 15 from riding electric scooters and similar vehicles. Until now, the average age of people hospitalized in accidents with these has been 12.

[–] Perspectivist@feddit.uk 4 points 1 day ago (3 children)

Did you genuinely not understand the point I was making, or are you just being pedantic? "Silicon" obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as "in non-biological substrates," I’m happy to oblige - but I have a feeling you already knew that.

[–] Perspectivist@feddit.uk 29 points 1 day ago* (last edited 1 day ago) (2 children)

Älä välitä, ei se villekään välittänyt, vaikka sen väliaikaiset välihousut jäi väliaikaisen välitystoimiston väliaikaisen välioven väliin.

Rough translation: Don’t worry about it - Ville didn’t worry either when his temporary long johns got caught in the temporary side door of the temporary temp agency.

[–] Perspectivist@feddit.uk 1 points 1 day ago

Don't confuse AGI with LLMs. Both being AI systems is the only thing they have in common. They couldn't be further apart when it comes to cognitive capabilities.

[–] Perspectivist@feddit.uk 10 points 1 day ago* (last edited 1 day ago) (5 children)

The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:

  1. Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,

  2. Or we wipe ourselves out before we get the chance.

Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That's what humans do: improve our technology.

The article points to cloning as a counterexample, but that’s not a technological dead end - it’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.

[–] Perspectivist@feddit.uk 11 points 2 days ago (1 children)

What's doubling down called when you're making the same mistake for the third or fourth time?

[–] Perspectivist@feddit.uk 3 points 3 days ago (1 children)

They're generally just referred to as "deep learning" or "machine learning". The models themselves usually have names of their own, such as AlphaFold, PathAI and Enlitic.

[–] Perspectivist@feddit.uk 7 points 3 days ago* (last edited 3 days ago) (1 children)

The term AGI was first used in 1997 by Mark Avrum Gubrud in an article titled ‘Nanotechnology and international security’:

> By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.

[–] Perspectivist@feddit.uk -4 points 3 days ago (2 children)

You’re moving the goalposts. First you claimed understanding requires awareness, now you’re asking whether an AI knows what a molecule is - as if that’s even the standard for functional intelligence.

No, AI doesn’t “know” things the way a human does. But it can still reliably identify ungrammatical sentences or predict molecular interactions based on training data. If your definition of “understanding” requires some kind of inner experience or conscious grasp of meaning, then fine. But that’s a philosophical stance, not a technical one.

The point is: you don’t need subjective awareness to model relationships in data and produce useful results. That’s what modern AI does, and that's enough to call it intelligent in the functional sense - whether or not it “knows” anything in the way you'd like it to.
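
To make that concrete, here's a toy sketch of what "modeling relationships in data" means - a tiny linear classifier that separates grammatical from scrambled sentences using nothing but surface word-order statistics. (This assumes scikit-learn is installed, and the mini-dataset below is invented purely for illustration.)

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented mini-dataset, purely for illustration.
sentences = [
    "the cat sat on the mat",       # grammatical
    "she has finished the report",  # grammatical
    "cat the mat on sat the",       # scrambled
    "finished has report the she",  # scrambled
]
labels = [1, 1, 0, 0]  # 1 = grammatical, 0 = scrambled

# Word and word-pair counts capture local word order - nothing more.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(sentences, labels)

print(model.predict(["the dog sat on the rug"]))  # likely [1]
print(model.predict(["rug the on sat dog the"]))  # likely [0]
```

Nothing in that pipeline "knows" what a cat or a report is - it has only modeled which word pairs tend to co-occur. And yet the output is useful, which is exactly the functional sense of intelligence I mean.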

 

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. When they seem to, it’s a side effect - a reflection of how much factual information was present in their training data. Fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
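
To make "statistical pattern machine" concrete, here's a deliberately tiny stand-in: a next-word sampler built from bigram counts. (Real LLMs use neural networks over subword tokens rather than word-pair counts, and the toy corpus below is invented - but the principle of "continue the prompt with plausible text" is the same.)

```python
import random
from collections import Counter, defaultdict

# Invented toy corpus standing in for "training data".
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_prompt(prompt: str, length: int = 8) -> str:
    """Extend a prompt by repeatedly sampling a plausible next word."""
    words = prompt.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no observed continuation for this word
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_prompt("the cat"))
# e.g. "the cat sat on the rug . the dog" - fluent-looking,
# but nothing here "knows" what a cat is.
```

Whether the continuation happens to be true is entirely incidental to how it's produced - which is the same reason an LLM can sound confident and still be wrong.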

 

I was delivering an order for a customer and saw some guy messing with the bikes on a bike rack using a screwdriver. Then another guy showed up, so the first one stopped, slipped the screwdriver into his pocket, and started smoking a cigarette like nothing was going on. I was debating whether to report it or not - but then I noticed his jacket said "Russia" in big letters on the back, and that settled it for me.

That was only the second time in my life I’ve called the emergency number.
