this post was submitted on 25 Jul 2025
8 points (68.2% liked)

Technology

top 11 comments
[–] t3rmit3@beehaw.org 2 points 16 hours ago

People need to understand the difference between LLMs and Neural Networks.

LLM training is a massive energy hog that gives us nothing but the illusion of coherent human-made text.

Non-LLM neural networks are much broader in application, almost always far less energy-intensive to train, and often remarkably accurate when fine-tuned for specific purposes.

LLMs can die in a fire, and nothing would be lost. NNs in general are incredibly useful and honestly a massive source of potential for bettering healthcare (and science research in general) globally.
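As a concrete illustration of the "small, task-specific network" being described, here is a minimal sketch in pure Python: a toy two-layer network (~33 parameters) trained on a made-up circle-classification task. It is not a medical model — the data, architecture, and hyperparameters are all illustrative — but it trains in well under a second on a CPU, which is the scale contrast with LLM training being drawn above.

```python
import math
import random

# Toy illustration only: a tiny two-layer network on a made-up
# binary task. The point is scale — this model has ~33 parameters,
# while frontier LLMs have on the order of 10^11.

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

# Made-up data: classify points as inside vs. outside a circle.
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
data = [((x1, x2), 1.0 if x1 * x1 + x2 * x2 < 0.5 else 0.0)
        for x1, x2 in points]

H = 8  # hidden units
w1 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.gauss(0, 1) for _ in range(H)]
b2 = 0.0

lr = 0.5
for epoch in range(300):
    for (x1, x2), y in data:
        # forward pass
        h = [math.tanh(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(H)]
        p = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
        # backward pass: per-sample SGD on the cross-entropy loss
        d_out = p - y
        for j in range(H):
            d_h = d_out * w2[j] * (1 - h[j] ** 2)  # uses pre-update w2[j]
            w2[j] -= lr * d_out * h[j]
            w1[j][0] -= lr * d_h * x1
            w1[j][1] -= lr * d_h * x2
            b1[j] -= lr * d_h
        b2 -= lr * d_out

def predict(x1, x2):
    h = [math.tanh(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(H)]
    return sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)

correct = sum((predict(x1, x2) > 0.5) == (y > 0.5) for (x1, x2), y in data)
n_params = 2 * H + H + H + 1
print(f"train accuracy: {correct / len(data):.0%}, parameters: {n_params}")
```

Training this end to end is a few million floating-point operations — many orders of magnitude below what even one LLM inference costs, let alone an LLM training run.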

[–] Kissaki@beehaw.org 3 points 1 day ago

in a world of abundance

Uh, I guess this is a hypothetical about a possible utopian future, rather than a claim about current AI or one based on current trends and implementations.

[–] Perspectivist@feddit.uk 8 points 2 days ago

Artificial intelligence isn’t designed to maximize human fulfillment. It’s built to minimize human suffering.

What it cannot do is answer the fundamental questions that have always defined human existence: Who am I? Why am I here? What should I do with my finite time on Earth?

Expecting machines to resolve existential questions is like expecting a calculator to write poetry. We’re demanding the wrong function from the right tool.

Pretty weird statements. There’s no such thing as just “AI” - they should be more specific. LLMs aren’t designed to maximize human fulfillment or minimize suffering. They’re designed to generate natural-sounding language. If they’re talking about AGI, then that’s not designed for any one thing - it’s designed for everything.

Comparing AGI to a calculator makes no sense. A calculator is built for a single, narrow task. AGI, by definition, can adapt to any task. If a question has an answer, an AGI has a far better chance of figuring it out than a human - and I’d argue that’s true even if the AGI itself isn’t conscious.

[–] SweetCitrusBuzz@beehaw.org 7 points 2 days ago* (last edited 2 days ago) (3 children)

It won't solve anything except "How do we slowly kill off most life on this planet by drawing too much energy from power plants that spew awful chemicals into the air, and by creating deserts through using up all the water too?"

[–] t3rmit3@beehaw.org 2 points 16 hours ago

LLMs, sure.

Neural networks in general, though, are massively useful. NNs trained for e.g. medical diagnostics or scientific research have minuscule energy footprints compared to LLMs, can be incredibly accurate (even beyond human experts), and open up tons of research avenues that existing budgets just couldn't support.

[–] Quexotic@beehaw.org 1 points 17 hours ago (1 child)

That is, if it doesn't kill us by engineering a bioweapon first.

[–] SweetCitrusBuzz@beehaw.org 1 points 17 hours ago

maybe in 10000 years

[–] Perspectivist@feddit.uk 3 points 2 days ago* (last edited 2 days ago) (2 children)

It won’t solve anything

Go tell that to AlphaFold which solved a decades‑old problem in biology by predicting protein structures with near lab‑level accuracy.

[–] belated_frog_pants@beehaw.org 5 points 2 days ago

It could have been done without burning the earth down to get there.

[–] SweetCitrusBuzz@beehaw.org 1 points 2 days ago (1 child)

Oh yes, and how many chemicals did it cause to spew out and how much water did it deplete? That solution won't matter if life is dead anyway.

[–] Perspectivist@feddit.uk 3 points 2 days ago* (last edited 2 days ago)

Way to move the goalposts.

If you take that question seriously for a second - AlphaFold doesn’t spew chemicals or drain lakes. It’s a piece of software that runs on GPUs in a data center. The environmental cost is just the electricity it uses during training and prediction.

Now compare that to the way protein structures were solved before: years of wet lab work with X‑ray crystallography or cryo‑EM, running giant instruments, burning through reagents, and literally consuming tons of chemicals and water in the process. AlphaFold collapses that into a few megawatt‑hours of compute and spits out a 3D structure in hours instead of years.
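The comparison in the paragraph above can be made concrete with a back-of-envelope sketch. Every number below is an illustrative assumption (none are measured or published figures); the point is only the shape of the arithmetic:

```python
# Back-of-envelope energy comparison: ML structure prediction vs.
# wet-lab structure determination. ALL figures are assumptions made
# up for this illustration, not measured or published numbers.

# Assumed: a one-off training run on an accelerator cluster.
cluster_power_kw = 100          # assumed average cluster draw
training_days = 14              # assumed training duration
training_mwh = cluster_power_kw * 24 * training_days / 1000

# Assumed: one structure prediction on a single GPU.
gpu_power_kw = 0.4              # assumed single-GPU draw
inference_hours = 2             # assumed time per predicted structure
per_structure_kwh = gpu_power_kw * inference_hours

# Assumed: a wet-lab determination, dominated by months of instrument
# time (X-ray source or cryo-EM) plus facility overhead.
lab_power_kw = 30               # assumed average facility draw
lab_months = 12                 # assumed duration per structure
lab_mwh = lab_power_kw * 24 * 30 * lab_months / 1000

print(f"training (one-off): {training_mwh:.1f} MWh")
print(f"per prediction:     {per_structure_kwh:.2f} kWh")
print(f"per lab structure:  {lab_mwh:.1f} MWh")
print(f"training run ≈ {training_mwh / lab_mwh:.2f}× one lab structure")
```

Even if these assumed figures are off by an order of magnitude in either direction, the structure of the argument holds: a one-off training cost amortized across many cheap predictions, versus a large per-structure cost paid every time in the lab.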

So if the concern is environmental footprint, the AI way is dramatically cleaner than the old human‑only way.