this post was submitted on 14 Jul 2025
6 points (87.5% liked)

Hacker News


Posts from the RSS Feed of HackerNews.

The feed sometimes contains ads and posts that have been removed by the mod team at HN.

[–] SpookyMulder@lemmy.4d2.org 2 points 6 days ago* (last edited 6 days ago)

Is this satire? If so, it's very subtle.

Presumably bullshit.

And yet:

This kind of nonsense is what's feasible with an AI everyone agrees is "not really intelligent." No model thinks about what code does, but you can ask what code does, and it will try to tell you. You can also describe what the code is supposed to do, and it will try to make the code do that thing. Looping that might turn GetAnAlbumCover into GetAnalBumCover, but the result should still return an anal bum cover.
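That loop can be written down. A minimal sketch, where `ask_model`, `repair_loop`, and the canned replies are all hypothetical stand-ins for a real LLM call, stubbed so the sketch runs on its own:

```python
# Sketch of the loop described above: ask what the code does, compare that
# against the intent, and ask for a fix when they disagree.
# `ask_model` is a hypothetical stand-in for an LLM API, not a real one.

def ask_model(prompt: str) -> str:
    """Pretend LLM: describes code, or rewrites it on request (stubbed)."""
    if prompt.startswith("Describe:"):
        code = prompt[len("Describe:"):]
        # Canned "understanding": look at the operator actually used.
        return "adds the numbers" if "+" in code else "subtracts the numbers"
    # Any other prompt is treated as a fix request; return corrected code.
    return "def total(a, b):\n    return a + b"


def repair_loop(code: str, spec: str, rounds: int = 3) -> str:
    """Loop 'what's happening here?' against 'what's supposed to happen?'."""
    for _ in range(rounds):
        description = ask_model("Describe:" + code)
        if spec in description:
            return code  # the model's description matches the intent
        code = ask_model(f"Fix this code so it {spec}:\n{code}")
    return code


buggy = "def total(a, b):\n    return a - b"
fixed = repair_loop(buggy, "adds the numbers")
print(fixed)
```

With a real model behind `ask_model`, the stop condition would be fuzzier than a substring match, but the shape of the loop is the same.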

"What's supposed to be happening here?", "Is that what's happening here?", and "What would make that happen here?" are all questions a neural network can answer. Any functionality can be approximated. Any functionality. LLMs almost kinda sorta do it already, and they're only approximating "What's the next word?" It is fucking bonkers that these models are so flexible.