this post was submitted on 09 Jun 2025
789 points (91.7% liked)

Technology

[–] NeilBru@lemmy.world 71 points 1 month ago* (last edited 1 month ago) (5 children)

An LLM is a poor computational/predictive paradigm for playing chess.

[–] surph_ninja@lemmy.world 29 points 1 month ago (1 children)

This just in: a hammer makes a poor screwdriver.

[–] Takapapatapaka@lemmy.world 8 points 1 month ago (4 children)

Actually, a very specific model (gpt-3.5-turbo-instruct) was pretty good at chess (around 1700 Elo, if I remember correctly).

[–] AlecSadler@sh.itjust.works 57 points 1 month ago (7 children)

ChatGPT has been, hands down, the worst AI coding assistant I've ever used.

It regularly suggests code that doesn't compile or isn't even for the language.

It generally suggests a block of code that is just a copy of the lines I just wrote.

Sometimes it likes to suggest setting the same property like 5 times.

It is absolute garbage and I do not recommend it to anyone.

[–] j4yt33@feddit.org 17 points 1 month ago (4 children)

I find it really hit and miss. Easy, standard operations are fine, but if you have an issue with code you wrote and ask it to fix it, you can forget it.

[–] AlecSadler@sh.itjust.works 9 points 1 month ago (1 children)

I've found Claude 3.7 and 4.0 and sometimes Gemini variants still leagues better than ChatGPT/Copilot.

Still not perfect, but night and day difference.

I feel like ChatGPT didn't focus on coding and instead focused on mainstream use, but I am not an expert.

[–] Etterra@discuss.online 10 points 1 month ago (2 children)

That's because it doesn't know what it's saying. It's just blathering out each word as whatever it estimates to be the most likely next word, given past examples in its training data. It's a statistics calculator. It's marginally better than just smashing the autofill on your phone repeatedly. It's literally dumber than a parrot.
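
To make the "statistics calculator" point concrete, here's a minimal toy sketch of next-word prediction. The tiny frequency table and word choices are made up for illustration; a real LLM works over tokens with a neural network rather than a lookup table, but the "pick a likely continuation" step is the same idea:

```python
import random

# Toy next-word predictor: given the previous word, sample a continuation
# in proportion to how often it followed that word in the "training data".
# There is no understanding of meaning here, only observed frequencies.
next_word_counts = {
    "the": {"cat": 3, "dog": 2, "board": 1},
    "cat": {"sat": 4, "ran": 1},
}

def predict_next(prev_word: str) -> str:
    counts = next_word_counts.get(prev_word)
    if not counts:
        return "<unknown>"
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next("the"))  # likely "cat", but chosen statistically, not reasoned about
```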

[–] nutsack@lemmy.dbzer0.com 9 points 1 month ago (3 children)

my favorite thing is to constantly be implementing libraries that don't exist

[–] Blackmist@feddit.uk 11 points 1 month ago

You're right. That library was removed in ToolName [PriorVersion]. Please try this instead.

*makes up entirely new fictitious library name*

[–] arc99@lemmy.world 8 points 1 month ago (2 children)

All AIs are the same. They're just scraping content from GitHub, Stack Overflow, etc., with a bunch of guardrails slapped on to spew out sentences that conform to their training data, but there is no intelligence. They're super handy for basic code snippets, but anyone using them for anything remotely complex or nuanced will regret it.

[–] nednobbins@lemm.ee 47 points 1 month ago (5 children)

Sometimes it seems like most of these AI articles are written by AIs with bad prompts.

Human journalists would hopefully do a little research. A quick search would reveal that researchers have been publishing about this for over a year, so there's no need to sensationalize it. Perhaps the human journalist could have spent a little time talking about why LLMs are bad at chess and how researchers are approaching the problem.

LLMs on the other hand, are very good at producing clickbait articles with low information content.

[–] nova_ad_vitum@lemmy.ca 24 points 1 month ago (5 children)

GothamChess has a video of making ChatGPT play chess against Stockfish. Spoiler: ChatGPT does not do well. It plays okay for a few moves, but the moment it gets in trouble it straight up cheats. Telling it to follow the rules of chess doesn't help.

This sort of gets to the heart of LLM-based "AI". That one example to me really shows that there's no actual reasoning happening inside. It's producing answers that statistically look like answers that might be given based on that input.

For some things it even works. But calling this intelligence is dubious at best.
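
This is also why people who do get LLMs through full games bolt a rule engine on the outside of the model. A rough sketch of that idea using the python-chess library (assuming it's installed); `apply_llm_move` and the example moves below are illustrative stand-ins for whatever the model actually suggests:

```python
import chess

def apply_llm_move(board: chess.Board, suggestion: str) -> bool:
    """Try to apply a move suggested by an LLM; reject anything illegal or garbled."""
    try:
        move = board.parse_san(suggestion)  # raises ValueError on illegal or malformed moves
    except ValueError:
        return False  # re-prompt the model instead of letting it "cheat"
    board.push(move)
    return True

board = chess.Board()
print(apply_llm_move(board, "e4"))    # True: a legal opening move
print(apply_llm_move(board, "Ke5"))   # False: the king can't teleport to e5
print(apply_llm_move(board, "Qxg7"))  # False: no such capture exists in this position
```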

[–] floofloof@lemmy.ca 43 points 1 month ago* (last edited 1 month ago) (1 children)

I suppose it's an interesting experiment, but it's not that surprising that a word prediction machine can't play chess.

[–] otp@sh.itjust.works 9 points 1 month ago (1 children)

Because people want to feel superior because they ~~don't know how to use a ChatBot~~ can count the number of "r"s in the word "strawberry", lol

[–] electricyarn@lemmy.world 15 points 1 month ago (3 children)

Yeah, just because I can't count the number of r's in the word strawberry doesn't mean I shouldn't be put in charge of the US nuclear arsenal!
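
For contrast, the letter-counting party trick is a one-liner for an ordinary deterministic program, which is rather the point:

```python
# Counting characters is trivial computation, not statistical prediction.
word = "strawberry"
print(word.count("r"))  # 3
```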

[–] Halosheep@lemm.ee 41 points 1 month ago (1 children)

I swear every single article critical of current LLMs is like, "The square got BLASTED by the triangle shape when it completely FAILED to go through the triangle shaped hole."

[–] MonkderVierte@lemmy.zip 39 points 1 month ago (1 children)

LLMs are not built for logic.

[–] PushButton@lemmy.world 17 points 1 month ago (2 children)

And yet everybody is selling it to write code.

The last time I checked, coding required logic.

[–] jj4211@lemmy.world 10 points 1 month ago (4 children)

To be fair, a decent chunk of coding is stupid boilerplate/minutia that varies environment to environment, language to language, library to library.

So LLM can do some code completion, filling out a bunch of boilerplate that is blatantly obvious, generating the redundant text mandated by certain patterns, and keeping straight details between languages like "does this language want join as a method on a list with a string argument, or vice versa?"

Problem is, this can sometimes be more trouble than it's worth, since miscompletions are annoying.
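
As a concrete example of the join minutia mentioned above, this is exactly the kind of per-language detail a completer can keep straight (Python shown; the JavaScript contrast is noted in a comment):

```python
# Python: join is a method on the separator string and takes the list as an argument.
names = ["alice", "bob", "carol"]
print(", ".join(names))  # -> "alice, bob, carol"

# JavaScript flips it: the method lives on the array, e.g. names.join(", ").
```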

[–] anubis119@lemmy.world 37 points 1 month ago (6 children)

A strange game. How about a nice game of Global Thermonuclear War?

[–] ada@piefed.blahaj.zone 18 points 1 month ago

No thank you. The only winning move is not to play

[–] Furbag@lemmy.world 28 points 1 month ago (5 children)

Can ChatGPT actually play chess now? Last I checked, it couldn't remember more than 5 moves of history, so it wouldn't be able to see the true board state and would make illegal moves, take its own pieces, materialize pieces out of thin air, etc.

[–] ToastedRavioli@midwest.social 9 points 1 month ago

ChatGPT must adhere honorably to the rules that it's making up on the spot. That's Dallas.

[–] cley_faye@lemmy.world 22 points 1 month ago

Ah, you used logic. That's the issue. They don't do that.

[–] arc99@lemmy.world 20 points 1 month ago (3 children)

Hardly surprising. LLMs aren't *thinking*, they're just shitting out the next token for any given input of tokens.

[–] Lembot_0003@lemmy.zip 14 points 1 month ago (2 children)

The Atari chess program can play chess better than the Boeing 747 too. And better than the North Pole. Amazing!

[–] CarbonatedPastaSauce@lemmy.world 12 points 1 month ago (2 children)

Neither of those things is marketed as being artificially intelligent.

[–] finitebanjo@lemmy.world 14 points 1 month ago

All these comments asking "why don't they just have chatgpt go and look up the correct answer".

That's not how it works, you buffoons; it trains on datasets long before it's released. It doesn't think. It doesn't learn after release, and it won't remember things you try to teach it.

Really lowering my faith in humanity when even the AI skeptics don't understand that it generates statistical representations of an answer based on answers given in the past.

[–] Endymion_Mallorn@kbin.melroy.org 12 points 1 month ago

I mean, that 2600 chess game was built from the ground up to play a good game of chess with variable difficulty levels. I bet there were days or games when Fischer couldn't have beaten it. Just because a thing is old and less capable than the modern world does not mean it's bad.

[–] Nurse_Robot@lemmy.world 11 points 1 month ago (3 children)

I'm often impressed at how good ChatGPT is at generating text, but I'll admit it's hilariously terrible at chess. It loves to manifest pieces out of thin air, or make absurd illegal moves, like jumping its king halfway across the board and claiming checkmate.

[–] Sidhean@lemmy.blahaj.zone 9 points 1 month ago

Can I fistfight ChatGPT next? I bet I could kick its ass, too :p

[–] Kolanaki@pawb.social 9 points 1 month ago (4 children)

There was a chess game for the Atari 2600? :O

I wanna see them W I D E pieces.

[–] stevedice@sh.itjust.works 8 points 1 month ago

2025 Mazda MX-5 Miata 'got absolutely wrecked' by Inflatable Boat in beginner's boat racing match — Mazda's newest model bamboozled by 1930s technology.
