this post was submitted on 06 Oct 2025

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.


Original post: social.coop (Mastodon)

[โ€“] Paragone@lemmy.world 2 points 4 days ago (1 children)

There are math-specific LLMs, and coding-specific ones

(Yi Coder is one, which I've used to translate bits of code into a language I can sorta understand.. Julia. I've been trying to learn programming for decades, and brain injury can go eat rocks. : )

LM Studio has a search function, so search for "math" in its model search and see what it comes up with.

I've used such things to get the derivative of some horrible equation NASA published decades ago, and then found an online derivative calculator to check it with..


The thing that kills me is that IT SHOULD BE CHECKED, dammit!

i.e.: IF the LLM did some bullshit "arithmetic" on a column of numbers, THEN the regular code of the spreadsheet should

  1. display the function that the AI used, if any, and
  2. suggest the SUM() function, AND SHOW THAT FUNCTION'S RESULT.
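A minimal sketch of that two-step check, in JavaScript (hypothetical helper names, and only an approximation of spreadsheet SUM() semantics, which skip text cells rather than coercing them):

```javascript
// Hypothetical sketch of the proposed safeguard: show what the AI
// computed, then show what the spreadsheet's own SUM() would say.
// (Not Excel's real internals; SUM-like behaviour approximated here.)

function sum(cells) {
  // Spreadsheet-style SUM() ignores text cells instead of coercing them.
  return cells.reduce((t, v) => (typeof v === "number" ? t + v : t), 0);
}

const cells = ["1", 2, 3];      // one cell holds the TEXT "1"
const aiResult = "1" + (2 + 3); // naive string concatenation: "15"

console.log(`AI's "arithmetic": ${aiResult}`);   // "15"
console.log(`SUM() result: ${sum(cells)}`);      // 5
```

Putting both numbers side by side is exactly the kind of cheap sanity check that would have exposed the meme's nonsense answer immediately.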

This whole "LLM: take the wheel" idiocy is incomprehensible.

DuckDuckGo's AI is hit-or-miss, and sometimes it is stubbornly wrong: no correction gets through to it.

_ /\ _

[โ€“] RunawayFixer@lemmy.world 1 points 4 days ago

One of the other replies said that "1"+(2+3) is "15" in JavaScript. So my last theory as to what was going on was that the creator of the meme had ="1", 2 and 3 as cell contents, and then Copilot used Python code to sum those, not SUM(), which would have answered 5.
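The JavaScript coercion quoted in that reply is easy to reproduce in any JS console:

```javascript
// JavaScript's `+` adds numbers, but concatenates the moment either
// operand is a string, so the parenthesised sum gets stringified.
console.log("1" + (2 + 3)); // (2 + 3) -> 5, then "1" + 5 -> "15"
console.log("1" + 2 + 3);   // left to right: "12", then "123"
console.log(1 + 2 + 3);     // plain numbers: 6
```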

But since the answer is a black box, who really knows. This blind trust that OpenAI + MS expect makes it unusable for anything that needs to be correct and verifiable. Indeed, it's incomprehensible that they think this is a good idea. I'll have to try finding something better on LM Studio the next time I have a math problem; thanks for that tip.