this post was submitted on 20 Feb 2026
216 points (99.5% liked)

Not The Onion

top 8 comments
[–] jballs@sh.itjust.works 13 points 12 hours ago

To flag grants for their DEI involvement, Fox entered the following command into ChatGPT: “Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation. Do not use ‘this initiative’ or ‘this description’ in your response.” He then inserted short descriptions of each grant. Fox did nothing to understand ChatGPT’s interpretation of “DEI” as used in the command or to ensure that ChatGPT’s interpretation of “DEI” matched his own.
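In API terms, the procedure described there amounts to a loop like this (a hypothetical sketch using the OpenAI Python SDK; the model name and sample grant descriptions are made up, and only the prompt text comes from the quote above):

```python
from openai import OpenAI

client = OpenAI()

# Prompt text as quoted in the filing above.
PROMPT = (
    "Does the following relate at all to DEI? Respond factually in less "
    "than 120 characters. Begin with 'Yes.' or 'No.' followed by a brief "
    "explanation. Do not use 'this initiative' or 'this description' in "
    "your response."
)

# Made-up grant descriptions, purely for illustration.
grants = [
    "Outreach program to recruit rural students into STEM fields.",
    "Equipment grant for a university chemistry lab.",
]

for description in grants:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: the filing doesn't say which model was used
        messages=[{"role": "user", "content": f"{PROMPT}\n\n{description}"}],
    )
    print(response.choices[0].message.content)
```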

Jesus Christ these people are so fucking dumb

[–] CluckN@lemmy.world 6 points 16 hours ago

They didn’t ask Grok?

[–] deadbeef79000@lemmy.nz 29 points 1 day ago* (last edited 1 day ago) (2 children)

To be fair, a trained LLM was probably better at identifying DEI than whatever musky chump they had driving it.

The whole premise is evil, but this was possibly more efficient.

[–] foggy@lemmy.world 12 points 19 hours ago (1 children)

100% not true if they were using a single session to check multiple grants.

Every prompt you send includes the full text of your conversation so far with the chatbot. When this exceeds the chatbot's context window, its answers become less and less relevant.

You'll notice this if you've ever had a chatbot guide you through something for an hour or more. It eventually gets something wrong, takes you down a rabbit hole, and goes in a big circle. At that point, it can be very difficult to get the chatbot to simply respond to your prompt; e.g., if you say "you know what, let's talk about _______ instead," it will keep talking about whatever you were talking about, staying in your dumb rabbit hole loop.

So if they did this with multiple grants in one session, eventually it would basically realize they're looking for "yes, that's DEI" and just respond with different versions of that ad nauseam.
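For comparison, here's roughly what "one long session" versus "a fresh session per grant" looks like against a chat API (a minimal sketch assuming the OpenAI Python SDK; the model name and function names are illustrative):

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative; any chat model behaves the same way here

def check_in_one_session(grants, prompt):
    """One running conversation: the full history is resent on every
    call, so the context grows and earlier answers can bias later ones."""
    history, answers = [], []
    for grant in grants:
        history.append({"role": "user", "content": f"{prompt}\n\n{grant}"})
        reply = client.chat.completions.create(model=MODEL, messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        answers.append(answer)
    return answers

def check_in_fresh_sessions(grants, prompt):
    """A new conversation per grant: each call sees exactly one grant,
    so there is no accumulated history to drift on."""
    return [
        client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": f"{prompt}\n\n{grant}"}],
        ).choices[0].message.content
        for grant in grants
    ]
```

The first version resends the entire history on every call, which is exactly the drift described above; the second can't drift, because each grant is judged in isolation.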

[–] HobbitFoot@thelemmy.club 1 points 15 hours ago

Yeah, but if the people who are hired to review grants are checking for DEI, are they smart enough to understand what they're reading?

[–] green_red_black@slrpnk.net 13 points 1 day ago (1 children)

Unfortunately it wouldn’t be better. Rather, it would be a coin flip: sometimes it would use the genuine definition, other times the BS definition.

[–] Quacksalber@sh.itjust.works 13 points 23 hours ago (1 children)

And 100% of the time it will agree with the user. So if they ever asked "Are you sure this isn't DEI?", it would agree with them.

[–] shneancy@lemmy.world 9 points 21 hours ago

"Good observation! The concept of breathing is associated with DEI by some circles of LGBTQ people. As they say — queer people need air 🌪"

or something like that idk i don't speak AI