this post was submitted on 02 May 2025
63 points (100.0% liked)

Technology

38931 readers
317 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

Subcommunities on Beehaw:


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 3 years ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] nesc@lemmy.cafe 17 points 1 month ago (13 children)

Everyone should treat 'ai' like a program that it is. Your guilt compex is irrelevant here.

[–] GammaGames@beehaw.org 12 points 1 month ago (9 children)

The program is statistically an average white guy that knows about a lot of things but doesn’t understand any of it soooooo I’m not even sure what point you thought you had

[–] nesc@lemmy.cafe 4 points 1 month ago (6 children)

Chat bot will impersonate whoever you'll tell them to impersonate (as stated in the article), my point is pretty simple, people don't need a guide when using a chat bot that tells them how they should treat and interact with it.

I get it, that was just perfunctory self depreciation with intended audience being other first worlders.

[–] moomoomoo309@programming.dev 4 points 1 month ago (1 children)

Sure, who will it impersonate if you don't? That's where the bias comes in.

And yes, they do need a guide, because the way chatbots behave is not intuitive or clear, there's lots of weird emergent behavior in them even experts don't fully understand (see OpenAI's 4o sycophancy articles today). Chatbots' behavior looks obvious, and in many cases it is...until it isn't. There's lots of edge cases.

[–] nesc@lemmy.cafe 2 points 1 month ago (1 children)

They will impersonate 'helpful assistant made by companyname (following hundreds of lines of invisible rules and what to say and when)'. Experts that don't have an incentive to understand and at least partially in the cult who would have guessed!

[–] moomoomoo309@programming.dev 1 points 1 month ago (1 children)

And you think there's not bias in those rules that's notable, and that the edge cases I mentioned won't be an issue, or what?

You seem to have sidestepped what I've said to rant about how OpenAI sucks when that was just meant to be an example of how even those best informed about AI in the world right now don't really understand it.

[–] nesc@lemmy.cafe 2 points 1 month ago

That's not 'bias', that's intended behaviour, iirc meta published some research on it. Returning to my initial point, viewing chat bots as 'white male who lacks self-awareness' is dumb as fuck.

As for not understanding, they are paid to not understand.

load more comments (4 replies)
load more comments (6 replies)
load more comments (9 replies)