HedyL

joined 2 years ago
[–] HedyL@awful.systems 4 points 1 month ago (2 children)

To me, in terms of the chatbot's role, this seems possibly even more damning than the suicides. Apparently, the chatbot didn't just reinforce this man's delusions that his mother and his ex-girlfriend were after him, but even invented additional delusions on its own, further "incriminating" various people, including his mother, whom he eventually killed. On top of that, the chatbot reportedly assigned the man a "Delusional Risk Score" of "Near zero".

On the other hand, I'm sure people are going to come up with excuses even for this by blaming the user, his mental illness, his mother or even society at large.

[–] HedyL@awful.systems 6 points 1 month ago

because made-up stats/sources will get their entire grift thrown out if they’re discovered

I believe it is not just that. Making up some of those references as a human (in a way that sounds credible) would require quite a lot of effort and creativity. I think this is a case where the AI actually performs “excellently” at a task that is less than useless in practice.

[–] HedyL@awful.systems 10 points 1 month ago (2 children)

This is a theory I had put forward before: Made-up (but plausible-sounding) sources are probably one of the few reliable “AI detectors.” Lazy people would not normally bother to come up with something like this themselves.

[–] HedyL@awful.systems 2 points 1 month ago

The most useful thing would be if mid-level users had a system where they could just go “I want these cells to be filled with the second word of the info of the cell next to it”,

In such a case, it would also be very useful if the AI would ask for clarification first, such as: "By 'the cell next to it', you mean the cells in column No. xxx, is that correct?"

Now I wonder whether AI chatbots typically do that. In my (limited) experience, they often don't. They tend to hallucinate an answer rather than ask for clarification, and if the answer is wrong, I'm supposedly to blame because I prompted them wrong.
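Incidentally, the quoted task doesn't really need an AI at all. Here is a minimal sketch in Python (using pandas; the column name "info" is just made up for illustration), mainly to show that "the second word of the cell next to it" is a deterministic one-liner once the ambiguity about which column is meant has been cleared up:

```python
import pandas as pd

# Hypothetical example data; "info" stands in for "the cell next to it".
df = pd.DataFrame({"info": ["red apple pie", "green tea", "salt"]})

def second_word(text: str) -> str:
    # Return the second whitespace-separated word, or "" if there is none.
    words = str(text).split()
    return words[1] if len(words) > 1 else ""

df["second_word"] = df["info"].apply(second_word)
print(df)
```

No hallucinated answers, and if the result is wrong, at least you know exactly which assumption (here: which column) to fix.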

[–] HedyL@awful.systems 2 points 1 month ago

Also, AI is super cheap, supposedly, because it costs only $0.40 an hour (where did that number come from?). Unlike humans, AI doesn't need vacations and is never sick, either. Furthermore, it is never to blame for any mistakes; the user always is. So at the very least, we still need humans to shoulder all the blame, I guess.

[–] HedyL@awful.systems 3 points 1 month ago (2 children)

This week I heard that supposedly, all of those failed AI initiatives did in fact deliver the promised 40% productivity gains, but the companies (supposedly) didn't reap any returns "because they failed to make the necessary organizational changes" (which happens all the time, supposedly).

Is this the new "official" talking point?

Also, according to the university professor (!) who gave the talk, blockchain and web3 are soon going to solve the problems caused by AI-generated deepfakes. They were dead serious, apparently. And someone paid them to give that talk.

[–] HedyL@awful.systems 12 points 1 month ago (5 children)

What happened to good old dice?

[–] HedyL@awful.systems 5 points 1 month ago

I'm not even sure I understand the point of this supposed "feature". Isn't their business model mainly targeted at people who want to sell merch to their fanbase or their followers? In this case, I would imagine that most creators would want strong control over the final product in order to protect their "brand". This seems very different from stock photography / stock art, where creators knowingly relinquish (most) control over how their work is being used.

[–] HedyL@awful.systems 8 points 2 months ago

It's a bit tangential, but using ChatGPT to write a press release and then being unable to answer any critical questions about it is a little bit like using an app to climb a mountain wearing shorts and flip-flops without checking the weather first and then being unable to climb back down once the inevitable thunderstorm has started.

[–] HedyL@awful.systems 8 points 2 months ago (1 children)

A while ago, I uploaded a .json file to a chatbot (MS Copilot, I believe). It was a perfectly fine .json, with just one semicolon removed (by me). The chatbot was unable to identify the problem. Instead, it claimed to have found various other "errors" in the file. It would be interesting to know whether other models (such as GPT-5) would perform any better here, as to me (as a layperson) this sounds somewhat similar to the letter-counting problem.
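For comparison, an ordinary JSON parser pinpoints this kind of syntax error exactly. A minimal sketch (the broken snippet below is my own illustration with a missing comma, not the original file):

```python
import json

# My own illustration: one comma deliberately removed (between 2 and 3),
# roughly analogous to breaking an otherwise valid file by one character.
broken = '{"name": "example", "values": [1, 2 3]}'

try:
    json.loads(broken)
except json.JSONDecodeError as err:
    # The parser reports the exact position of the problem, no LLM needed.
    print(f"Invalid JSON: {err.msg} at line {err.lineno}, column {err.colno}")
```

The parser is deterministic and free; it never invents additional "errors" elsewhere in the file.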

[–] HedyL@awful.systems 6 points 2 months ago

Turns out I had overlooked the fact that he was specifically seeking to replace chloride rather than sodium, for whatever reason (I'm not a medical professional). If Google search (not Google AI) is to be believed, this doesn't sound like a very common idea, though. If people turn to chatbots for questions like these (for which very few actual resources may be available), the danger could be even higher, I guess, especially if the chatbots have been trained to avoid disappointing responses.

[–] HedyL@awful.systems 6 points 2 months ago* (last edited 2 months ago) (3 children)

At first glance, this also looks like a case where a chatbot confirmed a person's biases. Apparently, this patient believed that eliminating table salt from his diet would make him healthier (which, to my understanding, generally isn't true; consuming too little or no salt can be even more dangerous than consuming too much). He was then looking for a "perfect" replacement, which, to my knowledge, doesn't exist. ChatGPT suggested sodium bromide, possibly while mentioning that this would only be suitable for purposes such as cleaning (not as food). I guess the patient is at least partly to blame here. Nevertheless, ChatGPT seems to have supported his nonsensical idea more strongly than an internet search would have, which in my view is one of the more dangerous flaws of current-day chatbots.

Edit: To clarify, I absolutely hate chatbots, especially the idea that they could somehow replace search engines. Yet, regarding the example above, some AI bros would probably argue that the chatbot wasn't entirely in the wrong, provided it hadn't explicitly suggested adding sodium bromide to food. Nevertheless, I would still assume that the chatbot's sycophantic communication style significantly exacerbated the problem at hand.
