this post was submitted on 04 Apr 2026
196 points (96.2% liked)

Ask Lemmy


Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and have been able to find a good psychiatrist. I've been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.

The office has rolled out an app called MYIO. My knee-jerk reaction was to not be happy about it, but I managed my emotions, took a breath, and vowed to give it a chance. After I was sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, with a lot of Google stuff and data sharing disabled, so I'm thinking that might be the cause. My phone is also 4-5 years old, so that could be it too.)

Luckily I was able to complete the steps on a PC and activate that way. Once I was in the account there were standard forms to sign, like the HIPAA release. There was also a form requesting that I consent to the use of AI. Hell to the NO. That's a no for me dawg.jpg.

I'm really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.

If my doctor refuses to keep me as a patient unless I consent to AI, what should I do? What would you do? Refuse, even though it means losing a provider I have a rapport with, who knows me well enough to know when my meds need adjusting, or consent to keep them, even though this is a major line in the sand for me?

EDIT: This is the text of the AI agreement:

As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.

This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.

Edit 2: I just wanted to say that I appreciate everyone here who commented. For the most part everyone brought up valid points and helped me see things I had not considered. I emailed my doctor and let them know I did not want to agree to the use of AI. I let them know that I was cool with transcription software being used as long as it was installed locally on their machines, but I did not want a third-party online app having access to recorded sessions for the purposes of transcription. They didn't take issue with it.

Thank you everyone!

[–] WoodScientist@lemmy.world 7 points 2 weeks ago (1 children)

Seriously. What happens if it hallucinates and decides that I said I was planning to harm myself or others? Could I end up being committed because an LLM thought I said something I didn't?

Or more realistic, how does this affect something like body language? When taking notes, a therapist does more than just write down the words you say. They also take note on any body language or behavior that might be relevant to your case. If AI is replacing all the note taking, then this leaves two possibilities. One possibility is the therapist simply won't ever have records of nonverbal communication. The second is even worse - you try to get an AI to create this record by feeding it a video of the session. Now you have even more subjectivity brought in.

[–] Scipitie@lemmy.dbzer0.com 1 points 2 weeks ago

If you are truly serious: this is not how hallucinations work. It's really best to think of an LLM as "fancy autocomplete". Hallucinations happen when the next token is too disconnected from what we as humans would consider "belonging together". But it's all math after all.

Limit the top-k value, turn down the temperature, and cap the context size, and hallucinations become a non-issue for a "transcribe and summarize" task.

Instead you get into what I'd call "stupid" territory, like what you're describing.
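To make those knobs concrete, here is a toy, pure-Python sketch of what top-k filtering and temperature scaling actually do to the next-token distribution. This is an illustration of the sampling math only, not any real inference API; the function name and logit values are made up for the example.

```python
import math
import random

def topk_temperature_sample(logits, k, temperature, seed=0):
    """Pick a token index after top-k filtering and temperature scaling."""
    rng = random.Random(seed)
    # Top-k: keep only the k highest-scoring tokens; the rest get zero probability.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Temperature: dividing logits by a value < 1 sharpens the distribution
    # toward the most likely token, which is what you want for faithful
    # transcription and summarization rather than creative writing.
    scaled = [logits[i] / temperature for i in top]
    m = max(scaled)  # subtract the max before exp() for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(top, weights=weights, k=1)[0]
```

With k=1 or a very low temperature the sampler effectively always returns the highest-logit token, which is why tightening these parameters makes the output predictable; a high temperature with a large k is what lets the model wander.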


I fully agree with your second point, and it's the reason I'd ask the doc directly. To give a personal anecdote: the transcript itself helps me focus on exactly the topics you've described: Who's confused? Where was there agreement? Where did people just not speak up?

A specific hallucination example I see every other day involves tasks: the tool "thinks" that every "we should" or "you must" is a task or an outcome, which is utter bullshit. But I know that, and using the transcript helps me focus on the important part: the humans.