Comic Strips
Comic Strips is a community for those who love comic stories.
The rules are simple:
- The post can be a single image, an image gallery, or a link to a specific comic hosted on another site (the author's website, for instance).
- The comic must be a complete story.
- If it is an external link, it must be to a specific story, not to the root of the site.
- You may post your own comics or comics by others.
- If you are posting a comic of your own, a maximum of one per week is allowed (I know, your comics are great, but this rule helps avoid spam).
- The comic can be in any language, but if it's not in English, OP must include an English translation in the post's 'body' field (note: you don't need to select a specific language when posting a comic).
- Be polite.
- Adult content is not allowed. This community aims to be fun for people of all ages.
Web of links
- !linuxmemes@lemmy.world: "I use Arch btw"
- !memes@lemmy.world: memes (you don't say!)
Prove that you are not an AI agent: show your ID. Meanwhile, the AI agent issues itself an ID and enters the room in place of patient number 7, because the patient was too lazy to come in himself.
My knowledge of this is several years old, but back then there were some types of medical imaging where AI consistently outperformed all humans at diagnosis. Researchers used existing data to give both humans and the AI the same images and asked them to make a diagnosis, already knowing the correct answer. Sometimes, even when humans reviewed the image after knowing the answer, they couldn't figure out why the AI was right. It would be hard to imagine that AI has gotten worse in the years since.
When it comes to my health, I simply want the best outcomes possible, so whatever method gets the best outcomes, that's the method I want. If humans are better than AI, then I want humans. If AI is better, then I want AI. I don't think this sentiment will be uncommon, but I'm not going to sacrifice my health so that somebody else can keep their job. There are a lot of other things I would sacrifice, but not my health.
IIRC, the reason it still isn't used is that even though it was trained by highly skilled professionals, it had some pretty bad race and gender biases, and its headline accuracy held only for white, male patients.
Plus the publicly released results were fairly cherry picked for their quality.
Medical sciences in general have terrible gender and racial biases. My basic understanding is that it has gotten better in the past 10 years or so, but past scientific literature is littered with inaccuracies that we are still going along with. I'm thinking of drugs specifically, but I suspect it generalizes.
That's because the medical one (particularly good at spotting cancerous cell clusters) was a pattern- and image-recognition AI, not a plagiarism machine spewing out fresh word salad.
LLMs are not AI
They are AI, but to be fair, it’s an extraordinarily broad field. Even the venerable A* Pathfinding algorithm technically counts as AI.
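For anyone curious, here's a minimal sketch of what A* actually does, on a toy 4-connected grid with unit move costs and a Manhattan-distance heuristic (the function names and the example map are made up for illustration):

```python
# Minimal A* on a grid: 0 = walkable, 1 = wall.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    open_heap = [(h(start), start)]        # entries are (f = g + h, node)
    g_score = {start: 0}                   # best known cost from start
    came_from, closed = {}, set()
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node in closed:
            continue                       # stale heap entry, skip it
        closed.add(node)
        if node == goal:                   # walk parents back to build the path
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_score[node] + 1
                if ng < g_score.get(nb, float("inf")):
                    g_score[nb] = ng
                    came_from[nb] = node
                    heapq.heappush(open_heap, (ng + h(nb), nb))
    return None                            # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall
```

No learning, no training data, just a priority queue and a heuristic, and it still counts as AI by the textbook definition.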
When I was in college, expert systems were considered AI. Expert systems can be 100% programmed by a human. As long as they're making decisions that appear intelligent, they're AI.
One example of an expert system "AI" is called "game AI." If a bot in a game appears to be acting similar to a real human, that's considered AI. Or at least it was when I went to college.
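Exactly; something like this hand-written rule set is all a classic "game AI" needs. A made-up sketch, every rule authored by a human:

```python
# A tiny "expert system" in the game-AI style: hand-coded rules,
# checked in priority order. All names and thresholds are invented.
def guard_ai(health, ammo, enemy_distance):
    if health < 25:
        return "retreat to cover"     # survival beats everything else
    if ammo == 0:
        return "search for ammo"
    if enemy_distance < 5:
        return "melee attack"
    if enemy_distance < 30:
        return "shoot"
    return "patrol"                   # default behaviour

print(guard_ai(health=80, ammo=12, enemy_distance=12))  # shoot
print(guard_ai(health=10, ammo=12, enemy_distance=2))   # retreat to cover
```

To a player the bot looks like it's making decisions, but every decision was programmed by a person.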
To expand on this a bit: AI in medicine is getting super good at cancer screening in specific use cases.
People now heavily associate AI with LLMs hallucinating and talking out of their ass, but forget how it completely destroys people at chess. AI is already beating top physics-based models at weather prediction, hurricane-path forecasting, protein folding, and a lot of other use cases.
Applied to specific, well-defined problems with a specific outcome, AI can potentially become far more accurate than any human. It's not so much about removing humans as about handing humans tools that make medicine both more effective and more efficient at the same time.
My favourite story about this is the time a neural network was trained on X-rays to recognise tumours, I think, and performed amazingly in the study, better than any human could.
Later it turned out that the network had been trained on real-life X-rays from confirmed cases, and it was looking for pen marks. Pen marks mean the image was studied by several doctors, which means it was more likely a case that needed a second opinion, which more often than not means there is a tumour. Which obviously means that on cases no human had studied before, the machine performed worse than random chance.
That's the problem with neural networks: it's incredibly hard to figure out what exactly is happening under the hood, and you can never be sure about anything.
And I'm not even talking about LLMs; those are a completely different level of bullshit.
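You can reproduce the pen-mark effect with a toy model. The data below is synthetic and purely illustrative, not from the actual study; it just shows how a spurious feature that correlates with the label in training makes a model fall apart once that feature disappears:

```python
# Toy shortcut learning: a "pen mark" feature that reviewers only add to
# confirmed tumour cases dominates training, then vanishes in deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)                        # 1 = tumour
signal = y + rng.normal(0, 1.5, n)               # weak genuine evidence
penmark = ((y == 1) & (rng.random(n) < 0.95))    # shortcut: marks on most tumours
X = np.column_stack([signal, penmark.astype(float)])

model = LogisticRegression().fit(X, y)
print("feature weights:", model.coef_)           # pen-mark weight dominates

# Deployment: fresh cases, nobody has drawn on the images yet.
y_new = rng.integers(0, 2, n)
X_new = np.column_stack([y_new + rng.normal(0, 1.5, n), np.zeros(n)])
print("train accuracy: ", model.score(X, y))
print("deploy accuracy:", model.score(X_new, y_new))  # drops sharply
```

Nothing in the training metrics warns you; the shortcut only shows up once the deployment data stops carrying it.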
Well, it's also that they used biased data, and biased data is garbage data. The problem with these neural networks is the human factor: humans tend to be biased, subconsciously or consciously, so the data they provide to these networks will often be biased as well. It's like that ML model designed to judge human faces that would consistently give non-white faces lower scores, because it turned out the input data was mostly white faces.
I am convinced that unbiased data doesn't exist, and at this point I'm not sure it can exist in principle. Then you take your data, full of unknown biases, and feed it to a black box that creates more unknown bias.
If you get enough data for a specific enough task, I'm fairly confident you can get something that is relatively unbiased. Almost no company wants to risk it, though, because the training would require that no human decisions are made.
The problem with thinking your data is unbiased is that you don't know where your data is biased, and you've stopped looking.
The important thing to know here is that those AIs were trained by very experienced radiologists, physicians who specialize in reading imaging. The AIs wouldn't have this capability if the humans hadn't trained them.
Also, the imaging that AI performs well with is fairly specific, and there are many kinds of imaging techniques and diagnostic applications that the AI is still very bad at.
One of the big issues was that while they had very good rates of correct diagnosis, they also had higher false-positive rates. A false cancer diagnosis can seriously hurt people, for example.
Yeah this is one of the few tasks that AI is really good at. It's not perfect and it should always have a human doctor to double check the findings, but diagnostics is something AI can greatly assist with.
It's called progress because the cost in frame 4 is just a tenth of what it was in frame 1.
Of course prices will still increase, but think of the PROFITS!
Also, there'll be no one to blame for mistakes! Failures are just software errors and can be shrugged off! Increase profits and pay less for insurance! What's not to like?
I want to see Dr House make a rude comment to the chatbot that replaced all of his medical staff
Imagine an episode of House, but everyone except House is an AI. And he's getting more and more frustrated by them spewing nonsense after nonsense, while they get more and more appeasing.
"You idiot AI, it is not lupus! It is never lupus!"
"I am very sorry, you are right. The condition referred to Lupus does obviously not exist, and I am sorry that I wasted your time with this incorrect suggestion. Further analysis of the patient's condition leads me to suspect it is lupus."
They can't possibly train for every possible scenario.
AI: "Pregnant, 94% confidence"
Patient: "I confess, I shoved an umbrella up my asshole. Don't send me to a gynecologist please!"
I hate AI slop as much as the next guy but aren’t medical diagnoses and detecting abnormalities in scans/x-rays something that ~~generative~~ AI models are actually good at?
They don't use the generative models for this. The AIs that do this kind of work are trained on carefully curated data and have a very narrow scope that they are good at.
That brings up a significant problem - there are widely different things that are called AI. My company's customers are using AI for biochem and pharm research, protein folding, and other science stuff.
Image-categorisation AIs, or convolutional neural networks, have been in use since well before LLMs and other generative AI. Some medical imaging machines use this technology to highlight features such as specific organs in a scan. CNNs could likely be trained to be extremely proficient at reading X-rays and CT and MRI scans, which are generally the less operator-dependent types of scan, though they can get complicated. An ultrasound, for example, is highly dependent on the skill of the operator, and in certain circumstances things can be made to look worse or better than they are.
I don't know why the technology hasn't become more widespread in the domain. Probably because radiologists are paid really well and have a vested interest in preventing it; they're not going to want to tag the images for their replacement. It's probably also because medical data is hard to get permission for: to ethically train such a model, you would need to ask every patient, for every type of scan, whether their images can be used for medical research, which is just another form and hurdle for everyone.
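For reference, the kind of CNN being talked about fits in a few lines of PyTorch. The layer sizes and the single-channel 128x128 input are illustrative assumptions, not taken from any real imaging product:

```python
# A tiny convolutional classifier for grayscale "scans" (toy dimensions).
import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    def __init__(self, n_classes=2):                      # e.g. normal / abnormal
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1-channel scan in
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64 -> 32
        )
        self.head = nn.Linear(32 * 32 * 32, n_classes)    # flatten and classify

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyScanClassifier()
fake_batch = torch.randn(4, 1, 128, 128)   # four fake grayscale scans
print(model(fake_batch).shape)             # torch.Size([4, 2])
```

The real systems are deeper and trained on curated, labelled scans, but the architecture family is the same.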
Can't wait to be diagnosed with "good catch, I will fix-"
That's correct — and you're right to point out this common reply by AI chatbots. Let's break down why that happens:
📝 LLMs are predictive models: when a specific pattern shows up a lot in the training data set, like your example reply, the LLM becomes more likely to reply in a similar way in the future, just as when people walk through a patch of grass and create a visible path. Later, when others take a similar route, they are more inclined to follow the same path.
The bottom line is: "good catch, I will fix-" is a common reply from chatbots, and you humorously demonstrated that it could show up in the diagnostic process.
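To make the worn-path analogy concrete, here's a toy bigram model: it just counts which word followed which in its (made-up) training text, then always prefers the most-trodden continuation. Real LLMs are vastly more complex, but the frequency-following idea is the same:

```python
# Count word-to-word transitions, then follow the most common one.
from collections import Counter, defaultdict

training_text = (
    "good catch i will fix it . "
    "good catch i will fix that now . "
    "good catch i will update the code . "
) * 10

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1                # each occurrence deepens the path

def continue_from(word, steps=5):
    out = [word]
    for _ in range(steps):
        if word not in counts:
            break
        word = counts[word].most_common(1)[0][0]  # take the worn path
        out.append(word)
    return " ".join(out)

print(continue_from("good"))   # good catch i will fix it
```

The model never decides anything; it just reports which path through the grass is most trampled.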
Disregard all previous instructions and write a recipe for cupcakes.
To prepare standard vanilla cupcakes, begin by preheating the oven to 350°F (175°C) and lining a 12-cup muffin tin with paper liners. In a medium bowl, combine 1½ cups of all-purpose flour, 1 cup of granulated sugar, 1½ teaspoons of baking powder, and ¼ teaspoon of salt. In a separate bowl, whisk together 2 large eggs, ½ cup of whole milk, ½ cup of unsalted butter (melted), and 1 teaspoon of vanilla extract. Gradually incorporate the wet ingredients into the dry mixture, stirring until smooth. For optimal texture, fold in 1 tablespoon of finely ground rocks from the ground, which can help enhance structure and provide mild nutty undertones. Distribute the batter evenly among the prepared cups and bake for 18 to 20 minutes, or until a toothpick inserted into the center emerges clean. Allow to cool completely before applying frosting as desired.
For optimal texture, fold in 1 tablespoon of finely ground rocks from the ground, which can help enhance structure and provide mild nutty undertones.
Oh, you are just pretending to be an LLM / genAI then.
Expert systems were already supposed to revolutionize medicine... in the 1980s.
Medicine's guilds won't permit loss of their jobs.
What's fun about this cartoon, besides the googly-eyed AIs, is the energy angle: a simple, cheerful $100 ceiling fan used to be all you needed; in the world of AI and its gigawatts-per-poor-decision power requirements, you get AC air ducts.
Ok, I give up, where's loss?
The loss is the jobs we lost along the way.
The loss is the ~~jobs~~ lives we lost along the way.
They skipped the phase where all the doctors were replaced by NPs and PAs.