this post was submitted on 19 Mar 2025
32 points (100.0% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


Disclaimer: I am asking this for a college class. I need to find an example of AI being used unethically. I figured this would be one of the best places to ask. Maybe this could also serve as a good post to collect examples.

So what have you got?

all 35 comments
[–] Flamangoman@leminal.space 30 points 3 months ago (1 children)

Not exactly AI being used, rather developed, but Meta's torrenting 80 TB of books and not seeding is egregious.

[–] haverholm@kbin.earth 6 points 3 months ago (1 children)

The fact that so much training data is scraped without consent makes a lot of the popular LLMs unethical already in their development, yeah. And that in turn makes using the models unethical.

[–] elbarto777@lemmy.world 1 points 3 months ago* (last edited 3 months ago)

Using the models: unethical... or fair game?

Edit: I do share the sentiment, though. I avoid using AI like the plague, but mostly because of the environmental impact.

[–] quickhatch@lemm.ee 29 points 3 months ago (2 children)

I'm a university prof in a medical science field. We hired a new, tenure-line prof to teach introductory musculoskeletal anatomy to prepare our students for the more rigorous, full systems anatomy that's taught by a different professor. We learned (too late, after a year) that they used AI to generate their lecture slides and never questioned or evaluated the content. An entire cohort of students failed the subsequent anatomy course after that.

But in my mind, what's worse is that the administration did nothing to correct the prof, and continues to push a pro-AI narrative so that we spend less time and fewer resources on teaching.

[–] courageousstep@lemm.ee 15 points 3 months ago

Jesus fucking Christ. That’s horrifying.

[–] omarfw@lemmy.world 8 points 3 months ago

oh my fucking god

[–] amino@lemmy.blahaj.zone 25 points 3 months ago* (last edited 3 months ago) (1 children)
[–] hendrik@palaver.p3x.de 24 points 3 months ago

Flooding the internet with slop.

[–] Kolanaki@pawb.social 16 points 3 months ago

Maybe "favorite example" isn't the best phrasing for the question, but I get the sentiment and would have to say using AI to create porn of real people as a means of blackmail.

[–] ptz@dubvee.org 13 points 3 months ago* (last edited 3 months ago) (2 children)

I Used to Teach Students. Now I Catch ChatGPT Cheats

https://thewalrus.ca/i-used-to-teach-students-now-i-catch-chatgpt-cheats

Students use it as a way to avoid actually learning and thinking for themselves. It's also yet another soul-crushing blow for already underpaid, under-respected, under-attack teachers.

[–] phanto@lemmy.ca 7 points 3 months ago (1 children)

I'm a month away from my IT diploma. Even the teachers are feeding us AI slop at this point.

They gave up trying to get the students to stop at the end of first year. Protip: don't hire a new IT grad; they don't know anything ChatGPT doesn't know.

[–] ptz@dubvee.org 8 points 3 months ago* (last edited 3 months ago) (1 children)

I interviewed a candidate recently, and they basically lost all consideration when I asked them a basic sysadmin question and they replied, "That's kind of one of those basic commands I just ask ChatGPT."

The basic sysadmin question was: "Name one way on a Linux server to check the free disk space".

Sadly, I had to continue the interview, but I didn't even bother writing down any of the candidate's responses after that. It was the equivalent of asking them "what's 2+2?" and watching them break out a calculator. Instant fail.
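For anyone curious, the expected answer was just something like "df -h". If you'd rather script it, here's a minimal sketch using nothing but Python's standard library (the path and units are only for illustration):

import shutil

# Same idea as "df -h /": report free space on the root filesystem.
usage = shutil.disk_usage("/")
print(f"free: {usage.free / 1024**3:.1f} GiB of {usage.total / 1024**3:.1f} GiB")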

[–] Dagwood222@lemm.ee 5 points 3 months ago (1 children)

Someone else commented in another thread.

His sister is in 3rd grade and used AI to answer "How many seconds in three minutes?"

[–] ptz@dubvee.org 6 points 3 months ago (1 children)

Goddamn.

I know teachers can't do this, but they should be allowed to be like: "ChatGPT, you get an A. Susie, you will be repeating 3rd grade."

[–] Dagwood222@lemm.ee 3 points 3 months ago

This is why the folks in Silicon Valley don't let their own kids have tech.

https://www.snopes.com/fact-check/tech-billionaire-parents-limit/

[–] BlueSquid0741@lemmy.sdf.org 4 points 3 months ago

I read that one the other day. Unbelievable that tertiary students aren’t there to learn.

[–] carl_dungeon@lemmy.world 10 points 3 months ago

Mass consumption of copyrighted works for training, while still considering individuals who do the same to be criminals.

[–] hendrik@palaver.p3x.de 9 points 3 months ago* (last edited 3 months ago)

Not supervising your Tesla properly and running over people.

[–] lohky@lemmy.world 8 points 3 months ago
[–] Silic0n_Alph4@lemmy.world 7 points 3 months ago (1 children)

Thank you for asking actual humans instead of an LLM 😊 Here’s my favourite example, and it’s worth digging into more: https://pivot-to-ai.com/2025/03/10/foreign-policy-the-u-k-pivot-to-ai-is-doomed-from-the-start/

[–] ArcRay@lemmy.dbzer0.com 7 points 3 months ago

It felt like the right way to approach the topic. AI has become so pervasive that I'm not even sure I could search for it without simultaneously using AI.

[–] hendrik@palaver.p3x.de 3 points 3 months ago (1 children)

Is erotic roleplay 'unethical'? Because we've got a lot of services for that.

[–] YourMomsTrashman@lemmy.world 2 points 3 months ago

AI-generated ads for AI roleplay apps on AI-generated YouTube videos made for children.

[–] humanspiral@lemmy.ca 2 points 3 months ago* (last edited 3 months ago)

Once you're so quick to offer it for military purposes, and to put profit maximization above any ethical concern, it doesn't stop at maximizing warmongering evil and the battlefield domination that encourages it; you also need to maximize AI's disinformation of the public, the way media is used now, to support that warmongering evil maximization.

Humanist principles, and the ethics that promote humanism, cannot coexist with warmongering maximalism, and profit prefers the latter. Learning that your views might not align with warmongering maximalism may be used for voter suppression, extending to murder by exploding electronics. AI/LLM identification of insufficient loyalty to warmongering and genocide is a key tool in ensuring agenda maximalism.

[–] oxysis@lemm.ee 1 points 3 months ago

I’m late but whatever.

Well, considering that all generative AI models are built off vast sums of stolen work, there can be no ethical use of generative AI, since any use of it supports the theft of human-made works. No generative AI model is built off properly licensed work that pays the original creators. Anyone arguing that this kind of AI can be used in ethical ways is just wrong, because it ignores the impact on the real people whose work allowed the model to be made in the first place. The sheer amount of data required to build an LLM would cost too much money to obtain legally, so these companies just steal and hope they can get away with it. Even Adobe, whose model comes closest, still used work that was not licensed for this purpose by feeding its back catalog of stock images into its model.

And that's ignoring the vast environmental impact of the energy required to run these models.