this post was submitted on 24 Nov 2025
199 points (95.0% liked)

Ask Lemmy

35691 readers
998 users here now

A Fediverse community for open-ended, thought provoking questions


Rules: (interactive)


1) Be nice and; have funDoxxing, trolling, sealioning, racism, and toxicity are not welcomed in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them


2) All posts must end with a '?'This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with ?


3) No spamPlease do not flood the community with nonsense. Actual suspected spammers will be banned on site. No astroturfing.


4) NSFW is okay, within reasonJust remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed, please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com. NSFW comments should be restricted to posts tagged [NSFW].


5) This is not a support community.
It is not a place for 'how do I?', type questions. If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.


6) No US Politics.
Please don't post about current US Politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online


Reminder: The terms of service apply here too.

Partnered Communities:

Tech Support

No Stupid Questions

You Should Know

Reddit

Jokes

Ask Ouija


Logo design credit goes to: tubbadu


founded 2 years ago
MODERATORS
 

I want to explain why I strictly avoid using AI in everything I do without sounding like an 'AI vegan', especially to those who are genuinely ready to listen and follow suit.

Every source I find to cite in support of my viewpoint is either so bland it could pass for AI-generated itself or filled with the author's extremist views. I want to explain the situation objectively, in a way that is simple to understand yet alarming enough to prompt action.

36 comments
[–] supersquirrel@sopuli.xyz 3 points 3 days ago* (last edited 3 days ago)

Fundamentally what is evil about AI is that it is part of a growing global movement towards increasingly not seeing value in human beings but rather in abstracted forms of capital and power.

Irrespective of how well AI works or how quickly it evolves, what makes it awful is how, in almost every manifestation, it is a rejection of the potential of humanity. Cool things can be done with AI/pattern-matching technology, but the thinking that gave birth to and arose around these tools is incredibly dangerous. The social contract has been broken by an extremist embrace of the value of computers, and the corporations that own them, over the value of human lives. Not only is this disgusting from an ethical standpoint, it is also senseless: no matter how powerful AI gets, if we are interested in different forms of intelligence we MUST be humanists, since by far the most abundant diversity of intelligence on earth is human/organic, and this will continue to be the case long into the future.

What defenders of AI, and people with a neutral opinion towards it, miss is that you cannot separate the ideology from the technology with "AI". AI, in its meteoric economic acceleration (in terms of investment, not profit), is a manifestation of the ruling class's desire to fully extract the working class from their profit mechanisms. There is no neutrality to the technology of AI: almost the entire story of how, why, and what AI is has been determined by the desires of ideologies that are hostile to valuing human life at a basic level, and that should alarm everyone.

[–] I_Has_A_Hat@lemmy.world 3 points 3 days ago (1 children)

This reminds me of those posts from anti-vaxers who complain about not being able to find good studies or sources that support their opinion.

[–] Spacehooks@reddthat.com 1 points 3 days ago

I normally ask them if they have a moment to talk about the rebirth and perseverance of Nurgle, for they already embrace his blessings upon the land.

[–] solomonschuler@lemmy.zip 0 points 2 days ago

I just explained to a friend of mine why I don't use AI. My hatred of AI stems from people making it seem sentient, the companies' business models, and, of course, privacy.

First off, to clear up any misconception: AI is not a sentient being, it cannot think critically, and it is incapable of forming thoughts beyond the data it was trained on. Technically speaking, an LLM acts like a lossy compression model: it takes what is effectively petabytes of information and compresses it down to a mere 40 GB of weights. When it "decompresses", it doesn't reconstruct the entire petabytes of information; it reconstructs a response resembling what it was trained on.
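The scale of that "petabytes down to 40 GB" claim can be made concrete with back-of-envelope arithmetic. The 2 PB training-corpus figure below is an assumption for illustration; only the 40 GB number comes from the comment:

```python
# Rough compression-ratio arithmetic for the claim above.
# train_bytes is a hypothetical corpus size; model_bytes is the
# commenter's ~40 GB figure for the model weights.
train_bytes = 2 * 10**15   # assume ~2 PB of training text
model_bytes = 40 * 10**9   # ~40 GB of weights
ratio = train_bytes / model_bytes
print(f"{ratio:,.0f}x")    # ~50,000x: far too lossy to store the data verbatim
```

At a ratio like that, verbatim recall of the corpus is mathematically impossible, which is the commenter's point: the model stores statistical regularities, not the data itself.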

There are several issues I can think of that make an LLM do poorly at its job. Remember that LLMs are trained almost exclusively on the internet, and as large as the internet is, it doesn't contain everything: your skiplist implementation is probably not identical to any it saw in training. If you have a logic error in your skiplist and ask ChatGPT "what's the issue with my codebase?", it will notice that the code you provided isn't what it was trained on and will actively try to rewrite it, digging you into a deeper rabbit hole than where you began.

On the other hand, if you ask ChatGPT to derive a truth table from an arbitrary sum of minterms, it will never be correct unless the function is heavily documented (e.g. the truth table of an adder/subtractor). This is the simplest example I can give of how these LLMs cannot think critically, cannot recognize patterns, and only regurgitate the information they were trained on. They will try to produce a solution, but they will always fail.
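For context, the minterm task the comment describes is mechanical: row *i* of the truth table outputs 1 exactly when *i* is one of the listed minterm indices. A minimal sketch (the Σm(1, 2, 4, 7) example, three-input XOR, is my own choice for illustration):

```python
from itertools import product

def truth_table(n_vars, minterms):
    """Truth table of a Boolean function given as a sum of minterms:
    the output is 1 exactly on the listed minterm indices."""
    wanted = set(minterms)
    return [(bits, 1 if idx in wanted else 0)
            for idx, bits in enumerate(product([0, 1], repeat=n_vars))]

# f(A, B, C) = Sum of minterms m(1, 2, 4, 7) -- three-input XOR
table = truth_table(3, [1, 2, 4, 7])
for bits, out in table:
    print(*bits, "|", out)
```

The point stands that this is pure enumeration, with no pattern recognition required, which is exactly why it makes a clean test of whether a model can compute rather than recall.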

This leads me to my first point on why I refuse to use LLMs: they unintentionally fabricate a lot of information and treat it as if it's true, when I started

[–] ch00f@lemmy.world 1 points 3 days ago (1 children)

Check out wheresyoured.at for some "haters guides."

My general take is that virtually none of the common "useful" forms of AI are even remotely sustainable strictly from a financial standpoint, so there's no use getting too excited about them.

[–] BlameThePeacock@lemmy.ca 2 points 3 days ago (1 children)

The financial argument is pretty difficult to make.

You're right in one sense: there is a bubble here, and some investors/companies are going to lose a lot of money when they get beaten by competitors.

However, you're also wrong in the sense that the marginal cost to run them is actually quite low, even with the hardware and electricity costs. The benefit doesn't have to be that high to generate a positive ROI with such low marginal costs.

People are clearly using these tools more and more, even for commercial purposes where you're paying per token rather than some subsidized subscription. Just check out the graphs on OpenRouter: https://openrouter.ai/rankings

[–] ch00f@lemmy.world 1 points 3 days ago (1 children)

None of the hyperscalers have produced enough revenue to even cover operating costs. Many have reported deceptive “annualized” figures or just stopped reporting at all.

Couple that with the hardware having a limited lifespan of around 5 years, and you’ve got an entire industry being subsidized by hype.

[–] BlameThePeacock@lemmy.ca 1 points 3 days ago

Covering operating costs doesn't make sense as the threshold for this discussion though.

Operating costs would include things like computing costs for training new models and staffing costs for researchers, both of which would completely disappear in a marginal cost calculation for an existing model.

If we use Deepseek R1 as an example of a large high-end model, you can run an 8-bit quantized version of the 600B+ parameter model on Vast.Ai for about $18 per hour, or on AWS for around $50/hour. Those produce tokens fast enough that you can serve quite a few users at the same time, or even run automated processes concurrently with users. Most medium-sized businesses could likely generate more than $50 in benefit per running hour, especially since you can just shut it down at night and not pay for that time.

You can just look at it from a much smaller perspective too. A small business could buy access to consumer GPU based systems and use them profitably with 30B or 120B parameter open source models for dollars per hour. I know this is possible, because I'm actively doing it.
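The marginal-cost argument above reduces to simple arithmetic. Only the $50/hour rental rate comes from the comment; the hours-per-day and benefit-per-hour figures below are assumptions for illustration:

```python
# Back-of-envelope ROI check for the rates quoted above.
hourly_cost = 50.0       # worst case: the quoted AWS rate, $/hour
hours_per_day = 10       # run during business hours only, as suggested
benefit_per_hour = 75.0  # ASSUMED value generated per running hour
daily_profit = hours_per_day * (benefit_per_hour - hourly_cost)
print(daily_profit)      # positive whenever benefit/hr exceeds cost/hr
```

Under these assumptions the marginal ROI is positive, which is the commenter's point: whether the model was worth *training* is a separate question from whether it is worth *running*.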

[–] NoSpotOfGround@lemmy.world -2 points 3 days ago (7 children)

What are some good reasons why AI is bad?

There are legitimate reasons people worry about AI. Here are some of the strongest, clearly framed concerns:

1. Bias and unfair decisions

AI systems often learn from biased data and can unintentionally discriminate—against certain races, genders, ages, or socioeconomic groups—in hiring, lending, housing, policing, and more.

2. Lack of transparency

Many AI models act as “black boxes,” making decisions that are hard to explain. This creates problems when the stakes are high (medical diagnosis, legal decisions, etc.).

3. Privacy risks

AI can analyze huge amounts of personal data, track behavior, or identify people through facial recognition—often without explicit consent.

4. Job displacement

Automation threatens certain categories of work, particularly routine or repetitive jobs. Without proper planning, this can increase inequality and unemployment.

5. Misinformation and deepfakes

AI makes it easier to create convincing fake audio, video, or text. This can undermine trust in media, fuel propaganda, and destabilize democratic processes.

6. Weaponization

AI can be used in autonomous weapons, cyberattacks, targeted surveillance, or manipulation—raising serious security and ethical issues.

7. Overreliance and loss of human skills

As AI does more tasks, people may become too dependent, reducing critical thinking, creativity, or expertise in certain fields.

8. Concentration of power

Powerful AI tools tend to be controlled by a few big companies or governments, potentially leading to monopolies, inequality, and reduced individual autonomy.

9. Alignment and control risks

Advanced AI systems may behave in unexpected or harmful ways if their goals aren’t perfectly aligned with human values—even without malicious intent.

10. Environmental impact

Training large AI models consumes significant energy and resources, contributing to carbon emissions.


If you want, I can also provide reasons why AI is good, help you construct an argument for a debate, or analyze specific risks more deeply.

Were you looking for this kind of reply? If you can't express why you hold an opinion, maybe that opinion is not well founded in the first place. (Not saying it's wrong, just that it might not be justified/objective.)

[–] AmidFuror@fedia.io 1 points 3 days ago (1 children)

You beat me to it. To make it less obvious, I ask the AI to be concise, and I manually replace the emdashes with hyphens.

[–] FaceDeer@fedia.io 0 points 3 days ago

I haven't tested it, but I saw an article a little while back that you can add "don't use emdashes" to ChatGPT's custom instructions and it'll leave them out from the beginning.

It's kind of ridiculous that a perfectly ordinary punctuation mark has been given such stigma, but whatever, it's an easy fix.
