this post was submitted on 20 Oct 2025
622 points (98.6% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago

cross-posted from: https://slrpnk.net/post/12723593

Not all AI is bad, just most of it

[–] Takeshidude@lemmy.world 143 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

analytical AI is great

generative AI is cancer

[–] MyTurtleSwimsUpsideDown@fedia.io 69 points 2 weeks ago (2 children)

Even analytical AI needs to be questioned and validated before use.

  1. I wouldn’t trust an AI to ID mushrooms for consumption.

  2. I forget the details, but there was a group training a diagnostic model (this was before “AI” became the popular term), and it was giving a lot of false positives. They eventually teased out that it was flagging low quality images because most of the unhealthy examples it was trained on came from poorer countries with less robust healthcare systems; hence the higher rates of the disease and lower quality images from older technology.

[–] Ageroth@reddthat.com 33 points 2 weeks ago

I've seen a similar thing where a machine learning model started associating rulers with cancer, because the images it was fed with known cancer almost always also included a ruler to provide scale for measuring the tumor.

[–] shrugs@lemmy.world 14 points 2 weeks ago

It's like those GeoGuessr models that guess the country not by the plants, streets, or houses, but by the camera angle and certain imperfections that only occur in pictures taken in that country.

"When a measure becomes a target, it ceases to be a good measure"

[–] monogram@feddit.nl 19 points 2 weeks ago (1 children)
[–] ceenote@lemmy.world 33 points 2 weeks ago (1 children)

Mass surveillance is bad regardless of whether or not AI is part of it.

[–] technocrit@lemmy.dbzer0.com 5 points 2 weeks ago* (last edited 2 weeks ago)

Well there's no "AI" so we don't have to worry about it. Just a bunch of disgusting rich bros spying on us or worse.

[–] technocrit@lemmy.dbzer0.com 7 points 2 weeks ago* (last edited 2 weeks ago)

This kind of wacky absolutism is part of the problem. Esp since "AI" doesn't even exist.

https://www.downtoearth.org.in/governance/lavender-wheres-daddy-and-the-ethics-of-ai-driven-war

[–] s@piefed.world 53 points 2 weeks ago (2 children)

Definitely do not use AI or AI-written guidebooks to differentiate edible mushrooms from poisonous mushrooms

[–] Zoomboingding@lemmy.world 29 points 2 weeks ago (2 children)

It clearly says "plant identification"~

[–] SuperIce@lemmy.world 8 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Could still have difficulty accurately identifying the difference between a delectable tea and a deadly poison though

Or you found Labrador tea and it's both delectable and poisonous!

[–] Aeao@lemmy.world 8 points 2 weeks ago

Honestly, don’t use any guide book or advice. If you aren’t 100 percent sure on your own, maybe just walk away.

Me personally… even if I was 99.999 percent sure, it still wouldn’t be worth the risk. I’ll just buy some mushrooms.

[–] crazycraw@crazypeople.online 34 points 2 weeks ago (1 children)

rage against the machine learning

[–] Gerudo@lemmy.zip 2 points 2 weeks ago

Good nerdcore band name

[–] punkfungus@sh.itjust.works 33 points 2 weeks ago (1 children)

People are saying you shouldn't use AI to identify edible mushrooms, which is absolutely correct, but remember that people forage fruits and greens too. Plants are deadly poisonous at a higher rate than mushrooms, so plant ID AI has the potential to be more deadly too.

And then there's the issue that these ID models are very America and/or Europe centric, and will fail miserably most of the time outside of those contexts. And if they do successfully ID a plant, they won't provide information about it being a noxious invasive in the habitat of the user.

Like essentially all AI, even when it works it's barely useful at the most surface level only. When it doesn't work, which is often, it's actively detrimental.

I actually think AI for mushroom identification is okay, but as a step in the process. Sometimes you see a mushroom and you're like "what is that?" Do a little scan to see what it is. Okay, now you have an idea of what it is, but then comes the next part: at https://mushroomexpert.com/ you can go through the list and see if you get a positive ID.

Like, if you're not 100% positive you know what you're foraging, why would you take the risk?

[–] ThePantser@sh.itjust.works 27 points 2 weeks ago (2 children)

Just dealt with an AI bot this morning when I called a law office. They try so hard to mimic humans; they even added background sounds of people talking. But it gave itself away completely when it repeated the same response to my asking to speak with a human: "I will gladly pass on your message to (insert weird pause) "Bill""

[–] A_Union_of_Kobolds@lemmy.world 10 points 2 weeks ago

There was an interesting story on NPR last week about someone experimenting with AI agent clones of himself. Even his best attempt sounded pretty obvious thanks to stuff like that.

[–] DeathByBigSad@sh.itjust.works 3 points 2 weeks ago

They even add fake keyboard typing sounds now 💀

[–] Blackfeathr@lemmy.world 22 points 2 weeks ago (2 children)

Idk, when I Google Lensed a Sunflower plant, the AI told me it was a Peruvian ground apple....

It also has a lot of trouble identifying lamb's quarters and other common wild weeds.

[–] renzhexiangjiao@piefed.blahaj.zone 18 points 2 weeks ago (2 children)

Probably because Google Lens is made to be all-purpose. If you had a model that had been specifically trained to recognize plants, it wouldn't make such obvious mistakes.

[–] nocturne@slrpnk.net 14 points 2 weeks ago (1 children)
[–] skavj@lemmy.zip 3 points 2 weeks ago (1 children)

Flora Incognita is a nice one I've tried. It's a spinout from a university. The best use is to take its guess and then click through; you can see example images of the plant it's guessing, so you can judge for yourself.

[–] nocturne@slrpnk.net 1 points 2 weeks ago

Downloaded it, will give it a try.

[–] Cevilia@lemmy.blahaj.zone 16 points 2 weeks ago (1 children)

With respect to the original-original poster, this is wrong. AI plant identification is terrible. It gives you confidence but not enough nuance to know that there are similar plants, some of which look almost but not quite identical, some of which will provide some really nice sustenance, some of which will literally kill you and it'll hurt the whole time you're dying.

It's almost as bad as those AI-written foraging guides that give you enough information to feel confident but not enough information to be able to tell toxic or even deadly plants apart from the real ones.

[–] odelik@lemmy.today 4 points 2 weeks ago

Word to the wise.

If it looks like a carrot, don't touch it, don't dig it up, and especially don't eat it. There are tons of plants in the same family that look nearly identical or extremely similar and will give you an extremely bad day, month, year, or death.

[–] lavander@lemmy.dbzer0.com 12 points 2 weeks ago (2 children)

One of the issues with LLMs is that they attracted all the attention. Classifiers are generally cool, cheap, and have saved us from multiple issues (ok, face recognition aside 🙂)

When the AI bubble bursts (because LLMs are expensive and not good enough to replace a person, even if they are good at pretending to be a person), all AI will slow down… including classifiers, NLP, etc.

All this because the AI community was obsessed with the Turing test/imitation game 🙄

Turing was a genius, but heck, I am upset with him for coming up with this BS 🤣

[–] skisnow@lemmy.ca 4 points 2 weeks ago* (last edited 2 weeks ago)

I am upset with him for coming up with this BS

It made sense in the context it was devised in. Back then we thought the way to build an AI was to build something that was capable of reasoning about the world.

The notion that there'd be this massive amount of text generated by a significant percentage of the world's population all typing their thoughts into networked computers for a few decades, coupled with the digitisation of every book written, that could be stitched together in a 1,000,000,000,000-byte model that just spat out the word with the highest chance of being next based on what everyone else in the past had written, producing the illusion of intelligence, would have been very difficult for him to predict.

Remember, Moore's Law wasn't coined for another 15 years, and personal computers didn't even exist as a sci-fi concept until later still.

[–] brucethemoose@lemmy.world 1 points 2 weeks ago

I dunno about that. We got a pile of architecture research out of it just waiting for some more tests/implementations.

And think of how cheap renting compute will be! It’s already basically subsidized, but imagine when all those A100s/H100s are dumped.

[–] SoftestSapphic@lemmy.world 12 points 2 weeks ago

"AI"'s best uses are the Machine Learning aspects we were already using before we started calling things AI

It's so painful watching this tech be forced down our throats by marketing departments despite us already discovering all of its best use cases long before the marketing teams got ahold of the tech.

[–] Aeao@lemmy.world 12 points 2 weeks ago (1 children)

It didn’t mention the star finder apps!

Save yourself the money and time. It’s Venus. That cool star you’re looking at? Yeah that’s Venus. Just trust me.

[–] Lifter@discuss.tchncs.de 5 points 2 weeks ago (1 children)

You don't need AI to show a star map. This is the one and only use for Augmented Reality though.

[–] Aeao@lemmy.world 3 points 2 weeks ago

Yeah I actually paid for the full version of mine… even though it’s always Venus

[–] Jankatarch@lemmy.world 11 points 2 weeks ago* (last edited 2 weeks ago)

Honestly, a good rule about Machine Learning is just "predicting = good, generating = bad." The rest is case by case, but usually bad.

Predict inflation in 3 years - cool
Predict chance of cancer - cool.

Generate an image or email or summary or tech article - fuck you.

Generating speech from text/image is also cool but it's kind of a special case there.

[–] CompactFlax@discuss.tchncs.de 7 points 2 weeks ago (1 children)

Friendly reminder that automatic transmissions were sometimes considered to be artificial intelligence.

“Fuck business idiots waxing poetic about the inestimable value of LLMs” isn’t a good community name though.

[–] pelespirit@sh.itjust.works 3 points 2 weeks ago

Automatic transmissions aren't trying to take your creative job.

[–] ZombiFrancis@sh.itjust.works 7 points 2 weeks ago (1 children)

If you ever are an executive and you need to explain a product idea you made up and don't want to bother with actual proof of concept, then AI has got you.

If you want custom porn, AI has got you.

These are its two competing functions. And judging by my last foray into what AI is about, the latter is winning hard.

[–] Tattorack@lemmy.world 2 points 2 weeks ago

"Winning hard". Nice choice of words.

[–] chosensilence@pawb.social 7 points 2 weeks ago

genAI is the enemy. other kinds are useful.

[–] technocrit@lemmy.dbzer0.com 5 points 2 weeks ago* (last edited 2 weeks ago)

There is no "AI".

But there is endless technology that grifters label as "AI". Ofc some of this technology will be useful. But under capitalism all technology is developed by and for the benefit of the disgustingly privileged via violent control.

[–] Soapbox@lemmy.zip 4 points 2 weeks ago

AI noise reduction and spot removal tools for photo editing get a pass too.

[–] brbposting@sh.itjust.works 4 points 2 weeks ago

Plant ID is soooo disappointing - works sometimes though.

Always gotta run the ID, web search for images of the recommendation, compare images to plant.

Semantic search can be helpful:

[Image: search mockup showing the difference between a lexical search for “Daniel Radcliffe” and a semantic search for “how rich is the actor who played Harry Potter”, which translates to “net worth Daniel Radcliffe”; sourced from seobility.net]

Guess OP image could be about e.g. Perplexity repeatedly HAMMERING (no caching?) the beautiful open web and slopping out poor syntheses.

[–] ILikeBoobies@lemmy.ca 3 points 2 weeks ago (3 children)

It’s a decent evolution of the search engine, but you have to ask it for sources, and it’s way too expensive for its use case.

[–] AeonFelis@lemmy.world 3 points 2 weeks ago

decent

You've misspelled "descent"

[–] jlow@discuss.tchncs.de 3 points 2 weeks ago

+1 for WhoBird

[–] 30p87@feddit.org 2 points 2 weeks ago (3 children)

That's known as image identification with ML though, not "AI". The difference? Capitalism.

[–] magic_lobster_party@fedia.io 8 points 2 weeks ago (1 children)

The difference is that plant identification is no longer an interesting area for AI research. It was “AI” 10 years ago, but now it’s more or less a solved problem.

[–] 30p87@feddit.org 8 points 2 weeks ago

Primarily, it's no longer financially interesting, and therefore not interesting for marketing.
