I think just about everyone who is not an executive at a tech company is highly skeptical of AI.
You'd hope, and yet I've had people on Lemmy give me shit for being overtly anti-LLM.
My problem with LLMs is that they're expert pattern matchers and little else.
Ask them for the integral from 1 to 5 of ln(x) and they're sure to screw it up.
They'll give you something that sounds like the right answer, but their explanations are nonsense.
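For what it's worth, that particular integral has a simple closed form, so anyone can check an LLM's answer by hand. A stdlib-only Python sketch (the trapezoidal helper is just an illustrative sanity check, not from the original comment):

```python
import math

# Closed form: the antiderivative of ln(x) is x*ln(x) - x, so the
# definite integral from 1 to 5 is (5*ln 5 - 5) - (1*ln 1 - 1) = 5*ln 5 - 4.
exact = 5 * math.log(5) - 4

# Sanity-check numerically with a simple trapezoidal rule.
def trapezoid(f, a, b, n=100_000):
    h = (b - a) / n
    total = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return total * h

approx = trapezoid(math.log, 1, 5)
print(round(exact, 4))                 # ≈ 4.0472
print(abs(exact - approx) < 1e-6)      # the two methods agree
```

So the right answer is about 4.047; a model that confidently reports something else is pattern-matching, not integrating.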
Exactly... I advise anyone with some kind of expertise to ask ChatGPT some questions about your specific field and see how accurate it is... Then try to ever believe it about anything else ever again.
Cold readings, like a psychic does, is how I recently heard them described.
There's a difference between healthy skepticism and invalid, knee-jerk opposition.
LLMs are a useful tool sometimes, and I use them for refining general ideas into specific things to research, and they're pretty good at that. Sure, what they output isn't trustworthy on its own, but I can pretty easily verify most of what it spits out, and it does a great job of spitting out a lot of stuff that's related to what I asked.
For example, I'm a SW dev, so I'll often ask it stuff like, "compare and contrast popular projects that do X", and it'll find a few for me and give easily-verifiable details about each one. Sometimes it's wrong on one or two details, but it gives me enough to decide which ones I want to look more deeply into. Or I'll do some greenfield research into a topic I'm not familiar with, and it does a fantastic job of pulling out keywords and other domain-specific stuff that help refine what I search for.
LLMs do a lot less than their proponents claim, but they also do a lot more than detractors claim. They're a useful tool if you understand the limitations and have a rough idea of how they work. They're a terrible tool if you buy into the BS coming from the large corps pushing them. I will absolutely push back against people on both extremes.
I mean, there is a place in between highly skeptical and anti. I think it's a faster and more convenient search as long as it gives sources, and it makes creating and editing media easier. I don't like the energy usage, and I do like the work bringing that down. It's just that trying to get it to solve things on its own seems to be what's pushed, when we can clearly see it not working when used like that. I think the biggest issue is that it's crammed in as a solution, it works in the most half-assed manner, and they want to say that's fine.
I hate that it’s being shoved into anything and everything right now, but saying you’re “overtly anti-llm” seems a bit over dramatic to me. LLMs are a tool like anything else. Used properly and in the right situation, they can be very helpful.
Remember how a few years ago 3d displays and VR were being shoved in everyone's faces? I can see the current "AI" trend going the same way.
VR is still cool and will probably always be cool, but I doubt it'll ever be mainstream. 3D was just awkward; they really just wanted VR, but the tech wasn't there yet.
I own neither, yet I've been considering VR for a few years now, just waiting for more headsets to have proper Linux support before I get one.
Likewise, I'm not paying for LLMs, but I do use the ones my workplace provides. They're useful sometimes, and it's nice to have them as an option when I hit a wall or something. I think they're interesting and useful, but not nearly as powerful as the big corporations want you to think.
They're mostly not being used for that, and they come at a huge cost.
We have a lot of non-management who are all-in and drinking the Kool-Aid. I'm still highly put off for a number of reasons, but I'm an outlier.
I was just trying to figure out how to express that exact sentiment. Thank you.
I don’t blame them for being skeptical. Anything that corporations/rich people are enthusiastic about usually ends up screwing them.
A certain amount of skepticism is healthy, but it's also quite common for people to go overboard and completely avoid a useful thing just because some rich idiot is pushing it. I've seen a lot of misinformation here on Lemmy about LLMs because people hate the environment they're in (layoffs in the name of replacing people with "AI"), but they completely ignore the merit the tech has (great at summarizing and providing decent results from vague queries). If used properly, LLMs can be quite useful, but people hyper-focus on the negatives, probably because they hate the marketing material and the exceptional cases the news is great at shining a spotlight on.
I'm also skeptical about LLMs' usefulness, but I find them useful in some narrow use cases I have at work. They're not going to actually replace any of my coworkers anytime soon, but they do help me be a bit more productive, since they're yet another option for getting unstuck when I hit a wall.
Just because there's something bad about something doesn't make the tech useless. If something gets a ton of funding, there's probably some merit to it, so turn your skepticism into a healthy quest for truth and maybe you'll figure out how to benefit from it.
For example, the hype around cryptocurrency makes it easy to knee-jerk reject the technology outright, because it looks like it's merely a tool to scam people out of their money. That is partially true, but it's also a tool to make anonymous transactions feasible. Yes, there are scammers out there pushing worthless coins in pump-and-dump schemes, but there are also privacy-focused coins (Monero, Zcash, etc.) that are being used today to help fund activists operating under repressive regimes. They're also used by people doing illegal things, but hey, so is cash, and privacy coins are basically easier-to-use cash. We probably wouldn't have had those w/o Bitcoin, though they use very different technology under the hood to achieve their aims. Maybe they're not for you, but they do help people.
Instead of focusing on the bad of a new technology, more people should focus on the good, and then weigh for themselves whether the good is worth the bad. I think in many cases it is, but only if people are sufficiently informed about how to use them to their advantage.
Proper headline:
“Intelligent People Understand the Limits and Dangers of AI; Unfortunately AI Company Leaders Do Not, and Seek to Silence Opposition”
As skeptical as I am, I'm feeling pressure to join the BS train on this. It's literally all over LinkedIn... Even though I'm sure it's mostly bullshit, it doesn't matter what I think. What matters is that this is where billionaires are dumping their money, so I need to be in a position to get some of it, or I may not be gainfully employable in 10 years.
🤣 🤣
Guess I must be one of those "marginalized"...
🤣
All Americans are, ya nitwits
Makes sense given that AI has been trained on all the prejudiced blatherings of humanity so far, and it just tries to imitate what it has seen. Yet it's being used to make decisions as if it's some wise oracle.
How do they define 'marginalized'?
In this study, we conducted a survey (n = 742) including a representative U.S. sample and an oversample of gender minorities, racial minorities, and disabled individuals to examine how demographic factors shape AI attitudes.
Thanks for the actual response. Personally I think your sample size is way too low, and the selection is skewed towards people that already feel marginalized, which will in turn skew your results.
I looked into that, and the only question I really have is how geographically distributed the samples were. Other than that, it was an oversampled study, so <50% of the people were the control, of sorts. I don't fully understand how the sampling worked, but there is a substantial chart at the bottom of the study that shows the full distribution of responses. Even with under 1000 people, it seems legit.
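On the sample-size question, a back-of-the-envelope check suggests n = 742 isn't unreasonable for a survey. Assuming simple random sampling and the worst-case proportion p = 0.5 (the study's oversampling design is more complicated, so treat this as a ballpark only):

```python
import math

# Rough 95% margin of error for a surveyed proportion:
# MoE = z * sqrt(p * (1 - p) / n), worst case at p = 0.5.
n = 742
z = 1.96  # ~95% confidence
moe = z * math.sqrt(0.5 * 0.5 / n)
print(f"{moe:.1%}")  # ≈ 3.6%
```

A margin of error around ±3.6% is in line with typical national polls, which usually run around n = 1000; whether the oversampling skews things is a separate question from raw sample size.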
They checked to see whether or not they had Lemmy accounts.
The trick is for everyone on the seesaw to move as far away as possible from AI, then it'll balance or tilt in favour of the people
I use AI daily and find it useful as a tool. It's also frustrating in its current state: the disgusting default buttlick responses, trying to please the user with fake polite drool. And then the many, many mistakes.
And it's a new tool, so yeah, it needs to ripen...
But that means going all-in on AI as a technology at the company-strategy level is dumb.
When building a product, the problem the product solves should be the center of the work, not the technology used to achieve the solution.
I've got some bad news for you. They will never fix the mistakes, as it cannot reason; it has no actual intelligence. LLMs are already plateauing and are miles away from being trustworthy. And they steal copyrighted work with every request.
There it is: reason. Machines can't reason. Not one. They can fake it. They can mimic. But they cannot reason, and they never will.