The number of times I've seen a question answered with "I asked ChatGPT and blah blah blah", where the answer turned out to be complete bullshit, makes me wonder who thinks asking the bullshit machine™ questions with a concrete answer is a good idea
This is your reminder that LLMs are associative models. They produce things that look like other things. If you ask a question, it will produce something that looks like the right answer. It might even BE the right answer, but LLMs care only about looks, not facts.
A lot of people really hate uncertainty and just want an answer. They do not care much if the answer is right or not. Being certain is more important than being correct.
Why not just read the first part of a Wikipedia article if that's all they want? It's not the be-all end-all source, but it's better than asking the machine known to make things up the same question.
Because the AI propaganda machine is not exactly advertising the limitations, and the general public sees LLMs as a beefed up search engine. You and I know that’s laughable, but they don’t. And OpenAI sure doesn’t want to educate people - that would cost them revenue.
I don't see the point either if you're just going to copy verbatim. OP could always just ask AI themselves if that's what they wanted.
I feel like it's an unpopular take, but people are like "I used ChatGPT to write this email!" and I'm like, you should be able to write an email.
I think a lot of people are excited enough to neglect core skills and let them atrophy. You should know how to communicate. It's a skill that needs practice.
This is becoming reality: most people will abandon those skills, and many more will never learn them to begin with. I'm actually very worried about children who will grow up learning to communicate with AI and being dependent on it to effectively communicate with people and navigate the world, potentially needing AI as a communication assistant/translator.
AI is patient, always available, predicts desires and effectively assumes intent. If I type a sentence with spelling mistakes, ChatGPT knows what I meant 99% of the time. This means children won't need to spell or structure sentences correctly to effectively communicate with AI, which means they won't need to think in a way other human beings can understand, as long as an AI does. The more time kids spend with AI, the less developed their communication skills will be with people. Gen Z and Gen Alpha already exhibit these issues without AI. Most people already experience this when communicating across generations, as language and cultural context change. This will amplify those differences to a problematic degree.
Kids will learn to communicate with people and with AI, but those two styles will be radically different. AI communication will be lazy, saying only enough for AI to understand. With communication history, which is inevitable tbh, and AI improving every day, it can develop a unique communication style for each child: what amounts to a personal language only the child and the AI can understand. AI may learn to understand a child better than their parents do and make the child dependent on AI to effectively communicate, creating a corporate filter on communication between human beings. The implications of this kind of dependency are terrifying. Your own kid talks to you through an AI translator; their teachers, friends, all their relationships could be impacted.
I have absolutely zero belief that the private interests of these technology owners will benefit anyone other than themselves, and at the expense of human freedom.
I know someone who very likely had ChatGPT write an apology for them once. Blew my mind.
I use it to communicate with my landlord sometimes. I can tell ChatGPT all the explicit shit exactly as I mean it and it'll shower it and comb it all nice and pretty for me. It's not an apology, but I guess my point is that some people deserve it.
You don’t think being able to communicate properly and control your language, even/especially for people you don’t like, is a skill you should probably have? It’s not that much more effort.
I can and I do, but I don't think he's worth the effort specifically. Lol
Why waste the brain power when the option exists not to?
Because brains literally need exercise, and conversations with other real humans are the best kind they can get, so you're literally speedrunning an increased risk of dementia and Alzheimer's with every fake email.
Spent this morning reading a thread where someone was following chatGPT instructions to install "Linux" and couldn't understand why it was failing.
Hmm, I find ChatGPT is pretty decent at very basic tech support when asked with the correct jargon. Like "How do I add a custom string to cell formatting in Excel?"
It absolutely sucks for anything specific, or asked with the wrong jargon.
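(For what it's worth, that Excel case really is just a custom number format, which is the kind of well-trodden question it tends to get right. A sketch of the manual route: select the cells, press Ctrl+1, pick Custom, and enter a format code along the lines of:

```
0.00" kg"
```

which displays 12.5 as "12.50 kg" while keeping the underlying cell value numeric. The "kg" is just an example of mine; any literal text wrapped in double quotes works.)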
Good for you buddy.
Edit: sorry that was harsh. I'm just dealing with "every comment is a contrarian comment" day.
Sure, GPT is good at basic search functionality for obvious things, but why choose that when there are infinitely better and more reliable sources of information?
There's a false sense of security coupled with the notion of "asking" an entity.
Why not engage with a community that can support its answers? I've found the Linux community (in general) to be really supportive, and asking questions is one way of becoming part of that community.
The forums of the older internet were great at this... creating community out of commonality. Plus, they were largely self-correcting in a way that LLMs are not.
So not only are folk being fed gibberish, it is robbing them of the potential to connect with similar humans.
And sure, it works for some cases, but they seem to be suboptimal, infrequent or very basic.
Oh, I fully agree with you. One of the main things about asking super basic things is that when it inevitably gets them wrong, at least you won't have wasted that much time. And it's inherently parasitic: basic questions mostly come out right from LLMs because thousands of people have answered those basic questions thousands of times.
Not understanding how to use new technology, even flawed technology, isn't a flex.
I understand LLMs well enough that I really don't want to use them because they are inherently incapable of judging the validity of information they are passing along.
Sometimes it's wrong. Sometimes it's right. But they don't tell you when they're wrong, and to find out if they were wrong, you now have to do the research you were trying to avoid in the first place.
I tried programming with it once, because a friend insisted it was good. It wasn't, and it was extremely confident while being exceptionally wrong.
Congrats, then don't use it to validate information.
LLMs are incredible text generators. But if you are going to judge a fish by its ability to climb a tree, then you are never going to find its potential.
Yes, there are tons of bogus AI implementations. But that doesn't say anything about the validity of the technology. Look at what VLC is doing with it for example.
It is pretty clear by those statements that you understand LLMs less than what you claim.
Yeah, that's called ignorance, and we shouldn't be celebrating it.
I don’t know how to feel about this. I need to ask ChatGPT.
Wait, people actually try to use gpt for regular everyday shit?
I do lorebuilding shit (in which gpt's "hallucinations" are a feature not a bug), or I'll just ramble at it while drunk off my ass about whatever my autistic brain is hyperfixated on. I've given up on trying to do coding projects, because gpt is even worse at it than I am.
They absolutely do. Some people basically use it instead of Google or whatever. Shopping lists, vacation planning, gift lists, cooking recipes, just about everything.
It's great at it, because it'll do the trawling through webpages that you can't be bothered to spend hours on. The internet really is so enshittified that it's easier to have a computer do this.
I hate that it is so. It's a complete waste of resources, but I understand it.
It's a waste of your resources to close popups, set cookie preferences and read five full screens about grandma's farm before getting to the point: "Preheat the oven to 200°C and heat the pizza for 15 minutes," when ChatGPT could've presented it right away without any ads.
Brought to you by Chrome being the biggest browser and willfully crippling ad blockers, which incidentally made searching way more tedious and funneled people to LLMs.
I have encountered some people who use it as a substitute for thinking. To the extent that it's rather unnerving.
Maybe not Chat GPT specifically, but you can hardly use the internet without some AI being pushed on you.
There's a difference between passively using something and actively using something.
I use electricity every day, but I have no idea how it's generated. I (assume I) use RSA256, but if you ask me to explain block cipher encryption to you I'd just go "well, you take a number and another number and....... hope they have sex to produce a bigger number?"
I use a lot of stuff without having to know how it works and having to choose to use it.
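(Funny thing is, that joke isn't far off: RSA really does multiply two numbers to make a bigger one. A toy sketch with textbook-tiny primes, just to show the shape of it; real keys use numbers thousands of bits long:)

```python
# Toy RSA with tiny textbook primes -- illustrative only, never use sizes
# like this for real. Real RSA moduli are 2048+ bits.
p, q = 61, 53            # two secret primes
n = p * q                # 3233 -- the "bigger number", part of the public key
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)  # encrypt: msg^e mod n
plain = pow(cipher, d, n)  # decrypt: cipher^d mod n
print(plain)  # 42 -- round-trips back to the original message
```

(Still doesn't mean you need to know any of that to send an email, which I think was your point.)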
Oh hey it's me! I like using my brain, I like using my own words, I can't imagine wanting to outsource that stuff to a machine.
Meanwhile, I have a friend who's skeptical about the practical uses of LLMs, but who insists that they're "good for porn." I can't help but see modern AI as a massive waste of electricity and water, furthering the destruction of the climate with every use. I don't even like it being a default on search engines, so the idea of using it just to regularly masturbate feels ... extremely selfish. I can see trying it as a novelty, but as a regular occurrence? It's an incredibly wasteful use of resources just so your dick can feel nice for a few minutes.
Using it for porn sounds funny to me given the whole concept of "rule 34" being pretty ubiquitous. If it exists, there's porn of it! Even from a completely pragmatic perspective, it sounds like generating pictures of cats. Surely there is a never-ending ocean of cat pictures you can search and refine; do you really need to bring a hallucination machine into the mix? Maybe your friend has an extremely specific fetish list that nothing else will scratch? That's all I can think of.
Used it once to ask it silly questions to see what the fuss is all about, never used it again and hopefully never will.
I've tried a few GenAI things, and didn't find them to be any different than CleverBot back in the day. A bit better at generating a response that seems normal, but asking it serious questions always generated questionably accurate responses.
If you just had a discussion with it about what your favorite super hero is, it might sound like an actual average person (including any and all errors about the subject it might spew), but if you try to use it as a knowledge base, it's going to be bad because it is not intelligent. It does not think. And it's not trained well enough to only give 100% factual answers, even if it only had 100% factual data entered into it to train on. It can mix two different subjects together and create an entirely new, bogus response.
I was finally playing around with it for some coding stuff. At first, I was playing around with building the starts of a chess engine, and it did ok for a quick and dirty implementation. It was cool that it could create a zip file with the project files that it was generating, but it couldn't populate it with some of the earlier prompts. Overall, it didn't seem that worthwhile for me (as an experienced software engineer who doesn't have issues starting projects).
I then uploaded a file from a chess engine that I had already implemented and asked for a code review, and that went better. It identified two minor bugs and was able to explain what the code did. It was also able to generate some other code to make use of this class. When I asked if there were some existing projects that I could have referenced instead of writing this myself, it pointed out a couple others and explained the ways they differed. For code review, it seemed like a useful tool.
I then asked it for help with a math problem that I had been working on related to a different project. It came up with a way to solve it using dynamic programming, and then I asked it to work through a few examples. At one point, it returned numbers that were far too large, so I asked about how many cases were excluded by the rules. In the response, it showed a realization that something was incorrect, so it gave a new version of the code that corrected the issue. For this one, it was interesting to see it correct its mistake, but it ultimately still relied on me catching it.
I use ChatGPT mainly for recipes, because I'm bad at that. And it works great, I can tell it "I have this and this and this in my fridge and that and that in my pantry, what can I make?" and it will give me a recipe that I never would have come up with. And it's always been good stuff.
And I do learn from it. People say you can't learn from using AI, but I've gotten better at cooking thanks to ChatGPT. Just a while ago I learned about deglazing.
It's so strange seeing people being proud that they can't keep up with technology.
Yeah, that's just judgemental and presumptive.
I have quite a lot of shit in my life, and I have actively decided to pay no attention to AI. Not because "I can't keep up with it" but because after some research into it I decided "it was bullshit and nonsense and not something I need to know about"
I used to know a guy like that. He would say stuff like "I didn't even know how to use a computer mouse!" It definitely sounded like he was bragging. Such a weird thing to be proud of.
Using AI is helpful, but by no means does it replace your brain. Sure, it can write emails and really help with code, but for anything beyond basic troubleshooting and short code snippets, it's an assistant, not an answer.