Lemmy Shitpost
Welcome to Lemmy Shitpost. Here you can shitpost to your heart's content.
Anything and everything goes. Memes, Jokes, Vents and Banter. Though we still have to comply with lemmy.world instance rules. So behave!
Rules:
1. Be Respectful
Refrain from using harmful language pertaining to a protected characteristic: e.g. race, gender, sexuality, disability or religion.
Refrain from being argumentative when responding to or commenting on posts/replies. Personal attacks are not welcome here.
...
2. No Illegal Content
Content that violates the law. Any post/comment found to be in breach of common law will be removed and given to the authorities if required.
That means:
-No promoting violence/threats against any individuals
-No CSA content or Revenge Porn
-No sharing private/personal information (Doxxing)
...
3. No Spam
Posting the same post, no matter the intent, is against the rules.
-If you have posted content, please refrain from re-posting said content within this community.
-Do not spam posts with intent to harass, annoy, bully, advertise, scam or harm this community.
-No posting Scams/Advertisements/Phishing Links/IP Grabbers
-No Bots, Bots will be banned from the community.
...
4. No Porn/Explicit Content
-Do not post explicit content. Lemmy.World is not the instance for NSFW content.
-Do not post Gore or Shock Content.
...
5. No Inciting Harassment, Brigading, Doxxing or Witch Hunts
-Do not Brigade other Communities
-No calls to action against other communities/users within Lemmy or outside of Lemmy.
-No Witch Hunts against users/communities.
-No content that harasses members within or outside of the community.
...
6. NSFW should be behind NSFW tags.
-Content that is NSFW should be behind NSFW tags.
-Content that might be distressing should be kept behind NSFW tags.
...
If you see content that is a breach of the rules, please flag and report the comment and a moderator will take action where they can.
Also check out:
Partnered Communities:
1. Memes
10. LinuxMemes (Linux themed memes)
Reach out to Striker.
All communities included on the sidebar are to be made in compliance with the instance rules.
What's with all the AI hate? I use it for work and it significantly decreases my workload. I'm getting stuff done in minutes instead of hours. AI slop aside.
The massive corporate AIs (LLMs for the most part) are driving up electricity and water usage, negatively impacting communities. They are creating a stock market bubble that will eventually burst. They are sucking up all the hardware, from GPUs to memory to hard drives and SSDs.
On top of all of that, they are in such a rush to expand that a lot of them are installing fossil fuel power on top of running the local grid ragged, so they pollute and drive up costs, all for a 45% average rate of incorrect results.
There are a lot of ethical problems too, but those are the direct negatives to tons of people.
If AI can do your job in minutes, you're either: a fool pumping out AI slop that someone else has to fix, and you don't realize it.
Or
Doing a job that really shouldn't exist.
LLMs can't do more than shove out a watered-down average of things they've seen before. They can't really solve problems, they can't think; all they can do is regurgitate what they've seen before. Not exactly conducive to quality.
Last time I tried to use AI at work, it decided lobster was vegan.
Every once in a while I try to use it at work. It has yet to actually provide me with anything useful.
I'm convinced the people who think it's incredible literally just don't know how to use a search engine, which is the one and only potentially useful function of LLMs other than writing asinine work-related emails.
I wouldn't even recommend using LLMs in place of search engines, since they make stuff up. If it's providing sources, you can check those, but you have to be rigorous enough to check every detail, which just isn't realistic. People are lazy.
The best way I've heard them described is "bullshit machines", and I don't say that because I think they're stupid, but because they "bullshit" as opposed to lying or telling the truth. When you're bullshitting, the truth is irrelevant, as long as it sounds good. That's exactly how LLMs work.
So if there's a problem that can be solved by bullshitting, that's where an LLM might be the right tool for the job.
Try to play tic tac toe against ChatGPT for example 🤣 (just ask for "let's play ASCII tic tac toe")
Practically loses every game against my 4yo child - if it even manages to play according to the rules.
AI: Trained on the entire internet using billions of dollars. 4yo: Just told her the rules of the game twice.
Currently the best LLMs are certainly very "knowledgeable" (as in, they "know" much more than I - or practically any person - do about most topics), but they are far from intelligent.
You should only use them if you are able to verify the correctness of the output yourself.
"See, no matter how much I'm trying to force this sewing machine to be a racecar, it just can't do it, it's a piece of shit"
Similarities aside, if you misuse LLMs they won't perform well. You have to treat them as a tool with a specific purpose. In the case of LLMs, that purpose is to take a bunch of input tokens, analyse them, and output the tokens that are statistically the most likely "best response". The intelligence is in putting that together, not in "understanding tic-tac-toe". Mind you, you can tie in other ML frameworks for specific tasks they are better suited for - e.g. you can hook up a chess engine (or a tic-tac-toe engine), and that will beat you every single time.
Or an even better example... Instead of asking the LLM to play tic-tac-toe with you, ask it to write a Bash/Python/JavaScript tic-tac-toe game, and try playing against that. You'll be surprised.
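For reference, here's a minimal sketch of the kind of ASCII tic-tac-toe script an LLM will usually produce when asked, written in Python rather than Bash or JavaScript; the board layout, the random-move computer opponent and all function names are illustrative, not taken from any comment in this thread.

```python
# Minimal ASCII tic-tac-toe: human (X) vs. a computer (O) that picks a random free cell.
import random

def print_board(board):
    for row in range(3):
        print(" " + " | ".join(board[row * 3:row * 3 + 3]))
        if row < 2:
            print("---+---+---")

def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play():
    board = [" "] * 9
    for turn in range(9):
        print_board(board)
        if turn % 2 == 0:  # human move
            move = int(input("Your move (1-9): ")) - 1
            while not (0 <= move < 9) or board[move] != " ":
                move = int(input("Invalid or taken, try again (1-9): ")) - 1
            board[move] = "X"
        else:              # computer move: any random free cell
            board[random.choice([i for i in range(9) if board[i] == " "])] = "O"
        if winner(board):
            print_board(board)
            print(f"{winner(board)} wins!")
            return
    print_board(board)
    print("Draw.")

if __name__ == "__main__":
    play()
```

Even a trivial script like this tracks the game state explicitly, which is the point being made above: the rule-following lives in ordinary code, not in the token predictor.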
Nobody claimed that any sewing machine has PhD level intelligence in almost all topics.
LLMs are marketed as "replaces jobs", "PhD level intelligence", "Reasoning models", "Deep think".
And yet all that "PhD level intelligence" consistently gets the simplest things wrong.
But prove me wrong: pick a game, prompt any LLM you like and share it here (the whole conversation, not just a code snippet).
If LLMs can't do whatever you tell them based purely on natural-language instructions, then they need to stop advertising them that way.
It’s not just the advertising that’s the problem: do any of them even have user manuals? How is a user with no experience prompting LLMs (which was everyone 3 years ago) supposed to learn how to formulate a “correct” prompt without any instructions? It’s a smokescreen for blaming any bad output on the user.
Oh, it told you to put glue in your pizza? You didn’t prompt it right. It gives you explicit instructions on how to kill yourself because you talked about being suicidal? You prompted it wrong. It completely makes up new medical anatomical terminology? You have once again prompted it wrong! (Don’t make me dig up links to all those news stories)
It’s funny the fediverse tends to come down so hard on the side of ‘RTFM’ with anything Linux related, but with LLMs it’s actually the user’s fault for believing they weren’t being sold a fraudulent product without a user manual.
Sounds like you're the kind of person who needs the "don't put your fucking pets in the microwave" warnings.
The effect on the environment, and the fact that we know it will definitely go bad, like TV/cable, the Internet, and any honestly useful invention that has been raped by the dark side of human culture throughout history.
Within the structure of the ego-driven society we live in, I don't think we are capable of being a good species.
It would be cool if things were different, but I've never seen it not turn out bad.
People got roped into a media campaign spearheaded by copyright companies.
Hilarious to think nobody could notice how dogshit AI is without being handheld into it.
I hope analog hardware or some other trick will help us in the future to make at least local inference fast and low power.
Local inference isn't really the issue. Relatively low-power hardware can already do passable tokens per second on medium to large models (40b to 270b). Of course it won't compare to an AWS Bedrock instance, but it is passable.
The reason you won't get local AI systems - at least not completely - is the restrictive nature of the best models. Most actually good models are not open source. At best you'll get a locally runnable GGUF, but not open weights, meaning the potential to re-train is lost. Not to mention that most of the good and usable solutions tend to be complex interconnected systems, so you're not just talking to an LLM but to a series of models chained together.
But that doesn't mean that local (not hyperlocal, aka "always on your device", but local to your LAN) inference is impossible or hard. I have a £400 node running 3-4b models at lightning speed, at sub-100W (really sub-60W) power usage. For around £1500-2000 you can get a node with similar performance on 32-40b models. For about £4000, you can get a node that does the same with 120b models. Mind you, I'm talking about lightning-fast performance here, not passable.
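As a concrete illustration of the LAN-local setup described above, here is a minimal sketch of GGUF inference using llama-cpp-python; the model file path, quantisation level and parameter values are placeholders (they assume you have already downloaded a small instruct model) and are not taken from the comment itself.

```python
# Minimal local GGUF inference with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3.2-3b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,    # context window
    n_threads=8,   # CPU threads; tune to your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarise why local inference can be cheap."}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```

llama.cpp also ships a small HTTP server, so the same model file can serve every machine on the LAN, which is the "local to your LAN, not on your device" arrangement the comment describes.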
At least for me, the small 4-8b models turned out to be pretty useless: extremely prone to hallucinations, not good at multiple languages and, worst of all, still pretty slow on my machine.
I tried to create a simple note-taking agent with just file I/O tools available. Without reasoning, they fucked up even the simplest tasks in very creative ways, and with reasoning it thought about it for 7 before finally doing it.
The larger ones require pretty power-hungry and/or expensive hardware.
I hope for analog hardware to change this.
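To make the "note-taking agent with file I/O tools" idea above more concrete, here is a framework-free sketch in Python; the tool names, the notes directory and the stubbed JSON reply (which a real agent would get back from a local LLM prompted to answer only in that format) are all hypothetical and not taken from the comment.

```python
# File-I/O "tools" for a toy note-taking agent, with the model's reply stubbed.
import json
from pathlib import Path

NOTES_DIR = Path("notes")
NOTES_DIR.mkdir(exist_ok=True)

def write_note(name: str, text: str) -> str:
    (NOTES_DIR / f"{name}.md").write_text(text, encoding="utf-8")
    return f"wrote {name}.md"

def read_note(name: str) -> str:
    return (NOTES_DIR / f"{name}.md").read_text(encoding="utf-8")

def list_notes() -> str:
    return "\n".join(p.name for p in NOTES_DIR.glob("*.md"))

TOOLS = {"write_note": write_note, "read_note": read_note, "list_notes": list_notes}

# Stand-in for an LLM tool call; a real agent would generate this JSON itself.
model_reply = '{"tool": "write_note", "args": {"name": "groceries", "text": "- milk\\n- eggs"}}'

call = json.loads(model_reply)
result = TOOLS[call["tool"]](**call["args"])
print(result)        # -> wrote groceries.md
print(list_notes())  # -> groceries.md
```

The stub is the easy part; the hard part is getting a small model to emit that JSON reliably every time, which is where the 4-8b models reportedly fall over in the experiment described above.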