this post was submitted on 28 Jan 2025
845 points (94.7% liked)

memes


Office space meme:

"If y'all could stop calling an LLM 'open source' just because they published the weights... that would be great."

top 50 comments
[–] Jocker@sh.itjust.works 111 points 4 months ago (1 children)

Even worse is calling a proprietary, absolutely closed-source, closed-data and closed-weight company "OpenAI"

[–] intensely_human@lemm.ee 44 points 4 months ago (2 children)

Especially after it was founded as a nonprofit with the mission of pushing open source AI as far and wide as possible, to ensure a multipolar AI ecosystem in which AIs keep each other in check, so that AI would stay respectful and prosocial.

[–] Prunebutt@slrpnk.net 26 points 4 months ago

Sorry, that was a PR move from the get-go. Sam Altman doesn't have an altruistic cell in his whole body.

[–] SoftestSapphic@lemmy.world 14 points 4 months ago (1 children)

It's even crazier that Sam Altman and other ML devs have said that current machine learning methods have already hit the peak of what they're capable of

https://www.reuters.com/technology/artificial-intelligence/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11/

But that doesn't mean shit to the marketing departments

[–] Hobbes_Dent@lemmy.world 11 points 4 months ago (2 children)

“Look at this shiny.”

Investment goes up.

“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”

Investment goes up.

“Look at this shiny.”

Investment goes up.

“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”

[–] WraithGear@lemmy.world 54 points 4 months ago* (last edited 4 months ago) (24 children)

Seems kinda reductive about what makes it different from most other LLMs. Reading the comments, I see the issue is that the training data is why some consider it not open source, but isn't that just trained from the other AIs? It's not why this AI is special. And the way it uses that data, afaik, is open and editable, and the license to use it is open. What's the issue here?

[–] Prunebutt@slrpnk.net 38 points 4 months ago (17 children)

Seems kinda reductive about what makes it different from most other LLMs

The other LLMs aren't open source, either.

isn’t that just trained from the other AI?

Most certainly not. If it were, it wouldn't output coherent text, since LLM output degenerates if you human-centipede its outputs.

And the way it uses that data, afaik, is open and editable, and the license to use it is open.

From that standpoint, every binary blob should be considered "open source", since the machine instructions are readable in RAM.

[–] KillingTimeItself@lemmy.dbzer0.com 28 points 4 months ago (4 children)

I mean, if it's not directly factually inaccurate, then it is open source. It's just that the specific block of data they used and operate on isn't published or released, which is pretty common even among open source projects.

AI just happens to be in a fairly unique spot where that data is actually, like, pretty important. Though nothing stops other groups from creating an openly accessible dataset through something like distributed computing, which seems to be having a new-kid-on-the-block moment in AI right now.

[–] fushuan@lemm.ee 15 points 4 months ago* (last edited 4 months ago)

The running engine and the training engine are open source. The service that uses the model, trained with the open source engine and run with the open source runner, is not, because a big part of what makes AI work is the trained model, and a big part of the source of a trained model is the training data.

When they say open source, 99.99% of people will understand it to mean that everything is verifiable, and it just is not. This is misleading.

As others have stated, a big part of open source development is providing everything so that other users can get the exact same results. This has always been the case in open source ML development: people do provide links to their training data for reproducibility. This has been the case with most of the papers on natural language processing (the broader field that LLMs belong to) I have read in the past. Both code and training data are provided.

Example in the computer vision world, darknet and YOLO: https://github.com/AlexeyAB/darknet

This is the repo with the code to train and run the darknet models, and then they provide pretrained models, called YOLO. They also provide links to the original dataset the YOLO models were trained on. THIS is open source.
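The asymmetry described above can be sketched in a few lines of toy Python (a stand-in linear model and hypothetical file names, not any real release): a weights-only drop lets you run inference, but re-deriving those weights requires the training data.

```python
# Toy sketch of "weights vs. source". Inference needs only the weights;
# reproducing (auditing) the weights needs the withheld training data.

def predict(weights, x):
    """Inference: all you need is the published weights."""
    return sum(w * xi for w, xi in zip(weights, x))

def train(dataset, steps=200, lr=0.1):
    """Reproduction: re-deriving the weights requires the training corpus."""
    weights = [0.0, 0.0]
    for _ in range(steps):
        for x, y in dataset:
            err = predict(weights, x) - y
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return weights

# With the data published, anyone can re-derive (approximately) the same
# weights and inspect how they came to be:
dataset = [((1.0, 0.0), 2.0), ((0.0, 1.0), 3.0)]  # stand-in corpus
weights = train(dataset)

# Without the data, a weights file is an opaque artifact, like a binary:
#   weights = load("published_weights.bin")   # runnable, not auditable
```

With the dataset, `train` converges to weights near `[2.0, 3.0]`; without it, you can only consume the artifact, which is the crux of the "that's not open source" objection.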

[–] FooBarrington@lemmy.world 11 points 4 months ago* (last edited 4 months ago)

But it is factually inaccurate. We don't call binaries open-source; we don't even call source-available software open-source. An AI model is an artifact, just like a binary is.

An "open-source" project that doesn't publish everything needed to rebuild isn't open-source.

[–] Oisteink@feddit.nl 18 points 4 months ago* (last edited 4 months ago) (1 children)

Source - it’s about open source, not access to the database

[–] Prunebutt@slrpnk.net 16 points 4 months ago (4 children)

So, where's the source, then?

[–] Ugurcan@lemmy.world 16 points 4 months ago (4 children)

There are lots of problems with the new lingo. We need to come up with new words.

How about “Open Weightings”?

[–] SoftestSapphic@lemmy.world 14 points 4 months ago (1 children)

I like how when America does it we call it AI, and when China does it it's just an LLM!

[–] Prunebutt@slrpnk.net 7 points 4 months ago* (last edited 4 months ago)

I'm including Facebook's LLM in my critique. And I dislike the current hype on LLMs, no matter where they're developed.

And LLMs are not "AI". I was calling them "so-called 'AIs'" waaay before this.

[–] theacharnian@lemmy.ca 11 points 4 months ago (8 children)

Arguably they are a new type of software, which is why the old categories do not align perfectly. Instead of arguing over how to best gatekeep the old name, we need a new classification system.

[–] surph_ninja@lemmy.world 9 points 4 months ago (1 children)

Judging by OP’s salt in the comments, I’m guessing they might be an Nvidia investor. My condolences.

[–] Prunebutt@slrpnk.net 7 points 4 months ago

Nah, just a 21st century Luddite.

[–] Dkarma@lemmy.world 7 points 4 months ago (32 children)

I mean, that's all a model is, so... Once again, someone who doesn't understand anything about training or models is posting borderline misinformation about AI.

Shocker

[–] FooBarrington@lemmy.world 21 points 4 months ago

A model is an artifact, not the source. We also don't call binaries "open-source", even though they are literally the code that's executed. Why should these phrases suddenly get turned upside down for AI models?

[–] intensely_human@lemm.ee 15 points 4 months ago

A model can be represented only by its weights in the same way that a codebase can be represented only by its binary.

Training data is a closer analogue of source code than weights.

[–] maplebar@lemmy.world 7 points 4 months ago (1 children)

Yeah, this shit drives me crazy. Putting aside the fact that it all runs off stolen data from regular people who are being exploited, most of this "AI" shit is basically just freeware if anything; it's about as "open source" as Winamp was back in the day.

[–] surewhynotlem@lemmy.world 7 points 4 months ago (1 children)

Creative Commons would make more sense
