this post was submitted on 10 Jul 2023
12 points (100.0% liked)

Technology


In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

Aaaaaand there it is. They don’t want to admit how much copyrighted materials they’ve been using.

top 27 comments
[–] preppy_wind@kbin.social 6 points 2 years ago (3 children)

I'm with OpenAI here. Screw copyright and greedy publishers.

[–] peter@feddit.uk 4 points 2 years ago (1 children)

You wouldn't be saying that if it was your content that was being ripped off

[–] Chozo@kbin.social 1 points 2 years ago* (last edited 2 years ago) (2 children)

"Ripped off"? That isn't how LLMs work.

[–] Niello@kbin.social 1 points 2 years ago (1 children)

If you read copyrighted material without paying and then forget most of it a month later, retaining only a vague recollection of what you read, the fact remains that you accessed and used the copyrighted material without paying.

Now let's go a step further: you write something inspired by that copyrighted material, and what you wrote becomes successful to some degree, with eyes on it, but you refuse to admit that's where you got the idea from because you only have a vague recollection. The fact is you got the idea from the copyrighted material.

[–] exscape@kbin.social 1 points 2 years ago (1 children)

FWIW that's pretty much exactly how all creative works are created. Especially music.

[–] Niello@kbin.social 1 points 2 years ago

Except for the illegally-obtaining-the-copyrighted-material part, which is the main point. And definitely not on this scale.

[–] Kichae@kbin.social 0 points 2 years ago (1 children)

That's, uh, exactly how they work? They need large amounts of training data, and that data isn't being generated in house.

It's being stolen, scraped from the internet.

[–] Chozo@kbin.social 0 points 2 years ago (1 children)

If it was publicly available on the internet, then it wasn't stolen. OpenAI hasn't been hacking into restricted content that isn't meant for public consumption. You're allowed to download anything you see online (technically, if you're seeing it, you've already downloaded it). And you're allowed to study anything you see online. Even for personal use. Even for profit. Taking inspiration from something isn't a crime. That's allowed. If it wasn't, the internet wouldn't function at a fundamental level.

[–] HeartyBeast@kbin.social 1 points 2 years ago (1 children)

I don’t think you understand how copyright works. Something appearing on the internet doesn’t give you automatic full commercial rights to it.

[–] Chozo@kbin.social 0 points 2 years ago (1 children)

An AI has just as much right to web scrape as you do. It's not a violation of copyright to do so.

[–] HeartyBeast@kbin.social 1 points 2 years ago (1 children)

It's not an AI webscraping. It's a commercial company deciding to do a mass ingest.

[–] Chozo@kbin.social 0 points 2 years ago (1 children)

It's the same thing. Just because you have personal opinions on the matter, however valid they may be, doesn't make it any less the exact same thing.

That's like saying that McDonald's Super Sized fries aren't fries because they're commercially large. No, it's still fries, there's just a lot of fries being processed in one serving. And yet, despite the arguments and outcries of many, still legal.

Exact same thing with LLMs.

[–] HeartyBeast@kbin.social 1 points 2 years ago

If it’s the same thing, then why describe it as an AI scraping? It’s not. It’s a company that has scraped a corpus of data from the internet and used that to train an AI.

The problem is that intellectual property law is complex. Simply saying two things are the same thing is your personal opinion. Content on the internet is not, by and large, public domain. It comes with a license, which lets you use it for certain purposes and not others. Saying, for example, ‘an AI reading a book is just like a human reading a book’ (not something you said, I don’t think) betrays a certain naivety about the way IP works.

[–] Ferk@kbin.social 4 points 2 years ago* (last edited 2 years ago)

Note that what the EU is requesting is for OpenAI to disclose information. Nobody is saying (yet?) that they can't use copyrighted material; what they are asking is for OpenAI to be transparent about the training method and what material is being used.

The problem seems to be that OpenAI doesn't want to be "Open" anymore.

In March, Open AI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

Of course, openly disclosing what materials are used for training might leave them open to lawsuits, but whether or not it's legal to use copyrighted material for training is still up in the air, so it's a risk either way, whether they disclose it or not.

[–] PabloDiscobar@kbin.social 1 points 2 years ago* (last edited 2 years ago) (1 children)

Your first comment and it is to support OpenAI.

edit:

Haaaa, OpenAI, this famous hippie-led, non-profit firm.

2015–2018: Non-profit beginnings

2019: Transition from non-profit

Funded by Musk and Amazon. The friends of humanity.

[–] Chozo@kbin.social 1 points 2 years ago (1 children)

With replies like this, it's no wonder he was hesitant to post in the first place.

There's no need for the hostility and finger pointing.

[–] PabloDiscobar@kbin.social 1 points 2 years ago

he was hesitant to post in the first place.

Was he hesitant? How do you know that?

[–] chemical_cutthroat@kbin.social 5 points 2 years ago* (last edited 2 years ago)

If I do a book report based on a book that I picked up from the library, am I violating copyright? If I write a movie review for a newspaper that tells the plot of the film, am I violating copyright? Now, if the information that they have used is locked behind paywalls and obtained illegally, then sure, fire ze missiles, but if it is readily accessible and not being reprinted wholesale by the AI, then it doesn't seem that different from any of the other millions of ways we use data in everyday life. Just because a machine learned it instead of a human, I don't believe that it makes it inherently wrong. I am very open to discussion on this, and if anyone has a counter-argument, I'd love to hear it, because this is a new field of technology that we should all talk about and learn to understand better.

Edit: I asked GPT-4 what it thought about this, and here is what it said:

As an AI developed by OpenAI, I don't access any copyrighted databases, subscription sites, confidential information, or proprietary databases for my learning or response generation. I was trained on a diverse range of internet text, which includes publicly available data in multiple languages and formats. The training also involves learning from large-scale datasets that contain parts of many kinds of texts (books, websites, scientific articles, etc.). However, I don't know specifics about which documents were in my training set or have access to any specific documents or sources.

Your question seems to pertain to how copyright law applies to AI. This is a complex and evolving area of law and policy, largely because AI systems like me do not "read" or "use" material in the same way a human would. A human might read a book and then consciously recall information from it when writing or speaking. In contrast, AI models do not have consciousness or awareness, they analyze statistical patterns in the data they were trained on and use these patterns to generate responses.

When a human uses copyrighted material to create something new, the law often considers this to be a derivative work. It is typically necessary to obtain permission from the copyright holder to create derivative works. However, AI doesn't use or recall copyrighted material directly; instead, it learns patterns from a huge amount of text data to predict or generate text based on a given prompt.

Dissemination of copyrighted material, such as through forum posts or Q&A platforms, without proper licensing or within the bounds of fair use, can be a violation of copyright law. AI doesn't 'disseminate' copyrighted material; it generates responses based on patterns it has learned.

That said, the legal interpretation of how copyright law applies to machine learning and AI is still evolving, and there can be a lot of complexities and nuances. Therefore, the information here should not be taken as legal advice.

As of my last update in September 2021, this was the case. However, I recommend checking the most recent legal standpoints and opinions, as this area is still evolving rapidly.
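The "statistical patterns" GPT-4 describes can be made concrete with a toy bigram model — a deliberately tiny sketch, nothing like a real transformer, and not OpenAI's actual method. The point it illustrates: the trained artifact stores word-pair frequencies learned from the corpus and samples from them, rather than replaying the source text verbatim.

```python
import random
from collections import defaultdict

def train(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, n=5):
    """Sample a short continuation from the learned statistics."""
    word, out = start, [start]
    for _ in range(n):
        followers = counts.get(word)
        if not followers:
            break
        # pick the next word in proportion to how often it followed
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        out.append(word)
    return " ".join(out)

model = train("the cat sat on the mat and the cat slept")
print(generate(model, "the"))
```

Whether learning and sampling from such statistics constitutes a derivative work of the training corpus is exactly the open legal question the thread is arguing about.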

[–] raktheundead@fedia.io 2 points 2 years ago

Don't threaten me with a good time.

[–] animist@lemmy.one 1 points 2 years ago
[–] LegendOfZelda@kbin.social 1 points 2 years ago

I disagree with the "they're violating copyright by training on our stuff" argument, but I've turned against generative AI because now automation is taking art from us, and we're still slaving away at work, when automation was supposed to free up time for us to pursue art.

[–] pglpm@lemmy.ca 1 points 2 years ago

I think it's a basic requirement that the data upon which a large language model is trained be publicly disclosed. It's the same as the requirement of writing the ingredients in packaged food. Or in knowing where your lawyer got their degree from. You want to know where what you're using is coming from.

[–] Eggyhead@kbin.social 1 points 2 years ago
[–] bedrooms@kbin.social 0 points 2 years ago (1 children)

Read the whole thing. The reason OpenAI is opposing the law is not necessarily copyright infringement.

One provision in the current draft requires creators of foundation models to disclose details about their system’s design (including “computing power required, training time, and other relevant information related to the size and power of the model”)

This is the more likely problem.

[–] jcrm@kbin.social 1 points 2 years ago

Given their name is "OpenAI" and they were founded on the idea of being transparent about those exact things, I'm less impressed that that's what they're upset about. They keep saying they're "protecting" us by not releasing it, which just isn't true. They're protecting their profits and valuation.

[–] StarServal@kbin.social 0 points 2 years ago (1 children)

This is one of those cases where copyright law works opposite to how it was intended, in that it should drive innovation. Here we have an example of innovation, but copyright holders want to (justifiably) shut it down.

[–] cmhe@lemmy.world 1 points 2 years ago

I think this is actually a case where copyright works correctly. It is protecting individuals from having their work, which they provided for free in many cases, 'stolen' by a more powerful party that makes money from it without paying the creators.
