Copyright Lawyer Explains Drake AI Song and More | WSJ

AI is quickly evolving and poised to become a new normal, but copyright law is still playing catch-up. Can AI be trained on copyrighted material to create the large language models that power generative AI systems like ChatGPT? Are works created using AI tools like Midjourney able to be protected?

WSJ asked an intellectual property lawyer to break down three of the biggest cases to explain how copyright law is adapting and what it means for the future of generative AI.

Chapters:
0:00 AI-related lawsuits
0:26 Should AI work be copyrighted?
3:27 Can AI work violate the right of publicity?
5:19 Can AI be trained on copyrighted work?

News Explainers
Some days the high-speed news cycle can bring more questions than answers. WSJ’s news explainers break down the day's biggest stories into bite-size pieces to help you make sense of the news.

#AI #Tech #WSJ
Comments

This was an extremely easy to digest summary of the current situation. Good job WSJ!

trogdorstrngbd

The Copyright Office will probably get swamped if AI work can be copyrighted

johnl.

It would be good if AI-generated art could never be copyrighted. That would mean Hollywood isn't going to get replaced by some computers.

matthewbaynham

Brilliant breakdown. I can easily see the expert in this video coming to Congress to distill the issues down!

ericcartmansh

Would like to see more of Naruto's selfie portfolio!

CoinOpTV

So artistic and creative works are protected... That's good news... 😍

nikkivieler

If you allow access to anything, you have to accept that others can learn and train from it. But if they then produce new content that makes significant use of your image or words, that violates copyright just as if a person did it. After all, the tool doesn't create anything on its own; the owner/operator of the tool directs it. If you use your hands, a shovel, or an excavator to dig a hole, the hole is what was created, regardless of the tool used or whether you'd seen other holes before.

homewall

Creating software based on unlicensed assets is an obvious copyright violation.

mf--

Why did you guys skip NYTimes v. OpenAI?

ryan_singh

Nice to put a face to the WSJ narration voice

alexandrugheorghe

People will start using AI for creative work that copies or reproduces similar pictures. In that case, the copyright should go to the AI generator, because it was the earlier inspiration for the human reproduction.

prilep

I find it odd that none of the organizations WSJ contacted wanted to comment. It's good that the journalist reached out, and it's fine if these orgs can't reply, but when all of them decline, I have to question the common denominator.

Burnlit

7:46 If the script were flipped, these gigantic companies wouldn't spare any punishment for infringements or violations committed against them. Can anyone explain to me why those big companies refuse to examine those dangerous violations before treading on them? To me, it's as if these folks don't care whether AI progress comes to a dead halt, when copyright violations are massively preventable for gigantic tech companies. They could easily avoid those violations, but they still decide to tread on that dangerous ground. Are they so desperate that they overlook the potential lawsuits that could be filed against them? Who does that? Now they are going to ruin AI technology for me and others because of silly, preventable violations that they willingly and arrogantly commit. Shame on these greedy big companies. You're already wealthy; why be so greedy? 💭🤨💭🤨 Now they've ruined it for everybody.

aliettienne

Fascinating insights, AI really is changing the game! 🤖

AndreaDoesYoga

🔆The lady at the end is definitely an AI 🤏

duran

A huge issue this video misses is that people will simply not disclose the use of AI if AI work really isn't copyrightable.

kylokat

We should be signing all of our content anyway. Anything digital is easily reproducible. Without some cryptographic/mathematical cost to creation, verification, or even viewing, the whole point is moot. We're ~30 years into the information age, we treat algorithms as adversaries, and we still don't understand how information works.

arizvisa

AI companies should pay the creators whose work they train on; publicly available data is different from creative works. It takes years to become proficient in an art and 10 minutes to update your LinkedIn page. What they are stealing is not the data; it's the human perspective on what will be appealing to another human.

When AI trains on AI-generated content, with each generation it devolves further from something a human relates to, because it inherently has no understanding of the human condition. It literally cannot create art; it can just mix and match bits of other people's art with various weightings. Given that it is 100% dependent on prior art to generate these weightings, it cannot be argued that it's creating original content.

This is theft of creativity, the same way someone might steal 10 cents from everyone's bank account and launder it into a new billion-dollar balance. AI just mixes and matches 0.001% of everyone's existing creativity, calls it something new, and then hopes to get paid for it. Silicon Valley nerds hope no one will notice, so they can profit off everyone else's hard work becoming proficient in their respective arts and buy a new superyacht.

Put it this way: AI needs humans for creativity; humans don't need AI for creativity.

If all the humans gave up creating music and AI was left to train on its own data in subsequent years, it would rapidly devolve.

"Whenever we are training an AI model on data which is generated by other AI models, it is essentially learning from a distorted reflection of itself. Just like a game of 'telephone,' each iteration of the AI-generated data becomes more corrupted and disconnected from reality. Researchers have found that introducing a relatively small amount of AI-generated content in the training data can be 'poisonous' to the model, causing its outputs to rapidly degrade into nonsensical gibberish within just a few training cycles. This is because the errors and biases inherent in the synthetic data get amplified as the model learns from its own generated outputs.

The problem of model collapse has been observed across different types of AI models, from language models to image generators. Larger, more powerful models may be slightly more resistant, but there is little evidence that they are immune to this issue. As AI-generated content proliferates across the internet and in standard training datasets, future AI models will likely be trained on a mixture of real and synthetic data. This forms an “autophagous” or self-consuming loop that can steadily degrade the quality and diversity of the model’s outputs over successive generations."

If big tech lawyers successfully argue the case, and these tech billionaires get to yoink everyone's hard work via their online data, launder the fragmented data through an LLM so they can repackage it and sell it back to us, it will be one of the biggest crimes against humanity.

ambi.music_

This assumes that new technological innovations should be copyrightable immediately...

CoderRaven

How can the copyright office determine if a human was involved or not?

jashannon