Has Open Source AI Just Become Too Dangerous?

There is a new open source AI out there. It was published by Meta and it’s one of the most powerful models ever developed. And now a lot of people want to stop it. They say Meta is too dangerous and irresponsible. And what if they are right this time? Why would Mark Zuckerberg do this? Who should you believe and what if open source AI is a mistake?

Let me shine a light on all of these things, which I've been following extremely closely, because I believe what transpires over the coming months and years will profoundly affect all of our digital lives.

On July 23, Meta published a new family of AI models, Llama 3.1, packaged into three different sizes. Two of the models are small enough to run on consumer hardware, with just 8 and 70 billion parameters. But we've had those for a while. What really changes everything is Meta's largest model to date: with a whopping 405 billion parameters, it's the biggest and most capable open source LLM ever created.

The 405B model is now capable of rivaling and even outperforming ChatGPT. No open source AI has done this before. But unlike GPT-4, anyone in the world can copy, modify, and redistribute Meta's flagship AI with almost no restrictions.
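To make that concrete, here is a minimal sketch (my illustration, not something shown in the video) of what running the smallest Llama 3.1 model on your own machine can look like, assuming you use the Hugging Face transformers library and the gated meta-llama/Meta-Llama-3.1-8B-Instruct checkpoint, which still requires accepting Meta's license on the Hub:

# Minimal sketch: generating text locally with the 8B Llama 3.1 instruct model.
# Assumes `pip install transformers accelerate torch`, a Hugging Face login with
# access to the gated meta-llama repo, and enough RAM/VRAM for an 8B model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    device_map="auto",  # put layers on the GPU if one is available, else CPU
)

prompt = "In one sentence, why do openly distributed model weights matter?"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])

Because the weights are downloadable, the same few lines work whether you fine-tune, quantize, or redistribute the model, which is exactly the freedom the video is describing.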

Sources [references available in the transcript]

Credits

Follow me:

The footage and images featured in the video were used for critical analysis, commentary, and parody, which are protected under the fair use provisions of the United States Copyright Act of 1976.
Comments

All new episodes are free, but you can get all of it, plus early access and my merch, if you pledge your support. My work is impossible without your support! Huge thanks to all of you!

TheHatedOne

If open source AI is dangerous, all AI is dangerous. Why would closed source AI be less dangerous when it's in the hands of the worst actors in the world, such as Microsoft, Google, and governments?

So either all AI is bad or no AI is bad.

Gabrielnfs

Open source is the only weapon against malevolent closed source actors, like governments, three-letter agencies, and the military.

velvia

Turns out the people who make the most noise about the dangers of open-source (weights) models are... (drumroll) ... the closed-source AI companies.

moken

I’m gonna be so annoyed when OpenAI successfully leverages a tech illiterate and craven Congress to legislate consolidation of AI into 3 monopolistic megacorporations.

Nzombii

The only thing I care about is it being uncensored, so I can run it on my own hardware.

rdsii

Open source is the only kind of software that's written mostly in good faith: for YOU, not for your wallet.
Also, open source is pretty much the only way people can properly learn to write software.
I'm a software dev myself, and I would never have become one if not for all the open source projects I was able to look at.
If open source is an issue, then open driving courses are too! People should learn to drive blindfolded!

shapelessed

Lest we forget, OpenAI was hacked and breached just last month. So much for closed source LLMs being safe from bad actors....

biofreakzor

You think China doesn't have the GPT-4 weights? I mean, what is this, guys... Let's get serious.

nexovec

The tools they use to scrape all of your personal data and interactions over the years and decades are closed source, and they are more powerful than anything they provide to the public.

thebugg

We don't need Llama for AI to be weaponized against people; we already do that.

linuxstreamer

A lot of the talk about regulating AI and machine learning is being done by the incumbents to prevent agile, smart, efficient startups with better ideas from applying AI better and stealing market share. Open source is smart and promotes a more even playing field for GOOD AI products to be developed, versus simply throwing AI chatbots on every page/widget.

ryanlupi

"Open source AI" is Newspeak. They didn't open the SOURCE code that produced the model. Releasing a build artifact is not open-sourcing.

michaelns

"If the headline of an article asks a question, the answer is almost always 'no.'"
Allow me to answer the title
NO NO NO NO NO

shodanxx

So now I'm defending Zucc on the subject of AI. I didn't think that day would come.

DavidPereiraLima

If Google, Microsoft, OpenAI, and the US government are pissed off at you, that means you did something right.

Kodeb

Saw the video, noticed it's from The Hated One, clicked instantly.

iquoen

If AI is not allowed to be open source, we are doomed. AI is coming. AGI is coming. Do we want these new technologies in the hands of governments and corporations, or the people?
There will come a gigantic wave of propaganda against open source AI; I personally will ignore all of it.

aaronlewis

The genie is out of the bottle anyway, in one form or another.

ivan

They want to ban and gatekeep knowledge itself.

improvisedchaos