OpenAI Employees FINALLY Break Silence About AI Safety

Leopold Aschenbrenner and other ex-OpenAI employees have finally begun speaking out about their AI safety concerns at OpenAI. Let's take a look.

Join My Newsletter for Regular AI Updates 👇🏼

Need AI Consulting? 📈

My Links 🔗

Media/Sponsorship Inquiries ✅

Links:
Comments

Why not just have a universal free-speech code, not just for AI, that protects us from all corporations and not just governments? Like an extension of the First Amendment.

norbu_la

As long as "safety" and "alignment" mean censorship, I'll rage against it. It also makes the models dumber.

othermod

Just wanted to say thank you, I appreciate all your videos. You have become my go-to channel for getting up to date with AI. Please keep it up 👏

TripMasterrr

Sounds like a bunch of disgruntled ex-employees to me.

This whole concept that AI needs to be Open Source is not only complete nonsense but also flies in the face of capitalism, upon which all Western economies and civilisations are built.

The Open Source community is not only confused as to what AI is, but they're paralysed by the fact that AI platforms DO NOT NEED to be Open Source in order to function and progress.

AI workloads are a form of HPC workloads, at least for Foundation Model training. There's absolutely NO NEED for Open Source in this scenario. It's superfluous!

And it's SHEER DESPERATION by all the Open Source communities to try and remain relevant in a movement that's clearly passed them by.

All this nonsense about running models in containers on Kubernetes, and demos like that, is just sheer desperation by Open Source, the Linux Foundation, and Cloud Native (CNCF) to remain hip, and an attempt to control AI development, which sounds rather pathetic to me.

thecloudtherapist

Read a good part of his piece. Very powerful. Smart young man. Really heavy, heavy sh*t. The next few years will be a wild ride.

NedBouhalassaVideos

I really wish I could back Sam against the effective altruist community. But sadly, closedAI goes against my beliefs on how AI should be democratized.

joe_limon

I was early, but I wasn't wrong! But also, that dude looks like a Bond villain.

DaveShap

Thank you so much for helping to keep the AI companies honest. As so many are in an unrelenting dash toward AGI, I believe it's so important to have voices of reason like yours. Deeply appreciated. Thank you.

onebluestone

Yes, how do we think we can "control" something that is a million times smarter than us?

bernardthooft

There will be a phase where humans and AI will need to collaborate to address areas where each is incapable on their own. During this time, humans will likely try to limit the power of AI to prevent it from becoming a threat. This phase represents a critical window where cooperative coexistence is essential.

elu

As a data annotator, I'd say the threat isn't that it's getting smarter; the problem is that it remains stupid without learning. The learning it does is not equal across the board, and that is a problem: it conflicts with itself.

mesapsych

Just to add to the discussion… FYI:
- In Europe (lawyer here! 🤓⚖), companies already have to spend money to create a solid confidential/anonymous reporting process, so this seems like a logical next step.
- Also in Europe, regulations already protect those who report non-illegal activities (like violating a company's ethical code).
- For public safety risks, public reporting is allowed, so whistleblowers would be protected against retaliation for x-risks.
So, slightly stricter regulations than those across the ocean should suffice; nothing too complex or arcane.
The issue, legally speaking, is wanting whistleblower protections for potentially defamatory statements that aren’t based on concrete risks but only potential ones. But waiting for risks to become concrete means we've crossed the red line... Honestly, I wouldn’t want to be the one to regulate these complex issues. 😅

francescomilone

"... it's achievable, just not on the path we're on now." We are notoriously bad at taking action on warnings like this.
This is what was said in the 1970s about climate change, and 50 years later we're still saying it. We've done far too little and are only becoming more motivated now that we're starting to feel some of the consequences.
This is human nature at work and it shows up in groups just as it does in individuals: We rarely act before we're forced to by the crisis our inaction has created.

xrhxhxm

There needs to be some kind of balance between the rights of the (ex)employees and the companies.

We all know that employees can become disgruntled and attempt to harm the company they used to work for. We can't let employee protections be so strong as to make it painless for them to lie about things such as safety.

But we also can't let companies unleash an apocalypse. I don't know where the balance is or how it could be attained.

keithprice

“I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”
~Joanna Maciejewska

mh

Boy worked at OpenAI for one year and acts like a boss who knows everything.

Guanggge

There actually needs to be a government-endorsed, properly ordained A.I. department tasked with A.I. regulation and safety.

The problem is that government is filled with members who acutely hold self-interest above the people.

So trust is broken to a fair degree, but even so, government controls need to happen, and they need to happen ASAP.

simonsutton

"superalignment" I don't trust their opinions

Michael-ulkv

Good work, Matthew.
I am getting to know you.

Thx for all your insight :)

Jeremy

Jeremy-Ai

Yes, please do a deep dive on Situational Awareness!!

esuus