OpenAI WHISTLEBLOWER Reveals What OpenAI Is Really Like!

Links From Today's Video:

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
Comments

Suchir Balaji apparently took his own life. "Suicide" is the go-to excuse for whistleblower deaths in the US. Do you honestly believe that an intelligent young man with a bright future would take his own life?

blackwonderBW

Sources?? Again, where are the links to your sources?

kendrazoa

Intelligent psychopaths congregate at the executive level of society.
That is well studied and uncontroversial.
Our failure to stop it is our downfall.

ZappyOh

"I am erring on the side of maybe / maybe not." LMFAO

DigitalRenegadeStudios

I miss the time when the only YouTube channel that discussed A.I. technology was Two Minute Papers, and how he highlighted how the technology works, its progress, and all the phenomenal stuff it can do.

Now there is an endless stream of A.I. Apocalypse Harbinger channels full of fear-mongering with very little substance.

NicholasLatipi

The comments at the 4-minute mark made me LOL...

"I think that you know a lot of people do think that you know every single human being and every single organization is just strict you know Military Star professionals that would never make a wrong mistake but at the time it's humans who make mistakes and who have incentives and who maybe sometimes are greedy and what we can see here is that in action that humans are you know these flawed creatures and just because it's Microsoft it doesn't mean that they might rush out a product if they believe it's going to advance their company's efforts...."

Groups I find to be the LEAST trustworthy on the planet:
1) military intelligence
2) other government intelligence agencies
3) the military
4) other government agencies and/or officials
5) mega-corporations

There's a word for people who trust these groups: idiots.

daimonmagus

I get that you have a big channel and a lot of viewers, but all of your comments are trying to tell you something: some of them are just bullies, but some offer legitimate advice. I haven't watched your videos in months because I'm not spending 40 minutes listening to you ramble. Clear, concise points are what interest me, at least. Maybe not others.

nicknmusic_

Although it wouldn't have been news, I still think it's important to mention that Daniel also said in this interview that he puts the risk of a catastrophically bad outcome from AGI/ASI at 70%. That is extremely concerning, as even Jan Leike (former co-lead of the Superalignment team at OpenAI) is very concerned and estimates the risk at 10-90%.
The fact is that ALL researchers are currently completely in the dark, and our existence really is at stake if it goes badly.
Even if the ratio of positive to extremely negative outcomes after reaching AGI were 10000:1, statistically the extremely negative event would still occur at some point if the alignment problem is not solved (because in the future there will be more AI labs in more countries with more powerful AIs = more events occurring in total); see the sketch below.
I really don't want to cause panic here, but be aware that it is the top researchers at the world's leading AI company who are starting to blow the whistle.
We cannot predict something that is much more intelligent than we are on all levels if we cannot control it 100%.
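
A minimal sketch of that cumulative-risk arithmetic, in Python (the 10000:1 odds and the event counts are the illustrative figures above, not real estimates, and events are assumed independent):

# Chance that at least one catastrophic outcome occurs across n independent
# "AGI events", given a per-event catastrophe probability p.
def cumulative_risk(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

p = 1 / 10001  # odds of 10000 good : 1 catastrophic per event

for n in (100, 1000, 10000):
    print(f"{n:>5} events -> {cumulative_risk(p, n):.1%} chance of at least one catastrophe")

# Prints roughly: 1.0% at 100 events, 9.5% at 1000, and 63.2% at 10000.
# The per-event risk never changes; only the number of chances to fail grows.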

Mark-zngr

Employees are not just part of the company; they are the company. No people equals no company.

Allofussurvived

Safety measures in AI development are a PR exercise. Even where there are actual costs involved, it was never any different. Gold fever has taken hold; expect nothing to be done with respect to safety as a technology described as more significant than the atom bomb explodes into life under exponential investment. Mark my words: I have used them carefully.

Roskellan

What is really being said: “I am not as important as I want to be.”

itsjustme

My thought? You say "fascinating... fascinating... fascinating"; I say terrifying, crazy.

kendrazoa

A wrong mistake... is that accidentally doing it right?

PlanetJeroen

It's "Anyway' not "Anyways". That's when as an employee I stand up! Who's coming with me?

aaronbaca

What I hate about all of this talk about "safety": there are two different ways to think about it, "limitations and censorship" and "social responsibility", and I would love to get EXACT DETAILS of what their true concept of safety even IS. The moment OpenAI is nationalized is the end of its trustworthiness. Once it's integrated into the government, prepare for open-source models to vanish and the screws to be put to companies that DON'T use the approved AI. Stockpile your AIs now. Betcha 5 bucks.

Dj-Mccullough

Every time you say "pretty, pretty" it makes me think of Pretty Pretty Prisoner from One Punch Man, lol.

damonstorms

I am slightly confused. I thought there was a definitive statement a while ago that Sam Altman owned no shares in OpenAI. The final segment of this video seems to contradict that. What is the truth? I presume this information should be in the public domain?

rs-dms

I used a yoking process and procedure, trying to create SSI. NOT AS HARD AS YOU MIGHT THINK. It's a philosophical problem at this point, as stated.

dragonfly-fu

Can someone tell me which predictions have been made so far, and by whom, that AGI will arrive in 2027?

aaaaaaaaooooooo

Nakasone did not just work at the NSA; he was the Director of the NSA... slight difference.

m_go