OpenAI INSIDER Shares Future Scenarios | Scott Aaronson

This is a lecture by Scott Aaronson at MindFest, held at Florida Atlantic University, CENTER FOR THE FUTURE MIND, spearheaded by Susan Schneider. Thank you to Dan Van Zant and Rachid Lopez for your camera work.

TIMESTAMPS:
00:00:00 - Intro
00:02:14 - Lecture Begins
00:02:54 - Scott's Work At OpenAI
00:04:50 - ChatGPT
00:09:58 - Future Scenarios Of AI
00:20:34 - Justaism
00:30:45 - Watermarking
00:40:06 - AI Art
00:45:22 - Human AI Merging
00:48:45 - AI Safety
00:51:49 - Q&A
01:08:00 - Outro/Support TOE

NOTE: The perspectives expressed by guests don't necessarily mirror my own. There's a versicolored arrangement of people on TOE, each harboring distinct viewpoints, as part of my endeavor to understand the perspectives that exist.

#ai #openai #science #mindfest
Comments

Love that the future of humanity is handled by people who clearly don't care about other humans and aren't capable of empathy. That'll turn out great, I'm sure.

paganlark

That's the highest "you know" per sentence ratio I've heard in my life. Very cool talk though!

LDdrums

It's pretty surreal to be talking about potential dystopian futures with so much excitement and fear all wrapped up in a seemingly nonchalant acceptance. I can't tell if I'm more excited than anxious. We're about to be confronted by all of our hardest questions and quandaries. Are we ready for that? Could we ever be ready? 😅 I'm more anxious.

Hstevenson

This guy embodies the scientist caricature from Jurassic Park. "We were so busy thinking about how to do it that we didn't stop to think if we should."

dnoordink

Being on the verge of extinction makes life feel more precious.

caseymead

It seems like we may be able to cure/prevent most major diseases soon without needing to get anywhere near AGI. This seems as exciting as achieving AGI.

devlogicg

We need a GPT to do a regex search and replace on "You Know" ....
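As it happens, that cleanup doesn't even need a GPT; a plain regex does it. A minimal Python sketch (the sample sentence is invented for illustration):

```python
import re

text = "So, you know, the model, you know, predicts tokens."

# Remove the filler phrase "you know" (case-insensitive) along with the
# commas that usually surround it, then collapse any leftover double spaces.
cleaned = re.sub(r",?\s*\byou know\b,?", "", text, flags=re.IGNORECASE)
cleaned = re.sub(r"\s{2,}", " ", cleaned).strip()
print(cleaned)  # So the model predicts tokens.
```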

akaalkripal

14:00-ish: "Lucky for me, I have tenure." ... So, I'm not very worried about how I'll handle all the workforce disruption barrelling towards us, because my skillset is varied in bonkers ways, but I'm worried that it may not be enough to overcome all the social collapse that the workforce disruption is gonna smash into us.

Good talk though. Nice overview of things. These talks often highlight how our inability to have an agreed upon definition for life is a real big problem.

gingerhipster

We are hallucinating, and have been since birth; it's just that we have optimized ourselves so that our hallucinations usually don't impede survival.

KaliFissure

These big AI players (Nvidia, OpenAI, Google, Meta, Microsoft, Elon Musk, and other "powers") are all deeply invested in this. They have control over it and its development. I'd rather not have a cabal of money men in control of a technology more powerful than a trillion nuclear bombs.

ProbablyLying

Scott Aaronson admits we need an argument against x-risk. David Chalmers low-key roasts his "alignment" plan in the Q&A at 58:04. "We'll indoctrinate our AIs with a religion"?!?!? Holy hell. Reckless endangerment, or is he serious?

masonlee

I find one of the key areas of differentiation to be true individualization (human) vs potentially limitless aggregation (AI). This is separate from our biological finitude and limits to learning speed and information accessibility. "Random" "flaws."

HeCedTooMuch

It's not just doubly exponential, it's doubly empirical. We believe it now because the empirical trends of our empirical results surprised us more than we expected.

afterthesmash

I often think that we ourselves might be a form of AI, only realized at an appointed time. A consistent follow-up thought is that whatever created us might hide in plain sight (a type of sight not given to us) or simply ignore our desire to fully realize our Creator.

It kinda saddens me that life might just be one big moral, ethical, intelligence test. But perhaps to fully exist, such things are required. It might just be my lack of understanding that I find unsettling. Maybe remaining absent, withholding evidence, or avoiding outright communication with us is necessary for some reason.

Are humans mass hallucinating about God or an intelligent designer? Is our hallucination so strong that it would lead us to design a humanoid version of ourselves?
One thing's for sure… if we do in fact create a conscious android, communicating and interacting with it would be the primary goal.

God became human, so human could become God. 
~ Athanasius of Alexandria

OneRudeBoy

This is going to shake up the status quo! Yes! 🎉

bernardofitzpatrick

Nobody asked him the one question I wanted to hear him try to answer. A while back (I think it was 2006), Dr. Aaronson wrote about the P vs NP problem. He believes that P != NP, and he gave some casual justifications for it. One of them was that if P = NP, then "everyone would be Mozart" or "everyone would be Rembrandt": that is, if you had the ability to appreciate a high-quality piece of art, you'd have an equal ability to trivially generate it. Well, it looks like that justification for his position has been knocked down.

The_Conspiracy_Analyst

Hopefully the AI teaches us to value each other and the planet

Rabbiton

On the topic of watermarking (33:16): I would assume most students who use LLMs to "cheat" have the LLM generate the essay (or whatever the case may be) and then completely rewrite it from scratch, following the LLM output as a guide. I'm not sure what to say about people who think they can just copy/paste from ChatGPT directly into a .doc file and submit it.
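The watermarking idea discussed around 30:45 (bias token choices with a keyed pseudorandom function, then detect that bias statistically) can be sketched in toy form. Everything below is illustrative: the hash-based score, the integer vocabulary, and the exaggerated always-pick-the-max "sampler" are stand-ins, not the actual scheme:

```python
import hashlib
import math
import random

def prf(prev_token: int, token: int) -> float:
    # Pseudorandom score in (0, 1) keyed on the (previous token, candidate) pair.
    # A real scheme would key this on a secret and a longer context window.
    h = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return (int.from_bytes(h[:8], "big") + 1) / (2**64 + 2)

def generate_watermarked(start: int, vocab: int, length: int) -> list:
    # Toy "sampler": always pick the candidate with the highest score.
    # A real sampler trades this bias off against the model's probabilities.
    out = [start]
    for _ in range(length):
        out.append(max(range(vocab), key=lambda t: prf(out[-1], t)))
    return out

def watermark_score(tokens: list) -> float:
    # Detector: sum -log(1 - r) over consecutive token pairs. Unwatermarked
    # text averages about 1 per pair; watermarked text scores far higher.
    return sum(-math.log(1.0 - prf(a, b)) for a, b in zip(tokens, tokens[1:]))

rng = random.Random(0)
watermarked = generate_watermarked(0, vocab=50, length=200)
unmarked = [rng.randrange(50) for _ in range(201)]
print(watermark_score(watermarked) / 200)  # per-pair score: well above 1
print(watermark_score(unmarked) / 200)     # per-pair score: near 1
```

Note that rewriting the text from scratch, as the comment describes, replaces the token sequence and so destroys exactly the pairwise correlations this kind of detector relies on.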

squfucs

It is wild that we live in a time where we're discussing this. The ending of Battlestar Galactica feels so real right now. I can't help but be excited for all this.

Kai-neks

Synthetic data is just a reflection algorithm that costs less inference compute than expanding the model size: all synthetic data is the thought output of the original model. It's similar to wiring agents together and using one agent's output as in-context learning for another, except that you use it to further descend the residual error. It really is the model learning from itself, which is similar to adding parameters for more abstraction/depth; synthetic data is much more an architecture enhancement than training. The limit is the point where the intrinsic depth of the model, determined by its parameters, cannot conclude anything further from all the iterations of training data. So while natural data expands a model's knowledge of the world, synthetic data refines it, and if it's trained on the entire internet, it may not need more natural data. We all know LLMs are extremely inefficient with data, and synthetic data will solve a large part of that problem. Think from first principles; it's obvious.
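The claim that synthetic data only recombines what the model already encodes can be made concrete with a toy. Here a character-bigram "model" (a hypothetical stand-in for an LLM; all names are illustrative) is trained, sampled, and retrained on its own output; the generated text provably contains no transitions the original model lacked:

```python
import random
from collections import Counter, defaultdict

def train_bigram(corpus):
    # Maximum-likelihood bigram counts: our stand-in for "the model".
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def sample(model, start, length, rng):
    # Generate tokens from the model; this output is the "synthetic data".
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break
        tokens, weights = zip(*successors.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return out

rng = random.Random(0)
natural = list("the cat sat on the mat and the cat ran")
model = train_bigram(natural)

synthetic = sample(model, "t", 200, rng)
refined = train_bigram(natural + synthetic)  # the "self-training" pass

# Synthetic data can only reweight transitions the model already knew:
assert all(b in model[a] for a, b in zip(synthetic, synthetic[1:]))
```

Retraining on `natural + synthetic` shifts the counts (refinement) but cannot introduce a transition absent from `model`, which is the commenter's point: natural data expands what the model knows, synthetic data only redistributes it.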

JazevoAudiosurf