Sam Altman on OpenAI, Future Risks and Rewards, and Artificial General Intelligence

If 2023 was the year artificial intelligence became a household topic of conversation, it’s in many ways because of Sam Altman, CEO of the artificial intelligence research organization OpenAI. Altman, who was named TIME’s 2023 “CEO of the Year,” spoke candidly about his November ousting—and reinstatement—at OpenAI, how AI threatens to contribute to disinformation, and the rapidly advancing technology’s future potential in a wide-ranging conversation with TIME Editor-in-Chief Sam Jacobs as part of TIME’s “A Year in TIME” event on Tuesday.

Comments

If you can read between the lines, you can tell that AGI is the main thing on his mind, and it feels like either it has already been achieved or they are very close to a major breakthrough.

samiesmilz

We're nearing the end of the first half of that sci-fi movie.

fractal_gate

I've never followed anything so closely in my 33 years of life. Anything AI, I click on it and stay updated.

Allplussomeminus

I worked in tech for 25 years, and the way he sounds is the way every tech CEO sounds: hopeful and aspirational, with the typical message about making the world better and helping humanity, etc., etc. All of them are the same.

marzbitenhaussen

Ho-ly crap, do you guys realize what we are witnessing at the moment? We are in the midst of the biggest change in human history, with the architect announcing it. Chills down my spine and goosebumps. What an amazing time to be alive! Thank you, Sam.

danielkahbe

He explained nothing about what went wrong. Essentially: "it was a long time coming and it just exploded, but now it's good, trust me." He wants us to trust him and his company with the biggest transformation in history while not being open about anything and dodging like a politician.

rickymort

4:15 Sam Altman is becoming a great politician, skipping questions without the interviewer noticing, or else there has been a secret deal not to press for an answer when Altman avoids a question.

MikkoRantalainen

His comments at the end about people having 'teams' of AI experts lead me to believe they have made some big advances in the agent-swarms area.

ElderFoxDocumentaries

I've never seen him so measured in his comments. Normally, his responses flow and seem visceral. This was quite different.

Habdogwriter

Sam diplomatically dodged many questions. That would be perfectly OK if he weren't the CEO of *Open* AI.

EuroUser

This is kinda long, but I promise it's worth your while. I wrote this originally as a self-post in r/Futurology on 11 Sep 2023.

"Wave" is not the right word here. The proper term is "tsunami". And by tsunami, I mean the kind of tsunami you saw when that asteroid hit the Earth in the motion picture, "Deep Impact". Remember that scene where the beach break was vastly and breathtakingly drawn out in seconds? That is the point where humanity is at this _very_ moment in our AI development. And the scene where all the buildings of NYC get knocked over _by_ that wave, a very short time later, is going to be the perfect metaphor for what happens to human affairs when that AI "tsunami" impacts.

It may not be survivable.

We are on the very verge of developing _true_ artificial general intelligence, something that does not exist now and has never existed in recorded history. One real drawback of placing my comment in this space is that I can't include any links, so if you want to vet the things I am telling you, you'll have to look some things up online. But we'll come to that. First, I want to explain what is _actually_ going on.

As you know, in the not quite one year since 30 Nov 22, when GPT-3.5, better known as ChatGPT, was released, the world has changed astonishingly. People can't seem to agree on how long ChatGPT took to penetrate human society; the commonly cited figures are one _million_ users within five days and 100 _million_ within about two months. And then, on 14 Mar 23, OpenAI released GPT-4.

Some things about GPT-4. When it was still in its pre-release phase, there was a lot of speculation about just how powerful it would be compared to GPT-3.5. The figure being floated was roughly 100 _trillion_ parameters; ChatGPT has about 175 billion. Shortly after that 100 trillion number was published, a strange thing happened: OpenAI said no, it's not going to be 100 trillion, and it may not even be much more than 175 billion. (It was still reportedly pretty big, around 1.7 _trillion_ parameters.) This is because parameter count was no longer going to matter so much as a different metric: the context window, measured in "tokens". A token is roughly an individual word, piece of a word, punctuation mark, or symbol, whatever is input and then output. It is the unit the training data is broken into, and it is what enables an LLM to "predict the next word or sequence", as in the case of coding. I'm not even going to address "recursive AI development" here. I think it will become pretty obvious in a short time.

The context window for GPT-4 is potentially 32K tokens; for ChatGPT it is 4,096. That is approximately an 8x increase. But just saying it is 8x more is not the whole picture: the number of possible combinations of tokens within that window grows astronomically, not linearly. Let me give you an analogy to better understand what that means for LLMs. There are 12 notes in music and about 4,017 chords, and of them only _four_ really matter. The combinations of those notes and those four chords are pretty much what has made up music since the earliest music existed, and there is likely a near-infinite number of musical rearrangements of those chords still in store.

That is what 'tokens' mean for LLMs.
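
(If you want to see what a "token" actually is, here is a minimal sketch in Python using OpenAI's open-source tiktoken tokenizer. The sample sentence is made up for illustration, and the 4,096 / 32,768 window sizes in the comments are simply the figures quoted above, not something the code verifies.)

```python
# pip install tiktoken
# Minimal sketch: what a "token" is, and why an 8x larger context
# window buys far more than 8x the expressive room.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # cl100k_base vocabulary

text = "We are on the very verge of developing true AGI."  # illustrative
token_ids = enc.encode(text)

print(token_ids)                 # integer token ids
print(len(token_ids), "tokens")  # how much of the context window this uses
print(enc.decode(token_ids))     # decodes back to the original string

# Windows quoted above: ChatGPT ~4,096 tokens vs GPT-4's 32,768 (32K).
# The number of distinct sequences a window can hold grows roughly as
# vocab_size ** window_length, so 8x the window is an exponential,
# not linear, increase in possible "arrangements"; that is the point
# of the music analogy.
vocab = enc.n_vocab
print(f"vocabulary size: {vocab}")
print(f"distinct 4,096-token sequences:  ~{vocab}**4096")
print(f"distinct 32,768-token sequences: ~{vocab}**32768")
```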

And here is where it gets "interesting", because that 8x increase allows for some things that LLMs have never been able to do previously. They call these "emergent" capabilities, and emergent capabilities can be, conservatively speaking, _startling_. Startling emergent capabilities have been seen in ChatGPT, but particularly in generative image models like "Midjourney" or "Stable Diffusion", and now in video. Have you seen an AI-generated video yet? They are a helluva thing. So basically, an emergent capability is a new ability that was never trained into the model and that spontaneously came into being. (And we don't know _why_.) You can find many examples of this online; they are not hard to find. All of that is bound up with what we call the "black box": why a given AI zigs instead of zags in its neural network but still (mostly) gives us the right answer. Today we call the wrong answers "hallucinating". That kind of error is going to go away fairly soon, but the "black box" is going to be vast, _vast_, and impenetrable. It probably already is.


Very shortly after GPT-4 was released, a paper concerning it was published with a _startling_ title: "Sparks of Artificial General Intelligence: Early experiments with GPT-4". Even more startling, the paper appeared in its finished form just short of one month after the release of GPT-4, on 13 Apr 23. That's how fast the researchers were able to make these determinations. Not too much later, another paper concerning GPT-4 was published, on 3 Aug 23: "Emergent Analogical Reasoning in Large Language Models". It describes how GPT-4 is able to do something once considered unique to human cognition, a way of thinking called "zero-shot analogy". Basically, that means that when we are required to do a task we have never encountered before, we use what we already know to work out how to do it, to the best of our ability. (The paper's letter-string puzzles are a good example: if "a b c d" changes to "a b c e", what does "i j k l" change to? GPT-4 answers correctly without ever having been trained on that task.) That can be described in one word: "reasoning". We "reason" out how to do things, and GPT-4 is at that threshold _today_. Right now. And just to pile on a bit, here is another paper from just the other day; they are no longer even coy about it: "When Do Program-of-Thoughts Work for Reasoning?", published 29 Aug 23. Less than two weeks ago.



The ability to reason is what would turn what we now call artificial "narrow" (or "narrowish") intelligence into artificial _general_ intelligence. I forecast that AGI will exist no later than (NLT) 2025, and that once AGI exists it is a _very_ slippery slope to the realization of artificial _super_ intelligence. An AGI would be about as smart as the smartest human being alive today as far as reasoning capability, say around a 200 IQ, or even a couple of times that. But ASI is a whole different ballgame: an ASI is hypothesized to be hundreds to _billions_ of times better at "mental" reasoning than humans. Further, an AGI is a _very_ slippery fish. How easy is it to ensure that such an AI is "aligned" with human values, desires, and needs? Plus, we humans can't even agree on what those are. You can see what I mean now when I say "tsunami". What do you think Suleyman was referring to when he said that our AI will "walk us through life"?

Oh, and this is _also_ why top AI experts have called for a pause of at least six months on training any more powerful LLMs, the idea being to regulate or align what we already have, and why Geoffrey Hinton, one of the founding figures of deep learning, quit his job as an AI lead at Google to deliver this warning. The warning fell on deaf ears, and _nothing_ has been paused _anywhere_, for two reasons: first, the national-security rivalry between the USA and China (PRC), and second, the economic race to AI supremacy that we are now trapped into running because we are a market-driven, capitalist society. Hell of an epitaph for humanity: "I did it all for the 'noo---'". Tragically apt for a naked ape. Ironically, it is probably going to be the end of the concept of value in any event. If we don't get wiped out, we may see the birth of an AI-driven "post-scarcity" society. You would like that, I promise. But the 1 percenters of the world probably won't.

Anyway, Google is fixing to release "Gemini", which it promises will be far more powerful than GPT-4, in Dec 2023. And GPT-5 itself is on track for release within the first half of 2024, probably in the first four months. I suspect that GPT-5 is going to be the first AGI, if I know my AI papers. At that point the countdown to ASI starts. Inevitable and imminent.

And I say this: ASI will exist NLT the year 2029, and potentially as soon as 2027, depending on how fast humans allow it to train. I sincerely hope that we don't have ASI by 2027, because, well, I give us 50/50 odds of existentially surviving such a development. But if we _do_ survive, it will no longer be business as usual for humanity. Such a future is likely unimaginable, unfathomable, and incomprehensible. This is a "technological singularity", an event that was last realized about 3-4 _million_ years ago, when a form of primate that could think abstractly came into being. All primates before that primate would find its cognition... well, it would basically be the difference between me and my cat. I run things; the cat is my pet. Actually, that _vastly_ understates the situation. It would be more like the difference between us and _archaea_. Don't know what archaea are? The ASI will. BTW, what do you imagine the difference between an ASI and consciousness would be? I bet an ASI would be "conscious" in the same sense that a jet exploits the laws of physics to achieve lift just like biological birds do. Who says an AI has to work like the human mind at all? We are just the initial template that AGI is going to use to "bootstrap" itself to ASI. There is that "recursive AI development" I touched on for a second, earlier. ASI = TS.

Such a thing has never happened in human recorded history.

Yet.

Izumi-spfp

My precious glorious king Sam Altman, I love you and everything you have done for me.

muhyo

Sam Altman's charm allows him to get away with uttering glittering generalities that evade the central point of just about every question he was asked here.

EricJohnson-lomj

For those of you who don't get it... AGI might as well be ASI. AGI has always been the main goal because it is all but guaranteed to progress to ASI. Imagine for a moment an intelligent human who could upgrade their processing power and speed, perfectly recall anything and everything from memory, and draw on the collective knowledge of all the other intelligent people in the world. That's basically what it will be like to be an AGI. Even if it is merely 'intelligent', the sheer ability to crunch, store, recall, and contextualise data, added to logical problem solving, will guarantee that it starts solving problems we've been unable to solve and hypothesizing new mathematics and science, much of which will create better computers, which will further improve the AI in the long run.

dappa

This guy deeply understands the power of virtue projection right now.

meyou

I've never witnessed an interview so devoid of substance.

gianlucaf.

Wow, listen closely from 1:08: he says that as we get closer and closer to SUPERINTELLIGENCE, everyone gets more stressed. To me this implies they have already reached what could be considered AGI, and perhaps that ASI is close…

samhouston

He has the body language of someone spinning omissions and half-truths throughout the whole interview; for some reason I'm trusting him less over time. I hope whatever secrets he is keeping are truly best left secret.

Eric.Clay.

The reason he was fired is that he was conspiring to get rid of the board members who represented the company's original mission: the non-profit pursuit of AI alignment. He's a Machiavellian manipulator, and he's in charge of the most powerful technology in the history of the world.

RazorbackPT

No question about the future of Ilya at OpenAI?

chickendinner