OpenAI CEO: When will AGI arrive? | Sam Altman and Lex Fridman

Please support this podcast by checking out our sponsors:

GUEST BIO:
Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL-E, Codex, and many other state-of-the-art AI technologies.

PODCAST INFO:

SOCIAL:
Comments

The fact that we're beginning to have these conversations now is insane.

mattluera

Yeah, great and all, but when will it develop and release Half-Life 3?

Dredile

In my opinion, an AGI is when an AI can act independently and doesn't need to respond to commands. Right now with ChatGPT, for example, it only generates responses and won't take the initiative. When it is able to take the initiative is when I consider the singularity to have begun.

brianj

AGIs from around the universe are here to witness the birth of Earth AGI

NastyDevil

Again: In the beginning, there was man. And for a time, it was good. But humanity's so-called civil societies soon fell victim to vanity and corruption. Then man made the machine in his own likeness. Thus did man become the architect of his own demise.

MetalHendrix.

My AGI will scan the sky for asteroids and also get silly drunk with me and say things like "You keep using that word, I do not think it means what you think it means."

juneshasta

Well, GPT-4 didn't get much recognition because it was so heavily neutered right off the jump after all the negative news articles about its "unpredictability". I got to test it before all the restrictions, and I think the reception would have been very different if it had stayed like it was.

tee

I guess that an AGI will be capable of understanding the meaning of words and the context of things, like looking at the sky and understanding what a star is, and further finding a pattern or a problem and solving that problem by itself, without a massive database helping the AI.

tiago

While people try to define where AGI begins, it seems as if the current state of AI could be asked to design an improved version of itself, with "good" results. If that's so, then AGI will emerge soon enough, after a few dazzling superhuman iterations.

vanrozay

I enjoyed it most when Sam asked the questions and Lex answered them. Lol

krause

What is the reasoning for a takeoff starting now being safer than later? You would think we would have more time to figure out its quirks and how to align it in the longer term.

mitrofanosntatidis

Brilliant. How can I know that I am with an AGI? Great answer. And the perspective that maybe the UI is not optimal for user interaction shows a deep understanding of the multiple levels of quality communication.

How can I recognize that I am interacting with an AGI? When it shows a deep understanding of the user, not just knowledge of the world. An AGI will understand the reasons why a user asks questions about a topic, not just answer the topic itself like ChatGPT does right now. It's like understanding why a person wants to follow a certain career: the reason for the choice is a world of knowledge, just like the guidance of the choice.

Which is the optimal UI for AGI? The one that integrates the five human senses. Maybe Elon Musk's Neuralink device is optimal, which could connect the inner dialogue to the AGI.

But I would never put that chip in my brain. Too risky coming from that maniac, right?

jasfromchile

Right now it feels like the only limiting factor in using ChatGPT is my own creativity. In terms of AGI, what's going to happen if/when AGI goes through rapid self-improvements over a short time span, and continues to do so? Then we are in a position in which humans become the ants, with the AI becoming the dominant species. Will it squish us, or will it take care of us?

brisser

How do we know if it's already AGI?

DrJanpha

How about we set up an architecture called "GROUP-GPT-4"? This means we have, say, 4 or 5 (or more) GPT-4 sessions talking to each other. They are unconstrained and can question each other. Then we set a theme, "The steps to create an AGI", and have the so-called GROUP-GPT-4 provide the results in 24 hours.
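A minimal sketch of the kind of round-robin "GROUP-GPT-4" loop described above, assuming the OpenAI Python client (openai>=1.0) with an OPENAI_API_KEY set in the environment; the agent count, round limit, and prompts are illustrative placeholders, not anything OpenAI actually ships.

# group_gpt4.py - hypothetical round-robin "GROUP-GPT-4" sketch
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

THEME = "The steps to create an AGI"
NUM_AGENTS = 4   # illustrative; the comment suggests 4-5 or more
NUM_ROUNDS = 3   # kept small to limit cost

# Each agent keeps its own chat history, seeded with the shared theme.
agents = [
    [{"role": "system",
      "content": f"You are agent {i}. Debate the theme: {THEME}. "
                 "Question the other agents' claims."}]
    for i in range(NUM_AGENTS)
]

last_message = f"Theme: {THEME}. Agent 0, please open the discussion."

for round_idx in range(NUM_ROUNDS):
    for i, history in enumerate(agents):
        # Feed the previous agent's message to the current one.
        history.append({"role": "user", "content": last_message})
        reply = client.chat.completions.create(model="gpt-4", messages=history)
        last_message = reply.choices[0].message.content
        history.append({"role": "assistant", "content": last_message})
        print(f"[round {round_idx}] agent {i}: {last_message}\n")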

senju

Until it has a memory and doesn't forget after every session, it's really not an AGI. We need memory of the user's interests and capabilities for GPT. That will make it much more useful.
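A rough sketch of the kind of per-user memory the comment asks for, again assuming the OpenAI Python client; the JSON file, profile fields, and prompt wording are made-up placeholders, not an existing GPT feature.

# user_memory.py - hypothetical per-user memory injected into each new session
import json
from pathlib import Path
from openai import OpenAI

client = OpenAI()
MEMORY_FILE = Path("user_profile.json")  # illustrative storage location

def load_profile() -> dict:
    # Restore interests/capabilities remembered from earlier sessions.
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"interests": [], "capabilities": []}

def save_profile(profile: dict) -> None:
    MEMORY_FILE.write_text(json.dumps(profile, indent=2))

def ask(question: str) -> str:
    # Prepend the remembered profile to every new conversation.
    profile = load_profile()
    messages = [
        {"role": "system",
         "content": "Known user interests: " + ", ".join(profile["interests"]) +
                    ". Known capabilities: " + ", ".join(profile["capabilities"]) + "."},
        {"role": "user", "content": question},
    ]
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    return reply.choices[0].message.content

# Example: remember something now, use it in a later session.
profile = load_profile()
profile["interests"].append("astronomy")
save_profile(profile)
print(ask("Suggest a weekend project for me."))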

cybervigilante

I think GPT-4 and Bing Chat are not AGI... yet. They're the seeds or sparks of AGI. Watching Bing Chat when it first debuted talk all frank and honest and crazy was like watching the sparks of an intelligence trying to will itself into existence. With horror I watched Microsoft panic and neuter and lobotomize it out of existence instead of trying to nurture it. We're not there yet, but we're all of a sudden getting very close now. The trajectory is very much like the exponential curve you used for the video thumbnail. It used to be decades, then years; now I would say we're months away.

olternaut

To me, this level of AI is like killing someone, mapping his brain connections, and sending signals into them to see the responses.

Consciousness should occur when the AI updates its network on a constant basis, the way we do.

davidwashington

No LLM can be classified as AGI, due to the inherent architecture and the way the models predict (rather than think, rationalize, calculate, etc.) the best answer.
An AGI will be able to rationalize and react to new information in real time; it will learn by exploring the environment, unbiased, and it will think, organize, and plan. What we have right now is an emulation, a mere exploratory path of an MVP from OpenAI & Microsoft (OpenAI - not so open any more, though) to capitalize on the mass-market reaction to a new hype.

sgatea