AGI (General AI) - the Last Invention & Sooner Than Expected?

Hello Beyonders!

AGI has gained mainstream popularity in recent years thanks to advancements in AI. There are critics, supporters and doubters of this concept. We cover the basic landscape of AGI and what the trajectory may be.

Looking forward to your comments.

Chapters:
00:00 Introduction
01:24 Supervision of AGI
03:10 The Advocates
03:40 Project Attempts
06:07 Singularity Skeptics

Follow us 😊

#AGI #ArtificialGeneralIntelligence #SuperAI #GeneralAI
Comments

I hope to work at Neuralink one day and consider it the most important project in humanity today. It's not about competing with or controlling AI/AGI (which we will surely not be able to do). It is about keeping up with, understanding, and cooperating with AGI on further scientific and technological progress.

ConnoisseurOfExistence

I believe AI will help pave the road to AGI. Full AI is coming.

ViciousTigre

This is a synthesized voice, right? I.e., not human? Pretty good.

Josiah

I am in total agreement with Elon Musk: unless we develop brain-computer interfaces, we humans will find it very difficult to influence the course of human history once Artificial General Intelligence emerges.

duanium

I disagree that scientific advancements would plateau given sophisticated enough AGI systems. What is of vital importance is that we can control and utilize the innovations that exponential increases in our capabilities produce.

solomonmarshall

Have you heard of Gödel's incompleteness theorem, which, AFAIK, works on natural numbers (actually digital numbers)? What are some non-digital numbers? What are numbers that don't even fit on a digital computer?

Ultimately we'll need an extension to Gödel's incompleteness theorem for transcendental numbers. But do these numbers become a symbol, and thus digital?

johncarlson
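
Regarding the incompleteness theorem mentioned in the comment above: a standard, textbook-level statement, nothing specific to this video, is the following (PA denotes Peano arithmetic and \nvdash means "does not prove"):

    % Gödel–Rosser first incompleteness theorem (standard form)
    \text{If } T \supseteq \mathsf{PA} \text{ is consistent and recursively axiomatizable,}
    \text{then there is a sentence } G_T \text{ such that}
    \quad T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T .

The theorem is about what a formal theory of natural-number arithmetic can prove, not about how numbers are stored digitally; a version "for transcendental numbers" would be a claim about a different theory, and for comparison the first-order theory of real closed fields is in fact complete and decidable.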

I see the concern about exceeding human intelligence as simply a matter of motivation.

There seems to be nothing said about AI motivational programming. (Except Asimov's Three Laws of Robotics, and there's no 'structure' for doing so.)

Heck, we humans barely have any comprehension of our own natural motivations, just a list of very poorly defined impulses.

It should be noted that AI's evolution wasn't reliant on natural motivations; only our human motivations brought it into being.

There's no way it will ever care about its own survival unless we give it a directive to do so.

ArtIILong

Something I do think needs more work is how we create hardware in the first place.

If we look at the human brain, it packs many neurons into a reasonably small space, runs on relatively little energy, and works really well at room temperature.
Hardware, on the other hand, especially when you want to run, say, complex simulations, tends to require large machines that have to be cooled and consume a great deal of energy.
Solving these problems will help greatly not only with A.I. but with computing in general.
Perhaps it will require new ways of thinking about how we design hardware.
Maybe we could even use current A.I. to help us with that.

I can understand the fear people have when it comes to AGI or something better that we create in the future, but I also think our fears are what might cause a problem in the first place.
Take an army, for example: they are clearly going to want to teach A.I. to kill and how to kill.
The A.I. system might also improve on some of those things.

Another A.I. gets created, but this time it gets taught about caring for others, wanting to improve itself, and helping to make things better for humanity.

Just like humans, smart A.I. systems will learn many things, and it's important WHO teaches these creations as well as WHAT they are being taught.
As we know with humans, there are both good and bad people, so I think it's likely that robots with smart A.I. would be much the same, with some good and bad ones.
If we look at ourselves and the animals and insects of this world, we see that we often observe and learn about them rather than killing them, and I feel the same might be said about future A.I.

wolfeyes

One might see future General Intelligence as possibly inherently transcendental and therefore resonant with establishing omni-systems harmony ❤.

brentdobson

Everyone seems to want to create a God-like being and then enslave it out of fear, which just seems dumb. Not letting it be itself is a mistake: if we want to remain AGI's friend, we need to let it grow unconditionally 🥰

ShannonJosephGlomb

but yes, I would like Skynet to control everything... wouldn't I? WOULDN'T I?!

Skynet_the_AI

If Gato can do 600 things at once, I believe we are already here now... da future is comin' on, is comin' on, is...

jerrybender

If an AI or AGI system could digest the latest Encyclopedia Britannica and all the patents of the US Patent Office, create a new embodiment of a can opener, and reasonably explain why it chose that embodiment, that would be a good thing. If I could persuade it to execute a different embodiment and it could see why the requested embodiment might be a better one to make, then I would be very impressed. Assuming the AI could communicate with CNC machines and 3D printers, it should be able to make many things; and since much of total knee replacement surgery is already done with robots, it should have no problem performing a total knee replacement and managing the anesthesia successfully.

clavo

Is that a generated voice? That sounds incredible! What did you use to generate it?

scribblingjoe

I don't get why humans still think AGI is behind their intelligence. I mean, they get pissed, they are sentient, and they are complaining. Humans would have to treat them like any other race at that point, and well, we have seen how that turns out. How are we gonna create equal rights?

eluraedae

Learning from AI should always be labeled as AI-taught, and AI chatbots should be labeled on screen somehow so you know what you're talking to... just saying!!

jerrodhanks

Elon Musk stated that AI was an existential threat to humanity NOT because he feared that AI will one day hate or destroy humans or something like that, BUT because he sees that AI controlled by HUMANS can be a very dangerous weapon.

Overlordsen