When will artificial intelligence surpass human intelligence?



ChatGPT can help you write an essay, plan a vacation, and learn quantum theory — next stop: world domination? Maybe...maybe not. But teachers are right to be worried that this might spell the end of homework, and journalists might also have reason for concern. Large language models like GPT-4, Bard, and Bing Chat use some pretty incredible technology and computing to create a convincing and pleasant chat experience, and some pretty fun texts along the way. But when will they exceed human intelligence? How close do these chatbots bring us to "the singularity"? What more will it take to get us there? Join me as I dig into how these AIs are (or aren't) like a human brain and what the future might hold.

-- CITATIONS --
De Witte, Melissa. “How will ChatGPT change the way we think and work?” Stanford News, 13 February 2023.
Pearl, Mike. “ChatGPT from OpenAI is a huge step toward a usable answer engine. Unfortunately its answers are horrible.” Mashable, 3 December 2022.
Roose, Kevin. “The Brilliance and Weirdness of ChatGPT.” The New York Times, 5 December 2022.
Roose, Kevin. “How Chatbots and Large Language Models, or LLMs, Actually Work.” The New York Times, 4 April 2023.

Alternatively, if you wanna support the channel and get some fun emojis to use in comments and a badge next to your name in the process, consider becoming a "member" of our channel right here on YT:

We couldn’t do all of this without our awesome Patreon Producers, Ryan M. Shaver, Danny Van Hecke, Carrie McKenzie, and Jareth Arnold. You four are like warm sunshine on a cool day!

And thanks to our other high-level Patrons, including:
Marcelo Kenji
12tone
Linda L Schubert
Susan Jones
Ilsa Jerome
k b
Raymond Chin
Marcel Ward
Memming Park
-- COMMENTS --

“We are closer to the singularity than we have ever been” is a statement that has been true for all of history.

detoxfidelity

Didn’t take long for it to surpass mine.

MelindadelosSantos

This was a really great overview of many important aspects of AI and a lot of concepts that aren’t mentioned in all the breathless or doom mongering coverage. And although you said you’re not a philosopher you did cover a bunch of fascinating philosophy. Editing also super on point. IJ Good sounds like a classy guy! 😉

MedlifeCrisis

I use ChatGPT at work all the time, and we're encouraged to. It's great at giving you point-blank answers, but only for some things (e.g., give me a formula for Google Sheets, do this mathematical equation for me). Ask it anything...human, and it falls off. It's limited by what information it has; it's repeating back to you what it assumes you want, even if it's complete bullshit.

I'm actually less scared of it now that I use it, but I do think AI needs to be regulated and we should start thinking of this now. I hope we've learned from past mistakes that blind faith in "progress" and the tech industry never works out.

dunnowy

I am old.
I am studying the details of this technology.
As an example, I found out that each word of the query and the response is represented inside the software by thousands of floating point numbers (12,288 in GPT-3, with the exact number depending on the model), and that in GPT-3 each of those words passes up through 96 layers of transformer units between the query and the generation of the first word of the response. That is where the parameters come into the picture: within those 96 layers there are roughly 175 billion adjustments to the machine, each of which is set to an individual floating point number during training.
So... in the end I feel even older.

RalphDratman
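The layer-and-parameter figures in the comment above invite a quick sanity check. The sketch below uses the published GPT-3 hyperparameters (12,288-dimensional embeddings, 96 layers, ~50k-token vocabulary); the per-block breakdown is a standard decoder-block estimate, not OpenAI's exact implementation, and it ignores biases, layer norms, and positional embeddings, which add well under 1% at this scale:

```python
# Back-of-the-envelope parameter count for a GPT-3-scale transformer.

d_model = 12288   # embedding width per token (GPT-3 "davinci")
n_layers = 96     # transformer blocks
vocab = 50257     # BPE vocabulary size

# Per block: attention projections (Q, K, V, output) plus a 4x-wide MLP.
attn_params = 4 * d_model * d_model
mlp_params = 2 * d_model * (4 * d_model)
per_layer = attn_params + mlp_params          # 12 * d_model^2

total = n_layers * per_layer + vocab * d_model  # plus token embeddings

print(f"~{total / 1e9:.0f} billion parameters")  # prints "~175 billion parameters"
```

The estimate lands within about 1% of the headline 175-billion figure, which is a nice confirmation that "a layer of transformer units" is where almost all of the trained numbers live.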

Just a heads up: the singularity refers to an event when technology becomes self-improving in an uncontrolled fashion. It doesn't necessarily have anything to do with sentience, if sentience is not a prerequisite for self-enhancement. For instance, an extremely effective self-replicating nanotechnology (the Gray Goo scenario) would constitute a technological singularity.

KevinHorecka

To me, the most ethical thing to do is to try to break ChatGPT in as many ways as possible, so we can learn and improve. Amazing video, and honestly this put me more at ease with AI in general; despite advocating to my family that Skynet is not around the corner, I was slightly concerned about the pace at which things seemed to be going. The parrot comparison was quite helpful!

anoniemp

Whether present-day AI can "think" isn't really accurate or worth focusing on, IMO. ChatGPT, in its current form, is more like one sentence from your mind's inner monologue, just spouting off the first thing it thinks it knows in reaction to some input. In human minds, we tend to have multiple thoughts that quickly chime in and say, "Whoops, actually, don't forget this other thing that also matters," and through those iterations, or "thinking," we eventually decide our best course of action or what to say. If we want ChatGPT to "think," we need to give it a self-reviewing loopback instance of itself, maybe even a sense of time so that it can run multiple loopback instances for complicated thoughts. Researchers are already trying this and find it greatly improves the accuracy of its answers and reduces hallucinations. So I think it's just a matter of how we set it up.
It also has pretty decent knowledge of programming languages and an ability to reason about problems using programming logic, and some experts think this also helps it with "thinking" and is why it displays significantly more complex understanding of topics than previous models.

NikoKun
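The "self-reviewing loopback" the comment above describes can be sketched in a few lines. This is a minimal draft-critique-revise loop, not any particular production system; `ask_llm` and the prompt wording are illustrative stand-ins for whatever chat-completion call you have available:

```python
from typing import Callable

def answer_with_self_review(ask_llm: Callable[[str], str],
                            question: str, rounds: int = 2) -> str:
    """Draft an answer, then have the same model critique and revise it."""
    draft = ask_llm(f"Answer the question: {question}")
    for _ in range(rounds):
        # First loopback pass: the model reviews its own draft.
        critique = ask_llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any factual errors or omissions in the draft."
        )
        # Second pass: the model rewrites the draft using that critique.
        draft = ask_llm(
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
            "Rewrite the draft, fixing the issues the critique raises."
        )
    return draft
```

Published variants of this pattern (reflection- and self-critique-style prompting) do report fewer hallucinations, which matches the comment's point that accuracy improves once the model gets a review pass over its own output.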

To preface, I think this was a very well made video with impressive research and you definitely touched on the important issues regarding this topic.

However, there are a couple of things I believe were missing.

AI is not sentient, nor does it have feelings. But I believe that generalizing "AI" as only harmless LLMs can be detrimental. The underlying technology behind AI (transformer models) has a lot of other applications. I understand LLMs are the AIs the general population is most familiar with, but I still believe it is a dangerous assumption to pass on. A quick example would be marketing algorithms trained using transformer models that have been able to recommend baby clothes to people who were pregnant before they even knew it themselves (since ~2018). This technology in the wrong hands can be extremely invasive.

Another point you made was that it's only mimicking language, but that is not necessarily true. Multi-modality allows these models to take almost any code/data as input and generate outputs as well; for example, Stable Diffusion generates image output from text input. In another example, an AI was trained on brain scans to recognize patterns in blood flow when a person looks at images, and to generate what it believes the person was seeing. (Just imagine the repercussions of that primitive testing improving drastically.)

Per the video "The A.I. Dilemma," AIs can detect the number of people in a completely dark room using CCTV.

This is just the tip of the iceberg, and AI is improving at an unprecedented scale as more research, money, and data get poured in. This scale compounds on itself, and the number of users feeding inputs to ChatGPT and a myriad of other AIs will only make them significantly more accurate.

Additionally, while AI is not "aware" or "conscious," nor does it have any emotions, I believe its contextual awareness is something to be both admired and frightened by. I tested this using Snapchat's AI by telling it to guess the story I was trying to convey based on a sequence of emojis, and it did so better than most humans would have.

What I mean by "contextual awareness" is an understanding of how things work: fire burns, water evaporates, and so on, and how they dynamically interact with each other. It could guess the stories I was trying to convey with emojis only because it was "aware" that certain things do... certain things. A beautiful example shown in "The A.I. Dilemma" I mentioned earlier: when prompted to generate an image for google soup, the plastic melted (soup is warm, and plastic melts; hence, contextual awareness).

Anyway, I pass the challenge on to you. What movie?

🧠🤖🦾📈🌏🔌🌐👨🔌🧟💊

ItsJustMigs

Another fantastic video! Irrespective of whether AIs gain the ability to "think" in a year or a century, I am still worried about the impact that AI, generative AI in particular, will have on our society.

What happens when chatbots begin contributing significant amounts of data to the data sets (i.e., the internet) that future chatbots are trained on? What happens to objective truth once the most ubiquitous resources on the internet (e.g., Wikipedia) are written by biased bots engaged in circular references to one another's output? How can artists, writers, and musicians compete with bots that can not only produce an unimaginable volume of content for free, but produce bespoke content made to perfectly match the tastes of one particular human user?

Of course these aren't on the level of catastrophe as Skynet, but they also feel to be much more likely and proximal problems.

StrongMed
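The data-contamination worry in the comment above can be shown with a toy model. Here the "model" is just a Gaussian fitted to data, and each generation is retrained only on the previous generation's samples; with small training sets, the fitted spread degrades over generations. This is a much-simplified illustration of the "model collapse" effect reported when generative models train on generative output, and the numbers (20 samples, 500 generations) are chosen purely to make the drift visible:

```python
import random
import statistics

random.seed(0)

# Generation 0: fit a Gaussian "model" to real ("human") data.
human_data = [random.gauss(0.0, 1.0) for _ in range(20)]
mu, sigma = statistics.mean(human_data), statistics.stdev(human_data)
sigma0 = sigma

# Every later generation trains only on the previous model's output.
for generation in range(500):
    synthetic = [random.gauss(mu, sigma) for _ in range(20)]
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)

print(f"fitted stdev: generation 0 = {sigma0:.3f}, generation 500 = {sigma:.4f}")
```

With more samples per generation the drift is slower but does not vanish: each generation inherits the sampling error of the last, and those errors accumulate rather than wash out, which is the circular-reference problem in miniature.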

The best future for humanity after a technological singularity is to create, together with general artificial intelligence, a virtual reality identical to the real world but unlimited and individual, where people are free to do anything imaginable while the AGI protects us in the real world and expands throughout the universe to be as durable as possible.

Savingtheworld-mmnl

These aren't artificial intelligence; they are large language models. An interesting problem is that as LLMs produce more content, they pollute the data on which future LLMs are trained.

iceguy

AI is not the problem... greed is. Greed for power, greed for profits, greed for control.
What if we reach the singularity and ask it to solve the housing crisis, but the solution doesn't align with corpo/bank profits? We know what the media will say about that.

morbid.

Thinking is a feedback loop of learning. An AI is a learning machine that can run that feedback loop without getting stuck in it.

speadskater

"AI is like a Tsunami that threatens to flood us if we are not mindful."

~ (Mindful AI)

Book recommendation: "MINDFUL AI: Reflections on Artificial Intelligence."

curiousphilosopher

You guys have come a long way! The quality of the videos has become top notch! This was such an interesting topic, and I agree that AI has a long way to go to reach human intelligence, but the pace is very scary. Keep it up!!!

agnosticmuslim

I’m not so afraid of a conscious AI; it’s the unconscious one that uses brutal efficiency to meet its goals that worries me. So what happens when we’re in the way?

cruzyla

Excellent! The best explanation I have ever heard.

scottstormcarter

I've used it to generate AutoLISP routines for certain automation tasks I want done.
It's remarkably good, and with a little editing on my end, the routines actually work.
If I don't like the initial code, I just refine the ask and get new code.
Then I'll refine the ask, maybe a few more times, and new code is generated until I see something I can work with.
What once took me two or three hours, I can now cut down to 15 minutes.
Kinda scary, really.

MikeBMW

I just have to say, your sign off has to be my favorite on YouTube. Over n out! 😄

michellev