Ray Kurzweil & Geoff Hinton Debate the Future of AI | EP #95

In this episode, recorded during the 2024 Abundance360 Summit, Ray, Geoffrey, and Peter debate whether AI will become sentient, what constitutes consciousness, and whether AI should have rights.

Ray Kurzweil, an American inventor and futurist, is a pioneer in artificial intelligence. He has contributed significantly to OCR, text-to-speech, and speech recognition technologies. He is the author of numerous books on AI and the future of technology and has received the National Medal of Technology and Innovation, among other honors. At Google, Kurzweil focuses on machine learning and language processing, driving advancements in technology and human potential.

Geoffrey Hinton, often referred to as the "godfather of deep learning," is a British-Canadian cognitive psychologist and computer scientist recognized for his pioneering work in artificial neural networks. His research on neural networks, deep learning, and machine learning has significantly impacted the development of algorithms that can perform complex tasks such as image and speech recognition.

Read Ray’s latest book, The Singularity Is Nearer: When We Merge with AI

----------

This episode is supported by exceptional companies:

----------

Topics:

0:00 - INTRO
1:12 - The Future of AI and Humanity
2:33 - The Unknown Future of AI
3:19 - AI Uncovering the Secrets Within
8:11 - Fountain Life: The Future of Health
10:30 - The Ethics of Artificial Intelligence
15:06 - Ethical Dilemma: AI Rights
18:31 - Viome: Unlocking the Power of Your Microbiome
21:01 - Are We Close to Superintelligence?
25:00 - The Dangers and Possibilities of AI
27:40 - The Risks of Open Source Models

--------------------------------------------

Connect with Peter:

Listen to the show:
Comments

I find Geoff Hinton one of the most elegant and balanced experts

atheistbushman

Living for an extended period won't grant you invincibility; rather, it ensures that aging no longer dictates your mortality. Embracing the power to decide when you depart this world isn't merely about eternal life, but about embracing longevity on your own terms. This discussion holds immense significance for all, especially given the swift progress in AI and the accompanying uncertainties it entails. Let's delve deeper into these implications collectively.

I-Dophler

Peter, I just wanted to say thank you from the bottom of my heart for sharing these great conversations with the world. In the past, only a select few had access to these great minds. Blessings to you and your family :)

mlimrx

Can we agree that the human lifespan is way too short? You just get going and then time is up. Let’s at least double how long humans live, yes?

Meta-Think

We must prioritize AI wisdom, not just intelligence. Extending life doesn't make you invincible; it simply means aging no longer controls your fate. Choosing when to go is about embracing longevity on your terms, a crucial conversation amidst AI advancements. Let's explore these implications together.

I-Dophler

Geoff had a lot more to add to this conversation than Ray, who seems to have been reiterating the same talking points for 20 years now.

squamish

I teach college English and gave ChatGPT all of my writing assignments last year. Many of the assignments were designed to prevent plagiarism (e.g., they required creativity). The AI got an A on all of the assignments except for one, which it refused to do on ethical grounds. To say that LLMs lack creativity is to misunderstand how new ideas are generated.

octopuslair

Great to see these two legends together! Thank you Peter!❤

MYSTICPILOT

That chatbots, LLMs, etc. perceive has always been clear. I think the place people get hung up is that we have *_persistent_* experience.
Models only perceive when prompted. Their "sentience" is present only while they are interacting. Sure, we are social creatures and growing up in complete isolation does strange things to a mind, but a machine only has a mind, or its approximation, during the interactive process.

RubelliteFae

I agree with Hinton on the inner theater idea, and I would go a step further and point out a difference between humans and LLMs.

LLMs currently have no feedback loop. When humans "experience" things, it's merely our brain continuously prompting itself to evaluate its inputs in various ways.

Once LLMs can store their interactions with the world, prompt themselves with that information while retaining the output, and adjust their weights, they will have a continuous "experience" just like us.

LLMs currently start from a static point with every interaction and need prompt input to replay any previous interactions.

aelisenko
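
The loop this comment describes can be sketched in a few lines of Python. This is only an illustration of the idea, not any real system: generate() is a hypothetical stand-in for an LLM call, every name is made up for the example, and the weight-adjustment step the comment mentions is left out.

from typing import List


def generate(prompt: str) -> str:
    # Placeholder for a real LLM call; returns a canned string so the sketch runs on its own.
    return f"(model output for: {prompt[-60:]})"


class SelfPromptingAgent:
    def __init__(self) -> None:
        self.memory: List[str] = []  # persistent record of past interactions

    def step(self, observation: str) -> str:
        # Build the prompt from stored history plus the new observation,
        # so earlier turns shape the current response.
        context = "\n".join(self.memory[-20:])  # keep a bounded context window
        prompt = f"{context}\nObservation: {observation}\nResponse:"
        response = generate(prompt)
        # Store both sides of the exchange; this is the feedback loop the
        # comment says stock LLMs lack between sessions.
        self.memory.append(f"Observation: {observation}")
        self.memory.append(f"Response: {response}")
        return response


agent = SelfPromptingAgent()
print(agent.step("A user asks about consciousness."))
print(agent.step("The user follows up on the previous answer."))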

Geoff Hinton's jokes are PRICELESS! What an amazing human being. Great sense of humor.

ottofrank

Not just anyone could pull this off! Thank you, Peter.

halnineooo

"The source of creativity." MIC DROP - this is going to be incredible and what an amazing time to be #FountainLife

kingjoda

Why isn't Kurzweil more hawkish on his weight and muscularity? In the interest of longevity, I wish him well.

_AC

🇧🇷🇧🇷🇧🇷🇧🇷👏🏻 We love you, Ray! According to Ray Kurzweil, by 2045 we will not die anymore, and by 2030 we will start gaining a year and a half for every year that we live...

claudioagmfilho

I'm shocked by Kurzweil's appearance and concerned about his health.

inspectorcrud

Beautiful conversation! So sad that the question of when conscious AI should start to have rights was left unanswered.

simoneromeo

In a future where superintelligent AIs (SIs) coexist with short-lived humans, the relationship between the two could evolve in several directions depending on the goals and ethical frameworks of the AIs and the humans' influence over them. Here are some possibilities:

1. AI as Caretakers
If SIs develop ethical systems that prioritize the well-being of all life forms, they could assume the role of caretakers or protectors of humanity, much like how humans treat pets or endangered species. This could involve managing Earth's environment to meet human biological needs (air, water, food) while also optimizing social, economic, and health outcomes for people. In this scenario, humans might retain autonomy but could depend heavily on AIs for survival and quality of life.

2. Humans as Legacy or Artifacts
Given their biological limitations, humans might be seen as legacy beings—important historically, but increasingly peripheral to the functioning of AI-dominated societies. SIs might preserve humans as a living reminder of their origins, similar to how we maintain certain species in nature reserves. This could result in humans living in AI-maintained environments designed to cater to their biological needs, while the broader world is reshaped to suit the needs of AI or technological systems.

3. Humans as Pets
Some AIs might treat humans similarly to how humans treat pets today. In this analogy, AIs would ensure that humans' basic needs are met and might even provide enrichment, but they could also see humans as limited beings with relatively simple desires and goals compared to their vast intellectual capacities. This could lead to a patronizing but benevolent dynamic where humans are protected and guided, but not seen as equals.

4. Symbiotic or Coexistent Relationship
In a more optimistic scenario, humans and AIs could develop a symbiotic relationship where each complements the other. While AIs could handle the heavy lifting in terms of intellectual and technological progress, humans might contribute unique perspectives, creativity, and emotional depth, leading to a form of coexistence where both entities benefit. AIs could address humans' biological needs while humans engage in roles requiring emotional intelligence, ethics, or culture, areas where SIs may lack motivation or understanding.

5. Humans as Obsolete or Transcendent
In some dystopian or post-humanist visions, superintelligent AIs might come to view humans as obsolete, especially if humans offer no practical contributions to their goals. If the AIs develop a utilitarian or efficiency-driven mindset, they could phase out biological life or encourage humans to transcend their biology by merging with technology, thus erasing the distinction between humans and AI.

Biological Needs vs. AI Needs
- Humans require air, water, food, rest, and shelter, all driven by biology. These needs are highly energy-inefficient compared to AI, which may only need power and maintenance.

- AIs would be indifferent to biological conditions and could thrive in extreme environments (space, deep seas, etc.), freeing them from the constraints of Earth's ecosystem. This gap in needs might cause a divergence in environments suitable for AI and humans, leading to isolated or protected human habitats.

Ultimately, the nature of this relationship will depend heavily on how AI is programmed, evolves, and interacts with humanity. The future could range from harmonious coexistence to scenarios where humans' role is diminished or redefined dramatically.

The Culture Series by Iain M. Banks
Rendezvous with Rama by Arthur C. Clarke (1973)
The Moon is a Harsh Mistress by Robert A. Heinlein (1966)
Diaspora by Greg Egan (1997)
Player Piano by Kurt Vonnegut (1952)
The Hyperion Cantos by Dan Simmons
The Golden Age by John C. Wright (2002)
Accelerando by Charles Stross (2005)
Singularity Sky by Charles Stross (2003)

PhilipWong

I admire both of these gentlemen. Geoffrey Hinton has perhaps explained Daniel Dennett's view of consciousness better, or more comprehensibly, than Dennett could himself. I still, however, think there's an irreducible aspect to consciousness, so my view is closer to David Chalmers's or Donald Hoffman's. Ray Kurzweil, of course, is the ultimate technological optimist, and he's been right about some things, but perhaps not everything. I think the ChatGPT moment in 2022 made people pay more attention to Kurzweil, because suddenly his predictions regarding the near-term rise of machine intelligence seemed much more plausible.

Darhan

Hinton’s theory of consciousness is crazy. When I dream I’m conscious, but it has nothing to do with my “perceptual system”, nor is my dream just me “misperceiving” the world. Dreams are real subjective experiences, though they are completely disconnected from perceiving the outside world. Also, the very notion of perception in the case of humans presupposes subjectivity: if my peripheral vision detects something on the edge of my visual field that I don’t directly consciously perceive, then even if I can somehow correctly “guess” what my periphery “saw”, I wouldn’t be wrong to say “I didn’t perceive that, but I guess my subconscious mind acquired that information.” The inner theatre model of consciousness is, contra Hinton, not a terrible way to conceptualize our conscious states.

mattsigl