Yoshua Bengio on Pausing More Powerful AI Models and His Work on World Models

In this episode of the Eye on A.I. podcast, host Craig Smith interviews Yoshua Bengio, one of the founding fathers of deep learning and a Turing Award winner. Bengio shares his insights on the famous pause letter, which he signed along with other prominent A.I. researchers, calling for a more responsible approach to the development of A.I. technologies. He discusses the potential risks associated with increasingly powerful A.I. models and the importance of ensuring that models are developed in a way that aligns with our ethical values.

Bengio also talks about his latest research on world models and inference machines, which aim to give A.I. systems the ability to reason about reality and make more informed decisions. He explains how these models are built and how they could be used in a variety of applications, such as autonomous vehicles and robotics.

Throughout the podcast, Bengio emphasises the need for interdisciplinary collaboration and the importance of addressing the ethical implications of A.I. technologies. Don't miss this insightful conversation with one of the most influential figures in A.I. on the Eye on A.I. podcast!

Comments

Great interview. It's a pleasure listening to Bengio provide insight.

vrda

I highly recommend The Hedonistic Imperative by David Pearce. In short, it proposes how humanity could - and convincingly makes the case why we SHOULD - use artificial intelligence to abolish involuntary suffering in all sentient life. It can be found for free online with a simple search.

gregw

Actually, to the question "how technical should I go?": please go super technical. I feel there are already too many high-level (layman's) channels which don't educate us well on the real inner workings and research of AI. So going super technical is great and very needed!

MrErick

The first country to break any international treaty on AI will be the US itself if we use history as a guide.

rockapedra

Great content. Any chance of using a better internet connection/camera to improve the video quality on your end of the stream?

warperone

Literally 1 year has passed... 12 April 2024, I am coming for it so hard right now.

KemalCetinkaya-iq

Thank you. 22:20 Yes, go for technical; this guy does cool interviews!

anirbanc

That's the concept of the ruliad proposed by Stephen Wolfram.

yitzhill

Propose the regulations to the EU, they love it!

eafadeev

If this thing is dangerous in any sense, a private company is the worst place for it.

"How long do we need - to place it at UN?" -> The result is a real measure of human intelligence.
(My suggestion for normalization: "1 year =equals= IQ 65")

volkerengels

It took over a minute to ask the question, "Given that you are a reasonable person, why did you sign the letter?"
Seriously, do we have this much time left?

ryoung

I think GPT-4 is _somewhat_ better at reasoning than he suggests. Possibly, one supposes, owing to a world model being to some degree encoded in natural language.

And as for the world model, there is an actual world out there, fortunately. Some company needs to sell millions of fancy LLM-empowered home robots, with sensory inputs. Use that data. :-)

cacogenicist

12:55 If you didn't want the message to be taken to mean "pause development" when you meant "speed up regulation", then you shouldn't have signed it, my dude.

MaJetiGizzle

There is a difference between reality and wishful thinking. The reality is there is NO way to stop the pace of these LLM AI models. The letter has no impact on the process; it provides awareness of what is happening but will not stop it. It is also too late for humans to create policies and treaties or pass laws in our current system, which is extremely slow and can take years. The goal now is to accept that AI progress is here to stay and there is no way to stop it or even delay it. Once you understand that, we can ask companies to train AI models to POLICE any AI bad actors. We will need to use AI to fight AI. Humans will be too slow to do it.

senju

It's inevitable. Biology is just one step of evolution.

So just chill out and enjoy life 💟

eSKAone-

I spent hours discussing classical concepts of Western philosophy with ChatGPT. On the surface, it has an encyclopedic memory of the domain, but it is pretty obvious, owing to how abstract these concepts tend to be, that it has no knowledge of philosophy proper.

It cannot philosophize, so to speak. It parrots pretty mundane facts extensively, but it evidently cannot reason or synthesize. It's decent at providing summaries or bibliographies, but it doesn't understand what it's talking about.

A simple test I gave it gets at what philosophical reasoning is all about: can you extract the parent concepts in the body of Western ethical works; that is, create an ontology of the domain where concepts cluster and a multi-axial hierarchy emerges, where you could discover, for instance, that ethical thought is intrinsically tied to biological survival?

This level of inference is simply not there. I tried more down-to-earth approaches and asked the system to perform Principal Component Analysis on language embeddings for the body of moral philosophy from Plato to Hegel, and it said it couldn't, no matter how I tried to simplify or limit the task. This is the sort of stuff where AI could be useful: discovering hidden hierarchies of concepts.
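The PCA-on-embeddings idea can be sketched directly. A minimal illustration with NumPy, assuming toy, invented "embedding" vectors rather than real language-model output: center the vectors, take an SVD, and project onto the leading principal components to see whether concepts cluster along a latent axis.

```python
import numpy as np

# Hypothetical toy "embeddings" for a few ethical concepts (4-dimensional,
# invented for illustration; real embeddings would come from a language model).
concepts = ["virtue", "duty", "justice", "pleasure", "pain", "survival"]
X = np.array([
    [0.90, 0.10, 0.80, 0.10],  # virtue
    [0.80, 0.20, 0.90, 0.00],  # duty
    [0.85, 0.15, 0.85, 0.10],  # justice
    [0.10, 0.90, 0.10, 0.80],  # pleasure
    [0.00, 0.80, 0.20, 0.90],  # pain
    [0.10, 0.85, 0.15, 0.85],  # survival
])

# PCA via SVD: center the data, then project onto the top principal axes.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
projected = Xc @ Vt[:2].T  # coordinates on the first two principal components

# Concepts with similar vectors land close together on PC1, hinting at a
# latent split (here: deontic notions vs. hedonic/biological ones).
for name, (pc1, pc2) in zip(concepts, projected):
    print(f"{name:10s} PC1={pc1:+.2f} PC2={pc2:+.2f}")
```

With real embeddings the interesting question is whether such axes line up with interpretable distinctions at all; on toy data like this, the split is baked in by construction.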

But an LLM that has no ability to reason and no world model would at best only discover APPARENT hypernyms as they are constituted in the body of language it's been trained on. To make profound discoveries it would also need inference and a world model.

Part of the problem is that many AI researchers actually believe that intelligence is nothing but the sort of statistical engine that they build; they believe that human creativity is merely "emergent" (unpredictable) patterns arising from complexity. This is paradoxically extremely deterministic, because it roots intelligence in a finite-but-chaotic set of interdependent parameters (like the three-body problem in physics).

In contrast, reasoning implies first a reflexive ability (the model needs to have a model of itself; a homunculus of sorts), and this also implies a referential but sparse and orthogonal (world) model to base its evaluations on. I believe that the brain has a model of itself, and proof of this is seen in neural plasticity. This implies that the brain stores a model of itself at a functional level. And by the same token, the brain stores a sparse model of the world: as few baseline building blocks as are necessary for the subject to understand and act as a fit constituent of the world. We understand the world because we harbour an analogue of the world. Philosophers have tried for centuries to define such building blocks, and there are traces of this in Aristotle's Categories, Boethius, Avicenna, Kant, Peirce, Wittgenstein, Quine, Rosch, Fodor and so on. And there are strong hints at how semantics are parametrized in predicate logic, in the use of function words to qualify or quantify concepts of Existence, Identity, Evaluation, Description, Space, Time, as well as Social status and Mental action. But the truth may well be that the building blocks of our stored models are made of very abstract metalanguage; a form of data compression.

phpn

Withholding technology can be dangerous as well, when that technology is then only available to elites, corporations, or governments, however it might then be siloed. And dangerous to democracy, if only certain elite silos have access to the technology.

Syncopator

Agreement among governments would be uneven at best, and I can't see Russia and China having any interest in agreeing on guardrails.

px

You have contributed to AI more than anyone, and you're telling everyone else not to do it.

It's a complete fallacy; AI is a gimmick.

They're nowhere near; they don't even understand human thinking. They're still using Freudian psychology.

Thinking isn't self-generated; that's the assumption.

lukestevenson

What we learned is that Yoshua signed a letter he didn't read or agree with... Not sure how serious we are about these topics.

andymanel