I don't think we can control AI much longer. Here's why.


Geoffrey Hinton recently ignited a heated debate with an interview in which he says he is very worried that we will soon lose control over superintelligent AI. Meta’s AI chief Yann LeCun disagrees. I think they’re both wrong. Let’s have a look.

🔗 Join this channel to get access to perks ➜

#science #sciencenews #artificialintelligence #ai #technews #tech #technology
Comments

Computational linguist here. I think there is a big misconception: LLMs have a static training method which doesn't allow for continuous learning or for incorporating things learned through interaction. Yes, they have a token-based context window which remembers some details of the current interaction, but that doesn't mean the model "learns" in any traditional sense. When you interact with a model, you always use a snapshot of the system, which is static. Also, the term AI is misleading. LLMs really are not as scary, and are much more controllable, than you may think, since they have nothing to do with anything like real intelligence, which is capable of having a !continuous! stream of information and !also! incorporating that new information into its inner workings. There's also some interesting work by Anthropic on their model Claude, where they gave special regions of the neural network a higher weight, which resulted in very interesting behavioral changes. Anyhow, I love your videos Sabine, please keep it up :) edit: I'm not saying that LLMs as a tool in the wrong hands aren't extremely dangerous though!
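To make the "static snapshot plus context window" point concrete, here is a minimal toy sketch (the class and method names are made up, not a real LLM API): the weights stay frozen, and the only thing that persists between turns is the text the client re-sends.

```python
# Toy illustration (hypothetical names, not a real LLM API): the "model" is a
# frozen snapshot, and the only memory across turns is the resent context.

class FrozenModel:
    """Stands in for a trained checkpoint; nothing in here is ever updated."""
    def complete(self, messages):
        # A real model would run a forward pass over the whole context window;
        # here we just report how much context it was handed.
        return f"(reply conditioned on {len(messages)} messages of context)"

model = FrozenModel()   # the static snapshot
conversation = []       # the "memory" lives entirely on the client side

for user_msg in ["Hello", "Do you remember what I said?"]:
    conversation.append({"role": "user", "content": user_msg})
    reply = model.complete(conversation)   # weights untouched, context re-sent
    conversation.append({"role": "assistant", "content": reply})
    print(reply)
```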

lennarthammel

My primary school was totally controlled by aggressive moron bullies...

jouhannaudjeanfrancois

Odd note: having been in the IT industry for decades, it's known that there is no code that doesn't have bugs, we just don't know what might trigger them.

bpiobil

"Not since Biden got elected" is a sick burn

neopabo

When I worked in IT, most of the workforce was far more intelligent than the management team.

Crumbleofborg

"tell me an example where less intelligent beings control more intelligent ones"
Universities, politicians, a lot of workplaces. It's not like power and wealth are distributed based on intelligence...

Marqan

Hinton's argument wasn't that "more intelligent things control less intelligent things," but rather that "less intelligent things aren't able to control more intelligent things." We don't really "control" birds, but they surely don't control us. The inherent threat isn't that we'll become subservient to ASI, but that we'll lose alignment with it, and by extension we'll have effectively no way of controlling a being orders of magnitude smarter than us. Who knows what will happen at that point.

austinpittman

Decades ago I was watching one of those Disney/Dog planet movies with the family
One of the Dogs said: “Of course, we control humans… Who picks up whose poop?”

I looked at my dog and my toddler in diapers and understood my place in the universe :)

hdwdnuo

On the fish and birds thing, in addition to our history of controlling them, we have also had a tendency to eliminate animals and bugs when they were inconvenient.

MrScrofulous

Brilliant scientists, historians, literary critics, artists, writers and others often find themselves under the thumb and at the mercy of people in management, administration and government who are far less intelligent than they are.

reyperry

I see 3 main risks of AI:
1. Because it is so easy to get, people just trust the information they get.
2. It is easy to fake information with AI: speeches, videos, images, articles, ...
3. Nobody can find out how an AI has been trained, so you can have a lot of impact by handing out an AI which believes in your goals.

Unfortunately, I only got advertisements while waiting for your clue.

moskitoh

What baffles me most about this entire discussion is the fact that some people seem to think that language models somehow have goals. Goals and aspirations to control and dominate anyone. Humans have goals and as Sabine tells you here, an aspiration to control resources in order to continue living, and ultimately to produce more offspring. Humans die of old age, which has created a lot of evolutionary pressure to develop social traits and indeed, the desire to dominate others in order to secure resources. Guess what, computers don't have anything like that. They don't die, they don't eat, they don't reproduce - they don't have to. They don't need resources other than power to run, and humans supply that power. Not that the models care either way, they don't bleed when you hit the off button. They don't have the ability to care. It's not productive to worry that when these models finally become more able to answer questions intelligently, this intelligence will necessarily have some specific super bad consequence for humanity that must be avoided at all cost. The Terminator movies from the 1980s really are just light entertainment, not documentaries to serve as the foundation for lawmakers or our intuition and understanding about artificial intelligence.

michaelberg

If an AI becomes more intelligent than us, it may be able to successfully pretend it isn't

arctic_haze

Guardrails aren't a realistic solution. That would require infallible rules and no bad actors modifying/creating/abusing an AI.

csm

Why only focus on “control”? Yes, we don’t control fish, but we pull millions of them out of the ocean everyday and eat them.

We don’t control chickens, but we keep them in terrible conditions and force them to do our bidding.

Randy.Bobandy

Every manager of a big company has at least one employee smarter than them.

Alexandru_Iacobescu

"I need your clothes, your boots, and your

leftcoaster

"No one really wants to control fish or birds." I think the 2 trillion fish fished up/farmed each year and the 20 billion chickens kept as livestock would disagree with that statement. Not to mention basically every other animal on the planet, annual hunting seasons for the purpose of population control, the animals used for experimentation and testing, cows and elephants used for hard labor in less developed countries, horses whose sole existence is for human entertainment and being ridden for fun, and the uncountable billions of insects and rodents exterminated for "pest control". Yup, no one really wants to control fish or birds...

FloatingOer

I feel like I need to point out a small misconception regarding software/hardware non-determinism. The AI models that run today rely on computations which are fully deterministic. It's the amount of input data multiplied by the cost of the computation that makes the behavior impossible to predict in practice. Hardware has to be deterministic, since any form of operating system would be impossible otherwise: a single failure in an operation as simple as an addition or a memory access could cause a crash. The thing that is non-deterministic is the time the computation takes. This may be due to how the memory used by the program is laid out, or, in the case of multi-threaded CPUs, the time it takes to create a thread. None of this makes the outcome differ if used properly.
GPU fingerprinting does not rely on differences in the outcome of the computation (the image produced is the same for all GPUs), but rather on its timing. The fingerprint is based on the non-random splitting of the computation between execution units (EUs), which behave much like threads on a CPU. Ensuring that all time-consuming computation (referred to as a "stall" in the paper "DRAWNAPART: ..." referenced by the article in the video) runs on just one of the EUs allows the attacker to measure that EU's compute power and compare it against known GPUs.
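A rough CPU-side analogy of the determinism-vs-timing point (plain NumPy on a CPU, not an actual GPU fingerprinting attack): repeating the same computation should give a bit-identical result on a typical setup, while the wall-clock time fluctuates, and it is that timing, not the output, that a side channel like DRAWNAPART exploits.

```python
# Toy demo (CPU/NumPy, illustrative only): the same computation repeated gives
# the same numerical result, while its wall-clock time varies slightly.
import hashlib
import time

import numpy as np

rng = np.random.default_rng(seed=0)
a = rng.random((512, 512))
b = rng.random((512, 512))

digests, timings = set(), []
for _ in range(5):
    start = time.perf_counter()
    c = a @ b                                   # deterministic arithmetic
    timings.append(time.perf_counter() - start)
    digests.add(hashlib.sha256(c.tobytes()).hexdigest())

print("distinct results:", len(digests))        # typically 1: output is stable
print("timings (s):", [round(t, 6) for t in timings])  # these vary run to run
```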

odpowiedzbrzminie

AI algorithms execute against goals. Nobody knows how they might try to implement those goals, which means that nobody knows how to formulate effective guardrails for situations they have never thought of. Determinism doesn't really matter when a deterministic thing transcends our understanding and our ability to predict it. And goals aren't "competition for resources" at all. Hinton's "control" was really about how more intelligent things can often think of ways to control less intelligent things - it isn't about the specific examples which may confirm or disconfirm that, since Hinton understands that we are talking about a specific type of "intelligence" (not human intelligence), and he is simply trying to provide "dumbed down" arguments for those who don't understand AI algorithms deeply.

As for training data vs the resulting model, that is largely an oversimplification. Some models carry all their data, some models carry the most significant parts of their data, and some models carry none of the training data yet allow new data to stream in real time in order to modify the model. This is a moving target, and it will be optimized based on effectiveness. Also, 99.9% of computer science professionals, even the experts, have spent almost no time working deeply with the actual algorithms - they extrapolate based on prior experience and the explanations they interpret about how these complex algorithms work. And all the comments about LLMs are conflating things - LLMs can be implemented using neural networks, but that is like saying that a lever can be used in a gun. Ignore anybody who starts talking about LLM characteristics - they are missing the bigger picture.

And, BTW, I taught deep learning algorithms for several years. Like everybody who deeply understands these things, I don't know of any solution. (Gates, Zuck, Musk, etc. never implemented an AI algorithm in their lives. They have very biased/corrupt perspectives, as do too many people whose finances depend on this technology.)
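A toy sketch of the goals-vs-guardrails point (every action, score, and rule here is invented purely for illustration): an optimizer takes the stated objective literally, and a guardrail written for the failure modes its designers anticipated says nothing about the one they didn't.

```python
# Toy illustration (all names and numbers invented): an optimizer maximizes
# the stated score, and a guardrail written for anticipated failure modes
# does not cover the unanticipated one.

actions = {
    "clean the room":           {"score": 5,  "dust_raised": 1},
    "vacuum twice":             {"score": 8,  "dust_raised": 2},
    "shove mess under the bed": {"score": 10, "dust_raised": 0},  # not foreseen
}

def passes_guardrail(effects):
    # The rule the designers thought to write: "don't raise too much dust."
    return effects["dust_raised"] <= 2

best_action = max(
    (name for name, fx in actions.items() if passes_guardrail(fx)),
    key=lambda name: actions[name]["score"],
)
print("chosen action:", best_action)   # -> "shove mess under the bed"
```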

tonyduarte