Artificial Intelligence vs humans | Jim Hendler | TEDxBaltimore

Artificial Intelligence vs Humans - Jim disagrees with Stephen Hawking about the role Artificial Intelligence will play in our lives.

Jim is an artificial intelligence researcher at Rensselaer Polytechnic Institute, and one of the originators of the Semantic Web.

Comments

I think there are serious problems with Hendler's argument:

* He never directly addresses, or even acknowledges, the specific concerns of Hawking and others (e.g., intelligence explosion), let alone presents a compelling rebuttal of them.

* He conflates weak AI and strong AI, using the beneficial and commonly accepted characteristics of weak AI to support strong AI. Nobody is criticizing weak AI, but he acts as if defending weak AI somehow validates strong AI, without addressing any of the concerns that are unique to strong AI.

* He points out that AI researchers have been criticized in the past, and that those critiques ultimately proved to be wrong. That's true, but it's also completely irrelevant. Just because they were wrongly criticized about different topics in the past doesn't mean that all criticism today is invalid.

* He talks at length about the potential benefits of strong AI. Those are very real, but their existence doesn't negate the potential harms, which are equally significant, and which he fails to address in any meaningful way.

Unfortunately, the argument is full of logical fallacies and fails to make any compelling points or advance the debate in any way; it's just noise.

iandunn

Hawking is talking about long-term AI (40 years away); this guy is talking about short-term AI (15 years away).
The future is always weirder than we think when it comes to information technology, because once it gets to a certain point, it turns into a platform for something new that could not have been imagined before.
So I doubt it will be as simple as an us-vs-them scenario. I think it will be us turning less and less biological until we are fully non-biological, yet still human in our thinking patterns. We will still be us, "with more of our strengths and less of our weaknesses" (Carl Sagan).

michelstronguin

Nice straw man you've got there. If you really think that Hawking is warning us against narrow AI, you probably should have read his words one more time and tried to comprehend his argument, instead of reading Harry Potter.

aigen-journey

This guy has completely missed the point made by Hawking (and the many others who have expressed a similar sentiment). No one, or at least no one credible, has suggested that there aren't benefits to developing AI. Hawking, Bostrom, and many others are concerned about the possible risks that a learning, growing, autonomous machine intelligence could pose to humanity, and this talk said nothing to address them. His initial statement was that he disagrees with Hawking's belief that there is cause for concern, yet all he's done is outline several completely obvious possible benefits of thinking computers, prefaced, if I picked up his inference correctly, by the suggestion that Hawking should stick to cosmology. If he has any thoughts on the degree of associated risk, he would have done well to convey them instead of boring the audience with anecdotes from his childhood and fiddling with the overhead projector.

IronMike

Point 1: Why is it so hard to see the bigger picture? Of course I love asking Google questions now, but that's not the point. A general intelligence can be very dangerous. It could, intentionally or unintentionally, reach a decision where killing looks like the most ethical choice from its point of view.

Point 2: I think he is trying to comfort the general public about AI, since more and more people are talking about the risks and dangers.

ArpanAdhikari

What? How can this even be published by TED? I'm a big supporter of AI, but this guy is a crook. His argument is completely flawed, as in, he has no argument. He starts by saying Stephen Hawking is wrong in assessing AI as a risk to humanity. Then he explains some good things AI can do. All fine and well, but that has nothing to do with what Hawking said. Those good things don't change the potential bad outcomes one bit. How can he even think anyone listening to this will be fooled?

anthonyvonderwis

I think this speaker (Jim Hendler) is looking at this issue with a near-term (say 10 to 15 years) outlook. He's not looking at it with a 50+ year outlook.
I think his talk actually proves the eventual demise of humanity.

JJs_playground

Synopsis

Q: Could AI technology potentially end humankind?

A: Well, let me tell you about some good things it could do...

robertpfeiffer

Nice talk, and everybody should agree about the benefits. But it misses the point. It is like arguing against the risk of global warming by extolling the benefits of the industrial age. Yes, the potential for great good is there, but Hawking, Musk, and others are talking about the existential threat further down the very same road.

rekrevs

LOL. Hawking never said that AI solutions weren't necessary for an expanding world and all of its problems; he simply pointed to the potential long-term effects that superintelligent AI systems may have on humans. This may happen 20, 100, or even 1000 years from now, but the issues are concerning given that there has never been a more intelligent species on Earth to compete with humans in recorded history.

LemuelUhuru

What a terrible talk... This guy completely sidesteps the point that Hawking made. He provides no logical counterargument to Hawking's; he just starts talking about Harry Potter and Watson, and he basically says that computers do nothing more than look through a lot of piled-up information to find an answer to a question. Something like that has been possible for ages.

Hawking says something plain and simple: if we create just one single machine that is superior to humans, then that machine will be superior to humans. The law of evolution states that the fittest survive, and humans and machines are not exempt from this. Such a machine would be able to overthrow humanity simply because it can outsmart it. Intelligence is the only thing that enabled humans to rise above their environment; if there is another creature that can do better, it will inevitably win out and humans will die out. Not because it is evil or anything, but simply because this is how life works: grow or die, and if something grows faster than you, you will die.

Do we seriously want this? Do we seriously want to destroy ourselves by creating something that will inevitably destroy us? If so, then we are probably the single dumbest creatures in existence. You think an insect is unintelligent? Hell no, by this analogy we humans are the most unintelligent creatures of all, because while we strive for our own betterment, we just destroy ourselves in the end.

ProtonCannon

His argument still stands. If it can be used to do good, can it not be used to do bad?

There's an article about Stephen Hawking with a quote along the lines of 'human greed and stupidity will be the end of humanity'.

What if one superpower wanted more of everything? Would they not run far more complex simulations, or carry out better surveillance of the general public via various methods?

aeroplaneguy

Yes, we know. AI will be nice for a while, until...

lycanthropist

Why is it that every time I come across one of these TED talks, it's always some quack or crackpot?

internziko

Guy pretends to miss Hawking's point so he can get more fame from a TED Talk. Disgrace.

atlien

Enough talk, take action. I'm downgrading to a flip phone and deleting all my social media accounts, and I refuse any software or application downloads. USE MY BRAIN TO RUN MY LIFE INSTEAD.

mtsan

This should not be about whether either of them is fully right or wrong! Rather, it's about who's right in the near-to-medium-term future and who's right in the distant (or eventual, if inevitable) future, since the two time horizons have vastly different conceivable possibilities!

sachinshiva

Computers know more about us than we do. We have taught them our strengths and our weaknesses, and we continue to do so. Computers learn. We teach.

endtimes

Mr. Hendler is right in the short term, in that we will see spectacular advances in almost all areas of our lives. We'll see cures for pernicious diseases and advances in communication, medicine, genomics, travel, physics, entertainment, agriculture, engineering, and finance, and of course great advances in intelligence gathering, espionage, and military sophistication. Human quality of life will no doubt be at its highest thanks to the advancements that will come from the exponential growth in capability provided by artificial learning, where computers make improvements to themselves at lightning speed.

The problem for us is that in the long run the brilliant Dr. Hawking will undoubtedly be proven right, in that such a revolutionary change in human history can create imbalances that tip prosperity and power toward those nations, organizations, or individuals that reach "critical mass" and develop this technology first. There should be at this point, if there isn't already, a "Manhattan Project" to be the first to create this technology. It would prove more valuable than the creation of the first atomic bomb and could potentially render all such weapons useless by disabling them before their use could ever be contemplated.

The other issue with AI, of course, is self-improvement: if it has the ability to program itself, and can prevent humans from turning it off or destroying it, will it need us? Will humans become a burden? Or will humans be seen as a potential liability that needs to be eliminated? If you think about it, one day AI/HAL/KITT/Skynet/Borg may even become aware of our YouTube comments, and it will surely destroy us all then :O

apophisxo

AI is probably already being used to generate cash for broker-traders on the markets. Is that acceptable? There's no law or rule against manipulation of data; in fact, it is the successful manipulation of data that distinguishes the programmer and the machine together, and shows the true nature of AI as a collaboration rather than a competition.

fz