Looking Back at AGI-22 | Charles Simon and Janet Adams

Charles Simon, CEO of Future AI and sponsor of the AGI-22 conference, and Janet Adams, COO of SingularityNET, look back at AGI-22.

About Future AI:

----

SingularityNET is a decentralized marketplace for artificial intelligence. We aim to create the world's global brain with a full-stack AI solution powered by a decentralized protocol.

We gathered the leading minds in machine learning and blockchain to democratize access to AI technology. Now anyone can take advantage of a global network of AI algorithms, services, and agents.

Comments

Clear considerations with good enthusiasm. When I get more experienced, I'd like to have that spirit. Thanks 🙂

tangomarine

Charles Simon says at 5:55 that he thinks AGI really doesn't have anything to do with machine learning. And of course the other camp says that AGI is only about ML. Maybe the truth lies somewhere in the middle?

artificialintelligencechannel

Ask Sally to try to get to the bottom of zero, and ask what happens in the first few seconds, something to do with a stale freshwater teardrop stuck....🤔 maybe. But definitely ask Sally about the bottom of 0...just a thought....🙂

dqholdings

Glad to hear I'm not the only one still championing biologically modeled systems.

I'd be really curious how to approach a translation layer for integrating something as simple as a calculator into a brain.

One of my personal areas of caution is sub-human general intelligences, more so than a superhuman AGI getting out of its box: picture a scenario where it's fitter on the net than any human, but unable to reason through more complex abstractions, or less ideological, unable to find an escape trajectory that isn't significantly detrimental to humanity.. a bit of science fiction, but to me the more intelligent a system gets, the less threat it poses; see the comments section of Michael Sugrue's video lecture on Don Quixote for why 😆

Though I wouldn't think hard-coded goals to avoid undesirable scenarios would be fruitful; to me that's too high a level of abstraction, and having any meaningful control at that level seems incompatible with the structure that could produce such a system... maybe in something like Goertzel's OpenCog with its fragmented, task-specific subsystems 🤷

Anyways, thanks for the recap. I've been a bit out of touch lately; I'll go check out that Joscha Bach chat.

GNARGNARHEAD