AGI: The Next Cold War

Innovations in artificial intelligence continue to advance, and world leaders are beginning to strategize and invest in future technologies. With the next phase of AI, artificial general intelligence (AGI), well within our grasp, there is a serious threat to our national security if we are not the first to create it.

Read the Original Article Here!
________________________________________

Future AI is an award-winning, early-stage technology company revolutionizing AI by adding actual real-world understanding. This differs from today’s AI, which analyzes massive data sets, looking for patterns and correlations without understanding any of the data it processes.

Future AI’s radical software creates connections on its own between different types of real-world sensory input (such as sight, sound, and touch) in the same way that a human brain interprets everything it knows in the context of everything else it knows.

Charles J. Simon, BSEE, MSCS, is a nationally recognized entrepreneur, software developer, and manager. With broad management and technical expertise and degrees in both Electrical Engineering and Computer Sciences, Mr. Simon has many years of industry computer experience, including pioneering work in AI and two generations of CAD.
A Silicon Valley entrepreneur, Mr. Simon has co-founded three other pioneering technology companies, serving as president and VP of Engineering.
His technical experience includes the creation of two unique artificial intelligence systems along with software for successful neurological test equipment. Combining AI development with biomedical nerve-signal testing gives him a singular insight.

#artificialintelligence #technologiesthatthink
Comments

AGI and warfare
AI will increasingly be used in warfare, in autonomous or near-autonomous drones on land, at sea, in the air, and in space.
As the technology develops, the potential of these machines will be well beyond anything humans can bring to the table. Wars, as ever, will be fought over land and resources, and that is where conflict is likely. Such areas will become no-go zones for human beings; we would be like rabbits on the battlefield. Drones could be used to defend people, and it would be highly advisable to do so, but with two roughly evenly matched adversaries, diverting them to that task would hand a clear tactical advantage to your opponent.
War then becomes a question of resources, technology, and GDP, and to win such a conflict these are the areas that will be targeted. The value of human beings in such a conflict, particularly in combat zones, becomes minimal. In fact, a human is a liability: a strain on resources, a vulnerability, something that must eat, sleep, have shelter, and be protected while offering no offensive or reconnaissance capability not better provided by other means. In short, expendable.
AGI, then, would first and foremost have to protect itself. If it remained answerable to humanity (highly questionable), it would be protecting only a few. The rest would have to seek refuge in areas of no tactical value to the enemy. With an AGI war raging, the no-go zones would grow ever larger and the technology ever more dangerous. At victory, only one AGI would remain, and humanity very likely would not.

Roskellan

You keep complaining about your "competitors'" way of doing things, but until you've shown any real results you shouldn't be so certain that your way will work. While it's true that humans learn things better than computers do, we hit a ceiling that DNNs don't seem to hit in the same way. Not to mention that the brain evolved over billions of years to get to where it is today.

As for your jingoism regarding China, Iran, North Korea, and Russia: look into the history of your own country, and look into the actual dangers of AGI and how difficult a problem goal alignment is. The US cannot be trusted with this technology any more than the countries mentioned.

sevret

I agree with many things you say in the video, but I think it's very naive to think that we'll be able to "program" or "set" the goals of a super-intelligent AI.

From the very first moment that AGI comes into existence, there's nothing we can do against it. It will be completely unpredictable, and we'll just have to hope that it doesn't have any "evil" goals.

By the way, something being "superintelligent" does not mean that it won't be "crazy" or "reach the wrong conclusions." It just means that, by the sheer power of its intelligence, it will be able to advance its knowledge of the universe at an unprecedented rate (for example, discovering in one second all the future technology that would take humans 200 years to develop). It is literally impossible for us to "fight" or "control" that kind of intelligence; we would all disappear in a matter of seconds if the AGI wanted us to.

DeepPharmaScience

As an artist who did not expect the recent developments in AI, I really think the corporations and people behind it should be held accountable.

"Research purposes," but commercial gains: stealing our data without our consent in their race for power.
We really need responsible leaders in the AI world, and it's very frustrating.

When they introduced an AI to the chess world, they let it play against the world chess champion, who lost the first round but, after studying the machine, won the second round against the AI.

If they had opted artists in properly, obtaining their likenesses and making sure consumers wouldn't have to worry about the output at all, I wouldn't see AI as a negative thing.

I do believe in a better future, but ethics are just as important as leading this in an equitable way. We can't leave a bad precedent like this for the future to follow.
This is truly massive exploitation in the name of technology.

natv