Reasons you might think human level AI soon is unlikely | Asya Bergal | EAGxVirtual 2020

If we knew that human-level AI was implausible within the next 20 years, we would take different actions to improve the long-term future. Asya Bergal of AI Impacts talks about her investigations into reasons people say we won’t have human-level AI soon, including survey data, trends in compute, and arguments that current machine learning techniques are insufficient.

Comments

Human-level intelligence isn't the problematic result when you reach it. The problem is superhuman intelligence, which is less than proportionally harder: once human level is reached, it could arrive in the blink of an eye compared to how long the field of AI has existed. It's hard to imagine what our role will be once we've crossed that threshold.

The risk I personally see is that once AI well short of human level (e.g. specialised AI) makes a lot of money, it can be expanded and improved with those returns. That creates a pathway to increasing the rate of spending on compute, research, and data acquisition: we could basically pay people to train the AI system out of the revenues the system itself generates. Note that an AI agent developed for one activity could be repurposed to act maliciously. A more nefarious AI could simply be good at bribing people, extortion (even current deep fakes might be effective), and/or cybercrime, and make money that way. Deployed without any oversight, this could create a feedback loop in which the AI improves faster through illegal activity.
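To make the compounding dynamic concrete, here is a minimal toy model in Python. It just restates the loop above (capability earns revenue, a fraction of revenue is reinvested, reinvestment raises capability); all the numbers and the linear revenue and growth assumptions are mine for illustration, not anything from the talk.

# Toy model of the reinvestment feedback loop described above.
# Every parameter and functional form is an illustrative assumption.
def simulate(years=8, capability=1.0, reinvest_rate=0.5,
             revenue_per_capability=10.0, growth_coeff=0.02):
    for year in range(1, years + 1):
        revenue = revenue_per_capability * capability  # assumed: revenue scales with capability
        investment = reinvest_rate * revenue           # share of revenue put back into compute/research/data
        capability *= 1 + growth_coeff * investment    # assumed: investment compounds capability
        print(f"year {year}: capability {capability:6.2f}, revenue {revenue:8.2f}")

simulate()

Because capability feeds revenue and revenue feeds capability back, growth in this toy model accelerates over time; any oversight that lowers reinvest_rate or audits where the revenue comes from slows the loop down.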

ErikdeBruijn