
Just how scary is Artificial Intelligence? A survey among the experts
Daniel Faggella writes: Recent advancements in artificial intelligence have led to fears of job elimination or the destruction of life as we know it. But are these fears legitimate? To help shed light on this, TechEmergence asked over 30 AI researchers what they believe is the most likely AI-related risk. Here are their responses.
Daniel Faggella, Singularity Weblog
When a technology becomes all-pervasive and affects the human experience, positively or negatively, it’s easy to forget that (for better or worse) technology is a tool largely under human control. Yet once a technology is released into and adopted by society, the line between control and potential chaos can become blurred.
Recent advancements in artificial intelligence have found their way into the media spotlight, and one doesn’t have to search long to find headlines alluding to the elimination of jobs or the destruction of life as we know it due to AI.
But are these fears legitimate? Given AI’s inherent risks, such fears do not seem irrational, but are the risks publicized by much of the media really the ones we should be thinking about?
To help shed light on the issue, TechEmergence recently completed a survey gathering the opinions of over 30 AI researchers, the majority of whom hold a PhD and all of whom are experts in their respective fields. In this survey, we asked the researchers to give their perspectives on the most likely AI-related risk within the next 20 years, as well as within the next 100 years.
Definite patterns emerged among the researchers’ responses. Within the next two decades, the largest group of researchers (36.36 percent) foresaw risks related to automation and the economy as most likely. Interestingly, the second most common response (18.18 percent) was that there are no inherent short-term risks.
These trends shift slightly when looking out over the next century: the most-cited risk (27.27 percent) becomes humans’ mismanagement of AI, followed by automation and economic concerns (21.21 percent). The following graphic provides a visual representation of all of the researchers’ named risks within the next 20 years, organized by category.
About the Author:
Dan Faggella is a graduate of UPenn’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and on the pressing issues and opportunities of augmenting consciousness. His articles and interviews with philosophers and experts can be found at SentientPotential.com.