
Slaughterbots: How autonomous weapons endanger our future

From CNet/Future of Life Institute: Autonomous weapons use Artificial Intelligence to select and engage targets without human intervention. Now, a think tank backed by scientist Stephen Hawking and entrepreneur Elon Musk, among others, offers a graphic warning against machines that decide whom to kill. This fictional video underscores how seriously these experts view the issue.

Chris Matyszczyk, CNet

When you’re smart, you can make decisions all for yourself.

That could be a problem if you’re a so-called smart weapon.

A video presented on Friday at a meeting of the Convention on Certain Conventional Weapons at the United Nations in Geneva shows the frightening power of tiny AI-equipped drones that decide for themselves whom to kill.

Released by the Future of Life Institute — which counts Elon Musk and Stephen Hawking among its backers — the video shows how technologies created to fight the bad guys can suddenly be subsumed by the not-so-good guys for appalling purposes.

The video, which underscores how seriously those who would ban killer robots view the issue, isn’t for the faint of heart or stomach.

It shows dark and unseen forces directing the AI-equipped microdrones to murder particular senators and students.

It shows how accurately these killer bots can operate and how simply they can change the course of life and history.

As it ends, you can remind yourself that it’s all a piece of well-made fiction.

But then AI expert Stuart Russell — a professor at UC Berkeley — appears to offer these far-from-soothing words: “This short film is more than just speculation. It shows the results of integrating and miniaturizing technologies that we already have.”

The Associated Press reports that the UN meeting agreed that something should be done to set limits on such potentially devastating technology.

UN meetings often decide that something should be done.

The pace of technological development, however, always seems to outpace the ability of law and government to regulate its potential consequences.

Musk is one of those who is already imploring governments to regulate all artificial intelligence.

But once it can fly and decide for itself whom to kill, why should an AI drone care what governments think?

If this isn’t what you want, please take action at http://autonomousweapons.org/


Future of Life Institute

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.
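The “statistical and economic notion of rationality” the letter refers to can be made concrete: a rational agent chooses the action whose probability-weighted (expected) utility is highest. The sketch below is illustrative only; the action names and payoff numbers are hypothetical and do not come from the letter.

```python
def expected_utility(outcomes):
    """Expected utility of an action, given a list of
    (probability, utility) pairs for its possible outcomes."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """Pick the action with the highest expected utility.
    actions: dict mapping action name -> list of (p, u) pairs."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical example: a certain modest payoff versus a gamble.
actions = {
    "safe":  [(1.0, 5.0)],                # guaranteed utility of 5
    "risky": [(0.5, 12.0), (0.5, -4.0)],  # expected utility of 4
}

print(choose_action(actions))  # prints "safe"
```

Here the agent prefers the certain payoff because its expected utility (5) exceeds the gamble’s (0.5 × 12 + 0.5 × −4 = 4); this maximize-expected-utility rule is the standard formalization of “making good decisions” in the decision-theoretic framing the letter describes.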

As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.


Hundreds of A.I. experts echo Elon Musk, Stephen Hawking in call for a ban on killer robots
Scientists who understand the potential of artificial intelligence have a significant fear: killer robots, also known as autonomous weapons. In August, more than 100 technology leaders, including Tesla and SpaceX CEO Elon Musk, signed an open letter calling on the United Nations to ban the development and use of artificially intelligent weaponry. Musk has tweeted that he fears a global arms race for artificial intelligence could cause World War III. On Monday, the famed physicist Stephen Hawking warned of the importance of regulating artificial intelligence: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.”

A global police state? Digital capitalism and the rise of authoritarian leaders
William I. Robinson, Telesur TV
William Robinson highlights some revealing statistics to expose what is really behind the rapid digitalization of global capitalism, the rise of authoritarian leaders and the creeping spread of the global police state. These, he says, are nothing but an insecure transnational capitalist elite’s attempt to insure themselves against rising inequality and a looming economic crisis.

‘The Fourth Industrial Revolution’: What it means and how to respond
Klaus Schwab, Foreign Affairs
A technological revolution is fundamentally altering the way we live, work, and relate to one another. In its scale, scope, and complexity, it is unlike anything humankind has experienced before. We do not yet know just how it will unfold, but the response to it must involve all stakeholders of the global polity.

Kevin Kelly on the 12 technological forces that will shape our future
SXSW Interactive 2016
We will soon have artificial intelligence that can accomplish professional human tasks. Our lives will be totally tracked, by ourselves and by others. Much of what will happen in the next 30 years is inevitable, driven by technological trends already in motion, and is impossible to halt without halting civilization, says Internet pioneer Kevin Kelly.


