You have likely seen “Artificial Intelligence could make us extinct, warn Oxford University researchers” or similar pieces in the news of late.
With the usual sound bites (shortened even more here):
- Oxford researchers: “intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.”
- Elon Musk, the man behind PayPal, Tesla Motors and SpaceX: “…our biggest existential threat”
- Bill Gates backed up Musk’s concerns: “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
- The greatest living physicist? Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
This is what is known as the “argument from authority” (a fallacy).
As the Wikipedia article on argument from authority notes:
…authorities can come to the wrong judgments through error, bias, dishonesty, or falling prey to groupthink. Thus, the appeal to authority is not a generally reliable argument for establishing facts.
This article and others like it must use the “argument from authority” fallacy because they have no facts with which to persuade you of the danger of future AI. It isn’t often that you find authors, outside of science fiction, who admit their alleged dangers are invented out of whole cloth.
The Oxford researchers attempt to dress their alarmist assertions up to sound like something better than an “appeal to authority”:
Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime),[485] and would probably act in a way to boost their own intelligence and acquire maximal resources for almost all initial AI motivations.[486] And if these motivations do not detail[487] the survival and value of humanity in exhaustive detail, the intelligence will be driven to construct a world without humans or without meaningful features of human existence.
This makes extremely intelligent AIs a unique risk,[488] in that extinction is more likely than lesser impacts. An AI would only turn on humans if it foresaw a likely chance of winning; otherwise it would remain fully integrated into society. And if an AI had been able to successfully engineer a civilisation collapse, for instance, then it could certainly drive the remaining humans to extinction.
Let’s briefly compare the statements made about some future AI with the sources cited by the authors.
[486] Omohundro, Stephen M.: “The Basic AI Drives.” Frontiers in Artificial Intelligence and Applications 171 (2008): 483.
The Basic AI Drives offers the following abstract:
One might imagine that AI systems with harmless goals will be harmless. This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves. We then show that self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all systems to protect their utility functions from modification and their utility measurement systems from corruption. We also discuss some exceptional systems which will want to modify their utility functions. We next discuss the drive toward self-protection which causes systems to try to prevent themselves from being harmed. Finally we examine drives toward the acquisition of resources and toward their efficient utilization. We end with a discussion of how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity.
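To make the economic framing in that abstract concrete, here is a minimal, purely illustrative sketch of my own (not from Omohundro’s paper) of what “representing goals as a utility function” and “approximating rational economic behavior” amount to: the agent scores candidate actions and takes whichever one maximizes expected utility. The actions and numbers are invented for illustration.

```python
# Toy sketch (not from Omohundro's paper): an agent whose "goal" is just a
# utility function.  It picks the action with the highest expected utility.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical actions with made-up outcome distributions.
actions = {
    "conserve_resources": [(1.0, 5.0)],              # certain, modest payoff
    "acquire_resources":  [(0.7, 9.0), (0.3, 2.0)],  # risky, higher payoff
}

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action)  # the "drive" is simply whatever maximizes this score
```

The model is just arithmetic over a payoff table; that is the sense of “rational economic behavior” at issue in what follows.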
Omohundro reminds me of Alan Greenspan, who had to admit to Congress that his long-held faith in the “rational economic behavior” of investors was mistaken.
From Wikipedia:
In Congressional testimony on October 23, 2008, Greenspan finally conceded error on regulation. The New York Times wrote, “a humbled Mr. Greenspan admitted that he had put too much faith in the self-correcting power of free markets and had failed to anticipate the self-destructive power of wanton mortgage lending. … Mr. Greenspan refused to accept blame for the crisis but acknowledged that his belief in deregulation had been shaken.” Although many Republican lawmakers tried to blame the housing bubble on Fannie Mae and Freddie Mac, Greenspan placed far more blame on Wall Street for bundling subprime mortgages into securities.
Like Greenspan, Omohundro has created a hedge around intelligence that he calls “rational economic behavior,” which has its roots in Boolean logic. The problem is that Omohundro, like so many others, appears to know Boole’s An Investigation of the Laws of Thought only by reputation and/or through repetition by others.
Boole was very careful to point out that his rules were only one aspect of what it means to “reason,” saying at pp. 327-328:
But the very same class of considerations shows with equal force the error of those who regard the study of Mathematics, and of their applications, as a sufficient basis either of knowledge or of discipline. If the constitution of the material frame is mathematical, it is not merely so. If the mind, in its capacity of formal reasoning, obeys, whether consciously or unconsciously, mathematical laws, it claims through its other capacities of sentiment and action, through its perceptions of beauty and of moral fitness, through its deep springs of emotion and affection, to hold relation to a different order of things. There is, moreover, a breadth of intellectual vision, a power of sympathy with truth in all its forms and manifestations, which is not measured by the force and subtlety of the dialectic faculty. Even the revelation of the material universe in its boundless magnitude, and pervading order, and constancy of law, is not necessarily the most fully apprehended by him who has traced with minutest accuracy the steps of the great demonstration. And if we embrace in our survey the interests and duties of life, how little do any processes of mere ratiocination enable us to comprehend the weightier questions which they present! As truly, therefore, as the cultivation of the mathematical or deductive faculty is a part of intellectual discipline, so truly is it only a part. The prejudice which would either banish or make supreme any one department of knowledge or faculty of mind, betrays not only error of judgment, but a defect of that intellectual modesty which is inseparable from a pure devotion to truth. It assumes the office of criticising a constitution of things which no human appointment has established, or can annul. It sets aside the ancient and just conception of truth as one though manifold. Much of this error, as actually existent among us, seems due to the special and isolated character of scientific teaching—which character it, in its turn, tends to foster. The study of philosophy, notwithstanding a few marked instances of exception, has failed to keep pace with the advance of the several departments of knowledge, whose mutual relations it is its province to determine. It is impossible, however, not to contemplate the particular evil in question as part of a larger system, and connect it with the too prevalent view of knowledge as a merely secular thing, and with the undue predominance, already adverted to, of those motives, legitimate within their proper limits, which are founded upon a regard to its secular advantages. In the extreme case it is not difficult to see that the continued operation of such motives, uncontrolled by any higher principles of action, uncorrected by the personal influence of superior minds, must tend to lower the standard of thought in reference to the objects of knowledge, and to render void and ineffectual whatsoever elements of a noble faith may still survive.
As far as the “drives” of an AI are concerned, we have only one speculation about such drives and no factual evidence. Restricting future models of AI to current misunderstandings of what it means to reason doesn’t seem like a useful approach.
[487] Muehlhauser, Luke, and Louie Helm: “Intelligence Explosion and Machine Ethics.” In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer (2012).
Muehlhauser and Helm are cited for the proposition:
And if these motivations do not detail[487] the survival and value of humanity in exhaustive detail, the intelligence will be driven to construct a world without humans or without meaningful features of human existence.
The abstract for Intelligence Explosion and Machine Ethics reads:
Many researchers have argued that a self-improving artificial intelligence (AI) could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so, and if the AI’s goals differ from ours, then this could be disastrous for humans. One proposed solution is to program the AI’s goal system to want what we want before the AI self-improves beyond our capacity to control it. Unfortunately, it is difficult to specify what we want. After clarifying what we mean by “intelligence,” we offer a series of “intuition pumps” from the field of moral philosophy for our conclusion that human values are complex and difficult to specify. We then survey the evidence from the psychology of motivation, moral psychology, and neuroeconomics that supports our position. We conclude by recommending ideal preference theories of value as a promising approach for developing a machine ethics suitable for navigating an intelligence explosion or “technological singularity.”
What follows is a delightful discussion of the difficulties of constructing moral rules of universal application and of how much of the moral guidance proposed for AIs could lead to unintended consequences. I take the essay as evidence of our imprecision in moral reasoning and of the need to do better for ourselves and any future AI. Its relationship to “…driven to construct a world without humans or without meaningful features of human existence” is tenuous at best.
For their most extreme claim:
This makes extremely intelligent AIs a unique risk,[488] in that extinction is more likely than lesser impacts.
the authors rely upon the most reliable source, themselves:
[488] Dealing with most risks comes under the category of decision theory: finding the right approaches to maximise the probability of the most preferred options. But an intelligent agent can react to decisions in a way the environment cannot, meaning that interactions with AIs are better modelled by the more complicated discipline of game theory.
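For readers unfamiliar with the distinction this footnote leans on, here is a toy contrast of my own (not from the report; all payoff numbers are invented): under decision theory the “environment” is a fixed probability distribution you maximise against, while under game theory the other agent chooses its own best response to whatever you do.

```python
# Toy contrast (my illustration, not the report's) between decision theory
# and game theory.  All payoff numbers are invented.

# Decision theory: the environment is a fixed distribution over states,
# so we simply maximise expected payoff against it.
env_probs = {"calm": 0.8, "storm": 0.2}
payoff_vs_env = {
    "act_a": {"calm": 10, "storm": -5},
    "act_b": {"calm": 4,  "storm": 3},
}
dt_choice = max(payoff_vs_env,
                key=lambda a: sum(env_probs[s] * payoff_vs_env[a][s]
                                  for s in env_probs))

# Game theory: the other agent is not a fixed distribution; it picks its own
# best reply to our move, so we evaluate each move against that best reply.
my_payoff = {       # my_payoff[my_move][their_move]
    "act_a": {"cooperate": 10, "defect": -5},
    "act_b": {"cooperate": 4,  "defect": 3},
}
their_payoff = {    # their_payoff[my_move][their_move]
    "act_a": {"cooperate": 2, "defect": 6},
    "act_b": {"cooperate": 5, "defect": 1},
}

def their_best_reply(my_move):
    return max(their_payoff[my_move], key=lambda t: their_payoff[my_move][t])

gt_choice = max(my_payoff, key=lambda m: my_payoff[m][their_best_reply(m)])

print(dt_choice, gt_choice)  # "act_a" vs "act_b": the two framings disagree
```

Granting that distinction, however, says nothing about how likely any of it is.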
For the claim that extinction by a future AI is more likely than lesser impacts, the authors have only self-citation as authority.
To summarize, the claims about future AI are based on arguments from authority, and the evidence cited by the “Oxford researchers” consists of one defective notion of AI, one exploration of the difficulty of specifying moral rules, and a self-citation.
As a contrary example, consider all the non-human inhabitants of the Earth, none of which have exhibited that unique human trait, the need to drive other species into extinction. Perhaps those who fear a future AI are seeing a reflection from a dark mirror.
PS: You can see the full version of the Oxford report: 12 Risks that threaten human civilisation.
The authors and/or their typesetter are very skilled at page layout and the use of color. It is unfortunate they did not have professional editing for the AI section of the report.