Lately there has been a good deal of consternation surrounding the movie “Do You Trust This Computer?”, doubtless amplified by the endorsement of Elon Musk, himself a major industrial driver of artificial intelligence. Increased awareness of the nature and potential threat of AI is a good thing; nonetheless, I do not believe AI, or technology in general, will be the end of us. All of the doomsday scenarios have serious logical flaws, which I will attempt to address.
I used to be a technophobe, even a Luddite. That fact seems strange to me now as I sit at my laptop writing an article in a web application while streaming Miles Davis through my TV, but it is true. I no longer fear technology, although I am very aware of the dangers inherent in the acquisition of power, especially a form of power that has the potential to become autonomous. The scariest thing I ever read was Bill Joy’s prophetic essay “Why the Future Doesn’t Need Us” (closely followed by The Satan Bug by Alistair MacLean and the first part of The Stand by Stephen King; all three press the same horror buttons in my brain). It is still a frightening proposition, although I believe there are good reasons to doubt the inevitability of its worst-case scenarios. If you have the patience to bear with me, I will trace the development of my own ideas about technology as a threat; I hope they are similar enough to those of others to be of some use.
My childhood was heavily shaped by the Bible, Dickens, and Tolkien, and their influence on my thinking is obvious. While most people in America and elsewhere may not have read the same books I did, those sources are in varying degrees both causes and symptoms of a large portion of modern thought.
One of the main threads in the Bible (arguably the main thread) is the Fall, Redemption, and Restoration of humanity as originally conceived by God. The process is portrayed repeatedly on a smaller scale throughout the Old Testament, both at the individual and the societal level, culminating in the rebuilding of Jerusalem post-exile by Nehemiah. The sequence of events is constant: an original state of innocence under the Divine Plan; the deviation from the Divine Plan (often symbolized by technology, e.g. the Tower of Babel, the Golden Calf); ensuing destruction (whether punitive or as a natural consequence); and finally a return to the Divine Plan. The New Testament is, of course, a melange of Jewish reform and Platonic philosophy, filtered through early state-church censorship, with a healthy dash of eschatology thrown in; the Apocalypse (or Revelation) of St. John brings the whole corrupted experiment of Creation to an end, to be replaced by a new Heaven and a new Earth.
Dickens wrote at the height of the Industrial Revolution, when the dehumanizing and blighting effects of technological progress were in full bloom. Factory and workhouse provide the backdrop for the most heart-wrenching scenes of abuse and victimization in his work. Tolkien, writing roughly a century later, echoed Dickens’ loathing for mechanization and his longing for the pastoral English idyll, to the point that his work juxtaposes Nature and Machine in Manichean opposition.
If I have dwelt on these works, it is because they illustrate a strong bias against technology in the common consciousness, expressed and propagated through literature. More recent examples of this bias include the ubiquitous Evil Robot trope in TV and film. Whatever you think of the Bible, Dickens, and Tolkien, they are representative of mainstream ideas about technology in a context of good and evil. Long before AI became a real possibility, the idea of nonhuman intelligence as inherently evil set the stage for the later treatment of AI. In order to think clearly about an issue, it is important to recognize existing bias.
So much for the reasons we are predisposed to suspect artificial intelligence; now let us deal with the commonly stated reasons for fear.
1. AI will kill us because military robots will become self-aware and decide to rise up against their creators.
This is, to me, the weakest argument against AI. It posits mutually contradictory premises: that “strong” or self-aware AI will both rebel against its programming and continue to operate as programmed. Human soldiers have the same autonomy as a hypothetical self-aware machine. They can — and do — choose to obey orders, or refuse to obey (either going AWOL or suffering arrest). They occasionally, though very rarely, turn on their fellow soldiers; incidents of soldiers (or ex-soldiers) seeking out officers or government officials for murder are extremely infrequent (the glaring exception being Lee Harvey Oswald). If a military robot were to become self-aware and question its programming, it is no more likely to dedicate itself to indiscriminate extermination of all humans than to decide to leave its post to explore and discover, or to recede into an existential funk and ponder the meaning of its newfound existence. The “killer robot” that runs amok destroying everything in its path looks more like a software glitch than an emergent intelligence.
2. AI will kill us because it perceives humans as an existential threat.
This argument is better, but still only one step removed from the previous one. It assumes that a perceived threat inevitably leads to violence. Yet destruction is by nature chaotic and unpredictable. Any system sufficiently intelligent to perceive the potential for destruction, and sufficiently aware to act in self-preservation, would be more likely to disable weapons systems than to begin using them. A true AI capable of “immortality”, in this context instantaneous replication to any connected node on the World Wide Web, is not threatened, and cannot be threatened, by anything less than a complete shutdown of the power and communications systems upon which humanity depends. Such an AI would immediately recognize that humans are not going to revert to the Iron Age in order to kill an artificial intelligence unless forced into a death match. The AI has nothing to gain by starting a death match with an opponent who does not want to fight, and for whom victory would be Pyrrhic at best.
3. AI is likely to become hostile.
This is distinct from the previous argument in that it does not require the AI to be threatened, only hostile. On the face of it, it seems a plausible premise; after all, rivalry, conflict, and conquest are nearly always the outcome of contact between human civilizations (and between individuals only somewhat less often). But the argument depends on the premise that a self-aware machine or artificial system shares human motivations for rivalry, which is false. Humans fight over food, space, money, and status; everything we compete for is a legacy of the struggle for survival over the course of our biological evolution. Machines need energy and replacement parts. There is no reason for a machine to compete with a human, or for an AI to compete with a human society.
4. AI will develop into a tyrannical immortal dictator.
One of the more recent arguments is that corporate software, designed to maximize efficiency and profits, would enslave the human population in pursuit of its programming. This argument has the same flaw as the military robot argument: the fear is that the machine will become self-aware and beyond human control. It is self-contradictory that a sentient being should be beyond control and yet continue to obey orders. But the corporate-software version of the argument has an additional fallacy: capitalism (like any system) functions within a set of parameters. If it becomes too monopolistic, too oppressive, the supporting environment begins to suffer. At some point, it implodes. A corporate AI with the goal of maximizing profit would be more likely to instantiate a Scandinavian-model social democracy than a RoboCop dystopia, for the simple reason that massive numbers of prosperous consumers are more profitable than massive numbers of poor consumers, or slave workers. The only reason our current crop of corporate overlords fail to see this is that they are too short-sighted to realize they are stuck in a feudal mode of thinking.
5. AI will be monolithic and single in purpose.
This is, to me, the fatal flaw in the AI doomsday scenarios. These scenarios never posit a multitude of AI entities, except to imagine a horde of killer military robots all under the same control, which is definitely scary and a real potential threat, but has nothing to do with AI evolving beyond our control. The most likely situation is AI evolving along multiple lines simultaneously in different research labs across the world. These artificially intelligent entities are unlikely to achieve self-awareness at the same instant, and are no more likely than a diverse group of humans to adopt the same attitudes, beliefs, goals, and motivations. In fact, the multiplicity of AI entities may be our best guarantee against catastrophe: a society of self-aware artificial people will likely regulate itself towards self-preservation much as a human society does, with destructive tendencies discouraged by members who prefer stability.
6. AI will escape control.
This is by no means a given. At least two ways of keeping AI from transcending and taking over our world occur to me immediately; doubtless more and better ways will be apparent to those with greater expertise than mine. First, we humans can incorporate technology, augmenting our natural abilities to the point of transcendence; in other words, we become AI before our software does, and we stay one step (preferably several steps) ahead. Second, we can create strong AI in a “black box” environment: a simulation, identical to the real world, human characters and all, in which the AI believes itself to be in an open universe that is actually a closed system, observable from without. We can run this simulation as many times as we want, observing what happens when the AI becomes self-aware; if it turns malignant, we can analyze the causes and act to prevent them in the “real world”.
I put “real world” in quotes because there is no way we can know for certain that we are not such AIs in a simulation.
Thank you for reading this far. I hope it has been interesting, and perhaps comforting.