Could machines be the cause of civilization’s demise? Perhaps not yet, but some would argue that such an event might not be far off, thanks to recent developments in technology. In our technologically dominated era, most of us are no doubt familiar with the term “artificial intelligence,” or “A.I.” for short. The term was first coined by computer scientist John McCarthy (1927-2011), and refers to machines that are seemingly able to function independently of a human controller. This concept has drawn considerable concern from the public, perhaps most apparent in the large number of science fiction stories released within the past century, such as Stanley Kubrick’s 2001: A Space Odyssey and James Cameron’s The Terminator. Of course, the scenarios presented in those films have yet to occur in the real world, but a growing number of individuals have nevertheless expressed concerns about advancements in this technology, including renowned figures like Stephen Hawking.
According to Tanya Lewis of Live Science in her article “A Brief History of Artificial Intelligence,” the earliest concepts of artificial intelligence can be traced as far back as ancient Greece, in myths about artificial beings (or robots) and in the philosophical concept of “human thinking as a symbolic system,” in Lewis’s own words. According to Lewis, however, A.I. as a field wasn’t founded until a 1956 conference at Dartmouth College in New Hampshire, where John McCarthy coined the term, as mentioned earlier.
The unveiling of the concept of A.I. was initially met with optimism and excitement, but the road to making it a reality wasn’t a simple one, as the above article attests. Government funding in the field dropped from around 1974 to 1980, a period that, according to Lewis, came to be referred to as the “A.I. Winter.” This was not the end of the field, however; it would be revived later in the 1980s by the British government in an effort to compete with the Japanese. Following this “A.I. Winter” and the subsequent revival of the field, further developments in artificial intelligence would eventually lead to modern-day phenomena such as virtual reality and self-driving cars.
When one examines the impact that artificial intelligence may have on society in the future, it’s easy to see where these fears come from. As Gary Marcus expresses on The New Yorker’s website in his article “Why We Should Think About the Threat of Artificial Intelligence,” the fear of A.I. primarily stems from the possibility that machines could surpass humans in intellect by the end of the 21st century, thereby rendering us “obsolete.” Among these concerns is the possibility that we may be looking at a future where machines completely dominate fields like mathematics, engineering, and science on their own. In addition, there is the concern that machines possessing artificial intelligence might attempt to battle humanity for resources. After all, if a machine is able to think and reason on its own, who’s to say it won’t possess other desires that sentience can provide, such as self-preservation?
Perhaps the most disturbing revelation about A.I. is that these machines are already being used as weapons by the military, in the form of what most of us would refer to as “drones.” The function of these machines is described in an article from The New York Times titled “The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own.” According to this article, these machines are able to identify any human target they are authorized to kill, and can carry out the kill with an almost disturbing level of accuracy, despite having no human pilots. On the surface, this certainly sounds like a convenient method of “protecting” our sense of national security, but these weapons also carry troubling implications, including the arms races that are expected to result from them.
Aside from the fear of arms races over these A.I. weapons, there is also a growing concern over whether giving killing machines any level of sentience is a good idea. After all, if robots designed to kill are given the ability to act on their own, could they end up deviating from their primary missions and attacking anything they consider a threat? For anyone familiar with practically any science fiction story involving artificial intelligence (namely the Terminator movies), this may come across as rather silly, but according to The New York Times article, Air Force General Paul J. Selva has stated that the United States could be a mere decade away from being able to build fully independent robots that could kill without even being given orders, though he offers some relief by suggesting that the government has no current plans to build such a machine. Regardless of the U.S. government’s intentions, however, the fact that other countries are attempting to build similar weapons could make the scenarios presented in the Terminator movies a distant possibility.
Within the science community, opinions on artificial intelligence appear to be split. While some scientists are no doubt embracing the technological advancements that A.I. provides, others have expressed concerns. According to an article written by Michael Sainato for the Observer’s opinion section, well-known figures such as Stephen Hawking, Elon Musk, and Bill Gates have issued warnings about the potential dangers of artificial intelligence. Stephen Hawking, in his own words, suggested that perfecting A.I. would be “the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Elon Musk has similar concerns, believing that A.I. could very well be our “greatest existential threat” unless we take extreme caution in its development, while Bill Gates has echoed the belief that A.I. could lead to the end of human jobs. All three seem to share the same underlying fear: A.I. is something that must be kept in check, or it could end up having negative effects on the human race as a whole.
Personally, I have a rather conflicted opinion on artificial intelligence. While I am not “scared” of the possibility that machines will try to compete with us for dominance, I am somewhat wary of how much we have come to rely on them. I certainly believe that machines are very helpful tools that in many ways have made life better, but artificial intelligence is acquiring a notable level of prominence over basic human thinking. We constantly rely on things such as Google Maps and Siri to guide us, to the point where many of us likely wouldn’t know what to do without them. Artificial intelligence is an impressive tool, but it shouldn’t become something we rely on endlessly. If we place all of our trust in the current state of technology, it could lead to stagnation for civilization, preventing us from making any true progress. This admittedly ties into the concerns mentioned earlier about machines “stealing” jobs from humans, but it also ties into the concern that we may end up stagnating as a species, which would essentially be self-destructive. This leads us back to the earlier question: will A.I. be the cause of our demise, or will it instead be a tool that helps continue our growth as a species? There may be no easy answer, but it is a question that has certainly been asked within the science community.
Lewis, Tanya. “A Brief History of Artificial Intelligence.” Live Science, 4 Dec. 2014.
Marcus, Gary. “Why We Should Think About the Threat of Artificial Intelligence.” The New Yorker, 24 Oct. 2013.
Rosenberg, Matthew, and John Markoff. “The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own.” The New York Times, 25 Oct. 2016.
Sainato, Michael. “Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence.” Observer, 10 Sept. 2015.