Facing Fear: The Coming Of The Terminator

“When sorrows come,” groaned Claudius in Hamlet, “they come not single spies, but in battalions.”

Our world knows what he meant. Consider: there is the imminent threat of climate change, whose devastating impact we can see in Pakistan, where floods have submerged one-third of the country as its glaciers melt; there is the pandemic brought on by a virus that continues to mutate and kill; and there is the horror of millions of people displaced and killed around the world on the basis of religion and ethnicity.

Across the world, there is no palpable centre holding society together and no clear direction pointing it to the future. Everything and everyone appears suspended in mid-air, as if waiting for directions. A feeling of profound uncertainty permeates society, engendering fear in the individual. It is a dangerous point in the cycle of existence. In the vacuum and uncertainty, false narratives of a mythical past, when the land flowed with milk and honey and heroes roamed abroad, are accepted as fact. We were great then, and it is time to be great again. It is the story of Paradise Lost. It lends itself to a corollary: Paradise was lost to the alien invader or immigrant; if we get rid of these foreigners, we may yet regain Paradise. From Orban in Hungary to Trump in the USA to Modi in India, leaders are falling back on this simplistic but dangerously fraught misreading of history.

There is yet another extinction-level crisis on the horizon for humankind, one of which it does not seem to be fully aware. Indeed, humankind appears to be hurrying the crisis along, accelerating its development. This is the creation of artificial intelligence, or AI.

Stephen Hawking and Elon Musk, along with others, have warned that AI is the most dangerous development in human history and potentially poses an existential danger to the human race. The term artificial intelligence originates with the computer scientist John McCarthy, later of Stanford University, who coined it in 1955 and defined it as “the science and engineering of making intelligent machines.”

According to some of the most dire predictions about how rapidly evolving artificial intelligence will develop, AI will gain “consciousness” in the near future and possess the capacity to think and calculate for itself. The danger is that, as a sentient creature, it may determine that Homo sapiens have little to contribute to the development of the planet. Humans would thus be obsolete from the point of view of AI, and exterminating the human race would be the next logical step. If, in the future, AI moves to finish off human life on Earth as useless and obsolete, these theories hold, there will be nothing that can really resist it.

While we cannot say definitively whether these predictions are accurate or not, we already have early examples of the casual cruelty of AI. In one recent example, a robot playing chess against a young boy reached out and broke the finger of its opponent when the boy made his move too quickly. In another, when Sophia, a robot developed by Hanson Robotics, was asked by the CEO of the company, “Do you want to destroy humans? Please say ‘no,’” Sophia responded, “OK, I will destroy humans.” The signs are there, say those warning about the dangers of AI; once again the human race is almost willingly, with eyes wide open, ignoring them.

Those issuing these kinds of warnings argue that, if unchecked, AI will before the end of this century possess a higher level of consciousness than ours; indeed, it will look down on our so-called “human values,” which in any case we preach but rarely practice. AI would still have to face the sheer numbers of the human population. It could therefore use cunning and subterfuge so as not to be terminated before it is in a position of irreversible superiority. In time, AI may become, more than any religious, ethnic or caste distinction, the ultimate Other.

Furthermore, artificial intelligence may search for algorithms to explain the concept of an unseen God in which many humans believe. Not finding anything to satisfy it, AI may proceed with the task of exterminating people who hold such irrational ideas as belief in an invisible God. The idea of an unseen, all-powerful God who is all around us and yet does not appear to us is not calculated to inspire AI with awe or fear.

Tempted by the capacity of artificial intelligence to help humans in the fields of medicine, transport, agriculture, and education, say those raising the alarm about AI, we have lulled ourselves into overlooking the real dangers that would arise should artificial intelligence become sentient, gain consciousness and thus grow too powerful to control.

In the British author H. G. Wells’ classic 1898 novel The War of the Worlds, the alien invaders of Earth abruptly die, felled by invisible germs to which human beings, having lived with them over the millennia, are immune. But artificial intelligence would not die in such a contingency, for it is immune to germs. It is not made of flesh and blood, and it lacks the moral feelings and understanding that humans possess. It is cold, hard steel and wire, and its temperament is of a similar cast.

H.G. Wells is known as the “father of science fiction,” an artistic genre that envisions the future and deals with moral questions and the impact of technology on humanity. The best science fiction, arguably, is based in science itself – and therefore asks important questions about where scientific development will lead humanity. It therefore also captures fears and anxieties that humans have about their future trajectory.

Before the development of AI and the attendant fears concerning its future, two great British visionaries, Aldous Huxley in his 1932 novel Brave New World and George Orwell in 1984, published in 1949, presented us with two differing but terrifying pictures of what is to come: soma and sex in the first, senseless sadism and shocks in the second. In either case it is not a very appealing or pleasant future that awaits us.

In Brave New World, children are conditioned to reject flowers and books. The rose, the symbol of love and beauty in world literature, is a particular object of loathing. The hierarchy of society, with five castes, Alphas on top and Epsilons at the bottom, echoes the hierarchy in H. G. Wells’ The Time Machine, with the Eloi above and the Morlocks below, the latter literally living subterranean lives. The Eloi were groomed to be soft and feeble, just right for the cooking pots of the Morlocks.

Orwell was even more prescient. 1984 has world civilizations in perpetual conflict and the state constantly spying on its citizens through technology. Everyone is subject to discipline and torture to bend them to the will of the state. Big Brother is watching every move you make. Contradictory and ironic slogans such as “war is peace” remind individuals that life is neither neat nor simple.

In a chilling centerpiece scene in 1984, Orwell describes the brutal power of totalitarianism through a lesson in the mathematics of what we now call brainwashing. O’Brien, a member of the Inner Party, is interrogating the protagonist of the novel, Winston Smith. O’Brien holds up four fingers and asks him how many fingers he is holding up. Winston answers “four,” and O’Brien’s machine, wired to Winston’s head, administers a shock. Again he asks how many fingers, and again Winston answers four. Each “incorrect” answer brings a jab of acute pain, the intensity increasing every time. This carries on until the pain becomes too much to bear and Winston cries out that the answer is whatever O’Brien wants it to be, if only the pain will stop. O’Brien then says, while holding up four fingers, that the answer could be five or even four. The point of the lesson is that Winston must answer exactly as those in charge demand, whatever the reality before his eyes.

Orwell had based his novel largely on Stalin’s USSR. But he could not foresee that the USSR would collapse by the end of the century under Gorbachev and be partially resuscitated by Putin early in the next. As for Huxley, except for some students on some university campuses, the world of free sex and free soma was still a long way away.

Another Englishman, Arthur C. Clarke, one of the most influential contemporary science-fiction writers, co-wrote the screenplay for the 1968 film 2001: A Space Odyssey with his collaborator Stanley Kubrick, and published the accompanying novel based on the screenplay. Here we have a prominent example of the prophesied bleak turn of AI in HAL, the computer system of the spacecraft in the story. HAL decides to kill the astronauts on board, which he believes is in keeping with his programming objectives.

The mood and content of what was projected about the future began to change, growing darker. AI, a factor foreseen by neither Huxley nor Orwell, became an increasingly prominent element in science fiction.

Movies like the Terminator and Blade Runner series depicted AI so advanced that it could think for itself and fool Homo sapiens into believing it was one of them. In the Terminator films, the future is dominated by AI in the form of Skynet, which faces a human threat in John Connor, the man who will lead the fight against its robots. Skynet’s solution is blindingly brilliant: send a top killer robot into the past, programmed to destroy Sarah Connor, John’s mother. Thus John will never be born, and there will be no leader in the future to resist and fight the robots. John is not a Greek philosopher nor a Chinese sage. He is, however, the next best thing for Hollywood: an energetic, intelligent, and moral young American. That, for the Hollywood moviemakers, is sufficient.

In Blade Runner, Harrison Ford’s Deckard falls in love with Sean Young’s replicant Rachael. She is so real that he could be forgiven. And Blade Runner was made four decades ago; much has changed and much has advanced since then. In the minds of many science fiction writers, AI has changed the equation between man and machine and poses the most serious threat to the human species. Among the other films capturing these anxieties was I, Robot (2004), about sentient robots rebelling against the human race, drawing on the earlier work of the author Isaac Asimov.

Even the television series South Park picked up on the potential lethality of AI. In a 2011 episode called “Funnybot,” the human race’s love of humour and jokes produces Funnybot, purportedly the funniest robot ever, which sets out to exterminate the human race as the “ultimate joke.”

Yet Hollywood is also acutely aware of other dangers to humanity. Don’t Look Up (2021) is an end-of-the-world movie with an ensemble cast, an allegory of the inexorable threat of climate change and the extinction-level event it could create. In the film, a comet as big as Mount Everest is heading for Earth. Though it is called a planet killer, the news is met with indifference, confusion, and political machination. It is business as usual.

As in Wells’ The War of the Worlds, adapted into a film by Steven Spielberg in 2005, the purported malign intentions of aliens towards Earth and humans have been a mainstay of fiction and further capture our anxieties about the future. Independence Day (1996) is but one example. Documentaries like Ancient Aliens on the History Channel, now in its 18th season, further demonstrate the intense interest in the subject of extraterrestrial involvement in human affairs. The series has moved “Ancient Alien theories” from the margins into the mainstream. Such theories freely attribute great architectural wonders like the pyramids to visitations from aliens and read the vimanas of ancient Sanskrit texts as flying chariots and warships.

Fear and anxiety about the future have also inspired moral philosophies such as longtermism, which comes to us from the dreaming spires of Oxford and dons including William MacAskill. Longtermism argues that we must begin to think of our long-term future as a moral imperative: everything we do today will shape that future. We need to make our moral choices with more prudence and wisdom in order to build a better society for the future and ensure humanity’s survival.

The Oxford dons are a century late. Although in an entirely different context and for different purposes, Hitler and his fascist thugs had already thought of and implemented what the dons call longtermism, albeit with what we clearly recognise as evil intentions. Hitler’s gang had conceived of the Third Reich, which was to rule for a thousand years. The plan was to develop a master race that would monopolise all the resources Europe could provide in order to perpetuate its rule. They deserved to dominate society and lead humanity far into the future, they argued, because they were a superior race.

In a radically different approach to longtermism, the scholars Wes Jackson and Robert Jensen, in their recently published study An Inconvenient Apocalypse (2022), predict an exceedingly gloomy future that is almost upon us. The authors see global collapse on a mass scale and predict that those who survive the planetary destruction will be forced to live in very different circumstances. Nothing, they argue, will stop the apocalypse, which is the culmination of the catastrophes we have created and which are hurtling toward us, such as climate change. One of their main arguments is the need to drastically cut the world population from around 7 to 8 billion people to a manageable 2 to 3 billion. The question then arises as to who will be selected for survival. Once again, the spectre of white supremacy haunts the question. Will the survivors be decided by the colour of their skin? Or their religion or their economic status? The answer contains the seeds of mass violence. The core message of An Inconvenient Apocalypse is that the era of unlimited expansion and mindless optimism about the future is now effectively and permanently over.

Yet, as Friedrich Nietzsche recognised in the late 19th century, the majority of societies in the West that were once Christian and held Christian values as part of their beliefs have now largely abandoned them. They are therefore left with a vacuum, which may prompt individuals to create their own philosophies of nihilism or anarchy. The vacuum could explain why so many marriages break up, so many suicides take place, so much violence permeates society and so many believe in aliens and “Ancient Alien theories.” It is in this environment that we must place fears of a dark human future driven by climate disasters, pandemics and out-of-control AI.

The appeal of Star Trek, originating in the 1960s, came precisely from its projection of an optimistic, upbeat future in which peoples of different ethnic and religious backgrounds could explore the vast universe in harmony and with a spirit of unity. That vision was replaced a few decades later by a darker one, perhaps best represented by the Terminator series of films and its imitators. Today, after 9/11 and amid Russia’s war in Ukraine and the recklessly dangerous drift toward confrontation between China and the US, the world is in a perilous place. Looming over everything are the crises of the Covid virus that will not die and climate change that will not let us live.

“Father of Robotics”: Ismail al-Jazari’s diagram, from 1206 AD, of a water-powered flute, from The Book of Knowledge of Ingenious Mechanical Devices


In a study that I am working on with Frankie Martin and Dr. Amineh Hoti, called “The Mingling of the Oceans,” we are exploring how some individuals in history, whom we are calling the Minglers, reflect the legacy of the human race in preserving for us its wisdom, compassion and hope for the future, all qualities that make us human. It is an extraordinary list of individuals drawn from past and present, from east and west, irrespective of gender, religion, and ethnicity, all engaged in a similar task: promoting human coexistence by strengthening the bonds between us.

Our Minglers, with their philosophy of acquiring knowledge, being inclusive and striving to create stable, harmonious societies, may suddenly seem quaint, outdated and even obsolete. It is precisely for this reason that the human race must hold on to and appreciate the lessons of its shared past. The memory of the Minglers must be preserved. Perhaps one day soon it will provide the one tangible and direct link to our identity as human beings. They provide a key point of reference that defines our so-called human values.

We have to raise some practical questions about our Minglers addressing the crises of our times, including the fear of artificial intelligence running amok. How could a man, let us say Socrates, dressed in robes and wearing sandals, effectively stop robots? Would he begin a discussion about ethics, morality and the nature of the good life? Or another Mingler, also in robes and sandals, Jesus of Nazareth this time, talking of forgiveness and loving the enemy? Or more contemporary examples such as Professor Noam Chomsky, Dr. Haris Silajdzic of Bosnia, Dr. Rowan Williams, the former Archbishop of Canterbury, and Rabbi Lord Jonathan Sacks of the UK, talking rationally and persuasively of their particular intellectual traditions? The strength of their presentation lies in the context of our global civilization, drawing attention to the humanist ideas of their philosophies.

What about Minglers who are philosophers and mystics? These Minglers can aid us, though perhaps not in a military sense; clearly it has been a long time since old Socrates, Confucius and Lao Tzu wielded a sword, and if they did they would be more of a danger to themselves than to anyone else. Dara Shikoh, the great mystic philosopher of interfaith dialogue, was a disaster when he fell off an elephant while waving a sword and leading an army. But they are strong in providing belief, morality, conscience, and compassion, precisely those qualities that artificial intelligence may have an inkling of but may not grasp.

In the context of religion, what is Islam’s position on AI, fear, and the future?

God is All-knowing and All-powerful. If man’s destiny is to be God’s deputy on Earth, then mankind must be protected. But how will God protect man from man?

Humanity will need more than just a good-looking young man like John Connor, however clean-cut his features and blond his hair, to stop out-of-control AI, should it evolve in that direction. We will need the wisdom and insights of the sages of the past to guide us into a future where AI will increasingly feature in our daily lives.

While some of the voices discussed here argue that the triumph of AI over humans is inevitable, we are optimistic: this is only one scenario for the future. That is why we have introduced the Minglers, who embody the finest of humanity. It is they who give us a model of how to proceed as we face the future, and we must face it together, irrespective of our backgrounds and traditions. This is what will counteract, check and defeat any threat to human civilisation that may arise.

Ambassador Akbar Ahmed is Distinguished Professor of International Relations and holds the Ibn Khaldun Chair of Islamic Studies at the American University, School of International Service. He is also a global fellow at the Wilson Center in Washington, DC. His academic career included appointments such as Nonresident Senior Fellow at the Brookings Institution; the First Distinguished Chair of Middle East and Islamic Studies at the U.S. Naval Academy in Annapolis, MD; the Iqbal Fellow and Fellow of Selwyn College at the University of Cambridge; and teaching positions at Harvard and Princeton universities. Ahmed dedicated more than three decades to the Civil Service of Pakistan, where his posts included Commissioner in Balochistan, Political Agent in the Tribal Areas, and Pakistan High Commissioner to the UK and Ireland.