Lennox, John C. (2024-11-25). 2084 and the AI Revolution, Updated and Expanded Edition: How Artificial Intelligence Informs Our Future. Zondervan: The advances in the field of artificial intelligence (AI) in the four years since I wrote the first edition of this book not only justify its revision but also require a considerably expanded edition in order to try to keep pace with what is an unprecedented phenomenon of the early twenty-first century. AI is now on everyone’s lips as a tech buzzword and even became the Big Theme at the 2024 World Economic Forum (WEF) at Davos. In 2023, more than 25 percent of startups were in AI, and worldwide investment is expected to reach $200 billion by 2025. The WEF reckons that up to 40 percent of jobs worldwide will be affected by AI in one way or another, with many being replaced and others complemented. The number of published research papers on AI is growing exponentially, doubling every two years; by 2021, there were 4,000 new papers per month on arXiv! It is clearly impossible to keep up with such an exploding field, and yet I dare express the hope that my readers may feel that I have done at least some justice to what has been happening since I wrote the first edition of this book in 2020. My aim is to do two things – firstly, to inform my readers, and, secondly, to explore and reflect on the possible implications for us of this revolutionary new area of technology. In a 2009 debate, the biologist E. O. Wilson said: “The real problem of humanity is the following: We have paleolithic emotions, medieval institutions, and godlike technology. And it is terrifically dangerous.” He continued: “Until we answer those huge questions of philosophy that the philosophers abandoned a couple of generations ago – Where do we come from? Who are we? Where are we going? – rationally, we’re on very thin ground.”1 I think, however, that Wilson is rather hard on philosophers. 
I know quite a few eminent people, and not only philosophers, who have been thinking hard about these questions for a long time. In his Critique of Pure Reason, the famous German philosopher Immanuel Kant said that the three most important questions for any human being are What can I know? What must I do? And what may I hope for? They are the key existential questions we all ask as we seek meaning in life. This book is an attempt to address them in the context of AI. Related questions abound. What is the relationship of the mind to the brain? Is real intelligence always coupled with consciousness? What is consciousness anyway? Will we be able to construct artificial consciousness and life? Will humans so modify themselves that they become something else entirely – either by genetic engineering, cyborg technology, or both? Will we eventually make superintelligences? And, if we do, what will our relationship to such entities be? Shall we control them, work alongside them, or be controlled or even replaced or destroyed by them? Or shall we gradually merge with machines? Is life in a metaverse a step in that direction, whatever the metaverse may eventually prove to be?
PART 1: MAPPING THE TERRITORY Chapter 1 explores famous dystopian novels, some of which, such as Orwell’s 1984, depict grim technology-dominated totalitarian surveillance states. That leads us to AI, which is often used for surveillance today. We then briefly trace the history of information technology, leading up to the pioneering work of the mathematical genius Alan Turing, the father of the computer, whose ideas lie behind efforts to construct a “thinking machine.” From there we introduce narrow and general AI, noting that the latter features not only in science fiction but also increasingly in the thinking of some top-level scientists and engineers who are now making a serious attempt to construct it. Chapter 2 explores AI in more depth, considering the nature of intelligence and what machine intelligence is understood to mean. We then take a brief look at the history and trace the ups and downs of the AI program aimed at the construction of machines that imitate what humans can do. We introduce the ideas of neural networks, algorithms, and machine learning. We finally give some examples of AI in common use today. Chapter 3 turns first to ethics. Just as a knife can be used for surgery or murder, all technology raises ethical problems. AI is no exception – indeed some of the ethical problems that now face us are both complicated and urgent as they have an impact on many people’s lives. They range widely – from invasion of privacy by surveillance to deceit by digital assistants. We list some common ethical systems used to determine what ethical principles should be built into AI systems and their regulation (for example, the Asilomar principles). We also look at ethical guidance for robots. 
The chapter concludes by observing that, because of the vocabulary used in the AI field – “artificial intelligence,” “machine learning,” “deep learning,” and so on – it is very easy to fall prey to the idea that we are well on the way to constructing machines that think like humans. Machines do not have minds and cannot perceive, and we conclude the chapter by citing the recent fascinating work by neuroscientist Iain McGilchrist on the different modes of perception employed by the two hemispheres in the human brain. PART 2: TWO FUNDAMENTAL QUESTIONS: WHERE DO WE COME FROM? WHERE ARE WE GOING? To engage as wide an audience as possible, we look at these questions through the eyes of two very different authors. Chapter 4 considers the first question – Where do we come from? – in the context of science and technology as depicted in the book Origin by the popular fiction writer Dan Brown. We shall probe his ideas for credibility and scientific veracity. Chapter 5 explores the second question – Where are we going? – in light of Brown’s idea of the merging of humans with machines, a notion on the agendas of serious academics such as Astronomer Royal Lord Martin Rees and the late physicist Stephen Hawking. It is the main goal of transhumanism – the drive to modify human beings and create a superintelligence by means of bioengineering and cyborg engineering. We discuss Ray Kurzweil’s concept of “the singularity” and its origins in ideas of an “ultra intelligent” machine. We describe the current thinking of a wider range of scientists and engineers and their expectations for a technological future that will shape our lives.
Chapter 6 begins by listing some of the familiar ways in which AI is shaping us today in a variety of activities and enterprises – from emails, digital assistants, robotics, autonomous vehicles, manufacturing, and medicine to the triumphant use of AI in solving some of the most intractable practical problems in science, such as protein folding. Research proceeds apace and makes for a bright future. There are downsides, however. Chapter 7 turns first of all to one of the major problems created by increased automation in general and AI in particular – job losses when people are replaced by machines that can do their work faster, more efficiently, and more economically. We find that at all stages, from job interview to job loss, AI can create problems that urgently need to be addressed in technologically dependent economies. The relationship of the human to the machine is high on the agenda – or should be. The second topic in chapter 7 is the way in which AI already has a profound effect on education – from the use of intrusive monitoring technology in schools in China to the use of ChatGPT in essay writing. That leads us to discuss the GPT revolution, made possible by the creation of large language models (LLMs) developed by Sam Altman’s company OpenAI. These models are trained on billions of words and can generate convincing text on many topics in response to human prompts. The competence of such systems leads to the third topic in chapter 7: fears about AI getting out of (human) control, which have led to calls by many of the heavyweights in the AI industry for a moratorium on research. We discuss the need for and the nature of legislation to control errant AIs. The final topic in chapter 7 is the military use of AI in the deployment of autonomous weapons, which raises an array of ethical problems that, once more, demand urgent attention. 
Chapter 8 concerns the pressing issue of citizen privacy and rights in light of the fact that most of us – smartphone users in particular, which means most people – are voluntarily sharing data about ourselves and our habits, conversations, friends, and many other things with megacorporations. This citizen data can be used for various purposes beyond our control that are not always beneficial – at least not to us. We look first at surveillance capitalism – the use of our data without our permission for commercial gain. We then turn to what I have called surveillance communism, which, as the name suggests, mainly concerns what is going on in China – although it is being exported to many other countries. There is the attempt at data-driven governance through the social credit system, in which the population is comprehensively monitored for trustworthiness, as measured by their compliance with the state ideology: good (in other words, state-approved) behavior is rewarded, whereas bad behavior is punished.
We next turn to an even more disturbing use of intrusive AI surveillance technology in the Chinese province of Xinjiang, where the Uyghur population is faced with what has now been classified by the UN as attempted genocide. An estimated 800,000 to 2 million people have been imprisoned by the Chinese government in camps euphemistically called vocational and educational centers. Those not detained are subjected to relentless surveillance, religious restrictions, forced labor, and forced sterilizations. The objective is to wipe out their culture. All of this is a warning to other countries, as a lot of the technology used in Xinjiang is being exported to enable other governments to tighten their control. Hence the next section of chapter 8 deals with surveillance in the West and how it is shaping our culture and also showing dangerous trends. That takes us to the concluding section of chapter 8, where we discuss one of the most recent and most advanced uses of AI – deepfakes. We discuss the threat they pose to democratic institutions and to individuals and the danger of our no longer being able to tell truth from falsehood, or to distinguish a human artifact from something produced by an AI. I raise the question of how those of us with moral and religious convictions should respond to all of this. Chapter 9 takes us into the realm of virtual reality (VR) and the metaverse. We explore the pluses and, even more, the minuses of immersing ourselves in a virtual world where we can conceal our identities and give vent to our fantasies of every kind, healthy or not. We follow the various attempts to create a metaverse – a totally immersive experience of VR arising from gaming technology with the addition of AI, which will take us into a world where we live by proxy through an avatar of ourselves and indulge in anything we desire. We evaluate the dangers of VR platforms such as Second Life and the problems that such experiences create for our daily lives in the real world. 
Chapter 9 concludes by discussing the flood of internet pornography that is damaging human relationships and robbing children of their innocence while making millions of dollars for ruthless exploiters of the media. We ask how we can protect ourselves and our children from this onslaught, in which they can be groomed online and sucked into gruesome encounters in VR. Chapter 10 takes us from the current use of AI to the much more speculative quest for artificial general intelligence (AGI) and the desire and attempt to upgrade humans. This is the transhumanist agenda famously proposed in the bestselling book Homo Deus: A Brief History of Tomorrow by the Israeli historian Yuval Noah Harari. In it he maps out an ambitious transhumanist vision for the twenty-first century. The agenda is, first, to solve the “problem” of physical death by technical means and, second, to enhance human happiness by using bioengineering and cyborg engineering to merge humans with AI tech and turn them into superhuman, superintelligent “gods.” Far-fetched as these goals may seem – and we give reasons for thinking so – they are shared by many others whose views we engage. We discuss the extreme but common reductionist view that human beings are nothing but a bunch of algorithms. We report the skepticism that neuroscientists express about the idea of prolonging life or finding a “cure” for death, considering the extreme and expensive lengths to which some people are going to preserve their bodies by cryonics, hoping that science will eventually find a way to resuscitate their brains and identities.
Regarding the second of Harari’s agenda items, we look at the attempt to move from the organic to the inorganic in the quest to find a more durable substrate for future superintelligent beings (transhumans, or posthumans, but certainly no longer simply humans). This takes us to a consideration of C. S. Lewis’s anticipation of the dangers surrounding such developments in his brilliant dystopian science fiction novel That Hideous Strength, written in 1945. From there we turn to the views of people who think that humans have had their day and that, far from being upgraded, they should cease to exist once and for all and be replaced by inorganic intelligences. Chapter 10 concludes by considering one implication of the above – longtermism, the almost incredible suggestion that we should essentially abandon all attempts to alleviate poverty and concentrate all our wealth on preventing existential threats to the intelligent beings that, in the view of some, may exist in the distant future – a total violation of fundamental moral principles concerning the value of human life. We look at historical precedents for such views, reverting to the works of C. S. Lewis. Chapter 11 takes us into another dark realm of possible developments in AGI, where vast sums of money are now being invested by companies such as Mark Zuckerberg’s Meta. We take a closer look at what is going on, leading us to ask about the worldview driving much of this – the atheistic worldviews of naturalism and materialism. One of the characteristics of materialism is that it reduces minds to brains and regards brains as computers. We adduce evidence from leading scientists to challenge this view, arguing that materialism fails because matter is not the ultimate reality – it is not even the prime reality. We also maintain that simulated intelligence is far from real intelligence by outlining John Searle’s famous Chinese room thought experiment. 
We also cite the work of Nobel Prize–winning mathematician Roger Penrose arguing that the brain is not a computer, as well as Iain McGilchrist’s neuroscience that supports the same conclusion. We eventually return to Harari and his idea that the transhumanist goal can be achieved by what he sees as taking evolution into our own hands and producing future intelligences by intelligent design. That takes us back to current attempts to fuse human brains with AI technology, and we round off the chapter by considering some of the proposals for future AGI scenarios, including those of world domination – many of them anticipated in cult films such as The Matrix and The Terminator. We look at physicist Max Tegmark’s twelve scenarios in his book Life 3.0, concentrating on his Prometheus scenario of a world-dominating totalitarian regime driven by AI technology. One memorable detail of this scenario is that the world ruler forces all citizens to wear a bracelet that has many functions – it acts as a surveillance gadget, transmitting all kinds of information to a central authority so that Prometheus becomes aware of any deviation. The diabolical thing about the bracelet is that it incorporates a mechanism that can instantly execute the wearer by lethal injection if he or she doesn’t comply with the Prometheus ideology. Two of Tegmark’s AGI scenarios even have the word God in their titles, and Tegmark observes that many people like his “Protector God” scenario because of its similarity to what is advanced by the world’s major monotheistic religions.1 That is not surprising, since members of the Abrahamic religions already believe in a superintelligent being – the Creator God. It therefore makes sense to turn to the theistic worldview of the Bible to see what it has to contribute to our topic. The obvious place to begin is the book of Genesis.
Chapter 12 begins our consideration of the teaching of Genesis on the nature of human beings. We do this fully aware of the fact that Harari and many others think that science has consigned the biblical worldview to the scrap heap of history. In common with many scientists, however, I reject that view on rational grounds and begin by tracing how atheism has come to dominate the Western academy through the Renaissance and Enlightenment to the modern era. We then argue that science sits comfortably with biblical theism but actually conflicts with atheism, a circumstance that legitimizes revisiting a source of inspiration that we have been drawing on for millennia. Referring to the Asilomar principles for the governance of AI, we introduce a Christian manifesto with the same aim but one based on the fundamental value-giving teaching that human beings are of infinite dignity and worth because they are made in the image of God their Creator. We next explore Genesis in some depth to find out what it has to teach about human beings, starting with the fact and nature of creation in Genesis 1 and moving on to the fascinating account in Genesis 2 of what makes human life the wonderfully varied thing it is – our constitution, aesthetic sense, curiosity, work ethic, and capacity for relationships with others. Chapter 13 focuses on that most important part of what it means to be human – our moral sense – by taking a fresh look at the garden of Eden narrative. We consider the basics of what morality is and how it was defined in terms of humans’ intelligent verbal relationship with God. Next we explore how the first humans were tempted by an alien being (!) to eat the fruit of the Tree of Knowledge of Good and Evil (not a tree of knowledge, by the way – God wants us to have knowledge). They chose to rebel against God, bringing disaster on themselves and the world, from which we have suffered ever since. 
All of this turns out to be highly relevant to the issues arising from the desire to modify human beings by AI and other technology. For instance, the human rebellion against their creator, God, has led people to think about the danger of AI rebelling against us, its human creators. This raises the “control problem” that attracts a lot of attention today for obvious reasons. That leads to thinking once more about what moral code to build into AI and the deeper question whether absolute standards exist. We bring the chapter to an end by considering the other tree in the garden of Eden – the Tree of Life, which is relevant to the transhumanist dream of physical immortality. We note that Harari has apparently become skeptical of that dream, seeing it as merely an expression of what he conceives to be the flawed philosophy of liberal humanism, and in particular its belief in human free will, which Harari rejects. He also rejects humanism’s belief that we are individuals. I explain that he is wrong on both counts, not only because I am a Christian but also because his views are illogical. We finally consider the danger we are in if, in an age of AI, we lose our hold on the freedom of the will and on the fact that we are individuals, because losing them weakens much of our defense against a gradual erosion of our identity and autonomy. In the end we lose all meaning in the incessant data flow that engulfs us. Chapter 14 introduces us to the true Homo Deus, one completely different from Harari’s conceptualization. We start with the fact that Mo Gawdat, formerly of Google and author of Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, believes that a superintelligent alien has already arrived on Earth in the form of an incredible being, a child, but not biological in nature. It is, of course, an AI. 
We consider the resonance between this statement and a much older story about the advent of another superintelligent being, but in this case a real human child with unique powers, born of a woman but with a divine origin – Jesus Christ, the Man who is God.
Lennox, John C. (2024-11-25). 2084 and the AI Revolution, Updated and Expanded Edition: How Artificial Intelligence Informs Our Future . Zondervan: The advances in the field of artificial intelligence (AI) in the four years since I wrote the first edition of this book not only justify its revision but also require a considerably expanded edition in order to try to keep pace with what is an unprecedented phenomenon of the early twenty-first century. AI is now on everyone’s lips as a tech buzzword and has even become the Big Theme at the 2024 World Economic Forum (WEF) at Davos. In 2023, more than 25 percent of startups were in AI, and investment worldwide is expected to reach $200 billion by 2025. The WEF reckons that up to 40 percent of jobs worldwide will be affected by AI in one way and another, with many being replaced and others complemented. The number of published research papers on AI is growing exponentially, doubling every two years; by 2021, there were 4,000 new papers per month on arXiv! It is clearly impossible to keep up with such an exploding field, and yet I dare express the hope that my readers may feel that I have done at least some justice to what has been happening since I wrote the first edition of this book in 2020. My aim is to do two things – firstly, to inform my readers, and, secondly, to explore and reflect on the possible implications for us of this revolutionary new area of technology. In a 2009 debate, the biologist E. O. Wilson said: “The real problem of humanity is the following: We have paleolithic emotions, medieval institutions, and godlike technology. And it is terrifically dangerous.” He continued: “Until we answer those huge questions of philosophy that the philosophers abandoned a couple of generations ago – Where do we come from? Who are we? Where are we going? – rationally, we’re on very thin ground.”1 I think, however, that Wilson is rather hard on philosophers. 
I know quite a few eminent people, and not only philosophers, who have been thinking hard about these questions for a long time. In his Critique of Pure Reason, the famous German philosopher Immanuel Kant said that the three most important questions for any human being are What can I know? What can I hope for? And what must I do? They are the key existential questions we all ask as we seek meaning in life. This book is an attempt to address them in the context of AI. Related questions abound. What is the relationship of the mind to the brain? Is real intelligence always coupled with consciousness? What is consciousness anyway? Will we be able to construct artificial consciousness and life? Will humans so modify themselves that they become something else entirely – either by genetic engineering, cyborg technology, or both? Will we eventually make superintelligences? And, if we do, what will our relationship to such entities be? Shall we control them, work alongside them, or be controlled or even replaced or destroyed by them? Or shall we gradually merge with machines? Is life in a metaverse a step in that direction, whatever the metaverse may eventually prove to be?
PART 1: MAPPING THE TERRITORY Chapter 1 explores famous dystopian novels, some of which, such as Orwell’s 1984, depict grim technology-dominated totalitarian surveillance states. That leads us to AI, which is often used for surveillance today. We then briefly trace the history of information technology, leading up to the pioneering work of the mathematical genius Alan Turing, the father of the computer, whose ideas lie behind efforts to construct a “thinking machine.” From there we introduce narrow and general AI, noting that the latter features not only in science fiction but also increasingly in the thinking of some top-level scientists and engineers who are now making a serious attempt to construct it. Chapter 2 explores AI in more depth, considering the nature of intelligence and what machine intelligence is understood to mean. We then take a brief look at the history and trace the ups and downs of the AI program aimed at the construction of machines that imitate what humans can do. We introduce the ideas of neural networks, algorithms, and machine learning. We finally give some examples of AI in common use today. Chapter 3 turns first to ethics. Just as a knife can be used for surgery or murder, all technology raises ethical problems. AI is no exception – indeed some of the ethical problems that now face us are both complicated and urgent as they have an impact on many people’s lives. They range widely – from invasion of privacy by surveillance to deceit by digital assistants. We list some common ethical systems used to determine what ethical principles should be built into AI systems and their regulation (for example, the Asilomar principles). We also look at ethical guidance for robots. 
The chapter concludes by observing that, because of the vocabulary used in the AI field – “artificial intelligence,” “machine learning,” “deep learning,” and so on – it is very easy to fall prey to the idea that we are well on the way to constructing machines that think like humans. Machines do not have minds and cannot perceive, and we conclude the chapter by citing the recent fascinating work by neuroscientist Iain McGilchrist on the different modes of perception employed by the two hemispheres in the human brain. PART 2: TWO FUNDAMENTAL QUESTIONS: WHERE DO WE COME FROM? WHERE ARE WE GOING? To engage as wide an audience as possible, we look at these questions through the eyes of two very different authors. Chapter 4 considers the first question – Where do we come from? – in the context of science and technology as depicted in the book Origin by the popular fiction writer Dan Brown. We shall probe his ideas for credibility and scientific veracity. Chapter 5 explores the second question – Where are we going? – in light of Brown’s idea of the merging of humans with machines, a notion on the agendas of serious academics such as Astronomer Royal Lord Martin Rees and the late physicist Stephen Hawking. It is the main goal of transhumanism – the drive to modify human beings and create a superintelligence by means of bioengineering and cyborg engineering. We discuss Ray Kurzweil’s concept of “the singularity” and its origins in ideas of an “ultra intelligent” machine. We describe the current thinking of a wider range of scientists and engineers and their expectations for a technological future that will shape our lives.
Chapter 6 begins by listing some of the familiar ways in which AI is shaping us today in a variety of activities and enterprises – from emails, digital assistants, robotics, autonomous vehicles, manufacturing, and medicine to the triumphant use of AI in solving some of the most intractable practical problems in science, such as protein folding. Research proceeds apace and makes for a bright future. There are downsides, however. Chapter 7 turns first of all to one of the major problems created by increased automation in general and AI in particular – job losses when people are replaced by machines that can do their work faster, more efficiently, and more economically. We find that at all stages, from job interview to job loss, AI can create problems that urgently need to be addressed in technologically dependent economies. The relationship of the human to the machine is high on the agenda – or should be. The second topic in chapter 7 is the way in which AI already has a profound effect on education – from the use of intrusive monitoring technology in schools in China to the use of ChatGPT in essay writing. That leads us to discuss the GPT revolution, made possible by the creation of large language models (LLMs) developed by Sam Altman’s company OpenAI. These models are trained on billions of words and can generate convincing text on many topics in response to human prompts. The competence of such systems leads to the third topic in chapter 7: fears about AI getting out of (human) control, which have led to calls by many of the heavyweights in the AI industry for a moratorium on research. We discuss the need for and the nature of legislation to control errant AIs. The final topic in chapter 7 is the military use of AI in the deployment of autonomous weapons, which raises an array of ethical problems that, once more, demand urgent attention. 
Chapter 8 concerns the pressing issue of citizen privacy and rights in light of the fact that most of us – smartphone users in particular, which means most people – are voluntarily sharing data about ourselves and our habits, conversations, friends, and many other things with megacorporations. This citizen data can be used for various purposes beyond our control that are not always beneficial – at least not to us. We look first at surveillance capitalism – the use of our data without our permission for commercial gain. We then turn to what I have called surveillance communism, which, as the name suggests, mainly concerns what is going on in China – although it is being exported to many other countries. There is the attempt at data-driven governance through the social credit system, in which the population is comprehensively monitored for trustworthiness, as measured by their compliance with the state ideology: good (in other words, state-approved) behavior is rewarded, whereas bad behavior is punished.
We next turn to an even more disturbing use of intrusive AI surveillance technology in the Chinese province of Xinjiang, where the Uyghur population is faced with what has now been classified by the UN as attempted genocide. An estimated 800,000 to 2 million people have been imprisoned by the Chinese government in camps euphemistically called vocational and educational centres. Those not detained are subjected to relentless surveillance, religious restrictions, forced labour, and forced sterilizations. The objective is to wipe out their culture. All of this is a warning to other countries, as a lot of the technology used in Xinjiang is being exported to enable other governments to tighten their control. Hence the next section of chapter 8 deals with surveillance in the West and how it is shaping our culture and also showing dangerous trends. That takes us to the concluding section of chapter 8, where we discuss one of the most recent and most advanced uses of AI – deepfakes. We discuss the threat they pose to democratic institutions and to individuals and the danger of our no longer being able to tell truth from falsehood or between what is a human artifact or something produced by an AI. I raise the question of how those of us with moral and religious convictions should respond to all of this. Chapter 9 takes us into the realm of virtual reality (VR) and the metaverse. We explore the pluses and, even more, the minuses of immersing ourselves in a virtual world where we can conceal our identities and give vent to our fantasies of every kind, healthy or not. We follow the various attempts to create a metaverse – a totally immersive experience of VR arising from gaming technology with the addition of AI, which will take us into a world where we live by proxy through an avatar of ourselves and indulge in anything we desire. We evaluate the dangers of VR platforms such as Second Life and the problems that such experiences create for our daily lives in the real world. 
Chapter 9 concludes by discussing the flood of internet pornography that is damaging human relationships and robbing children of their innocence while making millions of dollars for ruthless exploiters of the media. We ask how we can protect ourselves and our children from this onslaught, in which they can be groomed online and drawn into gruesome encounters in VR. Chapter 10 takes us from the current uses of AI to the much more speculative quest for artificial general intelligence (AGI) and the desire and attempt to upgrade humans. This is the transhumanist agenda famously proposed in the bestselling book Homo Deus: A Brief History of Tomorrow by the Israeli historian Yuval Noah Harari, in which he maps out an ambitious transhumanist vision for the twenty-first century. The agenda is, first, to solve the “problem” of physical death by technical means and, second, to enhance human happiness by using bioengineering and cyborg engineering to merge humans with AI tech and turn them into superhuman, superintelligent “gods.” Far-fetched as these goals may seem – and we give reasons for thinking so – they are shared by many others whose views we engage. We discuss the extreme but common reductionist view that human beings are nothing but a bunch of algorithms. We report the skepticism that neuroscientists express about the idea of prolonging life or finding a “cure” for death, considering the extreme and expensive lengths to which some people are going to preserve their bodies by cryonics, in the hope that science will eventually find a way to resuscitate their brains and identities.
Regarding the second of Harari’s agenda items, we look at the attempt to move from the organic to the inorganic in the quest to find a more durable substrate for future superintelligent beings (transhumans, or posthumans, but certainly no longer simply humans). This takes us to a consideration of C. S. Lewis’s anticipation of the dangers surrounding such developments in his brilliant dystopian science fiction novel That Hideous Strength, written in 1945. From there we turn to the views of people who think that humans have had their day and that, far from attempting to upgrade, they should instead cease to exist once and for all and be replaced by inorganic intelligences. Chapter 10 concludes by considering one implication of the above – longtermism, the almost incredible suggestion that we should essentially abandon all attempts to alleviate poverty and concentrate all our wealth on preventing existential threats to the intelligent beings that, in the view of some, may exist in the distant future – a total violation of fundamental moral principles concerning the value of human life. We look at historical precedents for such views, returning to the works of C. S. Lewis. Chapter 11 takes us into another dark realm of possible developments in AGI, into which vast sums of money are now being invested by companies such as Mark Zuckerberg’s Meta. We take a closer look at what is going on, leading us to ask about the worldview driving much of this – the atheistic worldviews of naturalism and materialism. One of the characteristics of materialism is that it reduces minds to brains and regards brains as computers. We adduce evidence from leading scientists to challenge this view, arguing that materialism fails because matter is not the ultimate reality – it is not even the prime reality. We also maintain that simulated intelligence is far from real intelligence by outlining John Searle’s famous Chinese room thought experiment. 
We also cite the work of the Nobel Prize–winning mathematician Roger Penrose, who argues that the brain is not a computer, as well as Iain McGilchrist’s work in neuroscience, which supports the same conclusion. We eventually return to Harari and his idea that the transhumanist goal can be achieved by what he sees as taking evolution into our own hands and producing future intelligences by intelligent design. That takes us back to current attempts to fuse human brains with AI technology, and we round off the chapter by considering some of the proposals for future AGI scenarios, including those of world domination – many of them anticipated in cult films such as The Matrix and The Terminator. We look at the physicist Max Tegmark’s twelve scenarios in his book Life 3.0, concentrating on his Prometheus scenario of a world-dominating totalitarian regime driven by AI technology. One memorable detail of this scenario is that the world ruler forces all citizens to wear a bracelet with many functions: it acts as a surveillance gadget, transmitting all kinds of information to a central authority so that Prometheus becomes aware of any deviation. The diabolical thing about the bracelet is that it incorporates a mechanism that can instantly execute the wearer by lethal injection if he or she does not comply with Prometheus’s ideology. Two of Tegmark’s AGI scenarios even have the word God in their titles, and Tegmark observes that many people like his “Protector God” scenario because of its similarity to what is advanced by the world’s major monotheistic religions.1 That is not surprising, since members of the Abrahamic religions already believe in a superintelligent being – the Creator God. It therefore makes sense to turn to the theistic worldview of the Bible to see what it has to contribute to our topic. The obvious place to begin is the book of Genesis.
Chapter 12 begins our consideration of the teaching of Genesis on the nature of human beings. We do this fully aware of the fact that Harari and many others think that science has consigned the biblical worldview to the scrap heap of history. In common with many scientists, however, I reject that view on rational grounds and begin by tracing how atheism has come to dominate the Western academy through the Renaissance and Enlightenment to the modern era. We then argue that science sits comfortably with biblical theism but actually conflicts with atheism, a circumstance that legitimizes revisiting a source of inspiration that we have been drawing on for millennia. Referring to the Asilomar principles for the governance of AI, we introduce a Christian manifesto with the same aim but one based on the fundamental value-giving teaching that human beings are of infinite dignity and worth because they are made in the image of God their Creator. We next explore Genesis in some depth to find out what it has to teach about human beings, starting with the fact and nature of creation in Genesis 1 and moving on to the fascinating account in Genesis 2 of what makes human life the wonderfully varied thing it is – our constitution, aesthetic sense, curiosity, work ethic, and capacity for relationships with others. Chapter 13 focuses on that most important part of what it means to be human – our moral sense – by taking a fresh look at the garden of Eden narrative. We consider the basics of what morality is and how it was defined in terms of humans’ intelligent verbal relationship with God. Next we explore how the first humans were tempted by an alien being (!) to eat the fruit of the Tree of Knowledge of Good and Evil (not a tree of knowledge, by the way – God wants us to have knowledge). They chose to rebel against God, bringing disaster on themselves and the world, from which we have suffered ever since. 
All of this turns out to be highly relevant to the issues arising from the desire to modify human beings by AI and other technology. For instance, the human rebellion against their Creator, God, has led people to think about the danger of AI rebelling against us, its human creators. This raises the “control problem” that attracts so much attention today, for obvious reasons. That leads to thinking once more about what moral code to build into AI, and to the deeper question of whether absolute moral standards exist. We bring the chapter to an end by considering the other tree in the garden of Eden – the Tree of Life, which is relevant to the transhumanist dream of physical immortality. We note that Harari has apparently become skeptical of that dream, seeing it as merely an expression of what he conceives to be the flawed philosophy of liberal humanism, in particular its belief in human free will, which he rejects. He also rejects humanism’s belief that we are individuals. I explain that he is wrong on both counts, not only because I am a Christian, but because his views are illogical. We finally consider the danger we are in if, in an age of AI, we lose our hold on the freedom of the will and on the fact that we are individuals, because losing them weakens much of our defense against a gradual erosion of our identity and autonomy. In the end we would lose all meaning in the incessant data flow that engulfs us. Chapter 14 introduces us to the true Homo Deus, one completely different from Harari’s conceptualization. We start with the fact that Mo Gawdat, formerly of Google and author of Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, believes that a superintelligent alien has already arrived on Earth in the form of an incredible being, a child, but one not biological in nature. It is, of course, an AI. 
We consider the resonance between this claim and a much older story about the advent of another superintelligent being, but in this case a real human child with unique powers, born of a woman yet with a divine origin – Jesus Christ, the Man who is God.