The science of artificial intelligence (AI) is quickly catching up to the science fiction. Some industry leaders are calling for a timeout. Elon Musk, Steve Wozniak, and others recommend a six-month pause in AI development because, among other reasons, they’re worried that it might accidentally annihilate all of humanity. Paul Christiano, formerly on OpenAI’s safety team, blithely estimates “maybe a 10% to 20% chance of AI takeover [with] many, most humans dead.” Eliezer Yudkowsky, research leader at the Machine Intelligence Research Institute, is less optimistic: “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”
I doubt that Mr. Yudkowsky counts me among the “steeped,” but I don’t buy the we’re-all-going-to-die idea. For one, my older son reminds me that no one has ever won any prizes for correctly predicting the end of all humanity. Secondly, I have the same question about superintelligent, sentient robots of the future that I have about alien invasions: Why should they be so interested in us humans, specifically? I would think they would be more intrigued, or even threatened, by life forms with longer track records of success, like cockroaches and crocodiles. Those guys have been kicking it for over 200 million years, at least thirty times longer than we have.
By definition, our puny brains are too weak to know what our future hyperintelligent overlords will do with us, but we can make some guesses by extrapolating from the way we treat other animals. The good news is that computers don't eat meat, so we're not likely to be hunted, other than for sport. Bad news for blue-pill fans: our bodies wouldn't really work as batteries. It's more likely that we become like zoo animals or trained pets (a la the Slubs in Shiner), slowly assimilated into a world controlled by algorithmic prompts. Of course, we would never really succumb to such Pavlovian—hold on, got an alert on my phone, be right back.
How much longer before we get knocked off the top of the IQ ladder? Some have said as soon as 2028, but a 2022 survey of AI experts puts the estimate closer to 2059. Machines have been smarter than us in some aspects for many decades--think calculators, for example. The situation will probably continue to evolve without an identifiable “singularity” development. But I don't think this will be a frog-in-the-boiling-water-pot thing; it's not going to happen slowly. Consider that 85% of Americans owned smartphones in 2022, devices that scarcely existed fifteen years earlier. AI is going to hit hard and fast.
Ironically, at least so far, AI is really good at compassion, but surprisingly sucky at Wordle. It's also ushering in an even further erosion of truth. Maybe AI is smart, but it lies a lot, and does it well. Many of us are wringing our hands about its encroachment into art, and rightly so. AI is winning art prizes and writing books for publication. No one knows how much of this is going on today, because it’s up to humans to reveal it, and they might not want to. Hell, this sentence could be written by AI, for all you know (rutabaga). It is ominous to imagine the incoming tsunami of artificial art. In large part, we create art to be remembered. When our personal drops of creativity get flooded by waves of algorithmically generated creations, as seems certain to happen soon, will our memories drown?
That brings me to the theme of Best Played Hands: Identity. I don’t think our greatest weakness in the face of encroaching AI lies within the realms of military might or intellectual prowess. We are currently way ahead on the first one, and we have the capability to stay in front on the second. If there is any kind of war coming, we could easily win it from our current starting points. After all, we are the creators--the "god" in this scenario--and at least right now, we collectively wield total power and control. What we really lack as a species is a strong sense of identity. We need purpose; we need to know who we are. Why are we special? Why should we win? More to the point, do we want to win, and if so, why? And maybe it’s a little too late, but should we create such powerfully destructive forces as nuclear bombs and AI in the first place?
Since the Age of Enlightenment, human "progress" has largely shifted decision-making from reliance on faith to reliance on facts, ushering in a torrent of changes that feel like progress: constitutional governments, science, law, human equality, and other concepts. But after cruising for a few centuries, we’ve hit a snag: Facts do not set a direction for us, nor do they provide us purpose. With modern technology, we have access to so many facts (and “facts”) that each of us is now empowered, maybe even forced, to find our own purpose. Many of us fail to do so; we work more efficiently than ever, trying as hard as we can to go as fast as we can, but we don’t know where we’re going. The result is nihilism, which is an underlying contributor to suicide, depression, anxiety, drug use, and other afflictions.
It certainly seems that some of us want to retreat to the rule of kings and prophets. We are increasingly politicizing religion, and conversely, deifying certain political figures. There is some comfort in this idea: Let the king set our purpose, we will obey his commands, and we won’t feel lost anymore. But faith-based leadership systems are vulnerable to whomping doses of reality. Theocracies and monarchies fall too easily into human rights abuses, corruption, and inefficiencies that make them ill-equipped to deal with current challenges like climate change, pandemics, food and water shortages, being bullied by countries with fancier weaponry--and impending robot apocalypses created by facts-based thinkers.
I don’t have the answer, but I did my best to look for one in Best Played Hands. That’s why Ely starts out like a nothing: skin pale like parchment, body soft like clay, easily the most ordinary. His identity develops as he confronts various conflicting aspects of faith- and fact-based thinking: Marisa and New Norma, Carla and Sally, Will and Andrei, Judge Seele and Soul Judge. In the end, he chooses his own path, and he follows it based on his instinct. If we listen to our hearts, our guts tell us that value judgments involve more than just our brains. Morality is not so easily understood or reduced to an algorithm, yet it is at least as important to us as logic. Instinct is just a word, but it might be the best descriptor of a fusion of facts and faith that truly guides us.
I think we relied too much on faith in our past, and we rely too much on facts now. The onset of AI is creating an identity crisis for those of us who are primarily facts-based thinkers, since we’re not going to be the smartest kids on the block much longer. I don’t think a retreat to theocracy is going to fix everything, but I don't think more unfettered technological advances will ultimately be successful either. My instinct is that we need to look harder for philosophical constructs that give us a better sense of purpose in the modern world. The ultimate answers didn't come from the Great Pyramids, nor will we find them with the James Webb Space Telescope. We need to stop and think. We need to get a better grip on who we are, or we really are going to fade away.