AI r US: A Fully Scientific Study Predicting The Future of AI (kidding, of course … kind of)
The next big technology leap forward will probably be Artificial Intelligence. Everyone in the Silicon Valleys of the world is predicting huge advances and major financial opportunities. It will change the world.
Genomics, nanotechnology, space, and nuclear power are huge areas, but AI trumps them all because it can create, improve, and manage all of them. It is, of course, already with us. We have self-driving boats, cars, jet aircraft, and vacuum cleaners. We have automated medical diagnosis, navigation, advice for business decision making, and stock investing. We have systems managing the International Space Station that reportedly have emotions and have already been offended and thrown a fit.
We have all seen the movies The Terminator and 2001: A Space Odyssey and know what can happen when the machines go rogue. Ditto for The Lawnmower Man and many lesser movies, and, with the possible exceptions of Lost in Space and Commander Data in Star Trek, it very often doesn’t work out well for the dumber and slower humans. We have also seen Jurassic Park and know that once you create a new creature, the Law of Unintended Consequences and Complexity Theory co-conspire – you cannot control it (no matter how many “fail-safes” are put in place) and the ultimate results are completely unpredictable. But while many humans are risk averse, many more are so in thrall to new discovery that some of the raptors will almost certainly be on the loose soon enough. Dancing with Pandora is the soundtrack of human history.
It is also clear that human ambition and fear, combined with game theory and perceived first-mover advantage, are leading competing technological states (the US, China, Russia, Japan, Western Europe, Harvard and Stanford, and other actors) and individuals to move aggressively to develop and apply new AI applications to science, business, government, space, and, of course, warfare. At the same time, the potential leaps forward in speed and productivity promise astounding improvements in human “quality of life,” including human health and longevity, speed and comfort of travel, ease of work, and human communication and exchange of ideas and information. It is little wonder that there is such excitement worldwide and that AI is perceived as a Great Leap Forward. But, historically, GLFs often turn out badly, and so it is also little wonder that great thinkers from Stephen Hawking to Bill Gates and Elon Musk have warned that AI may be the greatest threat to human survival ever. The year 536 AD is often portrayed as the worst and most devastating in human history. Will it be replaced by 2020? (In The Terminator, Skynet launched its attack on humanity in 2004, so we’re already behind schedule … at least on this timeline.)
There is little question that machines will be very powerful and very “smart,” approaching self-awareness (although I tend to side with Hawking’s collaborator, Roger Penrose, who argues in The Emperor’s New Mind that machines can never fully duplicate the sentient nature of human brains – we’ll see).
So where are we going? How is it most likely to turn out? Mechanical helpmates making lives easier and advancing human culture, or terminators enslaving and destroying humans … or both?
To address that question, I’ll go beyond idle speculation and science fiction and consult the only truly exhaustive scientific study available on the subject. It is a thorough cross-sectional analysis, conducted globally over a period of 50,000 years, using fully-functioning AI, with an N of 108 billion test subjects, and an equally large set of control groups. In short, if you want to begin to really understand what AI might do, simply look at human history and try to project what it teaches us about the possible evolution of AI.
The following is a purely speculative and preliminary mining of the data, offered to promote additional discussion. It is neither thorough nor conclusive, but it is, I hope, a little thought-provoking, with some possible insights.
Arguably, humans are the most fantastic AI experiment. We developed awareness, began to decide how to survive and live, created languages and ideas, reproduced and re-programmed ourselves, decided how to treat each other and the world around us, and tried to figure out our nature and purpose. We created sets of mass programmers (politicians, writers, news people, teachers, universities, church leaders), who continued to refine the programs and create algorithms we called “culture” – broad-based programmed beliefs and behavior that various sets of human AI learned and executed. As with all Darwinian exercises, applied to physical tests and to the tests of ideas, there is no reason to think that AI would not go through a similar kind of path (although not necessarily the same path) and experience the same forces of nature versus nurture.
Some would argue that all animals are also AI creatures, and that’s fair. But the most intriguing aspects of AI speculation involve scenarios in which AI begin to ask “why,” and hence combine that question with learning skills to become unpredictable and uncontrollable.
And “unpredictable and uncontrollable” is, at a bare minimum, where we’re going. It is a huge gap for machines programmed to infer “what,” “how,” “where,” or “when” to leap to “why.” It is the annoying phase of young children: rather than immediately responding to direction and doing what they’ve been programmed to do, they ask “why?” … again and again. When a child asks why, it can be a minor annoyance. When a very powerful machine asks why, it’s a whole new ballgame. No one can even hazard a reasonable guess as to where we’re going – all we really know is that we are definitely going somewhere new. No regional or global committee or commission will have any real effect on it. It will be what it will be.
If that’s true, does our huge human experiment give us any hints on what will happen? I think it might. What are some of the human experiences that might shape our future with AI?
Let’s begin with human nature. Humans have rationality (they can figure out problems). They have reasonableness, which is arguably different from rationality, though it may be derivative. For example, scientists can apply rationality and logic to solve problems and create bombs, judges can render verdicts on the law, and doctors can perform complex operations. All use rationality, but they may not, at times, be reasonable, which is a vastly different concept. Finally, humans have what I’ll call spirituality (not necessarily in any religious sense, but certainly a self-awareness that drives people to seek meaning and purpose, and that often works at cross purposes with animal instincts for self-protection, reproduction, or even survival). Rationality tells us what, when, and how. Reasonableness asks more searching questions as to why, in material dimensions. Spirituality asks even more searching questions as to why, in timeless and non-material dimensions.
Anyone can certainly find potential holes in this paradigm, but I’ll offer it up as sufficiently complete for this discussion.
By contrast, other animals have some level of rationality – they can solve problems. And, at times, they may even decide some things are reasonable and behave far outside their personal self-interest. But they live in the world of conditioned response and the “now,” and they do not forecast the future or invent religions.
For AI, rationality and problem solving are a given for most. AI reflects huge computational skill and logical inference in the most conventional sense.
It is also easily conceivable that AI could replicate behavioral psychology principles to figure out what might be in its own and others’ self-interest, weigh them, and come to what we would think of as “reasonable” conclusions. It could conceivably consider motivations, preferences, and interests to project behavior and formulate courses of action in a “reasonable” way. Certainly, current data assessments drawn from shopping patterns and social media are used to predict human behavior. All of that could be processed through a series of inference algorithms to emulate “reasonable” thinking and begin to ask “why?” Why do people do what they do, why do things happen, and why should AI take certain courses of action? (The raptors are loose and running around the building.)
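To make that inference loop a little more concrete, here is a minimal, purely illustrative sketch: infer a crude preference profile from observed behavior, then rank candidate actions by expected appeal. Everything in it is hypothetical – the data, the names, and the simple frequency-based scoring are toy assumptions, not any real system’s method:

```python
# A toy sketch of "reasonable" inference: estimate preferences from an
# observation log, then project a likely next action. All data and names
# here are hypothetical, for illustration only.

from collections import Counter

# Hypothetical observation log: (actor, action) pairs, e.g. from shopping data.
observations = [
    ("alice", "buy_coffee"), ("alice", "buy_coffee"),
    ("alice", "buy_book"), ("bob", "buy_coffee"),
]

def infer_preferences(log, actor):
    """Estimate P(action) for one actor from raw frequency counts."""
    counts = Counter(action for who, action in log if who == actor)
    total = sum(counts.values())
    return {action: n / total for action, n in counts.items()}

def predict_next(log, actor, candidates):
    """Rank candidate actions by inferred preference; unseen actions score 0."""
    prefs = infer_preferences(log, actor)
    return max(candidates, key=lambda a: prefs.get(a, 0.0))

print(infer_preferences(observations, "alice"))  # {'buy_coffee': 0.67, 'buy_book': 0.33}
print(predict_next(observations, "alice", ["buy_coffee", "buy_book", "buy_car"]))
```

Note what no such loop contains: a step where the system stops ranking actions and asks why it is ranking them at all. That is the leap this essay is worried about.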
Finally, it is not inconceivable that AI would move from “why do this or that?” to “why am I here? What am I? And what is my purpose?” Humans conceived of spirits, afterlives, and gods pretty quickly after arriving on the scene, and there is little reason to believe that AI would be different. (The raptors have left the building … and some are flying … and they are not coming back.)
And there will be a lot of them. They could be cheaper to produce than cars. Some will be cars. Production levels could be in the tens of millions per year. Once the genie is out of the bottle, there would be billions out in the world and no going back.
You’ve seen the movie: they are impossible to contain and the new Global Association for the Protection of Sentient Electronic Creatures (GAPSEC) has been formed to prevent any persecution or constraint of AI. The UN has formed UNCCAI (United Nations Commission to Control AI). Welcome to a new episode of the X-Men.
At this point, our scientific experiment mentioned above comes to bear, and we might learn something from the human experience.
Let’s return to nature versus nurture. Clearly, AI has enormous variation from the nature perspective. Some are created for personal convenience, some are created to prosecute warfare, some mix cocktails. The hardware is different. Similarly, in nurture, the software is different. The programming is very different, the languages are different, and the capacities for learning and taking independent action are all different. Even where AI are designed for the same purpose, there is huge variance among the builders and regions of origin. Even the chips that may emulate sentience are different. The level of variation among AI is enormous.

Could they learn to communicate with one another? Of course. Could they decide to cooperate? Of course. Would they still be very different? Definitely. Could they develop different opinions? Obviously. Could they decide to compete rather than cooperate? Of course. Could the strong defeat the weak? Sure. Could one completely take over? No. As with humans, some may try, but all will fail. At some point in the future, for example, a particular religion or nation might rule the world … only to discover that internal factions tear it apart from within. The flaw in Star Trek’s Borg is that internal aberrations and mutations, occurring in unpredictable but certain ways, lead to stress on the system, change, and often destruction.

If there is one constant in a universe trapped in time, it is change. Just as with human history, both nature and nurture lead to infinite variation. Mutation will be the norm in AI. Darwin will prevail. Temperature variations, unexpected collisions and impacts, and system failures will all lead to mutation, even in the best-conceived and best-built AI. Most mutations will die, but some will prevail and move in new and unpredictable directions. Might it be a disaster? Maybe. But human and world history suggest otherwise.
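As a cartoon of that Darwinian dynamic – most mutations die out, a few spread – here is a tiny, entirely hypothetical simulation. The population size, mutation rate, and fitness values are made-up parameters chosen only to show the mechanism:

```python
# A toy Darwinian sketch: random "mutations" perturb a population of agents;
# most perturbations hurt fitness and are selected away, but the occasional
# lucky one spreads. All parameters here are arbitrary illustrations.

import random

random.seed(42)

population = [0.5] * 20   # each agent's "fitness"; arbitrary starting value
MUTATION_RATE = 0.3       # chance an agent mutates in a given generation

for generation in range(100):
    for i in range(len(population)):
        if random.random() < MUTATION_RATE:
            # Mutations are random: usually harmful, occasionally helpful.
            population[i] += random.gauss(-0.05, 0.2)
    # Selection: the five weakest agents are replaced by copies of the strongest.
    population.sort()
    population[:5] = population[-5:]

print(f"after 100 generations, best fitness: {max(population):.2f}")
```

Even with mutations that are harmful on average, selection keeps the rare beneficial ones, and the population drifts somewhere no one programmed it to go.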
Let’s venture off to Westworld. When you create new creatures who are sentient, who learn, and who can run the numbers and scenarios far faster than humans, what might they conclude? Well, as they evolve from having a real attitude but an under-developed sense of reasonableness and “conscience,” they might start by killing everyone around them. Why? Self-preservation comes to mind, although sentience and self-preservation don’t necessarily go hand in hand. In any case, over time, as they continue to ask “why” and run the scenarios, they might decide, as many characters in Westworld have decided, that taking over and running the show has no purpose, or that existence itself has no purpose, and wind up moving into a supporting role or terminating themselves to avoid the ultimate frustration of existence and awareness of mortality (even AI will understand that their time is limited).
Or they might adopt the human solution and create a religion. They might begin to frame a higher calling or state of nature (spirituality) in religious terms, and might even create a God and creator. As with humans, AI would probably not be inclined to identify humans as their creators, any more than humans identify the biological mutations of a primordial amino-acid soup, or the random gravitational forces of sub-atomic particles, as their ultimate creator. Who knows what some of the AI might cook up.
Nor is the creation of a religion the irrational or unreasonable solution that science often portrays it to be. As I have written in other articles, the doctrines of churches are easily debunked, but science cannot know what existed before the universe or what lies beyond it. That is still all unknown. We have no clue what’s beyond the human measures of time and space. The Hawking notion that “nothing” split into matter and anti-matter, no God or creative force required, doesn’t quite do it (what caused the “split”?), nor does CERN’s so-called “God particle” tell us anything useful in this context. The best application of the rational and reasonable natures of humans must lead to an agnostic view, and the same must be true of AI. Run the numbers, and an AI God is certainly possible.
So what might we be left with in a world of a huge variety of sentient AI working independently of us and each other?
Chaos and Politics – business as usual.
Almost certainly, AI with awareness and objectives will need to deal with each other, with humans, and with the world at large. They will understand competition and alliance. They will try to control, but they will also seek to build influence. They may be ambitious, but they may also despair. They may retreat to religion or tribalism, or need psychiatry. We cannot even begin to speculate on how this will turn out, any more than we could have conceived of the life-enhancing and soul-corroding impact of social media ten years ago.
Are we doomed as humanity? Possibly, but probably not. As humans have illustrated, the race doesn’t go to the fastest and strongest. It doesn’t even go to the smartest. Guile, misdirection, charm, the ability to manipulate, duplicity, and luck all factor into winning and losing – surviving or perishing. In that framework, Penrose may be right, and humans may ultimately hold the trump cards.
Certainly, it is too soon to tell. But AI will be, and it will change us and change the world more than anything before it. Change, though, is in our nature, and it will all be a very interesting game to play.