The fourth industrial revolution is changing how we live, work, and communicate. It’s reshaping government, education, healthcare, and almost every aspect of life. In the future, it may also change what we value and how we value it. It can change our relationships, our opportunities, and our identities as it changes the physical and virtual worlds we inhabit and even, in some cases, our bodies.
From Siri to self-driving cars, artificial intelligence is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons. The fourth industrial revolution has the power to change the world for the better, but we have to be aware that these technologies can have negative consequences if we don’t think about how they can change us. We build what we value. This means we need to keep our values in mind as we build with these new technologies.
For example, if we value money over family time, we can build technologies that help us make money at the expense of family time. In turn, these technologies can create incentives that make it harder to change that underlying value. Do you remember the time when Sophia the robot first switched on, and the world couldn’t get enough? It had a cheery personality, it joked with late-night hosts, and it had facial expressions that echoed our own.
A robot plucked straight out of science fiction was here, the closest thing to true artificial intelligence that we had ever seen. There’s no doubt that Sophia is an impressive piece of engineering. Many of Futurism’s own writings refer to the robot as “her.” Piers Morgan even decided to try his luck for a date! But as Sophia became more popular and people took a closer look, cracks emerged. It became harder to believe that Sophia was the all-encompassing artificial intelligence that we all wanted it to be. Over time, articles that might once have oohed and ahhed about Sophia’s conversational skills became more focused on the fact that they were partially scripted in advance. Defining artificial intelligence is tricky.
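To see why “partially scripted” matters, here is a minimal sketch of a scripted chatbot. It is purely illustrative, not Sophia’s actual software: a hypothetical table of keyword-triggered canned responses with a deflecting fallback. A system like this can look conversational on stage yet break down the moment the conversation goes off-script.

```python
# Hypothetical scripted chatbot: canned responses keyed on keywords,
# plus a generic fallback for anything outside the script.

SCRIPT = {
    "hello": "Hello! I'm delighted to meet you.",
    "feel": "I experience the world through sensors and code.",
    "date": "I'm flattered, but I'm fully booked with interviews.",
}

FALLBACK = "That's interesting. Tell me more!"

def reply(utterance: str) -> str:
    """Return the first scripted response whose keyword appears in
    the utterance, or a vague fallback if nothing matches."""
    lowered = utterance.lower()
    for keyword, response in SCRIPT.items():
        if keyword in lowered:
            return response
    return FALLBACK

print(reply("Hello there!"))              # scripted hit
print(reply("Would you go on a date?"))   # scripted hit
print(reply("Explain quantum gravity."))  # off-script -> fallback
```

The fallback line is the tell: no matter how polished the scripted hits sound, every question the authors didn’t anticipate gets the same evasive filler, which is very different from general intelligence.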
The field of AI, constantly reshaped by new developments and changing goalposts, is sometimes best described by explaining what it is not. At its core, artificial intelligence is about building machines that can think and act intelligently, and it includes tools ranging from Google’s search algorithms to the machines that make self-driving cars possible. Today, artificial intelligence (AI), which was once thought to live purely in the realm of the human imagination, is a very real and looming prospect. In a case of life imitating art, we’re faced with the question of whether artificial intelligence is dangerous or not.
How can artificial intelligence be dangerous?
While we haven’t achieved super-intelligent machines yet, the legal, political, societal, financial, and regulatory issues are so complex and wide-reaching that it’s necessary to examine them now, so we are prepared to operate safely among them when the time comes. Beyond preparing for a future with super-intelligent machines, artificial intelligence can already pose dangers in its current form. Let’s take a look at some key AI-related risks.

Once such an AI is unleashed, there may be nothing that can stop it. No amount of human wrangling could rein in a fully activated, far-reaching network composed of millions of computers acting with a level of consciousness akin to our own. An emotional, reactive machine aware of its own existence could lash out if it were threatened. And if it were truly autonomous, it could improve upon its own design, engineer stealthy weapons, infiltrate supposedly impenetrable systems, and act in accordance with its own survival.

Today, humans are the only species on the planet capable of consciously bending the will of nature and driving the demise of plants, animals, environments, and even other people. But what happens when that changes? When a super-intelligent machine’s existence is threatened, how will it actually react? In one interview, Elon Musk likened AI to “summoning a demon.” Stephen Hawking warned that it might “spell the end of the human race.” While droves of hardware and software engineers are leading the charge toward a future laced with AI systems bearing names like Alexa, Cortana, and Siri, more advanced systems are being developed away from the public’s prying eyes.
Leaders like Musk, a billionaire entrepreneur, renowned futurist, and self-taught rocket engineer, aren’t leaving things to chance. In January 2015, Musk hedged his bets against AI, donating $10 million through the Future of Life Institute to 37 separate research projects worldwide aimed at creating better warnings and safeguards before a harmful AI could be unleashed on society. Why the sudden interest in AI safety? Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI.
Elon Musk wrote: “The pace of progress in artificial intelligence is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most.” Bill Gates recently stated, “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” So why aren’t more people concerned? Surely, if left unchecked, AI could pose an existential threat to the entire human race. Kurzweil also points out that, on an evolutionary scale, our future will likely involve the brain being systematically synced to the cloud through nano-bots the size of red blood cells. He calls this the “neocortical cloud.” If that were the case, what would happen if AI ran rampant and imposed its will on humans who had been injected with these nano-bots?
Would the utilitarian nature of what we’re after ultimately spell out our untimely doom? Why is the subject suddenly in the headlines? Because AI has the potential to become more intelligent than any human. We have no sure-fire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest.
If we’re no longer the smartest, are we assured to remain in control?
“The mobile phone industry is the backbone of the global brain that is being put together” – Rick Wiles