For all that AI has grown drastically in recent years, fears that it will lead to destructive superintelligences may be unfounded.

A simplified representation of a machine learning method where each layer uses the output of the previous layer as an input. (Image Credit: Wikimedia Commons)

Artificial intelligence (AI) has grown at an astonishing rate in the last few years. We now live in a world where cars drive themselves, computers beat experts at challenging strategy games (Silver et al., 2017), and programs diagnose deadly types of cancer. But will all of these technological advances be used for good? Elon Musk does not think so: last year, he tweeted about his concern that such breakthroughs might lead to World War 3. He might be exaggerating the effects, but his worry raises a question we do need to examine: how will AI actually affect us? The evidence points to a thankfully less dramatic conclusion: superintelligent AI is highly improbable.

In AI, we want computers to handle complex tasks. Typically, a program reads input, performs precise logical steps, and produces output; the programmer does the reasoning in advance and encodes it in the algorithm. The problem arises when arriving at an output requires reasoning that the programmer cannot fully specify ahead of time.

For example, a good programmer can easily write a program to play Tic-Tac-Toe optimally, but might find it difficult to write one for chess because there are billions more possible moves. Nevertheless, chess programs still beat many players because both computers and humans can avoid significant computation with “heuristics,” rules of thumb that cut the task down to a manageable size. In the case of chess, you could create a simple scoring system based on the pieces left on the board and then pick the highest-scoring move within the scope of the next, say, four turns. What we really want is for computers to find their own heuristics. This is quite complicated, because it requires a program to think on its own, yet programs only follow instructions exactly as specified.
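To make the idea concrete, here is a minimal sketch in Python of the kind of material-counting heuristic described above, paired with a shallow lookahead. The `legal_moves` and `apply_move` helpers are hypothetical stand-ins for whatever chess library would generate and apply moves; only the scoring logic matters here.

```python
# A toy chess heuristic: score positions by material, then pick the move that
# leads to the best score a few plies ahead. A sketch of the idea, not an engine.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_score(board):
    """Sum piece values: our pieces (uppercase) minus the opponent's (lowercase)."""
    score = 0
    for piece in board:          # board is assumed to be a flat list of piece characters
        if piece == ".":         # empty square
            continue
        value = PIECE_VALUES[piece.lower()]
        score += value if piece.isupper() else -value
    return score

def search(board, depth, maximizing, legal_moves, apply_move):
    """Plain minimax over the heuristic score, `depth` plies deep."""
    moves = legal_moves(board)
    if depth == 0 or not moves:
        return material_score(board)
    scores = [search(apply_move(board, m), depth - 1, not maximizing,
                     legal_moves, apply_move)
              for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(board, legal_moves, apply_move, depth=4):
    """Choose the move whose resulting position scores best after `depth` plies."""
    return max(legal_moves(board),
               key=lambda m: search(apply_move(board, m), depth - 1, False,
                                    legal_moves, apply_move))
```

The heuristic is crude, but that is the point: a human decided that counting pieces is a reasonable proxy for winning, and the program just applies that decision mechanically.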

The field of machine learning (ML) deals with programs that teach themselves by finding mathematical patterns in datasets. Instead of following an explicit program, the computer trains a statistical model on empirical evidence. ML makes few assumptions about the data, so the same approach, with minor adjustments, can solve a whole family of similar tasks by extrapolating from the model. With improvements in hardware, researchers and developers have found a lot of success with machine learning. Google has been using an ML system to predict the efficiency of energy usage in its data centers (Gao, 2014). Video game animators are using it to create human-like animations (Petrov, 2017). However, we’ve only changed the problem from explicit logic to statistics. Moravec’s paradox states that it is hard to give computers our basic mechanical and sensory abilities and easy to give them higher-level cognitive abilities. Concretely, robots and computers can build virtual 3D models, do business accounting, and simulate physics because those tasks follow precise logic. However, we cannot easily program them to understand speech, audio, or video because we do not know how to encode sensory skills in precise logic.
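Before turning to how that gap is being bridged, it helps to see what “training a statistical model on empirical evidence” looks like in code. The sketch below fits a standard scikit-learn classifier to labeled flower measurements and then checks how well it generalizes to examples it has never seen; the particular model and dataset are arbitrary choices for illustration.

```python
# A minimal example of the machine-learning workflow described above: instead of
# hand-coding rules, we fit a statistical model to labeled examples and let it
# generalize to data it has never seen.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

iris = load_iris()                               # flower measurements and species labels
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)        # a simple statistical model
model.fit(X_train, y_train)                      # "training": find patterns in the data

print("accuracy on unseen flowers:", model.score(X_test, y_test))
```

Nobody writes explicit rules for telling the species apart; the model extracts those patterns from the data, which is exactly the shift from explicit logic to statistics.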

Yet neural networks, a machine learning architecture, have begun to crack exactly those sensory problems. Loosely inspired by the human brain, neural nets computationally imitate neurons firing and passing signals to one another. The results have been impressive: the AlexNet paper in 2012 showed a large improvement in identifying objects in pictures with neural networks (Krizhevsky et al., 2012). The model was submitted to the annual ImageNet challenge, in which it was the “undisputed winner of both classification and localization tasks” (Li). So-called deep neural networks stack many layers of these artificial neurons, which makes them far more expressive but also computationally expensive; they require lots of training time and, often, GPU-accelerated hardware to work effectively.
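To connect this back to the figure above, here is a bare-bones feedforward network written with NumPy, in which each layer feeds its output to the next. It is a sketch of the architecture only: the weights are random and untrained, and real frameworks such as PyTorch or TensorFlow supply the machinery for actually learning them from data.

```python
# A minimal feedforward neural network: each layer applies a weighted sum
# followed by a nonlinearity, and its output becomes the next layer's input.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """A simple nonlinearity applied after each layer's weighted sum."""
    return np.maximum(0.0, x)

class TinyNet:
    """A chain of weighted sums and nonlinearities, one per layer."""

    def __init__(self, layer_sizes):
        # One weight matrix and bias vector for each pair of adjacent layers.
        self.weights = [rng.normal(0.0, 0.1, size=(m, n))
                        for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(self, x):
        # Pass the input through each layer in turn.
        for W, b in zip(self.weights, self.biases):
            x = relu(x @ W + b)
        return x

net = TinyNet([4, 8, 8, 2])      # 4 inputs -> two hidden layers -> 2 outputs
print(net.forward(np.ones(4)))   # untrained, so the output is meaningless
```

Depth just means more of these layers stacked together, which is why large models need so much computation to train.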

Machine learning and neural networks seem to be misnomers, implying that computers actually understand what they’re doing. While the algorithms might be inspired by cognitive science, it’s a bit of a stretch to say they work like real brains. To offer an analogy, machine learning is like a physics simulator: a useful numerical approximation of something much more complex. Both work very well for simple tasks but fail in highly complex situations. Even philosophically, a computer cannot be said to have “knowledge” of anything. In John Searle’s Chinese Room thought experiment, a person inside a locked room is sent messages under the door written in Chinese, a language they cannot understand. They use a dictionary to compose responses that they send back. The point of the thought experiment is that the person can produce convincing replies without ever understanding Chinese. Similarly, a computer can fit statistically accurate models to data, but it does not understand what that information means.

If we want computers to apply information creatively, machine learning will not suffice. When a baseball player catches a fly ball, they do not mentally solve the differential equation that describes the ball’s path. Instead, they focus on a single observable variable, such as the ball’s location in their field of view, and track that (Hamlin, 2017). This is an example of turning a simple fact, that the ball accelerates downward, into a heuristic. How could we program a robot to do the same? With machine learning, such a program is unlikely to show signs of strong or general intelligence.
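As a sketch of what such a hand-coded shortcut might look like, the hypothetical snippet below implements a simplified version of this idea (often described as the gaze heuristic): the fielder nudges their position so that the ball’s angle of elevation in their view stops changing. The coordinate setup and the `gain` constant are invented purely for illustration.

```python
# A "track one observable" heuristic for catching a fly ball: rather than
# solving the ball's equations of motion, adjust position so the viewing
# angle stays constant.
import math

def elevation_angle(ball_x, ball_y, fielder_x):
    """Angle at which a fielder at fielder_x sees the ball (hit from x = 0)."""
    return math.atan2(ball_y, fielder_x - ball_x)

def fielder_step(ball_x, ball_y, fielder_x, prev_angle, gain=5.0):
    """Nudge the fielder so the viewing angle stops changing.

    A rising angle means the ball will carry past the fielder, so they back up;
    a falling angle means it will drop short, so they run in. No differential
    equations are solved; one observable drives the whole behavior.
    """
    angle = elevation_angle(ball_x, ball_y, fielder_x)
    fielder_x += gain * (angle - prev_angle)   # simple proportional correction
    return fielder_x, angle
```

Notice that the insight, tracking one angle instead of solving the physics, still comes from a person; nothing in the code, or in a statistical model trained to imitate it, discovers the shortcut on its own.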

The upshot is that there is no guarantee we will get the destructive superintelligence Musk prophesied. The term “AI winter” refers to the disillusionment with AI’s progress that began in the 1970s and arguably ended in 2012, when AlexNet showed neural nets’ efficacy in image recognition. It’s entirely possible that we will see another AI winter once we’ve exhausted what neural networks can do, and that is fine. A general learning algorithm that performs every task at the human level would be fantastic, but we don’t need one. We just need to improve our algorithms so that a computer can learn one task and then use what it learned to solve related tasks quickly. Once we reach that point, AI will have done its job.

In all likelihood, AI will continue to do what it has done thus far. In the near future, AI systems will begin assisting workers across many different jobs. We need to focus on how to implement and institute these automation technologies, and the biggest challenge is incorporating them safely into the real world. To that end, we need to work on answering questions like: How do we communicate objectives properly? Can programs act safely and fail gracefully?

The use of AI also extends past simply automating jobs. Machine learning could be used to screen job applicants by predicting a candidate’s performance; if the training data reflects racial or gender bias, the model could end up discriminating against real people. Or consider AIs that synthesize clips of people speaking or create fake but realistic images. If there is no way to tell whether media is synthesized, it could destabilize the way we consume information.

Of course, there will also be use cases that are smaller in scale. Dictation and handwriting recognition could help us interface with computers more easily. Neural networks are also being used to create experimental forms of art. Actor Kristen Stewart co-authored a paper on using them to transfer aesthetic styles from paintings to a short film she directed. Clearly, AI has both positive and negative applications.

It is unlikely that AI will actually destroy the world anytime soon. AIs are still just programs, and as long as they are monitored, they can be edited, deleted, and shut down. The development of AI has been slow, and we’ve only recently seen it pick up pace. As people continue to study its effects and ways to make it safer, the chance for AI to disrupt society in catastrophic ways diminishes. When we focus on superintelligences, we stop thinking about how AI will affect us in the near future, and we’re not guaranteed anything fundamentally different from what we’ve already seen.