Cover Story

On Black Friday 2017, Amazon’s best-selling item was its Echo Dot, the voice-activated "smart speaker" that, like similar devices, acts as a mini personal assistant for the digital age—always at the ready to read you a recipe, order pizza, call your mom, adjust your thermostat and much more. Introduced less than four years ago, these devices are now owned by an estimated 16 percent of Americans.

They’re just the latest example of how technology powered by artificial intelligence (AI) seems, suddenly, to be everywhere. AI enables the voice recognition software in Amazon’s Alexa and Apple’s Siri, tags our friends and family in Facebook photos and determines which ads we see when we search online. Now, many experts believe that AI is on the cusp of joining the human world in ways that may have more profound—even life-and-death—consequences, such as in self-driving cars or in systems that could evaluate medical records and suggest diagnoses.

But amid the hope and the hype (and the worry—will AI take our jobs? Will it take over the world?), it’s easy to forget that in most ways, artificial intelligence remains no match for the ultimate learning machine: the human brain.

AI can do many things extremely well, including tasks that are difficult or impossible for humans, such as recognizing millions of individual faces or instantaneously translating a paragraph into hundreds of languages. But these achievements have generally come in limited, specific circumstances. There are many things that humans do exceptionally well that computers can’t even begin to match, such as creative thinking, learning a new concept from just one example ("one-shot learning") and understanding the nuances of spoken language. Alexa, for example, will respond to hundreds of voice commands, but can’t hold a real conversation.

Now, some in the machine learning field are looking to psychological research on human learning and cognition to help take AI to that next level. They posit that by understanding how humans learn and think—and expressing those insights mathematically—researchers can build machines that are able to think and learn more like people do.

"Humans are the most intelligent system we know," says Noah Goodman, PhD, a professor of psychology and computer science at Stanford University who studies human reasoning and language. "So I study human cognition, and then I put on an engineering hat and ask, ‘How can I build one of those?’"

An intertwined history

There has always been a deep connection between psychology and AI, says Linda Smith, PhD, a developmental psychologist and AI researcher at Indiana University Bloomington. "Back when AI pioneer Alan Turing and others first conceived of this idea in the 1950s, they wanted to build machines that could think like people. And even today human behavior is always the standard to match, or beat, or deal with," she says.

Indeed, the systems that have driven nearly all the recent progress in AI—known as deep neural networks—are inspired by the way that neurons connect in the brain and are related to the "connectionist" way of thinking about human intelligence. This "bottom-up" framework also has a long history in psychology. Connectionist theories essentially say that learning—human and artificial—is rooted in interconnected networks of simple units, either real neurons or artificial ones, that detect patterns in large amounts of data.

In AI, the basic idea works like this: Instead of physical neurons, deep neural networks have neuron-like computational units, stacked together in dozens of connected layers. If you want to create a neural network that can tell the difference between apples and bananas—a visual learning system—then you feed it thousands of pictures of apples and bananas. Each image excites the "neurons" in the input layer. Those "neurons" pass on some information to the next layer, then the next layer and so on. As the training progresses, different layers start to identify patterns at increasing levels of abstraction, like color, texture or shape. When the information reaches the final output layer, the system spits out a guess: apple or banana. If the system’s guess is wrong, then it can adjust the connections among the neurons accordingly. By processing thousands and thousands of training images, the system eventually becomes extremely good at the task at hand—figuring out the patterns that make an apple an apple and a banana a banana.
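
That training loop can be sketched in a few dozen lines of code. The example below is only a toy illustration, not any production system: instead of real photos it uses two made-up features per fruit (roughly "redness" and "elongation"), but it follows the same cycle described above: pass the input through the layers, compare the guess with the right answer, and nudge the connections.

```python
# A toy two-layer neural network that learns to tell "apples" from "bananas".
# Inputs are invented feature vectors (redness, elongation) rather than real
# photos, so this is a sketch of the training loop, not an image classifier.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: apples are red and round, bananas pale and elongated.
apples = rng.normal(loc=[0.8, 0.2], scale=0.1, size=(200, 2))
bananas = rng.normal(loc=[0.2, 0.9], scale=0.1, size=(200, 2))
X = np.vstack([apples, bananas])
y = np.array([0] * 200 + [1] * 200).reshape(-1, 1)   # 0 = apple, 1 = banana

# One hidden layer of neuron-like units feeding a single output unit.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # Forward pass: each layer transforms the previous layer's activity.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)               # the network's guess: P(banana)

    # Backward pass: when the guess is wrong, adjust the connections.
    grad_out = (p - y) / len(X)            # gradient of the cross-entropy loss
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h);   b1 -= 0.5 * grad_h.sum(axis=0)

# A fruit the network has never seen: reddish and round, so probably an apple.
new_fruit = np.array([[0.75, 0.25]])
guess = sigmoid(np.tanh(new_fruit @ W1 + b1) @ W2 + b2)
print("banana" if guess[0, 0] > 0.5 else "apple")
```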

The concept of neural networks has existed since the 1940s. But today, the enormous increase in computing power and the amount and type of data available to analyze have made deep neural networks increasingly powerful, useful and—with technology giants such as Google and Facebook leading the way—ubiquitous. AlphaGo, a system built on deep neural networks by the Google-affiliated company DeepMind, analyzed millions of games of the complex board game Go on its way to beating the human world champion in 2016, a feat many experts had thought was still at least a decade away. Other deep neural networks analyze the sounds that make up language to enable Siri’s and Alexa’s voice recognition ability. Such networks also analyze connections among words in different languages to enable real-time translation.

But neural networks also have limitations. Most obviously, they require a lot of input data, generally produced or chosen by humans. A neural network can theoretically learn anything, but it must have the right training data to do so. In the case of apples and bananas, it’s easy enough to find thousands of photos for training. But what if you wanted to develop a machine that could learn about an area without an enormous data set available to study? "In a way, what [neural networks] are doing is crowdsourcing human beings rather than simulating human beings," says Alison Gopnik, PhD, a developmental psychologist at the University of California, Berkeley, who works with AI researchers.

Then there’s the "black box" issue. Because neural networks are not programmed with explicit rules, and instead develop their own rules as they extract patterns from data, no one—not even the people who program them—can know exactly how they arrive at their conclusions. Sometimes that’s OK, but sometimes it’s a big problem. If AI is to someday drive cars or diagnose diseases, it may be unsettling, or even a deal-breaker, to have to rely on an opaque system that sometimes makes mistakes and cannot explain why those mistakes happened.

A 'top-down' approach

Now, psychologists and AI researchers are looking to insights from cognitive and developmental psychology to address these limitations and to capture aspects of human thinking that deep neural networks can’t yet simulate, such as curiosity and creativity.

This more "top-down" approach to AI relies less on identifying patterns in data, and instead on figuring out mathematical ways to describe the rules that govern human cognition. Researchers can then write those rules into the learning algorithms that power the AI system. One promising avenue for this method is called Bayesian modeling, which uses probability to model how people reason and learn about the world.

Brenden Lake, PhD, a psychologist and AI researcher at New York University, and his colleagues, for example, have developed a Bayesian AI system that can accomplish a form of one-shot learning. Humans, even children, are very good at this—a child only has to see a pineapple once or twice to understand what the fruit is, pick it out of a basket and maybe draw an example.

Likewise, adults can learn a new character in an unfamiliar language almost immediately. After seeing, for example, the Russian letter Ж (zh), most humans can recognize it in another style of handwriting and write an example of it, even though they’ve seen it only once. But to train a traditional deep neural network to recognize Ж, the network would have to see many versions of it, in many different handwriting styles, until it could detect the patterns that make up the character. Lake’s system, which he developed after studying hundreds of videos of how people write characters, instead proposes multiple series of pen strokes that are likely to produce the character shown. Using an algorithm based on this method, his AI system was able to recognize characters from many different alphabets after seeing just one example of each and then produce new versions that were indistinguishable from human-drawn examples (Science, Vol. 350, No. 6266, 2015).

Recently, Lake has been working on a new project to model a different human ability—curiosity, or inquisitiveness. People learn by asking questions, and while curiosity might seem like an abstract concept, Lake and his colleagues have grounded it by building an AI system that plays "Battleship," the game in which players locate their opponent’s battleship on a hidden board by asking questions. Only certain questions are allowed in the original game, but Lake and his colleagues allowed human players to ask any open-ended questions that they wanted to, and then used those questions to build a model of the types of questions that elicit the most useful information. Using this model, their AI system could generate new, useful questions when playing the game (Advances in Neural Information Processing Systems, Vol. 30, 2017).
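
One standard way to make "most useful" precise, and only a schematic stand-in for the published model, is expected information gain: score each candidate question by how much its answer, on average, would shrink the player’s uncertainty about the hidden board. A stripped-down version, on a one-row board with a two-cell ship, might look like this:

```python
# Scoring candidate questions by expected information gain (EIG): a schematic
# illustration, not the published Battleship model. Hypotheses are the possible
# positions of a two-cell ship on a tiny one-row, four-cell board.
import math

hypotheses = [{0, 1}, {1, 2}, {2, 3}]                 # possible ship placements
prior = {i: 1 / len(hypotheses) for i in range(len(hypotheses))}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_info_gain(answer_fn):
    """answer_fn maps a true ship placement to the answer the question would get."""
    gain = 0.0
    for answer in {answer_fn(h) for h in hypotheses}:  # group hypotheses by answer
        consistent = [i for i, h in enumerate(hypotheses) if answer_fn(h) == answer]
        p_answer = sum(prior[i] for i in consistent)
        posterior = {i: prior[i] / p_answer for i in consistent}
        gain += p_answer * (entropy(prior) - entropy(posterior))
    return gain

# Two candidate questions, expressed as functions of the true ship placement.
questions = {
    "Is cell 0 occupied?":       lambda ship: 0 in ship,
    "Which cells are occupied?": lambda ship: tuple(sorted(ship)),
}
for text, answer_fn in questions.items():
    print(f"{text}  EIG = {expected_info_gain(answer_fn):.2f} bits")
```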

Computers in conversation

Meanwhile, at Stanford University, Goodman is interested in another core human ability: language. At first glance, it might seem like today’s AI systems do "understand" language, given that they can do translations and follow commands. In reality, though, AI systems cannot yet understand the nuances of human language or truly converse with humans.

That’s partly because in real conversations, the meanings of words change with context. "There’s some fixed contribution that comes from the literal meaning of the words, but actually uncovering the interpretation that the speaker intends is a complicated process of inference that invokes our knowledge about the world," Goodman says.

Take the concept of hyperbole: When someone says, "It cost a million dollars," how do you decide whether they mean that the item literally cost a million dollars or only that it cost a lot of money? It depends on whether the speaker is talking about a fancy dinner, a car, or a house, as well as your knowledge of the likely prices of such things. In one study, Goodman and his colleagues set up experiments in which pairs of participants had discussions that included these ambiguous, potentially hyperbolic statements. Then, they developed a mathematical model that could accurately predict the participants’ interpretations of their partners’ statements (Proceedings of the National Academy of Sciences, Vol. 111, No. 33, 2014).
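
A heavily simplified version of that inference can be written as a Bayesian "listener." The sketch below uses made-up prices and probabilities and is not the machinery of the published model: it simply combines a prior over plausible prices for the item with a rough guess about when a speaker would say "a million dollars."

```python
# A toy Bayesian "listener" deciding what "It cost a million dollars" means.
# All prices and probabilities are invented for illustration; the published
# model is richer (it also infers the speaker's attitude toward the price).

def interpret(price_prior, expensive_threshold):
    """Posterior over true prices after hearing 'It cost a million dollars.'

    price_prior: candidate prices -> prior probability for this kind of item.
    expensive_threshold: price above which the item counts as strikingly expensive.
    """
    def p_say_million(price):
        if price >= 1_000_000:
            return 0.9    # literal use
        if price >= expensive_threshold:
            return 0.4    # hyperbole about a strikingly high price
        return 0.01       # an odd thing to say about an unremarkable price

    unnorm = {price: prob * p_say_million(price) for price, prob in price_prior.items()}
    total = sum(unnorm.values())
    return {price: round(p / total, 3) for price, p in unnorm.items()}

# Made-up priors over what a dinner or a house might plausibly cost.
print(interpret({50: 0.55, 200: 0.35, 1_000: 0.10}, expensive_threshold=500))
# -> most of the probability lands on the $1,000 dinner: expensive, not literal
print(interpret({200_000: 0.4, 500_000: 0.4, 1_000_000: 0.2}, expensive_threshold=800_000))
# -> the literal million-dollar reading dominates for the house
```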

Since then, Goodman and his colleagues have extended the model to other quirky, ambiguous aspects of human language: puns (Cognitive Science, Vol. 40, No. 5, 2016), irony (Proceedings of the Thirty-Seventh Annual Conference of the Cognitive Science Society, 2015) and polite indirect speech (Proceedings of the Thirty-Eighth Annual Conference of the Cognitive Science Society, 2016).

The prospect of a computer that could understand our hyperbole and jokes is tantalizing. But an important limitation of this kind of top-down AI approach is that it requires so much knowledge to be "built-in" by the human programmer.

"In this case we built in some prior knowledge about the world: how much cars usually cost, for example, and the literal meaning of some words," says Goodman. "[So that does] raise the question—where does all that built-in knowledge come from?"

Babies: The smartest things on earth

For psychologists and AI researchers who take a more connectionist, "bottom-up," approach to developing AI systems, that question—"Where does all that knowledge come from?"—is key. Humans may be able to understand jokes and recognize pineapples after seeing just one example, but they do so with decades (or, in the case of children, months or years) of experience observing and learning about the world in general.

So connectionist-oriented AI researchers believe that if we want to build machines with truly flexible, humanlike intelligence, we will need to not only write algorithms that reflect human reasoning, but also understand how the brain develops those algorithms to begin with.

Smith, the developmental psychologist at Indiana University, believes that the answer to that puzzle may come from studying babies.

"My personal view is that babies are the smartest things on Earth in terms of learning; they can learn anything and they can do it from scratch," she says. "And what babies do that machines don’t do is generate their own data."

In other words, deep neural networks learn to distinguish between apples and bananas by viewing thousands of images of each. But babies, from the time they can turn their heads, crawl and grasp, influence the makeup of their own "training data" by choosing where to look, where to go and what to grab.

In one series of studies, Smith and her colleagues are outfitting babies and preschoolers with head-mounted video cameras to closely analyze how they see the world. In one study, for example, they found that during mealtimes, 8- to 10-month-old babies look preferentially at a limited number of scenes and objects—their chair, utensils, food and more—in a way that may later help them learn their first words. They also found that the scenes and objects the babies choose to look at differ from the types of "training images" often used in computational models for AI visual learning systems (Phil. Trans. R. Soc. B, Vol. 372, No. 1711, 2017). Smith is collaborating with machine learning researchers to try to understand more about how the structure of this kind of visual and other data—the order in which babies choose to take in the world—helps babies (and, eventually, machines) develop the mental models that will underlie learning throughout their lives.

"I’m trying to understand how the structure of data in time may make for this kind of robust, general learning," she says. "I think that the data itself will solve a lot of problems."

Other developmental psychologists, meanwhile, take a more top-down approach. Gopnik, for instance, agrees with Smith that studying babies and young children will yield valuable insights for AI. But she and her colleagues do so by trying to build models that explain children’s learning and thinking, and to understand how those models differ from the ones that underlie adult cognition.

She’s found, for instance, that children have an unparalleled capacity for creativity and flexible thinking. Given some information to interpret or a problem to solve, children are more likely to consider unusual possibilities than adults are, which makes them more error-prone, but also more likely than adults to quickly solve problems that have an unexpected solution.

"Children are both literally and metaphorically noisy," she says. "Traditionally, psychologists have seen that as a bug, but my argument is that a lot of things that people have seen as bugs might be features."

In one series of studies, for example, she and her colleagues showed preschoolers, school-age children, teens and adults a picture of a machine and told them that "blickets" make the machine light up. Then, they showed participants pictures of different combinations of objects on top of the machine—either lighting it up or not—and asked them which objects were blickets. When the solution to the problem was unexpected (more than one object was required to make the machine light up), then children were more likely than adults to arrive at the right answer, and younger children were better at it than older children were (PNAS, Vol. 114, No. 30, 2017).

Building models that reflect this and other unique aspects of how children learn could help AI researchers develop computers that capture some of children’s creativity, flexible thinking and learning ability, Gopnik says.

The path forward

The history of AI is in some ways a story of back-and-forth between these top-down and bottom-up approaches to machine learning, but the way forward may end up combining the two. It’s now possible to build AI systems that draw on elements of both. Goodman, for example, has developed an AI system that can identify colors based on imprecise human verbal descriptions. In the study, he and his colleagues built up a massive data set of color descriptions by recruiting more than 50,000 people to play a color-identifying game on Amazon Mechanical Turk. Then, they built an AI system that combined deep-neural-network analysis of those descriptions with a probabilistic model of how people used the descriptions in context to correctly identify the colors (Transactions of the Association for Computational Linguistics, Vol. 5, 2017). "These days, the distinction between Bayesian and neural network models is not as big," Goodman says.

In fact, according to Matthew Botvinick, PhD, a cognitive scientist and the director of neuroscience research at DeepMind, AI systems are moving in the direction of deep neural networks that can build their own mental models of the sort that currently must be programmed in by humans.

Such ideas are exciting to many researchers in the field. "We’re on the tipping point of really major advances" in AI, says Smith. But they’re frightening to some. Tech pioneer Elon Musk and physicist Stephen Hawking have both famously sounded dire warnings that developing powerful machines that can learn as well as humans may be a threat to human civilization.

Botvinick believes that we have a long way to go before we can sort out which threats are genuine and which are not, but he says that tech companies are beginning to take such safety issues and larger societal issues seriously. In 2016, for example, Amazon, Apple, Google, Facebook and other companies joined together to create an industry group called the Partnership on AI to Benefit People and Society to discuss the societal ramifications of AI advances.

Broadly speaking, Botvinick says, these advances should spur discussion and perhaps research into questions that are both philosophical and psychological: If we can build machines that can think like humans, then what kinds of things do we want to keep for ourselves? Do we want machines to do everything that humans do?

As society ponders those questions, it’s also important to remember that the knowledge that psychologists and other AI researchers are gaining as they aim to build thinking machines is also helping us to better understand ourselves. "I really see twin goals here: understanding the human mind better and also developing machines that learn in more humanlike ways," says Lake. "I believe that if we can’t program a computer to explain human behavior, then we don’t fully understand it yet."

APA is hosting Technology, Mind & Society, an interdisciplinary conference exploring interactions between humans and technology, on April 5–7 in Washington, D.C. For more information, visit https://pages.apa.org/tms.

Further reading

Building Machines That Learn and Think Like People
Lake, B.M., et al.
Behavioral and Brain Sciences, 2017

Building Machines That Learn and Think for Themselves
Botvinick, M., et al.
Behavioral and Brain Sciences, 2017

An AI That Knows the World Like Children Do
Gopnik, A.
Scientific American, 2017

Stanford University One Hundred Year Study on Artificial Intelligence (AI100)
https://ai100.stanford.edu/2016-report

The Developing Infant Creates a Curriculum for Statistical Learning
Smith, L.B., et al.
Trends in Cognitive Sciences, in press