Unlike the factory robots of the past, social robots will have to navigate a complex human world, and they will be expected to do so with greater autonomy and decision-making capability, says Bertram Malle, PhD, of Brown University. And to do those things, he says, they will need some capacity for moral judgment.

For example, Malle asks, what should an elder-care robot do if its human companion is begging for pain medicine but the robot cannot reach the doctor for approval? How should a home robot intervene when it witnesses the family's 10-year-old doing something mischievous? "That requires a capacity to recognize social and moral norms, act on them and recognize when others violate them," he says. "Robots have to follow human community norms—and the transition between social norms and moral norms is a fluid one."

In a recent publication, Malle and Matthias Scheutz, PhD, outline three steps toward designing robots with a moral compass ("The Routledge Handbook of Neuroethics," 2017). Step one is to understand the moral expectations that people have for autonomous agents. Next, designers will have to develop mechanisms allowing robots to process and act within those moral boundaries. Finally, Malle says, humans will have to give careful thought to the moral standing of robots. Will robots be held accountable for actions that cause people harm? Will they receive due process and their own protections against harm?

Those questions aren't just philosophical riddles. Their answers have real-world implications that can and should draw from the field of psychology, Malle says. "We don't even know how norms are represented in the human mind," he says. "Psychologists have something to offer to refine this knowledge and build new knowledge."

We may be a long way from needing legal defense teams for rogue robots, but if we're to welcome robots into our lives, Malle says, morality should be a building block rather than an afterthought. —Kirsten Weir