Reprogramming Our Robotic Imagination
How "The New Breed" Suggests Other Ways to Partner with Robots
Robots can do many things (vacuum a room, say, or dance like a septuagenarian rock star), but one of their most interesting skills is rarely noted: their unfailing ability to create and scale moral panic. Many people have a hard time not imagining robots as a professional threat or, in extreme cases, as their eventual replacements. When they think of robots, they tend to see them either as tools for doing dirty, dangerous, and dull work or as our eventual overlords. It’s a very limited narrative, one that Kate Darling, MIT researcher and author of The New Breed: What Our History with Animals Reveals about Our Future with Robots, is hoping to revise.
“When we assume that robots will inevitably automate human jobs and replace friendship, we’re not thinking creatively about how we design and use the technology, and we don’t see the choices we have in shaping the broader systems around it,” she writes.
Thinking that robots can only serve at our whim or overpower us is itself robotic thinking: a zero-or-one mental model that is both simplistic and dangerous. Darling argues for a more creative and nuanced approach, one she finds in human-animal relations. Our relationships with animals, she writes, have been varied and productive: “We’ve relied on animals to help us do things we don’t do alone. In using these autonomous, sometimes unpredictable agents, we have not replaced, but rather supplemented, our own relationships and skills.”
Her book offers numerous case studies of animals and humans working together: the honeyguide bird leading Yao villagers to beehives; oxen, “arguably the most important domesticated animal in the world… a plow-pulling, load-bearing grunt worker of a beast”; the dogs enlisted to fight in World War II. These complex relationships, she suggests, may show us how to engage with robots going forward.
Can we find a way to see our relationship with robots not as mechanistic (they are not mere tools) but as social and reciprocal (much as we do with many animals)?
Darling suggests we can. The healthy way to view the robotic future is not to quiver about being replaced but to imagine the different varieties of partnership robots might make possible. Doing so will empower us to make conscious design decisions. In some ways, the book demands that we expand our imaginative powers in order to see how truly messy (a favorite word of Darling’s) the reality of robot-human interaction already is, a messiness that will only increase as robots evolve and become more deeply embedded in the circuitry of daily life.
Questions, Questions
The New Breed wants us to ask questions, lots of them, about what human-robot partnerships might look like. The informed partnerships Darling speaks of are the sort that Andrew McAfee and Erik Brynjolfsson, in their book Machine | Platform | Crowd, suggest we should develop with artificial intelligence. They nudge us to ask questions like these:
• Where would better human connections most help your performance and that of your organization?
• Looking at the existing tasks and processes in your job or organization, what do you see as the ideal division of work between humans and machines?
• What new products or services could be created by combining the emerging capabilities of machines with a human touch?
The idea is then to let the honest answers guide the design process.
It’s worth thinking about our future robot partnerships in the same way. This, of course, requires us to understand, in a deep way, all the elements of human work (which ones we excel at, which ones we hate, which ones might be better or more efficiently done by a robot) long before we install a digital co-worker in our open-plan office. This sort of labor audit is necessary for the creation of excellent jobs, because it focuses on preserving the craft of work (the core professional skills) while automating the “stuff” (the drudgery) one faces in a workday.
We should also listen to the people who actually build robots. They are working hard to convince customers, and the public at large, to accept robots into their lives. “Acceptance by users in the physical space where the robot lives is critical,” Marcio Macedo, Co-Founder and VP of Product at Ava Robotics, tells The Resonance Test. “If that is not achieved, it won’t matter too much what the application is or what the promise of the robot’s value is.”
Try Sitting Side-by-Side
It’s hard to talk about the varieties of robotic experience because we’re still in the early stages of robotic history. But even at this early stage, it seems worth asking some philosophical questions.
One such question appears in a 2018 blog post by my colleague Toby Bottorf, who writes that “The humanist future is one where robots assist in the emergence of excellent jobs. The answer to the question ‘Robots or people?’ is ‘Robots for people.’”
Which makes me ask: Which people?
Bottorf’s humanist future, the one I want to inhabit, is about people in general, or as much of humanity as possible. This isn’t the same thing as, say, “humans who own robots.” Companies may not necessarily want this future, but it’s up to the people working on their automation projects to lay it out and make a case for it. Robots have to work, yes, for the people who own them, but also for the people who buy them, manufacture them, and interact with them: for every stakeholder involved.
This is a new idea, and it’ll take time to take hold. Working with robots requires a grand shift in perspective. Literally. But it can be done.
Do a Google search, in English, for something along the lines of “robot-human interaction,” and here’s what you get: images of human arms and robot arms shaking hands. A similar search in Japanese produces images of robots and people sitting or standing together, sharing a point of view. Or so says Darling, who suggests that the Japanese have less trouble than we do welcoming robots into their lives. She writes: “My roboticist colleagues in Japan don’t field nearly as many questions about their creations replacing humans, in part because robots are more often viewed as mechanical partners rather than adversaries.”
Why the different perspective? Darling suggests an answer lies in the stories the Japanese tell themselves about robots. She points to the popularity of Astro Boy, the famed robot-centric cartoon, as well as to the post-WWII economic narrative: “In the 1960s, Japan began to view robots as a potential driver of productivity and growth, and when robotics played a big role in Japan’s economic revival, it inspired a positive image of robots as nonthreatening and helpful to humans.”
These images and stories embody a different, more positive perspective on robot-human partnerships, and we’d do well to learn from them.
The Right Rhythm
Finding ways to shake up our binary thinking about robots can disarm the purely catastrophic imagination. Considering animal-human partnerships can act as a counter-force to the kind of passivity many feel about these technologies and, as Darling writes, help us “think more critically about the systems technology is situated in, and understand that, in all of this, we have choices.”
Darling reminds us that we have an opportunity to define the future rhythms of working with robots, but this will require a more critical, more creative outlook than we currently possess. She’s too polite to say it, but she wants to yell “Wake up!” at, say, the many fearful tech writers and their readers, who buy into the easy, scary story of robotic replacement.
In the upcoming weeks, I’ll be chatting with Darling on The Resonance Test. Do you have a question you’d like to ask her? If so, please let me know.