New Laws of Robotics
All these developments change the balance between machines and humans in the ordering of our daily lives. Right now, artificial intelligence and robotics most often complement, rather than replace, human labor. In many areas, we should use our existing institutions of governance to maintain this status quo. Avoiding the worst outcomes in the AI revolution while capitalizing on its potential will depend on our ability to cultivate wisdom about this balance.
However, attaining this result will not be easy. A narrative of mass unemployment now grips policymakers, who are envisioning a future where human workers are rendered superfluous by ever-more-powerful software, robots, and predictive analytics that perform jobs just as well at a fraction of present wages. This vision offers stark alternatives: make robots, or be replaced by them.
Another story is possible and, indeed, more plausible. In virtually every walk of life, robotic systems can make labor more valuable, not less. Even now, doctors, nurses, teachers, home health aides, journalists, and others are working with roboticists and computer scientists to develop tools for the future of their professions, rather than meekly serving as data sources for their future replacements. Their cooperative relationships prefigure the kind of technological advance that could bring better healthcare, education, and more to all of us, while maintaining meaningful work.
They also show how law and public policy can help us achieve peace and inclusive prosperity, rather than a “race against the machines.” We can do so only if we update the laws of robotics that guide our vision of technological progress. The obvious starting point is Isaac Asimov’s famous three laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov’s laws of robotics have been enormously influential for science fiction writers and the technologists inspired by them. They seem clear-cut, but they are not easy to apply. Consider, for instance, whether Asimov’s laws allow robotic cars. Self-driving vehicles promise to eliminate many thousands of traffic fatalities each year, but may also put hundreds of thousands of paid drivers out of work. Does that harm entitle governments to ban or slow down the adoption of self-driving cars? These ambiguities, and many more, are why the statutes, regulations, and court cases affecting robotics and AI in our world are more fine-grained than Asimov’s laws.
I propose four new laws of robotics to guide us on the road ahead. They are directed toward the people building robots, not the robots themselves, and better reflect how actual lawmaking is accomplished:
- Robotic systems and AI should complement professionals, not replace them.
- Robotic systems and AI should not counterfeit humanity.
- Robotic systems and AI should not intensify zero-sum arms races.
- Robotic systems and AI must always indicate the identity of their creator(s), controller(s), and owner(s).
Numerous factors matter in the rush to automation, many specific to particular jobs and jurisdictions. But one organizing principle is the importance of meaningful work to the self-worth of persons and the governance of communities. A humane agenda for automation would prioritize innovations that complement workers in jobs that are, or ought to be, fulfilling vocations. It would substitute machines for humans in dangerous or degrading work, while ensuring that those who presently do that work are fairly compensated for their labor and offered a transition to other social roles.
Despite a growing ethical consensus that people should be told when they are interacting with algorithms and smart machines, there are subfields of AI devoted to making it ever more difficult for us to distinguish between humans and machines. These research projects might culminate in creations like the advanced androids of our science fiction films, indistinguishable from human beings. Yet in hospitals, schools, police stations, and even manufacturing facilities, there is little to gain by embodying software in humanoid bodies, and plenty to lose.
Deadly and invasive technologies pioneered by armies could be used well beyond the battlefield. Today, a growing number of law enforcement agencies aim to use facial recognition to scan crowds for criminals. In China, the government uses “social credit scores,” computed from surveillance data, to determine which trains or planes a citizen may board, which hotels a person may stay in, and which schools a family’s children may attend.
Some applications of these systems may be quite valuable, such as public health surveillance that accelerates contact tracing to stop the spread of infectious disease. However, when the same powerful capacities are ranking and rating everyone at all times, they become oppressive.
Of course, some robots and algorithms will evolve away from the ideals programmed into them by their owners, as a result of interactions with other people and machines. However such machines evolve, the original creator should be obliged to build constraints into the code that both record outside influences and prevent bad outcomes. And once another person or entity hacks into or disables those constraints, the hacker becomes responsible for the robot’s wrongdoing.
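To make that duty concrete, consider what “recording influences” might look like in code. The sketch below is a hypothetical illustration in Python, not a mechanism proposed in the text: an append-only, hash-chained log of every event that shapes a machine’s behavior, so an auditor can attribute the machine’s conduct to its sources and detect whether the record itself has been tampered with.

```python
import hashlib
import json
import time


class InfluenceLog:
    """Append-only, hash-chained record of influences on a machine.

    Hypothetical illustration: each event that shapes the machine's
    behavior (a training update, a remote command, a configuration
    change) is chained to the previous entry, so altering or deleting
    any earlier entry breaks the chain and is detectable.
    """

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def record(self, source, influence):
        """Append one influence event, chained to the previous entry."""
        entry = {
            "time": time.time(),
            "source": source,        # who or what changed the machine
            "influence": influence,  # what was changed
            "prev": self._last_hash,
        }
        entry["hash"] = self._digest(entry)
        self.entries.append(entry)
        self._last_hash = entry["hash"]

    def verify(self):
        """Return True only if no entry has been altered or removed."""
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev"] != prev or self._digest(body) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

    @staticmethod
    def _digest(body):
        # Deterministic serialization so the same entry always hashes equally.
        return hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()


log = InfluenceLog()
log.record("manufacturer", "shipped control policy v1.0")
log.record("fleet-operator", "remote parameter update")
assert log.verify()  # an intact chain attributes every influence
```

Responsibility then tracks the log: if `verify()` fails, someone has disabled the constraints, and liability shifts to whoever broke the chain.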
Too many technologists aspire to rapidly replace human beings in areas where we lack the data and algorithms to do the job well. Meanwhile, politicians have tended toward fatalism, routinely lamenting that regulators and courts cannot keep up with technological advance.
Both triumphalism in the tech community and minimalism among policymakers are premature. As robots enter the workforce, we have a golden opportunity to shape their development with thoughtful legal standards for privacy and consumer protection. We can channel technology through law. We can uphold a culture of maintenance over disruption, of complementing human beings rather than replacing them. We can attain and afford a world ruled by persons, not machines. The future of robotics can be inclusive and democratic, reflecting the efforts and hopes of all citizens. And new laws of robotics can guide us on this journey.