Deployment of robots in the air, the home, the office, and the street inevitably means their interactions with both property and living things will become more common and more complex. This paper examines when, under U.S. law, humans may use force against robots to protect themselves, their property, and their privacy.
In the real world, where Asimov’s Laws of Robotics ((Isaac Asimov introduced the three laws (“1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”) in Runaround, a short story originally published in the March 1942 issue of Astounding Science Fiction and subsequently included in ISAAC ASIMOV, I, ROBOT (1950).)) do not exist, robots can pose—or can appear to pose—a threat to life, property, and privacy. May a landowner legally shoot down a trespassing drone? May she hold a trespassing autonomous car as security against damage done or further torts? Is the fear that a drone may be operated by a paparazzo or a peeping Tom sufficient grounds to disable or interfere with it? How hard may you shove if the office robot rolls over your foot? This paper addresses all of those questions and one more: what rules and standards we could put into place to make the resolution of those questions easier and fairer to all concerned.
The default common-law rules governing each of these perceived threats differ somewhat, although reasonableness always plays an important role in defining legal rights and options. In certain cases—drone overflights, autonomous cars—national, state, and even local regulation may trump the common law. Because it is in most cases obvious that humans may use force to protect themselves against actual physical attack, the paper concentrates on the more interesting cases of (1) robot (and especially drone) trespass and (2) responses to perceived threats other than physical attack by robots—perceptions that may not always be justified but that the law may nonetheless consider reasonable.
Part II discusses the common-law self-help doctrine, under which conduct that would otherwise be tortious is privileged where it cures, prevents, or mitigates a more serious tort that is, or reasonably appears to be, occurring or about to occur. In the protection-of-person context, the issue is simple because we value life more than property: one may destroy even expensive property in the reasonable belief that the destruction is necessary to save one’s own life or that of another. The same general rule applies to non-life-threatening personal injury, subject to a reasonableness test weighing the relative harms. By contrast, one may not destroy expensive property to protect inexpensive property. The test is one of cost-benefit: the chattel that poses the threat may be harmed only if the cost of that harm is less than the cost of the harm the chattel would otherwise inflict.
Privacy intrusions complicate the calculus. Intrusion upon seclusion is a recognized, if somewhat exotic, tort, but its rarity in the courts means that the scope of permissible self-help against privacy-invading chattels—like the camera planted by the landlord in the tenant’s bedroom—is poorly charted legal territory. In principle, a tort is a tort, so some self-help should be justified.