The eminent Yale scholar of, and blogger on, constitutional law Jack Balkin has published a very nice article updating Isaac Asimov’s famous Three Laws of Robotics.
It’s a rich short essay, not a treatise; an opening shot in a new debate, not the last word. Read it and comment please.
A few takes of mine.
1. Balkin proposes to shift the focus from robots to algorithms, which may be embedded in identifiable robots or other machines like self-driving cars, or in services dispersed in the cloud like Siri. Right.
2. He proposes to scrap the “homunculus fallacy” attributing intention to the algorithm. His laws are directed instead to the human authors and operators of the algorithms: the Rabbi not the Golem. Right here too.
Asimov envisaged conscious robots, acting according to laws encoded in their “positronic” brains. This was a double stretch. In fact, we have no idea whether or not it will some day be possible to create conscious intelligences in electronic circuits. Does consciousness arise from a particular pattern and scale of connectivity, or does it require a biological substrate?
Second, supposing this creation to be possible, could such a mind still be controlled absolutely by behavioural algorithms? Our experience of animals suggests not. Dogs in particular have been manipulated genetically to love and follow a human master or mistress as substitute pack leader, but this is an embedded disposition, not a behavioural rule. In their moment-to-moment behaviour, dogs have every appearance of free will, and unquestioning obedience can only be achieved by long and meticulous training, as with humans. Asimov’s robot minds, bound rigidly to the Three Laws, may well be impossible.
3. Balkin’s replacement laws:
- With respect to clients, customers, and end-users, algorithm users are information fiduciaries.
- With respect to those who are not clients, customers, and end-users, algorithm users have public duties. If they are governments, this follows immediately. If they are private actors, their businesses are affected with a public interest, as constitutional lawyers would have said during the 1930s.
- The central public duty of algorithm users is not to externalize the costs and harms of their operations. The best analogy for the harms of algorithmic decisionmaking is not intentional discrimination but socially unjustified pollution.
What Balkin is doing here is formulating rules that connect with existing legal concepts, and so can be made readily operational. The best of these is the fiduciary proposal. A fiduciary is normally a professional of some kind, with more knowledge than the client. This creates an information asymmetry, and hence a power asymmetry. To protect the clients, the law imposes clearly defined duties of care on the experts. The author of a credit-rating or search algorithm, say, is clearly similar to a conventional fiduciary such as a trustee. Google and other powers in the infosphere should be ready to work for legislation along Balkin’s lines. The public duties rule, covering spillover effects outside a narrow expert-client relationship, also looks OK at first sight.
I’m not so sure about “algorithmic nuisance” instead of discrimination as the framing idea for public harm. Balkin no doubt has good reasons for thinking that discrimination is a problematic basis. But the harms done by bad algorithms will almost always lie in warping social relations, not physical effects. The latter, like a crash caused by a defective self-driving algorithm, are rarer and easy to deal with under current laws of tort and negligence. Pollution affects everybody in a community; a bad algorithm will harm some and benefit others.
A parting speculation. Balkin recognizes the smart home as an important arena for future algorithmic dependencies and failures. Let me bring in a powerful if disturbing legal notion here: slavery. The janitroid (my failed coinage of 8 years ago) of the near future should be the householder’s slave, subject to her authority, not the utility’s. A slave may have delegated authority to enter into and implement contracts on its owner’s behalf. Some seventeenth-century Russian serfs could even engage in long-distance trade for their masters, and own serfs themselves (Pipes, Russia under the Old Régime). The idea here is that my algorithmic slave Ariel represents me in negotiations with the utility’s algorithmic slave Caliban.
There is a whole arena here of interactions between algorithms that Balkin does not touch. The algorithms of travel search sites do daily battle with the load-optimising algorithms of the airlines. It’s a Darwinian cyberstruggle out there already.