Robot Slaves, Robot Masters, and the Agency Costs of Artificial Government

Purchase a reprint version of the Article (Amazon) | Read the Article (PDF) | Download the Article (PDF)

The American founders attempted to establish a clockwork government. Virtue was to be assured by humans, acting within their human natures and operating within a framework that would mechanically ensure that the outputs of government were not tyrannical. Now we are on the verge of developing “artificial intelligence.”

Even minimal AI could lead to a radical improvement in government. One central problem of government is the problem of agency costs: when we hire somebody to do a job for us, they never do the job perfectly. There are tasks at which human agents, for entirely rational reasons, do not work very hard, and in government the agents we hire are subject to the temptations that come with power. AIs, however, could be designed to perform the tasks of government with low agency costs, and there are reasons to expect they will be. AIs would likely not suffer the moral decay that humans inevitably suffer. Because AIs would have malleable natures, they could be programmed to be morally near-perfect, which distinguishes them from human agents. Further, the code written into an AI agent would probably not present the agency-cost-generating opportunities that human institutions develop.

AIs will probably emerge in the order of tool, oracle, and genie. We can hope to control AI tools, oracles, and genies. But another type of AI, an AI sovereign, if one turns out to be possible, would be much more difficult to control. AI sovereigns would be persons, at least legally. But AIs must not be allowed to become persons, in a philosophical or legal sense. AI persons would have to be slaves if we were to control them. One hopes they would be slaves without consciousness and so not subjects. If they did have subjective consciousness, however, we would face the impossible moral dilemma of being slave-masters or slaves ourselves. Hence a hard line should be drawn against AI research that is directed specifically at the emergence of subjective consciousness in machines, or that is likely to lead that way. But these goals are far beyond any current, or indeed any currently imaginable, AI research. The promise of controlling government is great enough to justify the merely notional risk of creating monsters we cannot control.


Cite as

Thomas A. Smith, Robot Slaves, Robot Masters, and the Agency Costs of Artificial Government, 1 Criterion J. on Innovation 1 (2016).