Artificial Intelligence Law and Policy: Key Challenges
Ryan Calo

These remarks are part of an ongoing project exploring key challenges for artificial intelligence law and policy.

Concerns over artificial intelligence are nothing new. In the nineteen-eighties, during the lead-up to the so-called AI Winter, when the field failed to deliver on its grander promises, headlines warned that robots would take our jobs (assuming Skynet didn’t kill us first). If there were calls for policymakers to intervene, none heeded them.

Today, techniques in AI developed in the sixties and seventies join with cheap processing and ubiquitous data to yield promising new applications such as real-time translation and cancer diagnosis. The concerns are back as well. And this time, policymakers are paying attention.

In the summer of 2016, the Obama White House conducted four major workshops on artificial intelligence. Both the Senate and the House held first-of-their-kind hearings on AI and advanced robotics. Abroad, the European Parliament’s Legal Affairs Committee called for EU-wide rules on robotics and AI, and the Japanese Ministry of Economy, Trade and Industry established a Robotics Policy Office. Foundations, individuals, and companies have pledged tens of millions of dollars to foster beneficial AI.

Today there is recognition, largely absent in the scares of the past, that AI presents real policy challenges. So what are we to do about them? Somehow build law into AI? Research by Harry Surden suggests that only some legal rules are amenable to translation into code. And, as top Google lawyer Kent Walker has remarked, Isaac Asimov’s stories were only interesting because his laws didn’t work.
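Surden’s point can be made concrete with a toy sketch (my illustration, not an example from his work): a determinate rule, such as a numeric speed limit, maps cleanly onto code, while an open-textured standard, such as “reasonable care,” admits no mechanical decision procedure.

```python
# Toy sketch of the variable determinacy point (illustrative only,
# not drawn from Surden's paper).

def violates_speed_limit(speed_mph: float, limit_mph: float = 65.0) -> bool:
    """A determinate rule: every input yields a definite answer."""
    return speed_mph > limit_mph

def breaches_reasonable_care(case_facts: dict) -> bool:
    """An open-textured standard: 'reasonableness' turns on context,
    precedent, and judgment, so no fixed decision procedure exists."""
    raise NotImplementedError("not mechanically translatable")

print(violates_speed_limit(72.0))  # True: the rule decides the case
```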

Develop an ethics of AI? Maybe. But there are reasons for caution. First, ethics is malleable and contested: both Kant and Bentham get to say “should.” Second, ethics lacks an enforcement mechanism (unless we count frowning). And third, the history of consumer protection law is replete with “ethical guidelines” that wound up restraining trade, which is ample reason to monitor the new Partnership on AI consisting of Amazon, Apple, Facebook, Google, IBM, and Microsoft.

What is needed for AI is what is always needed: rolling up our sleeves and tackling concrete, pragmatic challenges. We must, first, identify a set of grand challenges for AI that is simultaneously ambitious and tractable. These include: (a) establishing best practices for fair algorithms, (b) setting and verifying minimum safety thresholds for AI embedded in cyber-physical systems, (c) cushioning the effects of labor displacement, (d) removing perceived and actual barriers to research into AI systems, and (e) deciding the contexts and contours of meaningful human control over AI, among other examples. In addition, we should begin to study the medium- and long-term puzzles AI presents for law, such as the contours and utility of criminal mens rea or tort foreseeability in a world of emergent behavior.

A growing community is already engaged in the serious work of thinking through AI’s social impacts; others seem distracted by far-fetched or vague lines of reasoning about AI. The resources and discourse must flow toward the former. The alternative is to miss out on a unique opportunity to shape a transformative technology in its infancy. By failing to ask and answer hard, practical questions about AI, the entire field of AI and social impacts risks disillusionment, what Solon Barocas jokingly refers to as an AI Policy Winter. I hope that winter is not coming.

1. E.g., Harley Shaiken, A Robot Is After Your Job: New Technology Isn’t a Panacea, N.Y. Times, Sept. 3, 1980, at A19. The AI Winter refers to the period in the mid-1980s in which interest in AI began to drop as the field failed to yield significant practical gains. See Stanford University, Artificial Intelligence and Life in 2030, at 51 (Sept. 2016), online at https://ai100.stanford.edu/sites/default/files/ai_100_report_0831fnl.pdf. Skynet is the name of the fictional, murderous artificial intelligence in the Terminator movie series.
2. Id. at 9, 26–27.
3. National Science and Technology Council, Exec. Office of the President, Preparing for the Future of Artificial Intelligence, 12 (2016).
4. U.S. Senate Committee on Commerce, Science, & Transportation, The Dawn of Artificial Intelligence (Nov. 30, 2016), online at http://www.commerce.senate.gov/public/index.cfm/hearings?ID=042DC718-9250-44C0-9BFE-E0371AFAEBAB; U.S. Congress Joint Economic Committee, The Transformative Impact of Robots and Automation (May 5, 2016), online at https://www.jec.senate.gov/public/index.cfm/hearings-calendar?ID=BB1C3FD8-9FD1-46BA-917C-E5B3C585F1CC.
5. Ilina Lietzen, Robots: Legal Affairs Committee calls for EU-wide rules, European Parliament News (Jan. 12, 2017, 12:27 PM), http://www.europarl.europa.eu/news/en/news-room/20170110IPR57613/robots-legal-affairs-committee-calls-for-eu-wide-rules; Robotics Policy Office is to be Established in METI, Ministry of Economy, Trade and Industry (July 1, 2015), http://www.meti.go.jp/english/press/2015/0701_01.html.
6. E.g., April Glaser, LinkedIn’s and eBay’s founders are donating $20 million to protect us from artificial intelligence, Recode (Jan. 10, 2017, 4:35 PM), online at http://www.recode.net/2017/1/10/14226564/linkedin-ebay-founders-donate-20-million-artificial-intelligence-ai-reid-hoffman-pierre-omidyar ($5 million from the Knight Foundation and $1 million from the Hewlett Foundation to Harvard and MIT); Carnegie Mellon Receives $10 Million From K&L Gates to Study Ethical Issues Posed by Artificial Intelligence, Carnegie Mellon University News (Nov. 2, 2016), online at https://www.cmu.edu/news/stories/archives/2016/november/gift.html ($10 million donation establishing the K&L Gates Endowment for Ethics and Computational Technologies); Max Tegmark, Elon Musk donates $10M to keep AI beneficial, Future of Life Institute (Jan. 15, 2015), online at https://futureoflife.org/2015/10/12/elon-musk-donates-10m-to-keep-ai-beneficial ($10 million donation from Elon Musk to the Future of Life Institute).
7. Harry Surden, The Variable Determinacy Thesis, 12 Colum. Sci. & Tech. L. Rev. 1 (2011). See also Harry Surden, Technological Opacity, Predictability, and Self-Driving Cars, 38 Cardozo L. Rev. 121, 162–63 (2016).
8. Future of Life Institute, AI and Law, 3:05, YouTube (Feb. 20, 2017), https://www.youtube.com/watch?v=m9l90FMIWkY&feature=youtu.be.
9. See José de Sousa E Brito, Right, Duty, and Utility: from Bentham to Kant and from Mill to Aristotle, XVII/2 Revista Iberoamericana de Estudios Utilitaristas 91, 92 (2010).
10. See, e.g., Nat’l Soc’y of Prof’l Eng’rs v. United States, 435 U.S. 679 (1978) (Department of Justice complaint against engineers for price and marketing practices). See also In the Matter of the American Medical Association, et al., 94 F.T.C. 701 (1979) (doctors); In the Matter of Connecticut Chiropractic Ass’n, 114 F.T.C. 708 (1991) (chiropractors); In the Matter of Nat’l Soc’y of Prof’l Eng’rs, 116 F.T.C. 787 (1993) (engineers).
11. Romain Dillet, Apple Joins Amazon, Facebook, Google, IBM and Microsoft in AI Initiative, TechCrunch (Jan. 27, 2017), online at https://techcrunch.com/2017/01/27/apple-joins-amazon-facebook-google-ibm-and-microsoft-in-ai-initiative.
12. Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513, 542 (2015).
13. See Kate Crawford & Ryan Calo, There is a blind spot in AI research, Nature (Oct. 13, 2016), online at http://www.nature.com/news/there-is-a-blind-spot-in-ai-research-1.20805 (discussing different approaches to AI’s social impacts).
14. E-mail from Solon Barocas to Ryan Calo (Jan. 24, 2017, 1:12 PM) (on file with author).