Features

Designing Responsible Artificial Intelligence: Global Guideposts and Standard Setting

By Charles Morgan and Emily Reineke, CAE • July 24, 2019

Free Framework of Ethical Principles and Open Call for Comments

Recognizing the broad spectrum of uses for AI and the widespread global development of AI, ITechLaw Association has published Responsible AI: A Global Policy Framework that explores eight core ethical AI principles with the goal of setting global guideposts for the development of ethical and responsible AI.

Artificial intelligence, or AI as it's commonly known, is rapidly becoming ubiquitous in our society. Organizations of every kind, from technology giants to associations, have heard about AI's promise to make jobs easier, but also its threat to replace jobs that humans currently do. Recognizing the broad spectrum of uses for AI and its widespread global development, ITechLaw Association (ITechLaw) has published a book called Responsible AI: A Global Policy Framework that explores the following eight core ethical AI principles, with the goal of setting global guideposts for the development of ethical and responsible AI:

  • Ethical Purpose and Society Benefit
  • Accountability
  • Transparency and Explainability
  • Fairness and Non-discrimination
  • Safety and Reliability
  • Open Data and Fair Competition
  • Privacy
  • Intellectual Property

As AI is a topic of global proportions, it is fitting for ITechLaw, with membership from more than 70 countries, to present this global framework. The book was authored by a multidisciplinary group of lawyers, academics, and industry representatives from 16 countries across five continents. The author group was intentionally diverse to ensure that the standards it set would reflect the ideal and beneficial ways AI could be used across the world.

The authors highlight the ideal uses of AI and call for all AI to be developed in a way that is accountable, explainable, transparent, and fair for all individuals. Because this is an emerging technology, there are no existing standards or rules that govern it, and many organizations have been asking for them to be developed. This book provides a proposed draft of globally relevant principles.

Taken together, the principles set an aspirational goal for what AI can, and should, be. They can broadly be separated into two categories: how AI is developed and how AI interacts with society.

How AI Is Developed

At its core, AI should be ethical and benefit society, and ultimate accountability should rest with the humans behind it. AI needs to be trustworthy, and trust is built through transparency in decisions and the ability to explain how those decisions were made.

All decisions also need to be fair, which is easy to understand but hard to implement. An AI system's outputs derive from the big data it analyzes, a process that can be opaque to human understanding. This makes building trust and accountability all the more important, along with ensuring that the data used is complete and reliable.

Societal Implications of AI

The latter four principles address the societal implications of AI. To begin, AI should be deployed safely and reliably in applications such as autonomous vehicles and robotic surgery. Moreover, since "deep learning" AI algorithms tend to learn from massive sets of data, limited access to relevant and valuable data raises new questions of unfair competition. Society must also consider the privacy implications of AI, since AI systems can be trained to process huge volumes of sensitive personal information. Amid all of these developments, questions will arise about who owns what intellectual property rights in the masses of data that AI uses, the novel algorithms that process that data, and the products and services that incorporate (or are "powered by") complex AI systems.

This book from ITechLaw provides guidelines and principles to consider when developing, deploying, or using AI. In this time of rapid development, it's important for associations to provide their industries with resources on how to do the most good for the most people.

While it would be easy to charge ahead quickly, for the good of the general public it's important that all industries think carefully and critically about next steps and act responsibly in developing AI. Although the book raises a number of questions that do not yet have answers, Responsible AI offers a framework of analysis with clear guideposts to help resolve complex issues that may arise along the implementation journey.

ITechLaw welcomes insights on the principles and framework during its public comment period, which runs through September 15, 2019, to inform a second version of the framework.

Responsible AI: A Global Policy Framework

Publisher: ITechLaw

Free: Global Policy Framework, 8 Principles

$9.99 (e-book); $79 (print, for nonmembers of ITechLaw)

Order or download from ITechLaw.org/ResponsibleAI

Public Comment Period: Open through September 15, 2019

Charles Morgan is president of ITechLaw, editor and a chapter lead of Responsible AI: A Global Policy Framework, and a principal at McCarthy Tétrault LLP, Montreal.

Emily Reineke, CAE, is a managing director of ITechLaw.