In Governing Artificial Intelligence: Upholding Human Rights & Dignity, Mark Latonero shows how human rights can serve as a “North Star” to guide the development and governance of artificial intelligence.
Governing Artificial Intelligence
Upholding Human Rights & Dignity
![Image in a green motif of transparent digital vectors over a woman's face.](https://staging.datasociety.net/wp-content/uploads/2018/11/Governing_Artificial_Intelligence-01.jpg)
Mark Latonero
Recommendations
- Technology companies should establish effective channels of communication with local civil society groups and researchers, particularly in geographic areas where human rights concerns are high, in order to identify and respond to risks related to AI deployments.
- Technology companies and researchers should conduct Human Rights Impact Assessments (HRIAs) throughout the life cycle of their AI systems. Researchers should reevaluate HRIA methodology for AI, particularly in light of new developments in algorithmic impact assessments. Toolkits should be developed to address specific industry needs.
- Governments should acknowledge their human rights obligations and incorporate a duty to protect fundamental rights in national AI policies, guidelines, and possible regulations. Governments can also play a more active role in multilateral institutions, such as the UN, to advocate for AI development that respects human rights.
- Since human rights principles were not written as technical specifications, human rights lawyers, policy makers, social scientists, computer scientists, and engineers should work together to operationalize human rights into business models, workflows, and product design.
- Academics should further examine the value and limitations of human rights law and human dignity approaches, humanitarian law, and ethics, as well as the interactions among them, in relation to emerging AI technologies. Human rights and legal scholars should work with other stakeholders on the tradeoffs between rights when faced with specific AI risks and harms. Social science researchers should empirically investigate the on-the-ground impact of AI on human rights.
- UN human rights investigators and special rapporteurs should continue researching and publicizing the human rights impacts resulting from AI systems. UN officials and participating governments should evaluate whether existing UN mechanisms for international rights monitoring, accountability, and redress are adequate to respond to AI and other rapidly emerging technologies. UN leadership should also assume a central role in international technology debates by promoting shared global values based on fundamental rights and human dignity.