We use social science research to develop robust analyses of AI systems that can effectively inform their design, use, and governance.
To realize AI’s social benefits, it is necessary to understand its risks and potential harms to people, communities, and institutions through empirically grounded research rather than speculation. We argue that the impact of AI systems and policies can only be fully understood by observing, listening to, and speaking with people on the ground: from government and business leaders, to scientists and engineers, to community activists and vulnerable groups.
Our ethnographically oriented research explores new processes of integrating and governing AI within diverse social contexts, including the medical, governmental, and humanitarian sectors, among others. We seek to complement critical analyses with pragmatic governance and design approaches that speak to multiple stakeholders.
To this end, we are focused not only on articulating emerging issues but also on developing guidelines, best practices, and recommendations for regulatory approaches. We will examine effective ethical, human rights, regulatory, and other governance frameworks to help policymakers and ethics bodies better understand what they should demand of AI development, procurement, and assessment processes. With this sociotechnical understanding as a foundation, society will be better equipped to plan for, and govern, beneficial and human-centered AI.