Research Track

Intelligence and Autonomy

Active from 2014 to 2018, this track developed grounded, qualitative research to build a cross-disciplinary, contextual understanding of AI and to inform the design, evaluation, and regulation of AI-driven systems.

About this Track

Our newest AI-related research can be found under the research track AI on the Ground.

Rather than focus on utopian dreaming or dystopian fears, the Intelligence & Autonomy research track (2014-2018) began from the position that the historical and social contexts in which AI systems emerge and operate should be central to debates about their uses and potential effects. To develop this understanding, we produced empirical research ranging from an analysis of how service platforms like Uber may present a backdoor to employment discrimination, to a history of aviation autopilot litigation and its implications for legal responsibility in autonomous systems.

We also engaged a range of stakeholders, aiming to foster productive interdisciplinary and inter-institutional conversations. These engagements included invited talks and workshops, among them Futures Forum 2015, a cross-disciplinary convening that used scenarios drawn from commissioned science fiction stories as a collective starting point for new and inclusive ways of planning for the future. We also published An AI Pattern Language, a booklet based on interviews conducted in 2015-2016 with practitioners in the intelligent systems and AI industry; it presents a taxonomy of the social challenges those practitioners face and articulates an array of patterns they have developed in response.

The Intelligence & Autonomy Initiative was founded with support from the John D. and Catherine T. MacArthur Foundation and Microsoft Research, with additional research support from The Ethics and Governance of AI Fund.

All Work

  • op-ed
    Slate
D&S researcher Madeleine Clare Elish discusses the complexities of error in automated systems, arguing that the human role in such systems has become "the weak link, rather than the point of stability." Read on Slate.
    June 2016
  • op-ed
    Harvard Business Review
D&S researcher Alex Rosenblat examines how Uber's app design and deployment redistribute management functions to semiautomated and algorithmic systems, as well as to consumer ratings systems. Read on Harvard Business Review.
    April 2016
  • op-ed
    Slate
D&S researcher Tim Hwang and Samuel Woolley consider the larger trend toward automated politics, its likely future sophistication, and its potential impacts on the public sphere in the era of social media. Read on Slate.
    March 2016
  • op-ed
    Motherboard
    "Uber’s access to real-time information about where passengers and drivers are has helped make it one of the most efficient and useful apps produced by Silicon Valley in recent years. But if you open the app assuming you’ll get... Read on Motherboard
    July 2015
  • op-ed
    Slate
    "Increasingly, what underlies the debate over the so-called sharing economy is a nascent, bigger battle about how society wants machines coordinating and governing human activity. These apps don't match and route people by hand... Read on Slate
    July 2015
  • op-ed
    Quartz
    "In a self-driving car, the control of the vehicle is shared between the driver and the car’s software. How the software behaves is in turn controlled — designed — by the software engineers. It’s no longer true to say that the ... Read on Quartz
    July 2015
  • op-ed
    Civicist
In this piece for Civic Hall's Civicist, Samuel Woolley and D&S fellow Tim Hwang argue that "[t]he failure of the ‘good bot’ is a failure of design, not a failure of automation." Read on Civicist.
    May 2015