The “doppelganger” is a powerful, well-established literary trope: a biologically unrelated other, a living double that looks like us. Today, technology, social media, and the collection of big data have enabled the cultivation of new kinds of digital doubles and proxies. In some contemporary retellings of the doppelganger story, a person’s “data double” is extracted and secretly used against its human model, a product of Big Tech’s insatiable appetite for monetizing and marketing our identities. But other stories of such “doubling” describe something more intimate. From AI technologies that aim to reproduce our conversations with deceased loved ones to phones that know our habits better than we do, we live in a world where digital selves are ubiquitous yet often untrustworthy. Commercial AI efforts seek to develop their own version of the “digital doppelganger” as a means to replicate the specific skills, preferences, actions, or knowledge of a particular human. Sometimes this double is conceived of as a target for nudging, persuasion, and manipulation; sometimes it is intended as an aid to human creativity and efficiency. Regardless, these digital others are not only cultivated and produced online by users, but also scrutinized, surveilled, and in some cases even created by governments and corporations.
For this upcoming workshop, we invite applications from prospective participants who are contemplating questions such as: How do our digital doubles allow us to know ourselves and each other? What questions do they raise about representation, authenticity, and impersonation? How are these doppelgangers extracted from us, constructed, and used in ways that run against our own interests? How might we construct our doubles in desirable ways, and how might we perform them for algorithms? In what ways do these digital doubles have the capacity to help or harm our offline selves? How are digital doppelgangers involved in the ways power, surveillance, and control are exercised?
The notion of a data double is not entirely new (Haggerty and Ericson 2000): these recent iterations add to a long history of what it means for data to represent us. However, the acceleration of sensing, tracking, and data storage has made such doubles more ubiquitous and, often, less consensual. The accumulation of self-tracking data can also create new identities that mirror and reflect a double back to the user (Ruckenstein 2014), raising questions about their authenticity and ownership. Importantly, this “doppelgangering” can be used both to target interventions and to implement systems that further surveil, monitor, and govern.