Ryan Calo, People Can Be So Fake: On the Limitations of Privacy and Technology Scholarship

Comment by: Andrea Matwyshyn

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1458637

Workshop draft abstract:

Scholarship around privacy often reacts to and contextualizes advances in technology.  Scholars disagree about whether and how particular technologies implicate privacy, however, and certain technologies with potentially serious ramifications for privacy can avoid scrutiny entirely.

Consider a growing trend in user interface design, that of building “social” interfaces with natural language capabilities and other anthropomorphic signifiers.  An extensive literature in communications and psychology demonstrates that artificial social agents elicit strong subconscious reactions, including the feeling of being observed or evaluated. Adding a social layer to the technologies we use to investigate or navigate the world, or introducing apparent agents into spaces historically reserved for solitude, has important implications for privacy.  These techniques need not entail any collection, processing, or dissemination of information, however, and hence fall outside even the broadest and most meticulous contemporary accounts of privacy harm.

This paper argues for a new master test for privacy-invasive technology.  Specifically, I argue that for any given technology, we should look to three interrelated factors: perception of observation, actual observation, and independent consequence.  Dissecting the effects of technology along these three lines will help clarify why, and to what extent, a given technology or technique implicates privacy.  This approach differs from the standard discussion of privacy-invasive technology in terms of the collection, processing, and dissemination of information.  It has the advantage of capturing certain conservative intuitions espoused by courts and commentators, such as the view that the mere collection or processing of data by a computer can at most “threaten” privacy, while also uncovering situations in which notice itself triggers a violation.  Yet the approach is not reductionist overall: the proposed test elevates the importance of the victim’s perspective and captures a previously undertheorized category of privacy harm.