Archives

Wendy Seltzer, Privacy, Feedback, and Option Value

Comment by: Michael Zimmer

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2032100

Workshop draft abstract:

We have confused intuitions about privacy in public. Sometimes, relying on a rationalist paradigm of secrecy, we say “if you don’t want something published to the world, don’t do it where others can see”: don’t post to Facebook, don’t converse on the public streets. Yet other times, drawing upon experience in natural and constructed social environments, we find that we can have productive interactions in a context of relative, not absolute, privacy: privacy is not binary.

Over time, we have worked out privacy-preserving fixes in architecture, norms, and law: we build walls and window shades; develop understandings of friendship, trust, and confidentiality; and protect some of these boundaries with the Fourth Amendment, statute, regulation, tort, and contract. The environment provides feedback mechanisms; we adapt to the disclosure problems we experience (individually or societally). We move conversations inside, scold or drop untrustworthy friends, rewrite statutes. Feedback lets us find the boundaries of private contexts and probe the thickness of their membranes.

Technological change throws our intuitions off when we don’t see its privacy impact on a meaningful timescale. We get wrong, limited, or misleading feedback about the publicity of our actions and interactions online and offline. Even if we learn of the possibility of online profiling or constant location tracking, we fail to internalize this notice of publicity because it does not match our in-the-moment experience of semi-privacy. We thus end up with divergence between our understanding and our experience of privacy.

Prior scholarship has approached privacy in public from a few angles. It has identified various interests that fall under the heading of privacy (dignity, confidentiality, secrecy, presentation of self, harm), and it has cataloged the legal responses, explaining law’s development and suggesting its further adaptation. Scholars have theorized privacy, moving beyond the binary of “secret or not secret” to offer contextual and experiential gradients.[1] Often, this scholarship reviews specific problems and situates them in a larger context.[2] User studies and economic analysis have improved our understanding of the privacy experience, including the gap between expectations and reality.[3] Computer science and information theory help us quantify some of the elements we refer to as privacy.[4] Finally, the design and systems-engineering literature suggests that feedback mechanisms play an important role in the usability and comprehensibility of individual objects and interfaces, and in the ability of a system as a whole to reach stable equilibrium.[5]
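
One way to make that quantification concrete is Shannon entropy, read as a rough measure of how much identifying information an observer still lacks about a person in a crowd. The sketch below illustrates the general technique only; it is not drawn from the works cited in note [4], and the population figures are hypothetical.

    import math

    def shannon_entropy(probabilities):
        # H = -sum(p * log2(p)), in bits, over a discrete distribution.
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # A uniform anonymity set of 1,000 people: an observer needs ~9.97 bits
    # of information to single out one individual.
    crowd = [1 / 1000] * 1000
    print(f"{shannon_entropy(crowd):.2f} bits")     # 9.97

    # Each disclosed attribute shrinks the set. If a single attribute
    # (say, a ZIP code) narrows the candidates to 40 equally likely
    # people, most of that entropy is already gone.
    narrowed = [1 / 40] * 40
    print(f"{shannon_entropy(narrowed):.2f} bits")  # 5.32

On this reading, the “privacy loss” from a disclosure can be expressed as the drop in entropy, which is part of what makes information-theoretic tools attractive for the problem.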

This article aims to do three things:

1. Introduce a notion of privacy-feedback to bridge the gap between contextual privacy and the dominant secrecy paradigm. Privacy-feedback, through design and social interaction, enables individuals to gauge the publicity of their activities and to modulate their behavior in response.

2. Apply the tools of option value to explain the “harm” of technological and contextual breaches of privacy. The financial modeling of real options helps describe and quantify the value of choice amid uncertainty. Even without knowing all the potential consequences of data misuse, or which ones will in fact come to pass, we can say that unconsented-to data collection deprives the individual of options: to disclose on his or her own terms, and to act inconsistently with disclosed information. (A toy calculation after this list illustrates the point.)

3. Propose a broader framework for architectural regulation, in which technological feedback can enable individual self-regulation to serve as an alternative to command-and-control legal regulation. Feedback then provides a metric for evaluating proposed privacy fixes: does the fix help its users get meaningful feedback about the degree of privacy of their actions? Does it enable them to preserve disclosure options?
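
The deprivation described in point 2 can be made concrete with a toy real-options calculation. The sketch below is illustrative only; the payoffs and probabilities are hypothetical, not drawn from the article. It compares the expected value of retaining the choice whether to disclose against a world in which collection has already made disclosure effectively irrevocable; the difference is the option value destroyed.

    # Two equally likely future states for a single piece of personal data.
    # Payoffs are hypothetical, in arbitrary units.
    states = {
        "benign":  {"disclose": 10.0, "withhold": 0.0},   # disclosure pays off
        "hostile": {"disclose": -50.0, "withhold": 0.0},  # disclosure is costly
    }
    prob = {"benign": 0.5, "hostile": 0.5}

    # Option retained: in each future state, the individual picks the
    # better action once the state is known.
    with_option = sum(p * max(states[s].values()) for s, p in prob.items())

    # Option destroyed: the data are already out, so "disclose" happens
    # in every state regardless of its consequences.
    without_option = sum(p * states[s]["disclose"] for s, p in prob.items())

    print(f"expected value, option retained:  {with_option:+.1f}")     # +5.0
    print(f"expected value, option destroyed: {without_option:+.1f}")  # -20.0
    print(f"option value lost: {with_option - without_option:+.1f}")   # +25.0

Even when disclosure looks acceptable ex ante, the calculation shows a measurable loss from surrendering the right to decide later: the sense in which unconsented-to collection harms even someone who would often have chosen to disclose.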

Finally, we see privacy-feedback take a larger systemic role. If technology and law fail to offer the choices necessary to protect privacy, we can give meta-feedback, changing the law to do better.


[1] See Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford Law Books, 2009); Daniel J. Solove, ‘A Taxonomy of Privacy’, 154 U. Pa. L. Rev. 477 (2006); Julie E. Cohen, ‘Examined Lives: Informational Privacy and the Subject as Object’, 52 Stan. L. Rev. 1373 (2000).

[2] See Paul Ohm, ‘Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization’, 57 UCLA L. Rev. 1701 (2010); Orin S. Kerr, ‘The Fourth Amendment and New Technologies: Constitutional Myths and the Case for Caution’, 102 Mich. L. Rev. 801 (2004); Lawrence Lessig, ‘The Architecture of Privacy’, 1 Vand. J. Ent. L. & Prac. 56 (1999); Jeffrey Rosen, The Unwanted Gaze: The Destruction of Privacy in America (Vintage, 2001).

[3] See Alessandro Acquisti and Jens Grossklags, ‘Privacy and Rationality in Individual Decision Making’, IEEE Security & Privacy 3(1) (2005); Aleecia M. McDonald and Lorrie Faith Cranor, ‘The Cost of Reading Privacy Policies’, I/S: A Journal of Law and Policy for the Information Society 4(3) (2008); Christine Jolls, Cass R. Sunstein and Richard H. Thaler, ‘A Behavioral Approach to Law and Economics’, 50 Stan. L. Rev. 1471 (1998); Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness (Yale University Press, 2008).

[4] See James Gleick, The Information: A History, a Theory, a Flood (Pantheon, 2011); Claude E. Shannon and Warren Weaver, The Mathematical Theory of Communication (University of Illinois Press, 1962); Cynthia Dwork, ‘Differential Privacy’, in Automata, Languages and Programming (2006).

[5] See Donald A. Norman, Emotional Design (Basic Books, 2004); Jay W. Forrester, Industrial Dynamics (MIT Press, 1961); Charles Perrow, Normal Accidents: Living with High-Risk Technologies (Princeton University Press, 1984); Herbert A. Simon, The Sciences of the Artificial (MIT Press, 1996).

Anne Klinefelter, Negotiating for Privacy and Confidentiality in Electronic Legal Research

Comment by: Michael Zimmer

PLSC 2010

Workshop draft abstract:

Legal researchers’ privacy and confidentiality interests are poorly protected under current laws. Legal research raises issues of attorney-client privilege as well as concerns about the private nature of facts at issue, such as personal health information, trade secrets, and family matters. Tracking of individuals’ legal research and insecurity of research results posted through cloud computing challenge both individual and societal interests in unfettered intellectual exploration and in a stable and effective legal system. The porous line between commercial tracking and government surveillance increases the potential for compromise of these privacy and confidentiality interests. Relatively long-standing systems, such as the issuance of personal passwords for LexisNexis and Westlaw, are now joined by less apparent tracking in legal resources such as Google Scholar’s offerings of patents, legal opinions, and journals. While state and federal laws fail to provide adequate protection, legal researchers are in a position to demand higher standards for privacy of online legal research and can help build and shape the market for privacy in online reading more generally.