Archives

Seda Gürses, “Privacy is don’t ask, confidentiality is don’t tell”: An empirical study of privacy definitions, assumptions and methods in computer science research, and Robert Sprague and Nicole Barberis, An Ontology of Privacy Law Derived from Probabilistic Topic Modeling Applied to Scholarly Works Using Latent Dirichlet Allocation (joint workshop)

Comment by: Helen Nissenbaum

PLSC 2013

Workshop draft abstract:

Since the end of the 1960s, computer scientists have engaged in research on privacy and information systems. Over the years, this research has led to a whole palette of “privacy solutions”. These vary from design principles and privacy tools to the application of privacy-enhancing techniques. The solutions originate from diverse sub-fields of computer science, e.g., security engineering, databases, software engineering, HCI, and artificial intelligence. From a bird’s-eye view, all of these researchers are studying privacy. However, a closer look reveals that each community of researchers relies on different, sometimes even conflicting, definitions of privacy, and on a variety of social and technical assumptions. At best, they are referring to different facets of privacy and, at worst, they fail to take into account the diversity of existing definitions and to integrate knowledge about the phenomenon generated by other communities (Gürses and Diaz, 2013). Researchers do have a tradition of assessing the (implicit) definitions and assumptions that underlie the studies in their respective communities (Goldberg, 2002; Patil et al., 2006). However, a systematic evaluation of privacy research practice across the different computer science communities has so far been absent. This paper contributes to closing this gap through an empirical study of privacy research in computer science. The focus of the paper is on the different notions of privacy that the 30 interviewed privacy researchers employ, as well as on the dominant worldviews that inform their practice. Through a qualitative analysis of their responses using grounded theory, we consider how the researchers’ framing of privacy affects what counts as “worthwhile problems” and “acceptable scientific evidence” in their studies (Orlikowski and Baroudi, 1991). We further analyze how these conceptions of the problem prestructure the potential solutions to privacy in their fields (Van Der Ploeg, 2005).

We expect the results to be of interest beyond the confines of computer science. Previous studies of how privacy is conceived and addressed in practice have brought new perspectives to “privacy on the books” (Bamberger and Mulligan, 2010): users’ changing articulations of privacy in networked publics (danah boyd, 2007), the evolution of privacy as practiced by private organizations (Bamberger and Mulligan, 2010), the conceptualization of privacy in legal practice (Solove, 2006), and the framing of privacy in media coverage in different cultural contexts (Petrison and Wang, 1995). However, few studies have turned their gaze on the researchers themselves with the objective of providing a critical reflection on the field (Smith et al., 2011). The few studies that exist in computer science focus on artifacts produced by the researchers, e.g., publications, or provide an analysis of the state of the art written by insiders. While these are valuable contributions, we expect the comparative and empirical nature of our study to provide deep and holistic insight into privacy research in computer science.


Kenneth A. Bamberger and Deirdre K. Mulligan. Privacy on the Books and on the Ground. Stanford Law Review, 63:247–316, 2010.

danah boyd. Why Youth (Heart) Social Network Sites: The Role of Networked Publics in Teenage Social Life. The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning, pages 119–142, 2007. URL http://www.mitpressjournals.org/doi/abs/10.1162/dmal.9780262524834.119.

Ian Goldberg. Privacy-enhancing technologies for the Internet, II: Five years later. In Proc. of PET 2002, LNCS 2482, pages 1–12. Springer, 2002.

Seda Gürses and Claudia Diaz. An activist and a consumer meet at a social network … IEEE Security and Privacy (submitted), 2013.

Wanda J. Orlikowski and Jack J. Baroudi. Studying Information Technology in Organizations: Research Approaches and Assumptions. Information Systems Research, 2(1):1–28, 1991.

Sameer Patil, Natalia Romero, and John Karat. Privacy and HCI: Methodologies for studying privacy issues. In CHI ’06 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’06, pages 1719–1722, New York, NY, USA, 2006. ACM. ISBN 1-59593-298-4. doi: 10.1145/1125451.1125771. URL http://doi.acm.org/10.1145/1125451.1125771.

Lisa A. Petrison and Paul Wang. Exploring the dimensions of consumer privacy: an analysis of coverage in British and American media. Journal of Direct Marketing, 9(4):19–37, 1995. ISSN 1522-7138. doi: 10.1002/dir.4000090404. URL http://dx.doi.org/10.1002/dir.4000090404.

H. Jeff Smith, Tamara Dinev, and Heng Xu. Information Privacy Research: An Interdisciplinary Review. MIS Quarterly, 35(4):989–1016, December 2011. ISSN 0276-7783. URL http://dl.acm.org/citation.cfm?id=2208940.2208950.

Daniel J. Solove. A Taxonomy of Privacy. University of Pennsylvania Law Review, 154(3), January 2006.

Irma Van Der Ploeg. Keys To Privacy: Translations of “the privacy problem” in Information Technologies, pages 15–36. Maastricht: Shaker, 2005.
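
The second paper in this joint workshop derives an ontology of privacy law by applying latent Dirichlet allocation (LDA) to scholarly works. As background for readers outside computer science, the sketch below shows the general shape of such a topic-modeling pipeline in Python with scikit-learn; the corpus, topic count, and parameters are placeholders for illustration, not Sprague and Barberis’s actual code or data.

    # Illustrative LDA topic-modeling pipeline (placeholder corpus and
    # parameters; not the paper's implementation).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    documents = [
        "fourth amendment search reasonable expectation of privacy",
        "consumer data protection consent and information flows",
        "court records disclosure and practical obscurity",
    ]  # stand-ins for the full text of scholarly works

    # LDA models documents as mixtures of topics over raw term counts.
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(documents)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(counts)  # per-document topic proportions

    # Each topic is a distribution over the vocabulary; list its top terms.
    vocab = vectorizer.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top_terms = [vocab[i] for i in topic.argsort()[::-1][:5]]
        print(f"topic {k}: {', '.join(top_terms)}")

Each fitted topic is a probability distribution over vocabulary terms, and each document a mixture of topics; mapping the learned topics onto legal categories of an ontology would then be an interpretive step on top of this machinery.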

Heather Patterson and Helen Nissenbaum, Context-Dependent Expectations of Privacy in Self-Generated Mobile Health Data

Comment by: Katie Shilton

PLSC 2013

Workshop draft abstract:

Rapid developments in health self-quantification via ubiquitous computing point to a future in which individuals will collect health-relevant information using smart phone apps and health sensors, and share that data online for purposes of self-experimentation, community building, and research. However, online disclosures of intimate bodily details coupled with growing contemporary practices of data mining and profiling may lead to radically inappropriate flows of fitness, personal habit, and mental health information, potentially jeopardizing individuals’ social status, insurability, and employment opportunities. In the absence of clear statutory or regulatory protections for self-generated health information, its privacy and security rest heavily on robust individual data management practices, which in turn rest on users’ understandings of information flows, legal protections, and commercial terms of service. Currently, little is known about how individuals understand their privacy rights in self-generated health data under existing laws or commercial policies, or how their beliefs guide their information management practices. In this qualitative research study, we interview users of popular self-quantification fitness and wellness services, such as Fitbit, to learn (1) how self-tracking individuals understand their privacy rights in self-generated health information versus clinically generated medical information; (2) how user beliefs about perceived privacy protections and information flows guide their data management practices; and (3) whether commercial and clinical data distribution practices violate users’ context-dependent informational norms regarding access to intimate details about health and personal well-being. Understanding information sharing attitudes, behaviors, and practices among self-quantifying individuals will extend current conceptions of context-dependent information flows to a new and developing health-related environment, and may promote appropriately privacy-protective health IT tools, practices, and policies among sensor and app developers and policy makers.

Helen Nissenbaum, Respect for Context as a Benchmark for Privacy Online: What it is and isn’t

Comment by: James Rule

PLSC 2013

Workshop draft abstract:

In February 2012, the Obama White House unveiled a Privacy Bill of Rights within the report Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy, developed by the Department of Commerce, NTIA. Among the Bill of Rights’ seven principles, the third, “Respect for Context,” was explained as the expectation that “companies will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data” (p. 47). Compared with the other six, which were more recognizable as kin of traditional principles of fair information practices, such as the OECD Privacy Guidelines, the principle of Respect for Context (PRC) was intriguingly novel.

Generally positive reactions to the White House Report and to the principle of respect-for-context aligned many parties who have disagreed with one another on virtually everything else to do with privacy. That the White House publicly and forcefully acknowledged the privacy problem buoyed those who have worked on it for decades; yet how far the rallying cry around respect-for-context will push genuine progress depends critically on how this principle is interpreted. In short, convergent reactions may be too good to be true if they stand upon divergent interpretations, and whether the Privacy Bill of Rights fulfills its promise as a watershed for privacy will depend on which of these interpretations drives regulators to action, public or private. At least, this is the argument my article develops.

Commentaries surrounding the Report reveal five prominent interpretations: a) context as determined by purpose specification; b) context as determined by technology, or platform; c) context as determined by business sector, or industry; d) context as determined by business model; and e) context as determined by social sphere. In the Report itself, the meaning seems to shift from section to section or is left indeterminate. Without dwelling too long on what exactly NTIA may or may not have intended, my article discusses these five interpretations, focusing on what is at stake in adopting any one of them. Arguing that a) and c) would sustain existing stalemates and inertia, and that b) and d), though a step forward, would not realize the principle’s compelling promise, I defend e), which conceives of context as social sphere. Drawing on ideas in Privacy in Context: Technology, Policy, and the Integrity of Social Life (2010), I argue (1) that substantive constraints derived from context-specific informational norms are essential for infusing fairness into purely procedural rule sets; and (2) that rule sets that effectively protect privacy depend on a multi-stakeholder process (to which the NTIA is strongly committed) that is truly representative, which in turn depends on properly identifying relevant social spheres.

Helen Nissenbaum & Andrew Selbst, Contextual Expectations of Privacy

Helen Nissenbaum & Andrew Selbst, Contextual Expectations of Privacy

Comment by: James Grimmelmann

PLSC 2012

Workshop draft abstract:

The last decade of privacy scholarship is replete with theories of privacy that reject absolute binaries such as secret/not secret or inside/outside, instead favoring approaches that take context into account to varying degrees. Fourth Amendment doctrine has not caught up with theory, however, and courts continue to employ discredited binaries to justify often contradictory conclusions. At the same time, while some of the cases reveal the influence of contextual thinking, courts rarely have included an explicit commitment to context in their opinions. We believe that such a commitment would improve both the internal consistency of Fourth Amendment doctrine and the outcomes of individual cases.

The theory of contextual integrity, which characterizes a right to privacy as the preservation of expected information flows within a given context, offers a framework for injecting context into the conversation. Grounded, as it is, in context-based normative expectations, the theory offers a useful interpretive framework for Fourth Amendment search doctrine. This paper seeks to reexamine the meaning of a “reasonable expectation of privacy” under the theory of contextual integrity, and in doing so accomplish three goals: 1) create a picture of Fourth Amendment doctrine if the Katz test had always been interpreted this way, 2) demonstrate that contextual integrity can draw connections between seemingly disjointed doctrines within the Fourth Amendment, and 3) illustrate the mechanism of applying contextual integrity to a Fourth Amendment search case, with the intent of helping both theorists and practitioners in future cases, particularly those involving technology.

Helen Nissenbaum, Amanda Conley, Anupam Datta, Divya Sharma, The Obligation of Courts to Post Records Online: A Multidisciplinary Study

Comment by: Michael Traynor

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2112573

Workshop draft abstract:

On the face of it, what could be complicated about placing public court records online? After all, courts are obliged to make these records publicly available, so shouldn’t they do so as effectively and efficiently as possible, offering a digitized collection on the Web? Indeed, the advent of PACER for the Federal Court System suggests that the matter has been settled. Yet at state and local levels around the country, where some of the records richest in personal information are found, privacy concerns have brought this controversial prospect to the fore. Courts continue to grapple with it, drafting new administrative rules and looking for guidance in constitutional, legislative, and common law sources, in legal scholarship, and in their own past practices and those of peer institutions.

The paper I am proposing for PLSC 2011 will report on findings of a collaborative project with Amanda Conley, Anupam Datta, and Divya Sharma on the normative question of placing court records online. Drawing on notable work by legal scholars Grayson Barber, Natalie Gomez-Velez, Daniel Solove, and Peter Winn, among others, and focusing on state civil courts, our project asks whether courts have an obligation to post on the Web, for open and unconditional access, records that traditionally have been made available in hard copy from courthouses or electronically via local terminals. Guided by the framework of contextual integrity, we map, in detail, the differences in flow afforded by online placement and, in so doing, make precise what others have attempted to capture with terms such as “hyper-dissemination” and “practical obscurity”. For the normative analysis, we compare local and online modes of dissemination in terms of how well they serve values, ends, and purposes attributed to courts, such as dispute resolution, justice and fairness, and accountability.

We reach the surprising (though tentative) conclusion that although courts in free and democratic societies are obliged to provide open access to records of court proceedings and outcomes, this obligation—both online and off—does not necessarily extend to all information that is presently included in these records. This means either that a great deal of personal information could be excised from records without violence to courts’ purposes, or that there are reasons driving current practices that have not yet been fully acknowledged. Both alternatives are in critical need of further exploration.

Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life

Comment by: Anita Allen

PLSC 2010

Workshop draft abstract:

Newly emerging socio-technical systems and practices, undergirded by digital media and information science and technology, have enabled massive transformations in the capacity to monitor behavior, amass and analyze personal information, and distribute, publish, communicate, and disseminate it. These have spawned great social anxiety and, in turn, laws, policies, public interest advocacy, and technologies, framed as efforts to protect a right to privacy. A settled definition of this right, however, remains elusive. The book argues that common definitions, as control over personal information or as secrecy (that is, minimization of access to personal information), do not capture what people care about when they complain and protest that their right to privacy is under threat. Suggesting that this right be circumscribed as applying only to special categories of “sensitive” information or PII does not avoid these pitfalls.

What matters to people is not the sharing of information, which is often highly valued, but inappropriate sharing. Characterizing appropriate sharing is the heart of the book’s mission, which turns to the wisdom embodied in entrenched norms governing flows of personal information in society. The theory of contextual integrity offers “context-relative informational norms” as a model for these social norms of information flow. It claims that in assessing whether a particular act, system, or practice violates privacy, people are sensitive to the context in which these occur (e.g., healthcare, politics, religious practice, education, commerce), to what types of information are in question, whom the information is about, from whom it flows, and to what recipients. What also matters are the terms that govern these flows, called “transmission principles”: for example, whether information flows with the consent of the information subject, in one direction or reciprocally, whether it is forced, given, or bought and sold, and so on. Radical transformations in information flow are protested when they violate entrenched informational norms, that is, when they violate contextual integrity.
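
To make the structure of such a norm concrete, here is a minimal illustrative sketch, not drawn from the book: it encodes the parameters the paragraph above enumerates (context, information type, subject, sender, recipient, transmission principle) as a record, with a deliberately crude violation check. The names InformationalNorm and violates_contextual_integrity, and all example values, are hypothetical.

    # A context-relative informational norm as a record of the theory's
    # parameters (illustrative only; field names are assumptions).
    from dataclasses import dataclass

    @dataclass(frozen=True)  # frozen makes instances hashable for set lookup
    class InformationalNorm:
        context: str                 # e.g., "healthcare", "education"
        information_type: str        # what kind of information flows
        subject: str                 # whom the information is about
        sender: str                  # from whom it flows
        recipient: str               # to whom it flows
        transmission_principle: str  # terms governing the flow

    # Hypothetical entrenched norm: a patient confides a diagnosis to a
    # physician under confidentiality.
    entrenched = {
        InformationalNorm("healthcare", "diagnosis", "patient",
                          "patient", "physician", "confidentiality"),
    }

    def violates_contextual_integrity(flow: InformationalNorm,
                                      norms: set[InformationalNorm]) -> bool:
        """A flow violates contextual integrity when it departs from the
        entrenched norms of its context (a deliberately crude check)."""
        return flow not in norms

The point of the sketch is only to show which parameters the theory tracks; the book’s actual analysis is normative, not mechanical.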

But not all technologies that induce novel flows are resisted, and contextual integrity would be unhelpfully conservative if it sweepingly deemed them morally wrong. The theory, therefore, also distinguishes those that are acceptable, even laudable, from those that are problematic. Inspired by the great work of other privacy theorists, past and contemporary, the theory suggests we examine how well novel flows serve diverse interests as well as important moral and political values, compared with entrenched flows. The crux, however, lies in establishing how well they serve the internal values, ends, and purposes of the relevant background contexts; ultimately, that is, how well they serve the integrity of society’s key structures and institutions.