Archives

Ryan Calo, Digital Market Manipulation

Comment by: Randal Picker

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2309703

Abstract:

Jon Hanson and Douglas Kysar coined the term “market manipulation” in 1999 to describe how companies exploit the cognitive limitations of consumers. Everything costs $9.99 because consumers see the price as closer to $9 than $10. Although widely cited by academics, the concept of market manipulation has had only a modest impact on consumer protection law.

This Article demonstrates that the concept of market manipulation is descriptively and theoretically incomplete, and updates the framework for the realities of a marketplace that is mediated by technology. Today’s firms fastidiously study consumers and, increasingly, personalize every aspect of their experience. They can also reach consumers anytime and anywhere, rather than waiting for the consumer to approach the marketplace. These and related trends mean that firms can not only take advantage of a general understanding of cognitive limitations, but can uncover and even trigger consumer frailty at an individual level.

A new theory of digital market manipulation reveals the limits of consumer protection law and exposes concrete economic and privacy harms that regulators will be hard-pressed to ignore. This Article thus both meaningfully advances the behavioral law and economics literature and harnesses that literature to explore and address an impending sea change in the way firms use data to persuade.

Paul Ohm, What is Sensitive Information?

Comment by: Peter Swire

PLSC 2013

Workshop draft abstract:

The diverse and dizzying variety of regulations, laws, standards, and corporate practices in place to deal with information privacy around the world share at their core at least one unifying concept: sensitive information. Some categories of information—health, financial, education, and child-related, to name only a few—are deemed different from others, and data custodians owe special duties and face many more constraints when it comes to the maintenance of these categories of information.

Sensitive information is a show stopper. Otherwise lax regulations become stringent when applied to sensitive information. Permissive laws set stricter rules for the sensitive. The label plays a prominent role in rhetoric and debate, as even the most ardent believer in free markets and unfettered trade in information will bow low to the ethical edict that they never sell sensitive information regardless of the cost.

Despite the importance and prevalence of sensitive information, very little legal scholarship has systematically studied this important category. Sensitive information is deeply undertheorized. What makes a type of information sensitive? Are the sensitive categories set in stone, or do they vary with time and technological advances? What are the political and rhetorical mechanisms that lead a type of information into or out of the designation? Why does the designation serve as such a powerful trump card? This Article seeks to answer these questions and more.

The Article begins by surveying the landscape of sensitive information. It identifies dozens of examples of special treatment for sensitive information in rules, laws, policy statements, academic writing, and corporate practices from a wide number of jurisdictions, in the United States and beyond.

Building on this survey, the Article reverse engineers the rules of decision that define sensitive information. From this, it develops a multi-factor test that may be applied to explain, ex post, the types of information that have been deemed sensitive in the past and also to predict, ex ante, types of information that may be identified as sensitive soon. First, sensitive information can lead to significant forms of widely recognized harm. Second, sensitive information is the kind that exposes the data subject to a high probability of such harm. By focusing in particular on these two factors, this Article sits alongside the work of many other privacy scholars who have in recent years shifted their focus to privacy harm, a long neglected topic. Third, sensitive information is often governed by norms of limited sharing. Fourth, sensitive information is rare and tends not to exist in many databases. Fifth, sensitive information tends to focus on harms that apply to the majority, often the ruling majority, of data subjects, while information leading to harms affecting only a minority less readily secures the label.

To test the predictive worth of these factors, the Article applies them to assess whether two forms of data that have been hotly debated by information privacy experts in recent years are poised to join the ranks of the sensitive: geolocation data and remote biometric data. Neither of these has yet been widely accepted as sensitive in privacy law, yet both trigger many of the factors listed above. Of the two, geolocation data is further down the path, already recognized by laws, regulations, and company practices worldwide. By identifying and justifying the treatment of geolocation and remote biometric data as sensitive, this Article hopes to spur privacy law reform in many jurisdictions.

Turning from the rules of decision used in the classification of sensitive information to the public choice mechanisms that lead particular types of information to be classified, the Article tries to explain why new forms of sensitive information often fail to be recognized until years after they satisfy most of the factors listed above. It argues that this stems from the way political institutions incorporate new learning from technology slowly and haphazardly. To improve this situation, the Article suggests new administrative mechanisms to identify new forms of sensitive information on a much more accelerated timeframe. It specifically proposes that in the United States, the FTC undertake a periodic (perhaps biennial or triennial) review of potential categories of sensitive information, suggested by members of the public. The FTC would be empowered to classify particular types of information as sensitive, or to remove the designation from types that are no longer sensitive, because of changes in technology or society. It would base these decisions on rigorous empirical review of the factors listed above, focusing in particular on the harms inherent in the data and the probability of harm, given likely threat models. The Article illustrates the idea by considering a type of information that has not generally been considered sensitive: calendar information. Calendar information tends to reveal location, associations, and other forms of closely held, confidential information, yet few recognize the status of this potentially new class of sensitive information. We might consider asking the FTC whether it deserves to be categorized as sensitive.

Finally, the Article tackles the vexing and underanalyzed problem of idiosyncratically sensitive information. Since traditional conceptions of sensitive information cover primarily majoritarian concerns, they do little to protect data that feel sensitive only to smaller groups. This is a significant gap in the information privacy landscape, as every person cares about idiosyncratic forms of information that worry only a few. It may be that traditional forms of information privacy law are ill-equipped to deal with idiosyncratically sensitive information. Regulating it will require more aggressive forms of regulation, for example premising new laws on the amount of information held, not only on the type of information held, on the theory that larger databases are likelier to hold idiosyncratically sensitive information than smaller ones.

Dawn E. Schrader, Dipayan Ghosh, William Schulze, and Stephen B. Wicker, Civilization and Its Privacy Discontents: The Personal and Public Price of Privacy

Comment by: Heather Patterson

PLSC 2013

Workshop draft abstract:

Privacy awareness and privacy law ought to be built proactively and grounded in specific moral principles that protect our fundamental rights to live together, yet autonomously, in civilized society. In reality, people are willing to compromise their individual liberty in favor of peaceful societal co-existence. In this paper, we examine both the psychological need for the self to be connected to the outside world and the simultaneous sense that the self wishes to have autonomy separate from that world, thus building loosely on Freud’s thesis, from which this paper draws its title. We explore how people both want and fear the opportunities that public (utility) collection of consumer data provides, even though they might know that control over, and regulation of, thought and behavior may ensue (e.g., Thaler & Sunstein, 2008; Wicker & Schrader, 2011).  What is the price, or value, placed on private information? Does the value of privacy shift as risks and benefits shift? Is this valuing influenced by media?

Power consumption metering offers a real-world context in which a price is paid for private information. For real-time prices to be broadcast to consumers, who then decide on their energy consumption, advanced metering technology is required for billing purposes. Temporally precise consumption levels are needed to charge consumers properly for their usage, so advanced monitoring records usage at short intervals and reports this fine-grained usage data. As these temporally precise data are reported directly to the utility, private details of the consumer’s life are effectively revealed, posing a risk of privacy violation. Cellular technology creates a similar context, one in which location information is given to a service provider in return for mobile communication services.  We therefore designed and conducted two national surveys to ascertain the value of personal privacy and the comparative social and economic costs of the privacy impacts of these two exemplary technologies.
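As a rough illustration of why reporting granularity matters (our sketch, not part of the study; the readings and the baseload threshold are invented), even a crude heuristic over hypothetical 15-minute meter readings can flag when a household is likely active:

```python
# Sketch: inferring household activity from hypothetical fine-grained
# smart-meter readings. Data and threshold are invented for illustration.

from datetime import datetime, timedelta

start = datetime(2013, 6, 6, 17, 0)            # first reading of the evening
readings_kwh = [0.08, 0.09, 0.35, 0.62, 0.58, 0.41, 0.12, 0.07]  # per 15 min

BASELOAD_KWH = 0.15  # assumed always-on load (refrigerator, standby devices)

for i, kwh in enumerate(readings_kwh):
    t = start + timedelta(minutes=15 * i)
    status = "active" if kwh > BASELOAD_KWH else "idle"
    print(f"{t:%H:%M}  {kwh:.2f} kWh  {status}")
```

Even this naive threshold recovers an evening activity profile; the finer the reporting interval, the more such inferences the data recipient can draw.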

Our paper examines consumers’ use of these technologies, whether or not they are aware of the privacy and security risks, what prices they are willing to pay to keep that risk at bay, and what they are willing to accept to give up their private information.   What is the balance between convenience, cost savings, and privacy protection?  We experimentally manipulated whether or not people saw a media presentation designed to increase their awareness of privacy and security.  We further examined the economic cost-benefit and risk ratios and decisions to either adopt new privacy-aware measures/technologies or to change behavior.  In essence, this paper examines the “tipping point” between personal privacy value and public offering cost.  We conclude by examining the price people are willing to pay, or accept, for privacy in relation to privacy law and policy, and make recommendations to keep corporations from creating, and individuals from accepting, tempting but risky practices that erode privacy rights.


Thaler, R. H, and Sunstein, C. R. (2008).  Nudge: Improving decisions about health, wealth and happiness.  New Haven, CT: Yale University Press.

Wicker, S. B., & Schrader, D. E. (2011).  Privacy aware design principles for information networks.  Proceedings of the IEEE, 99, 1–21.  doi:10.1109/JPROC.2010.2073670.

Benedikt Burger, Claudia Langer, and Veronika Krizova, Privacy law in the U.S. and in Europe in emerging fields of biotechnology

Comment by: Paul Schwartz

PLSC 2013

Workshop draft abstract:

Biotechnology is one of the key technologies of our time. Its rapid evolution opens new opportunities for healthcare and bio-engineering. But as biotechnological medical research and therapy advance, so do the threats to privacy. Genes, as the basic biological component of tissue samples, contain sensitive information about donors and patients. While individuals may profit, for example, from participation in research projects or from therapies, the protection of privacy requires that they remain in control of their data.

Only if the legal framework keeps pace with this rapid development and resolves the conflict between researchers’ need for information and the individual’s right to privacy can biotechnology bring benefits to individuals as well as society in general without reducing the protection of an individual’s privacy.

The paper illustrates this intersection with three controversially discussed examples: human tissue engineering, biobanks, and organ donation and transplantation.

Human tissue engineering constantly consumes human tissue samples in the search for new and promising therapies and cures for diseases. By taking tissue samples, which doctors and researchers then prepare as human tissue engineered products (HTEPs), they provide treatment while simultaneously collecting the patients’ clinical data. Furthermore, HTEPs are exchanged worldwide among researchers in international projects. These projects have to face the challenge of different national laws on patients’ privacy rights. Individual patients also travel internationally for innovative therapies, and such medical tourism destinations are often countries where privacy rights are (almost) nonexistent. Specific privacy rules on medical tourism and the international tissue exchange are necessary to ensure individuals’ rights. The paper will present how informed consent can serve both the patients’ and the researchers’ needs in a globalized world.

After the collection of tissues, biobanks may be established to maximize scientific research in interdisciplinary and international projects. Genetic data is used as an input for research that may help us understand diseases and develop new drugs. The nature of research does not allow precise information about its progress and findings. The level of information an individual can receive at the time of consent is therefore inevitably low. New models like open or broad consent have been introduced to solve this conflict, and they reduce an individual’s protection even further. The paper will present alternatives that balance scientific needs and an individual’s protection.

Organ donation and transplantation was one of the first medical fields to draw the attention of lawyers and the general public, due to the controversial nature of the method with respect to personality rights. Many attempts have already been made to find the best compromise for a legal regulation of issues regarding, e.g., consent, incentives and benefits for donors, and the privacy of data collected, stored, and shared. But there still seems to remain uncertainty as to which approach is the most sensible yet effective one. As medical law and medicine itself evolve, new fields of research and innovation bring new challenges for legislation. At the same time, they help to introduce new legal solutions. New areas such as stem cell research or the use of biobanks for scientific and commercial purposes can also inform a new legal framework for organ donation and transplantation, while drawing on the variety of approaches to existing problems in more traditional fields of medicine.

The goal of this paper is to show the interrelation between specific aspects of biotechnology and privacy. It will focus on a comparison between U.S. and European privacy law and show the different regulatory approaches as well as commonalities, in order to maximize the potential benefits of international research.

Stephen Henderson and Kelly Sorensen, Search, Seizure, and Immunity: Second-Order Normative Authority, Kentucky v. King, and Police-Created Exigent Circumstances

Comment by: Marcia Hofmann

PLSC 2013

Workshop draft abstract:

A paradigmatic aspect of a paradigmatic kind of right is that the person holding the right is the only one who can alienate it.  Rights are constraints that protect individuals, and while individuals can consent to waive many or even all rights, the normative source of that waiving is normally taken to be the individual herself.

This moral feature – immunity – is usually in the background of discussions about rights.  We want to bring it into the foreground here.  This foregrounding is especially timely in light of a recent U.S. Supreme Court decision, Kentucky v. King (2011), concerning search and seizure rights.  An entailment of the Court’s decision is that, at least in some cases, a right can be removed by the intentional actions of the very party against whom the right supposedly protects the rights holder.  We will argue that the Court’s decision is mistaken.  The police officers in the case before the Court were not morally permitted, and should not be legally permitted, to intentionally create the very circumstances that result in the removal of an individual’s right against forced, warrantless search and seizure.  In Fourth Amendment terms, the Court was wrong to reject the doctrine of police-created exigency.

An embedded concern is this.  Law enforcement officers and others are able to create circumstances that transform, or in some cases seem to transform, a person into a kind of wrongdoer who was not one before.  There are moral constraints against creating the circumstances that transform persons in certain ways.  We will note some of these constraints as well.

Anjali S. Dalal, Administrative Constitutionalism and the Development of the Surveillance State

Comment by: Michael Traynor

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2236502

Workshop draft abstract:

Administrative constitutionalism is a theory that promotes and protects laws that reflect the deliberative process.  It is a flavor of popular constitutionalism that values the multi-year conversations among the various branches of government, levels of government, and the public, reflecting the evolution and slow entrenchment of a set of norms.  For administrative constitutionalists, it is the dialogic process that lends legitimacy to the norm that ultimately evolves and becomes entrenched.

However, one of the dangers of administrative constitutionalism in practice is that of entrenchment before deliberation. Agencies, in their role as norm entrepreneurs, can develop and, over time, entrench norms before those norms have the opportunity to emerge from the deliberative process.  This situation threatens to create legitimacy based on historical practice and path dependency, not deliberation and consensus building.

This article provides an account of administrative constitutionalism at its best and its worst, by tracing the history of the creation and evolution of the Attorney General Guidelines, the governing document for the FBI.  In particular, this article looks at the defining features that led to the early success and later failure of administrative constitutionalism in practice and attempts to articulate the administrative architecture needed to ensure that the reality of administrative constitutionalism reflects the deliberative and democratic promise of the theory.

The article begins with a brief summary of Eskridge and Ferejohn’s theory of administrative constitutionalism.  Part I provides an account of domestic surveillance law that demonstrates the power and promise of administrative constitutionalism.  In this Part, I trace the growth of the FBI’s domestic surveillance practices from its early years until the development of the FBI’s first governing document, the Attorney General Guidelines.  This document reflected the tenor of the time and a high point in the FBI’s protection of First Amendment rights in the face of competing national security concerns.

Part II provides an account of the subsequent iterations of the Attorney General Guidelines that illustrates the dangers of agency norm entrepreneurship and entrenchment gone unchecked.  In this Part, I detail the historical evolution of the Guidelines and the attendant shift in the balance between free speech and national security, in favor of national security.  This shift occurs with neither the input of the other branches of government nor that of the public, but is encouraged by the norm entrenchment that follows, reflecting a failure of administrative constitutionalism in practice.  This failure is marked by the return of the three dominant features of the Hoover FBI: the unchecked expansion of the FBI’s mission, the pursuit of that mission through illegal or potentially illegal means, and the creation of an intelligence-gathering process cloaked in secrecy.  Part III suggests a few structural changes to help architect against future failures of administrative constitutionalism.  In particular, Part III explores the appropriate role of Congressional oversight, judicial intervention, and agency accountability in ensuring the success of administrative constitutionalism.

Robert H. Sloan and Richard Warner, Beyond Notice and Choice: Streetlights, Norms, and Online Consent

Comment by: Robert Gellman

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2239099

Workshop draft abstract:

Informational privacy is the ability to determine for yourself when and how others may collect and use your information.  We assume there is good reason to ensure adequate informational privacy.  Adequate informational privacy requires a sufficiently broad ability to give or withhold free and informed consent to proposed uses; otherwise, you cannot determine for yourself how others use your information.

Notice and Choice (sometimes also called “notice and consent”) is the current paradigm for consent online. The Notice is a presentation of terms, typically in a privacy policy or terms of use agreement.  The Choice is an action signifying acceptance of the terms, typically clicking on an “I agree” button, or simply using the website.  Recent reports by the Federal Trade Commission explicitly endorse the Notice and Choice approach (and provide guidelines for its implementation). When the Notice contains information about data collection and use, the argument for Notice and Choice rests on two claims. First: a fully adequate implementation of the paradigm would ensure that website visitors can give (or withhold) free and informed consent to data collection and use practices.  Second: the combined effect of all the individual decisions is an acceptable overall tradeoff between privacy and the benefits of collecting and using consumers’ data.  There are (we contend) decisive critiques of both claims.  So why do policy makers and privacy advocates continue to endorse Notice and Choice?

An unsympathetic but not entirely inapt analogy is the old joke about the drunk searching for his keys underneath the streetlight:

A policeman sees a drunken man searching for something under a streetlight and asks the drunk what he lost. He says he lost his keys and they both look under the streetlight together.  After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, that he lost them in the park. “So, why are you looking under the streetlight?” asks the policeman, and the drunk replies, “This is where the light is.”

Policy makers and privacy advocates look under the streetlight of Notice and Choice even though it is clear that the consent is not there.  Why don’t they search more broadly?  Most likely, they see no need to do so.  We find the critique of Notice and Choice conclusive, but our assessment is far from widely shared—and understandably so.  Criticisms of Notice and Choice are scattered over several articles and books.  No one has unified them and answered the obvious counterarguments.  We do so in Section I.  Making the critique plain, however, is not enough to ensure that policy makers turn from the “streetlight” to the “park.” The critiques are entirely negative; they do not offer any alternative to Notice and Choice. They do not direct us to a “park” in which to search for consent.

Drawing on Helen Nissenbaum’s work, we offer an alternative:  informational norms.  Informational norms are social norms that constrain the collection, use, and distribution of personal information.  Such norms explain, for example, why your pharmacist may inquire about the drugs you are taking, but not about whether you are happy in your marriage.  When appropriate informational norms govern online data collection and use, they ensure both that visitors give free and informed consent to those practices and that the practices yield an acceptable overall tradeoff between protecting privacy and the benefits of processing information.  A fundamental difficulty is the lack of such norms.  Rapid advances in information processing technology have fueled new business models, and the rapid development has outpaced the slow evolution of norms.  Notice and Choice cannot be pressed into service to remedy this lack.  It is necessary to develop new norms, and in later sections of the paper we discuss how to do so.

Latanya Sweeney, Racial Discrimination in Online Ad Delivery

Comment by: Margaret Hu

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2208240

Workshop draft abstract:

Investigating the appearance of online advertisements that imply the existence of an arrest record, this writing chronicles field experiments that measure racial discrimination in ads served by Google AdSense.  A specific company, instantcheckmate.com, sells aggregated public information about individuals in the United States and sponsors ads to appear with Google search results for searches of some exact “firstname lastname” queries. A Google search for a person’s name, such as “Trevon Jones”, may yield a personalized ad that may be neutral, such as “Looking for Trevon Jones? Comprehensive Background Report and More…”, or may be suggestive of an arrest record (Suggestive ad), such as “Trevon Jones, Arrested?…” or “Trevon Jones: Truth. Arrests and much more. … “

Field experiments documented in this writing show racial discrimination in ad delivery based on searches of 2,200 personal names across two websites.  First names documented by others as being assigned primarily to black babies, such as Tyrone, Darnell, Ebony, and Latisha, generated ads suggestive of an arrest 75 percent and 96 percent of the time on the two sites, while names having a first name documented by others as being assigned at birth primarily to whites, such as Geoffrey, Brett, Kristen, and Anne, generated more neutral copy: the word “arrest” appeared 0 to 9 percent of the time.  A few names did not follow these patterns: Brad, a name predominantly given to white babies, generated a Suggestive ad 62 percent to 65 percent of the time.  All ads return results for actual individuals, and Suggestive ads appear regardless of whether the subjects have an arrest record in the company’s database.  Notwithstanding these findings, the company maintains that Google received the same ad copy for groups of last names (not first names), raising questions as to whether Google’s algorithm exposes racial bias in society.
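To make concrete the kind of tabulation such a field experiment implies (our sketch, not Sweeney’s data or code; the counts below are invented), one can cross-tabulate ad type against name group and test for independence:

```python
# Sketch: is ad copy ("suggestive" of arrest vs. neutral) independent of
# name group? Counts are hypothetical, not the study's actual data.

from scipy.stats import chi2_contingency

observed = [
    # [suggestive ads, neutral ads]
    [1200,  300],   # searches on black-identifying first names (hypothetical)
    [ 150, 1350],   # searches on white-identifying first names (hypothetical)
]

chi2, p_value, dof, _expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")
# A small p-value indicates ad type and name group are unlikely independent.
```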

Neil M. Richards, Data Privacy and the Right to be Forgotten after Sorrell

Comment by: Lauren Gelman

PLSC 2013

Workshop draft abstract:

This paper takes on the argument that the First Amendment also bars the government from regulating some or all of the commercial trade in personal information.  This argument had some success in the federal appellate courts in the 1990s and 2000s, and the Supreme Court seems to have recently embraced it.  In the 2011 case of Sorrell v. IMS Health, the Court held that the First Amendment prohibited a state from regulating the use of doctors’ prescribing data for marketing purposes.  These cases present the issue of whether the sale of commercial data is “free speech” or not, and the consequences of this decision for consumers and citizens.  But what is really at stake in Sorrell, and in the cases that will inevitably follow it, are two very different conceptions of our Information Society.  From the perspective of the First Amendment critics (and possibly a majority of the Roberts Court), flows of commercial information are truthful speech protected by the First Amendment.  This argument has received a boost from the Court’s decision in Sorrell and from the poorly named and poorly articulated “Right to Be Forgotten” that is on the rise in European data protection circles.  By contrast, privacy scholars and activists have often failed to explain why privacy is worth protecting in these cases, particularly in the face of the constitutional arguments on the other side.

Building on my earlier work on this important question, I argue that the First Amendment critique of data privacy law is largely unpersuasive.  While the First Amendment critique of the privacy torts and some broad versions of the Right to Be Forgotten do threaten important First Amendment interests, the broader First Amendment critique of data privacy protections for consumers does not.  Unlike news articles, blog posts, or even gossip, which are expressive speech by human beings, the commercial trade in personal data uses information as a commodity traded from one computer to another.  The data trade is much more commercial than expressive, and the Supreme Court has long held for good reason that the sale of information is commercial activity that receives very little First Amendment protection.  We must keep faith with this tradition in our law rather than abandoning it.  Moreover, I will show how a more modest form of the Right to be Forgotten can be protected consistent with the First Amendment, and even with Sorrell.  But I argue that we should rethink our use of the general term “privacy” to deal with the commercial trade in personal information.  The use of “privacy” connotes tort-centered notions protecting against extreme emotional harm – a model that poorly tracks the modern trade in personal information, which gives opponents of consumer data protection unnecessary ammunition.  A better way to understand the consumer issues raised by the trade in personal information is the European concept of “data protection,” or perhaps just “data privacy.”  Putting the old tort-focused conception of anti-disclosure away allows us to better understand the problem, and can suggest better solutions.  The regulation of data privacy should focus on managing the flows of commercial personal information and allowing greater consumer input into decisions made on the basis of data about them.

But however we as a society choose to regulate data flows, we should be able to choose.  We should not be sidetracked by misleading First Amendment arguments, because the costs of not regulating the trade in commercial data are significant.  As we enter the Information Age, where the trade in information is a multi-billion dollar industry, government should be able to regulate the huge flows of personal information, as well as the uses to which this information can be put.  At the dawn of the Industrial Age, business interests persuaded the Supreme Court in the Lochner case that the freedom of contract should immunize them from regulation.  I explain how and why we should reject the calls of First Amendment critics for a kind of digital Lochner for personal information, and I show how we can have consumer protection law in the Information Age without sacrificing meaningful free speech.

(This paper is an adaptation for PLSC of portions of Chapters 5 and 11 of my book Intellectual Privacy: Rethinking Civil Liberties in the Information Age (forthcoming, Oxford University Press 2014).)

Seda Gürses, “Privacy is don’t ask, confidentiality is don’t tell” An empirical study of privacy definitions, assumptions and methods in computer science research and Robert Sprague and Nicole Barberis, An Ontology of Privacy Law Derived from Probabilistic Topic Modeling Applied to Scholarly Works Using Latent Dirichlet Allocation (joint workshop)

Comment by: Helen Nissenbaum

PLSC 2013

Workshop draft abstract:

Since the end of the 1960s, computer scientists have engaged in research on privacy and information systems. Over the years, this research has led to a whole palette of “privacy solutions”. These vary from design principles and privacy tools to the application of privacy-enhancing techniques. These solutions originate from diverse sub-fields of computer science, e.g., security engineering, databases, software engineering, HCI, and artificial intelligence. From a bird’s-eye view, all of these researchers are studying privacy. However, a closer look reveals that each community of researchers relies on different, sometimes even conflicting, definitions of privacy, and on a variety of social and technical assumptions. At best, they are referring to different facets of privacy and, at worst, they fail to take into account the diversity of existing definitions and to integrate knowledge on the phenomenon generated by other communities (Gürses and Diaz, 2013). Researchers do have a tradition of assessing the (implicit) definitions and assumptions that underlie the studies in their respective communities (Goldberg, 2002; Patil et al., 2006). However, a systematic evaluation of privacy research practice across the different computer science communities is so far absent. This paper contributes to closing this gap through an empirical study of privacy research in computer science. The focus of the paper is on the different notions of privacy that the 30 interviewed privacy researchers employ, as well as on the dominant worldviews that inform their practice. Through a qualitative analysis of their responses using grounded theory, we consider how the researchers’ framing of privacy affects what counts as “worthwhile problems” and “acceptable scientific evidence” in their studies (Orlikowski and Baroudi, 1991). We further analyze how these conceptions of the problem prestructure the potential solutions to privacy in their fields (Van Der Ploeg, 2005).

We expect the results to be of interest beyond the confines of computer science. Previous studies on how privacy is conceived and addressed in practice have brought new perspectives to “privacy on the books” (Bamberger and Mulligan, 2010): users’ changing articulations of privacy in networked publics (danah boyd, 2007), the evolution of privacy as practiced by private organizations (Bamberger and Mulligan, 2010), the conceptualization of privacy in legal practice (Solove, 2006), or the framing of privacy in media coverage in different cultural contexts (Petrison and Wang, 1995). However, few studies have turned their gaze on the researchers themselves with the objective of providing a critical reflection of the field (Smith et al., 2011). The few studies that exist in computer science focus on artifacts produced by the researchers, e.g., publications, or provide an analysis of the state of the art written by insiders. While these are valuable contributions, we expect the comparative and empirical nature of our study to provide deep and holistic insight into privacy research in computer science.
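The second paper in this joint workshop, by Sprague and Barberis, applies Latent Dirichlet Allocation to scholarly works on privacy law. As a rough, generic illustration of that technique (not the authors’ pipeline; the toy corpus, component count, and parameters are invented), topic modeling over abstracts might look like this:

```python
# Sketch: LDA topic modeling over a toy corpus of privacy-law text.
# Corpus and parameters are illustrative only, not the authors' setup.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "fourth amendment search seizure warrant police surveillance",
    "consumer data collection notice choice consent marketing",
    "sensitive information health records data protection breach",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)                       # document-term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):            # one row per topic
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```

An ontology of the field would then be read off the induced topics and their weights across documents.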


Kenneth A. Bamberger and Deirdre K. Mulligan. Privacy on the Books and on the Ground. Stanford Law Review, 63:247–316, 2010.

danah boyd. Why Youth (Heart) Social Network Sites: The Role of Networked Publics in Teenage Social Life. The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning, pages 119–142, 2007. URL http://www.mitpressjournals.org/doi/abs/10.1162/dmal.9780262524834.119.

Ian Goldberg. Privacy-enhancing technologies for the Internet, II: Five years later. In Proc. of PET 2002, LNCS 2482, pages 1–12. Springer, 2002.

Seda Gürses and Claudia Diaz. An activist and a consumer meet at a social network … IEEE Security and Privacy (submitted), 2013.

Wanda J. Orlikowski and Jack J. Baroudi. Studying Information Technology in Organizations: Research Approaches and Assumptions. Information Systems Research, 2(1):1–28, 1991.

Sameer Patil, Natalia Romero, and John Karat. Privacy and HCI: Methodologies for studying privacy issues. In CHI ’06 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’06, pages 1719–1722, New York, NY, USA, 2006. ACM. ISBN 1-59593-298-4. doi: 10.1145/1125451.1125771. URL http://doi.acm.org/10.1145/1125451.1125771.

Lisa A. Petrison and Paul Wang. Exploring the dimensions of consumer privacy: an analysis of coverage in British and American media. Journal of Direct Marketing, 9(4):19–37, 1995. ISSN 1522-7138. doi: 10.1002/dir.4000090404. URL http://dx.doi.org/10.1002/dir.4000090404.

H. Jeff Smith, Tamara Dinev, and Heng Xu. Information Privacy Research: An Interdisciplinary Review. MIS Quarterly, 35(4):989–1016, December 2011. ISSN 0276-7783. URL http://dl.acm.org/citation.cfm?id=2208940.2208950.

Daniel J. Solove. A Taxonomy of Privacy. University of Pennsylvania Law Review, 154(3), January 2006.

Irma Van Der Ploeg. Keys To Privacy. Translations of “the privacy problem” in Information Technologies, pages 15–36. Maastricht: Shaker, 2005.