Jennifer Rothman, The Inalienability of the Right of Publicity

Comment by: Deven Desai

PLSC 2010

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2174646

Workshop draft abstract:

Publicity rights first developed in the heartland of privacy and tort law as a compensation scheme for injuries to personal dignity through the misappropriation of a person’s identity. Today, however, the right of publicity is most often situated as a robust property right. Some courts and scholars have avoided classifying publicity rights as property-based, but have done so only because they say it does not matter whether publicity rights are tort or property-based. This article contends that the difference does matter. Tort-based rights are personal, non-assignable, and cannot be sold to satisfy court judgments. Property rights, however, are assignable and can be sold to satisfy court judgments. Despite the opportunity for individuals to assign in total their publicity rights, courts are uncomfortable with truly divesting an individual of control over his or her identity. One recent example arose when the Goldman estate sought not only to obtain the profits from O.J. Simpson’s publicity rights, but also to affirmatively control the use of his right of publicity. If publicity rights are property rights, such control seems uncontroversial. Nevertheless, taking away Simpson’s control over his own identity challenges the underlying autonomy-based justifications for publicity rights and more generally our commitment to individual liberty. This article will therefore suggest that publicity rights remain privacy-based torts. Resituating publicity rights in tort law will provide a basis for more appropriate limits on both the alienability and scope of publicity rights.

Guilherme Roschke, PETs, PITs and Safety: Privacy Enhancing and Privacy Invasive Technologies for Online Safety and Parental Control

Comment by: Alissa Cooper

PLSC 2010

Workshop draft abstract:

Several reports have identified consumer-directed technological solutions as at least one of the responses to the various harms and inappropriate content that minors are exposed to online. However, the privacy impact of these solutions is often left undiscussed. I analyze a few of these online safety and parental control technologies in the context of the continuum of Privacy Enhancing vs. Privacy Invasive technologies. This paper explores legal and technological responses to the problems they raise.

First, I describe three examples of Privacy Invasive Technologies touted for online safety. “Walled gardens of surveillance” are whitelist or similar services that place children in an enclosed online environment where they are potentially subjected to far more surveillance and online behavioral targeting. “Leaky monitoring” solutions give consumers the ability to monitor their children’s online experience, but also re-use the collected data in ways the consumer does not expect, such as for market research. “Stalkerware” technologies are monitoring technologies sometimes marketed for safety, but also often marketed and used for illegitimate surveillance.

The response to Privacy Invasive online safety and parental control technologies includes legal as well as technological options.  Regulators can apply existing legislation such as COPPA and the FTC Act to alleviate some of the privacy impact of these technologies.  Further, by considering that the definition of “inappropriate content” should include the concerns of parents who find Privacy Invasive Technologies inappropriate, we invite the creation of Privacy Enhancing Technologies for online safety and parental control.

Priscilla M. Regan & Gerald FitzGerald, Generational Views of Privacy?

Comment by: Mary Culnan

PLSC 2010

Workshop draft abstract:

There is a growing body of social science research about the behavior and attitudes of young people online (Valentine and Holloway 2002, Livingstone and Bober 2003, Steeves 2006) and especially in social-networking sites, such as Facebook (Lenhart and Madden 2007).  I propose to expand on that research in several ways: by focusing on privacy rather than on a larger set of values; by examining attitudes rather than behavior; and by comparing attitudes across age groups rather than examining a specific age group in detail.  Specifically, I propose to perform an age cohort analysis of responses to “concern about privacy and technology” using data from a range of public opinion surveys beginning in the early 1980s and including the privacy surveys of Alan Westin and Lou Harris, and the Pew Internet and American Life surveys.  The goal of this part of the research is to determine if there are indeed generational patterns in concerns about privacy, to identify consistencies and disjunctures among generational attitudes, and to determine how these patterns have emerged over time.  Although scholars have analyzed changes in concern about privacy over time (Gandy 2003), no one has examined how age cohorts’ views of privacy are different or similar and how those age cohorts’ views change or endure over time.  The central argument/hypothesis of this research is that as generations increasingly use computer and information technologies in seamlessly mediating their online and offline worlds, they see these technologies as integral to their way of “presenting themselves” (Goffman 1959), and that this in turn causes/contributes to a fundamental change in the way the generations conceptualize privacy as a value in their lives.

Neil Richards, Brandeis, Privacy, and Speech

Comment by: Andrew Taslitz

PLSC 2010

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1584831

Workshop draft abstract:

Although most courts and commentators assume that privacy and free speech are in conflict, in American jurisprudence each of these traditions can be traced back to writings by Louis D. Brandeis – his 1890 Harvard Law Review article “The Right to Privacy” and his 1927 dissent in Whitney v. California.  How can privacy and speech be irreconcilable if Brandeis played a major role in creating both?  And how, if at all, did Brandeis recognize or address these tensions?  These questions have remained neglected not just by privacy scholars, but by scholars of Brandeis and of free speech as well.  In this paper, I argue that the puzzle of Brandeis’ views on privacy and speech can be resolved, and that its resolution points the way towards a more fruitful and helpful understanding of both privacy and free speech.  My basic claim is that “The Right to Privacy” should be considered neither the central text of American privacy law nor an accurate record of Brandeis’ mature views of privacy and its relationship with free speech.  Brandeis’ views on privacy and speech evolved over the course of his life, from a relatively simplistic theory of privacy at the expense of speech as a young lawyer to a more nuanced understanding of the complex relationships between these two values later in his life.  In contrast to tort privacy, which Brandeis seems to have thought about infrequently, a more important hallmark of Brandeis’ public career was the contradictory idea that public disclosure of many kinds of fraud and wrongdoing is in the public interest.  As he famously put it, “sunlight is the best disinfectant.”  In understanding Brandeis’ entire body of work, the best interpretation of his mature views on civil liberties is that we should minimize the importance of tort privacy where it conflicts with free speech.  But to minimize tort privacy is not to suppress it altogether.
A consideration of Brandeis’ free speech writings in light of Olmstead, also written late in his life, suggests some interesting ways in which privacy and First Amendment values are related in the generation of new ideas and the freedom of thought.  Although prior scholarship on Brandeis has overlooked these interesting connections, they suggest some tremendously important implications for modern understandings of our civil liberties.

Matthew Tokson, Automation and the Fourth Amendment

Comment by: Stephen Henderson

PLSC 2010

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1471517

Workshop draft abstract:

Most Internet users are not aware that many Internet Service Providers collect data about their customers’ online activities and sell it to third-party marketers.  Yet, remarkably, many users who are aware of their providers’ invasive practices remain unconcerned, and very few change their behavior in order to protect their privacy.  This presents several problems for scholars who propose that users have a reasonable expectation of privacy in personal online data.

This article posits that Internet users are largely unconcerned that their ISPs have access to intimate forms of online communications data (from emails to web surfing data to associated subscriber information) because in virtually every case no other human being will ever use or even see such data.  Instead, all of the operations involving data that can be traced to an individual user are carried out by computers performing automated tasks on databases of customer information.  Because the information is never viewed by a person, the user never perceives a privacy harm or privacy risk.

However, the Supreme Court has held that voluntary disclosure of one’s personal information to either an employee or the automated equipment of a third-party corporation eliminates a reasonable expectation of privacy in that information.  This article examines how this aspect of the Court’s third-party doctrine threatens to eviscerate criminal and civil privacy protections for online content.  It discusses the failures of many courts and scholars to distinguish between disclosure to automated systems and disclosure to human beings when determining the legal protection that electronic data should receive.  The article proposes that the automated equipment rationale can be and must be limited to the context of telephone number switching, and challenges the misconception of privacy that lies behind the Court’s over-aggressive application of the third-party doctrine.  It concludes by analyzing whether the reasonable expectation of privacy test as developed in Katz and its progeny is destined to be dramatically underprotective of privacy whenever it is applied to the complex and ever-changing technological framework of Internet communications and personal data.

Larry Ponemon, How Global Organizations Approach the Challenge of Protecting Personal Data

Comment by: Ken Anderson

PLSC 2010

Workshop draft abstract:

Public and private sector organizations need to understand how cultural and regulatory issues in various countries affect their ability to achieve privacy and data security goals. Dr. Ponemon, chairman and founder of Ponemon Institute will discuss the challenges of creating a global privacy and data protection strategy for business concerns.

In this session, the speaker will share real-world experiences, successes, failures and lessons learned. An integral part of the discussion will be the findings of the “Global Data Privacy & Protection Survey” conducted by Accenture and the Ponemon Institute. This is the first truly “global” study that attempts to compare and contrast how individuals in different nations view or deal with privacy and data protection challenges.

The Survey asked more than 5,500 business and IT practitioners in 19 countries to respond to the following issues:

  • Consumer privacy rights vs. organizational control over citizens’ information
  • Organizations’ obligations to secure personal information
  • Government regulations for privacy and data protection
  • Organization vs. consumer ownership of personal information
  • Importance of safeguarding children’s personal information
  • Awareness about data breaches
  • Limitations on the collection and sharing of individuals’ sensitive information
  • Protection of citizens’ privacy rights
  • Protection of cross-border data transfers
  • Disclosure of privacy practices and obtaining citizens’ consent
  • Sharing consumers’ information with the government
  • Openness to identity management tools such as biometrics

What the research determined is that there is no single universal or shared global perspective on the protection of personal information, consumer privacy rights and the need for strict data security safeguards. Rather, perceptions about privacy and the safeguarding of personal information vary significantly across national and regional cultures. The challenge for organizations is creating a strategy that addresses cultural and regulatory differences yet is effective in keeping sensitive data secure. The overall objective will be to provide guidance on how to implement a data security strategy that enhances rather than hinders the organization’s ability to operate globally.

Paul Ohm, The Benefits of the Old Privacy: Restoring the Focus to Traditional Harm

Comment by: Bruce Boyden

PLSC 2010

Workshop draft abstract:

The rise of the Internet stoked so many new privacy fears that it inspired a wave of legal scholars to give birth to a new specialty of legal scholarship, Information Privacy law. We should both recognize this young specialty’s great successes and wonder about its frustrating shortcomings. On the one hand, it has provided a rich structure of useful and intricate taxonomies with which to analyze new privacy problems and upon which to build sweeping prescriptions for law and policy.

But why has this important structural work had so little impact on concrete law and policy reform? Has any significant new or amended law depended heavily on this impressive body of scholarship? I submit that none has, which is particularly curious given the way privacy has dominated policy debates in recent years.

In this Article, I propose a theory for why the Information Privacy law agenda has failed to provoke meaningful reform. Building on Ann Bartow’s “dead bodies” thesis,  I argue that Information Privacy scholars gave up too soon on the prospect of relying on traditional privacy harms, the kind of harms embodied in the laws of harassment, blackmail, discrimination, and the traditional four privacy torts. Instead, these scholars have proposed broader theories of harm, arguing that we should worry about small incursions of privacy that aggregate across society, focusing on threats to individual autonomy, deliberative democracy, and human development, among many other values. As the symbol of these types of privacy harms, these scholars have pointed to Bentham’s and Foucault’s Panopticon.

Unfortunately, fear of the Panopticon is unlikely to drive contemporary law and policy for two reasons. First, as a matter of public choice, Panoptic fears are not the kind that spurs legislators to act. Lawmakers want to point to poster children suffering concrete, tangible harm—to Bartow’s dead bodies—before they will be motivated to act. The Panopticon provides none. Second, privacy is a relative, contingent, contextualized, and malleable value. It is weighed against other values, such as security and economic efficiency, so any theory of privacy must be presented in a commensurable way. But the Panopticon is an incommensurable fear. Even if you agree that it represents something dangerous that society must work to avoid, when you place this amorphous fear against any concrete, countervailing value, the concrete will always outweigh the vague.

I argue that we should shift our focus away from the Panopticon and back to traditional privacy harm. We should point to people who suffer tangible, measurable harm; we should spotlight privacy’s dead bodies.

But this isn’t a call to return meekly back to the types of narrow concerns that gave rise to the traditional privacy torts. Theories of privacy harm should include not only the stories of people who already have been harmed but also rigorous predictions of new privacy harms that people will suffer because of changes to technology.

Ironically, information privacy law scholars who make these kinds of predictions will often propose prescriptions that are as broad and sweeping as some of those made by their Panopticon-driven counterparts. Traditional-harm theories of information privacy aren’t necessarily regressive forms of privacy scholarship, and this Article points to the work of a new wave of information privacy law scholars who are situated in traditional harm but at the same time offer aggressive new prescriptions. From my own work, I revisit the “database of ruin” theory, a prediction that leads to aggressive prescriptions for new privacy protections.

Finally, I argue why this predictive-traditional-harm approach is more likely to lead to political action than the Panoptic approach, recasting prescriptions from some of the classic recent works of Information Privacy into more politically saleable forms by translating them through the traditional harm lens.

Marilyn Prosch, Privacy by Design: A Case Study of the Mobile Millennium Traffic Pilot

Comment by: Steve Wicker

PLSC 2010

Workshop draft abstract:

This research study will take Commissioner Cavoukian’s 7 Foundational Principles of Privacy by Design (2009) and the activities in Porter’s value chain (1985) that relate to the collection, use, storage, retention and destruction of personal information to study how they have been applied, and to develop guidance for organizations and businesses in the collaborative technologies industry. Our focus on the value chain follows on the work of Morgan et al. (2009), which examines the notion of corporate citizenship and suggests that in order for it to be effective, companies need to minimize harm and maximize benefits through their activities and, in so doing, take account of and be responsive to a full range of stakeholders. This parallels Cavoukian’s positive-sum Privacy by Design approach. A pioneering concept, Privacy by Design ensures the protection of privacy by embedding it into the design specifications of information technology, business practices and infrastructure – thereby making privacy the default. Specifically, Morgan et al. (2009) call for a “next generation” approach to corporate citizenship that is embedded in structures, systems, processes and policies across the company’s value chain.

David and Prosch (2009) assert that designing privacy into the value chain model is a practical, business view of organizational and privacy issues.  This puts privacy where it belongs in an organization – everywhere that personal information exists.  They conclude that further research is needed on communications among internal stakeholders in the various departments within an organization, with the goal of better communication and shared values, and we believe the value chain approach helps further this engagement.  Also, federated environments necessitate that organizations be able to “trust” their third-party providers.  Research and case studies are needed regarding how these organizations can create value and competitive advantages through PbD implementation and by voluntarily sharing these experiences.

Nadja Kanellopoulou, Liam Curren, & Jane Kaye, 3-Dimensional Privacy, Consent, and Revocation of Consent: A Story of Two Tales

Comment by: Ted Janger

PLSC 2010

Workshop draft abstract:

This work presents preliminary findings from legal research pursued within EnCoRe, a multidisciplinary UK research project. EnCoRe seeks to build better mechanisms for individual control in the management of personal data, with particular focus on the development of reliable and robust solutions for consent and revocation of consent. Our work, which combines law and philosophy, builds on our critical assessment of EU and UK jurisprudential perspectives on privacy and data protection. We have used this work to create a new methodology for understanding privacy on the basis of three dimensions – informational, decisional, and spatial – inspired by the work of Beate Rössler. Our analysis highlights connections between privacy and personal autonomy, and posits privacy as essential for individual self-understanding, self-perception, self-development and self-control. This approach to privacy could be applied as a tool: firstly, to clarify legal thinking about interferences with privacy; and, secondly, for the maintenance of personal identity in today’s information society. Our work seeks to contribute to the current debate on new privacy paradigms. In particular, we are interested in the development of models that facilitate meaningful control of personal identity in new, interactive informational environments.

Marc Blitz, Privacy and the Thought Centered First Amendment

Comment by: Ann Bartow

PLSC 2010

Workshop draft abstract:

In his 2008 article, Intellectual Privacy (presented at PLSC 2008), Neil Richards proposes that activities that constitute close proxies for our thought should be shielded with a distinctive (and perhaps stronger) level of privacy protection than that provided to other activities — and that this additional layer of privacy protection is needed to protect the freedom of thought that underlies, and provides the foundation, for our First Amendment freedom of speech. (Neil M. Richards, Intellectual Privacy, 87 TEXAS L. REV. 387, 411 (2008))

This essay aims to build upon this project in two closely related ways: First, it closely considers the possible implications for both privacy and First Amendment law of the concept of “the extended mind,” which was proposed by Andy Clark and David Chalmers in 1998 and has recently attracted significant attention from both philosophers of mind and ethicists.  Clark and Chalmers argue that mental processes may sometimes be embodied not only in our brains, but also in certain parts of the inanimate world, in the notes we take in a journal, for example, or in computer technology we use to store and retrieve information.  As Clark and Chalmers note, this concept of mind as extending beyond the body may have ethical implications, since “in some cases, interfering with someone’s environment will have the same moral significance as interfering with their person.”  (“The Extended Mind” in Andy Clark, Supersizing the Mind (2008)).  Philosopher Neil Levy spells out such ethical implications of the extended mind concept, noting, for example, that “if it would be wrong to read [a person’s] mind because it would be an invasion of their privacy, then it might be equally wrong for the same reason to read their diary.” (Neil Levy, Neuroethics 62 (2004)).  If, as Richards proposes, there is a class of activities so closely related to thought that it deserves a distinctive kind of privacy protection, then we might define the rough boundaries of such activities by drawing on work (like that of Clark and Chalmers) in philosophy of mind, and on related work in cognitive neuroscience, that considers what sort of activity — including activities outside of our person — counts as an integral part of a mental process.

Second, the essay looks at recent consideration of whether observations or records of our brain processes (for example, in functional Magnetic Resonance Imaging (fMRI) or EEG readings) raise privacy problems distinct from those raised by other kinds of monitoring that reveal aspects of our physiological functioning (e.g., our blood type) or of our psychological character (e.g., profiling based on records of our consumer purchases).  An argument for a special type of thought-protecting privacy does not necessarily entail the view that brain activity requires stronger insulation against monitoring than other kinds of activity — especially if the “extended mind” encompasses many processes that occur outside of the brain.  But, given the assumption among many in Western societies that mental processes take place inside of the brain, intuitions about the appropriateness of monitoring the brain provide a starting point for elaborating privacy protections that might ultimately also protect embodiments of mental processes that occur outside of it.