Monthly Archives: May 2013

Frank Pasquale, Reputation Regulation: Disclosure and the Challenge of Clandestinely Commensurating Computing

Comment by: Tal Zarsky

PLSC 2010

Workshop draft abstract:

Reputational systems can never be rendered completely just, but legislators can take two steps toward fairness. The first is relatively straightforward: to ensure that key decision makers reveal the full range of online sources they consult as they approve or deny applications for credit, insurance, employment, and college and graduate school admissions. Such disclosure will at least serve to warn applicants of the dynamic digital dossier they are accumulating in cyberspace. Effective disclosure requirements need to cover more than the users of reputational information; they should also apply to some aggregators. Just as banks have moved from consideration of a long-form credit report to use of a single commensurating credit score, employers and educators in an age of reputation regulation may turn to intermediaries that combine extant indicators of reputation into a single score for a person. Since such scoring can be characterized as a trade secret, it may be even less accountable than the sorts of rumors and innuendo discussed above. Any proposed legislation will need to address the use of such reputation scores, lest black-box evaluations defeat its broader purposes of accountability and transparency.

Woodrow Hartzog, Privacy in an Age of Contracts

Comment by: William McGeveran

PLSC 2010

Workshop draft abstract:

Traditionally, contracts have been most relevant in transactional contexts.  Yet, as websites became ubiquitous, so did terms of use, which brought communication and information into an age of contracts.  How have these digital agreements impacted our privacy?  Contracts require direct interactions constituting privity of contract, yet many technologies can be used to violate an individual’s privacy without such a relationship.  Under this logic, Warren and Brandeis dismissed contracts as a viable remedy for harms to privacy. Essentially, contracts provided no remedies against strangers.  With each advancement in communication technology, the potential distance between the individual and those who would violate their privacy has grown, seemingly culminating with the Internet’s obliteration of the ability to negotiate privacy.  This article attempts to organize and analyze the contemporary impact of contracts on privacy, both as binding agents and as evidence in extra-contractual contexts.  I argue that the reciprocal communication of the participatory web could actually alleviate the problem Warren and Brandeis identified: The web allows us greater control over self-disclosed information and gives us the ability to “dicker” for confidentiality.  If courts would legitimize explicit attempts by an individual to protect her privacy as part of an online agreement – such as taking advantage of offered website features like untagging photos, deleting personal information and increasing privacy settings – they would move one step closer to reclaiming contracts as a “meeting of the minds.”  This recognition would also revitalize contracts as a method for protecting privacy.

Jacqueline D Lipton, Righting Cyber Wrongs

Comment by: Rafi Cohen-Amalgor

PLSC 2010

Workshop draft abstract:

In light of recent instances of cyberstalking and cyberharassment, particularly involving female victims, some commentators have argued that the law should do more to protect individual autonomy and privacy online.  Others reject these views, suggesting that such developments would be undesirable, unnecessary, and potentially unconstitutional.  The proposed paper would argue for greater protections for individual privacy and autonomy online.  The author suggests protecting victims of reputational damage by providing new, more affordable avenues for redress than are currently available.  Existing literature has focused on judicial remedies and some market solutions to remedy reputational damage, and has noted the limitations of each.  Problems identified include the fact that judicial remedies are time- and cost-intensive, and that it can be notoriously difficult to identify individual defendants or, in the alternative, to proceed against operators of online services that host damaging content.  Current market-based solutions are also problematic because of the costs to victims.

This paper would advocate a multi-pronged approach to give victims of cyber-harassment better access to meaningful remedies.  The author advocates a combination of: (a) developing pro bono reputation defense services for online harassment; (b) developing public education programs to empower victims to combat such harassment themselves; and (c) encouraging existing pro bono legal services to take on more cyber-harassment cases.  One advantage of this multi-pronged approach is that pro bono reputation services can use the reputation protection tools currently employed by for-profit services, but at low or no cost to victims.  Public education likewise can empower victims to use these strategies themselves at little to no cost.  Supplementing these approaches with a focus on reputation defense by pro bono legal services would capitalize on law’s expressive functions to send messages to the wider community about conduct that should not be tolerated online.

Alessandro Acquisti and Catherine E. Tucker, Guns, Privacy, and Crime

Comment by: Aaron Burstein

PLSC 2010

Workshop draft abstract:

In December 2008, a Memphis newspaper made publicly available an online, searchable database of all gun permit holders in Tennessee. The database included information such as the permit holder’s name, ZIP code, and his or her permit’s start and expiration dates. It did not receive much attention until February, when an article about a parking argument that ended in a deadly shooting referred to it. The fierce debate that arose thereafter – with the NRA accusing the newspaper of a “hateful, shameful form of public irresponsibility,” and the newspaper standing by a “right to know” argument – exemplifies the complex interactions, and sometimes collisions, between privacy and other rights and needs. In this case, individual privacy rights collided with the collective right to know and, arguably, with both individual and communal issues of security.

By preventing the release of personal data, individuals often hope to prevent harm to themselves. However, the publication of the gun permits data highlights one case where privacy and personal security may appear to be in conflict. Whereas gun rights advocates suggested that the publication exposed gun owners to risk (for instance, of criminals targeting houses known to hold guns in order to steal them), those defending it argued that gun owners may be less likely to be targeted precisely because the information was made publicly available. In this manuscript we attempt to quantify the actual impact that the publication of TN gun permits data had on 1) crime rates and 2) gun permit requests in the city of Memphis. Combining gun, crime, demographic, and location data from an array of sources and databases, we measured how rates of occurrence of different classes of crime changed, as a function of the local density of gun ownership made public by the newspaper, before and after the publication of the database. Our results suggest that the publication of the database more significantly reduced the occurrence of violent crimes (such as robberies and burglaries) in ZIP codes with higher gun ownership density. At the same time, the publication was accompanied by a more significant percentage increase in gun permit requests in areas with pre-existing higher rates of gun ownership. To address concerns about unobserved heterogeneity, we also performed a falsification test by studying crime trends in a similar town (Jackson) in a neighboring state. We found no similar trends in crime in that town during the same time period.
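
The before/after comparison across high- and low-gun-density ZIP codes, checked against a falsification town, follows the logic of a difference-in-differences design. A minimal sketch of that estimator follows; all figures are hypothetical illustrations, not the paper’s data:

```python
# Difference-in-differences sketch: compare the change in crime in
# high-gun-density ZIP codes (whose ownership the database exposed) against
# the change in low-density ZIP codes over the same pre/post window.
# All numbers below are hypothetical, for illustration only.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: (treated change) minus (control change)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical monthly burglaries per 10,000 residents.
high_pre, high_post = 14.0, 11.5   # high gun-ownership-density ZIP codes
low_pre, low_post = 9.0, 8.8       # low gun-ownership-density ZIP codes

effect = did_estimate(high_pre, high_post, low_pre, low_post)
print(round(effect, 2))  # -2.3: crime fell more where published density was higher
```

The falsification test in Jackson amounts to running the same comparison where no database was published and confirming the estimate is near zero.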

This paper contributes not just to the policy debate on the openness or secrecy of gun data (19 states allow the public to access gun permits information; other states either have no laws addressing the issue, or keep the information outside the public domain), but to the broader discourse on the boundaries and connections between privacy and security.

Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life

Comment by: Anita Allen

PLSC 2010

Workshop draft abstract:

Newly emerging socio-technical systems and practices, undergirded by digital media and information science and technology, have enabled massive transformations in the capacity to monitor behavior, amass and analyze personal information, and distribute, publish, communicate and disseminate it. These have spawned great social anxiety and, in turn, laws, policies, public interest advocacy, and technologies, framed as efforts to protect a right to privacy. A settled definition of this right, however, remains elusive. The book argues that common definitions of privacy, whether as control over personal information or as secrecy (that is, minimization of access to personal information), do not capture what people care about when they complain and protest that their right to privacy is under threat. Suggesting that this right be circumscribed as applying only to special categories of “sensitive” information or PII does not avoid these pitfalls.

What matters to people is not the sharing of information — this is often highly valued — but inappropriate sharing. Characterizing appropriate sharing is the heart of the book’s mission, which turns to the wisdom embodied in entrenched norms governing flows of personal information in society. The theory of contextual integrity offers “context-relevant informational norms” as a model for these social norms of information flow. It claims that in assessing whether a particular act, system, or practice violates privacy, people are sensitive to the context in which these occur — e.g. healthcare, politics, religious practice, education, commerce — to what types of information are in question, whom the information is about, from whom it flows, and to what recipients. What also matter are the terms of flow, called “transmission principles,” that govern these flows: for example, whether information flows with the consent of the subject, in one direction or reciprocally, whether it is forced, given, or bought and sold, and so on. Radical transformations in information flow are protested when they violate entrenched informational norms, that is, when they violate contextual integrity.

But not all technologies that induce novel flows are resisted, and contextual integrity would be unhelpfully conservative if it sweepingly deemed them morally wrong. The theory, therefore, also distinguishes those that are acceptable, even laudable, from those that are problematic. Inspired by the great work of other privacy theorists — past and contemporary — the theory suggests we examine how well novel flows serve diverse interests as well as important moral and political values compared with entrenched flows. The crux, however, lies in establishing how well these flows serve the internal values, ends, and purposes of the relevant background contexts, ultimately, that is, how well they serve the integrity of society’s key structures and institutions.

Peter Winn, On-Line Access to Court Records

Comment by: Peter Winn

PLSC 2008

Workshop draft abstract:

In 2002, with almost no debate, US courts began using electronic filing systems. Under the earlier paper system, court records were required to be kept public to maintain the accountability of the legal system, but given the difficulty of accessing paper records, most legal files remained “practically obscure,” thus still protecting the privacy of litigants. This accountability/privacy balance was dramatically changed by the shift to electronic court records, subjecting a treasure trove of sensitive information to unintended uses – from wholesale extraction by commercial data-miners to individual mischief by criminals.  What is the proper balance between accountability and privacy in an age of electronic judicial information?

Deirdre K. Mulligan & Joseph Simitian, Creating a Flexible Duty of Care to Secure Personal Information

Comment by: Deirdre Mulligan

PLSC 2008

Workshop draft abstract:

The use of compulsory information disclosures as a regulatory tool is recognized as an important, modern development in American law. The Toxics Release Inventory (TRI), a publicly available EPA database that contains information on toxic chemical releases and other waste management activities, established under the Emergency Planning and Community Right-to-Know Act of 1986 (EPCRA), is a widely studied example of the potential power of these comparatively light-weight regulatory interventions. The EPCRA has been credited with providing incentives for reductions and better management of toxic chemicals by firms eager to avoid reporting releases.  It has also been credited with providing information essential to citizen and government engagement and action.

Drawing from a wide body of literature documenting how and why the EPCRA led to dramatic reductions in toxic releases, the paper considers the extent to which security breach notification laws are likely to produce similar results.  Anecdotal evidence and some qualitative research indicate that the security breach notification laws have created incentives for businesses to better secure personal information.  The law has encouraged investments in computer security as well as the development of new corporate policies.  The desire to avoid incidents that trigger the reporting requirement has led businesses to reconsider decisions about where data is stored, who has access to it, and under what circumstances and with what protections it can reside on portable devices or media, and to generate more detailed mechanisms for both controlling and auditing information access events.  The authors, who, respectively, advised upon and authored California’s security breach notification law (AB 700/SB 1386), conclude that, in contrast to previous prescriptive regulation, the reporting requirement created an evolving standard of care (in effect a race, or at least a rise, to the top), but that, due to characteristics of information breaches and aspects of the current laws, it has not engendered citizen engagement and organization similar to that of the EPCRA.

Peter Swire & Cassandra Butts, The ID Divide

Comment by:

PLSC 2008

Workshop draft abstract:

This report examines how the next Administration should approach the complex issues of authentication and identification in areas including: national and homeland security; immigration; voting; electronic medical records; computer security; and privacy and civil liberties.  For many reasons, the number of ID checks in American life has climbed sharply in recent years.  The result, we conclude, is what we call the “ID Divide.”

The ID Divide is similar to the “Digital Divide” that exists for access to computers and the Internet.  The Digital Divide matters because those who lack computing lose numerous opportunities for education, commerce, and participation in civic and community affairs.  Today, millions of Americans lack official identification, suffer from identity theft, are improperly placed on watch lists, or otherwise face burdens when asked for identification.  The problems of these uncredentialed people are largely invisible to credentialed Americans, many of whom have a wallet full of proofs of identity.  Yet those on the wrong side of the ID Divide are finding themselves squeezed out of many parts of daily life, including finding a job, opening a bank account, flying on an airplane, and even exercising the right to vote.

Part I of this report describes the background of the issue, including the sharp rise in recent years in how often Americans are asked for proof of identity.  Part II examines the facts of the ID Divide in detail.  There are at least four important types of problems under the ID Divide:

  1. Large population affected by identity theft and data breaches.
  2. Growing effects of watch lists.
  3. Specific groups disproportionately lack IDs today.
  4. The effects of stricter ID and matching requirements.

Part III develops Progressive Principles for Identification Systems.  These principles apply at two stages: (1) whether to create the system at all; and (2) if so, how to do it:

  1. Achieve real security or other goals.
  2. Accuracy.
  3. Inclusion.
  4. Fairness/equality.
  5. Effective redress mechanisms.
  6. Equitable financing for systems.

Part IV explains a “due diligence” process for considering and implementing identification systems, and examines biometrics and other key technical issues.  Part V applies the progressive principles and due diligence insights to two current examples of identification programs, photo ID for voting and the Transportation Worker Identification Credential.

Stephen Wicker & Dawn E. Schrader, Privacy-Aware Engineering Design Practices for Mobile Networks

Comment by: Lance Hoffman

PLSC 2009

Workshop draft abstract:

In this paper we propose a framework for the development of privacy-aware engineering design practices.  A brief overview of the various forms that the invasion of privacy can take is provided, reiterating the taxonomy developed by Daniel Solove in Understanding Privacy (2008).  Various perspectives on the harm that may be caused through loss of privacy are then considered, both in terms of the individual and the public acting in concert.  Emphasis is placed on the potential for inhibited epistemic growth and potential damage to public institutions.  We conclude that information system design policies that ignore privacy considerations are harmful, and that information engineers have a moral obligation to protect the privacy interests of the public that extends well beyond current legal requirements.  We then review the Fair Information Practices proposed in Records, Computers, and the Rights of Citizens (1973), and show how they can be translated into privacy-aware engineering design policies.  These rules begin with an absolute imperative to limit information collection to explicit and publicly expressed mission requirements.  We then show that this simple imperative flows into a mandate for distributed information processing, anonymity-preserving information routing and tracking functions, and strong distinctions between identifying active equipment and identifying operators and owners.  We show that privacy-invading design decisions were made (without malice) in the development of cellular technology, and then show how the proposed design rules can guide the development of near-term power consumption monitoring technologies in general, and demand response systems in particular.
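
The imperative to limit collection to explicit mission requirements can be made concrete for the demand-response case: rather than transmitting a household’s fine-grained consumption trace, a meter can process data locally and report only the aggregate the program needs. A minimal sketch of that data-minimization pattern; the function name, report fields, and figures are hypothetical illustrations, not the authors’ design or any real metering API:

```python
# Data-minimization sketch for a demand-response meter.
# Rather than shipping raw per-minute readings (which can reveal occupancy
# and appliance use), the meter aggregates locally and reports only what the
# demand-response mission requires: total interval usage and whether the
# household exceeded its curtailment target.
# All names and numbers are hypothetical.

def demand_response_report(readings_kw, curtailment_target_kwh,
                           minutes_per_reading=1):
    """Reduce a fine-grained load trace to a minimal mission-level report."""
    # Energy (kWh) = sum of power (kW) over each reading's duration (hours).
    interval_kwh = sum(r * minutes_per_reading / 60.0 for r in readings_kw)
    return {
        "interval_kwh": round(interval_kwh, 3),              # aggregate only
        "exceeded_target": interval_kwh > curtailment_target_kwh,
    }

# A hypothetical hour of per-minute power draw (kW); the raw trace stays local.
trace = [0.4] * 30 + [2.0] * 15 + [0.6] * 15   # e.g. oven on mid-hour
report = demand_response_report(trace, curtailment_target_kwh=1.0)
print(report)
```

The design choice is that the revealing detail (the per-minute trace) never leaves the device; only the two fields the program’s stated mission requires are transmitted.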

Danielle Keats Citron & David Super, Cyber Civil Rights

PLSC 2008

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1271900

Workshop draft abstract:

Social networking sites and blogs have increasingly become breeding grounds for anonymous online groups that attack members of traditionally disadvantaged groups, especially women and people of color.  These destructive groups target individuals with lies, threats of violence, and denial of service attacks that silence victims and concomitantly destroy privacy and reputations.  Victims go offline or assume pseudonyms to prevent future attacks, thereby losing economic opportunities associated with a vibrant online presence and impoverishing online dialogue.  Search engines also reproduce the lies and threats for employers and clients to see, creating digital “scarlet letters” that ruin reputations.

Today’s destructive cyber groups update a history of anonymous mobs such as the Ku Klux Klan coming together to victimize and subjugate vulnerable people.  The social science literature identifies conditions that accelerate dangerous group behavior and those that tend to defuse it.  Unfortunately, Web 2.0 technologies provide all of the accelerants of mob behavior but very few of its inhibitors.  With little reason to expect self-correction of this intimidation of vulnerable individuals, the law must respond.

This article argues that the harm inflicted by such destructive crowds ought to be understood and addressed as civil rights violations.  Federal criminal and civil rights laws must be read to provide effective means to challenge the intimidation and harassment perpetrated by today’s anonymous crowds, just as they have been read to combat other masked mobs that menaced vulnerable groups and outspoken champions in the past.