Archives

Woodrow Hartzog, The Life, Death, and Revival of Implied Confidentiality

Comment by: Patricia Abril

PLSC 2012

Workshop draft abstract:

Confidence is implied in many of our face-to-face relationships. Those seeking to disclose in confidence can close doors, speak in hushed tones, and rely on context and other signals to convey a trust in the recipient that has not been explicitly articulated. Yet, according to courts, the same usually cannot be said for our relationships on the Internet. Online relationships are frequently perceived by courts as missing the implicit cues of confidentiality that are present in face-to-face relationships. Indeed, implied confidentiality is absent from the judicial analysis of Internet-related cases except in the most obvious scenarios. Yet it is clear that Internet users often have implicitly shared expectations of confidentiality. This article posits that the diminished legal relevance of implied confidentiality on the Internet is not solely attributable to the inherent differences between online and offline interaction. Rather, this article argues that implied confidentiality has not been refined enough to be a workable concept in online disputes. The absence of online implied confidentiality as a legal concept is a problem because courts are tasked with ascertaining the actual agreement or relationship between Internet users. Although courts have regularly found implied confidences between parties offline, their analyses have left insufficient direction for future courts to apply the doctrine consistently across myriad factual scenarios. As a result, the concept of implied confidentiality has, as a practical matter, been rendered too flimsy to play a significant role in Internet jurisprudence. The purpose of this article is to mine the rich history of implied confidentiality doctrine in an attempt to refine the concept with a unifying decision-making framework. This article proposes a technology-neutral framework, based on a review of case law, to help courts ascertain the two most common and important judicial considerations in implied obligations of confidentiality: party perception and party inequality. A more nuanced framework will better enable the application of implied confidentiality in online disputes than the currently vague articulation of the concept. This framework is offered to demonstrate that the Internet need not spell the end of implied agreements and relationships of trust.

Jens Grossklags, Na Wang & Heng Xu: A field study of social applications’ data practices & authentication and authorization dialogues

Comment by: Ross Anderson

PLSC 2012

Workshop draft abstract:

Several studies have documented the constantly evolving privacy practices of social networking sites and users’ misunderstandings about them. Justifiably, users have criticized the interfaces for “configuring” their privacy preferences as opaque, disjointed, uninformative, and ultimately ineffective. The same problems have also plagued the constantly growing economy of third-party applications and their equally troubling authentication and authorization dialogues, with important options unavailable at installation time and/or widely distributed across the sites’ privacy options pages.

In this paper, we report the results of a field study of the current authorization dialogue as well as four novel designs of installation dialogues for the dominant social networking site. In particular, we study and document the effectiveness of installation-time configuration and awareness-enhancing interface changes when 250 users investigate our experimental application in the privacy of their homes.

Michael Froomkin, Lessons Learned Too Well

Comment by: Anne McKenna

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1930017

Workshop draft abstract:

A decade ago the Internet was already subject to a significant degree of national legal regulation.  This first generation of internet law was somewhat patchy and often reactive.  Some legal problems were solved by simple categorization, whether by court decisions, administrative regulation, or statute.  Other problems required new approaches: the creation of new categories or new institutions.  And in some cases, governments in the US and elsewhere brought out the big guns of direct legislation, sometimes with stiff penalties.

The past decade has seen the crest of the first wave of regulation and the gathering of a second, stronger, wave based on a better understanding of the Internet and of law’s ability to shape and control it.  Aspects of this second wave are encouraging: Internet regulation is increasingly based on a sound understanding of the technology, minimizing pointless rules or unintended consequences. But other aspects are very troubling: where a decade ago it was still reasonable to see the Internet technologies as empowering and anti-totalitarian, now regulators in both democratic and totalitarian states have learned to structure rules that previous techniques cannot easily evade, leading to previously impossible levels of regulatory control.

On balance, that trend seems likely to continue.  One result that seems likely to follow from current trends in centralization and smarter, more global regulation is legal restriction, and perhaps prohibition, of online anonymity.  As a practical matter, the rise of identification technologies, combined with commercial and regulatory incentives, has made it difficult for any but sophisticated users to remain effectively anonymous.  First-wave internet regulation could not force the identification of every user and packet, but second-wave regulation is more international, more adept, and benefits from technological change driven by synergistic commercial and regulatory objectives.  Law that harnesses technology to its ends achieves far more than law that regulates outside of technology or against it.

The consequences of an anonymity ban are likely to be negative. This paper attempts to explain how we came to this pass, and what should be done to avoid making the problem worse.

Part One of this article discusses the first wave of Internet regulation, before the year 2000, focusing on US law.  This parochial focus is excusable because even at the start of the 21st Century a disproportionate number of Internet users were in the US.  And, with only a very few exceptions, the greatest of which involve aspects of privacy law emanating from the EU’s Privacy Directive, the US either led or at least typified most of the First Wave regulatory developments.

The second wave of regulation has been much more global, so in Part Two, which concerns the most recent decade, the paper’s focus expands geographically but narrows to specifically anonymity-related developments.  Part A describes private incentives and initiatives that resulted in the deployment of a variety of technologies and private services, each of which is unfriendly to anonymous communication.  Part B looks at three types of government regulation relevant to anonymity: the general phenomenon of chokepoint regulation, and the more specific phenomena of online identification requirements and data retention (which can be understood as a special form of identification).

Part Three examines competing trends that may shape the future of anonymity regulation.  It takes a pessimistic view of the likelihood that, given the rapid pace of technical and regulatory change, the fate of online anonymity in the next decade will be determined by law rather than by the deployment of new technologies or, most likely, by pragmatic political choices.  It therefore offers normative and pragmatic arguments for why anonymity is worth preserving and concludes with questions that proponents of further limits on anonymous online speech should be expected to answer.

Goaded by factors ranging from traditional public order concerns to fear of terrorism and hacking to public disclosures by WikiLeaks and others, both democratic and repressive governments are increasingly motivated to attempt to identify the owners of every packet online, and to create legal requirements that will assist in that effort.  Yet whether a user can remain anonymous or must instead use tools that identify him is fundamental to communicative freedom online.  One who can reliably identify speakers and listeners can often tell what they are up to even if he is not able to eavesdrop on the content of their communications; getting the content makes the intrusion and the potential chilling effects that much greater.  Content industries with copyrights to protect, firms with targeted ads to market, and governments with law enforcement and intelligence interests to advance all now appreciate the value of identification and the additional value of traffic analysis, not to mention the value of access to content on demand, or even the threat of it.

Online anonymity is closely related to a number of other issues that contribute to communicative freedom, and thus enhance civil liberties, such as the free use of cryptography and the use of tools designed to circumvent online censorship and filtering.  One might reasonably ask why, then, this essay concentrates on anonymity and on its inverse, identification technologies.  The reason is that anonymity is special, arguably more essential to online freedom than any other tool except perhaps cryptography (and one of the important functions of cryptography is to enable or enhance anonymity as well as communications privacy).  Without the ability to be anonymous, the use of any other tool, even encrypted communications, can be traced back to the source.  Gentler governments may use traffic analysis to piece together networks of suspected dissidents, even if the government cannot acquire the content of their communications.  Less-gentle governments will use less-gentle means to pressure those whose communications they acquire and identify. Whether or not the ability to be anonymous is sufficient to permit circumvention of state-sponsored communications control, it is necessary to ensure that those who practice circumvention in the most difficult circumstances have some confidence that they may survive it.

Susan Freiwald & Sylvain Métille, Simply More Privacy Protective: Law Enforcement Surveillance in Switzerland as compared to in the United States

Comment by: Stephen Henderson

PLSC 2012

Workshop draft abstract:

Calls for reform of the American laws governing electronic surveillance have heightened as the principal federal law, the Electronic Communications Privacy Act (“ECPA”), has approached its twenty-fifth birthday this year.  Passed in 1986 to bring communications surveillance into the electronic age, the ECPA has not been meaningfully updated since the advent of the World Wide Web. Courts currently disagree over whether the statute even applies to surveillance using mobile technology, years after cell phones have become ubiquitous in Americans’ lives.  Switzerland, by contrast, has recently updated its laws to cover surveillance technology. In January 2011, the Swiss enacted a brand-new statute, the Swiss Criminal Procedure Code (CrimPC). Substantively, CrimPC imposes similar procedural requirements on law enforcement agents’ use of a variety of investigatory techniques. That nearly uniform treatment stands in stark contrast to the ECPA, which uses a complicated set of categories and rules that make surveillance law in the United States exceedingly difficult to understand and apply. More importantly, Swiss law precludes the use of surveillance techniques not authorized and regulated by CrimPC, while in the United States a tremendous amount of what the Swiss consider to be surveillance takes place outside the confines of the applicable surveillance laws.

Even if Congress were to amend the ECPA by passing the most privacy protective of the bills currently proposed, the resulting U.S. law would achieve neither the uniformity nor all of the privacy protective features of CrimPC. In short, Swiss law involves judges in many more types of surveillance, and in a much more active way, than any of the current proposals in this country would. Swiss law also requires, as U.S. law does not and would not even after amendment, that clear notice be given in almost all cases to those targeted by surveillance once it has concluded.

This paper describes the passage of CrimPC and its key provisions, which govern the surveillance of mail and telecommunications, the collection of user identification data, the use of technical surveillance equipment, the surveillance of contacts with a bank, the use of undercover agents, and surveillance through physical observation of people and places accessible to the general public. It contrasts those provisions with current U.S. law. The discussion puts the proposals for U.S. law reform in perspective and sheds light on two radically different approaches to regulating law enforcement surveillance of communications technologies.

Cynthia Dwork & Deirdre K. Mulligan, Aligning Classification Systems with Social Values through Design

Comment by: Joseph Turow

PLSC 2012

Workshop draft abstract:

Ad serving services, search engines, and passenger screening systems all rely on mathematical models to make value judgments about how to treat people: what ads to serve them, what URLs to suggest, whether to single them out for extra scrutiny or prohibit them from boarding a plane. Each of these models presents multiple points for value judgments: which data to include; how to weigh and analyze the data; whether to prefer false positives or false negatives in the identified sets.  Whether these judgments are thought of as value judgments, whether the value judgments reflected in the systems are known to those who rely on or are subject to them, and whether they are viewed as interrogable varies by context.

Currently, objections to behavioral advertising systems are framed first and foremost as concerns about privacy. The collection and use of detailed digital dossiers of information about individuals’ online behavior, reflecting not only commercial activities but also the political, social, and intellectual life of internet users, is viewed as a threat to privacy.  Privacy objections run the gamut from objections to the surreptitious acquisition of personal information, to the potential sensitivity of the data, to the retention and security practices of those handling the data, as well as the possibility that it will be accessed and used by additional parties for additional purposes.  Framed as privacy concerns, the responses to these systems—both policy and technical—aim to provide internet users with the ability to limit or modify their participation in them.

This is an incomplete response that stems in part from an imprecise documentation of the objections.  Objections to behavioral advertising systems in large part stem from concerns about their power to invisibly shape and control individuals’ exposure to information.  This power raises a disparate set of concerns, including the potential of such algorithms to discriminate against or marginalize specific populations, and the potential to balkanize and sort the population through narrowcasting, thereby undermining the shared public sphere. While often framed first as privacy concerns, these objections raise issues of fairness and are better understood as concerns about social justice and related, but not synonymous, concerns about social fragmentation and its impact on deliberative democracy.

Our primary focus is on this set of objections to behavioral advertising that lurk below the surface of privacy discourse.

The academic and policy communities have wrestled with this knot of concerns in other settings and have produced a range of policy solutions, some of which have been adopted.  Policy solutions to address concerns related to segmentation have focused primarily on limiting its impact on protected classes.  They include the creation of “standard offers” made equally available to all; the use of test files to identify biased outputs based on ostensibly unbiased inputs; and required disclosure of categories, classes, inputs, and algorithms. More recently, researchers and practitioners in computer science have developed technical approaches to mitigate the ethical issues presented by algorithms. They have developed a set of methods for conforming algorithms to external ethical commands; advocated that systems push value judgments off to end users (for example, whether to err toward false positives or false negatives); developed techniques for formalizing fairness in classification schemes; and advocated approaches that expose embedded value judgments and allow users to manipulate and experience the outcomes various values produce.  They have also used data mining to reveal discriminatory outputs.

Given the intersecting concerns with privacy and classification raised by behavioral advertising, proposed policy and technical responses to date are quite limited.  Regulatory efforts are primarily concerned with addressing the collection and use of data to target individuals for advertising. Similarly, technical efforts focus on limiting the use of data for advertising purposes or preventing its collection. The responses are largely silent on the social justice implications of classification.

This paper teases out this latter set of concerns with the impact of classification. As in other areas, digging below the rhetoric of privacy one finds a variety of outcome-based objections that reflect political commitments to equality, opportunity, and community.  We then examine concerns about, and responses to, classification in other areas, and consider the extent to which computer science methods and tools can be deployed to address this set of concerns with classification in the behavioral advertising context.  We conclude with some broader generalizations about the role of policy and technology in addressing this set of concerns in automated decision-making systems.

Nick Doty & Deirdre Mulligan, The technical standard-setting process and regulating Internet privacy: a case study of Do Not Track

Comment by: Jon Peha

PLSC 2012

Workshop draft abstract:

Regulating Internet privacy involves understanding rapidly-changing technology and reflecting the diverse policy concerns of stakeholders from around the world. Technical standard-setting bodies provide the promise of software engineering expertise and a stable consensus process created for interoperability. But does the process reflect the breadth and depth of participation necessary for self- and co-regulation of online privacy? What makes a standard-setting or regulatory process sufficiently “open” for the democratic goals we have for determining public policy?

Drawing from literature in organizational theory, studies of standards development organizations and cases of environmental conflict resolution, this paper explores the applicability of consensus-based standard-setting processes to Internet policy issues. We use our experience with the ongoing standardization of Do Not Track at the World Wide Web Consortium (W3C) to evaluate the effectiveness of the W3C process in addressing a current, controversial online privacy concern. We also develop success criteria with which the privacy professional and regulatory community can judge future “techno-policy standards”.

While the development of techno-policy standards within consortia like the W3C and the Internet Engineering Task Force shows promise for technocratic and democratic regulation, success depends on particular properties of the participation model, the involvement of policymakers and even the technical architecture.

Laura K. Donohue, Remote Biometric Identification

Comment by: Babak Siavoshy

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2137838

Workshop draft abstract:

Facial Recognition Technology represents the first of a series of next-generation biometrics, such as hand geometry, iris, vascular patterns, hormones, and gait, which, when paired with surveillance of public space, give rise to unique and novel questions of law and policy.  Together, these technologies constitute what can be considered Remote Biometric Identification (RBI). That is, they give the government the ability to ascertain identity (1) in public space, (2) absent notice and consent, and (3) in a continuous and ongoing manner.  RBI fundamentally differs from what can be considered Immediate Biometric Identification (IBI)—or the use of biometrics to determine identity at the point of arrest, following conviction, or in conjunction with access to secure facilities.  Use of the technology in this way (1) is individual-specific, (2) involves notice and often consent, (3) is a one-time occurrence, and (4) takes place either in relation to custodial detention or in the context of a specific physical area related to government activity.  Fingerprinting is the most obvious example of IBI, although more recent forays into palm prints also fall within this class.  DNA technologies can also be considered part of IBI.  The types of legal and policy questions raised by RBI differ from those accompanying IBI, and they give rise to serious constitutional concerns.

 

Part II of the Article details the recent explosion of federal initiatives in this area. Part III considers the federal statutory frameworks that potentially apply to the current systems: government acquisition of individually identifiable data, foreign intelligence surveillance, and criminal law legislative warrant requirements.  It posits that the federal agencies involved have considerable, largely unchallenged authority to collect and analyze personally identifiable information. Congressional restrictions on the exercise of such authorities generally do not apply to biometric systems.  Gaps in the 1974 Privacy Act and its amendments and the 1990 Computer Act, in conjunction with exemptions contained in the Privacy Act and the 2002 E-Government Act, minimize the extent to which such instruments can be brought to bear. The 1978 Foreign Intelligence Surveillance Act and its later amendments similarly stop short of considered treatment of biometric technologies, generating serious questions about how and to what extent the provisions contained in the statute apply.  Title III of the 1968 Omnibus Crime Control and Safe Streets Act and Title I of the 1986 Electronic Communications Privacy Act do not address facial recognition technology, much less its pairing with video surveillance.  In the absence of a statutory framework with which to evaluate the current federal initiatives and their potential inclusion of facial recognition and video technologies, we are driven back upon constitutional considerations.  Part IV thus focuses on the Court’s jurisprudence in relation to four areas: the Fourth Amendment’s guarantee of protection against unreasonable search and seizure and the probable cause requirement for the issuance of warrants; the Fifth Amendment’s right against self-incrimination; the First Amendment’s protection of speech, assembly, and religion; and the Fifth and Fourteenth Amendments’ due process protections.  Part V concludes the Article by noting the need for Congressional action in this area, as related to biometric technologies generally—as well as facial recognition systems—and contemplating the potential pairing of such technologies with video surveillance. Towards this end, it proposes both amendments to existing measures and the introduction of new restrictions specifically tailored to the biometric realm.

Colin J. Bennett & Deirdre K. Mulligan, Privacy on the Ground Through Codes of Conduct: Lessons from Canada

Comment by: Robert Gellman

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2230369

Workshop draft abstract:

The recent White Paper on privacy from the U.S. Department of Commerce encourages “the development of voluntary, enforceable privacy codes of conduct in specific industries through the collaborative efforts of multi-stakeholder groups, the Federal Trade Commission, and a Privacy Policy Office within the Department of Commerce.”   The policy envisages a coordination of multi-stakeholder groups through a new Privacy Policy Office which would work with the FTC “to develop voluntary but enforceable codes of conduct…Compliance with such a code would serve as a safe harbor for companies facing certain complaints about their privacy practices.”

Privacy codes of practice have extensive histories in a number of countries outside the United States.  At various times they have been adopted to anticipate privacy legislation, to supplement privacy legislation, to pre-empt privacy legislation and to implement privacy legislation. This paper draws upon international experiences and interviews with chief privacy officers to offer important lessons for American policy-makers about how codes of practice might best encourage privacy protection “on the ground.”

Despite obvious differences, the Canadian policy experience may be especially instructive.  Private sector regulation there originally followed a bottom-up approach: legislation (the Personal Information Protection and Electronic Documents Act of 2000) was built on a voluntarily negotiated standard through the Canadian Standards Association (CSA), which in turn drew on existing sectoral codes of practice of the kind envisaged by the US Department of Commerce.  What has been the experience over the last decade?  What useful lessons can be drawn for US policy?  What are the economic, technological, legal, and social conditions under which codes of practice might promote better privacy protection?

 

Meg Leta Ambrose, It’s About Time: Privacy, Information Life Cycles, and the Right to be Forgotten

Comment by: Stephen Lau

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2154374

Workshop draft abstract:

The current consensus is that information, once online, is there forever.[1]

Content permanence has led many European countries to establish a Right to be Forgotten to protect citizens from the shackles of the past presented by the Internet.[2] But the Internet has not defeated time, and information, like everything, gets old, decays, and dies, even online. Quite the opposite of permanent, the Web cannot be self-preserving.[3] One study from the field of content persistence, a body of research that has been almost wholly overlooked by legal scholars, found that 85% of content disappears in a year and 59% disappears in a week, signifying a decrease in the lifespan of online content when compared with previous studies.[4] Those who have debated this privacy issue have consistently done so in terms of permanence and have also neglected an important consideration: the nature of information. Our efforts to address disputes arising from old personal information residing online should focus on the changing value of information over time and the ethics of preservation. Understanding how information changes over time in relation to its subject, how and where personal information resides online longer than deemed appropriate, and what information is important for preservation allows regulation to be tailored to the problem, correctly framed. This understanding requires an interdisciplinary approach and the inclusion of research from telecommunications, information theory, information science, the behavioral and social sciences, and computer science. Permanence is not yet upon us, and therefore now is the time to develop practices of information stewardship that will preserve our cultural history as well as protect the privacy rights of those who will live with the information.

 

 

[1] See Jeffrey Rosen, “The Web Means the End of Forgetting,” The New York Times, July 21, 2010, http://www.nytimes.com/2010/07/25/magazine/25privacy-t2.html?pagewanted=all (last visited Dec. 16, 2011); John Hendel, “In Europe, a Right to be Forgotten Trumps the Memory of the Internet,” The Atlantic, Feb. 3, 2011, http://www.theatlantic.com/technology/archive/2011/02/in-europe-a-right-to-be-forgotten-trumps-the-memory-of-the-internet/70643/ (last visited Dec. 18, 2011); Common Sense with Phineas and Ferb, The Disney Channel, http://tv.disney.go.com/disneychannel/commonsense/ (last visited Dec. 16, 2011).

[2] Council of the European Union, “Communication from the Commission to the European Parliament, the Council, the Economic and Social Committee and the Committee of the Regions,” 8, April 11, 2010. See also IAPP: European Commission Sends Draft Regulation out for Review, Dec. 8, 2011, https://www.privacyassociation.org/publications/european_commission_sends_draft_regulation_out_for_review (a focus of the Right to be Forgotten emphasized information created during childhood and “shall apply especially in relation to personal data which are made available by the data subject while he or she was a child.”) (last visited Jan. 13, 2011).

[3] Julien Masanès, Web Archiving, at 7 (2006).

[4] Daniel Gomes and Mário J. Silva, Modelling Information Persistence on the Web, Proceedings of the 6th International Conference on Web Engineering 1 (2006).

Alessandro Acquisti & Christina Fong, An Experiment in Hiring Discrimination via Online Social Networks

Comment by: Robert Sprague

PLSC 2012

Workshop draft abstract:

Self-report surveys and anecdotal evidence indicate that U.S. firms are using social networking sites to seek information about prospective hires. However, little is known about how the information they find online actually influences firms’ hiring decisions.  We present the design and preliminary results of a series of controlled experiments on the impact that information posted on a popular social networking site by job applicants can have on employers’ hiring behavior. In two studies (a survey experiment and a field experiment) we measure the ratio of callbacks that different job applicants receive as a function of their personal traits. The experiments focus on traits that U.S. employers are not allowed to inquire about during interviews, but which can be inferred from perusing applicants’ online profiles: religious and sexual orientation, and family status.