Archives

2012 Participants

Patricia Abril,
University of Miami

Alessandro Acquisti,
CMU

Joseph Alhadeff,
Oracle

Meg Leta Ambrose,
University of Colorado

Marvin Ammori,
New America Foundation

Ross Anderson,
Cambridge

Mark Andrejevic,
University of Queensland

Annie Anton,
North Carolina State University

Axel Arnbak,
Institute for Information Law, University of Amsterdam

Stewart Baker,
Steptoe & Johnson

Derek Bambauer,
Brooklyn Law School

Kevin Bankston,
CDT

Khaliah Barnes,
Electronic Privacy Information Center (EPIC)

Martha Barnett,
TLO, Inc.

Carol Bast,
University of Central Florida

Robin Bayley,
Linden Consulting, Inc.

Steven Bellovin,
Columbia University

Colin Bennett,
University of Victoria

Ellen Blackler,
The Walt Disney Company

Jody Blanke,
Mercer University

Marc Blitz,
Oklahoma City University School of Law

Dominika Blonski,
NYU

Courtney Bowman,
Palantir Technologies

danah boyd,
Microsoft Research/New York University

Bruce Boyden,
Marquette University Law School

Travis Breaux,
Carnegie Mellon University

Julie Brill,
Federal Trade Commission

Jeffrey Brown,
Cybercrime Review

Aaron Burstein,
NTIA

Ryan Calo,
Stanford Law School

Lisa Madelon Campbell,
Competition Bureau Canada

Robert Cannon,
FCC

Tim Casey,
Calif. Western School of Law

Wade Chumney,
Georgia Institute of Technology

Danielle Citron,
University of Maryland School of Law

Andrew Clearwater,
Center for Law and Innovation

Bret Cohen,
Hogan Lovells

Jules Cohen,
Microsoft

Julie Cohen,
Georgetown Law

Amanda Conley,
O’Melveny & Myers

Chris Conley,
ACLU of Northern California

Lorrie Cranor,
Carnegie Mellon University

Thomas Crocker,
University of South Carolina School of Law

Mary Culnan,
Bentley University

Bryan Cunningham,
Palantir

Doug Curling,
New Kent Capital

Deven Desai,
Thomas Jefferson School of Law

Pam Dixon,
World Privacy Forum

Dissent Doe,
PogoWasRight.org

Laura Donohue,
GULC

Nick Doty,
UC Berkeley School of Information

Cynthia Dwork,
Microsoft Research

Mark Eckenwiler,
USDOJ

Elizabeth Eraker,
Google Inc.

Adrienne Felt,
University of California, Berkeley

Ed Felten,
Federal Trade Commission

Darleen Fisher,
National Science Foundation

David Flaherty,
University of Western Ontario

Roger Ford,
NYU School of Law

Tanya Forsheit,
InfoLawGroup LLP

Kristina Foster,
NPS

Susan Freiwald,
University of San Francisco School of Law

Allan Friedman,
Brookings Institution

Michael Froomkin,
U. Miami School of Law

Simson Garfinkel,
Naval Postgraduate School

Robert Gellman,
Privacy Consultant

Tomas Gomez-Arostegui,
Lewis & Clark

Nathan Good,
Good Research

David Gordon,
Carnegie Mellon University

Jennifer Granick,
Attorney

John Grant,
Palantir Technologies

Jim Graves,
Carnegie Mellon University

Kim Gray,
IMS Health

Rebecca Green,
William & Mary Law School

James Grimmelmann,
NYLS

Marc Groman,
Federal Trade Commission

Jens Grossklags,
The Pennsylvania State University

Alexandra Grossman,
Skidmore College

Elizabeth Ha,
UC Berkeley

Joseph Hall,
New York University

Jim Harper,
The Cato Institute

Woodrow Hartzog,
Cumberland School of Law, Samford University

Allyson Haynes,
Charleston School of Law

Stephen Henderson,
The University of Oklahoma College of Law

Evan Hendricks,
Privacy Times, Inc.

Mike Hintze,
Microsoft

Dennis Hirsch,
Capital Law School

Lance Hoffman,
GW Cyber Security Policy & Research Inst

Marcia Hofmann,
Electronic Frontier Foundation

Chris Hoofnagle,
UC Berkeley Law

Jane Horvath,
Apple Inc.

Kirsty Hughes,
University of Cambridge

Trevor Hughes,
IAPP

Stuart Ingis,
Venable LLP

Jeff Jonas,
IBM

Margot Kaminski,
Information Society Project at Yale Law School

Ian Kerr,
uOttawa

Orin Kerr,
GW Law School

Jennifer King,
UCB School of Information

Anne Klinefelter,
University of North Carolina School of Law

Jacqueline Klosek,
Goodwin Procter LLP

Christina Kühnl,
Attorney

Rick Kunkel,
University of St. Thomas

Stephen Lau,
University of California, Office of the President

Travis LeBlanc,
California Attorney General’s Office

Ron Lee,
Arnold & Porter LLP

Pedro Leon,
Carnegie Mellon University

Catherine Lotrionte,
Georgetown University

Mark MacCarthy,
Georgetown University

Alexander Macgillivray,
Twitter

Mary Madden,
Pew Internet Project

Peder Magee,
FTC

Laureli Mallek,
Attorney

Carter Manny,
Univ. of Southern Maine

Alice Marwick,
Microsoft Research

Keith Marzullo,
NSF

Aaron Massey,
North Carolina State University

Kristen Mathews,
Proskauer Rose LLP

Andrea Matwyshyn,
Wharton School, University of Pennsylvania

Jonathan Mayer,
Stanford University

William McGeveran,
University of Minnesota Law School

Anne McKenna,
ToomeyMcKenna Law Group, LLC

Joanne McNabb,
California Office of Privacy Protection

Edward McNicholas,
Sidley Austin LLP

David Medine,
Attorney

Sylvain Métille,
id est attorneys

Ed Mierzwinski,
USPIRG

Douglas Miller,
AOL Inc.

Jon Mills,
University of Florida

Tracy Mitrano,
Cornell University

Manas Mohapatra,
Federal Trade Commission

Laura Moy,
Institute for Public Representation

Deirdre Mulligan,
UC Berkeley School of Information

Scott Mulligan,
Skidmore College

Kirk Nahra,
Wiley Rein LLP

Arvind Narayanan,
Stanford University

Helen Nissenbaum,
New York University

Paul Ohm,
University of Colorado Law School

Christopher Parsons,
University of Victoria, Department of Political Science

Brian Pascal,
Palantir Technologies

Heather Patterson,
UC Berkeley School of Law

Jon Peha,
Carnegie Mellon University

Nikolaus Peifer,
University of Cologne

Stephanie Pell,
SKP Strategies, LLC

Scott Peppet,
University of Colorado Law School

Vince Polley,
KnowConnect PLLC

Jules Polonetsky,
Future of Privacy Forum

Robert Quinn,
AT&T

Charles Raab,
University of Edinburgh

Jeffrey Rabkin,
Stroz Friedberg LLC

Alan Raul,
Sidley Austin LLP

Priscilla Regan,
George Mason University

Joel Reidenberg,
Fordham Law School

Virginia Rezmierski,
University of Michigan

Neil Richards,
Washington University School of Law

Sasha Romanosky,
Carnegie Mellon University

Dana Rosenfeld,
Kelley Drye & Warren LLP

Alan Rubel,
University of Wisconsin, Madison

Ira Rubinstein,
NYU School of Law

Nathan Sales,
George Mason Law School

Albert Scherr,
UNH School of Law

Viola Schmid,
Technical University of Darmstadt

Andrew Selbst,
NYU Information Law Institute

Wendy Seltzer,
Yale ISP

Andrew Serwin,
Foley & Lardner LLP

Stuart Shapiro,
MITRE Corporation

Katie Shilton,
University of Maryland

Babak Siavoshy,
Berkeley Samuelson Clinic

Robert Sloan,
University of Illinois at Chicago

Tom Smedinghoff,
Edwards Wildman

Christopher Soghoian,
Center for Applied Cybersecurity Research, Indiana University

Daniel Solove,
George Washington University Law School

Ashkan Soltani,
Independent Researcher

Lisa Sotto,
Hunton & Williams LLP

Tim Sparapani,
Consultant

Robert Sprague,
Univ. of Wyoming

Dave Stampley,
KamberLaw

Jay Stanley,
ACLU

Gerard Stegmaier,
Wilson Sonsini Goodrich & Rosati

Katherine Strandburg,
New York University School of Law

Zoe Strickland,
UnitedHealth Group

Frederic Stutzman,
Carnegie Mellon University

Clare Sullivan,
University of Adelaide

Latanya Sweeney,
Harvard University

Peter Swire,
Ohio State

Rahul Telang,
Carnegie Mellon University

Omer Tene,
Israeli College of Management School of Law

Melanie Teplinsky,
American University Washington College of Law

David Thaw,
University of Maryland

Timothy Tobin,
Hogan Lovells LLP

Frank Torres,
Microsoft

Michael Traynor,
Cobalt LLP

Joseph Turow,
Univ of Pennsylvania

Blase Ur,
Carnegie Mellon University

Jennifer Urban,
UC-Berkeley Law

Steven Vine,
PulsePoint

Colette Vogele,
Without My Consent

Serge Voronov,
Duke University School of Law

Heidi Wachs,
Georgetown University

Kent Wada,
UCLA

Richard Warner,
Chicago-Kent

Yael Weinman,
Federal Trade Commission

Daniel Weitzner,
White House

Tara Whalen,
Office of the Privacy Commissioner of Canada

Jan Whittington,
University of Washington

Craig Wills,
Worcester Polytechnic Institute

Peter Winn,
U.S. Department of Justice

Christopher Wolf,
Hogan Lovells/Future of Privacy Forum

Felix Wu,
Cardozo School of Law

Heng Xu,
The Pennsylvania State University

Jane Yakowitz,
Brooklyn Law School

Marsha Young,
SAIC

Harlan Yu,
Princeton University

Barbara Yuill,
BNA’s Privacy & Security Law Report

Tal Zarsky,
University of Haifa – Faculty of Law

Bo Zhao,
Penn Law

Michael Zimmer,
University of Wisconsin-Milwaukee

Diane Zimmerman,
New York University

Marc Zwillinger,
ZwillGen PLLC

Danielle Citron, Hate 3.0

Comment by: Rebecca Green

PLSC 2012

Workshop draft abstract:

Cyber harassment is an endemic and devastating form of invidious discrimination.  As my book Hate 3.0 (forthcoming Harvard University Press 2013) explores, the identity of the victims and the nature of the attacks explain why.  Statistically speaking, women and/or sexual minorities bear the brunt of the abuse, and the harassment tends to exploit victims’ gender and sexuality to threaten, demean, and economically disadvantage them.

To set the stage for the rest of the book, chapter one presents detailed case studies of four individuals with different life experiences whose harassment experiences are strikingly similar.  It situates the experiences of straight white men in this phenomenon: cyber harassers often accuse them of being secretly gay or women.  Chapter two takes up the harassers and the harm that they do.  Chapter three considers why explicit hate appears in networked spaces when it seems less prevalent in real space.  Any one of the Internet’s key features—anonymity, group dynamics, information spreading, and virtual environments—can be a force multiplier for bigotry and incivility.

Nonetheless, as chapter four considers, cyber harassment remains in the shadows where it is often ignored or legitimated, leaving victims to fend for themselves.  Changing this requires a sustained campaign to re-conceptualize abuse online, in much the way that the women’s movement struggled to change the social meaning of workplace sexual harassment and domestic violence.  Chapter five provides a conceptual apparatus to help us do so.

Part II points the way forward.  Chapter six asks what can be done now, looking to intermediaries, schools, and parents as crucial private avenues for social action.  Internet intermediaries, notably entities that host online communities and mediate expressive conduct, have great freedom and power to influence online discourse.  As chapter seven explores, achieving equality online will require legal solutions.  Although current law addresses some online abuse, its shortcomings require fresh thinking and legislative action.  Yet, in doing so, we need to tread carefully given our commitment to free speech.

Chapter eight argues that civil rights protections can, however, be reconciled with civil liberty guarantees, both doctrinally and theoretically.

Jane Yakowitz, The New Intrusion

Comment by: Jon Mills

PLSC 2012

Workshop draft abstract:

The tort of intrusion upon seclusion offers the best theory to target legitimate privacy harms in the information age. This Article introduces a new taxonomy that organizes privacy law across four key stages of information flow—observation, capture (the creation of a record), dissemination, and use. Popular privacy proposals place hasty, taxing constraints on dissemination and use. Meanwhile, regulation targeting information flow at its source—at the point of observation—is undertheorized and ripe for prudent expansion.

Intrusion imposes liability for offensive observations. The classic examples involve intruders who gain unauthorized access to information inside the home or surreptitiously intercept telephone conversations, but the concept of seclusion is abstract and flexible. Courts have honored expectations of seclusion in public when the intruder’s efforts to observe were too aggressive and exhaustive. They have also recognized expectations of seclusion in files and records outside the plaintiff’s possession. This Article proposes a framework for extending the intrusion tort to new technologies by assigning liability to targeted and offensive observations of the data produced by our gadgets.

Intrusion is a theoretically and constitutionally sound form of privacy protection because the interests in seclusion and respite from social participation run orthogonal to free information flow. Seclusion can be invaded without the production of any new information, and conversely, sensitive new information can become available without intrusion. This puts the intrusion tort in stark contrast with the tort of public disclosure, where the alleged harm is a direct consequence of an increase in knowledge. Since tort liability for intrusion regulates conduct (observation) instead of speech (dissemination), it does not prohibit a person from saying what he already knows, and therefore can coexist comfortably with the bulk of First Amendment jurisprudence.

Peter Swire, Backdoors

Comment by: Orin Kerr

PLSC 2012

Workshop draft abstract:

This article, which hopefully will be the core of a forthcoming book, uses the idea of “backdoors” to unify previously disparate privacy and security issues in a networked and globalized world.  Backdoors can provide government law enforcement and national security agencies with lawful (or unlawful) access to communications and data.  The same, or other, backdoors can also provide private actors, including criminals, with access to communications and data.

Four areas illustrate the importance of the law, policy, and technology of backdoors:

(1) Encryption.  As discussed in my recent article on “Encryption and Globalization,” countries including India and China are seeking to regulate encryption in ways that would give governments access to encrypted communications.  An example is the Chinese insistence that hardware and software built there use non-standard cryptosystems developed in China, rather than globally-tested systems.  These types of limits on encryption, where implemented, give governments a pipeline, or backdoor, into the stream of communications.

(2) CALEA.  Since 1994, the U.S. statute CALEA has required telephone networks to make communications “wiretap ready.”  CALEA requires holes, or backdoors, in communications security in order to assure that the FBI and other agencies have a way into communications flowing through the network.  The FBI is now seeking to expand CALEA-style requirements to a wide range of Internet communications that are not covered by the 1994 statute.

(3) Cloud computing.  We are in the midst of a massive transition of companies’ and individuals’ data to storage in the cloud.  Cloud providers promise strong security for the stored data. However, government agencies increasingly are seeking to build automated ways to gain access to the data, potentially creating backdoors for large and sensitive databases.

(4) Telecommunications equipment.  A newly important issue for defense and other government agencies is the “secure supply chain.”  The concern here arises from reports that major manufacturers, including the Chinese company Huawei, are building equipment that has the capability to “phone home” about data that moves through the network.  The Huawei facts (assuming they are true) illustrate the possibility that backdoors can be created systematically by non-government actors on a large scale in the global communications system.

These four areas show key similarities with the more familiar software setting for the term “backdoor” – a programmer who has access to a system, but leaves a way for the programmer to re-enter the system after manufacturing is complete.  White-hat and black-hat hackers have often exploited backdoors to gain access to supposedly secure communications and data.  Lacking to date has been any general theory, or comparative discussion, about the desirability of backdoors across these settings.  There are of course strongly-supported arguments for government agencies to have lawful access to data in appropriate settings, and these arguments gained great political support in the wake of September 11.  The arguments for cybersecurity and privacy, on the other hand, counsel strongly against pervasive backdoors throughout our computing systems.
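
To make the software sense of the term concrete, here is a minimal, hypothetical sketch of the kind of backdoor the abstract describes: an authentication routine that quietly accepts a hard-coded maintenance credential alongside users’ real ones. Every name and value in it is invented for illustration; it is not drawn from the article or from any real system.

```python
# A deliberately simplified, hypothetical example: a login routine with a
# hard-coded "maintenance" credential left behind by the programmer.
# All names and values are invented for illustration.

import hashlib
import hmac

_MAINTENANCE_KEY = b"debug-2012"  # the hidden way back in

def _digest(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()

# Hypothetical user database: username -> password digest.
USERS = {"alice": _digest("correct horse battery staple")}

def authenticate(username: str, password: str) -> bool:
    """Accept a login if the password matches, or if it is the backdoor key."""
    # The backdoor: the maintenance key opens any account, bypassing the
    # stored credentials entirely.
    if hmac.compare_digest(password.encode(), _MAINTENANCE_KEY):
        return True
    expected = USERS.get(username)
    return expected is not None and hmac.compare_digest(_digest(password), expected)

print(authenticate("alice", "wrong-password"))  # False: the normal check
print(authenticate("alice", "debug-2012"))      # True: the backdoor
```

Nothing in the sketch restricts who may use the maintenance key, so the same door that serves the programmer serves any attacker who learns of it, which echoes the article’s point that lawful and unlawful access can flow through the same hole.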

Government agencies, in the U.S. and globally, have pushed for more backdoors in multiple settings, for encryption, CALEA, and the cloud.  There has been little or no discussion to date, however, about what overall system of backdoors should exist to meet government goals while also maintaining security and privacy.  The unifying theme of backdoors will highlight the architectural and legal decisions that we face in our pervasively networked and globalized computing world.

Richard Warner and Robert H. Sloan, Behavioral Advertising: From One-Sided Chicken to Informational Norms

Comment by: Aaron Massey

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2034424

Workshop draft abstract:

When you download the free audio recording software from Audacity, your transaction is like any traditional provision of a product for free or for a fee—with one difference:  you agree that Audacity may collect your information and use it to send you advertising.  Billions of such “data commercialization” (DC) exchanges occur daily.  They feed data to a massive advertising ecosystem that constructs individual profiles in order to tailor web site advertising as closely as possible to individual interests.  The vast majority of us object.  We want considerably more control over our information than websites that participate in the advertising ecosystem allow.  Our misgivings are evidently idle, however.  We routinely enter DC exchanges when we visit CNN.com, use Gmail, or visit any of a vast number of other websites.  Why?  And, what, if anything, should we do about it?

We answer both questions by describing DC exchanges as a game of Chicken that we play over and over with sellers under conditions that guarantee we will always lose.  Chicken is traditionally played with cars.  Two drivers at opposite ends of a road drive toward each other at high speed.  The first to swerve loses.  We play a similar game with sellers—with one crucial difference:  we know in advance that the sellers will never “swerve.”

In classic Chicken with cars, the players’ preferences are mirror images of each other.  Imagine, for example, Phil and Phoebe face each other in their cars.  Phil’s first choice is that Phoebe swerve first.  His second choice is that they swerve simultaneously.  Mutual cowardice is better than a collision.  Unilateral cowardice is too, so third place goes to his swerving before Phoebe does.  Collision ranks last.  Phoebe’s preferences are the same except that she is in Phil’s place and Phil in hers.  Now change the preferences a bit, and we have the game we play in DC exchanges.  Phil’s preferences are the same, but Phoebe’s differ.  She still prefers that Phil swerve first, but collision is in second place. Suppose Phoebe was recently jilted by her lover; as a result, her first choice is to make her male opponent reveal his cowardice by swerving first, but her second choice is a collision that will kill him and her broken-hearted self.  Given these preferences, Phoebe will never swerve.  Phil knows Phoebe has these preferences, so he knows he has only two options:  he swerves, and she does not; and, neither swerves.  Since he prefers the first, he will swerve.  Call this One-Sided Chicken.

We play One-Sided Chicken when we enter DC exchanges in our website visits.  We argue that buyers’ preferences parallel Phil’s while the sellers’ parallel heart-broken, “collision second” Phoebe’s.  We name the players’ choices in this DC game “Give In” (the “swerve” equivalent) and “Demand” (the “don’t swerve” equivalent).  For buyers, “Demand” means refusing to use the website unless the seller’s data collection practices conform to the buyer’s informational privacy preferences.  “Give in” means permitting the seller to collect and process information in accordance with whatever information processing policy it pursues.  For sellers, “Demand” means refusing to alter their information processing practices even when they conflict with a buyer’s preferences.  “Give in” means conforming information processing to a buyer’s preferences.  We contend that sellers’ first preference is to demand while buyers give in, and that their second is the collision equivalent in which both sides demand.  Demanding sellers leave buyers only two options:  give in and use the site, or demand and do not.  Since buyers prefer the first option, they always give in.
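
For readers who prefer the payoff matrix spelled out, the short Python sketch below encodes the preference orderings just described as ordinal payoffs and derives the predicted outcome. The specific numbers are assumptions; only their ordering matters, and the ranking between the two cells in which the seller gives in is not specified in the abstract, so it is chosen arbitrarily here.

```python
# Encoding the One-Sided Chicken argument above as ordinal payoffs.
# 3 = best outcome for that player, 0 = worst; only the ordering matters.
# (buyer_move, seller_move) -> (buyer_payoff, seller_payoff)
PAYOFFS = {
    ("demand",  "give_in"): (3, 1),  # seller swerves: buyer's first choice
    ("give_in", "give_in"): (2, 0),  # both swerve (seller's low ranks: assumed order)
    ("give_in", "demand"):  (1, 3),  # buyer swerves: seller's first choice
    ("demand",  "demand"):  (0, 2),  # "collision": seller's second choice
}

MOVES = ("demand", "give_in")

# The seller's dominant strategy: "demand" pays the seller more than
# "give_in" against either buyer move (2 > 0 and 3 > 1).
for b in MOVES:
    assert PAYOFFS[(b, "demand")][1] > PAYOFFS[(b, "give_in")][1]

# Knowing the seller always demands, the buyer compares the two
# remaining cells and picks the better one.
buyer_choice = max(MOVES, key=lambda b: PAYOFFS[(b, "demand")][0])
print(buyer_choice)  # -> give_in: the buyer always swerves
```

Because “Demand” is strictly better for the seller against either buyer move, the seller never gives in; the buyer, facing only the two cells in which the seller demands, prefers giving in to the collision, which reproduces the conclusion that buyers always give in.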

It would be better if we were not locked into One-Sided Chicken.  Ideally, informational norms should regulate the flow of personal information.  Informational norms are norms that constrain the collection, use, and distribution of personal information.  In doing so, they implement tradeoffs between protecting privacy and realizing the benefits of processing information.  Unfortunately, DC exchanges are one of a number of situations in which rapid advances in information processing technology have outrun the slow evolution of norms.

How do we escape from One-Sided Chicken to appropriate informational norms?  Chicken with cars contains a clue.  In a late-1950s B-grade Hollywood youth movie, Phil would introduce broken-hearted Phoebe to just-moved-to-town Tony.  They would fall in love, and, in a key dramatic turning point, Phil and Phoebe would play Chicken.  Phoebe would see that Tony is also in the car and be the first to swerve.  We need a “Tony” to change businesses’ preferences.  We contend that we would all become the DC exchange equivalent of Tony if we had close to perfect tracking prevention technologies.  Tracking prevention technologies are perfect when they are 100% effective in blocking information processing for advertising purposes, completely transparent in their effect, effortless to use, and compatible with full use of the site.  Phoebe swerves because she does not want to lose her beloved Tony.  Sellers are “in love with” advertising revenue.  We argue that they will “swerve” to avoid losing the revenue they would lose if buyers prevented data collection for advertising purposes.  The result will be that, in a sufficiently competitive market, appropriate informational norms arise.  We conclude by considering the prospects for approximating perfect tracking prevention technologies.

David Thaw, Comparing Management-Based Regulation and Prescriptive Legislation: How to Improve Information Security Through Regulation

Comment by: Derek Bambauer

PLSC 2012

Workshop draft abstract:

Information security regulation of private entities in the United States can be grouped into two general categories. This paper examines these two categories and presents the results of an empirical study comparing their efficacy at addressing organizations’ failures to protect sensitive consumer information. It examines hypotheses about the nature of regulation in each category to explain their comparative efficacy, and presents conclusions suggesting two changes to existing regulation designed to improve organizations’ capacity to protect sensitive consumer information.

The first category is prescriptive legislation, which lays out performance standards that regulated entities must achieve. State Security Breach Notification (SBN) statutes are the primary example of this type, and require organizations to report to consumers breaches involving certain types of sensitive personal information. This form of legislation primarily lays out performance-based standards, under which the regulatory requirement is that entities achieve (or avoid) certain conditions. Such legislation may also lay out specific means by which regulatory goals are to be achieved.

The second category describes forms of management-based regulatory delegation, under which administrative agencies promulgate regulations requiring organizations to develop security plans designed to achieve certain aspirational goals. Two notable examples are the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Financial Modernization Act (GLBA). The Federal Trade Commission also engages in such activity reactively through its data security consumer protection enforcement actions. The regulatory requirement in this case is the development of the plan itself (and possible adherence to the plan), rather than the necessary achievement of stated goals or usage of certain methods to achieve those goals.

This paper presents the results of an empirical study analyzing security breach incidence to evaluate the efficacy of information security regulation at preventing breaches of sensitive personal information. Publicly reported breach incidents serve as a proxy for the efficacy of organizations’ security measures, and while clearly limited in scope (as noted below) they are currently the only data point uniformly available across industrial sectors. Analysis of breaches reported between 2000 and 2010 reveals that the combination of prescriptive legislation and management-based regulatory delegation may be four times more effective at preventing breaches of sensitive personal information than is either method alone.

While this method of analysis bears certain limitations, even under unfavorable assumptions the results still support a conclusion that prescriptive standards should be added to existing regulations. Such standards would abate a current “race to the bottom,” whereby regulated entities adopt compliance plans consistent with “industry-standards” but often (and in some cases woefully) inadequate to achieve the aspirational goals of the regulation. Since the conclusion of this study, there have been two notable such additions of performance-based standards: 1) the inclusion of a breach notification requirement in HIPAA, and 2) the recent promulgation of regulations by the SEC requiring publicly traded companies to report material security risks and events to investors. The results of this analysis also support the expansion of management-based regulatory models to other industrial sectors.

The second component of empirical analysis presented in this paper includes the results of a qualitative study of Chief Information Security Officers (CISOs) at large U.S. regulated entities. The interview data reveals the effects of regulation both on information security practices and on the role of technical professionals within organizations. The results of these interviews suggest hypotheses to explain both the weaknesses in compliance plan design and the proposition that, notwithstanding new performance-based standards, security conditions remain ineffective.

The first hypothesis suggests that the relative effects of prescriptive legislation and management-based regulatory delegation on the role of technical professionals in organizations explain the inability of performance-based standards fully to address information security failures. The data suggest two specific outcomes – first, that current performance-based standards weaken the role of technical professionals; and second, that management-based models of regulatory delegation strengthen professionals’ role. This result stems from reliance on technical professionals’ skill in developing compliance plans to meet management-based regulatory goals. The current model of performance-based regulation, by contrast, under which security failures are exempt from (the regulatory penalty of) reporting when the compromised data is encrypted, decreases reliance on technical skill by effectively specifying one means-based approach to “compliance.” By redirecting essentially-fixed resources to a specific means of compliance addressing only a single threat, these performance-based standards hamper the ability of CISOs adequately to address other salient threats. In this regard, SBNs effectively lock the front door to the bank while leaving the back window wide open.

The second hypothesis suggests that the lack of proactive guidance by regulators hampers the ability of CISOs to justify requests for increased resources to address vulnerabilities not covered by performance-based standards. This hypothesis answers the question of why “industry-standards” may be so ineffective at achieving the aspirational goals of the regulation. Management-based regulatory delegation models rely heavily on standards of “reasonableness,” many of which scale to the size, complexity, and capabilities of the regulated entity. Reasonableness is a well-examined concept in law, but becomes problematic in the context of a highly-technical and fast-changing regulatory environment. Regulators’ failure to provide proactive guidance regarding what constitutes reasonable security hampers the ability of CISOs to justify the need for greater resources. Combined with the “redirection” of resources to address specific compliance objectives associated with performance-based standards, these pressures cause broad-based security plans to be inadequate (either in design or implementation) at addressing the broader base of threats facing the organization. The effects of this condition are evident in the abundance of “low-hanging fruit” available to regulators – review of the Federal Trade Commission’s data security enforcement actions reveals few answers to “gray areas” of reasonableness, and many examples of security failures extreme in degree.

These findings and analysis suggest three conclusions. First, regulators should increase the use of performance-based standards, specifically standards not tied to specific means of implementation. Second, management-based regulatory models should be expanded to other industrial sectors beyond finance and healthcare, perhaps through the promulgation of proactive regulations by the FTC consistent with its history of enforcement action. Third, regulators should provide more proactive guidance as to the definition of reasonable security, so as to avoid a “race to the bottom” in the development of security plans to address management-based regulatory goals.

Clare Sullivan, Digital Identity and Privacy

Comment by: Bryan Choi

PLSC 2012

Workshop draft abstract:

This paper examines the relationship between digital identity and privacy.

The paper analyses the legal nature and functions of digital identity in the context of commercial transactions. The analysis reveals that digital identity, in this context, consists of two sets of information. The primary set of information constitutes an individual’s transactional identity. This is the identity required for transactions. Transaction identity is a defined set of information which typically consists of full name, date of birth, gender, and identifying information such as a signature and/or unique number. This transaction identity acts as both gateway to, and gatekeeper of, more detailed and dynamic information which tells a story about the dealings and activities of the individual associated with the transaction identity.
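
As a rough illustration of the structure described above, the sketch below models transaction identity as a small record type that keys the larger, dynamic body of transactional information. The field names, types, and values are assumptions for illustration only; the paper defines the set functionally, not as a schema.

```python
# A minimal sketch of the two sets of information described above:
# a defined "transaction identity" record that acts as both gateway to,
# and gatekeeper of, a dynamic record of dealings and activities.
# All field names and values are hypothetical.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class TransactionIdentity:
    """The primary, defined set of information required to transact."""
    full_name: str
    date_of_birth: date
    gender: str
    # Identifying information: a signature reference and/or a unique number.
    signature_ref: Optional[str] = None
    unique_number: Optional[str] = None

# The secondary, dynamic set: the story of the individual's dealings,
# keyed by (and so gated behind) the transaction identity.
activity_record: dict[TransactionIdentity, list[str]] = {}

alice = TransactionIdentity(
    full_name="Alice Example",      # invented person
    date_of_birth=date(1980, 1, 1),
    gender="F",
    unique_number="ID-0001",        # invented identifier
)
activity_record[alice] = ["opened account", "purchased goods on credit"]
print(activity_record[alice])
```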

The paper distinguishes digital identity from privacy and then considers how privacy protects the two sets of information which constitute digital identity. The analysis reveals that while privacy can provide some protection for the broader collection of information, it does not adequately protect transaction identity.  The paper concludes by examining the right to identity, which is capable of protecting transactional identity, and by contrasting the right to identity with the right to privacy.

The discussion is relevant to common law and civil law jurisdictions which recognise and protect human rights.

Fred Stutzman and Woodrow Hartzog, Obscurity by Design

Comment by: Travis Breaux

PLSC 2012

Workshop draft abstract:

Currently, the most pressing issue for privacy regulators is the accumulation and use of consumer data by companies, including social media providers. Post-hoc responses by regulators to privacy violations, including violations by social media providers, do not sufficiently protect consumer privacy. To enhance consumer privacy, regulators recommend that privacy protections be built into all phases of the technology development lifecycle. This approach, known as privacy-by-design (PbD), mandates that companies proactively address privacy concerns so as to produce positive privacy outcomes for users.

Although well intentioned, PbD faces a number of challenges in implementation, including a lack of specificity and weak market forces motivating adoption. To date, applied PbD work has largely focused on back-end implementation principles, such as data minimization and security. Very little work has focused on integrating PbD into the design of interfaces or interaction. Additionally, although regulators have paid much attention to the potential harms committed by companies that hold personal information, the threat posed by other users has been largely neglected. In the context of social media, PbD has not yet addressed the “social.”

In this work, we argue that the design of privacy in social media user interaction is an integral concern that necessitates policy coordination between site designers and administrators. In social media sites, PbD practices for interaction are as important as those developed for data storage and security. Of course, the design of PbD practices for interaction is challenging. Interaction varies by site, culture, and context, and is not necessarily amenable to formal engineering requirements.

To address this challenge, we propose a novel, empirically grounded approach to PbD for social media interaction. Drawing on an established framework for online obscurity, which identifies a set of practices by which individuals shield their identity in online social interaction, we propose the four factors of online obscurity as a set of design and policy criteria for approaching PbD for user interaction in social media. We then illustrate how designers and administrators of sites can address these factors through a range of technical, policy, and behavioral “nudge” solutions. In doing so, our work improves PbD discourse by providing actionable, empirically grounded specifications that are both flexible and feasible to implement.

Stuart Shapiro, Categorical Denial: Deconstructing Personally Identifiable Information

Comment by: Lance Hoffman

PLSC 2012

Workshop draft abstract:

The concept of personally identifiable information (PII) has been drawing increased attention of late, sparked by problems with de-identification. Depending on who you talk to, PII is a distinction without meaning or an inescapable necessity for bounding regulation. Irrespective of their conclusions, one trait common to all these analyses of the “PII problem” is their failure to look at PII as the construct it fundamentally is: a category.

Categories constitute a basic conceptual building block of human thought, related to but distinct from other building blocks such as analogies. They are both socially and cognitively constructed and grounded in both objective and subjective perceptions. Most importantly for the purpose of analyzing the PII problem, they exhibit structure and a host of associated properties which continue to be investigated by researchers in a variety of fields, including cognitive science.

To fully grasp the problems and possibilities of PII, we must take it seriously as a category, as opposed to a legal or technical label. This paper aims to rigorously analyze PII as a category, leveraging the substantial body of existing research on how humans construct and use categories. This includes situating PII with respect to different types of categories, inferring its internal structure and related characteristics, and drawing out the implications of differential sorting behavior among subject matter experts and laypersons.

The problems with PII will not be resolved by abandoning the concept or by introducing ad hoc constructions. Ensuring the viability of PII as a category requires more explicit understanding and treatment of it as such. Doing so reveals new avenues both for understanding current difficulties and for addressing them in a coherent fashion consistent with what we know about categories qua categories.