Christopher Slobogin, Making the Most of United States v. Jones in a Surveillance Society: A Statutory Implementation of Mosaic Theory

Comment by: Susan Freiwald

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2098002

Workshop draft abstract:

In the Supreme Court’s recent decision in United States v. Jones, a majority of the Justices appeared to recognize that under some circumstances aggregation of information about an individual through governmental surveillance can amount to a Fourth Amendment search. If adopted by the Court, this notion—sometimes called “mosaic theory”—could bring about a radical change to Fourth Amendment jurisprudence, not just in connection with surveillance of public movements—the issue raised in Jones—but also with respect to the government’s increasingly pervasive record-mining efforts. One reason the Court might avoid the mosaic theory is the perceived difficulty of implementing it. This article provides, in the guise of a model statute, a means of doing so. More specifically, this article explains how proportionality reasoning and political process theory can provide concrete guidance for the courts and police in connection with physical and data surveillance.

A. Michael Froomkin, Privacy Impact Notices

Comment by: Stuart Shapiro

PLSC 2013

Workshop draft abstract:

The systematic collection of personal data is a big and urgent problem, and the pace of that collection is accelerating as the cost of collection plummets.  Worse, the continued development of data processing technology means that this data can be used and cross-indexed increasingly effectively and cheaply.  Add in the fact that there is more and more historical data — and self-reported data — to which the sensor data can be linked, and we will soon find ourselves in the equivalent of a digital goldfish bowl.

It is time – or even past time – to do something.  In this paper I suggest we borrow home-grown solutions from US environmental law.   By combining the best features of a number of existing environmental laws and regulations, and — not least — by learning from some of their mistakes, we can craft rules about data collection that would go some significant distance towards stemming the tide of privacy-destroying technologies being, and about to be, deployed.

I propose that we should require Privacy Impact Notices (PINs) before allowing large public or private projects which risk having a substantial impact on personal information privacy or on privacy in public. [“Privacy Impact Statements” would make for better parallelism with Environmental Impact Statements but the plural form of the acronym would be unfortunate.] The PINs requirement would be modeled on existing environmental laws, notably the National Environmental Policy Act of 1969 (NEPA), the law that called into being  the Environmental Impact Statement (EIS).  A PINs rule would be combined with other reporting requirements modeled on the Toxics Release Inventory (TRI). It would also take advantage of progress in ecosystem modeling, particularly the insight that complex systems like ecologies, whether of living things or the data about them, are dynamic systems that must be re-sampled over time in order to understand how they are changing and whether mitigation measures or legal protections are working.

The overarching goals of this regulatory scheme are familiar ones from environmental law and policy-making: to inform the public of decisions being considered (or made) that affect it, to solicit public feedback as plans are designed, and to encourage decision-makers to consider privacy — and public opinion — from an early stage in their design and approval processes.  That was NEPA’s goal, however imperfectly achieved. In addition, however, because the relevant technologies change quickly, and because the accumulation of personal information by those gathering data can have unexpected synergistic effects as we learn new ways of linking previously disparate data sets, we now know from the environmental law and policy experience that it is also important to invest effort in on-going, or at least annual, reporting requirements in order to allow the periodic re-appraisal of the legitimacy and net social utility of the regulated activity (here, data collection programs).

There is an important threshold issue. Privacy regulation today differs from contemporary environmental regulation in one particularly important way: there are relatively few data-privacy (or privacy-in-public) protective laws and rules on the books.  Thus, privacy law today more resembles anti-pollution law before the Clean Air Act or the Clean Water Act. NEPA’s rules are triggered by state action: a government project, or a request to issue a permit.  In order to give the PINs system traction outside of direct governmental data collection, additional regulation reaching private conduct will be required.  That could be direct regulation of large private-sector data gathering or, as a first step, it could be something less effective but easier to legislate, such as a rule reaching all government contractors and suppliers.  Legislation could be federal, but it might also be effective at the state level.

The proposals in this paper intersect with active and on-going debates over the value of notice policies.  They build on, but in at least one critical way diverge from, the work of Dennis D. Hirsch, who in 2006 had the important insight — even truer today — that many privacy problems resemble pollution problems and that therefore privacy-protective regulation could profitably be based on the latest learning from environmental law.

Deven Desai, Data Hoarding: Privacy in the Age of Artificial Intelligence

Comment by: Kirsten Martin

PLSC 2013

Workshop draft abstract:

We live in an age of data hoarding. Those who have data never wish to release it. Those who don’t have data want to grab it and increase their stores. In both cases—refusing to release data and gathering data—the mosaic theory, which accepts that “seemingly insignificant information may become significant when combined with other information,”1 seems to explain the result. Discussions of mosaic theory focus on executive power. In national security cases the government refuses to share data lest it reveal secrets. Yet recent Fourth Amendment cases limit the state’s ability to gather location data, because under the mosaic theory the aggregated data could reveal more than what isolated surveillance would reveal.2 The theory describes a problem but yields wildly different results. Worse, it does not explain what to do about data collection, retention, and release in different contexts. Furthermore, if data hoarding is a problem for the state, it is one for the private sector too. Private companies, such as Amazon, Google, Facebook, and Wal-Mart, gather and keep as much data as possible, because they wish to learn more about consumers and how to sell to them. Researchers gather and mine data to open new doors in almost every scientific discipline. Like the government, neither group is likely to share the data it collects or increase transparency, for in data lies power.

I argue that just as we have started to look at the implications of mosaic theory for the state, we must do so for the private sector. So far, privacy scholarship has separated government and private sector data practices. That division is less tenable today. Not only governments, but also companies and scientists assemble digital dossiers. Digital dossiers now emerge far faster than they did just ten years ago, and with deeper information about us. Individualized data sets matter, but they are now part of something bigger. Large, networked data sets—so-called Big Data—and data mining techniques simultaneously allow someone to study large groups, to know what an individual has done in the past, and to predict certain future outcomes.3 In all sectors, the vast wave of automatically gathered data points is no longer a barrier to such analysis. Instead, it fuels and improves the analysis, because new systems learn from data sets. Thanks to artificial intelligence, the fantasy of a few data points connecting to and revealing a larger picture may be a reality.

Put differently, discussions about privacy and technology in all contexts miss a simple, yet fundamental, point: artificial intelligence changes everything about privacy. Given that large data sets are here to stay and artificial intelligence techniques promise to revolutionize what we learn from those data sets, the law must understand the rules for these new avenues of information. To address this challenge, I draw on computer science literature to test claims about the harms or benefits of data collection and use. By showing the parallels between state and private sector claims about data and mapping the boundaries of those claims, this Article offers a way to understand and manage what is at stake in the age of pervasive data hoarding and automatic analysis possible with artificial intelligence.


1 Jameel Jaffer, The Mosaic Theory, 77 SOCIAL RESEARCH 873, 873 (2010).

2 See, e.g., Orin Kerr, The Mosaic Theory of the Fourth Amendment, 110 MICH. L. REV. __ (forthcoming 2012) (criticizing application of mosaic theory to analysis of when collective surveillance steps constitute a search).

3 See, e.g., Hyunyoung Choi & Hal Varian, Predicting the Present with Google Trends, Google, Inc. (Apr. 2009), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1659302.

Daniel J. Solove and Woodrow Hartzog, The FTC and the New Common Law of Privacy

Comment by: Gerald Stegmaier & Chris Jay Hoofnagle

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2312913

Workshop draft abstract:

One of the great ironies about information privacy law is that the primary regulation of privacy in the United States is not really law and has barely been studied in a scholarly way.  Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices.  Despite over fifteen years of FTC enforcement, there is no meaningful body of case law to show for it.  The cases have nearly all resulted in settlement agreements.  Nevertheless, companies look to these agreements to guide their decisions regarding privacy practices.  Those involved with helping businesses comply with privacy law – from chief privacy officers to inside counsel to outside counsel – parse and analyze the FTC’s settlement agreements, reports, and activities as if they were pronouncements by the High Court.

In this article, we contend that the FTC’s privacy jurisprudence is the functional equivalent of a body of common law, and we examine it as such.  The FTC has said quite a lot through its actions and settlement agreements. And FTC privacy jurisprudence is the broadest and most influential regulating force on information privacy in the United States – more so than nearly any privacy statute and any common law tort.  The statutory law regulating privacy is diffuse and discordant, and the common law torts fail to regulate the majority of activities concerning privacy.  Despite the central governing role of the FTC’s privacy activity, it has not received much scholarly attention.

In Part I of this article, we discuss how the FTC’s actions function practically as a body of common law for privacy.   In the late 1990s, it was far from clear that the body of law regulating privacy policies would come from the FTC and not from traditional contract and promissory estoppel.  Though privacy policies often have all the indicia of enforceable promises, they have rarely been utilized as contracts.  On the few occasions when contract law is invoked for privacy policies, it usually fails. We explore how and why the current state of affairs developed.  In Part II, we examine the principles that emerge from this body of law.  These principles extend far beyond merely honoring promises.   We discuss how these principles compare to principles in other legal domains, such as contract law. In Part III, we explore the implications of these developments and the ways that this body of law could develop.

Helen Nissenbaum, Respect for Context as a Benchmark for Privacy Online: What it is and isn’t

Comment by: James Rule

PLSC 2013

Workshop draft abstract:

In February 2012, the Obama White House unveiled a Privacy Bill of Rights within the report, Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy, developed by the Department of Commerce, NTIA. Among the Bill of Rights’ seven principles, the third, “Respect for Context,” was explained as the expectation that “companies will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data.” (p. 47) Compared with the other six, which were more recognizable as kin of traditional principles of fair information practices, such as, for example, the OECD Privacy Guidelines, the principle of Respect for Context (PRC) was intriguingly novel.

Generally positive reactions to the White House Report and to the principle of respect-for-context aligned many parties who have disagreed with one another on virtually everything else to do with privacy. That the White House publicly and forcefully acknowledged the privacy problem buoyed those who have worked on it for decades; yet how far the rallying cry around respect-for-context will push genuine progress depends critically on how this principle is interpreted. In short, convergent reactions may be too good to be true if they stand upon divergent interpretations, and whether the Privacy Bill of Rights fulfills its promise as a watershed for privacy will depend on which of those interpretations drives regulators – public or private – to action. At least, this is the argument my article develops.

Commentaries surrounding the Report reveal five prominent interpretations: a) context as determined by purpose specification; b) context as determined by technology, or platform; c) context as determined by business sector, or industry; d) context as determined by business model; and e) context as determined by social sphere. In the report itself, the meaning seems to shift from section to section or is left indeterminate; without dwelling too long on what exactly the NTIA may or may not have intended, my article discusses these five interpretations, focusing on what is at stake in adopting any one of them. Arguing that a) and c) would sustain existing stalemates and inertia, and that b) and d), though a step forward, would not realize the principle’s compelling promise, I defend e), which conceives of context as social sphere. Drawing on ideas in Privacy in Context: Technology, Policy, and the Integrity of Social Life (2010), I argue (1) that substantive constraints derived from context-specific informational norms are essential for infusing fairness into purely procedural rule sets; and (2) that rule sets that effectively protect privacy depend on a multi-stakeholder process (to which the NTIA is strongly committed) that is truly representative, which in turn depends on properly identifying relevant social spheres.

Jane Yakowitz, The New Intrusion

Comment by: Jon Mills

PLSC 2012

Workshop draft abstract:

The tort of intrusion upon seclusion offers the best theory to target legitimate privacy harms in the information age. This Article introduces a new taxonomy that organizes privacy law across four key stages of information flow—observation, capture (the creation of a record), dissemination, and use. Popular privacy proposals place hasty, taxing constraints on dissemination and use. Meanwhile, regulation targeting information flow at its source—at the point of observation—is undertheorized and ripe for prudent expansion.

Intrusion imposes liability for offensive observations. The classic examples involve intruders who gain unauthorized access to information inside the home or surreptitiously intercept telephone conversations, but the concept of seclusion is abstract and flexible. Courts have honored expectations of seclusion in public when the intruder’s efforts to observe were too aggressive and exhaustive. They have also recognized expectations of seclusion in files and records outside the plaintiff’s possession. This article proposes a framework for extending the intrusion tort to new technologies by assigning liability to targeted and offensive observations of the data produced by our gadgets.

Intrusion is a theoretically and constitutionally sound form of privacy protection because the interests in seclusion and respite from social participation run orthogonal to free information flow. Seclusion can be invaded without the production of any new information, and conversely, sensitive new information can become available without intrusion. This puts the intrusion tort in stark contrast with the tort of public disclosure, where the alleged harm is a direct consequence of an increase in knowledge. Since tort liability for intrusion regulates conduct (observation) instead of speech (dissemination), it does not prohibit a person from saying what he already knows, and therefore can coexist comfortably with the bulk of First Amendment jurisprudence.

Peter Swire, Backdoors

Comment by: Orin Kerr

PLSC 2012

Workshop draft abstract:

This article, which hopefully will be the core of a forthcoming book, uses the idea of “backdoors” to unify previously disparate privacy and security issues in a networked and globalized world.  Backdoors can provide government law enforcement and national security agencies with lawful (or unlawful) access to communications and data.  The same, or other, backdoors can also provide private actors, including criminals, with access to communications and data.

Four areas illustrate the importance of the law, policy, and technology of backdoors:

(1) Encryption.  As discussed in my recent article on “Encryption and Globalization,” countries including India and China are seeking to regulate encryption in ways that would give governments access to encrypted communications.  An example is the Chinese insistence that hardware and software built there use non-standard cryptosystems developed in China, rather than globally-tested systems.  These types of limits on encryption, where implemented, give governments a pipeline, or backdoor, into the stream of communications.

(2) CALEA.  Since 1994, the U.S. statute CALEA has required telephone networks to make communications “wiretap ready.”  CALEA requires holes, or backdoors, in communications security in order to assure that the FBI and other agencies have a way into communications flowing through the network.  The FBI is now seeking to expand CALEA-style requirements to a wide range of Internet communications that are not covered by the 1994 statute.

(3) Cloud computing.  We are in the midst of a massive transition to storage in the cloud of companies’ and individuals’ data.  Cloud providers promise strong security for the stored data. However, government agencies increasingly are seeking to build automated ways to gain access to the data, potentially creating backdoors for large and sensitive databases.

(4) Telecommunications equipment.  A newly important issue for defense and other government agencies is the “secure supply chain.”  The concern here arises from reports that major manufacturers, including the Chinese company Huawei, are building equipment that has the capability to “phone home” about data that moves through the network.  The Huawei facts (assuming they are true) illustrate the possibility that backdoors can be created systematically by non-government actors on a large scale in the global communications system.

These four areas show key similarities with the more familiar software setting for the term “backdoor” – a programmer who has access to a system leaves a way to re-enter the system after manufacturing is complete.  White-hat and black-hat hackers have often exploited backdoors to gain access to supposedly secure communications and data.  Lacking to date has been any general theory, or comparative discussion, about the desirability of backdoors across these settings.  There are of course strongly-supported arguments for government agencies to have lawful access to data in appropriate settings, and these arguments gained great political support in the wake of September 11.  The arguments for cybersecurity and privacy, on the other hand, counsel strongly against pervasive backdoors throughout our computing systems.

Government agencies, in the U.S. and globally, have pushed for more backdoors in multiple settings, for encryption, CALEA, and the cloud.  There has been little or no discussion to date, however, about what overall system of backdoors should exist to meet government goals while also maintaining security and privacy.  The unifying theme of backdoors will highlight the architectural and legal decisions that we face in our pervasively networked and globalized computing world.

Richard Warner and Robert H. Sloan, Behavioral Advertising: From One-Sided Chicken to Informational Norms

Comment by: Aaron Massey

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2034424

Workshop draft abstract:

When you download the free audio recording software from Audacity, your transaction is like any traditional provision of a product for free or for a fee—with one difference:  you agree that Audacity may collect your information and use it to send you advertising.  Billions of such “data commercialization” (DC) exchanges occur daily.  They feed data to a massive advertising ecosystem that constructs individual profiles in order to tailor web site advertising as closely as possible to individual interests.  The vast majority of us object.  We want considerably more control over our information than websites that participate in the advertising ecosystem allow.  Our misgivings are evidently idle, however.  We routinely enter DC exchanges when we visit CNN.com, use Gmail, or visit any of a vast number of other websites.  Why?  And, what, if anything, should we do about it?

We answer both questions by describing DC exchanges as a game of Chicken that we play over and over with sellers under conditions that guarantee we will always lose.  Chicken is traditionally played with cars.  Two drivers at opposite ends of a road drive toward each other at high speed.  The first to swerve loses.  We play a similar game with sellers—with one crucial difference:  we know in advance that the sellers will never “swerve.”

In classic Chicken with cars, the players’ preferences are mirror images of each other.  Imagine, for example, Phil and Phoebe face each other in their cars.  Phil’s first choice is that Phoebe swerve first.  His second choice is that they swerve simultaneously.  Mutual cowardice is better than a collision.  Unilateral cowardice is too, so third place goes to his swerving before Phoebe does.  Collision ranks last.  Phoebe’s preferences are the same except that she is in Phil’s place and Phil in hers.  Now change the preferences a bit, and we have the game we play in DC exchanges.  Phil’s preferences are the same, but Phoebe’s differ.  She still prefers that Phil swerve first, but collision is in second place. Suppose Phoebe was recently jilted by her lover; as a result, her first choice is to make her male opponent reveal his cowardice by swerving first, but her second choice is a collision that will kill him and her broken-hearted self.  Given these preferences, Phoebe will never swerve.  Phil knows Phoebe has these preferences, so he knows he has only two options:  he swerves, and she does not; and, neither swerves.  Since he prefers the first, he will swerve.  Call this One-Sided Chicken.

We play One-Sided Chicken when in our website visits we enter DC exchanges.  We argue that buyers’ preferences parallel Phil’s while sellers’ parallel heart-broken, “collision second” Phoebe’s.  We name the players’ choices in this DC game “Give In” (the “swerve” equivalent) and “Demand” (the “don’t swerve” equivalent).  For buyers, “Demand” means refusing to use the website unless the seller’s data collection practices conform to the buyer’s informational privacy preferences.  “Give In” means permitting the seller to collect and process information in accord with whatever information processing policy it pursues.  For sellers, “Demand” means refusing to alter their information processing practices even when they conflict with a buyer’s preferences.  “Give In” means conforming information processing to a buyer’s preferences.  We contend that sellers’ first preference is to demand while buyers give in, and that their second is the collision equivalent in which both sides demand.  Demanding sellers leave buyers only two options:  give in and use the site, or demand and do not.  Since buyers prefer the first option, they always give in.
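
To make the payoff structure concrete, here is a minimal sketch (in Python) of the DC game with ordinal payoffs. The specific numbers, and the assignment of the seller's two lowest-ranked outcomes, are illustrative assumptions chosen only to match the preference orderings described above; they are not taken from the paper. Enumerating the strategy profiles confirms the argument: the unique pure-strategy equilibrium is the buyer giving in while the seller demands.

```python
# Minimal sketch of One-Sided Chicken as a two-player game.
# Ordinal payoffs (4 = best, 1 = worst) are illustrative assumptions
# chosen to match the preference orderings described in the abstract.
from itertools import product

STRATEGIES = ["Give In", "Demand"]

# payoffs[(buyer_move, seller_move)] = (buyer_payoff, seller_payoff)
payoffs = {
    ("Give In", "Give In"): (3, 2),  # mutual "swerve": buyer's second choice
    ("Give In", "Demand"):  (2, 4),  # buyer gives in, seller demands: seller's best outcome
    ("Demand",  "Give In"): (4, 1),  # buyer's best outcome; assumed worst for the seller
    ("Demand",  "Demand"):  (1, 3),  # "collision": buyer's worst, seller's second-best
}

def pure_nash_equilibria(payoffs):
    """Return profiles where neither player gains by deviating unilaterally."""
    equilibria = []
    for buyer, seller in product(STRATEGIES, repeat=2):
        b_pay, s_pay = payoffs[(buyer, seller)]
        buyer_ok = all(payoffs[(alt, seller)][0] <= b_pay for alt in STRATEGIES)
        seller_ok = all(payoffs[(buyer, alt)][1] <= s_pay for alt in STRATEGIES)
        if buyer_ok and seller_ok:
            equilibria.append((buyer, seller))
    return equilibria

print(pure_nash_equilibria(payoffs))  # -> [('Give In', 'Demand')]
```

With these orderings, demanding is a dominant strategy for the seller, so the buyer's best response is to give in, which is the "always lose" outcome the abstract describes.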

It would be better if we were not locked into One-Sided Chicken.  Ideally, informational norms should regulate the flow of personal information.  Informational norms are norms that constrain the collection, use, and distribution of personal information.  In doing so, they implement tradeoffs between protecting privacy and realizing the benefits of processing information.  Unfortunately, DC exchanges are one of a number of situations in which rapid advances in information processing technology have outrun the slow evolution of norms.

How do we escape from One-Sided Chicken to appropriate informational norms?  Chicken with cars contains a clue.  In a late-1950s B-grade Hollywood youth movie, Phil would introduce broken-hearted Phoebe to just-moved-to-town Tony.  They would fall in love, and, in a key dramatic turning point, Phil and Phoebe would play Chicken.  Phoebe would see that Tony is also in the car and be the first to swerve.  We need a “Tony” to change businesses’ preferences.  We contend that we would all become the DC exchange equivalent of Tony if we had close to perfect tracking prevention technologies.  Tracking prevention technologies are perfect when they are 100% effective in blocking information processing for advertising purposes, completely transparent in their effect, effortless to use, and permit the full use of the site.  Phoebe swerves because she does not want to lose her beloved Tony.  Sellers are “in love with” advertising revenue.  We argue that they will “swerve” to avoid losing the revenue they would lose if buyers prevented data collection for advertising purposes.  The result will be that, in a sufficiently competitive market, appropriate informational norms arise.  We conclude by considering the prospects for approximating perfect tracking prevention technologies.

David Thaw, Comparing Management-Based Regulation and Prescriptive Legislation: How to Improve Information Security Through Regulation

Comment by: Derek Bambauer

PLSC 2012

Workshop draft abstract:

Information security regulation of private entities in the United States can be grouped into two general categories. This paper examines these two categories and presents the results of an empirical study comparing their efficacy at addressing organizations’ failures to protect sensitive consumer information. It examines hypotheses about the nature of regulation in each category to explain their comparative efficacy, and presents conclusions suggesting two changes to existing regulation designed to improve organizations’ capacity to protect sensitive consumer information.

The first category is prescriptive legislation, which lays out performance standards that regulated entities must achieve. State Security Breach Notification (SBN) statutes are the primary example of this type, and require organizations to report to consumers breaches involving certain types of sensitive personal information. This form of legislation primarily lays out performance-based standards, under which the regulatory requirement is that entities achieve (or avoid) certain conditions. Such legislation may also lay out specific means by which regulatory goals are to be achieved.

The second category describes forms of management-based regulatory delegation, under which administrative agencies promulgate regulations requiring organizations to develop security plans designed to achieve certain aspirational goals. Two notable examples are the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Financial Modernization Act (GLBA). The Federal Trade Commission also engages in such activity reactively through its data security consumer protection enforcement actions. The regulatory requirement in this case is the development of the plan itself (and possible adherence to the plan), rather than the necessary achievement of stated goals or usage of certain methods to achieve those goals.

This paper presents the results of an empirical study analyzing security breach incidence to evaluate the efficacy of information security regulation at preventing breaches of sensitive personal information. Publicly reported breach incidents serve as a proxy for the efficacy of organizations’ security measures, and while clearly limited in scope (as noted below) they are currently the only data point uniformly available across industrial sectors. Analysis of breaches reported between 2000 and 2010 reveals that the combination of prescriptive legislation and management-based regulatory delegation may be four times more effective at preventing breaches of sensitive personal information than is either method alone.
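
As a purely hypothetical illustration of the kind of comparison this involves (the counts, exposure figures, and regime labels below are invented for the sketch and are not the study's data or method), one could normalize reported breaches by exposure within each regulatory regime and compare the resulting rates:

```python
# Purely hypothetical sketch of comparing breach incidence across regulatory regimes.
# All numbers are invented placeholders, not data from the study.

breach_counts = {
    "prescriptive_only": 120,       # e.g., entities subject only to SBN-style statutes
    "management_based_only": 110,   # e.g., entities subject only to HIPAA/GLBA-style rules
    "combined": 30,                 # entities subject to both kinds of regulation
}

# Exposure measured in entity-years so regimes of different sizes are comparable.
entity_years = {
    "prescriptive_only": 10_000,
    "management_based_only": 10_000,
    "combined": 10_000,
}

rates = {regime: breach_counts[regime] / entity_years[regime] for regime in breach_counts}
best_single = min(rates["prescriptive_only"], rates["management_based_only"])

for regime, rate in rates.items():
    print(f"{regime}: {rate:.4f} breaches per entity-year")
print(f"combined regime rate is {best_single / rates['combined']:.1f}x lower than "
      "the better single-regime rate")
```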

While this method of analysis bears certain limitations, even under unfavorable assumptions the results still support a conclusion that prescriptive standards should be added to existing regulations. Such standards would abate a current “race to the bottom,” whereby regulated entities adopt compliance plans consistent with “industry-standards” but often (and in some cases woefully) inadequate to achieve the aspirational goals of the regulation. Since the conclusion of this study, there have been two notable such additions of performance-based standards: 1) the inclusion of a breach notification requirement in HIPAA, and 2) the recent promulgation of regulations by the SEC requiring publicly traded companies to report material security risks and events to investors. The results of this analysis also support the expansion of management-based regulatory models to other industrial sectors.

The second component of empirical analysis presented in this paper includes the results of a qualitative study of Chief Information Security Officers (CISOs) at large U.S. regulated entities. The interview data reveals the effects of regulation both on information security practices and on the role of technical professionals within organizations. The results of these interviews suggest hypotheses to explain both the weaknesses in compliance plan design and the proposition that, notwithstanding new performance-based standards, security conditions remain inadequate.

The first hypothesis suggests that the relative effects of prescriptive legislation and management-based regulatory delegation on the role of technical professionals in organizations explain the inability of performance-based standards fully to address information security failures. The data suggest two specific outcomes – first, that current performance-based standards weaken the role of technical professionals; and second, that management-based models of regulatory delegation strengthen professionals’ role. This result stems from reliance on technical professionals’ skill in developing compliance plans to meet management-based regulatory goals. The current model of performance-based regulation, by contrast, under which security failures are exempt from (the regulatory penalty of) reporting when the compromised data is encrypted, decreases reliance on technical skill by effectively specifying one means-based approach to “compliance.” By redirecting essentially-fixed resources to a specific means of compliance addressing only a single threat, these performance-based standards hamper the ability of CISOs adequately to address other salient threats. In this regard, SBNs effectively lock the front door to the bank while leaving the back window wide open.

The second hypothesis suggests that the lack of proactive guidance by regulators hampers the ability of CISOs to justify requests for increased resources to address vulnerabilities not covered by performance-based standards. This hypothesis answers the question of why “industry-standards” may be so ineffective at achieving the aspirational goals of the regulation. Management-based regulatory delegation models rely heavily on standards of “reasonableness,” many of which scale to the size, complexity, and capabilities of the regulated entity. Reasonableness is a well-examined concept in law, but becomes problematic in the context of a highly-technical and fast-changing regulatory environment. Regulators’ failure to provide proactive guidance regarding what constitutes reasonable security hampers the ability of CISOs to justify the need for greater resources. Combined with the “redirection” of resources to address specific compliance objectives associated with performance-based standards, these pressures cause broad-based security plans to be inadequate (either in design or implementation) at addressing the broader base of threats facing the organization. The effects of this condition are evident in the abundance of “low-hanging fruit” available to regulators – review of the Federal Trade Commission’s data security enforcement actions reveals few answers to “gray areas” of reasonableness, and many examples of security failures extreme in degree.

These findings and analysis suggest three conclusions. First, regulators should increase the use of performance-based standards, specifically standards not tied to specific means of implementation. Second, management-based regulatory models should be expanded to other industrial sectors beyond finance and healthcare, perhaps through the promulgation of proactive regulations by the FTC consistent with its history of enforcement action. Third, regulators should provide more proactive guidance as to the definition of reasonable security, so as to avoid a “race to the bottom” in the development of security plans to address management-based regulatory goals.