Archives

Paul Ohm, What is Sensitive Information?

Comment by: Peter Swire

PLSC 2013

Workshop draft abstract:

The diverse and dizzying variety of regulations, laws, standards, and corporate practices in place to deal with information privacy around the world shares at its core at least one unifying concept: sensitive information. Some categories of information—health, financial, education, and child-related, to name only a few—are deemed different from others, and data custodians owe special duties and face many more constraints when it comes to the maintenance of these categories of information.

Sensitive information is a show-stopper. Otherwise lax regulations become stringent when applied to sensitive information. Permissive laws set stricter rules for the sensitive. The label plays a prominent role in rhetoric and debate, as even the most ardent believers in free markets and unfettered trade in information will bow to the ethical edict never to sell sensitive information, regardless of the price.

Despite the importance and prevalence of sensitive information, very little legal scholarship has systematically studied this important category. Sensitive information is deeply undertheorized. What makes a type of information sensitive? Are the sensitive categories set in stone, or do they vary with time and technological advances? What are the political and rhetorical mechanisms that lead a type of information into or out of the designation? Why does the designation serve as such a powerful trump card? This Article seeks to answer these questions and more.

The Article begins by surveying the landscape of sensitive information. It identifies dozens of examples of special treatment for sensitive information in rules, laws, policy statements, academic writing, and corporate practices from a wide number of jurisdictions, in the United States and beyond.

Building on this survey, the Article reverse engineers the rules of decision that define sensitive information. From this, it develops a multi-factor test that may be applied to explain, ex post, the types of information that have been deemed sensitive in the past and to predict, ex ante, types of information that may be identified as sensitive soon. First, sensitive information can lead to significant forms of widely recognized harm. Second, sensitive information is the kind that exposes the data subject to a high probability of such harm. By focusing in particular on these two factors, this Article sits alongside the work of many other privacy scholars who have in recent years shifted their focus to privacy harm, a long-neglected topic. Third, sensitive information is often governed by norms of limited sharing. Fourth, sensitive information is rare and tends not to exist in many databases. Fifth, sensitive information tends to involve harms that apply to the majority—often the ruling majority—of data subjects, while information leading to harms affecting only a minority less readily secures the label.
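
Purely as an illustration of how the factors might combine, here is a minimal sketch that encodes the five factors as boolean flags with an ad hoc threshold. The factor names, the geolocation profile, and the threshold are all assumptions made for exposition; the Article's test is a qualitative legal standard, not an algorithm.

```python
# Illustrative only: a toy encoding of the five factors as boolean
# flags. The names, profile, and threshold are assumptions, not part
# of the Article's legal test.

FACTORS = [
    "significant_recognized_harm",   # 1. can lead to widely recognized harm
    "high_probability_of_harm",      # 2. high probability the harm occurs
    "norms_of_limited_sharing",      # 3. governed by norms of limited sharing
    "rare_in_databases",             # 4. rare; absent from most databases
    "majoritarian_harm",             # 5. harm reaches the (ruling) majority
]

def looks_sensitive(profile: dict, threshold: int = 4) -> bool:
    """Screen a data type: does it satisfy at least `threshold` factors?"""
    return sum(bool(profile.get(f)) for f in FACTORS) >= threshold

# Hypothetical profile for geolocation data, per the discussion below.
geolocation = {
    "significant_recognized_harm": True,   # e.g., stalking
    "high_probability_of_harm": True,
    "norms_of_limited_sharing": True,
    "rare_in_databases": False,            # increasingly ubiquitous
    "majoritarian_harm": True,
}
print(looks_sensitive(geolocation))  # True under these assumed inputs
```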

To test the predictive worth of these factors, the Article applies them to assess whether two forms of data that have been hotly debated by information privacy experts in recent years are poised to join the ranks of the sensitive: geolocation data and remote biometric data. Neither has yet been widely accepted as sensitive in privacy law, yet both trigger many of the factors listed above. Of the two, geolocation data is further down the path, already recognized by laws, regulations, and company practices worldwide. By identifying and justifying the treatment of geolocation and remote biometric data as sensitive, this Article hopes to spur privacy law reform in many jurisdictions.

Turning from the rules of decision used in the classification of sensitive information to the public choice mechanisms that lead particular types of information to be classified, the Article tries to explain why new forms of sensitive information often fail to be recognized until years after they satisfy most of the factors listed above. It argues that this stems from the way political institutions incorporate new learning from technology slowly and haphazardly. To improve this situation, the Article suggests new administrative mechanisms to identify new forms of sensitive information on a much more accelerated timeframe. It specifically proposes that in the United States, the FTC undertake a periodic—perhaps biennial or triennial—review of potential categories of sensitive information suggested by members of the public. The FTC would be empowered to classify particular types of information as sensitive, or to remove the designation from types that are no longer sensitive because of changes in technology or society. It would base these decisions on rigorous empirical review of the factors listed above, focusing in particular on the harms inherent in the data and the probability of harm, given likely threat models. The Article illustrates the idea with a type of information that has rarely been treated as sensitive: calendar information. Calendar information tends to reveal location, associations, and other forms of closely held, confidential information, yet few recognize the status of this potentially new class of sensitive information. We might consider asking the FTC whether it deserves to be categorized as sensitive.

Finally, the Article tackles the vexing and underanalyzed problem of idiosyncratically sensitive information. Since traditional conceptions of sensitive information cover primarily majoritarian concerns, they do little to protect data that feels sensitive only to smaller groups. This is a significant gap in the information privacy landscape, as every person cares about idiosyncratic forms of information that worry only a few. It may be that traditional forms of information privacy law are ill-equipped to deal with idiosyncratically sensitive information. Regulating it will require more aggressive forms of regulation, for example premising new laws on the amount of information held, not only on the type of information held, on the theory that larger databases are likelier to hold idiosyncratically sensitive information than smaller ones.

Peter Swire, Backdoors

Comment by: Orin Kerr

PLSC 2012

Workshop draft abstract:

This article, which I hope will become the core of a forthcoming book, uses the idea of “backdoors” to unify previously disparate privacy and security issues in a networked and globalized world.  Backdoors can provide government law enforcement and national security agencies with lawful (or unlawful) access to communications and data.  The same, or other, backdoors can also provide private actors, including criminals, with access to communications and data.

Four areas illustrate the importance of the law, policy, and technology of backdoors:

(1) Encryption.  As discussed in my recent article on “Encryption and Globalization,” countries including India and China are seeking to regulate encryption in ways that would give governments access to encrypted communications.  An example is the Chinese insistence that hardware and software built there use non-standard cryptosystems developed in China, rather than globally tested systems.  These types of limits on encryption, where implemented, give governments a pipeline, or backdoor, into the stream of communications (a toy key-escrow sketch after this list illustrates the mechanism).

(2) CALEA.  Since 1994, the U.S. Communications Assistance for Law Enforcement Act (CALEA) has required telephone networks to make communications “wiretap ready.”  CALEA requires holes, or backdoors, in communications security in order to assure that the FBI and other agencies have a way into communications flowing through the network.  The FBI is now seeking to expand CALEA-style requirements to a wide range of Internet communications that are not covered by the 1994 statute.

(3) Cloud computing.  We are in the midst of a massive transition of companies’ and individuals’ data to storage in the cloud.  Cloud providers promise strong security for the stored data.  However, government agencies increasingly are seeking to build automated ways to gain access to the data, potentially creating backdoors into large and sensitive databases.

(4) Telecommunications equipment.  A newly important issue for defense and other government agencies is the “secure supply chain.”  The concern here arises from reports that major manufacturers, including the Chinese company Huawei, are building equipment that has the capability to “phone home” about data that moves through the network.  The Huawei facts (assuming they are true) illustrate the possibility that backdoors can be created systematically by non-government actors on a large scale in the global communications system.
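
To make the key-escrow mechanism referenced in item (1) concrete, here is a minimal sketch using the symmetric Fernet scheme from the Python `cryptography` package purely as a stand-in. The “escrow agency” is hypothetical; the point is only that whoever holds the escrowed key can read the traffic, lawfully or not.

```python
# Minimal key-escrow sketch: one way an encryption backdoor is built.
# Fernet (symmetric encryption) stands in for whatever cryptosystem a
# regulator mandates; the "escrow agency" is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()

# The backdoor: a copy of the key is deposited with a third party.
escrow_agency_copy = key

ciphertext = Fernet(key).encrypt(b"private message")

# Sender and recipient can decrypt -- and so can anyone who obtains
# the escrowed key, whether a government agency or a criminal.
print(Fernet(escrow_agency_copy).decrypt(ciphertext))  # b'private message'
```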

These four areas show key similarities with the more familiar software setting for the term “backdoor” – a programmer with access to a system during development leaves a way to re-enter it after manufacturing is complete.  White-hat and black-hat hackers have often exploited backdoors to gain access to supposedly secure communications and data.  Lacking to date has been any general theory, or comparative discussion, about the desirability of backdoors across these settings.  There are of course strongly supported arguments for government agencies to have lawful access to data in appropriate settings, and these arguments gained great political support in the wake of September 11.  The arguments for cybersecurity and privacy, on the other hand, counsel strongly against pervasive backdoors throughout our computing systems.
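
A minimal sketch of that classic software backdoor, purely illustrative: a hard-coded maintenance credential left in an authentication routine. All names and values here are invented.

```python
# Toy illustration of a programmer-left backdoor. All names invented.
import hmac

USERS = {"alice": "correct horse battery staple"}  # legitimate store

_MAINTENANCE_PASSWORD = "letmein-4-support"  # left in after development

def authenticate(user: str, password: str) -> bool:
    expected = USERS.get(user)
    if expected is not None and hmac.compare_digest(expected, password):
        return True
    # The backdoor: any account opens with the maintenance password,
    # whether used by the original programmer or by an attacker.
    return hmac.compare_digest(_MAINTENANCE_PASSWORD, password)

print(authenticate("alice", "letmein-4-support"))  # True, via the backdoor
```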

Government agencies, in the U.S. and globally, have pushed for more backdoors in multiple settings: encryption, CALEA, and the cloud.  There has been little or no discussion to date, however, about what overall system of backdoors should exist to meet government goals while also maintaining security and privacy.  The unifying theme of backdoors will highlight the architectural and legal decisions that we face in our pervasively networked and globalized computing world.

Peter Swire, Social Networks, Privacy, and Freedom of Association

Comment by: Harry Surden

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1989516

Workshop draft abstract:

This article introduces a topic that has not drawn attention to date but will undoubtedly become important to the legal treatment of social networks.  Many people have written insightfully about the individual’s right to privacy, and how that can be threatened when “friends” or others use social networks to spread information about you without your consent.  Other people have written about how social networks are powerful tools for political mobilization, fostering the freedom of association.  Strangely enough, my research has not found any analysis of how the two fit together — how freedom of association interacts with privacy protection.

This discussion draft highlights the profound connection between social networking and freedom of association.  At the most basic level, linguistically, “networks” and “associations” are close synonyms.  They both depend on “links” and “relationships.”  A tool that enables lots and lots of networking is also a tool that enables lots and lots of association.

I stumbled into this topic due to a happenstance of work history.  I have long worked and written on privacy and related information technology issues, including as the Chief Counselor for Privacy under President Clinton.  Then, during the Obama transition, I was asked to be counsel to the New Media team.  These were the people who had done such a good job at grassroots organizing during the campaign.  During the transition, the team was building New Media tools for the transition website and into the overhaul of whitehouse.gov.

My experience historically had been that people on the progressive side of politics often intuitively support privacy protection.  They often believe that “they” — big corporations or law enforcement — will grab our personal data and put “us” at risk.  The Obama New Media folks, by contrast, often had a different intuition.  They saw personal information as something that “we” use.  Modern grassroots organizing seeks to go viral, to galvanize one energetic individual who then gets his or her friends and contacts excited, and so on. In this New Media paradigm, “we” the personally motivated use emails, texts, and other outreach tools to tell our friends and associates about the campaign and remind them to vote.  We may reach out to people we don’t know or barely know but who have a shared interest — the same college club, rock band, religious group, or whatever.  In this way, “our” energy and commitment can achieve scale and effectiveness.  The tools provide “data empowerment” — ordinary people can do things with personal data that only large organizations used to be able to do.

This shift from only “them” using the data to “us” being able to use the data tracks the changes in information technology since the 1970s, when the privacy fair information practices were articulated and the U.S. passed the Privacy Act.   In the 1970s, personal data resided in mainframe computers.  These were operated by big government agencies and the largest corporations.  Today, by contrast, my personal computer has more processing power than an IBM mainframe from thirty years ago. My home has a fiber optic connection, so bandwidth is rarely a limitation.  Today, “we” own mainframes and use the Internet as a global distribution system.

To explain the interaction between privacy and freedom of association, this paper has three sections.  The first section explains how privacy debates to date have often featured the “right to privacy” on one side and utilitarian arguments in favor of data use on the other.  This section provides more detail about how social networks are major enablers of the right of freedom of association.  This means that rules about information flows involve individual rights on both sides, so advocates for either sort of right need to address how to take account of the opposing right.  The second section shows step-by-step how U.S. law will address the multiple claims of right to privacy and freedom of association.  The outcome of litigation will depend on the facts in a particular case, but the legal claims arising from freedom of expression appear relevant to a significant range of possible privacy rules that would apply to social networks.  Based on precedent, strict scrutiny may apply to material infringements on freedom of association.  The third section explains how the interesting arguments by Professor Katherine Strandburg fit into the overall analysis.  She has written about a somewhat different interaction between privacy and freedom of association, where the right of freedom of association is a limit on the power of government to require an association to reveal its members.

Peter Swire & Cassandra Butts, The ID Divide

Comment by:

PLSC 2008

Workshop draft abstract:

This report examines how the next Administration should approach the complex issues of authentication and identification, including: national and homeland security; immigration; voting; electronic medical records; computer security; and privacy and civil liberties.  For many reasons, the number of ID checks in American life has climbed sharply in recent years.  The result, we conclude, is what we call the “ID Divide.”

The ID Divide is similar to the “Digital Divide” that exists for access to computers and the Internet.  The Digital Divide matters because those who lack access to computing lose numerous opportunities for education, commerce, and participation in civic and community affairs.  Today, millions of Americans lack official identification, suffer from identity theft, are improperly placed on watch lists, or otherwise face burdens when asked for identification.  The problems of these uncredentialed people are largely invisible to credentialed Americans, many of whom have a wallet full of proofs of identity.  Yet those on the wrong side of the ID Divide are finding themselves squeezed out of many parts of daily life, including finding a job, opening a bank account, flying on an airplane, and even exercising the right to vote.

Part I of this report describes the background of the issue, including the sharp rise in recent years in how often Americans are asked for proof of identity.  Part II examines the facts of the ID Divide in detail.  There are at least four important types of problems under the ID Divide:

  1. The large population affected by identity theft and data breaches.
  2. The growing effects of watch lists.
  3. The specific groups that disproportionately lack IDs today.
  4. The effects of stricter ID and matching requirements.

Part III develops Progressive Principles for Identification Systems.  These principles apply at two stages: (1) whether to create the system at all; and (2) if so, how to do it:

  1. Achieve real security or other goals.
  2. Accuracy.
  3. Inclusion.
  4. Fairness/equality.
  5. Effective redress mechanisms.
  6. Equitable financing for systems.

Part IV explains a “due diligence” process for considering and implementing identification systems, and examines biometrics and other key technical issues.  Part V applies the progressive principles and due diligence insights to two current examples of identification programs: photo ID for voting and the Transportation Worker Identification Credential.

Peter Swire, Peeping

Comment by: James Rule

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1418091

Workshop draft abstract:

There have been recent revelations of “peeping” into the personal files of celebrities. Contractors for the U.S. State Department looked at passport files, without authorization, for candidates Barack Obama and John McCain.  Employees at UCLA Medical Center and other hospitals have recently been caught looking at the medical files of movie stars, and one employee received money from the National Enquirer to access and then leak information.  In the wake of these revelations, California passed a statute specifically punishing this sort of unauthorized access to medical files.

This article examines the costs and benefits of laws designed to detect and punish unauthorized “peeping” into files of personally identifiable information. Part I looks at the history of “peeping Tom” and eavesdropping statutes, examining the common law baseline.  Part II examines the current situation.  As data privacy and security regimes become stricter, and often enforced by technological measures and increased audits, there will be an increasing range of systems that detect such unauthorized use.  Peeping is of particular concern where the information in the files is especially sensitive, such as for tax, national security, intelligence, and medical files.

The remedy for peeping is a particularly interesting topic.  Detection of peeping logically requires reporting of a privacy violation to someone.  Recipients of notice could include, for instance: (1) a manager in the hospital or other organization, who could take administrative steps to punish the perpetrator; (2) a public authority, which would receive notice of the unauthorized use (“peeping”); and/or (3) the individual whose files have been the subject of peeping.  For the third category, peeping could be seen as a natural extension of current data breach laws, where individuals receive notice when their data is made available to third parties in an unauthorized way.  An anti-peeping regime would face issues very similar to the debates on data breach laws, such as what “trigger” should exist for the notice requirement, and what defenses or safe harbors should exist so that notice is not necessary.
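
On the detection side, here is a minimal sketch of how an audit system might flag peeping and route the three kinds of notice discussed above. The log format, authorization table, and notice routing are assumptions for illustration; real systems, and any statutory “trigger,” would be far more involved.

```python
# Illustrative audit-log screen for "peeping." The log format, the
# authorization table, and the notice routing are all assumptions.

access_log = [
    {"employee": "e101", "record": "patient-7", "purpose": "treatment"},
    {"employee": "e102", "record": "patient-7", "purpose": "none"},
]

# Pairs with a documented care or business reason for access.
authorized = {("e101", "patient-7")}

def flag_peeping(log, authorized):
    """Yield log entries that have no recorded basis for access."""
    for entry in log:
        if (entry["employee"], entry["record"]) not in authorized:
            yield entry

for incident in flag_peeping(access_log, authorized):
    # Mirror the three recipients above: a manager, a public
    # authority, and the affected individual (breach-style notice).
    print("notify manager, authority, and data subject:", incident)
```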