Archives

Helen Nissenbaum, Respect for Context as a Benchmark for Privacy Online: What it is and isn’t

Comment by: James Rule

PLSC 2013

Workshop draft abstract:

In February 2012, the Obama White House unveiled a Privacy Bill of Rights within the report, Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy, developed by the Department of Commerce, NTIA. Among the Bill of Rights' seven principles, the third, "Respect for Context," was explained as the expectation that "companies will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data." (p.47) Compared with the other six, which were more recognizable as kin of traditional principles of fair information practices, such as the OECD Privacy Guidelines, the Principle of Respect for Context (PRC) was intriguingly novel.

Generally positive reactions to the White House Report and to the principle of respect-for-context aligned many parties who have disagreed with one another on virtually everything else to do with privacy. That the White House publicly and forcefully acknowledged the privacy problem buoyed those who have worked on it for decades; yet how far the rallying cry around respect-for-context will push genuine progress depends critically on how this principle is interpreted. In short, convergent reactions may be too good to be true if they stand upon divergent interpretations, and whether the Privacy Bill of Rights fulfills its promise as a watershed for privacy will depend on which of these interpretations drives regulators to action – public or private. At least, this is the argument my article develops.

Commentaries surrounding the Report reveal five prominent interpretations: a) context as determined by purpose specification; b) context as determined by technology, or platform; c) context as determined by business sector, or industry; d) context as determined by business model; and e) context as determined by social sphere. In the Report itself, the meaning seems to shift from section to section or is left indeterminate; without dwelling too long on what exactly the NTIA may or may not have intended, my article discusses these five interpretations, focusing on what is at stake in adopting any one of them. Arguing that a) and c) would sustain existing stalemates and inertia, and that b) and d), though a step forward, would not realize the principle's compelling promise, I defend e), which conceives of context as social sphere. Drawing on ideas in Privacy in Context: Technology, Policy, and the Integrity of Social Life (2010), I argue (1) that substantive constraints derived from context-specific informational norms are essential for infusing fairness into purely procedural rule sets; and (2) that rule sets that effectively protect privacy depend on a multi-stakeholder process (to which the NTIA is strongly committed) that is truly representative, which in turn depends on properly identifying relevant social spheres.

Tal Zarsky, Data Mining, Personal Information & Discrimination

Comment by: James Rule

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1983326

Workshop draft abstract:

Governments are extremely interested in figuring out what their citizens are going to do. To meet this ambitious objective, governments have begun to engage in predictive modeling analyses, which they apply to massive datasets of personal information at their disposal, from both governmental and commercial sources. Such analyses are, in many cases, enabled by data mining, which allows for both automation and the revealing of previously unknown patterns. The outcomes of these analyses are rules and associations providing approximated predictions as to the future actions and reactions of individuals. These individualized predictions can thereafter be used by government officials (or possibly solely as part of an automated system) for a variety of tasks. For instance, such input could be used to make decisions regarding the future allocation of resources and privileges. In other instances, these predictions can be used to establish the risks posed by specific individuals (in terms of security or law enforcement), allowing the state to take relevant precautions. The patterns and decision trees resulting from these analyses (and further applied to the policy objectives mentioned) are very different from those broadly used today to differentiate among groups and individuals; they might include a great variety of factors and variables, or might be established only by an algorithm using ever-changing rules which cannot be easily understood by the observing human.
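To make the kind of analysis described above more concrete, the following sketch (not drawn from the article; the feature names, records, and labels are entirely hypothetical) shows how a standard data-mining tool can learn a branching set of rules from records of personal information and then issue an individualized prediction – the sort of machine-derived pattern the abstract contrasts with the simpler criteria broadly used today.

```python
# Illustrative sketch only: how a data-mining pipeline might turn records of
# personal information into individualized "risk" predictions. All feature
# names, values, and labels below are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training records: [age, prior_flags, travel_frequency]
X = [
    [25, 0, 2],
    [40, 3, 8],
    [33, 1, 5],
    [52, 0, 1],
    [29, 4, 9],
    [61, 2, 3],
]
y = [0, 1, 0, 0, 1, 1]  # hypothetical "risk" labels used for training

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned rules are a branching set of thresholds rather than a single,
# human-chosen criterion -- the kind of pattern the abstract describes as
# hard for an observing human to follow.
print(export_text(model, feature_names=["age", "prior_flags", "travel_freq"]))

# An individualized prediction for a new person (hypothetical values):
print(model.predict_proba([[37, 2, 6]]))
```

A real governmental system would of course involve far larger datasets and more opaque models, but even this toy example yields a rule set that no official chose by hand.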

As knowledge of such ventures unfolds, and technological advances enable their expansion, policymakers and scholars are quickly moving to point out the determinants of such practices. They are striving to establish which practices are legitimate and which go too far. This article aims to join this discussion, in an attempt to resolve arguments and misunderstandings concerning a central claim raised in it – that such practices amount to unfair discrimination. This claim is challenging, as governments often treat different individuals differently (and are indeed required to do so). This article sets out to examine why, and in which instances, these practices must be considered discriminatory and problematic when carried out by government.

To approach this concern, the article identifies and explores three arguments in the context of discrimination:

(1) These practices might resemble, enable, or serve as a proxy for various forms of discrimination which are currently prohibited by law: here the article will briefly note these forms of illegal discrimination and the theories underlying their prohibition. It will then explain how these novel, allegedly "neutral" models might generate similar results and effects.

(2) These practices might generate stigma and stereotypes for those singled out by this process: this argument is challenging given that these forms of discrimination use elaborate factors and are at times opaque. However, in some contexts these concerns might indeed persist, especially when we fear these practices will "creep" into other projects. In addition, these practices might also signal a governmental "appetite" for discrimination, and set an inappropriate example for the public.

(3) These practices "punish" individuals for things they did not do, or are premised on what they did (actions) and how they are viewed, but not on who they really are: these loosely connected arguments are commonly raised, yet require several in-depth inquiries. Are these concerns sound policy considerations or the utterances of a Luddite crowd? Are the practices addressed here actually generating these concerns? In this discussion, the article will distinguish between immutable and changeable attributes.

While examining these arguments, the article emphasizes that if individualized predictive modeling is banned, government will engage in other forms of analysis to distinguish among individuals, their needs, and the risks they pose. Thus, predictive modeling would constantly be compared to the common practice of treating individuals as part of predefined groups. Other dominant alternatives are allowing for broad discretion, refraining from differentiated treatment, or applying such treatment on a random basis.

After mapping out these concerns and taking note of the alternatives, the article moves to offer concrete recommendations. While distinguishing among different contexts, it advises when these concerns should lead to abandoning the use of such prediction models. It further notes other steps which might be taken to allow these practices to persist – steps which range from closely monitoring decision-making models and making needed adjustments to eliminate some forms of discrimination, to greater governmental disclosure and transparency, to public education. It concludes by noting that in some cases, and with specific tinkering, predictive modeling premised upon personal information can lead to outcomes which promote fairness and equality.

It is duly noted that perhaps the most central arguments against predictive data mining practices are premised upon other theories and concerns. Often mentioned, for instance, are the inability to control personal information, the lack of transparency, and the existence and persistence of errors in the analysis and decision-making process. This article acknowledges these concerns, yet leaves them to be addressed in other segments of this broader research project. Furthermore, it sees great importance in addressing the "discrimination" element separately and directly: at times, the other concerns noted could be resolved; in other instances, the interests addressed here are conflated with other elements. For these reasons and others, a specific inquiry is warranted. In addition, the article sets aside the argument that such actions are problematic because they are premised upon decisions made by a machine, as opposed to a fellow individual. This powerful argument is also to be addressed elsewhere. Finally, the article acknowledges that these policy strategies should only be adopted if proven efficient and effective – a finding that must be established by experts on a case-by-case basis. The arguments set out here, though, can also assist in introducing additional factors into such analyses.

Addressing the issues at hand is challenging. They represent a considerable paradigm shift in the way policymakers and scholars have thought about discrimination and decision-making in the past. In addition, there is the difficulty of applying existing legal paradigms and doctrines to these new concerns: is this question one of privacy, equality and discrimination, data protection, autonomy – something else, or none of the above? Rather than tackling this latter question, I apply methodologies from all of the above to address an issue which will no doubt constantly arise in today's environment of ongoing flows of personal information.

Peter Swire, Peeping

Comment by: James Rule

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1418091

Workshop draft abstract:

There have been recent revelations of “peeping” into the personal files of celebrities. Contractors for the U.S. State Department looked at passport files, without authorization, for candidates Barack Obama and John McCain.  Employees at UCLA Medical Center and other hospitals have recently been caught looking at the medical files of movie stars, and one employee received money from the National Enquirer to access and then leak information.  In the wake of these revelations, California passed a statute specifically punishing this sort of unauthorized access to medical files.

This article examines the costs and benefits of laws designed to detect and punish unauthorized “peeping” into files of personally identifiable information. Part I looks at the history of “peeping Tom” and eavesdropping statutes, examining the common law baseline.  Part II examines the current situation.  As data privacy and security regimes become stricter, and often enforced by technological measures and increased audits, there will be an increasing range of systems that detect such unauthorized use.  Peeping is of particular concern where the information in the files is especially sensitive, such as for tax, national security, intelligence, and medical files.

The remedy for peeping is a particularly interesting topic. Detection of peeping logically requires reporting of a privacy violation to someone. The recipients of notice, for instance, could include: (1) a manager in the hospital or other organization, who could take administrative steps to punish the perpetrator; (2) a public authority, which would receive notice of the unauthorized use ("peeping"); and/or (3) the individual whose files have been the subject of peeping. For the third category, peeping could be seen as a natural extension of current data breach laws, where individuals receive notice when their data is made available to third parties in an unauthorized way. An anti-peeping regime would face issues very similar to the debates on data breach laws, such as what "trigger" should exist for the notice requirement, and what defenses or safe harbors should exist so that notice is not necessary.
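As a rough illustration of how such detection and notice might work in practice – a minimal sketch, not a description of any actual system, with a hypothetical log format, authorization table, and notice routine – an audit-log review could flag access events that lack a documented business need and route them to a notice recipient:

```python
# Illustrative sketch only: flagging "peeping" -- access to a record by an
# employee with no documented business need -- from a hypothetical audit log.
from dataclasses import dataclass
from typing import Iterable, Iterator, Optional

@dataclass
class AccessEvent:
    employee_id: str
    record_id: str
    reason_code: Optional[str]  # e.g. "treatment", "billing", or None

# Hypothetical mapping of records to employees with a legitimate need to view them.
AUTHORIZED = {
    "patient-001": {"emp-17", "emp-42"},
}

def flag_peeping(events: Iterable[AccessEvent]) -> Iterator[AccessEvent]:
    """Yield events where the employee had no documented need to view the record."""
    for event in events:
        allowed = AUTHORIZED.get(event.record_id, set())
        if event.employee_id not in allowed and event.reason_code is None:
            yield event

log = [
    AccessEvent("emp-42", "patient-001", "treatment"),
    AccessEvent("emp-99", "patient-001", None),  # unauthorized "peep"
]

for event in flag_peeping(log):
    # In a real regime, notice might go to a manager, a public authority,
    # and/or the data subject, as discussed above; here it is simply printed.
    print(f"Notify: {event.employee_id} viewed {event.record_id} without authorization")
```

The interesting policy questions sit outside the code: what counts as a documented need (the "trigger"), and which of the three recipients listed above actually receives the resulting notice.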