Archives

Ian Kerr, Privacy and the Bad Man: Or, How I Got Lucky With Oliver Wendell Holmes Jr.

Comment by: Tal Zarsky

PLSC 2012

Workshop draft abstract:

You all (y’all?) having likewise experienced the recursive nature of the exercise of writing a law review article, you will appreciate that, this year, for PLSC, I decided to challenge myself to a game of Digital Russian Roulette. I wondered what result Google’s predictive algorithm would generate as the theoretical foundation for an article that I would soon write on predictive computational techniques and their jurisprudential implications. Plugging the terms: ‘prediction’, ‘computation’, ‘law’ and ‘theory’ into Google, I promised myself that I would focus the article on whatever subject matter popped up when I clicked on the ‘I’m Feeling Lucky’ search feature.

So there I was, thanks to Google’s predictive algorithm, visiting a Wikipedia page on the jurisprudence of Oliver Wendell Holmes Jr. (Wikipedia, 2011). Google done good. Perhaps America’s most famous jurist, Holmes was clearly fascinated by the power of predictions and the predictive stance. So much so that he made prediction the centerpiece of his own prophecies regarding the future of legal education: ‘The object of our study, then, is prediction, the prediction of the incidence of the public force through the instrumentality of the courts’ (Holmes, 1897: 457).

Given his historical role in promoting the skill of prediction to aspiring lawyers and legal educators, one cannot help but wonder what Holmes might have thought of the proliferation of predictive technologies and probabilistic techniques currently under research and development within the legal domain. Would he have approved of the legal predictions generated by expert systems software that provide efficient, affordable, computerized legal advice as an alternative to human lawyers? What about the use of argument schemes and other machine learning techniques in the growing field of ‘artificial intelligence and the law’ (Prakken, 2006) seeking to make computers, rather than judges, the oracles of the law?

Although these were not live issues in Holmes’s time, contemporary legal theorists cannot easily ignore such questions. We are living in the kneecap of technology’s exponential growth curve, with a flight trajectory limited more by our imaginations than the physical constraints upon Moore’s Law. We are also knee-deep in what some have called ‘the computational turn’ (Hildebrandt, 2011) wherein innovations in storage capacity, data aggregation techniques and cross-contextual linkability enable new forms of idiopathic predictions. Opaque, anticipatory algorithms and social graphs allow inferences to be drawn about people and their preferences. These inferences may be accurate (or not), without our knowing exactly why.

One might say that our information society has swallowed whole Oliver Wendell Holmes Jr.’s predictive pill, except that our expansive social investment in predictive techniques extends well beyond the bounds of predicting, ‘what the courts will do in fact’ (Holmes, 1897: 457). What Holmes said more than a century and a decade ago about the ‘body of reports, of treatises, and of statutes in the United States and in England, extending back for six hundred years, and now increasing annually by hundreds’ (Holmes, 1897: 457) can now be said of the entire global trade in personal information, fueled by emerging techniques in computer and information science, such as KDD (knowledge discovery in databases):

In these sibylline leaves are gathered the scattered prophecies of the past upon the cases in which the axe will fall. These are what properly have been called the oracles of the law. Far the most important and pretty nearly the whole meaning of every new effort of … thought is to make these prophecies more precise, and to generalize them into a thoroughly connected system. (Holmes, 1897: 457)

As described in my article, the computational axe has fallen many times already and will continue to fall.

My article examines the path of law after the computational turn. Inspired by Holmes’s use of prediction to better understand the fabric of law and social change, I suggest that his predictive stance (the famous “bad man” theory) is also a useful heuristic device for understanding and evaluating the predictive technologies currently embraced by public- and private-sector institutions worldwide. I argue that today’s predictive technologies threaten privacy and due process. My concern is that the perception of increased efficiency and reliability in the use of predictive technologies might be seen as the justification for a fundamental jurisprudential shift from our current ex post facto systems of penalties and punishments to ex ante preventative measures premised on social sorting, increasingly adopted across various sectors of society.

This jurisprudential shift, I argue, could significantly undermine the value-based approach that underlies the ‘reasonable expectation of privacy’ standard adopted by common law courts, privacy and data commissioners and an array of other decision makers. More fundamentally, it could alter the path of law, significantly undermining core presumptions built into the fabric of today’s retributive and restorative models of social justice, many of which would be preempted by tomorrow’s actuarial justice.

Holmes’s predictive approach was meant to shed light on the nature of law by shifting law’s standpoint to the perspective of everyday citizens who are subject to the law. Preemptive approaches enabled by the computational turn will obfuscate the citizen’s legal standpoint championed by Holmes. I warn that preemptive approaches have the potential to alter the very nature of law without justification, undermining many core legal presumptions and other fundamental commitments.

In the article, I propose that the unrecognized genius in Holmes’s jurisprudence is his (self-fulfilling) prophecy, more than a century ago, that law would become one of a series of businesses focused on prediction and the management of risk. I suggest that his famous speech, The Path of the Law, lays a path not only for future lawyers but also for data scientists and other information professionals. The article commences with an examination of Holmes’s predictive theory. I articulate what I take to be his central contribution: that to understand prediction, one must come to acknowledge, understand and account for the point of view from which it is made. An appreciation of Holmes’s “predictive stance” allows for comparisons with the standpoints of today’s prediction industries. I go on to discuss these industries, attempting to locate potential harms generated by the prediction business associated with the computational turn. These harms are more easily grasped through a deeper investigation of prediction itself: I argue that when prediction is understood in the broader context of risk, it is readily connected to the idea of preemption of harm. I suggest that technologies of prediction and preemption are expanding hand in hand, demonstrate how their broad acceptance could undermine the normative foundations of the ‘reasonable expectation of privacy’ standard, and show how a growing social temptation to adopt a philosophy of preemption could also significantly affect our fundamental commitments to due process.

Tal Zarsky, Data Mining, Personal Information & Discrimination

Comment by: James Rule

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1983326

Workshop draft abstract:

Governments are extremely interested in figuring out what their citizens are going to do. To meet this ambitious objective, governments have begun to engage in predictive modeling analyses, which they apply to the massive datasets of personal information at their disposal, drawn from both governmental and commercial sources. Such analyses are, in many cases, enabled by data mining, which allows both for automation and for the revealing of previously unknown patterns. The outcomes of these analyses are rules and associations that provide approximated predictions as to the future actions and reactions of individuals. These individualized predictions can thereafter be used by government officials (or possibly solely as part of an automated system) for a variety of tasks. For instance, such input could inform decisions regarding the future allocation of resources and privileges. In other instances, these predictions can be used to establish the risks posed by specific individuals (in terms of security or law enforcement), allowing the state to take relevant precautions. The patterns and decision trees resulting from these analyses (and applied to the policy objectives mentioned) are very different from those broadly used today to differentiate among groups and individuals; they might include a great variety of factors and variables, or they might be established by an algorithm using ever-changing rules which cannot be easily understood by a human observer.
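To make the kind of analysis described above concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn (a library the abstract does not mention). The feature names, records and labels are invented; the point is only to show how a learned decision tree turns records of personal information into the sort of individualized “risk” prediction discussed here.

```python
# Illustrative sketch only: a toy predictive model of the kind the abstract
# describes. All feature names, records and labels below are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical records: [age, border_crossings, late_payments, address_changes]
X = np.array([
    [34, 2, 0, 1],
    [22, 9, 3, 4],
    [51, 1, 1, 0],
    [29, 7, 5, 6],
    [45, 0, 0, 1],
    [19, 8, 2, 5],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 0 = "low risk", 1 = "high risk" (invented labels)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned rules correspond to the "patterns and decision trees" the abstract mentions:
print(export_text(model, feature_names=[
    "age", "border_crossings", "late_payments", "address_changes"]))

# An individualized, approximated prediction for a new person:
print(model.predict_proba([[30, 6, 2, 3]]))
```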

As knowledge of such ventures unfolds, and technological advances enable their expansion, policymakers and scholars are quickly moving to map out the limits of such practices. They are striving to establish which practices are legitimate and which go too far. This article aims to join this discussion and to resolve arguments and misunderstandings concerning a central claim raised within it: whether such practices amount to unfair discrimination. This question is challenging, as governments often treat different individuals differently (and are indeed required to do so). The article therefore sets out to examine why, and in which instances, these practices must be considered discriminatory and problematic when carried out by government.

To approach this concern, the article identifies and explores three arguments in the context of discrimination:

(1) These practices might resemble, enable or serve as a proxy for various forms of discrimination that are currently prohibited by law: here the article will briefly note these forms of illegal discrimination and the theories underlying their prohibition. It will then explain how these novel, allegedly “neutral” models might generate similar results and effects (a stylized sketch of this proxy effect appears after this list).

(2) These practices might generate stigma and stereotypes for those flagged by this process: this argument is complicated by the fact that these forms of discrimination rely on elaborate factors and are at times opaque. However, in some contexts these concerns might indeed persist, especially when we fear that these practices will “creep” into other projects. In addition, these practices might also signal a governmental “appetite” for discrimination, and set an inappropriate example for the public.

(3) These practices “punish” individuals for things they did not do, or are premised on what they did (their actions) and how they are viewed, rather than on who they really are: these loosely connected arguments are commonly raised, yet they require several in-depth inquiries. Are these concerns sound policy considerations or the utterances of a Luddite crowd? Do the practices addressed here actually generate these concerns? In this discussion, the article will distinguish between immutable and changeable attributes.
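As flagged in point (1), the proxy effect can be illustrated with a short, hypothetical Python sketch (using scikit-learn and entirely synthetic data, neither of which comes from the article): a model trained only on allegedly “neutral” features can still reproduce group-based disparities when one of those features is correlated with a protected attribute and the historical outcomes were themselves skewed.

```python
# Illustrative sketch only: how an allegedly "neutral" feature can act as a
# proxy for a protected attribute. All data here is synthetic and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

protected = rng.integers(0, 2, n)          # e.g. membership in a protected group
# A "neutral" feature (say, postal district) correlated with group membership:
postal_district = protected * 0.8 + rng.normal(0, 0.3, n)
income = rng.normal(50, 10, n)

# Historical outcomes that themselves disadvantaged the protected group:
y = (income + 15 * (1 - protected) + rng.normal(0, 5, n)) > 55

# Train WITHOUT the protected attribute -- only the "neutral" features:
X = np.column_stack([postal_district, income])
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# The model still treats the two groups very differently:
print("favourable rate, protected group:    ", pred[protected == 1].mean())
print("favourable rate, non-protected group:", pred[protected == 0].mean())
```

In this synthetic run the two groups receive markedly different rates of favourable predictions even though the protected attribute was never given to the model.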

While examining these arguments, the article emphasizes that if individualized predictive modeling is banned, government will engage in other forms of analysis to distinguish among individuals, their needs and the risks they pose. Thus, predictive modeling should constantly be compared to the common practice of treating individuals as members of predefined groups. Other dominant alternatives include allowing broad official discretion, refraining from differentiated treatment altogether, or differentiating on a random basis.

After mapping out these concerns and taking note of the alternatives, the article moves to offer concrete recommendations. Distinguishing among different contexts, it advises when these concerns should lead to abandoning the use of such prediction models. It further notes other steps that might be taken to allow these practices to persist: steps that range from closely monitoring decision-making models and adjusting them to eliminate some forms of discrimination, to greater governmental disclosure and transparency, to public education. It concludes by noting that in some cases, and with specific tinkering, predictive modeling premised upon personal information can lead to outcomes that promote fairness and equality.
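One hypothetical example of the “close monitoring” step mentioned above, sketched in Python under assumptions not drawn from the article: comparing a model’s rate of favourable decisions across two groups and flagging the model for review when the ratio falls below a conventional (and contested) four-fifths benchmark.

```python
# Illustrative sketch only: a simple monitoring check of the kind the abstract
# alludes to. The data, groups and threshold below are hypothetical.
import numpy as np

def adverse_impact_ratio(decisions, group):
    """Ratio of the lower group's favourable-decision rate to the higher one's."""
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    low, high = sorted([rate_a, rate_b])
    return low / high if high > 0 else 1.0

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = favourable decision
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # two hypothetical groups

ratio = adverse_impact_ratio(decisions, group)
print(f"adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:   # conventional, contested four-fifths benchmark
    print("flag the model for review and possible adjustment")
```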

It is duly noted that perhaps the most central arguments against predictive data mining practices are premised upon other theories and concerns. Often mentioned, for instance, are the inability to control personal information, the lack of transparency, and the existence and persistence of errors in the analysis and decision-making process. This article acknowledges these concerns, yet leaves them to be addressed in other segments of this broader research project. Furthermore, it sees great importance in addressing the “discrimination” element separately and directly: at times the other concerns noted could be resolved while the discrimination concern remains; in other instances the interests addressed here are confused with other elements. For these reasons and others, a specific inquiry is warranted. In addition, the article sets aside the argument that such actions are problematic because they are premised upon decisions made by a machine, as opposed to a fellow individual; this powerful argument is also to be addressed elsewhere. Finally, the article acknowledges that these policy strategies should only be adopted if proven efficient and effective, a finding that must be established by experts on a case-by-case basis. The arguments set out here, though, can also supply additional factors for such analyses.

Addressing the issues at hand is challenging. They represent a considerable paradigm shift in the way policymakers and scholars have thought about discrimination and decision-making. In addition, there is the difficulty of applying existing legal paradigms and doctrines to these new concerns: is this a question of privacy, of equality and discrimination, of data protection, of autonomy, of something else, or of none of the above? Rather than tackling this latter question, I apply methodologies from all of the above to address an issue that will no doubt arise constantly in today’s environment of ongoing flows of personal information.

Frank Pasquale, Reputation Regulation: Disclosure and the Challenge of Clandestinely Commensurating Computing

Comment by: Tal Zarsky

PLSC 2010

Workshop draft abstract:

Reputational systems can never be rendered completely just, but legislators can take two steps toward fairness. The first is relatively straightforward: to assure that key decision makers reveal the full range of online sources they consult as they approve or deny applications for credit, insurance, employment, and college and graduate school admissions. Such disclosure will at least serve to warn applicants of the dynamic digital dossier they are accumulating in cyberspace. Effective disclosure requirements need to cover more than the users of reputational information; they should also apply to some aggregators. Just as banks have moved from consideration of a long-form credit report to use of a single commensurating credit score, employers and educators in an age of reputation regulation may turn to intermediaries that combine extant indicators of reputation into a single scoring of a person. Since such scoring can be characterized as a trade secret, it may be even less accountable than the sorts of rumors and innuendo discussed above. Any proposed legislation will need to address the use of such reputation scores, lest black-box evaluations defeat its broader purposes of accountability and transparency.
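A toy Python sketch, with entirely invented indicators and weights, of the “commensuration” the abstract describes: heterogeneous reputation signals collapsed into a single score whose weighting, if treated as a trade secret, the scored individual never sees.

```python
# Illustrative sketch only: "commensuration" of disparate reputation signals
# into a single score. Indicators and weights are invented; in the scenario
# the abstract describes, the weighting would be proprietary and undisclosed.
HIDDEN_WEIGHTS = {
    "credit_score_norm": 0.45,
    "search_result_sentiment": 0.25,
    "social_graph_centrality": 0.20,
    "litigation_history": -0.10,
}

def reputation_score(indicators: dict) -> float:
    """Collapse heterogeneous indicators (each scaled to 0..1) into one number."""
    return sum(HIDDEN_WEIGHTS[k] * indicators.get(k, 0.0) for k in HIDDEN_WEIGHTS)

# The applicant sees only the output, not the weighting that produced it:
applicant = {
    "credit_score_norm": 0.72,
    "search_result_sentiment": 0.40,
    "social_graph_centrality": 0.55,
    "litigation_history": 1.0,
}
print(round(reputation_score(applicant), 3))
```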