Archives

Jane Bambauer and Derek Bambauer, Vanished

Comment by: Eric Goldman

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2326236

Workshop draft abstract:

The conventional wisdom on Internet censorship assumes that the United States government makes fewer attempts to remove and delist content from the Internet than other democracies. Likewise, democratic governments are believed to make fewer attempts to control on-line content than the governments of non-democratic countries. These assumptions are theoretically sound: most democracies have express commitments to the freedom of speech and communication, and the United States has exceptionally strong legal immunities for Internet content providers, along with judicial protection of free speech rights that make it unique even among democracies. However, the conventional wisdom is not entirely correct. A country’s system of governance does not predict well how it will seek to regulate on-line material. And democracies, including the United States, engage in far more extensive censorship of Internet communication than is commonly believed.

This Article explores the gap between free speech rhetoric and practice by analyzing recently released Google data describing the official requests or demands that governments made to the company to remove content between 2010 and 2012. Controlling for Internet penetration and Google’s relative market share in each country, we examine international trends in content removal demands. Specifically, we explore whether some countries have a propensity to use unenforceable requests or demands to remove content, and whether these types of extra-legal requests have increased over time. We also examine trends within content categories to reveal differences in priorities among governments. For example, European Union governments more frequently seek to remove content for privacy reasons. More surprisingly, the United States government makes far more demands to remove content for defamation, even after controlling for population and Internet penetration.
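As a rough illustration of the kind of normalization described above (this is not the authors’ methodology or dataset; the column names and figures below are invented), one might express removal requests per million Internet users reachable through Google:

```python
import pandas as pd

# Hypothetical per-country figures: removal-request counts, internet
# penetration, and search market share are made up for illustration only.
df = pd.DataFrame({
    "country": ["US", "DE", "BR"],
    "removal_requests": [320, 110, 240],          # requests in one reporting period
    "internet_users_millions": [245.0, 67.0, 88.0],
    "google_market_share": [0.67, 0.93, 0.97],
})

# One simple way to "control for" penetration and market share:
# requests per million internet users reachable via Google properties.
df["requests_per_million_google_users"] = (
    df["removal_requests"]
    / (df["internet_users_millions"] * df["google_market_share"])
)

print(df.sort_values("requests_per_million_google_users", ascending=False))
```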

The Article pays particular attention to government requests to remove content based upon claims regarding privacy, defamation, and copyright enforcement. We make use of more detailed data prepared specially for our study that shows an increase in privacy-related requests following the European Commission’s draft proposal to create a Right To Be Forgotten.

Heather Patterson and Helen Nissenbaum, Context-Dependent Expectations of Privacy in Self-Generated Mobile Health Data

Comment by: Katie Shilton

PLSC 2013

Workshop draft abstract:

Rapid developments in health self-quantification via ubiquitous computing point to a future in which individuals will collect health-relevant information using smart phone apps and health sensors, and share that data online for purposes of self-experimentation, community building, and research. However, online disclosures of intimate bodily details coupled with growing contemporary practices of data mining and profiling may lead to radically inappropriate flows of fitness, personal habit, and mental health information, potentially jeopardizing individuals’ social status, insurability, and employment opportunities. In the absence of clear statutory or regulatory protections for self-generated health information, its privacy and security rest heavily on robust individual data management practices, which in turn rest on users’ understandings of information flows, legal protections, and commercial terms of service. Currently, little is known about how individuals understand their privacy rights in self-generated health data under existing laws or commercial policies, or how their beliefs guide their information management practices. In this qualitative research study, we interview users of popular self-quantification fitness and wellness services, such as Fitbit, to learn (1) how self-tracking individuals understand their privacy rights in self-generated health information versus clinically generated medical information; (2) how user beliefs about perceived privacy protections and information flows guide their data management practices; and (3) whether commercial and clinical data distribution practices violate users’ context-dependent informational norms regarding access to intimate details about health and personal well-being. Understanding information sharing attitudes, behaviors, and practices among self-quantifying individuals will extend current conceptions of context-dependent information flows to a new and developing health-related environment, and may promote appropriately privacy-protective health IT tools, practices, and policies among sensor and app developers and policy makers.

David Thaw, Criminalizing Hacking, Not Dating: Reconstructing the CFAA Intent Requirement

Comment by: Jody Blanke

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2226176

Workshop draft abstract:

The Computer Fraud and Abuse Act (CFAA) was originally enacted in response to a growing threat of electronic crimes, a threat which continues to grow rapidly.  To address concerns about hacking and cybercrime, Congress criminalized unauthorized access to computer systems through the CFAA.  The statute poorly defines this threshold concept of “unauthorized access,” however, resulting in widely varied judicial interpretation.  While this issue is perhaps still under-examined, the bulk of existing scholarship generally agrees that an overly broad interpretation of unauthorized access — specifically, one that allows private contract unlimited freedom to define authorization — creates a constitutionally impermissible result.  Existing scholarship, however, lacks workable solutions.  The most notable approach, prohibiting contracts of adhesion (e.g., website “Terms of Service”) from defining authorized access, strips system operators of their ability to post the virtual equivalent of “no trespassing” signs and to set enforceable limits on the (ab)use of their private property.

This Article considers an alternative approach, based on examination of what is likely the root cause of the CFAA’s vagueness and overbreadth problems — a poorly constructed mens rea element.  It argues that judicial interpretation may not be sufficient to effect Congressional intent concerning the CFAA, and calls for legislative reconstruction of the mens rea element to require a strong nexus between an individual’s intent and the unique computer-based harm sought to be prevented.  The Article proposes a two-part conjunctive test:  first, an individual’s intent must not merely be to engage in an action that technically results in unauthorized access, but must itself be to engage in unauthorized access; and second, the resultant actions must be in furtherance either of an (enumerated) computer-specific malicious action or of an otherwise-unlawful act.  While courts may be able to reinterpret the statute to accomplish the first part, this still leaves substantial potential for private agreements to create vagueness and overbreadth problems.  The second part of the test mitigates this risk, and thus Congressional intervention is required to preserve both the validity of the statute and the important protections it affords.
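Rendered as a predicate, the proposed two-part conjunctive test has a simple logical structure. The sketch below is only an illustration of that structure under hypothetical, simplified inputs; it is not a statement of the statute or of the Article’s full analysis.

```python
def satisfies_proposed_cfaa_test(
    intended_unauthorized_access: bool,       # Part 1: intent to access without
                                              # authorization itself, not merely intent
                                              # to act in a way that exceeds it
    furthers_enumerated_computer_harm: bool,  # Part 2a: enumerated computer-specific
                                              # malicious action
    furthers_otherwise_unlawful_act: bool,    # Part 2b: otherwise-unlawful act
) -> bool:
    """Boolean rendering of the proposed two-part conjunctive test (illustrative)."""
    part_one = intended_unauthorized_access
    part_two = furthers_enumerated_computer_harm or furthers_otherwise_unlawful_act
    return part_one and part_two


# Example: merely breaching a website's Terms of Service while dating online
# would fail both parts and fall outside liability under this construction.
print(satisfies_proposed_cfaa_test(False, False, False))  # False
```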

Peter Winn, The Protestant Origins of the Anglo-American Right to Privacy

Comment by: Andrew Odlyzko

PLSC 2013

Workshop draft abstract:

In 1606 Attorney General Edward Coke and Chief Justice of the King’s Bench Sir John Popham, at the request of Parliament and the King’s Council, issued an opinion addressing the narrow question of when an ecclesiastical officer was authorized to administer the “oath ex officio” during proceedings at canon law.  They held that, except in very narrow circumstances, the accused in such proceedings could not be compelled to take such an oath and testify against himself.  This opinion, representing a clear break from earlier medieval practice in which such procedures were common and unexceptionable, is traditionally understood as one of the great landmarks that eventually resulted in the establishment of a right to remain silent, now embodied in the Fifth Amendment of the U.S. Constitution.  In this article, I argue that, placed in its proper historical context, the Coke & Popham opinion also recognizes an enforceable legal right of privacy—a right of privacy in one’s thoughts.  Today, the right to keep one’s thoughts to oneself is so ingrained in our understanding of the world that it is difficult to imagine how radical this idea was at the time.  But in the medieval period, it was taken for granted that the jurisdiction of the authorities extended to the utmost limits of the human mind.  Furthermore, at the time Coke and Popham wrote, the most important affairs of the state were ecclesiastical in nature, and prosecution of the crime of heresy was as much a concern of the civil as of the religious authorities.  Although the holding of the opinion made it more difficult to prosecute heresy, the authors of the opinion were by no means soft on heretics.  Moreover, by limiting the jurisdiction of ecclesiastical authorities in a country where the King was also the head of the Church, Coke and Popham were also limiting the power of the sovereign state itself.  The opinion thus recognized, in a very limited way, the legal right of an individual to control access to a private sphere beyond the jurisdiction of the sovereign, a development that begins the process of establishing what Brandeis would later call the “right to be let alone.”  This important step in the law was not driven by a utilitarian rationale (nothing could be a more effective means to prosecute heretics than administration of the oath); nor was it compelled by earlier medieval precedents (the authors tortured medieval case law to reach the desired outcome).  But in the text of the opinion itself, one can see what drove Coke and Popham to what was at the time such a counterintuitive result—the remorseless logic of a quintessentially Protestant theology.  The authors were concerned that in a panoptic state with the power to intrude into an individual’s thoughts, the first victim would be the authenticity of individual conscience, which, according to Protestant teaching, was so critically necessary for religious salvation.

Elizabeth Joh, Privacy Protests: Surveillance Evasion and Fourth Amendment Suspicion

Comment by: Tim Casey

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2285095

Workshop draft abstract:

To the police, evading surveillance is strong evidence that you’re a criminal; the problem is that the evasion may only be a protest against the surveillance itself.   How do we tell the difference, and why does it matter?

Surprisingly, these questions have not attracted serious attention from judges or legal commentators.  The neglect is striking because the means of surveillance have become ever more sophisticated and difficult to avoid.  If you want to track someone down, you can discover a remarkable amount of information with increasing ease.  Sophisticated technologies have made the collection of data, verification of identity, and prediction of behavior simpler and faster.  These technologies have also greatly improved the capabilities of police investigations.  The police have added thermal imaging cameras, Global Positioning System (GPS) trackers, cell phone site data, computer surveillance software, and DNA swabs to their investigative toolkit.

But some people resist these incursions and take steps to thwart police surveillance out of ideological belief or personal conviction.  Instructions and products are readily available on the internet.  Use photoblocker film on a license plate or a ski mask to stop a red-light camera.  Avoid ordinary credit cards and choose only cash or prepaid credit cards to make a financial trail harder to detect.  Avoid cellphones unless they are prepaid phones or “freedom phones” from Asia that have all tracking devices removed.  Avoid using email unless you use disposable “guerilla email” addresses that disappear within an hour.  Use “spoof cards” that mask your identity on caller ID devices.  Burn your garbage to hamper investigations of your financial records or genetic evidence.  A professional can alter your digital self on the internet by erasing data or posting multiple false identities.  At the extreme end, you could live “off the grid” and cut off all contact with the modern world.

These are all examples of what I call privacy protests: actions individuals take to block or to thwart surveillance from the police for reasons that are unrelated to criminal wrongdoing.   Unlike people who hide their activities because they have committed a crime, those engaged in privacy protests do so primarily because they object to the presence of perceived or potential government surveillance in their lives.

Privacy protests are easily grouped together with the evasive actions taken by those who have committed crimes.   The evasion of police surveillance can look the same whether perpetrated by a criminal or a privacy protestor.  For this reason, privacy protests against the police in particular and the government in general are largely underappreciated within the criminal law literature.

This article aims to document privacy protests as well as to discuss how the police and the Fourth Amendment fail to take them into account.  These individual actions demonstrate that the boundaries of privacy and legitimate governmental action are the product of a dynamic process.  A more comprehensive account of privacy must consider not only the attempts of individuals to exert control over their own information, lives, and personal spaces, but also the ways in which they take active countermeasures against the government and other private actors to thwart attempts at surveillance.

Tara Whalen, More Words About Design and Privacy: A Critique of the Privacy by Design Framework and Jaap-Henk Hoepman, Privacy Design Strategies (joint workshop)

Comment by: Anne Klinefelter & Elizabeth Johnson

PLSC 2013

Workshop draft abstract:

The idea of “privacy by design” has been promoted and embraced by regulators, advocates, and industry as a means of supporting privacy. Privacy by design comes in a few different varieties, although most share a core of principles, one of the most frequently touted precepts being “build privacy in from the beginning”, as part of the design process of a product, service, or system. Despite the good intentions of this approach, it is far from clear that privacy by design has been an effective technique for promoting privacy, or, indeed, that it even can be.  This paper will highlight some of the failings of the privacy by design approach, from both a design perspective and a privacy perspective.

An article entitled “Privacy by Design” appeared in House Beautiful’s Building Manual in 1964. Here, the phrase was used in the context of architectural design for housing in densely-packed suburbia, “to secure maximum seclusion for your house and its setting.” The authors outline potential solutions that presage many of those heard decades later in the information security context, including legal approaches: “He can attempt, with legal aid, to break unreasonable restrictive covenants. He can apply for a zoning variance (these can be obtained, although not without difficulty).” More important is the introduction of a design approach that will no doubt be familiar: “But, most important, he can plan his house, from the ground up, for maximum privacy, inside and out.” Rounding out the article are six specific design ideas, to demonstrate how these ideas may be realized in practice.

How does this embodied design approach differ from the current frameworks of privacy by design, such as the ones promoted by Canadian, US and EU regulatory bodies? Of particular significance is how these frameworks tend to consist of high-level principles that are far removed from the concrete requirements of design—hence making “privacy by design” a problematic approach. There has been some scholarly work on this topic already—for example, Rubinstein and Good’s “Privacy by Design: A Counterfactual Analysis of Google and Facebook Privacy Incidents” (presented at PLSC last year) showed how privacy incidents might not have been averted through blanket application of the privacy by design principles. This article extends that work by grounding the discussion more broadly in the philosophy of design. Many of the issues of abstraction in privacy design are apparent in other design domains, making this topic a narrow piece of a much broader problem. Design scholarship serves to highlight ways in which we can expect ongoing tensions when trying to create concrete products and systems, no matter what the circumstances. As a starting point, consider the “bootcamp bootleg” from the Hasso Plattner Institute of Design at Stanford, which guides the creation of “useful and meaningful designs” through quite specific methods, from mapping to madlibs. While these design professionals recognize the value of design principles, they are also mindful of the need to create actual solutions (“bias toward action”). This more practical bent is often ignored in the oversimplified application of privacy principles, where the principles are invoked like magic words that will cause excellent designs to appear.

This critique also calls into question the degree to which design itself can be relied upon as a method to promote privacy. In a nutshell, while poor design will likely degrade privacy, it is not necessarily true that good design will improve it.  Excellent design is a necessary but not sufficient condition for privacy, which is complex, dynamic, and contextual, and not something that responds readily to a simple design solution. This paper introduces ideas from ethical design and value-sensitive design, which speak to the ways in which design operates within a larger sociological framework. In particular, scholarship from these fields will be presented to highlight the limitations of design. One such limitation is the “technological neutrality” problem, in which the pursuit of progress (such as in software) is seen as positive, and the design aspects of an artifact are assumed to be benign at worst. Indeed, in some cases, strong technology designs are presented as a panacea and assumed to more than compensate for any negative effects they might have; this is not in keeping with the precepts of ethical design. Additionally, one must consider the gulf between the intended use of artifacts and their actual use. Even with careful design, there is a limit to how well privacy harms can be anticipated and precluded in the design stage.  Other factors—legal, social, and otherwise—also play a key role.

To summarize, this paper will propose that privacy by design not be seen as a “silver bullet.” Its lack of specificity makes it a weak tool for designers, and the general limits of design preclude it from being the champion of privacy that will conquer all deep and abiding sociological concerns. Going forward, we can borrow lessons from design scholarship to help guide thinking around privacy design, hopefully to strengthen it, but also to add an important (and often overlooked) dimension to the privacy debate.

Jules Polonetsky and Omer Tene, A Theory of Creepy: Technology, Privacy and Shifting Social Norms

Comment by: Felix Wu

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2326830

Workshop draft abstract:

The rapid evolution of digital technologies has hurled to the forefront of public and legal discourse dense social and ethical dilemmas that we have hardly begun to map and understand. In the near past, general community norms helped guide a clear sense of ethical boundaries with respect to privacy. One does not peek into the window of a house even if it is left open. One does not hire a private detective to investigate a casual date or the social life of a prospective employee. Yet with technological innovation rapidly driving new models for business and socialization, we often have nothing more than a fleeting intuition as to what is right or wrong. Our intuition may suggest that it is responsible to investigate the driving record of the nanny who drives our child to school, since such tools are now readily available. But is it also acceptable to seek out the records of other parents in our child’s car pool or of a date who picks us up by car? Alas, intuitions and perceptions of “creepiness” are highly subjective and difficult to generalize as social norms are being strained by new technologies and capabilities. And businesses that seek to create revenue opportunities by leveraging newly available data sources face huge challenges trying to operationalize such subjective notions into coherent business and policy strategies.

This article presents a set of legal and social considerations to help individuals, businesses and policymakers navigate a world of new technologies and evolving norms. These considerations revolve around concepts that we have explored in prior work, including transparency; accessibility to information in usable format; and the elusive principle of context.

Eloïse Gratton, Interpreting “personal information” in light of its underlying risk of harm

Comment by: Mark MacCarthy

PLSC 2013

Workshop draft abstract:

“Personal data are and will remain a valuable asset, but what counts as personal data? If one wants to protect X, one needs to know what X is.”(van den Hoven, 2008)

In the late sixties and early seventies, with the development of automated data banks and the growing use of computers in the private and public sector, privacy was conceptualized as having individuals “in control over their personal information” (Westin, 1967). The principles of Fair Information Practices were elaborated during this period and have been incorporated in data protection laws (“DPLs”) adopted in various jurisdictions around the world ever since.

Personal information is defined similarly in various DPLs (such as those of Europe and Canada) as “information relating to an identifiable individual”. In the U.S., information is accorded special recognition through a series of sectoral privacy statutes focused on protecting “personally identifiable information” (or PII), a notion close to personal information. This definition of personal information (or very similar definitions) has been included in transnational policy instruments, including the OECD Guidelines. Going back in time, we can note that identical or at least similar definitions of personal information were in fact used in the resolutions leading to the elaboration of Convention 108, dating back to the early seventies (Council of Europe, Resolutions (73) 2 and (74) 29). This illustrates that a similar definition of personal information had already been elaborated at that time and has not been modified since.

In recent years, with the Internet and the circulation of new types of information, the effectiveness of this definition may be challenged. Individuals constantly give off personal information, and recent technological developments are triggering the emergence of new identification tools allowing for easier identification of individuals. Data-mining techniques and capabilities are reaching new levels of sophistication. Because it is now possible to interpret almost any data as personal information (any data can in one way or another be related to some individual), the question arises as to how much data should be considered personal information. I maintain that a literal interpretation of the definition of personal information may produce many negative outcomes. First, DPLs may be protecting all personal information, regardless of whether the information is worthy of protection, encouraging a potentially over-inclusive and burdensome framework. The definition may also prove under-inclusive, as it may not govern certain profiles (falling outside its scope) that, although they may not “identify” an individual by name, may still be used against the individuals behind them. A literal interpretation may also create various uncertainties, especially in light of new types of data and collection tools that identify a device or an object used by one or more individuals.

In light of these issues, various authors have recently proposed potential guidance, mostly on the issue of what “identifiability” actually means. For example, the work of Bercic and George (2009) examines how knowledge of relational database design principles can greatly help in understanding what is and what is not personal data. Lundevall-Unger and Tranvik (2011) propose a different and practical method for deciding the legal status of IP addresses (with regard to the concept of personal data), which consists of a “likely reasonable” test, resolved by assessing the costs (in terms of time, money, expertise, etc.) associated with employing legal methods of identification. Schwartz and Solove (2011) also argue that the current approaches to PII are flawed and propose a new approach called “PII 2.0,” which accounts for PII’s malleability. Based upon a standard rather than a rule, PII 2.0 rests on a continuum of “risk of identification” and would regulate information that relates to either an “identified” or an “identifiable” individual (making a distinction between the two categories), establishing different requirements for each category.

My contribution to providing guidance on this notion of “identifiability” consists in a new method for interpreting the notion of personal information, one that takes into account the ultimate purpose behind the adoption of DPLs, in order to ensure that only data that were meant to be covered by DPLs will in fact be covered. In proposing such an interpretation, the idea is to aim for a level of generality that corresponds with the highest-level goal that the lawmakers wished to achieve (Bennett Moses, 2007). I will demonstrate how the ultimate purpose of DPLs is broader than protecting the privacy rights of individuals: it is to protect individuals against the risk of harm that may result from the collection, use or disclosure of their information. Accordingly, under the proposed approach, only data that may present such a risk of harm to individuals would be protected.

I argue that in certain cases the harm will take place at the point of collection, while in other cases it will take place at the point where the data is used or even disclosed. Instead of trying to determine exactly what an “identifiable” individual means, I maintain that a method of interpretation consistent with the original goals of DPLs should be favoured. Relying and building on Calo’s theory (Calo, 2011) and others, I will elaborate a taxonomy of criteria in the form of a decision tree, which takes into account the fact that while the collection or disclosure of information may trigger a more subjective kind of harm (for collection, a feeling of being observed; for disclosure, embarrassment and humiliation), the use of information will trigger a more objective kind of harm (financial, physical, discrimination, etc.). The risk-of-harm approach I propose, applied to the definition, will reflect this and protect data only at the time it presents such a risk, or in light of the importance or extent of the risk of objective or subjective harm. Accordingly, the interpretation of the notion of “identifiability” will vary in light of the data handling activity at stake. For instance, while I maintain that the notion of “identifiability” should be interpreted in light of the overall sensitivity of the information being disclosed (taking into account other criteria relevant to evaluating the risk of subjective harm), I am also of the view that this notion is irrelevant when evaluating information being used (only the presence of an objective harm being relevant).
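As a rough sketch of how such a decision-tree test might be operationalized, one could imagine a small classifier over the data handling activity and the relevant harm criteria. The criteria, names, and yes/no simplifications below are hypothetical, not the author’s actual taxonomy, which weighs the importance and extent of each risk rather than treating it as a binary question.

```python
from dataclasses import dataclass
from enum import Enum


class Activity(Enum):
    COLLECTION = "collection"
    USE = "use"
    DISCLOSURE = "disclosure"


@dataclass
class DataContext:
    activity: Activity
    identifiable: bool            # can the data be linked back to an individual?
    sensitive: bool               # overall sensitivity of the information at stake
    risk_of_objective_harm: bool  # e.g. financial, physical, or discriminatory harm


def treat_as_personal_information(ctx: DataContext) -> bool:
    """Illustrative risk-of-harm decision tree (hypothetical criteria)."""
    if ctx.activity is Activity.USE:
        # For uses, identifiability is treated as irrelevant; only the
        # presence of an objective harm matters.
        return ctx.risk_of_objective_harm
    # For collection and disclosure, the subjective harm (a feeling of being
    # observed; embarrassment or humiliation) is assessed in light of
    # identifiability and the overall sensitivity of the information.
    return ctx.identifiable and ctx.sensitive


# Example: a pseudonymous profile used to deny someone credit would be covered
# at the point of use, even though it never "identifies" the person by name.
print(treat_as_personal_information(
    DataContext(Activity.USE, identifiable=False, sensitive=False,
                risk_of_objective_harm=True)
))  # True
```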

In the preliminary section of my article, I will provide an overview of the various conceptions of privacy, elaborate on the historical context leading to the adoption of DPLs and the elaboration of the definition of personal information, and discuss the changes that have recently taken place at the technological level. In the following section (section 2), I will first elaborate on how a literal interpretation of the definition of personal information is no longer workable. In light of this, I will present the proposed approach to interpreting the definition of personal information, under which the ultimate purpose behind DPLs should be taken into account. I will then demonstrate why the ultimate purpose of DPLs was to protect individuals against a risk of harm triggered by organizations collecting, using and disclosing their information. In section 3, I will demonstrate how this risk of harm can be subjective or objective, depending on the data handling activity at stake. I will offer a way forward, proposing a decision-tree test useful for deciding whether certain information should qualify as personal information. I will also demonstrate how the proposed test would work in practice, using practical business cases as examples.

The objective of my work is to come to a common understanding of the notion of personal information, the situations in which DPLs should be applied, and the way they should be applied. A corollary of this work is to provide guidance to lawmakers, policymakers, privacy commissioners, courts, organizations handling personal information, and individuals assessing whether certain information is or should be governed by the relevant DPLs, depending on whether the data handling activity at stake creates a risk of harm for an individual. This will provide a useful framework under which DPLs remain effective in light of modern Internet technologies.