Archives

Allyson Haynes Stuart, Finding Privacy in a Sea of Electronic Discovery

PLSC 2013

Workshop draft abstract:

In November of 2011, a Connecticut judge ordered lawyers representing a divorcing couple to exchange passwords to their clients’ Facebook and dating-website accounts.  In contrast, a California judge rejected Home Depot’s attempt to gain broad access to social network posts made by a former employee who sued the company.  What standards are courts applying to the discovery of social media evidence?  Is there any bite to the argument that, even if relevant, certain electronic discovery should be shielded from compelled disclosure because of privacy considerations under Fed. R. Civ. P. 26(b) or (c)?

Robert Sprague, An Ontology of Privacy Law Derived from Probabilistic Topic Modeling Applied to Scholarly Works Using Latent Dirichlet Allocation

PLSC 2013

Workshop draft abstract:

Privacy, being an evolutionary product of social development, has been a human need and desire for millennia. Privacy law scholarship, in contrast, is a relatively recent phenomenon. Of all the privacy-related law review articles published in the history of the United States, for example, over ninety percent were published after 1990—and half of those in the past decade. Within this recent profusion of scholarship lies a fundamental conundrum: there is no clear definition of privacy; there is not even consensus on what would constitute an adequate definition. Fundamental categories of privacy have been identified and analyzed—e.g., seclusion, intimacy, surveillance, anonymity, control of information. But most calls for privacy arise from context and from advancing technologies, meaning the legal system often has difficulty identifying and protecting rights to privacy. Without a coherent construction of privacy principles shared by the community of scholars, the legal system can never explicitly articulate those principles.

This paper will report preliminary results from a research project aimed at identifying fundamental privacy law principles derived from the writings of legal scholars themselves, using probabilistic topic modeling, which uses a suite of algorithms to discover hidden thematic structures in large archives of documents. Topic modeling algorithms are statistical methods that analyze the words of texts to discover the themes (topics) that run through them, how those themes are connected to each other, and how they change over time. For example, in Warren and Brandeis’s Harvard Law Review article “The Right to Privacy,” the word “property” is identified as the most statistically probable primary topic in the article—which makes sense, since Warren and Brandeis were postulating privacy as a form of intangible property right. Latent Dirichlet allocation, which identifies sets of terms that more tightly co-occur, is incorporated into the topic modeling analysis to identify the words most closely associated with each identified topic. In “The Right to Privacy,” in addition to identifying “property” as the primary topic, the process also identifies the words “privacy” and “individual” as co-occurring most frequently with the topic “property.” Latent Dirichlet allocation therefore provides insight into the context in which each identified topic occurs.
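The sketch below is not the author’s actual pipeline; it is a minimal illustration of the mechanics described above, fitting a latent Dirichlet allocation model to a few toy documents with the gensim library and reporting each document’s most probable topic together with its most closely associated words. The toy texts and parameter values are illustrative assumptions.

    # A minimal sketch (not the author's actual pipeline) of probabilistic
    # topic modeling with latent Dirichlet allocation, using gensim;
    # the toy documents and parameter values are illustrative.
    from gensim import corpora, models
    from gensim.utils import simple_preprocess

    documents = [
        "the right to privacy as a form of intangible property",
        "the individual and the protection of private property",
        "surveillance, anonymity, and control of personal information",
    ]

    # Tokenize, then build a dictionary and a bag-of-words corpus.
    tokenized = [simple_preprocess(doc) for doc in documents]
    dictionary = corpora.Dictionary(tokenized)
    corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]

    # Fit an LDA model; the number of topics is a modeling choice.
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, passes=10)

    # For each document, report the most probable topic and the words most
    # closely associated with it -- analogous to identifying "property" as
    # the dominant topic of an article.
    for i, bow in enumerate(corpus):
        topic_id, prob = max(lda.get_document_topics(bow), key=lambda t: t[1])
        top_words = [w for w, _ in lda.show_topic(topic_id, topn=3)]
        print(f"doc {i}: topic {topic_id} ({prob:.2f}) -> {top_words}")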

All published law review articles which cite “The Right to Privacy” (some 3,500 articles) are being converted to plain text. “The Right to Privacy” was selected as the focal point of the document corpus because it is the original published scholarly call for a formal legal right to privacy in the United States; hence, the vast majority of privacy law scholarship cites to it. Probabilistic topic modeling using latent Dirichlet allocation is being applied to the document corpus in time slices to reveal the evolution of fundamental privacy law concepts expressed in the legal literature published from 1890 through 2012. Studies in different disciplines have demonstrated the ability of latent Dirichlet allocation to analyze the rich underlying structures of a particular domain—depicting emerging and sustained trends in a given discourse. The ultimate goal of this project is to identify the fundamental conceptual structure of privacy law in the United States as reflected by over a century of legal scholarly work.
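Along the same lines, here is a minimal sketch of the time-sliced analysis, assuming a hypothetical list of (year, text) pairs standing in for the converted corpus: group the articles by decade of publication and fit a separate LDA model to each slice so that shifts in the dominant vocabulary can be traced over time. The article list, slice boundaries, and model parameters are illustrative assumptions rather than the project’s actual settings.

    # A minimal sketch of time-sliced LDA over a hypothetical corpus.
    from collections import defaultdict
    from gensim import corpora, models
    from gensim.utils import simple_preprocess

    articles = [  # illustrative stand-ins for the 3,500-article corpus
        (1905, "privacy as an intangible property right of the individual"),
        (1965, "the constitutional right to privacy and personal autonomy"),
        (2005, "data protection, online tracking and information privacy"),
    ]

    # Group tokenized articles by decade of publication.
    slices = defaultdict(list)
    for year, text in articles:
        slices[(year // 10) * 10].append(simple_preprocess(text))

    # Fit one model per decade and print its leading topic's top words.
    for decade in sorted(slices):
        docs = slices[decade]
        dictionary = corpora.Dictionary(docs)
        corpus = [dictionary.doc2bow(d) for d in docs]
        lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, passes=5)
        print(decade, [w for w, _ in lda.show_topic(0, topn=5)])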

The proposed paper will provide an overview of the topic modeling process using latent Dirichlet allocation to explain and validate the underlying analytical methodology. Preliminary results of applying the statistical modeling to the law scholarship document corpus as of May 2013 will be presented and discussed.

Randal C. Picker, Structuring Competition in Privacy

PLSC 2013

Workshop draft abstract:

Nature and regulators are very much alike: both abhor a vacuum. The shift of life online has brought with it an unusual power to accumulate information about consumers, though with the emergence of new technologies such as facial recognition, the opportunities for information gathering in physical space are increasing as well. We should expect substantial changes in technology to give rise to conflict over the appropriate boundaries of that technology and calls for regulation.

In 2012, U.S. regulators produced a series of reports in an effort to define a framework for this regulation. In February 2012, the White House issued its consumer data privacy report with a call for a new Consumer Privacy Bill of Rights. The U.S. Federal Trade Commission has been particularly active this year, issuing a major consumer privacy report in March 2012 and subsequent reports in September and October 2012 offering mobile app guidelines and addressing the use of facial recognition technology. And the FTC has done much more than just issue reports: the breadth of the underlying statutes under which the FTC operates, and the natural mistakes that firms will make in a rapidly developing industry, have made it possible for the FTC to move aggressively into direct privacy regulation.

And it isn’t just the U.S. federal government that is moving forward on privacy regulation. Without even considering developments in the EU (and with most of what is going on in privacy occurring on the Internet or through apps on devices like tablets and smartphones, we really need to focus on the world market), California has moved to extend its online privacy regime to apps. With 1 million mobile apps on the iOS and Android platforms and a potential fine of $2,500 per non-complying download, California may have figured out how to solve its budget problems.

In Section I of the paper, I consider how the FTC currently regulates privacy. Much of the FTC’s direct regulatory effort to date has piggybacked on the privacy disclosures required by other law or made voluntarily by firms themselves. This puts the FTC in the posture of engaging in purely after-the-fact, one-by-one regulation of firms and doesn’t push the FTC to articulate broader standards. And many of the situations are resolved through settlements, such that the underlying issues aren’t tested through litigation. The FTC could issue substantive rules as it has done in the past in other areas, but as Congress has repeatedly amended the statutes to create a demanding standard for many of the rules that the FTC might issue, the FTC has moved to using the report process described above. The reports sidestep the statutory standards that the FTC would otherwise face and make it possible for the FTC to issue non-rule “rules,” rules that the FTC hopes will shape the relevant industry but without obvious direct legal effect. Of course, the line between actual rules and faux rules may not be clear to all involved, and the FTC may indeed welcome that ambiguity.

In Section II of the paper, I focus on what we should expect in a competitive market for consumer data. Privacy-attentive consumers will be presented with choices that they will attend to, either through direct competition over data limits or through personalization signals that enable choices. Even privacy-insensitive consumers will benefit from competition, as firms will value the data that those consumers provide and will offer additional value to attract them to their services. But we should expect firms to overconsume data, as it were: to capture data from privacy-inattentive consumers even where the value to the firm of receiving that data is less than the value the consumers place on the data they give up.
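Stated as a simple inequality (the notation is ours, not the paper’s), the overconsumption claim is that capture occurs whenever the firm gains, even when the consumer loses more:

    % v_f: value to the firm of capturing a given piece of data
    % c_u: value the (privacy-inattentive) consumer places on that data
    \[
      \text{data is captured whenever } v_f > 0,
      \qquad
      \text{yet capture is inefficient whenever } v_f < c_u .
    \]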

In Section III of the paper, I consider mechanisms for addressing the overcapture of data from privacy-inattentive consumers. Of course, those may just be consumers who don’t value privacy very much, so I look for metrics to assess whether consumers are interacting with transparency tools (such as data collection icons and personalization signals) in the way that we might expect. I then turn to considering the tools available to the government to perturb how consumers interact with these privacy signals. The government could require online services and apps to disclose more information (think the online equivalent of the FTC’s octane or home-insulation rules), but a less centralized approach would be for the government itself to build disclosure apps available for downloading.

In Section IV of the paper, I consider a core part of the three-part framework put forward by the FTC in its March 2012 privacy report, namely the requirement of privacy by design. I consider to what extent that idea should limit how app developers charge for their products. If that turns out not to be a fruitful analysis (and I think that it is not), the analysis may highlight problems with the concept of privacy by design. I then turn to one example of how privacy by design has played out in practice, namely, the setting of the do-not-track default in Microsoft’s Internet Explorer 10. Microsoft has announced a setting which might be thought to be required by privacy by design, and yet it has faced hostility from many quarters for doing so.

Laura Moy, Social Values in the Era of Big Data: When Does “Okay by Me” Become Not Okay for Society?

PLSC 2013

Workshop draft abstract:

A web surfer consents to sharing information with a website in exchange for receiving targeted advertisements; a shopper consents to sharing information with her grocery store in exchange for receiving selected discounts on goods; and a patient consents to sharing information with her health insurance company in exchange for the benefits she enjoys. On an individual basis, these three people appear to have experienced no violation of their privacy interests.

But what if the website uses aggregate information about its visitors to display targeted content that keeps web surfers in predesigned “boxes”; the grocery store uses aggregate information about its shoppers to offer big spenders loss leaders that are subsidized by price hikes on staples; and the insurance provider uses aggregate information about patients to raise premiums on those it predicts are genetically predisposed to developing chronic illnesses? If the three people introduced above knew all of this, would they still feel confident that their personal information was being used in a way consistent with their expectations?

I argue that for a substantial number of people the answer to this question is “no.” Accordingly, while the primary use of each individual’s information by the collecting entity may be perfectly consistent with that individual’s expectations and therefore not present a privacy violation, secondary uses of that information aggregated with information about others may be inconsistent with society’s expectations and therefore present violations. This idea addresses (and answers in the resounding affirmative) a question that arose at last year’s PLSC—a writing partner and I presented a paper at that time on the grocery store scenario briefly touched on above, and several commenters asked, “Is this even a privacy issue?”

Drawing on Priscilla Regan’s work on the social importance of privacy and Helen Nissenbaum’s theory of contextual integrity, I suggest a framework for identifying particular information uses or flows as possible violations of the social privacy value, using examples to illustrate the framework. I then explore the policy implications of the framework and seek suggestions for determining when potential benefits associated with a particular use outweigh the cost to society.

Aleecia M. McDonald and Wendy Seltzer, Quantifying the Internet’s Erasers: Analysis through Chilling Effects Data

PLSC 2013

Workshop draft abstract:

The Internet has made the childhood threat of a “permanent record” real (though it hasn’t necessarily made that record accurate), causing many to ask whether there’s any hope for privacy once information has entered search engines. In Europe, discussion centers on a “right to be forgotten,” which relies not on removing data, but rather on de-indexing it. In the US, a 2012 bill proposed an “Internet Eraser Button,” with a similar theme of de-indexing data for children.

Opponents of these de-indexing approaches have asserted they are technically impossible. And yet, de-indexing sounds remarkably similar to how we already handle some information that corporations would prefer not to share — claimed copyright infringements. The DMCA encourages “information location tools,” including search engines, to respond to takedown notices by removing links, even if the alleged infringement remains online — much as an Eraser Button might operate to de-index data for individuals.
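The toy sketch below illustrates that de-indexing mechanism under our own illustrative data structures, not any search engine’s actual takedown workflow: the underlying page stays online, but the index simply stops returning links to it.

    # A minimal sketch of de-indexing: the page is not deleted, it just no
    # longer appears in search results. Names and URLs are illustrative.
    index = {
        "jane doe": ["https://example.com/old-post", "https://example.com/profile"],
        "acme corp": ["https://example.com/press"],
    }

    # URLs subject to a takedown (or eraser-button) request.
    delisted = {"https://example.com/old-post"}

    def search(query: str) -> list[str]:
        """Return indexed results for a query, omitting de-listed URLs."""
        return [url for url in index.get(query.lower(), []) if url not in delisted]

    # The de-listed page disappears from results, not from the web.
    print(search("Jane Doe"))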

This form of notice and takedown has been operating for more than a decade, and the Chilling Effects Clearinghouse maintains records of such de-index requests to Google and Twitter. In this paper, we use Chilling Effects data to quantify the growth of copyright takedown notices over time and put forward a set of competing possibilities for comparison with the scale of the problem an Internet Eraser Button for privacy might address. We then project what the results might mean for the technical viability or intractability of addressing individual privacy harms through de-linking.
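As a rough sketch of the quantification step, assuming a hypothetical CSV export of Chilling Effects notices with a date column (the file and column names are our assumptions, not the Clearinghouse’s actual schema), notices could be tallied per year as follows.

    # Count copyright takedown notices per year from a hypothetical CSV export.
    import csv
    from collections import Counter

    counts = Counter()
    with open("chilling_effects_notices.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            year = row["date_received"][:4]   # e.g. "2012-06-01" -> "2012"
            counts[year] += 1

    for year in sorted(counts):
        print(year, counts[year])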

Jaap-Henk Hoepman, Privacy Design Strategies

PLSC 2013

Workshop draft abstract:

Privacy by design is the philosophy of protecting privacy throughout the process of technological development, that is, from the conception of a new technology up to its realisation. The idea is that when privacy is an integral part of the technological development process, the final product protects privacy throughout its entire life cycle.

In the context of developing IT systems, this implies that privacy protection is a system requirement that must be treated like any other (non-) functional requirement. In particular, privacy protection (together with all other requirements) will determine the design and implementation of the system. To support privacy by design, we therefore need guiding principles to support the inclusion of privacy requirements throughout the system development life cycle, in particular during the concept development, analysis, design and implementation phases. Unfortunately there is so far little experience in applying privacy by design in engineering. This work aims to contribute to closing this gap.

An important methodology during the design phase is the application of so-called software design patterns. These design patterns refine the system architecture to achieve certain functional requirements within a given set of constraints. However, such design patterns do not necessarily play a role in the earlier concept development and analysis phases of the software development cycle. The main reason is that such design patterns are already quite detailed in nature, and more geared towards solving an implementation problem. To guide the development team in the earlier stages, privacy design strategies at a higher level of abstraction are needed.

In this work we define the notion of a privacy design strategy, and derive the following eight privacy design strategies, based on both the legal and the technical perspective on privacy protection: minimise, hide, separate, aggregate, inform, control, enforce, and demonstrate. We validate our approach by showing how these strategies apply to both an information storage and an information flow type of system, and by comparing our classification to existing privacy frameworks. We believe these strategies help to support privacy by design throughout the full software development life cycle, even before the design phase. They make explicit which high-level strategies can be applied to protect privacy when drafting the first concepts from which a new information system will be derived.
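To make the abstraction concrete, the sketch below applies two of the strategies named above, minimise and aggregate, to a toy analytics pipeline; the field names and functions are illustrative assumptions, not examples taken from the paper.

    # A minimal illustration of the "minimise" and "aggregate" strategies.
    from statistics import mean

    raw_events = [
        {"user_id": "u1", "age": 34, "city": "Utrecht", "page": "/home"},
        {"user_id": "u2", "age": 29, "city": "Nijmegen", "page": "/home"},
    ]

    def minimise(event: dict) -> dict:
        """Minimise: keep only the fields the stated purpose actually requires."""
        return {"age": event["age"], "page": event["page"]}

    def aggregate(events: list[dict]) -> dict:
        """Aggregate: report group-level statistics instead of per-person records."""
        return {"visitors": len(events), "mean_age": mean(e["age"] for e in events)}

    print(aggregate([minimise(e) for e in raw_events]))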

Rebecca Green, Post Election Transparency

PLSC 2013

Workshop draft abstract:

With this country’s commitment to the secret ballot as a notable exception, state and federal statutes direct election administrators to allow broad public oversight of the election process at every stage. This article examines the increasingly controversial topic of transparency after ballots are cast, with particular emphasis on the complexities new forms of voting bring. What level of access should the public be granted to review election outcomes? Are voted ballots public property to which candidates and citizens should be allowed access on demand? Should materials associated with the voting process, like provisional and absentee ballot envelopes, lists of voters who vote absentee or provisionally, and poll book records, be classified as accessible public records? If not, what principles should constrain public access to election materials after the polls have closed? Should candidate access to post-election materials be treated differently than access for ordinary citizens? This article will demonstrate that current statutory frameworks for post-election transparency are lacking, and will suggest alternatives for how state election officials should respond to access requests after Election Day has passed.

Nathan Good and Nick Doty, If privacy is the answer, who are the users?

PLSC 2013

Workshop draft abstract:

When designers create a product or design, they first seek to understand who will use the product and what their needs are.  Without this understanding, it is difficult for designers to use their tools effectively.  When designing with privacy in mind, designers face particularly challenging questions.  What are users’ privacy goals?  How can designers describe them?  How can they incorporate them in actual products or designs? And who are they designing the privacy protections for? Are they to address policy concerns, or are they to address latent consumer concerns?

Today many companies have created products to address audiences’ privacy concerns, and there is a history of larger companies reacting to privacy concerns and incorporating privacy and usability designs into their products.  Despite this, there is still a pressing concern that the additional privacy measures are inadequate for users and are doing little to address consumers’ concerns about privacy.

There is also a great deal of interest on the policy side in creating privacy safeguards for consumers, as well as companies and government agencies that are enacting new privacy standards. Protecting the backend of computer systems is one approach (data encryption, etc.), while usability design has been cited as an area of increasing interest in protecting the “front end” of systems and facilitating choice and transparency for consumers. For usability designers, addressing the question of audience and goals gives them concrete steps to use when implementing products, and helps delineate what is outside the scope of designers’ concerns and possibly addressed through policy and other means.  As a result, user goals and stated policy goals with respect to privacy are sometimes in conflict. Consequently, usability professionals in practice are confronted with a unique set of challenges when attempting to design with privacy in mind. Cookie management, for example, is an area that is difficult to design for, as there is a wide gap between consumers’ basic understanding and concerns and the evolving policy demands for consumer control. This paper argues that a clearer understanding of audience would help delineate what is the role of the designer and what is the role of policy and backend protections.

This paper examines existing and evolving design practices and products that are designed to protect a user’s privacy, as well as products that have privacy implications. It categorizes the use cases and privacy concerns, and attempts to delineate where design has succeeded and can succeed, and where it struggles. Additionally, it examines the limits of use cases with respect to privacy, and suggests alternatives and new directions for policy makers and designers with respect to designing consumer-facing solutions for privacy concerns.

Nick Doty and Deirdre K. Mulligan, Standardizing Do Not Track: How Participants See the Process

PLSC 2013

Workshop draft abstract:

Who really participates in the DNT standardization process? What kinds of positions are represented and what kinds of people are actively involved? How do those participants see the process? And what defines the process? (Beyond the World Wide Web Consortium’s Tracking Protection Working Group, discussions at various levels of formality take place in a number of distinct fora.) As part of a larger project exploring how engineers and standards development participants make decisions that affect privacy, we discuss initial results from interviews, textual analysis, and participant observation.

While the concerns regarding procedural and substantive fairness we highlighted previously are themselves raised by participants and observers in the process, we also identify concerns around trust and communication. Finally, participants’ statements support a particular theory of values in design, with its own challenges and opportunities for privacy-by-design.

Caspar Bowden, Don’t put your Data in the Cloud, Mrs. Reding

PLSC 2013

Workshop draft abstract:

This multidisciplinary paper assesses the privacy situation of European citizens when their personal data is transferred to Cloud computing systems under United States jurisdiction, with particular reference to the FISA Amendment Act of 2008 (FISAAA). The technical varieties of Cloud computing are analysed in terms of the 1995 EU Data Protection Directive and the proposed new Regulation, and the mechanisms envisaged for legitimating transfers examined, together with the origins of these “derogations” in the Council of Europe’s Convention 108.

The analysis of the United States position begins with precedent rulings on the inapplicability of 4th Amendment protections for non-US persons located outside the US, in the light of political and media controversy attending the “warrantless wiretapping” affair and whistle-blower allegations of mass-surveillance programs illegally impacting US persons. The terms of FISAAA §1881 (now also known as FISA section 702) are reviewed with particular attention to the inclusion of obligations on providers of “remote computing services” (absent from the interim Protect America Act 2007), the definition of “foreign intelligence information”, and the concept of ex post facto “minimization” of the privacy consequences for US persons. A pattern of bipartisan secrecy and redaction of documents and court rulings around the time of FISAAA’s passage in 2008 and renewal in 2012 is scrutinized together with propaganda efforts by US government and industry to neutralize foreign concerns over Cloud surveillance powers, which strongly indicate a covert policy of concealment by omission, misdirection, and specious reasoning. Alternative technical means of conducting very large scale surveillance of the Cloud are reviewed, as well as architectural specifications emerging from standards bodies. Specific modalities of Cloud surveillance are distinguished from ordinary interception of communications, and brief comparisons made with what can be inferred about “secret interpretations” of section 215 of the USA PATRIOT Act. The EU/US Safe Harbour Agreement of 2000, and in particular the new notion in the EU Regulation of “Binding Corporate Rules for data processors” which was ostensibly devised to be suitable for Cloud transfers, are then critiqued as vulnerable to foreseeable relevant risks, and anomalies in the Opinions of regulatory authorities are highlighted.

Finally, the jurisprudence of the European Court of Human Rights is reviewed to locate certain lacunae in the tests for lawfulness of secret strategic communications surveillance thus far, arising from universal versus nationality-based conceptions of human rights. Nevertheless, there are obligations on signatory states to provide effective measures to protect the rights of those within their jurisdiction, irrespective of unresolved conflicts of international public law. The conclusion is that transfers of Europeans’ data to US-controlled Clouds are impermissible, at the very least absent repeal of certain clauses of FISAAA and new binding treaties offering explicit guarantees. Recommendations are offered to the European Parliament for measures which could have some mitigating dissuasive and deterrent effects, with reflections on the fractured governance of EU privacy by institutions which either failed to detect, or acquiesced in, the construction of complex legal antinomies over several years.

*With apologies to http://www.leoslyrics.com/noel-coward/dont-put-your-daughter-on-the-stage-mrs-worthington-lyrics/