Archives

Ira Rubinstein and Nathan Good, Privacy by Design: A Counterfactual Analysis of Google and Facebook Privacy Incidents

Comment by: Danny Weitzner

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2128146

Workshop draft abstract:

Both U.S. and E.U. regulators have embraced privacy by design as a core element of their ongoing revision of current privacy laws. This idea of building in privacy from the outset is commonsensical, yet it remains rather vague: surely it should enhance consumer privacy, but what does it mean and how does it really work? In particular, is privacy by design simply a matter of developing products with greater purpose or intention (as opposed to haphazardly), or of following specific design principles? What do regulators expect to accomplish by encouraging firms to implement privacy by design, and how would this benefit consumers? This paper seeks to answer these questions by presenting case studies of high-profile privacy incidents involving Google and Facebook and analyzing them against a set of notional design principles.

Based on news accounts, company statements, and detailed regulatory reports, we explore these privacy incidents to determine whether the two firms might have avoided them if they had implemented privacy by design. As a prerequisite to this counterfactual analysis, we offer the first comprehensive evaluation and critique of existing approaches to privacy by design, including those put forward by regulators in Canada, the U.S., and Europe, by private sector firms, and by several non-profit privacy organizations. Despite the somewhat speculative nature of this “what if” analysis, we believe that it reveals the strengths and weaknesses of privacy by design and thereby helps inform ongoing regulatory debates.

This paper is in three parts. Part I is a case study of nine privacy incidents: three from Google (Gmail, Street View, and Buzz) and six from Facebook (News Feed, Beacon, Photo Tagging, Facebook Apps, Instant Personalization, and a number of recent changes in privacy policies and settings).

Part II identifies the design principles on which the counterfactual analysis relies. It proceeds in four steps. First, we argue that existing approaches to privacy by design are best understood as a hybrid with up to three components: Fair Information Practices (FIPs), accountability, and engineering (including design). Ironically, we find that design is the most neglected element of privacy by design. We also show that existing regulatory approaches pay insufficient attention to the business factors firms consider in making design decisions. Second, we point out the shortcomings of FIPs, especially the stripped-down versions that focus mainly on notice-and-choice. Third, we suggest that because all of the existing approaches to privacy by design incorporate FIPs and the definition of privacy on which they depend, namely privacy as a form of individual control over personal information, they are ill-suited to addressing the privacy concerns associated with the voluntary disclosure of personal data in Web 2.0 services generally and especially in social networking services such as Facebook. Finally, we take a closer look at privacy engineering and especially at interface design. We find that the latter is inspired not by theories of privacy as control but rather by an understanding of privacy in terms of social interaction, as developed in the 1970s by Irwin Altman, a social psychologist, and more recently by Helen Nissenbaum, a philosopher of technology.

Part III applies the design principles identified in Part II to the nine case studies, showing what went wrong with Google and Facebook and what they might have done differently. Specifically, we try to identify how better design practices might have assisted both firms in avoiding privacy violations and consumer harms, and we discuss the regulatory implications of our findings. We also offer some preliminary thoughts on how design practitioners should think about privacy by design in their everyday work.

Ira Rubinstein, Regulating Privacy by Design

Comment by: Marilyn Prosch & Ken Anderson

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1837862

Workshop draft abstract:

Privacy officials in Europe and North America are embracing Privacy by Design (PbD) as never before. PbD is the idea that “building in” privacy throughout the design and development of products and services achieves better results than “bolting it on” as an afterthought. However enticing this idea may be, what does it mean? In the U.S., a very recent FTC Staff Report makes PbD one of three main components of a new privacy framework. According to the FTC, firms should adopt PbD by incorporating substantive protections into their development practices (such as data security, reasonable collection limitations, sound retention practices, and data accuracy) and implementing comprehensive data management procedures; the latter may also require a privacy impact assessment (PIA) where appropriate. In contrast, European privacy officials view PbD as also requiring the broad adoption of Privacy Enhancing Technologies (PETs), especially PETs that shield or reduce identification or minimize the collection of personal data. Despite the enthusiasm of privacy regulators, PbD, PIAs, and PETs have yet to achieve widespread acceptance in the marketplace.

There are many reasons for this, not the least of which is a lack of clarity over the meaning of these terms, how they relate to one another, and what rules apply when a firm undertakes the PbD approach. In addition, Internet firms derive much of their profit from the collection and use of PII, and therefore PbD may disrupt profitable activities or new business ventures. Although the European Commission sponsored a study of the economic costs and benefits of PETs, and the UK is looking at how to improve the business case for investing in PbD, the available evidence does not support the view that PbD pays for itself (except for a small group of firms that must protect privacy to maintain highly valued brands and avoid reputational damage). In the meantime, the regulatory implications of PbD are murky at best, not only for firms that might adopt this approach but for free riders as well. Indeed, discussion of the economic or regulatory incentives for PbD is sorely lacking in the FTC report.

This Article seeks to clarify the meaning of PbD and thereby suggest how privacy officials might develop appropriate regulatory incentives that offset the certain economic costs and uncertain privacy benefits of this new approach. It begins by developing an analytic framework around two sets of distinctions. First, it classifies PETs as substitutes or complements depending on their interaction with data protection or privacy law: substitute PETs aim for zero disclosure of PII, whereas complementary PETs enable greater user control over personal data through enhanced notice and choice. Second, it distinguishes two forms of PbD: one in which firms build in privacy protections by using PETs, and another in which they rely on engineering approaches and related tools that implement FIPPs throughout both the product development and data management lifecycles. Building on these distinctions, and using targeted advertising as its primary illustration, it then suggests how regulators might achieve better success in promoting the use of PbD by 1) identifying best practices in privacy design and development, including prohibited practices, required practices, and recommended practices; and 2) situating best practices within an innovative regulatory framework that a) promotes experimentation with new technologies and engineering practices; b) encourages regulatory agreements through stakeholder representation, face-to-face negotiations, and consensus-based decision making; and c) supports flexible, incentive-driven safe harbor mechanisms as defined by (newly enacted) privacy legislation.

Ira Rubinstein, Anonymity Reconsidered

Comment by: Mary Culnan

PLSC 2009

Workshop draft abstract:

According to the famous New Yorker cartoon, “On the Internet, nobody knows you’re a dog.” Today, about 15 years later, this caption is less apt; if “they” don’t know who you are, they at least know what brand of dog food you prefer and who you run with. Internet anonymity remains very problematic. On the one hand, many privacy experts would say that anonymity is defunct, citing as evidence the increasing use of the Internet for data mining and surveillance purposes. On the other, a wide range of commentators are equally troubled by the growing lack of trust on the Internet, and many view the absence of a native “identity layer” (i.e., a reliable way of identifying the individuals with whom we communicate and the Web sites to which we connect) as a leading cause of this problem. While the need for stronger security and better online identity mechanisms grows more apparent, the design and implementation of identity systems inevitably raise longstanding concerns over the loss of privacy and civil liberties. Furthermore, with both beneficial and harmful uses, the social value of Internet anonymity remains highly contested. For many, this tension between anonymity and identity seems irresolvable, leading to vague calls for balancing solutions or for simply preserving the status quo because proposed changes would only make matters worse. This paper offers a fresh look at some of the underlying assumptions of the identity-anonymity standoff by re-examining the meaning of anonymity and raising questions about three related claims: 1) anonymity is the default in cyberspace; 2) anonymity is essential to protecting online privacy; and 3) the First Amendment confers a right of anonymity. Based on the results of this analysis, the paper concludes by critically evaluating the recently issued CSIS report entitled “Securing Cyberspace for the 44th Presidency,” which includes seven major recommendations, one of which is that the government require strong authentication for access to critical infrastructure.