Ira Rubinstein and Nathan Good, Privacy by Design: A Counterfactual Analysis of Google and Facebook Privacy Incidents
Comment by: Danny Weitzner
Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2128146
Workshop draft abstract:
Both U.S. and E.U. regulators have embraced privacy by design as a core element of their ongoing revision of current privacy laws. The idea of building in privacy from the outset is commonsensical, yet it remains rather vague: surely it should enhance consumer privacy, but what does it mean, and how does it work in practice? In particular, is privacy by design simply a matter of developing products with greater purpose or intention (as opposed to haphazardly), or of following specific design principles? What do regulators expect to accomplish by encouraging firms to implement privacy by design, and how would this benefit consumers? This paper seeks to answer these questions by presenting case studies of high-profile privacy incidents involving Google and Facebook and analyzing them against a set of notional design principles.
Based on news accounts, company statements, and detailed regulatory reports, we explore these privacy incidents to determine whether the two firms might have avoided them had they implemented privacy by design. As a prerequisite to this counterfactual analysis, we offer the first comprehensive evaluation and critique of existing approaches to privacy by design, including those put forward by regulators in Canada, the U.S., and Europe, by private-sector firms, and by several non-profit privacy organizations. Despite the somewhat speculative nature of this “what if” analysis, we believe it reveals the strengths and weaknesses of privacy by design and thereby helps inform ongoing regulatory debates.
This paper is in three parts. Part I is a case study of nine privacy incidents: three from Google (Gmail, StreetView, and Buzz) and six from Facebook (News Feed, Beacon, Photo Tagging, Facebook Apps, Instant Personalization, and a number of recent changes in privacy policies and settings). Part II identifies the design principles on which the counterfactual analysis relies. It proceeds in four steps. First, we argue that existing approaches to privacy by design are best understood as a hybrid with up to three components: Fair Information Practices (FIPs), accountability, and engineering (including design). Ironically, we find that design is the most neglected element of privacy by design. We also show that existing regulatory approaches pay insufficient attention to the business factors firms consider in making design decisions. Second, we point out the shortcomings of FIPs, especially the stripped-down versions that focus mainly on notice-and-choice. Third, we suggest that because all of the existing approaches to privacy by design incorporate FIPs and the definition of privacy on which they depend (namely, privacy as a form of individual control over personal information), they are ill-suited to addressing the privacy concerns associated with the voluntary disclosure of personal data in Web 2.0 services generally, and especially in social networking services such as Facebook. Finally, we take a closer look at privacy engineering, and especially at interface design. We find that the latter is inspired not by theories of privacy as control but rather by an understanding of privacy in terms of social interaction, as developed in the 1970s by the social psychologist Irwin Altman and more recently by the philosopher of technology Helen Nissenbaum. Part III applies the design principles identified in Part II to the nine case studies, showing what went wrong at Google and Facebook and what they might have done differently.
Specifically, we try to identify how better design practices might have helped both firms avoid privacy violations and consumer harms, and we discuss the regulatory implications of our findings. We also offer some preliminary thoughts on how design practitioners should think about privacy by design in their everyday work.