Archives

Nathan Good and Nick Doty, If privacy is the answer, who are the users?

PLSC 2013

Workshop draft abstract:

When designers create a product, they first seek to understand who will use it and what their needs are. Without this understanding, it is difficult for designers to use their tools effectively. When designing with privacy in mind, designers face particularly challenging questions. What are users’ privacy goals? How can designers describe them? How can they incorporate them into actual products or designs? And who are the privacy protections being designed for? Are they meant to address policy concerns, or latent consumer concerns?

Today many companies have created products to address their audiences’ privacy concerns, and larger companies have a history of reacting to privacy concerns by incorporating privacy and usability designs into their products. Despite this, there is still a pressing concern that these additional privacy measures are inadequate and do little to address consumers’ concerns about privacy.

There is also a great deal of interest on the policy side in creating privacy safeguards for consumers, and companies and government agencies are enacting new privacy standards. Protecting the backend of computer systems is one approach (data encryption, etc.), while usability design has been cited as an area of increasing interest for protecting the “front end” of systems and facilitating choice and transparency for consumers. For usability designers, addressing the question of audience and goals provides concrete steps for implementing products, and it helps delineate what falls outside the scope of designers’ concerns and might instead be addressed through policy and other means. User goals and stated policy goals with respect to privacy are sometimes in conflict, so usability professionals in practice confront a unique set of challenges when attempting to design with privacy in mind. Cookie management, for example, is an area that is difficult to design for, as there is a wide gap between consumers’ basic understanding and concerns and the evolving policy demands for consumer control. This paper argues that a clearer understanding of audience would help delineate the role of the designer from the role of policy and backend protections.

This paper examines existing and evolving design practices and products that are designed to protect a user’s privacy, as well as products that have privacy implications. It categorizes the relevant use cases and privacy concerns and attempts to delineate where design has succeeded and can succeed, and where it struggles. Additionally, it examines the limits of use cases with respect to privacy and suggests alternatives and new directions for policy makers and designers in creating consumer-facing solutions to privacy concerns.

Ira Rubinstein and Nathan Good, Privacy by Design: A Counterfactual Analysis of Google and Facebook Privacy Incidents

Comment by: Danny Weitzner

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2128146

Workshop draft abstract:

Both U.S. and E.U. regulators have embraced privacy by design as a core element of their ongoing revision of current privacy laws. The idea of building in privacy from the outset is commonsensical, yet it remains rather vague: surely it should enhance consumer privacy, but what does it mean, and how does it really work? In particular, is privacy by design simply a matter of developing products with greater purpose or intention (as opposed to haphazardly), or of following specific design principles? What do regulators expect to accomplish by encouraging firms to implement privacy by design, and how would this benefit consumers? This paper seeks to answer these questions by presenting case studies of high-profile privacy incidents involving Google and Facebook and analyzing them against a set of notional design principles.

Based on news accounts, company statements, and detailed regulatory reports, we explore these privacy incidents to determine whether the two firms might have avoided them had they implemented privacy by design. As a prerequisite to this counterfactual analysis, we offer the first comprehensive evaluation and critique of existing approaches to privacy by design, including those put forward by regulators in Canada, the U.S., and Europe, by private sector firms, and by several non-profit privacy organizations. Despite the somewhat speculative nature of this “what if” analysis, we believe it reveals the strengths and weaknesses of privacy by design and thereby helps inform ongoing regulatory debates.

This paper is in three parts. Part I is a case study of nine privacy incidents: three from Google (Gmail, Street View, and Buzz) and six from Facebook (News Feed, Beacon, Photo Tagging, Facebook Apps, Instant Personalization, and a number of recent changes in privacy policies and settings).

Part II identifies the design principles on which the counterfactual analysis relies. It proceeds in four steps. First, we argue that existing approaches to privacy by design are best understood as a hybrid with up to three components: Fair Information Practices (FIPs), accountability, and engineering (including design). Ironically, we find that design is the most neglected element of privacy by design. We also show that existing regulatory approaches pay insufficient attention to the business factors firms consider in making design decisions. Second, we point out the shortcomings of FIPs, especially stripped-down versions that focus mainly on notice-and-choice. Third, we suggest that because all of the existing approaches to privacy by design incorporate FIPs and the definition of privacy on which they depend, namely privacy as a form of individual control over personal information, they are ill-suited to addressing the privacy concerns associated with the voluntary disclosure of personal data in Web 2.0 services generally, and especially in social networking services such as Facebook. Finally, we take a closer look at privacy engineering, and especially at interface design. We find that the latter is inspired not by theories of privacy as control but rather by an understanding of privacy in terms of social interaction, as developed in the 1970s by Irwin Altman, a social psychologist, and more recently by Helen Nissenbaum, a philosopher of technology.

Part III applies the design principles identified in Part II to the nine case studies, showing what went wrong at Google and Facebook and what they might have done differently. Specifically, we try to identify how better design practices might have helped both firms avoid privacy violations and consumer harms, and we discuss the regulatory implications of our findings. We also offer some preliminary thoughts on how design practitioners should think about privacy by design in their everyday work.