Archives

Wendy Seltzer, Privacy, Feedback, and Option Value

Comment by: Michael Zimmer

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2032100

Workshop draft abstract:

We have confused intuitions about privacy in public. Sometimes, relying on a rationalist paradigm of secrecy, we say “if you don’t want something published to the world, don’t do it where others can see”: don’t post to Facebook, don’t converse on the public streets. Yet other times, drawing upon experience in natural and constructed social environments, we find that we can have productive interactions in a context of relative, not absolute, privacy: privacy is not binary.

Over time, we have worked out privacy-preserving fixes in architecture, norms, and law: we build walls and windowshades; develop understandings of friendship, trust, and confidentiality; and protect some of these boundaries with the Fourth Amendment, statute, regulation, tort, and contract. The environment provides feedback mechanisms; we adapt to the disclosure problems we experience (individually or societally). We move conversations inside, scold or drop untrustworthy friends, rewrite statutes. Feedback lets us find the boundaries of private contexts and probe the thickness of their membranes.

Technological change throws our intuitions off when we don’t see its privacy impact on a meaningful timescale. We get wrong, limited, or misleading feedback about the publicity of our actions and interactions online and offline. Even if we learn of the possibility of online profiling or constant location tracking, we fail to internalize this notice of publicity because it does not match our in-the-moment experience of semi-privacy. We thus end up with divergence between our understanding and our experience of privacy.

Prior scholarship has approached privacy in public from a few angles. It has identified various interests that fall under the heading of privacy (dignity, confidentiality, secrecy, presentation of self, harm); it has cataloged the legal responses, giving explanations of law’s development and suggestions for its further adaptation. Scholars have theorized privacy, moving beyond the binary of “secret or not secret” to offer contextual and experiential gradients. [1] Often, this scholarship reviews specific problems and situates them in a larger context. [2] User studies and economic analysis have improved our understanding of the privacy experience, including the gap between expectations and reality. [3] Computer science and information theory help us quantify some of the elements we refer to as privacy. [4] Finally, the design and systems-engineering literature suggests that feedback mechanisms play an important role in the usability and comprehensibility of individual objects and interfaces and in the ability of a system as a whole to reach stable equilibrium. [5]

This article aims to do three things:

1. Introduce a notion of privacy-feedback to bridge the gap between contextual privacy and the dominant secrecy paradigm. Privacy-feedback, through design and social interaction, enables individuals to gauge the publicity of their activities and to modulate their behavior in response.

2. Apply the tools of option value to explain the “harm” of technological and contextual breaches of privacy. The financial modeling of real options helps to describe and quantify the value of choice amid uncertainty. Even without knowing all the potential consequences of data misuse, or which ones will in fact come to pass, we can say that unconsented-to data collection deprives the individual of options: to disclose on his or her own terms, and to act inconsistently with disclosed information (a brief valuation sketch follows this list).

3. Propose a broader framework for architectural regulation, in which technological feedback can enable individual self-regulation to serve as an alternative to command-and-control legal regulation. Feedback then provides a metric for evaluating proposed privacy fixes: does the fix help its users get meaningful feedback about the degree of privacy of their actions? Does it enable them to preserve disclosure options?
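
To make the option-value framing concrete, here is a minimal, hypothetical sketch (not drawn from the paper) of a one-period binomial real-option valuation, the standard finance tool alluded to in point 2 above; every name and number in it is an illustrative assumption.

```python
# Minimal one-period binomial sketch of option value (hypothetical numbers).
# The "underlying" is the value, to the individual, of disclosing a piece of
# information later, on his or her own terms.

def binomial_option_value(value_now, up, down, risk_free, payoff):
    """Value today of keeping a choice open for one more period.

    value_now -- current value of the underlying decision
    up, down  -- multiplicative up/down moves over the period
    risk_free -- one-period risk-free growth factor (e.g., 1.02)
    payoff    -- function mapping a future value to the option's payoff
    """
    q = (risk_free - down) / (up - down)           # risk-neutral probability of "up"
    expected = q * payoff(value_now * up) + (1 - q) * payoff(value_now * down)
    return expected / risk_free                    # discounted expected payoff

# Disclosure is exercised only if its future value exceeds a threshold (the strike);
# otherwise the individual simply declines to disclose.
strike = 100.0
disclose_later = lambda v: max(v - strike, 0.0)

option_value = binomial_option_value(100.0, up=1.3, down=0.8,
                                     risk_free=1.02, payoff=disclose_later)
print(f"Value of retaining the disclosure option: {option_value:.2f}")  # ~12.94
# Unconsented-to collection forecloses that choice, so this option value is lost
# even before any concrete misuse of the data occurs.
```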

Finally, we see privacy-feedback take a larger systemic role. If technology and law fail to offer the choices necessary to protect privacy, we can give meta-feedback, changing the law to do better.


[1] See Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford Law Books, 2009); Daniel J. Solove, ‘A Taxonomy of Privacy’, 154 U. Pa. L. Rev. 477 (2006); Julie E. Cohen, ‘Examined Lives: Informational Privacy and the Subject as Object’, Stan. L. Rev. 52 (1999).

[2] See Paul Ohm, ‘Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization’, 57 UCLA L. Rev. 1701 (2010); Orin S. Kerr, ‘The Fourth Amendment and New Technologies: Constitutional Myths and the Case for Caution’, Mich. L. Rev. 102 (2003); Lawrence Lessig, ‘The Architecture of Privacy’, Vand. J. Ent. L. & Prac. 1 (1999); Jeffrey Rosen, The Unwanted Gaze: The Destruction of Privacy in America (Vintage, 2001).

[3] See Alessandro Acquisti and Jens Grossklags, ‘Privacy and Rationality in Individual Decision Making’, IEEE Security & Privacy 3 (2005): 1; A.M. McDonald and L.F. Cranor, ‘The Cost of Reading Privacy Policies’, ACM Transactions on Computer-Human Interaction 4 (2008): 3; C. Jolls, C.R. Sunstein and R. Thaler, ‘A Behavioral Approach to Law and Economics’, Stanford Law Review (1998); R.H. Thaler and C.R. Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness (Yale University Press, 2008).

[4] See James Gleick, The Information: A History, a Theory, a Flood (Pantheon, 2011); C.E. Shannon and W. Weaver, The Mathematical Theory of Communication (University of Illinois Press, 1962); Cynthia Dwork, ‘Differential Privacy’, Automata, Languages and Programming (2006).

[5] See Donald A. Norman, Emotional Design (Basic Books, 2004); J.W. Forrester, Industrial Dynamics (MIT Press, 1961); Charles Perrow, Normal Accidents: Living with High-Risk Technologies (Princeton University Press, 1984); H.A. Simon, The Sciences of the Artificial (MIT Press, 1996).

Ira Rubinstein and Nathan Good, Privacy by Design: A Counterfactual Analysis of Google and Facebook Privacy Incidents

Comment by: Danny Weitzner

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2128146

Workshop draft abstract:

Both U.S. and E.U. regulators have embraced privacy by design as a core element of their ongoing revision of current privacy laws. This idea of building in privacy from the outset is commonsensical, yet it remains rather vague—surely it should enhance consumer privacy, but what does it mean and how does it really work? In particular, is privacy by design simply a matter of developing products with greater purpose or intention (as opposed to haphazardly) or of following specific design principles? What do regulators expect to accomplish by encouraging firms to implement privacy by design, and how would this benefit consumers? This paper seeks to answer these questions by presenting case studies of high-profile privacy incidents involving Google and Facebook and analyzing them against a set of notional design principles.

Based on news accounts, company statements, and detailed regulatory reports, we explore these privacy incidents to determine whether the two firms might have avoided them if they had implemented privacy by design. As a prerequisite to this counterfactual analysis, we offer the first comprehensive evaluation and critique of existing approaches to privacy by design, including those put forward by regulators in Canada, the U.S., and Europe, by private sector firms, and by several non-profit privacy organizations. Despite the somewhat speculative nature of this “what if” analysis, we believe that it reveals the strengths and weaknesses of privacy by design and thereby helps inform ongoing regulatory debates.

This paper is in three parts. Part I is a case study of nine privacy incidents—three from Google (Gmail, StreetView and Buzz) and six from Facebook (News Feed, Beacon, Photo Tagging, Facebook Apps, Instant Personalization, and a number of recent changes in privacy policies and settings).

Part II identifies the design principles that the counterfactual analysis relies on. It proceeds in four steps: first, we argue that existing approaches to privacy by design are best understood as a hybrid with up to three components—Fair Information Practices (FIPs), accountability, and engineering (including design). Ironically, we find that design is the most neglected element of privacy by design. We also show that existing regulatory approaches pay insufficient attention to the business factors firms consider in making design decisions. Second, we point out the shortcomings of FIPs, especially the stripped-down versions that focus mainly on notice-and-choice. Third, we suggest that because all of the existing approaches to privacy by design incorporate FIPs and the definition of privacy on which it depends—namely, privacy as a form of individual control over personal information—they are ill-suited to addressing the privacy concerns associated with the voluntary disclosure of personal data in Web 2.0 services generally and especially in social networking services such as Facebook. Finally, we take a closer look at privacy engineering and especially at interface design. We find the latter is inspired not by theories of privacy as control but rather by an understanding of privacy in terms of social interaction, as developed in the 1970s by Irwin Altman, a social psychologist, and more recently by Helen Nissenbaum, a philosopher of technology.

Part III applies the design principles identified in Part II to the nine case studies, showing what went wrong with Google and Facebook and what they might have done differently. Specifically, we try to identify how better design practices might have assisted both firms in avoiding privacy violations and consumer harms, and we discuss the regulatory implications of our findings. We also offer some preliminary thoughts on how design practitioners should think about privacy by design in their everyday work.

Neil Richards, Social Reading and Intellectual Privacy

Comment by: Tommy Crocker

PLSC 2012

Workshop draft abstract:

This article deals with the overlooked privacy and media law implications of one of our most important new technologies – electronic reading.  I will examine the importance of privacy protections to the new modes of electronic reading, and the need for intellectual privacy protection of reading habits in order to ensure a robust and free culture of public debate.  I will show how recent radical developments in reading technologies and social media have unsettled many long-standing norms of intellectual privacy, and that legal and technological regulation of reader privacy in particular is necessary to preserve the vital civil liberties at stake. The article builds on my prior work on intellectual privacy, and is part of a larger project I have been working on for several years about the relationships between free speech and privacy, and how we should rethink our approach to these values in media and information law.

The generation of ideas frequently depends on access to the ideas of others who have come before, as intellectual property scholarship has shown in detail.  In a free society, access to new ideas (whether we agree with them or not) requires the ability to read widely and without constraint.  Oversight or interference with our reading habits can curtail our willingness to read freely and to experiment with ideas that others might think deviant, laughable, or embarrassing.  But the right to read has been underappreciated and under-theorized.

At the same time, the right to read is increasingly under threat in the modern age of networked communications and access to information. In terms of making information and ideas broadly available, the Internet has opened up new horizons of access, on a scale that is unprecedented in human history.  Moreover, the rise of laptops, smart phones, tablets, and electronic books means that more and more of what we read is being mediated by digital technologies.  But these technologies have a potential dark side: while they open up new opportunities to read and interact with new ideas, they also create records of reading habits and intellectual explorations.  For instance, Amazon maintains records not just of the books its users buy, but also the books they don’t, and the pages they browse.  The Kindle website allows anyone to view the most-read passages on Kindle readers from automatically collected data on reading habits.  And Facebook wants to become not just a media platform, but a social media platform, with all of our media consumption shared by default to everyone we know.

While companies sometimes claim they will respect the confidentiality of such records, in reality these records are subject to a very low level of legal protection. Moreover, other legal requirements such as copyright and child protection laws mandate a logic of surveillance that can become highly intrusive. Thus, the DMCA requires the unmasking of anonymous users in order to protect copyright holders from infringement. And in order to protect children from adult content, users of YouTube.com who wish to access content flagged as indecent must register with YouTube and create accounts that allow even greater surveillance and identification of their viewing habits. This creates the irony of greater intellectual privacy protections for users who read and view only non-objectionable content, and it chills the unfettered right to read anonymously.

In this article, I will show that many of the answers to these pressing problems of modern technology can be found in an unlikely source – in the professional and legal norms of librarians. Librarians were the original information stewards – a paper version of the Internet in the pre-electronic era. The norms of reader privacy and patron confidentiality developed by the American Library Association can point the way to a better understanding of reader privacy in the digital age – striking the balance between open access to ideas and the privacy necessary to engage with those ideas on our own terms. It may seem paradoxical that the solution to such modern problems can be found in such a dusty old source, but in reality this shows the timelessness and importance of the vital civil liberties that are at stake.

Charles Raab, Beyond the Privacy Paradigm: Implications for Regulating Surveillance

Comment by: Julie Cohen

PLSC 2012

Workshop draft abstract:

The conventional privacy paradigm can be criticised for its emphasis on an individualistic, rights-based conception of information privacy. It also sets the claims of privacy against the protection of the common good or the public interest, while proponents of the latter tend to set aside the claims of privacy. However, a growing but somewhat diverse body of useful sources exists for an alternative construction of privacy, one that overcomes an ‘individual versus society’ approach by emphasising privacy’s social and public-interest value. The proposed paper aims to take further steps in this direction, building upon the work of other scholars and upon the author’s recent and forthcoming writing. It argues that privacy can be conceptualised in a different way, emphasising the social values of privacy, in order to overcome deficiencies in theory and regulatory practice. Whereas conventional views of individual privacy and of the social and public interest often ignore the complexity of people’s dynamic involvement in public and private relationships, the conceptual alternative is grounded not so much in rights discourse as in social-scientific analysis of the multi-level social relationships in which individuals and groups engage. Insights can be gained by considering the effect of surveillance practices not only on the individual’s privacy understood conventionally, but also on her relationships within groups and categories of persons as collective data subjects. The paper looks at some of the policy and regulatory consequences of this conceptualisation. These include expanding privacy impact assessment – a current regulatory measure – into ways of assessing surveillance’s impact on these further values and collective data subjects. This is not to argue that individual privacy’s importance as a human right should be disregarded, but that it is insufficient for a comprehensive regulation of surveillance in the ‘information society’.

Helen Nissenbaum & Andrew Selbst, Contextual Expectations of Privacy

Comment by: James Grimmelmann

PLSC 2012

Workshop draft abstract:

The last decade of privacy scholarship is replete with theories of privacy that reject absolute binaries such as secret/not secret or inside/outside, instead favoring approaches that take context into account to varying degrees. Fourth Amendment doctrine has not caught up with theory, however, and courts continue to employ discredited binaries to justify often contradictory conclusions. At the same time, while some of the cases reveal the influence of contextual thinking, courts have rarely included an explicit commitment to context in their opinions. We believe that such a commitment would improve both the internal consistency of Fourth Amendment doctrine and the outcomes of individual cases.

The theory of contextual integrity, which characterizes a right to privacy as the preservation of expected information flows within a given context, offers a framework for injecting context into the conversation. Grounded, as it is, in context-based normative expectations, the theory offers a useful interpretive framework for Fourth Amendment search doctrine. This paper seeks to reexamine the meaning of a “reasonable expectation of privacy” under the theory of contextual integrity, and in doing so accomplish three goals: 1) create a picture of Fourth Amendment doctrine if the Katz test had always been interpreted this way, 2) demonstrate that contextual integrity can draw connections between seemingly disjointed doctrines within the Fourth Amendment, and 3) illustrate the mechanism of applying contextual integrity to a Fourth Amendment search case, with the intent of helping both theorists and practitioners in future cases, particularly those involving technology.

Jules Polonetsky & Omer Tene, Privacy in the Age of Big Data: A Time for Big Decisions

Comment by: Ed Felten

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2149364

Workshop draft abstract:

We live in an age of “big data”. Data has become the raw material of production, a new source of immense economic and social value. Advances in data mining and analytics and the massive increase in computing power and data storage capacity have expanded by orders of magnitude the scope of information available to businesses, government and individuals.[1] In addition, the increasing number of people, devices, and sensors that are now connected by digital networks has revolutionized the ability to generate, communicate, share, and access data. Data creates enormous value for the world economy, driving innovation, productivity, efficiency and growth.[2] At the same time, the “data deluge” presents privacy concerns that could stir a regulatory backlash, dampening the data economy and stifling innovation.

Privacy advocates and data regulators increasingly decry the era of big data as they observe the growing ubiquity of data collection and increasingly robust uses of data enabled by powerful processors and unlimited storage. Researchers, businesses and entrepreneurs equally vehemently point to concrete or anticipated innovations that may be dependent on the default collection of large data sets. In order to craft a balance between beneficial uses of data and individual privacy, policymakers must address some of the most fundamental concepts of privacy law, including the definition of “personally identifiable information”, the role of consent, and the principles of purpose limitation and data minimization.

In our paper we intend to develop a model where the benefits of data for businesses and researchers are balanced with individual privacy rights. Such a model would help determine whether processing could be based on legitimate business interest or subject to individual consent and whether consent must be structured as opt-in or opt-out. In doing so, we will address questions such as: Is informed consent always the right standard for data collection? How should law deal with uses of data that may be beneficial to society or to individuals when individuals may decline to consent to those uses? Are there uses that provide high value and minimal risk where the legitimacy of processing may be assumed? What formula determines whether data value trumps individual consent?

Our paper draws on literature discussing behavioral economics, de-identification techniques, and consent models, to seek a solution to the big data quandary. Such a solution must enable privacy law to adapt to the changing market and technological realities without dampening innovation or economic efficiency.


[1] Kenneth Cukier, Data, data everywhere, The Economist, February 25, 2010, http://www.economist.com/node/15557443.

[2] McKinsey, Big data: The next frontier for innovation, competition, and productivity, June 2011, http://www.mckinsey.com/Insights/MGI/Research/Technology_and_Innovation/Big_data_The_next_frontier_for_innovation.

Paul Ohm, Branding Privacy

Comment by: Deven Desai

PLSC 2012

Workshop draft abstract:

This Article focuses on the problem of what James Grimmelmann has called the “privacy lurch,”[1] which I define as an abrupt change made to the way a company handles data about individuals. Two prominent examples include Google’s decision in early 2012 to tear down the walls that once separated data about users collected from its different services and Facebook’s decisions in 2009 and 2010 to expose more user profile information to the public web by default than it had in the past. Privacy lurches disrupt long-settled user expectations and undermine claims that companies protect privacy by providing adequate notice and choice. They expose users to much more risk to their individual privacy than the users might have anticipated or desired, assuming they are paying attention at all. Given the special and significant problems associated with privacy lurches, this Article calls on regulators to seek creative solutions to address them.

But even though privacy lurches lead to significant risks of harm, some might argue we should do nothing to limit them. Privacy lurches are the product of a dynamic marketplace for online goods and services. What I call a lurch, the media instead tends to mythologize as a “pivot,” a welcome shift in a company’s business model, celebrated as an example of the nimble dynamism of entrepreneurs that has become a hallmark of our information economy. Before we intervene to tamp down the harms of privacy lurches, we need to consider what we might give up in return.

Weighing the advantages of the dynamic marketplace against the harms of privacy lurches, this Article prescribes a new form of mandatory notice and choice. To breathe a little life into the usually denigrated options of notice and choice, this Article looks to the scholarship of trademark law, representing a novel integration of two very important but until now almost never connected areas of information law. This bridge is long overdue: the theory of trademark law centers on the very same information quality and consumer protection concerns that animate notice-and-choice debates in privacy law. These theories describe the important informational power of trademarks (and service marks and, more generally, brands) to signal quality and goodwill to consumers concisely and efficiently. Trademark scholars also describe how brands can serve to punish and warn, helping consumers recognize a company with a track record of shoddy practices or weak attention to consumer protection.

The central recommendation of this Article is that lawmakers and regulators should force almost every company that handles customer information to associate its brand name with a specified set of core privacy commitments.  The name, “Facebook,” for example, should be inextricably bound to that company’s specific, fundamental promises about the amount of information it collects and the uses to which it puts that information. If the company chooses someday to depart from these initial core privacy commitments, it must be required to use a new name with its modified service, albeit perhaps one associated with the old name, such as “Facebook Plus” or “Facebook Enhanced.”

Although this solution is novel, it is far from radical when one considers how well it is supported by the theoretical underpinnings of both privacy law and trademark law. It builds on the work of privacy scholars who have looked to consumer protection law for guidance, representing another important intradisciplinary bridge, this one between privacy law and product safety law. Just as companies selling inherently dangerous products are obligated to attach warning labels, so too should companies shifting to inherently dangerous privacy practices be required to display warning labels. And the spot at the top of every Internet web page listing the brand name is arguably the only space available for an effective online warning label. A “branded privacy” solution is also well-supported by trademark theory, which focuses on giving consumers the tools they need to accurately and efficiently associate trademarks with the consistent qualities of a service in ways that privacy lurches disregard.

At the same time, because this solution sets the conditions of privacy lurches rather than prohibiting them outright, and because it restricts mandatory rebranding to situations involving a narrow class of privacy promises, it leaves room for market actors to innovate, striking a proper balance between the positive aspects of dynamism and the negative harms of privacy lurches. Companies will be free to evolve and adapt their practices in any way that does not tread upon the set of core privacy commitments, but they can change a core commitment only by changing their brand. This rule will act like a brake, forcing companies to engage in more internal deliberation than they do today about the class of choices consumers care about most, without preventing dynamism when it is unrelated to those choices or when the value of dynamism is high. And when a company does choose to modify a core privacy commitment, its new brand will send a clear, unambiguous signal to consumers and privacy watchers that something important has changed, directly addressing the information quality problems that plague notice-and-choice regimes in ways that improve upon prior suggestions.


[1] James Grimmelmann, Saving Facebook, 94 Iowa L. Rev. 1137 (2009).

Andrea M. Matwyshyn, Repossessing the Disembodied Self: Rolling Privacy and Secured Transactions

Comment by: Diane Zimmerman

PLSC 2012

Workshop draft abstract:

Consumer privacy in commercial contexts is primarily governed by rolling form contracts that are usually amendable in the sole discretion of the drafter. As these contracts have become longer and less readable over time, and as consumers have become progressively more comfortable with the informational disembodiment of the self, concerns over fairness in privacy contracting grow. These concerns loom particularly large when a company enters bankruptcy: privacy contracts/terms of use may include a provision that allows for the disposition of consumer data in bankruptcy in a manner unfettered by privacy promises. Alternatively, in the absence of FTC intervention, a bankruptcy court may attempt to facilitate the sale of database assets by simply setting aside the privacy contracts with consumers. Engaging with the contract literature, secured transactions literature and bankruptcy literature, this article argues that as progressively more sensitive consumer information becomes controlled by private companies, a fundamental tension arises: databases frequently become the primary assets of companies, yet their collateralization, repossession and disposition processes are uncertain as a matter of law. Equally uncertain is the extent of companies’ continuing privacy obligations to consumers in bankruptcy. This dynamic pushes borrowers, lenders, courts, consumers and the FTC into an unsustainable relationship in the innovation lifecycle. The article concludes by proposing an amendment to the current law of secured transactions in databases that balances the privacy interests of consumers with enabling information entrepreneurship and capital formation.

Torin Monahan & Priscilla M. Regan, Fusion Centers Information Sharing: Revisiting Reliance on Suspicious Activity Reports

Comment by: Ron Lee

PLSC 2012

Workshop draft abstract:

Interviews with fusion center officials, conducted as part of our NSF-funded research, reveal that Suspicious Activity Reports (SARs) are often mentioned as a way of collecting and sharing information. Law enforcement has been using some form of SARs for decades, collected through a variety of mechanisms including hotlines, 911 calls, neighborhood watches, schools and community centers, etc. The value and reliability of such reporting have often been questioned, especially as their use expands in ways that will likely result in an overload of information of dubious quality requiring a large investment of time to investigate (Nojeim 2009; Randol 2009; ACLU 2010). Despite these concerns, SARs have persisted as a tool of community-oriented policing and as a practical tool for collecting information and raising public awareness (Steiner 2010). Since its creation, DHS has adopted SARs in its counter-terrorism activities, the newest version being Secretary Napolitano’s “If You See Something, Say Something” campaign.

Our interviews indicate that SARs reporting is labor-intensive and generally does not yield useful information. An official at one state-level fusion center stated, “A lot of our activity on the counter-terrorism side is responding to suspicious activity reports…I would say an overwhelming majority of the reports that we get are, once we do a little bit of checking, we can determine that they, that the person had a reason to be doing what they were doing – and those get closed out and we don’t pursue those any further.” An official at another center estimated that the center received “in the realm of four hundred to five hundred SARs a year…the SARs are not necessarily all terrorism, but some are.”

Notwithstanding the widely recognized limitations of SARs, they do continue to be used. This paper will investigate why they continue to be used in intelligence gathering, how and when they are used in fusion centers, what the policy landscape for their use currently is, what revisions to that landscape might be necessary, and what intelligence-gathering alternatives to SARs exist.

Laura Moy & Amanda Conley, Paying the Wealthy for Being Wealthy: The Hidden Costs of Behavioral Marketing

Comment by: Marc Groman

PLSC 2012

Workshop draft abstract:

Despite a growing awareness that third parties gather, store, and even sell their personal information, many consumers continue to share their information with companies that collect it by shopping online or signing up for loyalty card programs in brick and mortar stores. Insofar as this results from individuals affirmatively weighing the cost of ceding control of their personal information against the benefit of having that information conveniently pre-stored in retailers’ databases or the benefit of receiving advertisements targeted at their precise interests, behavioral advertising appears desirable, holding the promise of eventual perfect market efficiency. But even if individual consumers consciously consent to having their personal information recorded and used to provide them with targeted advertising, this information collection—particularly in the context of loyalty programs—is not cost-free. Our paper seeks to illuminate some of the hidden costs to consumers of information collection associated with individualized targeting.

The costs to consumers of highly targeted marketing are likely borne disproportionately by those with the least disposable income. Grocery store loyalty programs, for example, are designed to identify and reward the wealthiest shoppers—the small minority of customers responsible for a majority of the store’s revenue—at the expense of those at the lowest end of the income spectrum.

Not so long ago, coupons were distributed primarily in newspapers and circulars, and consumers could clip and organize these coupons if they felt that was a valuable use of their time. Today, by contrast, wealthier consumers receive targeted discounts on the products they purchase most, while the coupon clippers of the past who visit multiple stores and purchase low-cost items receive little or no benefit from stores that now view them as valueless or even revenue-decreasing and undesirable. This flip on traditional price discrimination—overcharging the poor while giving discounts to the wealthy—is facilitated by information collectors’ relatively newfound ability to extract and use individual-level consumer data.

Not only does this reverse price discrimination reinforce existing income inequalities; it may even exacerbate them. Many stores mark up their prices above the MSRP in order to create the impression that they are providing a benefit to their (wealthy) customers by giving them a “discount.” Wealthy customers’ “discounts,” however, must be paid for somehow. To avoid alienating their most profitable customers, rational stores shift this cost onto the shoulders of non-wealthy customers in the form of inflated prices.
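
The cross-subsidy claim above is, at bottom, simple arithmetic; the following hypothetical sketch (not from the paper, with invented numbers) shows how a loyalty “discount” funded by a general markup leaves non-members paying more than a no-loyalty baseline while targeted members pay less.

```python
# Hypothetical cross-subsidy arithmetic for a loyalty-card "discount".
base_price = 10.00        # assumed shelf price if the store used no loyalty pricing
markup = 0.10             # store marks all shelf prices up 10% above that baseline
loyalty_discount = 0.15   # targeted "discount" offered to (wealthier) cardholders

shelf_price = base_price * (1 + markup)              # what non-members pay: 11.00
member_price = shelf_price * (1 - loyalty_discount)  # what targeted members pay: 9.35

print(f"Baseline price:        ${base_price:.2f}")
print(f"Non-member pays:       ${shelf_price:.2f}")
print(f"Targeted member pays:  ${member_price:.2f}")
# The member's saving below the baseline is financed by the inflated shelf price
# that everyone else pays, which is the reverse price discrimination described above.
```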

Drawing on Helen Nissenbaum’s theory of contextual integrity, we suggest that these uses of customer information violate traditional expectations of all who sign up for loyalty programs, regardless of income level. In addition, they place the power of correcting this injustice entirely in the hands of the wealthy: only if the most favored consumers choose to opt out of having their personal information collected will non-wealthy consumers cease to be disadvantaged by behavioral marketing.