Archives

Mark MacCarthy, Social Networks: Privacy Externalities and Public Policy

Comment by: Anupam Chander

PLSC 2011

Workshop draft abstract:

In the apocryphal Fitzgerald–Hemingway anecdote, Fitzgerald says the rich are very different from you and me; Hemingway responds that the rich have more money. The exchange is relevant to social networks and privacy. Are social networks a disruptive technology that challenges existing thinking on privacy, or just more of the same? The premise of this paper is that social networks are something radically new, and that this difference should prompt us to rethink the framework we use to structure public policy toward privacy.

From this perspective, social networks represent another example of the connection between technical development and the evolution of privacy policy.  Warren and Brandeis reacted to the widespread use of the snapshot camera in journalism by developing a tort-based right of privacy: the right to be let alone.  The widespread use of mainframe computers in the 1960s by large private and public institutions to compile and process information about individuals led to the ex ante rules embodied in the fair information practices framework.

Social networks present a similar technological breakthrough that forces us to rethink privacy assumptions.  Unlike the Internet at large, they create and thrive upon a culture of identified sharing of information.  People who use social network sites want to be known by others.  Providing personal information on an ongoing basis to a limited group of other people is the whole point of a social network.  The privacy challenge is this: a technology that depends for its highest and best uses on the exchange of information is ill-suited to a privacy norm of standardized before-the-fact limitations on information exchange.

This paper sets the stage for a discussion of public policy on privacy toward social networks by examining several accounts of social network privacy in the literature. Nissenbaum’s contextualist theory holds that privacy is the right to the appropriate flow of information, where appropriateness is defined by the context in which the information is created and exchanged.  One way to apply this approach to new contexts where information norms are not yet well developed is to assimilate the new contexts to old ones.  This is the tack Nissenbaum takes when she denies that social networks are genuinely new phenomena and tries to model them as a medium of information exchange like the telephone system.  Entrenched norms from this context of ordinary life apply, she says, and this explains the sense of outrage when information meant for a network of friends is used by recruiters to evaluate job candidates, or by aggregation services such as Rapleaf that generate profiles from social network information and make them available for marketing or for eligibility decisions in insurance, credit or employment.

Strahilevitz approaches the public–private question in privacy tort with the apparatus of the sociological theory of social networks, which studies how information flows among groups of loosely or tightly connected individuals.  He concludes that a person should have a reasonable expectation of privacy when there is a low probability that information will flow beyond a limited subset of his friends.  If someone causes information to move beyond this group, Strahilevitz contends, then he should be liable for the privacy tort of public disclosure of private information.  This attempt to put some structure into the idea of privacy in public can be applied to social networks as new technologically based institutions.  To the extent that there is a low probability that information disclosed by a social network user will travel beyond the network of friends for whom it was intended, these users have a legitimate expectation that others will not cause the information to cross these boundaries. Under this approach, uses of social network information in employment or data aggregation contexts would be surprising and would violate these legitimate expectations of privacy.

Lipford and Hull et al. focus on the need for users to have a sense of how visible their information really is on social network sites.  Once users can see, with a tool such as Audience View, how others see their information, they are in a position to determine how much they would like to share.  Empowering user control in this way is the key to keeping information flows within the contextual norms for social networks, without users having to assume that all information they post will be public to everyone and can be used for any purpose.

These perspectives have limitations.  Nissenbaum misses the extent to which the new technology of sharing creates a genuinely new context, one where the simple extrapolation of old norms is insufficient to respond to its novelty.  Norms for information flow in social networks are under construction; they are contested terrain, not areas where privacy norms are completely specified and generally accepted.  What the information rules for social networks should be cannot be resolved by appeal to widely shared intuitions about social norms; these questions require further normative debate.

Strahilevitz has an answer to the question of what rules should apply, but it misses a key dimension.  By focusing solely on the factual question of the actual probability of information flowing out of a social network context to other contexts, he mistakenly allows the normative privacy question to be decided by those who can create facts on the ground by appropriating social network information.  People might believe and expect that their information will stay in the social network context, but, despite what they think, the probability is really quite high that information made available on a social network site will migrate far beyond its original context. Under Strahilevitz’s theory, this fact would make the privacy expectations of social network users unreasonable. The normative dispute about privacy rules for social networks cannot be resolved by appeal to the facts of information flow.

The attempt by Lipford and Hull et al. to provide greater user control avoids the mistakes of appealing to old norms in a new context and the reduction of the normative question to a manipulable factual one.  It reaffirms the idea that consent is at the heart of privacy and urges the development of more visible and transparent user controls.  If people don’t want their information spreading beyond their immediate social network of friends, they should set their privacy controls to implement this preference.

This approach is probably the dominant public policy approach to privacy on social networks. Increased user control is the remedy most often recommended, and public policy demands from regulators, legislators and privacy advocates have focused primarily on giving users adequate control over social network information.

The limitations of individual user control as the primary regulator of privacy have been widely discussed.  As applied to social networks, these concerns can be summarized as follows.  A blizzard of privacy choices in a social network context simply encourages passivity.  The number and type of choices will inevitably be too granular for those who care only to adopt the most restrictive or the most open of controls.  Alternatively, the choices will not be granular enough for others, preventing them from selectively revealing information to some while withholding it from others. Moreover, an extraordinary level of knowledge is needed to evaluate what information is available to whom.  App developers, aggregation services, and data brokers can obtain supposedly concealed social network information in ways that surprise both users and operators of social network sites.  The use of this information is opaque as well, so there is little guidance available to individuals as to whether it is a good idea or a bad idea to share it. Inevitably, the social network operator retains discretion over some information that users cannot control.  Finally, users cannot expect to keep information hidden when they are under suspicion of activities of public concern, such as threats to national security, cyberbullying, or impersonation; surveillance of social network activity for these purposes has to be taken as a given.  These factors all mean that user control can never be complete, or completely effective at preventing privacy harms.

I approach the question of privacy in social networks through the lens of privacy externalities, and from this angle it is clear that social networks present a substantial challenge to the informed consent ideal of privacy regulation.  Information one person reveals often reveals information about others.  This is clearest in eligibility contexts, where, for instance, non-smokers can reveal the status of smokers by voluntarily answering optional insurance company questions about their smoking habits. Social networks are a major source of privacy externalities because networks of friends tend to have certain features in common.  People with similar sexual orientations, creditworthiness, political beliefs, or racial identities tend to group together.  As a result, researchers, marketers and others can predict some characteristics of people from the characteristics of their network friends.  If your social network friends are gay, you probably are too.  If they are deadbeats, likely you are as well.  These indirect inferences about people can then be used for a variety of purposes, most of them, so far, unregulated.  These externalities can sometimes help people and sometimes hurt them, but they have to be taken into account when assessing privacy in a social network context.
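
To make the inference mechanism concrete, here is a minimal sketch of the kind of indirect prediction described above; the friend lists, attribute labels, majority-vote rule, and 60% threshold are all hypothetical illustrations, not any actual marketer's method.

```python
# Toy illustration of homophily-based inference: predicting an unknown
# user's attribute from the attributes their network friends have revealed.
# All names, labels, and the 60% threshold are hypothetical.

from collections import Counter

# Attributes voluntarily disclosed by some users (the "sharers").
disclosed = {
    "alice": "smoker",
    "bob": "smoker",
    "carol": "non-smoker",
    "dave": "smoker",
}

# Friend lists, including a user ("erin") who disclosed nothing.
friends = {"erin": ["alice", "bob", "carol", "dave"]}

def infer_attribute(user, threshold=0.6):
    """Guess a user's attribute by majority vote among disclosing friends."""
    votes = Counter(disclosed[f] for f in friends[user] if f in disclosed)
    if not votes:
        return None
    label, count = votes.most_common(1)[0]
    share = count / sum(votes.values())
    # Only infer when the majority is strong enough.
    return label if share >= threshold else None

print(infer_attribute("erin"))  # -> 'smoker', though erin revealed nothing
```

The point of the sketch is that erin's status is inferred entirely from her friends' voluntary disclosures; she revealed nothing herself, which is precisely the externality at issue.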

Privacy externalities provide an additional reason why a focus on individual user control is misplaced.  Even when one person is perfectly willing to release information about himself, his decision has implications for others that he does not take into account when deciding, individually, what is in his best interest to reveal. External privacy harms can be inflicted on people whose identity or essential information is revealed by the actions of others.  In these cases, too much information has been released. Alternatively, external privacy benefits can be conferred on people who free ride on the information revealed by others.  In this case, too little information has been released.

The article draws together the existing literature on these externalities with a view toward demonstrating that they are frequent and pervasive in the social network context.  Together with the other concerns that have been expressed regarding an overreliance on user controls, they warrant a revision of the public policy perspective that privileges user control over other, more effective ways to protect privacy on social networks.

One implication of this approach is that standardized limitations on the collection of information, whether through company policies, industry self-regulatory codes or legislation, are the wrong way to go.  In the social network context, more information sharing often means greater benefits.  Many of these benefits are externalities in the sense that people other than the individuals who revealed the information benefit from its disclosure.  Collection restrictions in the social network context mean default rules on the sharing of information that might unnecessarily restrict the growth of new, innovative functions and benefits of social networks.

For example, sharing price and quality information about various goods and services among similarly situated network friends benefits them. This information could be aggregated, analyzed, combined with other information, and made available to other network users.  The value of this service to its consumers increases more than linearly as the information on which it is based increases. The problem with leaving the decision to share this information entirely to individuals is that the amount of available information will be too small.  People will not factor in the benefits to others of having their information available to outside parties for analysis, and so they will choose to withhold when the socially optimal choice would be to share.
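
This under-provision argument can be illustrated with a toy welfare calculation; the quadratic value function, the fixed privacy cost, and the specific numbers below are illustrative assumptions, not a model taken from the paper.

```python
# Toy welfare model of the sharing decision. Hypothetical assumptions:
# the aggregated price/quality service is worth value(n) = n**2 when n
# people contribute (superlinear growth), the value is shared equally
# among contributors, and each contributor bears a fixed privacy cost.

PRIVACY_COST = 15  # hypothetical per-person cost of disclosing

def total_value(n):
    return n ** 2

def private_net_benefit(n):
    # The n-th sharer's own payoff: an equal share of total value,
    # minus the privacy cost. Benefits to others are ignored.
    return total_value(n) / n - PRIVACY_COST

def social_net_benefit(n):
    # Society's payoff from the n-th sharer: the full increment in
    # total value, minus that sharer's privacy cost.
    return total_value(n) - total_value(n - 1) - PRIVACY_COST

for n in (5, 10, 20):
    print(n, private_net_benefit(n), social_net_benefit(n))
# At n=10 the private payoff is -5 while the social payoff is +4:
# an individually rational user withholds where sharing is socially optimal.
```

The wedge at n = 10 is the externality in miniature: the individual calculus ignores the increment of value conferred on everyone else, so user control alone under-supplies sharing.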

This focus on the beneficial uses of information sharing in a social network context has to be balanced by an assessment of the harmful uses of information exchanged on social networks.  If these harmful uses are not recognized and controlled, then network users will refuse to share information as a way to protect themselves from these possible harms.  The quality and quantity of information traded in this context will shrink and the full value of these new innovative tools will not be realized.

The focus has to be on the use of the information, not on what information is collected. Public policy in the form of legislation or regulation could develop prohibitions or restrictions on the harmful use of social network information.  We need an open debate about whether social network information should be used, for example, for eligibility decisions involving credit, employment, and insurance, or for setting individualized prices, terms or conditions for products or services.  This need not apply just to operators of social networks; scraping of social networks by outside parties, or disclosure by application developers of information used for these purposes, might be prohibited or restricted. Legislation of this type is already under consideration in Germany and California in the form of bills that would prohibit the use of Facebook information for employment screening.  Public policy might also develop the concept of publicly beneficial uses of information and restrict or prohibit user control over these uses.

As background to this public policy approach to social network privacy, the paper develops and applies an unfairness model of privacy regulation under which uses of information fall into one of three categories: unfair, publicly beneficial, and intermediate.  The unfair uses can be prohibited or subjected to strong opt-in defaults that disfavor them; the publicly beneficial uses should not be subject to easy-to-use user controls, because the external benefits from sharing are so substantial that individual failures to participate would dissipate them. The intermediate uses can be subjected to appropriate and well-tailored user controls.  The paper provides examples of each of these categories in a social network context.

In Part I, I review the literature on privacy and social networks, including discussions by Nissenbaum, Strahilevitz, and Lipford and Hull et al., and explore the limitations of their approaches.  Part II sets out the concept of privacy externalities in the social network context, exploring the ways in which information revealed by some people can reveal information about others.  It then discusses examples of positive and negative privacy externalities, where indirect inferences about people can in some cases help them and in other cases hurt them.  Part III sets out the unfairness framework for regulating privacy and suggests various specific prohibitions and restrictions that might be considered in privacy legislation.  It also discusses how user controls can fit into this framework as a way to handle intermediate uses that are neither unfair nor publicly beneficial. Part IV summarizes the discussion and concludes with specific suggestions for further research in this area.

Andrea Matwyshyn, Digital Childhood

Comment by: Joel Reidenberg

PLSC 2011

Workshop draft abstract:

The Children’s Online Privacy Protection Act suffers from numerous shortcomings. Perhaps the most notable of these deficiencies is the lack of statutory coverage for children over the age of thirteen but below the legal age of contractual capacity.  This article argues that, as a matter of contract theory and doctrine, children under the legal age of contractual capacity retain the right to ask that all contracts relating to their conduct online be deemed voidable.  As such, when a minor asks that an agreement (for a non-necessity) be set aside on the basis of lack of capacity, the other party can no longer derive benefit from the consideration paid by the minor, including her information.  A duty of deletion then falls on the holders of the minor’s information as a matter of contract law.

Maritza Johnson, Tara Whalen & Steven M. Bellovin, The Failure of Online Social Network Privacy Settings II – Policy Implications

Comment by: Aaron Burstein

PLSC 2011

Workshop draft abstract:

The failure of today’s privacy controls has a number of legal and policy implications.  One concerns the Fourth Amendment.  Arguably, people have a reasonable expectation of privacy in data they have marked “private” on Facebook; conversely, such an expectation is not reasonable if they have made it available to Facebook’s 500,000,000 users.  Our results, though, show that people often cannot carry out their intentions, and that they are unaware of this fact.  Given this, we suggest that a broader view of a reasonable expectation of privacy is necessary.

There are also implications for privacy regulations.  In jurisdictions that regulate collection of data (e.g., Canada and the EU), the existence of access controls could be viewed as a consent mechanism: a user who has marked an item as publicly accessible has voluntarily waived privacy rights.  We assert that such a waiver is not a knowing one, in that people cannot carry out their intentions.

Michelle Madejski, Maritza Johnson & Steven M. Bellovin, A Study of Privacy Setting Errors in Online Social Networks

Comment by: Aaron Burstein

PLSC 2011

Workshop draft abstract:

Increasingly, people are sharing sensitive personal information via online social networks (OSNs). While such networks do permit users to control what they share with whom, access control policies are notoriously difficult to configure correctly; this raises the question of whether users’ privacy settings match their intentions. We present the results of an empirical evaluation that measures privacy attitudes and sharing intentions and compares these against the actual privacy settings on Facebook. Our results indicate a serious mismatch: every one of the 65 participants in our study had at least one sharing violation. In other words, OSN users are sharing more information than they wish to. Furthermore, a majority of users cannot or will not fix such errors. We conclude that the current approach to privacy settings is fundamentally flawed and cannot be fixed; a different approach is needed. We present recommendations to ameliorate the current problems, as well as suggestions for future research.
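
As a concrete illustration of what such a mismatch measurement might look like, here is a minimal sketch that scores stated intentions against observed settings; the item names, audience tiers, and violation rule are hypothetical stand-ins for the study's actual instruments.

```python
# Toy scoring of "sharing violations": an item counts as a violation when
# its actual visibility differs from the user's stated intention.
# Items, audience tiers, and sample data are hypothetical.

OPENNESS = ["only_me", "friends", "everyone"]  # least to most visible

intended = {  # what the participant says should happen
    "phone_number": "only_me",
    "vacation_photos": "friends",
    "political_views": "friends",
}

actual = {  # what the profile's settings actually do
    "phone_number": "friends",
    "vacation_photos": "everyone",
    "political_views": "friends",
}

def violations(intended, actual):
    out = []
    for item, want in intended.items():
        have = actual[item]
        if have != want:
            kind = ("over-sharing"
                    if OPENNESS.index(have) > OPENNESS.index(want)
                    else "under-sharing")
            out.append((item, want, have, kind))
    return out

print(violations(intended, actual))
# -> phone_number and vacation_photos are both over-shared; a participant
#    with any such tuple has at least one sharing violation.
```

Under a rule like this, the study's headline finding is that every participant's profile produced a non-empty violation list.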

Pauline T. Kim, Employee Privacy and Speech: Pushing the Boundaries of the Modern Employment Relationship

Comment by: Anne T. McKenna

PLSC 2011

Workshop draft abstract:

Employee privacy and speech have always been contested terrain.  Employees have asserted their rights to keep certain matters private, or to speak without fear of retaliation.  Employers have argued that intrusions on employees’ privacy or restrictions on their speech are justified by legitimate business interests and their need to manage the workplace.  When it comes to privacy, the law strikes a rough accommodation between these competing interests by distinguishing between personal life and work life.  Aspects of an employee’s personal life are more likely to be protected against employer scrutiny or retaliation.  Conversely, the more closely an employee’s activities are connected to her job duties or the workplace, the less likely they are to be protected.  The doctrine governing employee speech exhibits a different pattern.  Although off-duty speech may be incidentally protected as an aspect of off-duty conduct, the law’s particular focus is on protecting certain types of socially valued speech, such as collective speech about working conditions or speech as private citizens that contributes to public debate.   Despite the differing doctrinal frameworks, these two employee interests—privacy and speech—are closely interrelated, and work together to protect not only individual dignitary interests but broader social concerns as well.

Recent changes in the nature of work and in technology have significantly shifted the balance between employees’ privacy interests and employers’ managerial concerns.  Together, these changes are raising the incentives and lowering the costs for employers to intrude on areas employees have claimed are private.  In addition, they are increasingly blurring the line between home and work, between off-duty and on-duty activities.  In the face of these changes, the traditional doctrinal frameworks used to analyze employee privacy claims are becoming obsolete, because they rely on the existence of established social norms of privacy and on drawing a distinction between personal and work life.  The net effect of all these developments is that the traditional doctrinal forms are increasingly inadequate to delineate the boundaries of employees’ personal activities that should be free from employer scrutiny.  This development, in turn, has left employee speech rights more vulnerable.  Current forms of regulation of employee privacy, which typically rely on after-the-fact challenges by individual plaintiffs (e.g., tort or constitutional claims for invasion of privacy), are unlikely to be successful in addressing emerging privacy challenges in the workplace.  In the final section, I review alternative types of regulatory responses that might be used to address employee privacy concerns, and briefly discuss their advantages and limitations.

Orin Kerr, A Substitution-Effects Theory of the Fourth Amendment

Comment by: Deven Desai

PLSC 2011

Workshop draft abstract:

Fourth Amendment law is often considered a theoretical embarrassment. The law consists of dozens of rules for very specific situations that seem to lack a coherent explanation. Constitutional protection varies dramatically based on seemingly arcane distinctions.

This Article introduces a new theory that explains and justifies both the structure and content of Fourth Amendment rules: The theory of equilibrium-adjustment. The theory of equilibrium-adjustment posits that the Supreme Court adjusts the scope of protection in response to new facts in order to restore the status quo level of protection.  When changing technology or social practice expands government power, the Supreme Court tightens Fourth Amendment protection; when it threatens government power, the Supreme Court loosens constitutional protection.  Existing Fourth Amendment law therefore reflects many decades of equilibrium-adjustment as facts change.  This simple argument explains a wide range of puzzling Fourth Amendment doctrines including the automobile exception; rules on using sense-enhancing devices; the decline of the “mere evidence” rule; how the Fourth Amendment applies to the telephone network; undercover investigations; the law of aerial surveillance; rules for subpoenas; and the special Fourth Amendment protection for the home.

The Article then offers a normative defense of equilibrium-adjustment. Equilibrium-adjustment maintains interpretive fidelity while permitting Fourth Amendment law to respond to changing facts.  Its wide appeal and its focus on deviations from the status quo facilitate coherent decisionmaking amidst empirical uncertainty, yet also give Fourth Amendment law significant stability.  The Article concludes by arguing that judicial delay is an important precondition to successful equilibrium-adjustment.

Woodrow Hartzog & Frederic Stutzman, The Case for Practical Obscurity

Comment by: Gaia Bernstein

PLSC 2011

Workshop draft abstract:

Courts have consistently misapplied the concept of practical obscurity online.  Practical obscurity holds that information that is practically hidden, but generally accessible, should be treated as functionally private.  Critics of practical obscurity argue that publicly accessible information cannot be classified as private.  Courts mistakenly agree, holding that the unfettered ability of any hypothetical individual to find and access information on the Internet renders that information public, or ineligible for privacy protection.  This article attempts to correct these misconceptions.

We propose using practical obscurity as a reliable metric for describing the privacy of online information.  Obscurity of online information is not the exception: all information online is, to some extent, hidden.  Therefore, a court’s analysis of what is “public” and “private” under various legal standards should not hinge on a dichotomous question of accessibility, but rather on a determination of the degree of obscurity. Courts have created an arbitrary definition of “public information” by relying on the easily identified lines drawn when users employ passwords.  This understanding of privacy is out of line with normative expectations, as demonstrated by empirical research.

The mistreatment of practical obscurity is a result of courts’ reliance on technology to define what information is public.  For example, courts’ reliance on passwords as the test of privacy entrenches a technologically defined understanding of privacy, in which password-restricted disclosures are private and all other disclosures online are public. This assumption is untenable for the Internet. Our lives are increasingly mediated through communication technologies; this interweaving of our online and offline lives has important implications for the ways we communicate, negotiate trust and gain confidence.  Individuals employ a number of obfuscating techniques to create practical obscurity, such as name variants, multiple profiles or identities, search invisibility and simple contextual separation when posting content.  A huge portion of the public Internet, the so-called “Deep Web,” is completely hidden from search engines and accessible only to those with the right search terms, URL, or insider knowledge. Is this functionally clandestine information any different in practice from information protected by a password?  This article aims to answer that question by providing a framework for a nuanced analysis of practical obscurity online.

Jens Grossklags & Nigel Barradale, Social Status and the Demand for Security and Privacy

Comment by: Alice Marwick

PLSC 2011

Workshop draft abstract:

The majority of stakeholders in the political process argue for consistently increased funding for defense, anti-terrorism activities and domestic security. However, it is far from obvious whether the majority of citizens share these concerns for heightened security. Specifically, we argue that individuals belonging to different social status categories perceive the need for security, and the privacy tradeoff sometimes associated with it, in substantially different ways.

The method of investigation is experimental: 146 subjects interacted in high- or low-status assignments, and the subsequent change in the demand for security and privacy was related to status assignment, with a significant t-statistic of up to 2.9 depending on the specification. We find that a high-status assignment strongly increases the demand for security. This effect is observable for two predefined sub-dimensions of security (i.e., personal and societal concerns) as well as for the composite measure. We find only weak support for an increase in the demand for privacy under a low-status manipulation.

Hence, high-status decision-makers, including the political elite, will be inclined to overspend on security measures relative to the demand of the populace.

Victoria Groom & M. Ryan Calo, User Experience As A Form Of Privacy Notice: An Experimental Study

Comment by: Lauren Willis

PLSC 2011

Workshop draft abstract:

This study and paper represent a collaboration between a privacy scholar and a PhD in human-computer interaction aimed at testing the efficacy of user experience as a form of privacy notice.  Notice is among the only affirmative requirements websites face with respect to privacy.  Yet few consumers read or understand privacy policies.  Indeed, studies show that the presence of a link labeled “privacy” leads consumers to assume that the website has specific privacy practices that may or may not actually exist.

One alternative to requiring consumers to read lengthy prose or decipher complex symbols is to influence a user’s mental model of the website directly by adjusting the user interface.  Use of particular design elements influences users’ cognitive and affective perceptions of websites and can affect behaviors relevant to privacy.

We intend to present the results of an ongoing, experimental study designed to determine how strategies of “visceral notice” compare to traditional notice.   Drawing on a rich literature in human-computer interaction, social psychology, and cognitive psychology, we examine whether anthropomorphism, formality, self-awareness, and other website features can instill in people a more accurate understanding of information practice than a privacy policy.

Eric Goldman, In Defense of 47 U.S.C. §230

Comment by: Deirdre Mulligan

PLSC 2011

Workshop draft abstract:

47 U.S.C. §230 is the most important Cyberlaw statute, but it keeps attracting new critics.  This Essay responds to those critics by analyzing a previously under-explored policy justification for the statute.  Section 230 works because it enables online publishers to obtain non-public information about marketplace offerings and publish that information in ways that help consumers make better decisions.  As a result, Section 230 helps the marketplace’s “invisible hand” work more effectively, a crucial social benefit that we should not jeopardize by modifying the statute.