Archives

Mary D. Fan, Regulating Sex and Privacy in a Casual Encounters Culture

Comment by: Jason Schultz

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1796534

Workshop draft abstract:

The regulation of sex and disease is a cultural and political flashpoint, and a persistent problem that law’s antiquated arsenal has been hard-pressed, and clumsily adapted, to address effectively.  The need for attention is demonstrated by arresting data – for example, that one in four women aged 14 to 19 has been infected with at least one sexually transmitted disease (STD), that managing STDs costs an estimated $15.9 billion a year, and that syphilis, once near eradication, is on the rise again, as are HIV and other STDs.  Public health officials on the front lines have called for paradigm changes to tackle the enormous challenge.  Controversial proposals have circulated, such as mass HIV screening for everyone aged 13 to 64, STD testing in high schools, mandatory HIV screening, strict liability in tort for transmission, and criminalizing first-time sex without a condom.  The article argues that a less intrusive, more narrowly tailored, and more efficient avenue of regulation has been overlooked because of the blinders of old paradigms of sex and privacy.

The article contends that we should shift our focus away from the cumbersome, costly hammers and perverse incentives of criminal and tort law and instead adapt privacy law and culture to changes in how we meet and mate today.  Information-sharing innovations would deter more effectively with less intrusion, avoiding the perverse effects of chilling victims and discouraging people from acquiring knowledge or seeking help.  Adjusting privacy law rather than criminal and tort law would better safeguard sexual autonomy and ameliorate the information deficit in the increasingly prevalent casual sex culture and Internet-mediated marketplace for sex and love.

Lothar Determann and Robert Sprague, Intrusive Monitoring: Employee Privacy Expectations are Reasonable in Europe, Destroyed in the United States

Comment by: Vince Polley

PLSC 2011

Workshop draft abstract:

In the increasingly global economy and workplace, the difference in workplace privacy expectations and protections between the United States and Europe stands out.  In the United States, privacy protections depend on whether employees have reasonable privacy expectations, but employers are relatively free to destroy actual expectations through notices.  In Europe, workplace privacy is not conditioned on employee privacy expectations but is protected as a matter of public policy.  Thus, in Europe – where reasonable privacy expectations are not a condition of privacy protection – employees can actually and reasonably expect workplace privacy, while in the United States – where privacy protections depend on reasonable privacy expectations – employees cannot expect much privacy in practice.  Our article will examine the underlying policy reasons and legal frameworks that control the extent to which employers may monitor their employees, including implications for multinational employers and employees in the United States and Europe.

Mary J. Culnan, Accountability as the Basis for Regulating Privacy: Can Information Security Regulations Inform a New Policy Regime for Privacy?

Comment by: Joe Alhadeff

PLSC 2011

Workshop draft abstract:

There is an emerging consensus that the current regulatory regime for privacy based on notice/choice or harm is not effective and needs to be revisited. In general, the current approaches place too much burden on individuals, frequently deal with privacy only after harm has occurred, and have failed to motivate organizations to address privacy proactively by implementing effective risk management processes. This paper adopts Solove’s view that privacy is best characterized as a set of problems resulting from the ways organizations process information. As a result, the most effective way to address privacy is for organizations to proactively avoid causing privacy problems through accountability.

The paper first argues why a new approach based on accountability is both necessary and appropriate. Next, the requirements of three information security laws (the GLB Safeguards Rule, the HIPAA Security Rule, and the Massachusetts Standards for the Protection of Personal Information) are analyzed against the elements of accountability, and the feasibility of adapting these requirements to privacy is assessed. These laws require organizations to develop security programs appropriate to the organization’s size, its available resources, and the amount and sensitivity of stored data. While these security laws are judged to provide a good starting point for privacy legislation, additional challenges specific to privacy remain, and these are described. The paper concludes by reviewing arguments in favor of adopting a delegation approach to privacy regulation rather than the traditional compliance approach.

Andrew Clearwater, Reducing Data Security Breaches Through Enhancements in Property, Tort and Contract Law

Comment by: Kristen Mathews

PLSC 2011

Workshop draft abstract:

The power inequalities that exist when information is transferred between individuals and bureaucracies have left consumers vulnerable to data breaches. While data breach disclosure laws have improved consumer protection and informed the marketplace of security risks, consumers are not entirely rational, and they continue to suffer from behavioral biases that hinder their ability to reduce or avoid loss. Many breach notification letters go ignored, and those that are read provide little recourse, as most consumers do not know how to act on the information. Moreover, many consumers whose data is exposed never suffer an actual incident of identity theft and are thus faced with the nearly impossible challenge of demonstrating a particularized injury under tort law. The mere fear of future identity theft is generally insufficient to warrant damages.

Courts applying tort and contract law generally incorporate prior assumptions about privacy that fail to mitigate or compensate for the rapid escalation of data breaches today. For instance, Bell v. Acxiom Corp. demonstrates that standing requirements can be hard to meet due to the lack of a concrete or particularized harm for many victims of data breaches. Additionally, Key v. DSW Inc. shows that courts are reluctant to analogize the need for credit monitoring in breach cases to the need for medical monitoring in product liability cases. Only where a special relationship exists, as it did in Bell v. Mich. Council, No. 246684, have courts found a duty to safeguard personal data.

This paper investigates the problem of data breaches through the lenses of property, tort, and contract law and analyzes proposals in each of these areas to reduce security breaches, or at least compensate consumers for their harm. Proposals investigated include: a right to personal information alienability paired with a well-policed personal information market; the creation of new causes of action such as “breach of trust”; the creation of an affirmative duty to secure personal data for all data stewards that maintain consumers’ personal information; and improving the chances of consumer success under breach of contract theory by shifting the burden of proof on damages to the data steward.

Cynthia A. Brown and Carol M. Bast: Who’s Listening, Now? An Examination of the Government’s Use of FISA Evidence in Criminal Prosecutions

PLSC 2011

Workshop draft abstract:

Our national security efforts over the last decade have generated a multitude of policy changes, including a redesign of the Foreign Intelligence Surveillance Act of 1978 (FISA).  FISA once prescribed procedures mostly for electronic surveillance and physical searches necessary for gathering intelligence information on foreign soil and from foreigners.  Today, FISA outlines the process by which federal authorities conduct surveillance and searches of individuals, including American citizens within the United States, suspected of espionage or terrorism against the United States on behalf of a foreign power.  As a result of the many amendments since 9/11, the original legislation is all but unrecognizable, and the consequences of these amendments for Americans’ privacy are largely unknown.  We do know that the administration’s requests for FISA surveillance have increased nearly 200 percent in the most recent eight years as compared to the statute’s first 23 years; that restrictions on surreptitious surveillance of Americans under FISA have been greatly relaxed; and that criminal cases indicate an increase in the government’s use of evidence obtained through FISA surveillance in the prosecution of ordinary crimes.  This research is an empirical legal study examining the content of all reported federal cases involving FISA evidence.  The study presents information that better illuminates the impact FISA’s redesign is having on Americans’ privacy as our nation continues to struggle to strike the difficult balance between security and civil rights.

Bruce Boyden: Can a Computer Intercept Your Email?

Comment by: Marcia Hofmann

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2028980

Workshop draft abstract:

Can a computer spy on you? Obviously a computer can be used to spy on you, but can the computer itself invade your privacy? This question, once perhaps metaphysical, has gained added urgency in recent years as email services such as Google’s Gmail have begun scanning their users’ emails in order to target advertisements, and as ISPs have begun using filtering to weed out viruses, spam, and, most controversially, copyrighted material from their systems. Such automated scanning and handling of electronic communications arguably violates the federal Wiretap Act, which prohibits intentional interception of an electronic communication without the consent of a party.

Interception is defined under the Wiretap Act as “the aural or other acquisition of the contents of any wire, electronic, or oral communication through the use of any electronic, mechanical, or other device.” Courts have long struggled to apply this definition outside the context of the traditional wiretap or electronic eavesdropping situation. For example, early Wiretap Act cases involving recordings posed the challenge of determining exactly when the moment of interception occurred: when the conversation was recorded, in which case the circumstances of the act of recording would determine whether an interception occurred, or when the conversation was listened to, in which case the circumstances of playback would determine whether there was a violation. In other words, in recording situations, do machines intercept communications or do humans? Courts have generally answered that question by holding that it is machines that accomplish the interception, albeit, as one early case put it, as the “agent of the ear.” Subsequent decisions have held recordings or copied communications to be interceptions whether or not they are ultimately listened to or read by humans.

The conclusion that devices intercept, even if it makes sense for the recording context, should not be reflexively applied to all automated handling of communications. Even if “acquisition” can apply to a recording rather than perception, recordings and other copies enable human perception of the contents of a communication. It is the prospect of third-party perception of the contents of a private communication that is the harm the Wiretap Act protects against. Unmoored from that prospect of harm, automated handling of communications does not pose the relevant danger and should not fall within the definition of “acquisition.” It neither carries those contents to a human for perception, nor does it capture them for later perception. Advertising, filtering, blocking, and other actions in which the substance of the communication is not preserved should not be held to be a violation of the ECPA.
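
To make the abstract’s distinction concrete, the following minimal sketch (in Python; the pattern, function names, and routing logic are hypothetical illustrations of ours, not anything drawn from the article or from any real filtering system) shows automated handling that inspects a message, acts on it, and preserves nothing a human could later read:

    import re

    # Hypothetical pattern; a real spam filter would be far more sophisticated.
    SPAM_PATTERN = re.compile(r"(?i)\b(lottery|wire transfer|free pills)\b")

    def route(message: str) -> str:
        """Classify a message, then discard its contents immediately.

        Only the routing decision survives; no copy of the body is
        retained, so no human can later perceive what was said.
        """
        decision = "spam" if SPAM_PATTERN.search(message) else "inbox"
        del message  # drop the only reference; nothing is kept for later reading
        return decision

    print(route("You have won the lottery!"))  # -> "spam"
    print(route("Lunch at noon?"))             # -> "inbox"

On the abstract’s account, such ephemeral processing never exposes the contents to third-party perception, which is precisely the harm the Wiretap Act guards against.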

danah boyd & Alice Marwick: Social Privacy in Networked Publics: Teens’ Attitudes, Practices, and Strategies

Comment by: Priscilla Regan

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1925128

Workshop draft abstract:

When 17-year-old Carmen broke up with her boyfriend, she wanted her friends to know that she was feeling sad.  Her initial instinct was to post sappy song lyrics to her Facebook, but she decided against doing so out of fear that her mother would think she was suicidal.  On Facebook, Carmen is friends with her mother, and while her mother knew about the breakup, Carmen knew her mother had a tendency to overreact.  As a solution, she decided to post lyrics from “Always Look on the Bright Side of Life.”  Her geeky friends immediately recognized the song from Monty Python’s “Life of Brian” and knew that it was sung when the main character was about to be executed.  Her mother, on the other hand, did not realize that the words were song lyrics, let alone know the film reference.  She took the words literally and posted a comment on Carmen’s profile, noting that Carmen seemed to be doing really well.  Her friends, knowing the full backstory, texted her.

The technique Carmen uses can best be understood as “social steganography.”  Steganography is an ancient tactic of hiding information in plain sight – the ultimate “security through obscurity.”  Long before cryptography, the Greeks were notorious for using steganography: hiding messages in wax tablets, or tattooing the heads of slaves and sending them to their destination once their hair had grown back.  Steganography isn’t powerful because of strong encryption; it’s powerful because people don’t think to look for a hidden message.  In a networked society, privacy isn’t going to be about controlling or limiting access.  It’s going to be about controlling and limiting meaning.
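
As a purely technical aside (our illustration, not the authors’), a minimal Python sketch of textual steganography shows why a hidden message survives in plain sight: the cover text reads normally, while the payload rides along in zero-width characters that readers never think to look for. The names and the encoding scheme here are illustrative only:

    # Zero-width characters render as nothing in most displayed text.
    ZERO = "\u200b"  # zero-width space encodes a 0 bit
    ONE = "\u200c"   # zero-width non-joiner encodes a 1 bit

    def hide(cover: str, secret: str) -> str:
        """Append the secret, bit by bit, as invisible characters."""
        bits = "".join(f"{byte:08b}" for byte in secret.encode("utf-8"))
        return cover + "".join(ONE if b == "1" else ZERO for b in bits)

    def reveal(stego: str) -> str:
        """Recover the hidden bytes from the invisible tail."""
        bits = "".join("1" if ch == ONE else "0"
                       for ch in stego if ch in (ZERO, ONE))
        data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
        return data.decode("utf-8")

    post = hide("Always look on the bright side of life", "not okay")
    print(post)          # displays as an ordinary status update
    print(reveal(post))  # -> "not okay"

Social steganography, as the abstract emphasizes, works the same way at the level of meaning rather than bits: the encoding key is shared cultural context instead of a character scheme.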

As teens use sites like Facebook to socialize with peers, they struggle to manage diverse audiences simultaneously.  Facing collapsed contexts and social expectations, they are unable to segment their personal networks to maintain distinct social roles.  Instead, they use techniques like social steganography, limiting access to meaning instead of access to content. They use song lyrics, in-jokes, and external referents to encode messages that only have meaning to those “in the know.”  Their practices, while not new, allow teens to achieve privacy in public in new ways through social media.  Social steganography is just one of the techniques that teens use to manage privacy while living very public lives through social media.

This article will explore the various techniques teens employ in order to examine the core practices of privacy in everyday life and the implications of emergent social norms for legal and technical discourse surrounding privacy. The argument leverages ethnographic data concerning American teen social media practices and situates it in a discussion of counterpublics (Warner 2002), symbols as subcultural signals (Chauncey 1995), civil inattention (Goffman 1959), and privacy as a contextual process (Gavison 1980; Nissenbaum 2009). The analysis then interrogates conflicts between people’s practices and both the design of technical systems and the legal constructs for addressing privacy.  We will argue that the techniques teens use to manage privacy are rooted in a model of “networked privacy” that doesn’t mesh well with the individual-centric nature of technological privacy settings or legal notions of harm.

Marc Jonathan Blitz: Warranting a Closer Look: When Should the Government Need Probable Cause to Analyze Information It Has Already Acquired?

Comment by: Peter Winn

PLSC 2011

Workshop draft abstract:

As the Supreme Court made clear in United States v. Jacobsen, the fact that government officials may constitutionally seize and hold an item doesn’t mean they have the authority to look inside: “Even when government agents may lawfully seize [] a package to prevent loss or destruction of suspected contraband, the Fourth Amendment requires that they obtain a warrant before examining the contents of such a package.”  466 U.S. 109, 114 (1984).  A similar distinction between the authority to seize and the authority to search arises in other contexts, and Orin Kerr has recently proposed applying a rule like this to Internet communications, arguing that government officials should be “allowed to run off a copy of the data without a warrant but then not actually observe the data until a warrant is obtained.”  Applying the Fourth Amendment to the Internet: A General Approach, 62 Stan. L. Rev. 1005, 1042 (2010).

However, while the government might not be permitted to search what it has seized, courts have been more willing to let law enforcement officials analyze information they have previously acquired in a search (or other information-gathering).  A few months ago, for example, the Eighth Circuit Court of Appeals rejected a litigant’s argument that if the government needed a warrant to look inside the seized package in Jacobsen, it should also need a warrant to chemically reveal the contents of a blood sample it had taken.  See Dodd v. Jones, 623 F.3d 563, 568-69 (8th Cir. 2010).  It cited an earlier Ninth Circuit case, United States v. Snyder, which reached a similar conclusion and stated that treating the extraction of blood as one search and the chemical testing of it as another would divide the information-gathering process “into too many separate incidents, each to be given significance for fourth amendment purposes.”  U.S. v. Snyder, 852 F.2d 471, 473 (9th Cir. 1988).

This article will argue, however, that in some circumstances the information-gathering process should be divided up in precisely this way for Fourth Amendment purposes – requiring the government to get a warrant before analyzing information it has obtained in a permissible warrantless search, or in surveillance that does not count as a search.  More specifically, I will consider whether and when government should be required to obtain a warrant (1) to “unscramble” technologically masked faces in video records, or otherwise apply identification technologies to archives generated by public video surveillance; (2) to analyze data obtained in Internet searches; and (3) to conduct a chemical analysis of blood, DNA, or other biological samples, as Justice Marshall suggested in his dissent in Skinner v. Railway Labor Executives’ Association.  See 489 U.S. 602, 642 (1989) (Marshall, J., dissenting) (observing that even if requiring a warrant is impractical when urine samples are taken from railroad workers, “no exigency prevents railroad officials from securing a warrant before chemically testing the samples they obtain”).  Indeed, I argue that changes in the architecture that protects our physical and electronic privacy may increasingly require that warrant or other probable cause protections be moved from the information-acquisition stage to the information-analysis stage, and that “analysis warrants” of this kind should (consistent with Justice Marshall’s suggestion in his Skinner dissent) play a key role in courts’ jurisprudence on the special needs and administrative search exceptions to the warrant requirement.

Colin J. Bennett: In Defense of Privacy: The Concept and the Regime

Comment by: Michael Froomkin

PLSC 2011

Workshop draft abstract:

For many years those scholars interested in the nature and effects of “surveillance” have been generally critical of “privacy” as a concept, as a way to frame the political and social issues, and as a regime of governance. “Privacy” and all that it entails is considered too narrow, too based on liberal assumptions about subjectivity, too implicated in rights-based theory and discourse, insufficiently sensitive to the discriminatory aspects of surveillance, culturally relative, overly embroiled in spatial metaphors about “invasion” and “intrusion,” and ultimately practically ineffective.

On closer examination, however, I suggest that the critiques of privacy are quite diverse and often based on faulty assumptions about the contemporary framing of the privacy issue and about the implementation of privacy protection policy.  Some critiques are pitched at a conceptual level; others focus on practice.  There is a good deal of overstatement, and to a certain extent “straw men” are constructed for later demolition.

The aim of this paper is to disentangle the various critiques and to subject each to a critical analysis. Despite the fact that nobody can supply a precise and commonly accepted definition, privacy maintains an enormous popular appeal, in the English-speaking world and beyond.  It attaches to a huge array of policy questions, to a sprawling policy community, to a transnational advocacy network, to an academic literature and to a host of polemical and journalistic commentary.  Furthermore, its meaning has gradually changed to embrace a more collective understanding of the broader set of social problems. The broader critique from surveillance scholars tends to be insensitive to these conceptual developments, as well as to what members of the policy community actually do.