Archives

Sasha Romanosky, David Hoffman, & Alessandro Acquisti, Docket Analysis of Data Breach Litigation

Comment by: Kristen Mathews

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1986461

Workshop draft abstract:

The proliferation of data breach disclosure (security breach notification) laws has prompted a flurry of lawsuits filed by alleged victims of identity theft against corporations that suffer a breach. Using data collected from Westlaw and PACER, we perform docket analysis on a sample of data breach lawsuits over the period from 1999 to 2010. This method of empirical legal research involves collecting, mining, and coding relevant data from court documents (such as the complaints and judicial rulings). While much economic and legal scholarship has been written about data breaches, breach disclosure legislation, and the difficulties that consumers face from breach litigation, to our knowledge this is the first research that attempts to empirically analyze the lawsuits themselves.

In this working paper, we present preliminary results showing that the trend in known lawsuits appears to generally follow (and lag) the trend in reported data breaches. Since about mid-2006, the time taken for plaintiffs to organize and file a complaint has been steadily increasing, though the time to dispose of these suits has been steadily decreasing. Moreover, the overall duration of a data breach lawsuit is 15 months, on average. We also find that the settlement rate of data breach lawsuits is substantially lower in our sample (26%) compared with estimates found in other legal scholarship (67%). Finally, the average number of records lost is significantly higher for known lawsuits than for the sample of all reported breaches (9.5m compared with 340k), and financial institutions are over-represented in breach litigation relative to the sample of known breaches, while government agencies and educational institutions are under-represented. Further, we use a probit regression to estimate the probability that a data breach will result in a lawsuit, and a multinomial logit model to examine the characteristics of lawsuits that drive particular outcomes.

Jules Polonetsky & Omer Tene, Advancing Transparency and Individual Control in the Use of Online Tracking Devices: A Response to Transatlantic Legal and Policy Developments

Comment by: Catherine Dwyer

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1920505

Workshop draft abstract:

For over a decade, behavioral advertising has been a focus of privacy debates on both sides of the Atlantic. Industry actors maintain that targeted ads are essential to supporting the main business model online, whereby users benefit from free content and services in return for being subjected to various advertisements. They assert that they do not cause any harm to users, given that any data collected and used are anonymous and in compliance with data protection standards. Regulators and consumer advocates insist that many advertising or analytics companies are collecting and using personal data in a manner that does not comply with the principles of privacy laws. They maintain that the dignity of users is impacted by these hidden practices, and that questions about harm from the use of data for purposes adverse to users remain unanswered.

The recent publication of the much anticipated FTC Staff Report on reform of the legal framework for privacy protection of consumers and the Department of Commerce Green Paper on privacy and innovation in the Internet economy have raised the stakes for both proponents and opponents of behavioral advertising and challenged the market to find solutions that are both privacy protective and commercially feasible. Moreover, the FTC’s proposal to implement a “do-not-track” mechanism echoes voices on the other side of the Atlantic calling for application of the e-Privacy Directive’s consent requirements through a browser-based opt-out. Such similarities reinforce our conviction that user expectations, business requirements, and privacy regimes are converging all over the world.

Our paper will draw on literature discussing behavioral economics, privacy enhancing technologies, and user-centric identity management to seek a solution to the behavioral advertising quandary. Such a solution must be acceptable to businesses, users, and regulators on both sides of the Atlantic and be based on the premise that privacy regulation needs to adapt to changing market and technological realities without dampening innovation or damaging the business model that makes the Internet thrive.

We will provide a taxonomy of the various mechanisms used by the online industry to track users (e.g., first- and third-party cookies; flash cookies; beacons; stored Flash objects; browser fingerprinting; deep packet inspection; and more); assess under legal, technical, and business criteria the feasibility of existing and new proposals for compliance with the latest FTC and EU regulatory requirements; and explore various strategies for solutions, such as browser defaults and add-ons, special marking of targeted ads, and short privacy policies.

Sandra Petronio, Privacy Perils: Deciding to Disclose or Protect Confidentialities

Comment by: Neil Richards

PLSC 2011

Workshop draft abstract:

Privacy is not only a legal matter; it is a matter of human behavior. Humans manage private information in a systematic, yet complicated manner (Petronio, 2002). Consequently, on the surface, the process may appear straightforward. However, when someone discloses private information to a confidant, a great deal may happen between the individuals before, as, and after that disclosure occurs. The individual making the disclosure may have chosen the confidant because there is a level of trust upon which that person can depend. The discloser may assume that the confidant will likely abide by his or her “rules,” even if implicit, regarding protecting or revealing the disclosed information to a third party. These assumptions a discloser makes, however, are typically more ideal than real. With a few exceptions, the more realistic outcome tends to be less clear-cut and more complicated (Petronio & Reierson, 2009). Using a Communication Privacy Management (CPM) theoretical approach, this paper explores the “problematics” found in the confidant’s role (Petronio, 2002). At issue is how privacy is managed by recipients and the impact that serving as a confidant has on the individual recipient. CPM theory argues that when people become confidants, they are considered co-owners of the information and therefore have a fiduciary responsibility to abide by the privacy rules the original owner wants them to use.

The confidant, at times, faces ambiguities because original owners may not state the expectations they have regarding how confidants should regulate their information. The role of confidant, especially in professions such as medicine, is often multifaceted. The dilemmas that health providers encounter, while not exclusive to medicine, illustrate the perils that confidants can face. Often physicians must weigh the risks and benefits of breaching confidences to meet best practices for the patient. Further, ambiguities may exist surrounding the legitimacy of seeking pertinent information from a family member. The physician may attempt to identify the best treatment plan or home care options but become hampered by the inability to obtain authorization from the patient to discuss the possibilities with family members. Even more complicated are situations where patients deny authorization to discuss end-of-life options with family members and subsequently lose the physical or mental capacity to change that decision, leaving the physician to cope with family members who disagreed with the patient’s choices. Legally, the physician is bound to follow the specified choices of the patient if the documents identifying those choices are present. Yet families often believe they have rights to a member’s private information and the ability to overrule decisions a member makes that no longer appear functional. Physicians are bound by confidentiality and may not be able to reveal the reasons for the patient’s choices. These obligations protect the patient, but the physician, as confidant, must navigate the family’s demands for a rationale explaining the patient’s choices.

The challenging role of confidant that physicians encounter may also take a toll on them personally. Acutely difficult situations can exist when physicians must disclose bad news, including revealing medical mistakes to patients and their families. In these cases, physicians must disclose information that technically belongs to the patient. While the information is private to the patient, the physician controls the flow of the patient’s information. The physician must tell patients about something unknown and potentially anxiety producing. Physicians are the keepers and bearers of bad news in their role as confidants. While physicians understand their obligations to the patient, they are also challenged in determining the balance between hope and honesty regarding the health outcomes for the patient. In some cases, physicians opt to err on the side of hope, which may mean the patients do not receive a timely or effective disclosure of information related to their own case. In hospice care, for example, the challenges are many on this point. The confidant role for physicians often requires them to carry the burden of someone else’s information, which can simultaneously heighten their sense of responsibility and negatively affect them emotionally. This dual burden of responsibility to others and its impact on the physician is especially evident in disclosing medical mistakes. The tension between having to reveal an event that is apt to feel like a threat to personal reputation and recognizing the patient’s rights of ownership clearly illustrates the often perilous nature of a confidant’s role for physicians.

While the legal parameters and responsibilities of physicians to patients may seem clear-cut, the decisions guiding the communicative nature of revealing or protecting private medical information, and the role of confidant, seem much less clear in day-to-day interactions.

Scott Peppet, Privacy Intermediaries

Comment by: Allan Friedman

PLSC 2011

Workshop draft abstract:

The Article explores the possibility of introducing “Privacy Intermediaries” into sensitive informational privacy domains. Privacy intermediaries are third parties that take on a neutral role as between two parties to an informational transaction (e.g., a web user and a web site, a search engine and an advertiser, a GPS-enabled smartphone user and a Starbucks, etc.).  Privacy intermediaries gather information from one party and pass it to another, but they can de-identify or alter it to conceal a user’s identity. They have fiduciary duties to those they serve—duties not to reveal information without true consent; duties to secure information; duties to keep confidences. The general idea is that in some contexts, privacy intermediaries may be able to provide the information needed for efficiency purposes while keeping much raw data private.
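The intermediary's core mechanism described above, passing information along while concealing identity, can be sketched concretely. This is a minimal illustrative mock-up, not the Article's proposal: the class, field names, and the choice of a keyed HMAC pseudonym are all assumptions introduced for the example.

```python
# Sketch of a de-identifying intermediary: it forwards a request but
# replaces the user's stable identifier with a keyed pseudonym, so the
# receiving site sees a consistent token it cannot link back to the user.
import hmac, hashlib, secrets

class PrivacyIntermediary:
    def __init__(self):
        # Secret key held only by the intermediary; its fiduciary duties
        # would include never revealing the key or the mapping it induces.
        self._key = secrets.token_bytes(32)

    def forward(self, request: dict) -> dict:
        out = dict(request)
        # Strip the raw identity and substitute a keyed pseudonym: the
        # same user always maps to the same token, but the recipient
        # cannot invert it without the intermediary's key.
        raw = out.pop("user_id").encode()
        out["pseudonym"] = hmac.new(self._key, raw, hashlib.sha256).hexdigest()
        return out

proxy = PrivacyIntermediary()
a = proxy.forward({"user_id": "alice", "query": "coffee near me"})
b = proxy.forward({"user_id": "alice", "query": "coffee hours"})
```

The stable pseudonym preserves what the recipient needs for efficiency purposes (e.g., frequency capping or personalization across requests) while the raw identifier stays with the intermediary.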

Privacy intermediaries are not an entirely new idea. Computer scientists have studied “interactive techniques” involving active data administrators who selectively filter and disclose information; health regulators have built the (somewhat related) idea of “health information trustees” or “information custodians” into draft health information legislation; mobile technology developers are exploring the idea of “data vaults” to make sensor data useful but private; and privacy advocates have argued for personal data storage and vendor relationship management. Several recent startups—Personal.com, i-Allow.com, the Locker Project—are pursuing these ideas in the market. This Article is the first legal scholarship to investigate the idea of privacy intermediaries in depth, and the first to explore its legal implications.

Those implications are complex. Most recent privacy scholarship has questioned the growing power of Internet intermediaries; this Article argues that we may need to strengthen, not weaken, intermediaries to bolster privacy. But how would privacy intermediaries work? What business model makes sense, is there a case for governmental subsidy, and what legal reforms (to Section 230, the third party doctrine, etc.) might be needed to realize their potential?

This Article explores these questions. Although it examines a prescriptive idea—the call for neutral third-party fiduciary Internet intermediaries—it is less a prescriptive argument for privacy intermediaries than an investigation of their potential and problems.

Stephanie Pell & Christopher Soghoian, Towards A Privacy Framework For Law Enforcement Access to Location Information

Comment by: Bryan Cunningham

PLSC 2011

Workshop draft abstract:

Electronic Communications Privacy Act (ECPA) reform was an active topic in 2010. The Digital Due Process coalition, a group of civil liberties groups, academic scholars, and several major industry players, launched a significant policy initiative that called for reform of the two-decade-old law. Responding to this call, the 111th Congress took a firm interest in the topic, with three ECPA hearings held in the House Judiciary Committee and one in the Senate Judiciary Committee.

In any area of ECPA reform, Congress must strive to find the right balance among the (often competing) interests of law enforcement, privacy, and industry. In some areas, it is relatively easy to agree on a common-sense path to improve the law. The topic of cloud computing proved to be such an area: industry, academia, and the public interest community all agreed that a probable cause warrant standard for all content requests would be a major improvement over the current standard, which varies depending on the length of time an email has been in storage, or whether it has been read at least once.

Finding this balance in the area of location privacy, however, has proved to be far more challenging for Congress because: (1) the technologies involved are exceedingly complex, far more so than cloud computing; (2) law enforcement agencies will not – and, in some instances, cannot (without compromising sources and methods) – publicly discuss their needs for and uses of this information; (3) major industry players are reluctant to disclose their own data retention policies for location information or to participate publicly in the legislative process, for example, by testifying at Congressional hearings; and (4) in the area of electronic communication privacy, where the courts have often “punted,” Congress must make proper judgments regarding consumers’ reasonable expectations of privacy and how they can be expressed in equally reasonable access rules.

Drawing on our unique expertise (as, respectively, a Counsel to the House Judiciary Committee in the 111th Congress, and a privacy and security researcher focused on law enforcement surveillance), we will plot a path forward for the location privacy problem.  This article will propose a regime of common sense, practical standards for law enforcement access to location information that is technology neutral, provides clear rules for law enforcement and industry to follow and courts to apply, and balances the interests of the three major ECPA stakeholders: law enforcement, consumer privacy and industry.

Paul Ohm, Big Data & Privacy

Comment by: Susan Freiwald

PLSC 2011

Workshop draft abstract:

We are witnessing a sea change in the way we threaten and protect information privacy. The rise of Big Data—meaning powerful new methods of data analytics directed at massive, highly interconnected databases of information—will exacerbate privacy problems and put particular pressure on privacy regulation. The laws, regulations, and enforcement mechanisms we have developed in the first century of information privacy law are fundamentally hampered by the special features of Big Data. Big Data will force us to rethink how we regulate privacy.

To do that, we first need to understand what has changed, by surveying Big Data and cataloging what is new. Big Data includes powerful techniques for reidentification, the focus of my last Article, but it encompasses much more. Two features of Big Data, in particular, interfere with the way we regulate privacy. First, Big Data produces results that defy human intuition and resist prediction. The paradigmatic output of Big Data is the surprising correlation. Second, the underlying mechanisms that make Big Data work are often inscrutable to human understanding. Big Data reveals patterns and correlations, not mental models. B is correlated with A, Big Data reveals, but it cannot tell us why, and given the counter-intuitiveness of the result, we are sometimes left unable even to guess.

Big Data’s surprising correlations and inscrutability will break the two predominant methods we use to regulate privacy today, what I call the “bad data list” approach and the Fair Information Practice Principles approach. Both approaches rely on transparency and predictability, two things that Big Data fundamentally resists. Neither regulatory method can survive Big Data, and we cannot salvage either using only small tweaks and extensions. We need to start over.

John Nockleby, Why Anonymity?

Comment by: Katherine Strandburg

PLSC 2011

Workshop draft abstract:

People seek anonymity in a wide variety of contexts. Some desire to remain anonymous for reasons that appear innocuous or even salutary. Those who post anonymous opinions contribute to public dialogue. Avatars permit people to explore different personas in Second Life. Many people desire to walk in public or eat in a restaurant without being identified. Gays and lesbians fearing ostracism, and others seeking socially disapproved but harmless outlets for themselves, engage in conduct without fear of outing. Thus, the ability to remain anonymous carries benefits to individuals as well as to society.

On the other hand, sometimes anonymity leads to anti-social behavior. The ability to post anonymously online leads some to make defamatory or vicious comments, or to download material otherwise protected under intellectual property law. The ability to conceal one’s identity in public spaces permits criminals to attack others without fear of discovery. Much criminal behavior depends on anonymity (cash transactions to pay underlings; operating through networks of corporate fronts; paying off public officials).  Thus, anonymity has a dark side as well.

As is well recognized, the digitization of the world of information, along with the development of new forms of technological surveillance, challenges traditional notions of anonymity and privacy. Some legal commentators propose that law restrain how much these new technologies force people to unveil themselves, whether online or as they live in the natural world. Others assert that anonymity has never been given the same level of legal protection as privacy, and embrace the unmasking technologies.

What do we lose if we embrace these technologies that displace anonymity? Are there virtues of anonymity that we should fear losing? If so, are there ways of protecting some forms of anonymity while simultaneously forcing visibility on socially harmful behavior?

This essay will examine some of the philosophical and practical issues that underlie these questions.

Helen Nissenbaum, Amanda Conley, Anupam Datta, Divya Sharma, The Obligation of Courts to Post Records Online: A Multidisciplinary Study

Comment by: Michael Traynor

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2112573

Workshop draft abstract:

On the face of it, what could be complicated about placing public court records online? After all, courts are obliged to make these records publicly available—shouldn’t they do so as effectively and efficiently as possible, offering a digitized collection on the Web? Indeed, the advent of PACER for the Federal Court System suggests that the matter has been settled. Yet at state and local levels around the country, where some of the records richest in personal information are found, privacy concerns have brought this controversial prospect to the fore. Courts continue to grapple with it, drafting new administrative rules and looking for guidance in constitutional, legislative, and common law sources, in legal scholarship, and in their own past practices and those of peer institutions.

The paper I am proposing for PLSC 2011 will report on findings of a collaborative project with Amanda Conley, Anupam Datta, and Divya Sharma on the normative question of placing court records online. Drawing on notable work by legal scholars Grayson Barber, Natalie Gomez-Velez, Daniel Solove, and Peter Winn, among others, and focusing on state civil courts, our project asks whether courts have an obligation to post on the Web, for open and unconditional access, records that traditionally have been made available in hard copy from court houses or electronically via local terminals. Guided by the framework of contextual integrity, we map, in detail, the differences in flow afforded by online placement and in so doing, make precise what others have attempted to capture with terms such as “hyper-dissemination” and “practical obscurity”. For the normative analysis, we compare local and online modes of dissemination in terms of how well they serve values, ends, and purposes attributed to courts, such as dispute resolution, justice and fairness, and accountability.

We reach the surprising (though tentative) conclusion that although courts in free and democratic societies are obliged to provide open access to records of court proceedings and outcomes, this obligation—both online and off—does not necessarily extend to all information that is presently included in these records. This means either that a great deal of personal information could be excised from records without violence to courts’ purposes, or there are reasons driving current practices that have not fully been acknowledged. Both alternatives are in critical need of further exploration.

Adam Moore, Privacy and Government Surveillance: WikiLeaks and the new Accountability

Comment by: Colin Koopman

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1980812

Workshop draft abstract:

This working paper considers the ‘new accountability’ forced upon corporations and governments by information sharing sites like WikiLeaks. I will argue that accessing and sharing sensitive information about individuals, corporations, and governments is – in the typical case – morally suspect. Nevertheless, the wrongness of cases like WikiLeaks is mitigated by two factors. First, in democratic societies we have a right to know much of the information published by these sharing sites – information that in many instances is unjustifiably withheld by governments. Second, this sort of sharing is forcing a realignment of power. For decades corporations and governments have been able to collect, store, and share information about ordinary citizens while walling off access to their own information. Sharing sites like WikiLeaks are changing this balance of power. And while I agree that “two wrongs don’t make a right” – accessing, storing, and sharing information about citizens on the one hand, and accessing, storing, and sharing information about government activities on the other – for this slogan to be true, one has to acknowledge the first wrong.

Jon L. Mills, New Media-Old Law

Comment by: William McGeveran

PLSC 2011

Workshop draft abstract:

Throughout the evolution of what we now call “media law”, several characteristics of the media were a constant, and legal tests were based on those realities. Since the times have changed radically, should those tests and assumptions also change?

1.      Gatekeepers — Since the invention of writing there have been barriers to mass distribution of information. With the advent of the printing press and broadcast media there continued to be practical, legal and financial barriers to mass communications. The barriers have included cost, government licenses and editors. Those barriers are gone because of the internet and the new press.

2.      Community Standards and Mores — Legal tests for obscenity and privacy intrusions, such as public disclosure of private facts, include references to community standards. But who is the community for judging internet distributions? The courts are struggling to determine whether the test should still be based on the standards of a limited geographic community. For example, parts of the opinions in Ashcroft v. ACLU suggest that a national standard may be appropriate because the community is now national. Is that appropriate, and how would that standard work?

3.      Anonymity — Free speech and press policies have protected and respected anonymity from the earliest times. Whistleblowers and anonymous political commentary are part of our culture. Should that protection change given the wide distribution of intrusive comments on the internet? There are cases where courts have protected the identity of bloggers even when the statements were highly offensive and intrusive. In the Krinsky case out of California, an anonymous poster on a financial message board insulted a number of officers of a Florida corporation, calling them names that included “cockroach,” “mega scum bag,” and “boobs, losers and crooks.” The court protected the identity of the poster.

The new media has no gatekeepers, only amorphous community standards, and allows total anonymity. Add to this an instant news cycle, and the Shirley Sherrod incident occurs. An innocent person is slandered and fired by the President of the United States within twenty-four hours. The reports were based on an edited YouTube video. How should the law react?

As a further example of the evolution of media, I will describe the media involvement in five high profile cases that have occurred over the last twenty years. I served as counsel in four of those cases.

1.      Rolling Student Murders — Danny Harold Rolling murdered five University of Florida students in 1990.  He tortured and mutilated the bodies.  Media sought access to the crime scene photos and autopsy photos.

2.      Death of Dale Earnhardt — NASCAR star Dale Earnhardt died during the Daytona 500 race in 2001.  National, global and electronic media sought access to his autopsy photos.

3.      Murder of Gianni Versace — In 1997, world famous designer Gianni Versace was murdered on Miami Beach. Press sought autopsy photos and investigatory materials.

4.      Death of Nicole Catsouras — Teenager Nicole Catsouras died in an auto crash in 2006.  Graphic accident scene photos were distributed by members of the California Highway Patrol.

5.      Death of SeaWorld Trainer Dawn Brancheau- In February 2010, Dawn Brancheau was killed at SeaWorld by a killer whale. The incident was recorded on video cameras. The media sought access to the videos.

Each of these instances involved all of the legal issues described above and involved a balancing of privacy and press interests.