Ariel Porat and Lior Jacob Strahilevitz, Personalizing Default Rules and Disclosure with Big Data
Comment by: Lauren Willis
Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2217064
Workshop draft abstract:
The rise of Big Data is one of the most important recent economic developments in the industrialized world, and it simultaneously poses vexing privacy issues for policymakers. While the use of Big Data to help firms sort consumers (e.g., for marketing or risk-pricing purposes) has been the subject of sustained discussion, scholars have paid little attention to governmental uses of Big Data for purposes unrelated to law enforcement and national security. In this paper, we show how Big Data might transform adjudication, disclosure, and the tailoring of contractual and testamentary default provisions.
The paper makes three key contributions. First, it uses the psychology and computer science literatures to show how Big Data tools classify individuals based on “Big Five” personality metrics, which in turn are predictive of individuals’ future consumption choices, risk preferences, technology adoption, health attributes, and voting behavior. For example, the paper discusses new research showing that by systematically analyzing data from individuals’ smartphones, researchers can identify particular phone users as particularly extroverted, conscientious, open to new experiences, or neurotic. Smartphones, not eyes, turn out to be the true windows into our souls. Second, it shows how Big Data tools can match individuals who are extremely similar in terms of their behavioral profiles and, by providing a subset of “guinea pigs” with a great deal of time and information to make choices, can extrapolate ideal default rules for huge swaths of the population. Big Data might even be used to promote privacy to some degree, by offering pro-privacy defaults to consumers whose past behavior suggests a strong preference for privacy and pro-sharing defaults to consumers whose past behavior indicates little interest in safeguarding personal information. Third, the paper is the first to show that the influential critiques of disclosure regimes (including FIPs-style “notice and choice” provisions) are at bottom critiques of impersonal disclosure. A legal regime that relies on Big Data to determine which relevant information about risks, side effects, and other potentially problematic product attributes should be disclosed to individual consumers has the potential to improve the efficacy of disclosure strategies more generally.
The paper also confronts a number of important objections to personalization in these various contexts, including concerns about cross-subsidies, government discrimination, and personalization in a world where human preferences and behaviors change over time.
The paper seeks to show privacy scholars and advocates precisely what will be at stake in the looming fight over Big Data. We seek to demonstrate that Big Data battles are not merely fights about privacy versus profits, but implicate a host of unrecognized social interests as well.