In the lead-up to the Law School’s recent Contracting Over Privacy conference, Professor Omri Ben-Shahar, one of the organizers, and Assistant Professor Adam Chilton did something unusual: they circulated a draft of their paper—which examined how the structure and design of privacy disclosures influence respondents’ behavior—before they had actually collected their data.
“We are worried that if the experiment were to yield results consistent with our priors, our readers would view the results with suspicion,” they wrote to their colleagues 11 days before revealing their findings at the conference. Ben-Shahar, the co-author of a 2014 book, More Than You Wanted to Know: The Failure of Mandated Disclosure, has argued that privacy disclosures are useless. “After all, how often do you see people report empirical results that conflict with their own previously published conjectures?”
Their findings did end up confirming their predictions. The “best practices” designed to make privacy disclosures more effective don’t appear to matter; they don’t change consumers’ behavior or comprehension. For some, the results may seem unsurprising. But for lawmakers and scholars pushing for reforms centered on such “best practices,” the work made a notable contribution: it added empirical evidence about the prospects of improving a widely used—but widely ignored—consumer-protection tool.
“People are drowning in notices and disclosures about how their personal data is collected and used, but do they care? Do they view these notices as a source of information or as nuisances?” asked Ben-Shahar, the Leo and Eileen Herzel Professor of Law and the Kearney Director of the Coase-Sandor Institute for Law and Economics, which hosted the conference. “In this conference we hoped to provide new insights based on empirical and experimental methodology.”
Empirical analysis played a key role in several of the papers presented at the two-day Journal of Legal Studies conference, which brought together nearly two dozen top contract, privacy law, and computer science scholars, as well as Federal Trade Commission attorneys involved in enforcing the law, to examine the role of contracts in policing information privacy—an area of growing concern as social media, online transactions, and other digital conveniences make it easier for companies to collect users’ personal data.
“So many of the key issues in privacy law today converge with contract doctrines. We were trying to create conversations among scholars who don’t hear enough from each other,” said conference co-organizer Lior Strahilevitz, the Sidley Austin Professor of Law, whose own paper included empirical analysis to examine whether the use of explicit policy language would change consumers’ views about their rights. In another paper, legal scholars from Fordham and computer scientists from Carnegie Mellon developed methods to measure the relative ambiguity of different privacy policies and tested whether regulation can improve their clarity. Conference participants also discussed markets for personal data, formal versus informal privacy contracts, and contractual privacy law in both the United States and the European Union.
“Privacy law is an area where there’s an abundance of advocacy and a shortage of facts and sophisticated theory,” Strahilevitz said. “This conference is one of our efforts to alter that unsatisfying state of affairs.”
In the experiment conducted by Strahilevitz and former student Matthew Kugler, ’15, a census-weighted sample of 1,441 American social media and email users was asked to read excerpts of privacy policies relating to Facebook’s use of facial recognition software and Google’s use of automated email-content analysis, both of which are at issue in pending high-stakes class action lawsuits. Excerpts varied in detail: some randomly selected participants read older policies that had been deemed too vague to adequately communicate the companies’ practices, and others read current policies in which the practices were described much more explicitly and clearly.
As it turned out, the level of detail didn’t have any measurable impact—participants viewed the policies identically, despite evidence that they had read the policies closely. What’s more, many participants believed that they had agreed to allow company practices they saw as “highly intrusive”—and they believed this was the case even if they’d read vague policies.
“Despite the fact that they viewed Google’s analysis of their email content as creepy, consumers think that they have authorized it whether they have been shown language that a lawyer would say has authorized it, or whether they had been shown language that lawyers and a judge have said did not,” Strahilevitz said, adding that the experiment offered evidence that lawyers and lay people tend to view policies differently. Lawyers, he said, focus on the policy’s language, but lay people tend to integrate their previous experience and beliefs; even if a policy makes them uncomfortable, they might assume that companies are legally authorized to engage in invasive practices.
Chilton and Ben-Shahar took on similar issues, testing the impact of disclosure “best practices,” such as the use of clear titles, concrete examples, active language, well-organized information, and an easy-to-read font. To do this, they created a fake online survey purporting to be part of an effort to develop medical treatments for sexually transmitted diseases and then asked respondents sensitive questions about risky sexual behavior. Each respondent saw one of six versions of a privacy disclosure: although the policy was the same in each, the disclosures used best-practices guidelines to varying degrees, ranging from one that used all of the recommendations to one that used none. A control group saw a blank screen.
The best practices didn’t appear to matter: they didn’t affect respondents’ understanding of their rights or their willingness to disclose personal information. The contents didn’t even seem to affect how satisfied people were with the policy.
“We explicitly told people in our ‘best practice’ survey that we were going to sell their data, we were going to give it to insurance companies, we weren’t going to lock it up, that we didn’t take it particularly seriously, and that we reserved the right to do whatever we wanted,” Chilton said.
He paused, underscoring an increasingly clear message about how the public interacts with privacy disclosures. “People were very pleased with our policy,” he said, drawing laughs. “The control group [which saw a blank screen] was satisfied as well.”
This article was originally published by University of Chicago Law School.