"Social media companies depend on selling information about their users’ clicks and purchases to data brokers who match ads to the most receptive individuals. But the Federal Trade Commission
and the White House have called for legislation that would inform consumers about the data collected and sold to companies, warning of analytics that have ‘the potential to eclipse longstanding civil
rights protections.’ Does the collection of data by companies threaten consumers’ civil rights?"
"This paper is intended to give an overview of the issues as we see them and contribute to the debate on big data and privacy. This is an area in which the capabilities of the technology and
the range of potential applications are evolving rapidly and there is ongoing discussion of the implications of big data. Our aim is to ensure that the different privacy risks of big data are considered
along with the benefits of big data - to organisations, to individuals and to society as a whole. It is our belief that the emerging benefits of big data will be sustained by upholding key data protection
principles and safeguards. The benefits cannot simply be traded with privacy rights."
"Paul Ohm’s 2009 article ‘Broken Promises of Privacy’ spurred a debate in legal and policy circles on the appropriate response to computer science research on re-identification. In this
debate, the empirical research has often been misunderstood or misrepresented. A new report by Ann Cavoukian and Daniel Castro is full of such inaccuracies, despite its claims of ‘setting the record
straight.’ We point out eight of our most serious points of disagreement with Cavoukian and Castro. The thrust of our arguments is that (i) there is no evidence that de-identification works either in theory or in practice and (ii) attempts to quantify its efficacy are unscientific and promote a false sense of security by assuming unrealistic, artificially constrained models of what an adversary might do."
"Google thinks I’m interested in parenting, superhero movies, and shooter games. The data broker Acxiom thinks I like driving trucks. My data doppelgänger is made up of my browsing history, my
status updates, my GPS locations, my responses to marketing mail, my credit card transactions, and my public records. Still, it constantly gets me wrong, often to hilarious effect. I take some comfort that
the system doesn’t know me too well, yet it is unnerving when something is misdirected at me. Why do I take it so personally when personalization gets it wrong?"
"In this paper, we will discuss a select group of academic articles often referenced in support of the myth that de-identification is an ineffective tool to protect the privacy of individuals.
While these articles raise important issues concerning the use of proper de-identification techniques, reported findings do not suggest that de-identification is impossible or that de-identified data
should be classified as personally identifiable information. We then provide a concrete example of how data may be effectively de-identified — the case of the U.S. Heritage Health Prize. This example
shows that in some cases, de-identification can maximize both privacy and data quality, thereby enabling a shift from zero-sum to positive-sum thinking — a key principle of Privacy by Design."
"Since 1967, when it decided Katz v. United States, the Supreme Court has tied the right to be free of unwanted government scrutiny to the concept of reasonable expectations of privacy. An
evaluation of reasonable expectations depends, among other factors, upon an assessment of the intrusiveness of government action. When making such assessments historically, the Court considered police
conduct with clear temporal, geographic, or substantive limits. However, in an era where new technologies permit the storage and compilation of vast amounts of personal data, things are becoming more
complicated. A school of thought known as ‘mosaic theory’ has stepped into the void, ringing the alarm that our old tools for assessing the intrusiveness of government conduct potentially undervalue
privacy rights. Mosaic theorists advocate a cumulative approach to the evaluation of data collection. Under the theory, searches are ‘analyzed as a collective sequence of steps rather than as individual
steps.’ The approach is based on the observation that comprehensive aggregation of even seemingly innocuous data reveals greater insight than consideration of each piece of information in isolation. Over
time, discrete units of surveillance data can be processed to create a mosaic of habits, relationships, and much more. Consequently, a Fourth Amendment analysis that focuses only on the government’s
collection of discrete units of data fails to appreciate the true harm of long-term surveillance—the composite."
"Data brokers collect and store a vast amount of data on almost every U.S. household and commercial transaction. Of the nine data brokers, one data broker’s database has information on 1.4 billion consumer transactions and over 700 billion aggregated data elements; another data broker’s database covers one trillion dollars in consumer transactions; and yet another data broker adds three billion new records each month to its databases. Most importantly, data brokers hold a vast array of information on individual consumers. For example, one of the nine data brokers has 3000 data segments for nearly every U.S. consumer."
"The Health and Social Care Act 2012 (HSCA) gave NHS England the power to direct the Health and Social Care Information Centre (formerly the NHS Information Centre) to collect electronic
patient records from GP practices. This was to be the first part of the ‘care.data’ initiative, the stated purpose of which is that using ‘information about the care you have received, enables those
involved in providing care and health services to improve the quality of care and health services for all’. […] It is undeniable that the rising cost of health and social care provision is a huge
societal problem, and it is also undeniable that health and social care services possess enormous quantities of hugely valuable patient data (whose value lies both in its potential benefits for future
service provision, and in potential commercial benefits to the private sector) but I am by no means certain that the people whose data is involved understand what is proposed, or what the potential
implications are. The suspicion – fair or not – that care.data is merely a front for the monetisation of that valuable patient data, the suspicion that attempts were being made to implement it ‘under the
radar’ (remember that, initially, no national publicity campaign, or opt-out procedure was proposed) and the apparent reluctance of its proponents to engage with the complex questions of what
‘anonymisation’ and ‘pseudonymisation’ mean in our increasingly technical world, lead me to doubt that care.data is, currently, proportionate to the problem it seeks to address."
“‘This is really, really a privacy nightmare,’ says Deborah Peel, the executive director of Patient Privacy Rights, who claims that the vast majority, if not all, of the health data collected
by these types of apps has effectively ‘zero’ protections but is increasingly prized by online data mining and advertising firms. Both the Food and Drug Administration and the FTC regulate some aspects
of the fitness tracking device and app market, but not everyone thinks the government has kept pace with the rapidly changing fitness tracking market. ‘The FTC and even the FDA have not done enough,’ says
Jeffrey Chester, the executive director of the Center for Digital Democracy, who says the lack of concrete safeguards to protect data in this new space leaves consumers at risk. ‘Health information is
sensitive information and it should be tightly regulated.’”
"When a rare ice storm threatened New Orleans in January, some residents heard from a city official who had gained access to their private medical information. Kidney dialysis patients were
advised to seek early treatment because clinics would be closing. Others who rely on breathing machines at home were told how to find help if the power went out.
Those warnings resulted from vast volumes of government data. For the first time, federal officials scoured Medicare health insurance claims to identify potentially vulnerable people and share their names
with local public health authorities for outreach during emergencies and disaster drills."
"Big data drives big benefits, from innovative businesses to new ways to treat diseases. The challenges to privacy arise because technologies collect so much data (e.g., from sensors in everything from phones to parking lots) and analyze them so efficiently (e.g., through data mining and other kinds of analytics) that it is possible to learn far more than most people had anticipated or can anticipate given continuing progress. These challenges are compounded by limitations on traditional technologies used to protect privacy (such as de-identification). PCAST concludes that technology alone cannot protect privacy, and policy intended to protect privacy needs to reflect what is (and is not) technologically feasible. In light of the continuing proliferation of ways to collect and use information about people, PCAST recommends that policy focus primarily on whether specific uses of information about people affect privacy adversely. It also recommends that policy focus on outcomes, on the ‘what’ rather than the ‘how,’ to avoid becoming obsolete as technology advances."
"Big data technologies will be transformative in every sphere of life. The knowledge discovery they make possible raises considerable questions about how our framework for privacy protection
applies in a big data ecosystem. Big data also raises other concerns. A significant finding of this report is that big data analytics have the potential to eclipse longstanding civil rights protections in
how personal information is used in housing, credit, employment, health, education, and the marketplace. Americans’ relationship with data should expand, not diminish, their opportunities and […]"
"Big Data is increasingly mined to rank and rate individuals. Predictive algorithms assess whether we are good credit risks, desirable employees, reliable tenants, valuable customers—or
deadbeats, shirkers, menaces, and ‘wastes of time.’ Crucial opportunities are on the line, including the ability to obtain loans, work, housing, and insurance. Though automated scoring is pervasive and
consequential, it is also opaque and lacking oversight. In one area where regulation does prevail—credit—the law focuses on credit history, not the derivation of scores from data. Procedural regularity is
essential for those stigmatized by ‘artificially intelligent’ scoring systems. The American due process tradition should inform basic safeguards. Regulators should be able to test scoring systems to
ensure their fairness and accuracy. Individuals should be granted meaningful opportunities to challenge adverse decisions based on scores miscategorizing them. Without such protections in place, systems
could launder biased and arbitrary data into powerfully stigmatizing scores."
"This article identifies three uses of big data that hint at the future of policing and the questions these tools raise about conventional Fourth Amendment analysis. Two of these examples,
predictive policing and mass surveillance systems, have already been adopted by a small number of police departments around the country. A third example—the potential use of DNA databank samples—presents
an untapped source of big data analysis. Whether any of these three examples of big data policing attract more widespread adoption by the police is yet unknown, but it is likely that the prospect of being
able to analyze large amounts of information quickly and cheaply will prove to be attractive. While seemingly quite distinct, these three uses of big data suggest the need to draw new Fourth Amendment
lines now that the government has the capability and desire to collect and manipulate large amounts of digitized information."
"EU approaches to data protection, competition and consumer protection share common goals, including the promotion of growth, innovation and the welfare of individual consumers. In practice,
however, collaboration between policy-makers in these respective fields is limited. Online services are driving the huge growth in the digital economy. Many of those services are marketed as ‘free’ but in
effect require payment in the form of personal information from customers. An investigation into the costs and benefits of these exchanges for both consumers and businesses is now overdue. Closer dialogue
between regulators and experts across policy boundaries can not only aid enforcement of rules on competition and consumer protection, but also stimulate the market for privacy-enhancing technologies."