Beneficial and Exploitative Nudges.
The effectiveness of nudges in raising the welfare of the population hinges on the policymakers employing them. A frequent criticism points to a logical inconsistency: why should policymakers be immune to the very psychological biases of individuals on which nudging interventions are founded? We argue that, rather than being concerned about policymakers’ incapacity to raise the population’s welfare, we should be concerned about their unwillingness to do so. We offer a solution to this problem: we turn to the constitutional level of decision-making, at which voters can determine the procedures and processes by which governments may resort to nudging. Nudging should not be considered an innocuous exception to constitutionally based decision-making, even though most nudges do, at first sight, seem beneficial to people. In a democracy, even “Liberal Paternalism” may not be imposed on the population without its consent in principle.
Profiled: From Radio to Porn, British Spies Track Web Users’ Online Identities.
There was a simple aim at the heart of the top-secret program: record the website browsing habits of “every visible user on the internet.”
Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media, and news websites, search engines, chat forums, and blogs.
The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ.
The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
Amid a renewed push from the U.K. government for more surveillance powers, more than two dozen documents disclosed today by The Intercept reveal for the first time several major strands of GCHQ’s existing electronic eavesdropping capabilities.
One system builds profiles showing people’s web browsing histories. Another analyzes instant messenger communications, emails, Skype calls, text messages, cellphone locations, and social media interactions. Separate programs were built to keep tabs on “suspicious” Google searches and usage of Google Maps.
The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails, and internet browsing logs of Brits, Americans, and any other citizens — all without a court order or judicial warrant.
Metadata reveals information about a communication — such as the sender and recipient of an email, or the phone numbers someone called and at what time — but not the written content of the message or the audio of the call.
As of 2012, GCHQ was storing about 50 billion metadata records about online communications and web browsing activity every day, with plans in place to boost capacity to 100 billion daily by the end of that year. The agency, under cover of secrecy, was working to create what it said would soon be the biggest government surveillance system anywhere in the world.
On the Supposed Evidence for Libertarian Paternalism.
Can the general public learn to deal with risk and uncertainty, or do authorities need to steer people’s choices in the right direction? Libertarian paternalists argue that results from psychological research show that our reasoning is systematically flawed and that we are hardly educable because our cognitive biases resemble stable visual illusions. For that reason, they maintain, authorities who know what is best for us need to step in and steer our behavior with the help of “nudges.” Nudges are nothing new, but justifying them on the basis of a latent irrationality is. In this article, I analyze the scientific evidence presented for such a justification. It suffers from narrow logical norms, that is, a misunderstanding of the nature of rational thinking, and from a confirmation bias, that is, selective reporting of research. These two flaws focus the blame on individuals’ minds rather than on external causes, such as industries that spend billions to nudge people into unhealthy behavior. I conclude that the claim that we are hardly educable lacks evidence and forecloses the true alternative to nudging: teaching people to become risk savvy.
Liberty cannot be preserved without a general knowledge among the people …
John Adams, 1765
Bounded rationality is not irrationality. … On the contrary, I think there is plenty of evidence that people are generally quite rational; that is, they usually have reasons for what they do.
Herbert Simon, 1985
Libertarian paternalism, as it is called, is a variant of soft paternalism that uses a bag of tricks called “nudges” to influence people’s decisions. A nudge is a tool for influencing people without using incentives, which are the lifeblood of economic theory, and without enforcing behavior, the essence of hard paternalism. The program is called “paternalistic” because it tries to guide people and “libertarian” because no choices are taken away. In the best paternalistic spirit, its goal is to protect others from harm. Yet its rationale is not to defend us from enemies outside. Instead, the program wants to protect us from ourselves, from our systematic reasoning errors, inertia, and intuition.
Nudging is nothing new. Governments and marketing agencies have relied on it for a long time. Consider the appointment letters sent in many countries to women above age 50 for mammography screening. These letters contain a preset time and location. This default booking is a nudge that exploits inertia: women who might not make the effort to actively sign up are equally unlikely to make the effort to decline the appointment. Furthermore, the letters and pamphlets encouraging screening often state that early detection reduces breast cancer mortality by 20%. That figure is a second nudge, one that exploits people’s statistical illiteracy. Screening reduces breast cancer mortality from about 5 to 4 in 1,000 women (after 10 years), which amounts to an absolute risk reduction of 1 in every 1,000. But this figure is typically presented as a relative risk reduction of 20%, often rounded up to 30%, to look more impressive (Gigerenzer 2014a, b).
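The arithmetic behind the two framings can be made explicit. A minimal sketch, using only the mortality figures cited above (5 vs. 4 deaths per 1,000 women over 10 years):

```python
# Mammography screening outcomes per 1,000 women over 10 years,
# using the figures cited in the text above.
deaths_without_screening = 5 / 1000   # baseline breast cancer mortality
deaths_with_screening = 4 / 1000      # mortality among screened women

# Absolute risk reduction: how many fewer deaths per woman screened.
arr = deaths_without_screening - deaths_with_screening  # 1 in 1,000

# Relative risk reduction: the same difference expressed as a fraction
# of the baseline risk -- the figure quoted in screening pamphlets.
rrr = arr / deaths_without_screening  # 20%

print(f"Absolute risk reduction: {arr * 1000:.0f} in 1,000")
print(f"Relative risk reduction: {rrr:.0%}")
```

The same 1-in-1,000 difference thus sounds twenty times larger when divided by the small baseline risk and reported as a percentage, which is why the relative framing is the one chosen to maximize participation.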
This example illustrates the difference between nudging and educating. The aim of the appointment letters is to increase participation rates, not understanding. As a result, women in the European Union are less knowledgeable about the benefit of screening than Russian women, who are not nudged by relative risks and similar persuasive techniques (Gigerenzer et al. 2009). Education, by contrast, aims at “a general knowledge among the people” (see introductory epigraph) and would require measures to make the public statistically literate and to enforce transparent information policies so that citizens can make informed decisions. But there are often conflicts of interest: in the case of mammography screening, informed citizens might understand that only a few women benefit while many are harmed, which would likely decrease participation rates.
The example also serves to illustrate the difference between nudging and hard paternalism. Whereas women in Europe and the US can opt out, the president of Uruguay, an oncologist, made biennial screening mandatory for all working women aged 40 to 59 (Arie 2013).
The interest in nudging as opposed to education should be understood against the specific political background in which it emerged. In the US, the public education system is largely considered a failure, and the government tries hard to find ways to steer large sections of the public who can barely read and write. Yet this situation does not apply everywhere.
Risiko: Wie man die richtigen Entscheidungen trifft [Risk: How to Make the Right Decisions]. Bertelsmann.
Befreiung aus der digitalen Leibeigenschaft [Liberation from Digital Serfdom].
Personal data embody great scientific and economic value. People should gain control over their personal data. By Ernst Hafen and Mathis Brauchbar
Why are we willing to pay four to five francs for a coffee, yet expect email accounts, smartphone apps, Wikipedia lookups, or storage space for pictures to be free? In reality, these services are not free: instead of money we pay, mostly without realizing it, with the personal data we hand over to the providers. Thanks to cookies, our movements on websites are recorded in detail. Every internet search is registered with location, time, and computer address (IP number). In the short time in which the internet has established itself as a platform for information, communication, services, and sales, we have accepted this barter because it is convenient and because we have almost no opportunity to choose alternatives. The price we pay for it is dependence on the companies that collect our personal data.
Health data cooperatives – citizen empowerment.
Introduction: This article is part of a Focus Theme of Methods of Information in Medicine on Health Record Banking.
Background: Healthcare is often ineffective and costs are steadily rising. This is in large part due to the inaccessibility of medical and health data stored in multiple silos. Furthermore, in most cases the molecular differences between individuals that result in different susceptibilities to drugs, diseases, and targeted interventions cannot be taken into account. Technological advances in genome sequencing and the interaction of 'omics' data with environmental data on the one hand and mobile health on the other promise to generate the longitudinal health data that will form the basis for a more personalized, precision medicine.
Objectives: For this new medicine to become a reality, however, millions of personal health data sets have to be aggregated. The value of such aggregated personal data has been recognized as a new asset class and many commercial entities are competing for this new asset (e.g. Google, Facebook, 23andMe, PatientsLikeMe). The primary source and beneficiary of personal health data is the individual. As a collective, society should be the beneficiary of both the economic and health value of these aggregated data and (health) information.
Methods: We posit that empowering citizens by providing them with a platform to safely store, manage, and share their health-related data will be a necessary element in the transformation towards a more effective and efficient precision medicine. Such health data platforms should be organized as cooperatives that are solely owned and controlled by their members and not by shareholders. Members determine which data they want to share, for example with doctors, or to contribute to research for the benefit of their health and that of society. Members will also decide how the revenues generated by granting third parties access to the anonymized data they have agreed to share should be invested in research, information, or education.
Results: No functional Health Data Cooperatives currently exist. The relative success of health data repositories such as 23andMe and PatientsLikeMe indicates that citizens are willing to participate in research even if – in contrast to the cooperative model – the commercial value of their data does not flow back to the collective of users.
Conclusions: In the Health Data Cooperative model, citizens with their data would take center stage in the healthcare system, and society would reap both the health-related and the financial benefits that aggregation of these data brings.
Psychopolitik: Neoliberalismus und die neuen Machttechniken [Psychopolitics: Neoliberalism and the New Technologies of Power].
The new book by the author of ›Müdigkeitsgesellschaft‹
Following his bestseller ›Müdigkeitsgesellschaft‹ (The Burnout Society), Byung-Chul Han, »the new star of German philosophy« (El País), passionately continues his critique of neoliberalism. He incisively lays out the techniques of domination and power of the neoliberal regime, which, in contrast to Foucault’s biopolitics, discovers the psyche as a productive force. Han describes neoliberal psychopolitics in all its facets, a psychopolitics that leads into a crisis of freedom.
Within the framework of this analysis of neoliberal techniques of power, he also presents a first theory of Big Data and a lucid phenomenology of emotion. Yet his brilliant new essay also develops counter-models to neoliberal psychopolitics: rich in ideas and full of surprises.
The Social Laboratory.
In October 2002, Peter Ho, the permanent secretary of defense for the tiny island city-state of Singapore, paid a visit to the offices of the Defense Advanced Research Projects Agency (DARPA), the U.S. Defense Department’s R&D outfit best known for developing the M16 rifle, stealth aircraft technology, and the Internet. Ho didn’t want to talk about military hardware. Rather, he had made the daylong plane trip to meet with retired Navy Rear Adm. John Poindexter, one of DARPA’s then-senior program directors and a former national security advisor to President Ronald Reagan. Ho had heard that Poindexter was running a novel experiment to harness enormous amounts of electronic information and analyze it for patterns of suspicious activity — mainly potential terrorist attacks.
The two men met in Poindexter’s small office in Virginia, and on a whiteboard, Poindexter sketched out for Ho the core concepts of his imagined system, which Poindexter called Total Information Awareness (TIA). It would gather up all manner of electronic records — emails, phone logs, Internet searches, airline reservations, hotel bookings, credit card transactions, medical reports — and then, based on predetermined scenarios of possible terrorist plots, look for the digital “signatures” or footprints that would-be attackers might have left in the data space. The idea was to spot the bad guys in the planning stages and to alert law enforcement and intelligence officials to intervene.
“I was impressed with the sheer audacity of the concept: that by connecting a vast number of databases, that we could find the proverbial needle in the haystack,” Ho later recalled. He wanted to know whether the system, which was not yet deployed in the United States, could be used in Singapore to detect the warning signs of terrorism. It was a matter of some urgency. Just 10 days earlier, terrorists had bombed a nightclub, a bar, and the U.S. consular office on the Indonesian island of Bali, killing 202 people and raising the specter of Islamist terrorism in Southeast Asia.
Ho returned home inspired that Singapore could put a TIA-like system to good use. Four months later he got his chance, when an outbreak of severe acute respiratory syndrome (SARS) swept through the country, killing 33, dramatically slowing the economy, and shaking the tiny island nation to its core. Using Poindexter’s design, the government soon established the Risk Assessment and Horizon Scanning program (RAHS, pronounced “roz”) inside a Defense Ministry agency responsible for preventing terrorist attacks and “nonconventional” strikes, such as those using chemical or biological weapons — an effort to see how Singapore could avoid or better manage “future shocks.” Singaporean officials gave speeches and interviews about how they were deploying big data in the service of national defense — a pitch that jibed perfectly with the country’s technophilic culture.
The Automation of Society Is Next: How to Survive the Digital Revolution.
The explosion in data volumes, processing power, and Artificial Intelligence, known as the “digital revolution”, has driven our world to a dangerous point. One thing is increasingly clear: We are at a crossroads. We need to make decisions. We must re-invent our future.
[This full-colour book includes 32 figures and visualisations.]
After the automation of factories and the creation of self-driving cars, the automation of society is next. But there are two kinds of automation: centralized top-down control of the world, and a distributed control approach supporting local self-organization. Using the power of today’s information systems, governments and companies like Google seem to be engaged in the first approach. Will they even try to build a “digital God” who knows everything and controls what we do? Indeed, governments are spending billions to predict the future of our world and control its path.
Given that, every year, we produce as much data as in the entire history of humankind, can we now create a better world? The abundance of data certainly makes it possible to establish an entirely new paradigm for running our societies. Could we even build a data-driven “crystal ball” to predict the future and, given that knowledge implies power, also something like a “magic wand” to optimally rule the world? Will the digital revolution empower a “wise king” or “benevolent dictator”, maybe by means of Artificial Intelligence? In fact, we are much closer to this than you might think. But do we really need large-scale surveillance to understand and manage the increasingly complex systems we have created? Or are we running into a totalitarian nightmare?
What alternatives do we have for mastering our complex world? What about the principles of the “invisible hand” and the “wisdom of the crowd”, which posit that independent decisions made by many people will produce optimal societal outcomes? In the past, these principles have often failed. So, can bottom-up self-organization really work, and if so, what does it take? Could technology make it work? Relying on the “Internet of Things” and complexity science, can self-organization now enable a more efficient, more innovative, more successful, more resilient, smarter, and happier society?
Let us explore this now, because it would open the door to a brighter version of the digital society, based on informational self-determination, human dignity, freedom of decision-making, democratic principles, participation, and collective intelligence. It’s time to take the future into our own hands!