Your Privacy Is Important to Us! – Restoring Human Dignity in Data-Driven Marketing


Foreword by Eric K. Clemons

Preface, Acknowledgements and Abbreviations

Bibliography


PART I – INTRODUCTION


1. Why this book?
    (#methodology #delimitations #structure)


2. Data-Driven Business Models
    (#surveillancecapitalism #valueextraction #harm)

PART II – LAW


3. Regulating Markets
    (#law #markets #architecture #consumerprotection)


4. Data Protection Law
    (#gdpr #personaldata #lawfulprocessing #legitimatebasis)


5. Marketing Law
    (#ucpd #professionaldiligence #averageconsumer)

PART III – PSYCHOLOGY AND TECHNOLOGY  


6. Human Decision-Making
    (#agency #psychology #boundedrationality #willpower)


7. Persuasive Technology
    (#technology #choicearchitecture #friction #prompts)


8. Manipulation
    (#coercion #deception #subliminalmarketing #paternalism)


9. Transparency
    (#information #communication #complexity #asymmetry)

PART IV – SOCIETY


10. Human Dignity and Democracy
      (#humanwellbeing #privacy #discrimination #proportionality)


PART V – CONCLUSIONS AND BEYOND


11. Conclusions
      (#humandignity #datadrivenmarketing #beinghuman)


12. Next Steps
      (#action #conversations #future)

CHAPTER ELEVEN

Conclusions

#humandignity  #datadrivenmarketing  #beinghuman

The ‘boiling frog’ fable suggests that a frog will be cooked to death if it is placed in water that is heated sufficiently slowly: i.e. we fail to perceive dangers that approach gradually rather than suddenly. The digital transformations that gave us data-driven marketing did appear to come gradually, though not slowly: they arrived wrapped in compelling frames and narratives (storytelling). In reality, frogs change location as a ‘natural thermoregulation strategy’,1 and maybe there are good reasons to gauge the temperature of our own waters—if we can recognise and understand what those waters are.

Data-driven business models, including those utilising AI systems, hold a huge potential to make society richer in a broad sense, including in such areas as healthcare and the augmentation of human capabilities. In this book we focus on data-driven marketing, which is a predominant revenue model for such businesses.

By using or re-selling the attention of their users, traders can finance products offered free of charge or at a discount to consumers, who ‘pay’ with their attention. The value of attention rests in the potential for behaviour modification, and therefore the consumer also ‘pays’ with agency by accepting being influenced. This revenue model is very old and widely accepted, and it has served to finance the press and news media, which play an important role in democratic societies.

The user’s attention becomes more valuable with data about individuals and groups, and ‘attention merchants’ have a natural interest in understanding and increasing their attention stock.

In step with digital transformations, there has been an increase in tracking capability and observation points, which—coupled with processing power and AI systems—give the trader unprecedented insights into each individual. By analysing data about many individuals, the trader may be able to predict—in probabilistic terms—demographic, sociographic and psychographic traits. Apparently benign data points such as postal codes, first names, photographs and typing patterns may serve as relevant proxies for such traits.
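To make the proxy point concrete, the following is a minimal sketch, not taken from the book or any real system, of how a trader might combine two apparently benign signals into a probabilistic trait estimate; the signals, rates and the naive-Bayes-style combination are all invented for illustration.

```python
import math

# Hypothetical illustration only: invented conditional rates of some trait,
# as if 'learned' from observing many individuals (cf. the proxies above).
RATE_BY_POSTCODE = {"1050": 0.62, "2200": 0.31}  # invented
RATE_BY_TYPING = {"fast": 0.55, "slow": 0.40}    # invented
BASE_RATE = 0.45                                 # invented population prior

def logit(p: float) -> float:
    """Log-odds of a probability."""
    return math.log(p / (1 - p))

def estimate_trait(postcode: str, typing: str) -> float:
    """Naive-Bayes-style combination of two proxy signals into P(trait)."""
    score = (logit(RATE_BY_POSTCODE.get(postcode, BASE_RATE))
             + logit(RATE_BY_TYPING.get(typing, BASE_RATE))
             - logit(BASE_RATE))
    return 1 / (1 + math.exp(-score))

# Two benign observations yield a probabilistic prediction about a person.
print(f"P(trait) = {estimate_trait('1050', 'fast'):.2f}")  # ~0.71
```

The point is not the arithmetic but the mechanism: each signal is individually innocuous, yet combined at scale they yield actionable predictions about individuals.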

The trader is likely to understand the individual user’s goals, values and preferences, as well as when and how that user’s behaviour is most effectively influenced. Predictions improve with data revealed by users when they sign up for or use a product. Such data are usually more precise than inferred data, because we need to be honest with the products we use, including in what we search for, whom we engage with and what data we allow a health device to use.

In real reality, a good salesman may gain similar insights from meeting a consumer and taking advantage of experience, empathy and marketing theory. The main differences in virtual reality are (a) the scale, scope and precision of the data-driven predictions, (b) that tracking capabilities allow the trader to have much more information a priori and (c) that in real reality it is usually the consumer who takes the decision to engage with the salesman.

Unsolicited commercial approaches in real reality, such as doorstep-selling, are to a large extent regulated, although they are not prohibited per se. In real reality, it may be relatively easy to detect and reject such attempts at behaviour modification. This is more difficult in virtual reality when the behaviour modification is woven into the fabric of the user experience. The costs of real reality canvassing are higher than in virtual reality, resulting in a natural economic regulation. Low costs are, for instance, one of the reasons behind the prohibition of unsolicited commercial e-mails.

In sum, the question is not whether traders should be allowed to influence the behaviour of consumers by showing advertisements, but rather when and how, including by means of what data and which commercial practices.

From a market perspective, we must recognise that marketing is important for markets to work properly. In efficient markets, benefits from competition are expected to accrue to consumers, who must make rational choices, i.e. follow their individual goals, values and preferences based on available information, including information provided by marketing.

Activities necessary for data-driven marketing will often fall under both marketing law and data protection law. Both the UCPD and the GDPR rest on principles of empowerment and transparency, while striking a balance (proportionality) between the burdens on users and traders, respectively.

In contrast to the UCPD, the GDPR also requires legitimacy and accountability, which means that traders must be able to justify their processing of personal data, while complying with a security principle. The purpose for processing data, its privacy impact and possible consent play an important part in this balancing. Consent is closely related to empowerment and transparency.

To talk meaningfully about empowerment, we must assume that human beings have agency and that transparency and absence of manipulation are prerequisites for consumers’ right to self-determination.

Rational choice theory rests on a thin rationality, which assumes that consumers reveal their goals, values and preferences in the choices they make. This idea disregards the complexity of forming goals, shaping values and adapting preferences. Emotions and storytelling play an important part in human decision-making, and we often rely on biases and heuristics in making our decisions. In addition, satisficing, which includes rational apathy, is itself a maximising strategy, because people do not have the time or cognitive power to maximise their economic interests in every decision.

Traders are well aware of consumers’ bounded rationality and bounded willpower, and this knowledge is utilised in the design of marketing, including choice architecture. Human beings are programmable, i.e. we react in somewhat predictable ways to certain stimuli, and we can be influenced to adapt our preferences.

With access to personal data, traders can adapt marketing to individuals, including their specific vulnerabilities. In personalised environments, the consumer is subjected to a virtual reality that is designed by the trader in ways that are not necessarily in the consumer’s best interests.

Understanding bounded rationality and the sophistication behind data-driven marketing reveals a significant asymmetry in power between consumers and traders. To ensure real empowerment of consumers, law must be informed by insights into human decision-making and persuasive technology, as marketing already is. This is relevant for detecting manipulation and absence of transparency.

In marketing law, the UCPD interferes with traders’ freedom of expression, whereas the GDPR allows traders to interfere with data subjects’ right to protection of personal data. Marketing is the rule, whereas the processing of personal data is an exception—the GDPR requires fairness and the UCPD absence of unfairness. This significant difference is reflected in the GDPR’s requirement for legitimacy.

Marketing is undoubtedly a legitimate purpose for the processing of personal data under the GDPR. It is, however, not clear to what extent the general principles of Article 5(1) GDPR limit such processing, with respect to whether (a) the processing is fair, lawful and transparent, (b) all types of marketing constitute a legitimate purpose (‘purpose limitation’), (c) the data processed are adequate, relevant and necessary (‘data minimisation’), (d) probabilistic data can be considered accurate (‘accuracy’), (e) the data are not kept longer than necessary (‘storage limitation’) and (f) the processing ensures appropriate security (‘integrity and confidentiality’). In particular, the term ‘necessary’ must require more than the processing’s being ‘useful’ for the trader.

These principles also apply when the data subject has consented to the processing, thus they restrict both the trader’s processing and the data subject’s right to self-determination. Within these limitations, the user may consent to the processing of personal data, which requires a genuine and informed choice. Compared to the UCPD, legitimacy and accountability set higher standards for empowerment and transparency under the GDPR, especially in the context of consent.

In the context of profiling, transparency entails providing ‘meaningful information about the logic involved’, as well as ‘the significance and the envisaged consequences of such processing for the data subject’. According to case law relating to consent, ‘clear and comprehensive information’ implies that a user is ‘in a position to be able to determine easily the consequences of any consent’ and that the information must be ‘sufficiently detailed so as to enable the user to comprehend the functioning of the [technology] employed’.

The framework of the UCPD may be helpful in understanding and supporting the GDPR’s requirements concerning information, including in the context of consent. Its application of prohibitions against misleading actions and misleading omissions emphasises that information must be understood as a function of what is said (information) and, in particular, how it is said (communication). This distinction is also recognised in data protection law and is often showcased in the choice architecture pertaining to cookie-consent pop-ups, including by the design and colouring of checkboxes, etc.
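As a crude, hypothetical illustration of how such choice architecture might be assessed, one could count the discrete actions each consent path requires; the flows and step counts below are invented, not drawn from any actual pop-up.

```python
# Hypothetical sketch: a simple 'friction' measure for a consent pop-up,
# counting the discrete user actions each path requires. Invented flows.
ACCEPT_FLOW = ["click 'Accept all'"]
REFUSE_FLOW = [
    "click 'Manage options'",
    "scroll through the vendor list",
    "toggle off each purpose",
    "click 'Confirm choices'",
]

def friction(flow: list[str]) -> int:
    """Proxy for friction: the number of discrete user actions required."""
    return len(flow)

print(f"accepting tracking: {friction(ACCEPT_FLOW)} action(s)")
print(f"refusing tracking:  {friction(REFUSE_FLOW)} action(s)")
```

A large asymmetry between the two counts is one observable indicator that the architecture steers, rather than merely informs, the consumer’s choice.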

In a market economy, consumers are assumed to exercise due care and read information. However, information does not equal transparency: to assess transparency, both the trader’s encoding and the user’s reasonable decoding of the information must be considered. Users must be able to understand how personalisation of the choice architecture can impact their agency and undermine empowerment. It is not sufficient that the consumer understands which data are being used for personalised marketing; he must also understand when, how and why, so that he may better grasp the logic involved and the possible consequences.

Information creates transparency only when it is properly comprehended by the recipient, and to be practical, we must also assume transparency when the consumer has received honest information in a way that engages or appeals to his capacity for reflection and deliberation. We have used the term ‘translucency’ to cover situations where the consumer is aware of ignoring information, the meaning of which can reasonably be foreseen.

The user’s ability to understand the deal is a function of inter alia (1) the complexity, (2) the envisaged consequences (impact), (3) the user’s reasonable expectations, and (4) the trader’s effort to establish genuine transparency, including (5) by sufficiently engaging or appealing to the user’s capacity for reflection and deliberation.

The threshold for transparency is likely to be higher under the GDPR than under the UCPD, where the information paradigm still seems dominant, with some hints of a communication paradigm. Introducing accountability, and possibly legitimacy, into the UCPD would bring it closer to a real transparency paradigm.

The genuine choice can, in particular, be questioned when the consumer is unable to refuse or withdraw consent without detriment. This could include situations where the service constitutes important—or even critical—infrastructure in democratic, educational, social or economic terms. In essence, it must be possible to objectively ascertain whether consent is given, which not only relates to information and frames, but also to the use of prompts and friction.

We use ‘manipulation’ to mean practices, including requests for consent, that do not sufficiently engage or appeal to the user’s capacity for reflection and deliberation. We argue that knowledge about human decision-making, including insights into bounded rationality and bounded willpower, together with insights from persuasive technology, including the role of friction, prompts and the design of choice architecture, must be understood in order to determine the meaning of ‘sufficiently’.

The UCPD’s prohibition of aggressive commercial practices also allows consideration of the design of the choice architecture, as well as the average user’s condition. This could entail a prohibition of cookie consent pop-ups that unduly interrupt a service and may prompt and nudge—including by their design—an ill-informed privacy decision that may also affect the economic interests of the consumer.

Choice architecture is not neutral, and it could be argued—with inspiration from ‘nudging’—that such architecture should be designed to preserve consumers’ goals, values and preferences, while ensuring that the default is likely to be a good fit. Such a requirement could be deduced from the GDPR’s principle of data protection by design and by default. For the UCPD, such a requirement could be interpreted as being provided under the concept of professional diligence or made clear by inserting a requirement for legitimacy and accountability.

Empowerment and transparency may be boiled down to a matter of distribution of obligations between user and trader. The question is what level of due care either party must exercise, and ultimately who is to bear the risk of users’ not acting in accordance with their goals, values and preferences.

It may be helpful to consider the asymmetry in the parties’ relative power, an asymmetry that constitutes the foundation for consumer protection. To understand this (im)balance, we have focused on three tiers of information asymmetry, relating to (1) the trader and the offer, (2) human behaviour in general and (3) the individual user.

In step with developments in digital technologies, Tier 3 has become automated and scalable, and behavioural tracking capabilities have made Tier 2 more sophisticated. This allows for individual treatment of users, who may consequently lose their sense of direction in the market. It also means that the trader is likely to know, in real time, more about how the individual consumer is likely to behave and can be persuaded than the consumer himself does.

In contrast to Tier 1, real transparency in terms of Tier 2 and Tier 3 is more difficult to establish with mere information. If some sense of transparency cannot be established, the processing of personal data is unlawful and the commercial practice is likely to be unfair. As with most issues pertaining to the processing of personal data and infringement of privacy, there must be a careful consideration of the consequences of personalisation and the possibility of explaining this in a clear and comprehensible manner.

From a societal perspective, we must recognise that markets are important for democracies and that the European single market constitutes an important pillar of the EU. Markets are subject to democratic oversight and are not above other values and aims of the EU Treaties, including the Charter. Harm from data-driven business models may be felt on personal (well-being, including loneliness, health and shallowness), social (empathy and well-being) and societal (outrage, polarisation and conflict) levels.

EU law must be interpreted in light of ‘EU law as a whole’ and with regard to the objectives thereof. When market activities interfere with the individual rights and freedoms enshrined in the Charter, these rights and freedoms must be added to the balancing of interests. Ignoring the Charter in market law would render the enshrined protection of citizens illusory and undermine the democracy on which markets rely.

The protection of personal data is in itself a fundamental right, and in addition, we have discussed its possible relevance to human dignity, privacy and non-discrimination. We argue that a constitutional perspective must be applied to secondary law when effects from data-driven business models interfere with the rights and freedoms envisaged in the EU Treaties, including the Charter. This is obvious for data protection, which is rooted in the Charter; and the GDPR respects fundamental rights and freedoms beyond the protection of personal data, within its scope of application. This means that human dignity, privacy and non-discrimination cannot be ignored when determining the lawful processing of personal data.

Even though the UCPD regulates only to protect the economic interests of consumers, we argue here that fundamental rights and freedoms must also be considered in determining professional diligence when a commercial practice falls within the scope of the directive.

Human dignity is the least concrete of the four fundamental rights. It is closely related to agency (free will) and entails the principle that human beings must not be downgraded to mere objects, and it can be used to corroborate a restrictive interpretation of the possibility of ‘paying’ with data, agency and privacy. When influence is automated at industrial scale and exercised with surgical precision, it is not only a matter of human dignity but also a matter of human well-being, which the TEU also aims to improve.

The ePrivacy Directive also protects privacy beyond the mere processing of personal data. In a string of cases concerning data retention, the CJEU has found that the general and indiscriminate transmission of traffic data and location data is a particularly serious privacy interference that cannot be justified even for the purpose of fighting serious crime. The privacy interference from data-driven business models may be at least as serious, and marketing appears to be a poorer justification for it than fighting serious crime.

In our context, non-discrimination is closely linked to human dignity (agency) and privacy (surveillance), and personalised marketing, including by means of AI systems, poses a significant risk of unjustified discrimination.

In essence, the harms from data-driven business models may come from surveillance, including by the processing of personal data, and behaviour modification, including by opaqueness and absence of a free choice. Behaviour modification may consist of manipulating the user to buy or use products or spend more time on the trader’s service by utilising addictive design.

The right to conduct a business (in accordance with the law) does not appear to be a strong argument when (a) restrictions are invoked to protect the privacy and economic interests of consumers and (b) alternative means of marketing and revenue are available. Although marketing is important for the regulation of supply and demand, limiting the use of personalised marketing does not hinder markets’ ability to function, as traders can still offer and advertise their products.

Alternatives include subscription fees and contextual advertising. ‘Free’ is a great egalitarian price point only if the consumer does not have to waive his fundamental rights and freedoms. Paying for services has the benefit of adding friction that engages the consumer’s capacity for reflection and deliberation. Surveillance, including by means of tracking users and observing behaviour, is significantly more intrusive than showing (contextual) advertising.

A first step must be to align the goals of traders and users by asking whether technology and markets serve the user, or vice versa. In understanding this question, we must consider (a) the extent to which commercialisation of privacy should be allowed, (b) possible harms to democracies, including citizens, institutions and social cohesion, (c) the importance of the communication infrastructure, including search and social media, and (d) whether we can reasonably rely on users to read, understand and fend for themselves in commercial markets with significant information asymmetries that are not easily levelled by means of information.

From a legal perspective, we have presented a line of arguments for restoring human dignity in data-driven marketing. It is obvious that data protection law has a significant role to play in the regulation of data-driven marketing, corroborated by other fundamental rights and by marketing law. The argument is that we do not need new regulation, but probably better enforcement, as we touch upon in the following chapter.

In 1986 Lawrence M. Friedman noted that ‘Law […] is too important to be left to the lawyers’.2 While there may be some truth in this statement, lawyers have the capacity to draw inspiration from other sciences in their application of the rule of law, including for the necessary weighing of interests and the consideration of whether a particular result is reasonable, acceptable and sustainable. The CJEU has demonstrated these capabilities in pursuing the objectives of EU law, including in cases concerning data retention and the transfer of personal data.

Even if we ignore that humans are becoming more distracted, overloaded, dependent and addicted, the rule of law must require that we look at the growing complexity and the personal, social and societal interference posed by technology. We must recognise that even behind an average consumer, there is a natural person who plays a role in society.

As observed by Neil Postman, ‘to be “against technology” makes no more sense than to be “against food”.’3 Just as we expect food to be safe and nutritious, there may be a need to ensure that technology, including the use of AI systems, is designed to serve mankind, as expressed in recital 4 GDPR. In discussions of future legislation, one should consider whether new instruments in fact detract from established rights and/or create inconsistencies and unnecessary fragmentation in law and enforcement. Also to be considered is how access to more information and options draws on consumers’ cognitive resources and may diminish agency and negatively affect the consumer’s right to self-determination.

Given the wide range of interconnected potential harms from data-driven business models, there may be a need to rethink consumer protection law in order to ensure both horizontal coherence (between legal disciplines and different harms) and vertical coherence (with fundamental rights). In the context of data-driven business models, it is important to understand the market as a formal system within a much broader societal framework.

Because technology is moving so fast, there is a risk that democratic institutions will suffer from the equivalent of cognitive overload and decision fatigue, with the risk of losing the bigger picture and grand visions.4 From a human dignity perspective, citizens must be in charge, give permission and be reasonably able to comprehend what is going on. It is not sufficient that we be informed that ‘something’ is going on. Marketing does not need to be as complex, intrusive and manipulative as the current norm suggests.

One of the most challenging tasks may be to convince the broader public of negative effects from beloved technology, including apps and devices. Data-driven business models rely on compelling frames such as ‘free’, ‘smart’ and ‘freedom’. It may be impossible to reframe ‘free’ to mean expensive, ‘smart’ to mean surveillance and behaviour modification, and ‘freedom’ to mean cognitive overload. There is a role for law to play in ensuring that these frames are used only when (a) products are really free, (b) products are aligned with the goals, values and preferences of the user and (c) the user has a real choice, respectively.

If law can contribute to lessening cognitive overload and polarisation, it could improve broader discussions that serve the interests of future generations, including the mitigation of harms from the current climate crisis. Maybe limitations on data-driven marketing would free AI experts to focus on more important issues.

Virtual realities are important, and the hope must be that they constitute augmented realities rather than abated realities, so that we can experience more agency, well-being and social cohesion.


. . . let’s talk!


1. <https://en.wikipedia.org/wiki/Boiling_frog>.

2. Lawrence M. Friedman, ‘The Law and Society Movement’, Stanford Law Review, 1986, pp. 763–780, p. 780.

3. Neil Postman, The End of Education (Vintage 1995), p. 192.

4. Yuval Noah Harari, Homo Deus (Harper 2017), p. 381: ‘Government has become mere administration—It manages the country, but it no longer leads it.’