CHAPTER SEVEN

Persuasive Technology

#technology  #choicearchitecture  #friction  #prompts

As discussed in the previous chapter, insights into human decision-making can be used by traders to influence behaviour. As much of society’s communication now takes place online, this chapter looks at how such influence is exercised there. Three important characteristics of digital technology are that activities can easily be automated, scaled and personalised. In addition, technology allows for real-time feedback.

We don’t make choices in a vacuum . . . our physical environment is shaped by nature and culture, as discussed in Chapter 3 (regulating markets). Our behaviour and movement are constrained by laws of physics and public infrastructure1 (physical architecture). In commercial contexts much of our behaviour and many of our experiences are carefully designed and curated by businesses, as discussed in the previous chapter.2 For instance, milk is usually shelved in the very back of a grocery store, so as to guide people through the entire store.

In virtual realities, such as ‘cyberspace’, we are also subject to constraints stemming from infrastructure (digital architecture) that define our abilities and shape our experiences. Digital technology can work in tandem with our imaginative capabilities to create experiences that defy real-world physics—just think about social media and other computer games with or without real co-players (teammates as well as opponents).

It is more cumbersome to redesign the physical architecture than the architecture of virtual realities, where a few clicks are sufficient to establish connections, create fora and delete persons. As an example, Mark Zuckerberg announced on 11 January 2018 a possibly well-intentioned change in Facebook’s algorithms that turned out to reward outrage and polarisation:3

‘One of our big focus areas for 2018 is making sure the time we all spend on Facebook is time well spent.

We built Facebook to help people stay connected and bring us closer together with the people that matter to us. That’s why we’ve always put friends and family at the core of the experience. Research shows that strengthening our relationships improves our well-being and happiness.

[…] We feel a responsibility to make sure our services aren’t just fun to use, but also good for people’s well-being. So we’ve studied this trend carefully by looking at the academic research and doing our own research with leading experts at universities.

[…] Based on this, we’re making a major change to how we build Facebook. I’m changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions.’

‘Virtual reality’ is real, but it is not the real reality.4 Activities and experiences in virtual realities impact real reality—with which virtual reality can easily be confused because of its pervasiveness and human intersubjectivity. As Eric K. Clemons explains, decisions to advertise on Google may be driven more by fear of disappearing than by the desire for paid exposure.5 In our context, social media have similar power to make natural (as well as legal) persons disappear if they do not deliver the engagement (or payment) required by the algorithms for exposure. Of course, the legal or natural person will only disappear from the virtual reality, but if important connections and activities of that person are not sufficiently rooted in the physical world, existence there is also threatened, as we discuss in Chapter 10 (human dignity and democracy).

‘Augmented reality’ is a hybrid in which elements of virtual reality are used to add to or mask parts of real reality. An illustrative example is the mobile game Pokémon Go, which allows the user to interact with virtual creatures, Pokémon (pocket monsters), that appear in the user’s real reality environment as identified by GPS signals. As catching Pokémon requires you to be at particular geographic locations, the game has been used to drive real visitors to actual McDonald’s restaurants and other sponsors.6

Given our widespread usage of and trust in computers (including smartphones), virtual realities and augmented realities can be created that may be difficult to distinguish from real reality. Given the possibilities of influencing (persuading as well as manipulating), it is not difficult to imagine ‘abated reality’ in the guise of ‘real virtuality’ where the real reality decreases in force or intensity.

1. Distinct advantages

The design of human–computer interaction plays a significant role in how consumers are influenced in the context of data-driven business models.7 As Melvin Kranzberg pronounced in his first law of technology: ‘Technology is neither good nor bad; nor is it neutral.’8 Or as Neil Postman puts it: What we need to know about computers, television and other important technologies ‘is not how to use them, but how they use us.’9

Computers have six distinct advantages over traditional media and human persuaders, as articulated by BJ Fogg in the early 2000s. In particular, computers can:10

  • Be more persistent than human beings.

  • Offer greater [often perceived (author’s addition)] anonymity.

  • Manage huge volumes of data.

  • Use many modalities to influence.

  • Scale easily.

  • Go where humans cannot go or may not be welcome.

Additionally, computers have good memory, as well as the ability to evoke feelings through social cues without getting tired or requiring reciprocity.11 With digital technology, Cialdini’s principles of persuasion introduced in the previous chapter can be automated, scaled and personalised, and applied in real time.

1.1. Friction

Persuasive technology can be distilled into a matter of dispensing ‘friction’,12 which relies on (what Daniel Kahneman has identified as) the preference for ‘cognitive ease’.13 By increasing or reducing friction, the user can be nudged in a desired direction by designing a ‘path of least resistance’. The importance of friction in the design of online experiences is hard to overestimate—as, for instance, in the context of cookie consent pop-ups, as we discuss in Chapter 8 (manipulation). Reading, thinking, clicking, scrolling, writing, paying, all constitute friction.

To appreciate the power of friction, it may be helpful to perceive it as obstacles to instant gratification. Consider, for instance, liking and sharing content on Facebook, searching the world wide web through Google, establishing connections on LinkedIn, swiping for dates on Tinder and shopping at Amazon.com. It is very easy and convenient. The same mechanisms apply to free cake at an open house and to the use of credit cards, where you don’t have to suffer the experience of parting with money.14 Further friction was removed from payment with the introduction of one-click buying and contactless payment.

Most consumers remain unaware of how the user experience is designed for the individual. Both Amazon’s recommendations and Google’s sponsored links are designed to guide consumers to specific purchases by reducing the effort or ‘friction’ needed to select these over all others.
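To make the notion of friction slightly more tangible, the following minimal sketch treats a user journey as a sequence of actions, each with an effort weight, and identifies the ‘path of least resistance’ as the journey with the lowest total. The action types, weights and flows are invented for illustration; real designers measure friction empirically rather than assigning weights by hand.

```python
# A minimal sketch of "friction" as the cumulative effort of a user journey.
# The action names, weights and example flows are hypothetical illustrations.

FRICTION_WEIGHTS = {          # rough relative effort per action type
    "read": 2, "think": 3, "type": 4, "click": 1, "scroll": 1,
}

def total_friction(flow):
    """Sum the effort of every action in a user journey."""
    return sum(FRICTION_WEIGHTS[action] for action in flow)

classic_checkout = ["click", "type", "type", "read", "think", "click", "click"]
one_click_buy    = ["click"]

flows = {"classic checkout": classic_checkout, "one-click buy": one_click_buy}
path_of_least_resistance = min(flows, key=lambda name: total_friction(flows[name]))

for name, flow in flows.items():
    print(f"{name}: friction = {total_friction(flow)}")
print("Path of least resistance:", path_of_least_resistance)
```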

1.2. Motivation, ability and prompts

Generally speaking, behaviour is affected by motivation, ability and prompts.15 A nostalgic view of marketing may assume that creating motivation is key, whereas today the main focus—especially in digital marketing—is to increase ability (by removing friction: ‘it’s free!’, ‘click here!’) and to use prompts (‘act now!’, ‘click here!’). As expressed by BJ Fogg:

‘prompts are the invisible drivers of our lives’, ‘no behavior happens without a prompt’, and ‘the prompts coming from digital technology are harder to manage than those from junk mail […] Other than getting off the grid, we may never find a perfect way to stop unwanted prompts from companies with business models that depend on us to click, read, watch, rate, share, or react. This is a difficult problem that pits our human frailties against brilliant designers and powerful computer algorithms.’16
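Fogg’s point can be rendered as a toy model: behaviour occurs only when a prompt arrives while motivation and ability are jointly above an ‘action line’. The 0–1 scales, the multiplication and the threshold in the sketch below are illustrative simplifications, not values taken from Fogg’s work.

```python
# A simplified numeric rendering of the idea that behaviour requires motivation,
# ability and a prompt to coincide (after BJ Fogg). The scales and the threshold
# are illustrative assumptions.

def behaviour_occurs(motivation: float, ability: float, prompt: bool,
                     action_line: float = 0.5) -> bool:
    """Return True when a prompted user is above the 'action line'."""
    if not prompt:                      # "no behavior happens without a prompt"
        return False
    return motivation * ability >= action_line

# Modest motivation is not enough when friction is high ...
print(behaviour_occurs(motivation=0.3, ability=0.9, prompt=True))   # False (0.27)
# ... but higher motivation combined with low friction crosses the line ...
print(behaviour_occurs(motivation=0.6, ability=0.9, prompt=True))   # True  (0.54)
# ... and without a prompt nothing happens, however motivated the user is.
print(behaviour_occurs(motivation=1.0, ability=1.0, prompt=False))  # False
```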

2. Personalisation

As discussed in Chapter 2 (data-driven business models), an intrinsic part of data-driven marketing is the ability to personalise marketing activities, often by means of targeted advertising. As expressed by BJ Fogg:

‘Information provided by computing technology will be more persuasive if it is tailored to the individual’s needs, interests, personality, usage context, or other factors relevant to the individual.’17

In many situations, the data actively disclosed by the user are only the tip of the iceberg of the personal data used for mediating content. Much additional information can be inferred both from voluntarily revealed data (e.g. search queries and ‘likes’ on social media18) and from behaviour (sites visited, time spent, clicks, typing speed, mouse hovering etc.). Information may also be purchased from aggregators19 or used by third parties in connection with advertising on e.g. social media and web search engines.
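A toy sketch may illustrate how traits can be inferred, in probabilistic terms, from seemingly innocuous ‘likes’, in the spirit of the research cited in note 18. The pages, labels and counts below are entirely fictitious, and real systems use far richer models, but the principle of turning voluntary signals into inferred traits is the same.

```python
# A toy sketch of inferring a trait probabilistically from "likes".
# All pages, labels and counts are invented for illustration.

from collections import defaultdict

# Hypothetical training data: (set of liked pages, is_over_40)
training = [
    ({"classic_rock", "gardening"}, True),
    ({"classic_rock", "crosswords"}, True),
    ({"esports", "memes"}, False),
    ({"memes", "skateboarding"}, False),
    ({"gardening", "crosswords"}, True),
    ({"esports", "skateboarding"}, False),
]

# Estimate P(over_40 | liked page) by simple counting.
likes_per_page = defaultdict(lambda: [0, 0])   # page -> [over_40 count, total count]
for likes, over_40 in training:
    for page in likes:
        likes_per_page[page][1] += 1
        if over_40:
            likes_per_page[page][0] += 1

def p_over_40(likes):
    """Average the per-page estimates for the pages a user likes."""
    rates = [likes_per_page[p][0] / likes_per_page[p][1]
             for p in likes if p in likes_per_page]
    return sum(rates) / len(rates) if rates else 0.5   # 0.5 = no information

print(p_over_40({"classic_rock", "gardening"}))   # close to 1.0
print(p_over_40({"memes", "esports"}))            # close to 0.0
```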

Traders may obtain much data from surveillance of users’ behaviour—data not only on how they behave but also on how they react to particular forms of influence. These observations can reveal predictable patterns of behaviour that can be reverse-engineered to give the trader ‘tremendous and unearned leverage in eliciting our compliance’.20 Again, this can be automated, scaled and individualised in real time. Alex Pentland predicted in 2014 that:

‘In just a few short years we are likely to have incredibly rich data available about the behavior of virtually all of humanity—on a continuous basis. The data mostly already exist in cell phone networks, credit card databases, and elsewhere, but currently only technical gurus have access to it.’21

The trader does not necessarily need to know the identity of the particular user, but may e.g. use contextual information such as time, place and behaviour. For instance, age (in probabilistic terms) may be inferred from media use, including e.g. music taste, and demographic information may be utilised in connection with geographic information about the user. The trader may be able to deduce age, emotions, etc., of individual users by analysing e.g. images, search queries and social media usage.22 This can be used for profiling, which may include a ‘persuasion profile’.23 We know from behavioural sciences that mood, for instance, significantly affects purchase decisions, and that decision fatigue leaves us vulnerable to traders who know how to time their sales.24 As expressed by Douglas Rushkoff:

‘Psychographic targeting is more effective than demographic targeting because it reaches for an individual customer more directly—like a fly fisherman who sets bait and jiggles his rod in a prescribed pattern for a particular kind of fish.’25

When personalisation is used, the user will only experience the information that has been selected for him. The choice architecture, which is invisible to the user, creates something akin to a shop that is custom-built for each visitor and each visit. This makes it virtually impossible to fully comprehend the environment and identify the means of persuasion, including what information the user is being deprived of. In the above-mentioned grocery store with the dairy shelves at the very back, the consumer at least has some sense of orientation and ability to navigate. As the algorithms in play are black boxes, we rarely learn about ‘the tribes we “belong” to or why we belong there’.26

This algorithmic mediation of information is used, for example, to make suggestions that are valuable to the user, but it may also be applied in ways that are more profitable for the trader than for the user.27 Even though we choose to receive information through technology, we do not necessarily use technology with the aim of being persuaded or even manipulated. It is, for instance, interesting how we still seem to believe that we—i.e. we ourselves—are finding information on the internet rather than being guided to it. Personalisation is closely linked to manipulation and discrimination, as we discuss further in Chapter 8 (manipulation) and Chapter 10 (human dignity and democracy), respectively.
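The point that the same ranking machinery may serve the user or the trader can be illustrated with a minimal sketch in which each item is scored as a blend of predicted relevance to the user and margin for the trader. The items, scores and blending weight are hypothetical; the sketch is not a description of any actual recommendation system.

```python
# A minimal sketch of how a ranking objective can tilt towards the user or the
# trader. Item names, scores and the blending weight are hypothetical.

items = [
    # (name, predicted relevance to the user, margin for the trader)
    ("well-reviewed budget option", 0.9, 0.1),
    ("sponsored premium option",    0.5, 0.9),
    ("mid-range option",            0.7, 0.4),
]

def rank(items, trader_weight):
    """Score = (1 - w) * relevance + w * margin; higher scores are shown first."""
    return sorted(items,
                  key=lambda it: (1 - trader_weight) * it[1] + trader_weight * it[2],
                  reverse=True)

print([name for name, *_ in rank(items, trader_weight=0.0)])  # optimised for the user
print([name for name, *_ in rank(items, trader_weight=0.8)])  # optimised for the trader
```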

2.1. The accuracy principle

Predicting behaviour, including by establishing a persuasion profile, is to be understood in probabilistic terms: ‘People who are like you tend to behave in this way.’ One of the defining characteristics of big data analyses is that they rely on probabilities and as such are imprecise. The principle of ‘accuracy’ (Article 5(1)(d) GDPR) may be important, as it requires personal data to be ‘accurate and, where necessary, kept up to date’. The data controller must take ‘every reasonable step’ to erase or rectify inaccurate personal data.

In that vein, consideration must be given to the purposes for which the data are processed. The extent to which the use of probabilistic data is in conformity with the accuracy principle is not settled in case law. When the purpose is personalisation and/or marketing, it could be argued that—having ‘regard to the purpose’—some flexibility with regard to precision is legitimate. However, from a fundamental rights perspective, it could equally be argued that traders should not rely on probabilities when it comes to determining particular traits in human beings.

The latter interpretation could be corroborated by the principle of ‘data protection by design’ found in Article 25(1).28 Even if this (basic) principle cannot constitute an independent right in this context, the principle may exert an influence on the interpretation of other GDPR provisions, including those concerning accuracy and transparency. In the context of profiling, recital 71 GDPR provides that the trader should

‘implement technical and organisational measures appropriate to ensure, in particular, that factors which result in inaccuracies in personal data are corrected and the risk of errors is minimised, secure personal data in a manner that takes account of the potential risks involved for the interests and rights of the data subject and that prevents, inter alia, discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation, or that result in measures having such an effect.’
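A back-of-the-envelope calculation may illustrate why probabilistic profiling is imprecise at the level of the individual, which is precisely the level at which the accuracy principle and recital 71 operate. All numbers below are hypothetical: a trait held by 5 per cent of users, inferred by a model that is right 90 per cent of the time in both directions, still labels more users wrongly than rightly.

```python
# A back-of-the-envelope sketch of why probabilistic profiling data are imprecise
# at the level of the individual. All numbers are hypothetical.

profiles    = 1_000_000   # profiles scored by the trader
base_rate   = 0.05        # share of users who actually have the inferred trait
sensitivity = 0.90        # chance the model flags a user who has the trait
specificity = 0.90        # chance the model clears a user who does not

true_pos  = profiles * base_rate * sensitivity
false_pos = profiles * (1 - base_rate) * (1 - specificity)

print(f"profiles labelled with the trait: {true_pos + false_pos:,.0f}")
print(f"of which incorrectly labelled:    {false_pos:,.0f}")
print(f"share of labels that are wrong:   {false_pos / (true_pos + false_pos):.0%}")
```

With these assumed figures, roughly two out of three users labelled with the trait do not in fact have it, which is the kind of error rate the accuracy principle asks traders to confront.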

3. Emotions online

As identified by BJ Fogg, computers may use many modalities to influence, including the ability to evoke feelings through social cues—notably without getting tired or requiring reciprocity, which Cialdini has identified as a power tool for traders. Reciprocity may be perceived as friction in real reality relationships; computers, on the other hand, will listen, observe and ‘care’ without expecting the same from you.

In addition to evoking emotions (‘feelings’ are reactions to emotions29), computers may also detect emotions by means of proxies such as word usage in social media postings or by analysing facial expressions in pictures.30 For the purposes of digital marketing, probabilistic data will be sufficient to, for instance, identify people who are likely to be in a depressed state or mood in order to advertise a product with a particular promise of instant gratification.
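As a purely illustrative sketch of how word usage can serve as a proxy for mood, consider the following toy scoring function. The word lists, the threshold and the example post are invented, and commercial systems are vastly more sophisticated, but the logic—score the signal, act on the inference—is the same.

```python
# A toy sketch of detecting mood from word usage, standing in for the far more
# elaborate proxies traders may use. Word lists, threshold and post are invented.

NEGATIVE_WORDS = {"tired", "alone", "sad", "hopeless", "exhausted"}
POSITIVE_WORDS = {"great", "happy", "excited", "thrilled", "grateful"}

def mood_score(post: str) -> float:
    """Return a value in [-1, 1]; strongly negative suggests a likely low mood."""
    words = post.lower().split()
    neg = sum(w in NEGATIVE_WORDS for w in words)
    pos = sum(w in POSITIVE_WORDS for w in words)
    return 0.0 if neg + pos == 0 else (pos - neg) / (pos + neg)

post = "so tired of feeling alone and exhausted lately"
if mood_score(post) < -0.5:
    # this is the point at which a trader could schedule an advertisement
    # promising instant gratification -- the step the chapter warns about
    print("flagged as likely low mood:", mood_score(post))
```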

Some traders have access to real-time biometric data in the context of smart health devices, including fitness trackers. More sophisticated biometric data is likely to come with the development of brain–machine interfaces. As expressed by Neuralink:

‘Every day we’re building better tools for communicating with the brain. With the right team, the applications for this technology are limitless.’31

In a much-debated Facebook experiment on emotional contagion,32 it was found that emotions can be manipulated in social media, allowing for ‘massive-scale contagion via social networks’. The experiments took place in the period 11–18 January 2012 (the research was published in 2014). In the paper’s statement of significance, it is stated that:

‘We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.’

The study demonstrated how exposure to emotional expressions in News Feed affected posting behaviour. It is noted that the effect sizes from the manipulations were small, but that, due to the scale of networks and interactions, they may have ‘large aggregated consequences’. Part of the summary reads:

‘[…] When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.’

The study also revealed a ‘withdrawal effect’:

‘People who were exposed to fewer emotional posts (of either valence) in their News Feed were less expressive overall on the following days, addressing the question about how emotional expression affects social engagement online.’

As providers of social media services are expected—probably more by investors than by regulators and users—to maximise profits, increased engagement is a likely key performance indicator. In addition, it has been found that negative emotions create more engagement than positive emotions. It is clear that virtual and augmented realities can be designed to detect (proxies for) emotions as well as to create, amplify and muffle emotions in real realities. These emotions can be used as drivers of engagement, habits and addictive behaviour.

Habits consist of prompts, routines and rewards,33 and a four-step ‘Hooked Model’—consisting of trigger, action, variable reward and investment—can be used to build habit-forming products,34 as sketched further below. Similarly, retailers in real reality explore ways of getting shoppers to shop longer.35 Thus profit-maximising algorithms are likely to have real reality consequences. For the user, it makes little difference whether qualitative insights are programmed into their virtual realities or automated through Artificial Intelligence that optimises for, e.g., engagement as a proxy for profits. James Williams argues that

‘much, if not most, of the advertising research that occurs behind the closed doors of companies could be described as “secret mood manipulation experiments.”’36

This of course raises questions as to legitimate purpose, data minimisation, accuracy and the legitimate basis for processing personal data, as well as the privacy implications of the surveillance involved; these issues are dealt with in Chapter 8 (manipulation) and Chapter 10 (human dignity and democracy).
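Before turning to research ethics, the four-step loop referred to above (trigger, action, variable reward and investment) can be rendered as a purely schematic feedback loop: responding to prompts is occasionally rewarded, which in turn makes the user more responsive to future prompts and accumulates ‘investment’ in the product. The probabilities, the ‘responsiveness’ variable and the investment counter are illustrative assumptions and do not describe any actual system.

```python
# A minimal, schematic sketch of a habit loop: trigger -> action -> variable
# reward -> investment. All parameters are illustrative assumptions.

import random

def hook_cycle(user, product):
    """Run one pass through the loop for a single user."""
    product["prompts_sent"] += 1                          # external trigger (notification)
    if random.random() < user["responsiveness"]:          # user acts on the prompt
        reward = random.choice(["big", "small", "none"])  # variable reward
        if reward != "none":
            # a reward slightly increases the chance of acting on the next prompt
            user["responsiveness"] = min(1.0, user["responsiveness"] + 0.05)
        user["investment"] += 1                           # content, followers, history...
    return user, product

random.seed(1)
user, product = {"responsiveness": 0.3, "investment": 0}, {"prompts_sent": 0}
for _ in range(50):
    user, product = hook_cycle(user, product)
print(user, product)
```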

3.1. Research ethics

Traders have always had an interest in knowledge about their customers’ goals, values and preferences. With the scalability of technology and precision from personal data, data-driven business models may facilitate the creation and utilisation of an unprecedented quality of such insights—and, notably, also insights about how audiences and individuals are most effectively influenced to further the trader’s aims.

As demonstrated in the above-mentioned 2012 Facebook experiment, obtaining such knowledge may involve fiddling with users’ emotions. If a similar experiment were to be carried out by an academic researcher, it would have to follow ethical protocols. Traders may not feel compelled to follow similar ethical standards. Following negative press coverage, an ‘editorial expression of concern’ was issued by the publisher of the Facebook study:

‘[…]

Questions have been raised about the principles of informed consent and opportunity to opt out in connection with the research in this paper. The authors noted in their paper, “[The work] was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.” When the authors prepared their paper for publication in PNAS, they stated that: “Because this experiment was conducted by Facebook, Inc. for internal purposes, the Cornell University IRB [Institutional Review Board] determined that the project did not fall under Cornell’s Human Research Protection Program.” This statement has since been confirmed by Cornell University.

Obtaining informed consent and allowing participants to opt out are best practices in most instances under the US Department of Health and Human Services Policy for the Protection of Human Research Subjects (the “Common Rule”). Adherence to the Common Rule is PNAS policy, but as a private company Facebook was under no obligation to conform to the provisions of the Common Rule when it collected the data used by the authors, and the Common Rule does not preclude their use of the data. Based on the information provided by the authors, PNAS editors deemed it appropriate to publish the paper. It is nevertheless a matter of concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out.’37

There is no reason to believe that Facebook has ceased its internal research programmes because of negative press coverage. On the contrary, Frances Haugen’s revelations, disseminated inter alia through The Wall Street Journal’s ‘Facebook Files’, include internal research revealing that the Instagram app (owned by Facebook, now Meta) makes body image issues worse for teenage girls.38

The possibility of doing such research underscores the asymmetrical power relationship between traders and consumers, as we discuss in Chapter 9 (transparency). Similar research can be carried out by any trader with computers and large quantities of data. The scope may be narrower, but the effects on individual users may be similarly significant.


1. See, for instance, Jane Jacobs, The Death and Life of Great American Cities (Modern Library 2011, first published 1961).

2. See also Paco Underhill, Why We Buy (Simon & Schuster 2009, first published 1999), p. 195, about design of brick-and-mortar shops.

3. See also Keach Hagey & Jeff Horwitz, ‘Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead’, The Wall Street Journal, 15 September 2021.

4. You can determine whether an entity is real by asking whether it can suffer, cf. Yuval Noah Harari, Homo Deus (Harper 2017), p. 177.

5. Eric K. Clemons, New Patterns of Power and Profit (Springer 2019).

6. See, e.g., Josh Constine, ‘Pokémon GO reveals sponsors like McDonald’s pay it up to $0.50 per visitor’, TechCrunch, 31 May 2017.

7. See also Robert M. Bond et al., ‘A 61-million-person experiment in social influence and political mobilization’, Nature 489, 2012, pp. 295–298; Eliza Mik, ‘The erosion of autonomy in online consumer transactions’, Law, Innovation and Technology, 2016, pp. 1–38; and Ryan Calo, ‘Digital Market Manipulation’, George Washington Law Review, 2014, pp. 995–1051.

8. Melvin Kranzberg, ‘Technology and History: “Kranzberg’s Laws”’, Technology and Culture, 1986, pp. 544–560.

9. Neil Postman, The End of Education (Vintage 1995), p. 44.

10. BJ Fogg, Persuasive Technology (Morgan Kaufmann 2003), p. 7.

11. See also Nicholas Carr, The Shallows (W. W. Norton & Company 2010), pp. 202–205.

12. BJ Fogg, Persuasive Technology (Morgan Kaufmann 2003). See also Roger McNamee, Zucked (Penguin 2019).

13. Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus and Giroux 2011), p. 67.

14. See also George A. Akerlof & Robert J. Shiller, Phishing for Phools (Princeton University Press 2015), p. 67.

15. BJ Fogg, Tiny Habits (Houghton Mifflin 2020).

16. BJ Fogg, Tiny Habits (Houghton Mifflin 2020), pp. 97 and 105–106.

17. BJ Fogg, Persuasive Technology (Morgan Kaufmann 2003), p. 38.

18. See e.g. Michal Kosinski, David Stillwell & Thore Graepel, ‘Private traits and attributes are predictable from digital records of human behavior’, PNAS, 9 April 2013, pp. 5802–5805, <https://doi.org/10.1073/pnas.1218772110>.

19. E.g. Acxiom (<www.acxiom.com>) and Factual (<www.factual.com>).

20. Douglas Rushkoff, Coercion (Riverhead 1999), p. 56.

21. Alex Pentland, Social Physics (Penguin 2014), p. 12.

22. See, e.g. Seth Stephens-Davidowitz, Everybody Lies (Bloomsbury 2017), pp. 85–87.

23. Eli Pariser, The Filter Bubble (Penguin 2011), pp. 121–123.

24. Roy F. Baumeister & John Tierney, Willpower (Penguin Books 2011), p. 103.

25. Douglas Rushkoff, Coercion (Riverhead 1999), p. 178. Similarly, Seth Godin, This is Marketing (Portfolio 2018), pp. 168–169.

26. Cathy O’Neil, Weapons of Math Destruction (Crown 2016), p. 173. See also Frank Pasquale, The Black Box Society (Harvard University Press 2015) and Philip N. Cohen, ‘Open letter to the Pew Research Center on generation labels’, Family Inequality, 26 May 2021.

27. See e.g. Google Shopping and Amazon.

28. Measures to ‘[…] integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects’.

29. See in general Paul Ekman, Emotions Revealed (2nd ed., St. Martin’s Press 2007, first published 2003).

30. See also the proposal for an Artificial Intelligence Act, Article 52(2).

31. <https://neuralink.com/> (visited October 2021).

32. Adam D. I. Kramer, Jamie E. Guillory & Jeffrey T. Hancock, ‘Experimental evidence of massive-scale emotional contagion through social networks’, Proceedings of the National Academy of Sciences 111(24), 2014, pp. 8788–8790.

33. See Charles Duhigg, The Power of Habit (Random House 2012) and BJ Fogg, Tiny Habits (Houghton Mifflin 2020).

34. See Nir Eyal (with Ryan Hoover), Hooked (Penguin Business 2019, first published 2014), addressing the question of how ‘successful companies create products people can’t put down?’.

35. Paco Underhill, Why We Buy (Simon & Schuster 2009, first published 1999), p. 32.

36. James Williams, Stand Out of Our Light (Cambridge University Press 2018), p. 63.

37. Signed by Editor-in-Chief Inder M. Verma, see <https://www.pnas.org/content/111/29/10779.1>.

38. Georgia Wells, Jeff Horwitz & Deepa Seetharaman, ‘Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show’, The Wall Street Journal, 14 September 2021.