I read with interest your explanation of your new project on measuring emotional experiences. It is exciting to be part of the birth of a new technology, and the wonder of innovation is clear in your AURA project, which will translate sensed emotions into light. I think this will provide new opportunities to investigate processes of human emotion, especially for the ‘quantified self’ community already engaged in measuring and tracking their own experience of the world.
I question, however, whether tracking one’s changing emotional state as one experiences media, or indeed anything else, is part of a ‘customer journey’. This is not just about sensing, but about investigating the border between software and wetware – technology that aims to connect to and enhance the human brain.
It is interesting to its corporate sponsors because it promises new forms of access not to ‘the customer’ but to people, in all our idiosyncrasy and physicality. Those forms of access are not necessarily more accurate than asking people what they think, but they will be more seamless and frictionless, blending into our lives and becoming something we are rather than something we do.
You ask whether consumers need to choose between their privacy on the one hand and the comfort of personalized services on the other. I think this question may distract attention from a more central one: can we separate our existence as consumers from our existence as citizens, partners, workers, parents? Our emotions are an essential bridge between ourselves and others, and what we show or hold back determines the kinds of relationships we can form, and who we can be in relation to our social world.
The language of choice may not be the right language here: your project uses only volunteers, but is it clear what they are volunteering? Your technology has a 70-per-cent accuracy rate in tests with subjects. But there is profound disagreement amongst brain specialists as to what we measure when we study emotions.
William James, one of the founders of psychology, argued that our experience of emotions actually results from their physical expression: we feel sad because we cry and we feel happy because we smile, not the other way around. If this is true, the sensors you are developing will have better access to the biological content of our emotions than we will, which has implications for – among other things – our freedom to form our own identities and to experience ourselves.
I am reminded of a project of Facebook’s that was recently discussed in the media. The company’s lab is attempting to produce a brain-computer speech-to-text interface, which could enable people to post on social media directly from the speech centre of their brains – whatever this means, since there is no scientific consensus that there is such a thing as a ‘speech centre’.
The company’s research director claims this cannot invade people’s privacy because it merely decodes words they have already decided to share by sending them to this posited speech centre. Interestingly, the firm will not confirm that people’s thoughts, once captured, will not be used to create advertising revenue.
You ask what is needed to establish trust in such a system. This is a good question, because if trust is needed, the problem is not solved. This is one of myriad initiatives in which people are being asked to trust that commercial actors, if given power over them, will not exploit it for commercial purposes. Yet commercial exploitation is tech and media companies’ only function. If their brief were to nurture our autonomy and personhood, they would be parents, priests or primary school teachers.
The one fundamental rule about new technologies is that they are subject to function creep: they will be used for purposes other than those their originators intended or even imagined. A system such as this can measure many protected classes of information, such as children’s responses to advertisements, or adults’ sexual arousal during media consumption.
These sources of information are potentially far more marketable than the forms of response the technology is currently being developed to measure. How will the boundary be set and enforced between what may and may not be measured, when a technology like this could be pre-loaded into every entertainment device? Now that entertainment devices include our phones, tablets and laptops, as well as televisions and film screens, how are we to decide when we want to be watched and assessed?
Monitoring technologies produce data, and data’s main characteristic is that it becomes more valuable over time. Its tendency is to replicate, to leak, and to reveal. I am not sure we should trust commercial actors whose actions we cannot verify, because trust without verification is religious faith.
Yours,
Linnet Taylor
TILT (Tilburg Institute for Law, Technology and Society)