Much more than creepy – the spooky reality of surveillance after privacy breach


Professor Irina Shklovski and colleagues at the University of Copenhagen, Denmark, have written extensively about creepiness through leakiness in our contemporary landscape of pervasive computing – a landscape where Alexa randomly chuckles to herself in response to a bad joke told in the far back corner of a room (see e.g., Seberger, Shklovski, Swiatek & Patil, 2022; Shklovski & Grönvall, 2020; Shklovski, Mainwaring, Skúladóttir & Borgthorsson, 2014). As their work testifies, people resignedly accept being constantly monitored by the systems they live with, even while finding it creepy and being none too happy about this reality. Similarly, a study we recently carried out – led by our Postdoctoral Researcher Ville Vakkuri and supervised by Shklovski in Copenhagen – suggests that tools for surveillance are not so much reluctantly accepted as actively ignored. Our paper, currently under review, reports the findings of a virtual reality simulation in which the individual experience of privacy was tested across several dimensions.

Scary AI

So, this active ignorance and resigned acceptance is one (or two) thing(s). Yet, when individuals experience a level of disturbance in response to the systems that surround them… that is another. This is what we refer to as spooky – a state in which we constantly feel disturbed and watched. We were unaware of the ‘creepiness’ concept when we first endeavoured to put this project together; spookiness was the point of departure upon which we placed several assumptions, one of which concerned how past experience shapes our lack of ease with the ‘all-seeing, all-hearing’ information technology around us. As if we are surrounded by ghosts, negative past experiences such as privacy breaches and harassment give a face to the data-driven dynamics addressed in initiatives such as the General Data Protection Regulation (GDPR).

In other words, where ordinarily individuals may imagine a great faceless unknown when considering the collectors, users, and exploiters of their data, those who have experienced personal harm that infringed their boundaries and personal space are more likely to place the face of their predator (if known) onto this space. This is especially the case for subsequent privacy infringement attempts. It is not surprising that sufferers of Post Traumatic Stress Disorder (PTSD), for instance, are often hypervigilant in the face of potential threats (Kimble et al., 2014), a state characterised by, among other things, increased anxiety and attentional bias (Bögels & Mansell, 2004). Moreover, people who have experienced intimate interpersonal trauma, as compared to nonintimate or noninterpersonal trauma, are significantly more likely to experience intrusive reexperiencing, hypervigilance, startle responses and avoidance of reminders (Forbes et al., 2013).

So, what has this got to do with technology and artificial intelligence (AI)? Well, if participants in our sample who reported no major past trauma were actively avoiding mechanisms of surveillance (an indicator of stress), then those who have suffered trauma – particularly interpersonal or intimate interpersonal trauma – are likely to overlay these ‘everyday experiences’ of surveillance technology with another dimension of concern. Now, some would say, “Well, it’s not the technology that’s bad, it’s the people who use it.” And yes, of course, the people behind the technology and those who use it give the technology its significance (if there were no people and no significance, there would be no technology), and it is people who engage in ill-intended activities with it. But what we have to remember is that the more devices and systems there are around us, the more opportunities deviant thinkers have to engage in augmented wrongdoing. The more sophisticated the technology, the more sophisticated the crimes (McGuire, 2012). Although efforts are being made not only to stay abreast of developments in crime but also to predict it (Leese, 2024), there is no denying that damage is already being done at epidemic proportions. We are heading, very soon, towards a state of global trauma, in which one monster or another poses as a spook in the machine.

On November 5th (2024), the BUGGED team joins forces with SYNTHETICA to present a critical seminar titled “Spooky AI – privacy concerns and algorithms that go ‘bump in the night’”. With us will be Associate Professor of Communication Toija Cinque, Critical Digital Infrastructures and Interfaces (CDII) Research Group, Deakin University, Melbourne, Australia, who will talk to us about “Super-Augmentation: Exploring the Intersections of AI, Hyper-Personalised Data, and Society”, in which she probes the privacy dynamics of daily personal interactions with AI. So, join us near the VME Interaction Design Environment (in Technobothnia, room TF4103, Vaasa) or via Zoom (to enrol, please send an email to me, Rebekah.rousi@uwasa.fi). We will shine a light on the spooks in the algorithms and discuss possible avenues to bust the global ghost town.


References

Bögels, S. M., & Mansell, W. (2004). Attention processes in the maintenance and treatment of social phobia: Hypervigilance, avoidance and self-focused attention. Clinical Psychology Review, 24(7), 827–856.

Forbes, D., Lockwood, E., Phelps, A., Wade, D., Creamer, M., Bryant, R. A., … O’Donnell, M. (2013). Trauma at the hands of another: Distinguishing PTSD patterns following intimate and nonintimate interpersonal and noninterpersonal trauma in a nationally representative sample. The Journal of Clinical Psychiatry, 74(2), 21205.

Herman, J. L. (1995). Crime and memory. Journal of the American Academy of Psychiatry and the Law Online, 23(1), 5–17.

Kimble, M., Boxwala, M., Bean, W., Maletsky, K., Halper, J., Spollen, K., & Fleming, K. (2014). The impact of hypervigilance: Evidence for a forward feedback loop. Journal of Anxiety Disorders, 28(2), 241–245.

Leese, M. (2024). Staying in control of technology: Predictive policing, democracy, and digital sovereignty. Democratization, 31(5), 963–978.

McGuire, M. (2012). Technology, crime and justice: The question concerning technomia. Willan.

Seberger, J. S., Shklovski, I., Swiatek, E., & Patil, S. (2022, April). Still creepy after all these years: The normalization of affective discomfort in app use. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1–19).

Shklovski, I., & Grönvall, E. (2020, October). CreepyLeaks: Participatory speculation through demos. In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society (pp. 1–12).

Shklovski, I., Mainwaring, S. D., Skúladóttir, H. H., & Borgthorsson, H. (2014, April). Leakiness and creepiness in app space: Perceptions of privacy and mobile app use. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2347–2356).

