Generating Conversational Data: Attempts at Invading Machine Learning Systems in a World Made of Artifices

By Guillaume Saur
BFA, Concordia University, 2020

A Thesis Support Paper Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Fine Arts

Emily Carr University of Art + Design
2022

© copyright Guillaume Saur 2022

>Acknowledgements

Since coming from Montréal to Emily Carr University in August 2020 with a background in photography and installation, my whole artistic practice has pivoted completely, most particularly toward investigating the relationship between art and technology through my current multimedia and cyber art practice. While ECUAD has, throughout my entire degree, provided an ideal climate for learning and cultivating knowledge around my subjects of exploration, a substantial portion of my technical abilities was also developed through off-campus seminars, workshops, and online classes. In bringing my interests in surveillance technology from my undergraduate investigations in photography to Emily Carr University, where I instantly dove into new media experimentations, I decided to take on new technical and conceptual challenges, which would ultimately take me on a journey where learning, making, failing, and questioning went hand in hand.

Beyond the entanglement and sense of alienation that the ongoing Covid-19 crisis has imposed on so many of us, I found an amazing cohort composed of incredible peers who supported each other through difficult times and created an undivided community that ultimately developed into strong friendships. I would like to first thank every single one of my classmates for being there for me as much as I have been there for them, if not more. I would also like to thank both my extraordinary advisors, Ruth Beer and Justin Langlois, for their support, guidance, honesty and, most of all, their undeniable fairness and humanity, without which I would not have taken this incredible path to find myself – becoming more understanding of others and concurrently meeting my authentic self in the process. The whole faculty, technicians, and facilities have also played an extremely important role in my education at Emily Carr University, and have proven to be available and attentive to my research through times when I needed guidance and advice.

As an uninvited guest to the land of the Coast Salish peoples – the Sḵwx̱wú7mesh (Squamish), Stó:lō, Səl̓ílwətaʔ/Selilwitulh (Tsleil-Waututh), and xʷməθkʷəy̓əm (Musqueam) Nations – I have been continuously influenced and reminded of the strong history of the Indigenous people of these territories, and am extremely grateful for their generosity and hospitality while participating and working together toward truth and reconciliation.

Finally, I would like to thank my partner, Flavia, for her unlimited generosity, care, and love, without which I would not be the person I am today, nor certainly the person I wish to become.

>Table of Contents

>Acknowledgements
>Table of Contents
>Abstract
>Introduction
>Positionality
>Methodology
>Attempts
i. Featuring Extraction
ii. Monitoring
iii. Prototypes
iv. Avatars
v. Beta
>Artistic References
>Conclusion
>Post-Defence Reflection
>Bibliography

"Algorithmic anxiety" revolves around concern about the extent to which we live our lives as imagined, self-transparent subjects in relation to algorithmic technologies.
Algorithmic anxiety is not a sentimental subjectivity, or a personal pathology related to one's feelings regarding algorithms. Instead, it revolves around the position of the self in algorithmic culture. It questions the normative affects of algorithmic culture on a self immersed in a regime of visibility that itself remains largely invisible.1

1 Patricia de Vries and Willem Schinkel, "Algorithmic anxiety: Masks and camouflage in artistic imaginaries of facial recognition algorithms," Big Data & Society, January–June 2019, 3.

>Abstract

The instrumentalization of biometric data orchestrated through surveillance technologies can press today's users of communication tools, devices, and platforms to opt out of such systems. This graduate research thesis aims to reveal the imperceptible forces that underwrite recognition technologies, through a series of experiments and the creation of multimedia installations that address the unethical relationship between active users and structures of control. Initiated at Emily Carr University, Vancouver, from September 2020 to May 2022, this research project hopes to uncover new possibilities for machine learning systems to propose protective devices based on image datasets composed of theatrical masks. This project explores the potential to apply generative models simultaneously as a response to facial recognition technology and as a mode of resistance to the dominance of these models. Finally, this research contributes to raising awareness of extractive systems and surveillance technologies, while questioning the impact of artificial intelligence and its potential future outcomes.

>Introduction

Over the last two decades, the internet has become a large manufacturer of behavioral data in which users are producing, posting, sharing, and liking multimedia content.2 3 This content – primarily text, sound, video, and images – circulates predominantly within social media platforms, where it is subjected to extraction by big tech corporations recognizing data as a new raw material to be harvested. This extractive process is called data mining, and it is characterized today as one of the most lucrative industries in the world.4 Yet it is contingent on each country's legislation, and has fortunately begun to be curtailed, however slightly, during the last few years in North America – particularly in Canada.5

Today's "Tech Giants", also known as GAFAM – which stands for Google, Apple, Facebook, Amazon, and Microsoft – are the architects of most of our modern web infrastructure. They are in some ways the skeleton of the internet. However, they also support complex systems of control in mismanaging the data that circulates in their platforms. For instance, users are largely unaware of how these companies use their own data surplus to predict future customer behavior.6 Yet, the lack of transparency in their mode of data operations has created backlashes and a growing sense of mistrust from their users, which sometimes even bursts into public matters leading to trials, such as the 2018 Facebook–Cambridge Analytica scandal.7

2 Anna Briers, "Conflict in My Outlook," exhibition essay, The University of Queensland, Brisbane, August 2020, last accessed January 2022, https://www.conflictinmyoutlook.online/exhibition-essay
3 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Profile Books, 2019, 67-70.
4 Kate Crawford, The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021, 113-114.
5 The Canadian Press, "B.C., Alberta, Quebec watchdogs order Clearview AI to stop using facial recognition tool," www.cbc.ca, December 14th 2021, last accessed January 2022, https://www.cbc.ca/news/canada/clearview-ai-facial-recognition-1.6286016
6 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Profile Books, 2019, 96-97.
7 Alvin Chang, "The Facebook and Cambridge Analytica scandal, explained with a simple diagram," www.vox.com, May 2nd 2018, last accessed January 2022, https://www.vox.com/policy-andpolitics/2018/3/23/17151916/facebook-cambridge-analytica-trump-diagram

It is fair to say that, aside from lawsuits, big tech companies are responsible for most of the online data trafficking, profiting from personal user information regardless of consent. A commonly known strategy used by tech companies to monetize data is targeted advertising, such as stalker ads.8 9 Yet, big tech corporations continue to present themselves as the builders of trustworthy platforms.10

8 Brian X. Chen, "Are Targeted Ads Stalking You? Here's How to Make Them Stop," www.nytimes.com, August 15th 2018, last accessed January 2022, https://www.nytimes.com/2018/08/15/technology/personaltech/stop-targeted-stalker-ads.html
9 Stalker ads: an advertising strategy that suggests ads according to one's search history on the internet.
10 Aisha Counts, "Trust in Big Tech is in free fall, according to a new survey," www.protocol.com, October 4th 2021, last accessed February 2022, https://www.protocol.com/bulletins/trust-big-tech-facebook

Aside from the "Tech Giants", private software companies such as Palantir or Clearview A.I., specializing in big data analytics and facial recognition, also participate in the economy of data mining. They primarily collect, buy, and train biometric data and models, solidifying image recognition and classification, which ultimately perfects surveillance technologies.11 The facial recognition algorithms these companies develop are components of facial detection and recognition which primarily use a geometric approach: they measure distinguishing features and attribute what is called a facial signature. Each individual possesses their own distinct signature, which is exceedingly valuable to these software companies as it distinguishes one individual from the next. The data extracted is finally sold to law enforcement and police agencies, and serves to study, survey, and control individuals, ultimately with the effect of marginalizing minority groups and communities.12 Even if a few North American cities, such as Portland, Oregon, and states, such as Maine, have applied strong facial recognition bans, the technology remains part of a multimillion-dollar economy from which private companies profit in order to serve governmental intelligence and security agencies.

11 Biometrics is the measurement and statistical analysis of our unique physical and behavioural characteristics. They are unique to each individual, like our fingerprints.
12 Kashmir Hill, "The Secretive Company That Might End Privacy as We Know It," www.nytimes.com, January 18th 2020, last accessed January 2022, https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
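To make the notion of a facial signature more concrete, the sketch below uses the open-source face_recognition library to compute a 128-dimensional embedding for each face and compare two faces by distance. This is only a minimal illustration of the general geometric approach; the proprietary systems of companies like Clearview A.I. rely on their own undisclosed models, and the file names here are hypothetical.

```python
# A minimal sketch of how a "facial signature" can be computed and compared,
# using the open-source face_recognition library (built on dlib). This only
# illustrates the general idea; commercial systems use undisclosed models.
import face_recognition

# Load two photographs (file names are hypothetical placeholders).
known = face_recognition.load_image_file("selfie_a.jpg")
unknown = face_recognition.load_image_file("selfie_b.jpg")

# Each encoding is a 128-dimensional vector: the "signature".
known_sig = face_recognition.face_encodings(known)[0]
unknown_sig = face_recognition.face_encodings(unknown)[0]

# Two faces "match" when the Euclidean distance between signatures is small;
# 0.6 is the library's conventional threshold.
distance = face_recognition.face_distance([known_sig], unknown_sig)[0]
print("same person" if distance < 0.6 else "different person")
```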
This algorithmic culture ultimately serves the police state, classism, and racism, where facial biometrics become a proprietary object.13 It seems difficult today to deny the fact that our own data is instrumentalized against us when we can genuinely assume that one of our selfies, family portraits, or wedding pictures is presumably part of a database, training an algorithm on facial recognition.14 This whole system of power, known as the Data Industrial Complex, seems to have circled back to us, since our own images are feeding systems of surveillance. Yet, I cannot help but feel a sense of responsibility for being part of this economy, considering that my own photographs have presumably contributed to perfecting surveillance technologies, which ultimately increase injustice and discrimination.15

I would like to highlight here that the history and development of photography as a tool for classification has always been connected to a tradition of surveillance and facial recognition, particularly through practices of cataloguing and archiving photographs. Criminal photography, for instance, is directly connected to the first mugshots and the predominant facial and racial studies of the late nineteenth century, aggravated by the research on eugenics developed by anthropologist Francis Galton.16 At the time, the categorization of people according to their physiology was a big motivator of the development of photography. These methodologies manifest today in the algorithmic methods of classification and recognition, but also through biased databases composed predominantly of Caucasian faces. Ultimately, these strategies of visibility and recognizability are embedded in the history of photography, and continue to manifest through algorithmic and social media cultures today.

13 Patricia de Vries and Willem Schinkel, "Algorithmic anxiety," Big Data & Society, January–June 2019, 6.
14 Kate Crawford, The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021, 106.
15 Crawford, The Atlas of AI, 133.
16 Eugenics is the scientifically erroneous and immoral theory of "racial improvement" and "planned breeding".

The recent emergence of social media platforms has accelerated the production and consumption of photographic content, completely shifting the social, cultural, and political order of consumer culture online. In 2017, Marco Briziarelli and Emiliana Armano's The Spectacle 2.0 suggested that the social web is framing social experiences around self-promotion, performance, and participation:

Through free work/labour people organize their life around 'creativity' and self-activation (Armano and Murgia 2013), according to which the hetero-direction logic typical to Fordist mode is replaced by a new sphere of participation, self-promotion of subjective resources (Armano, Chicchi, Fisher, and Risi 2014) and self-responsabilization (Salecl 2010).17

Today, it is through digital images that this economy of attention places selfies as the most commodified photographic artifact of the 21st century.18 They operate as digital goods exchanged on the social web and are extracted for their biometric value in order to train machine learning systems in recognizing and classifying facial signatures.
The selfie culture has become increasingly profitable to Silicon Valley's tech giants and has created inequalities in data production and capitalization.19 This economy locates social media users – like me – who, on the one hand, feed selfies to web platforms while inherently participating in the culture of display, and tech giants on the other, who profit from biometric data that ultimately amplifies and enhances surveillance technologies. This contradiction acts as my point of departure in investigating this economy.

17 Marco Briziarelli and Emiliana Armano, The Spectacle 2.0: Reading Debord in the Context of Digital Capitalism, London: University of Westminster Press, 2017, 40.
18 Kate Knibbs, "Selfies are now the most popular genre of photo; in related news everyone's the worst," www.digitaltrends.com, June 20th 2013, last accessed April 2021, https://www.digitaltrends.com/socialmedia/selfies-are-now-the-most-popular-genre-of-picture-and-in-related-news-everyones-the-worst/
19 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Profile Books, 2019, 22-23.

>Positionality

Growing up with the internet as a terrain for representation, validation, and self-expression, virtual environments such as video games, online chat-rooms or, more recently, social media platforms have been spaces of refuge when my IRL experiences were too polarized or divisive.20 However, in recent years, I have slowly started to grow more apprehensive of my URL experiences, while becoming increasingly aware of the invisible forces of surveillance capitalism operating within online environments.21 I have started to remove images of myself from most online platforms – feeling progressively uncomfortable with online self-representation – and with the ways in which my own biometrics were commodified and instrumentalized within platforms I was using on a daily basis. My own personal relationship with photography and self-representation has slowly shifted from being active and trackable to being self-muted and obscured. This bodily removal from hyperspace has acted as a response to the predominance of systemic online surveillance, but has also unfortunately operated as a sort of self-censorship imposed by invisible forces of power.

My work aspires to imagine a world where some of our most intimate photographic moments don't become the foundation of the biometric profiles of tomorrow.22 In challenging systems of surveillance, such as facial recognition, I am hoping to inform my audience about the invisible forces embedded in such structures. As an artist working primarily with the intention of expanding systems of knowledge through disseminating and democratizing mediums such as artificial intelligence, I am interested in producing work that could demystify the complexity and opacity of these technologies. Ultimately, my research intends to activate a sense of agency in the viewer and to raise awareness of facial recognition technologies.

20 IRL: In Real Life (offline experiences)
21 URL: Uniform Resource Locator (online experiences)
22 Adam Harvey, TODAY'S SELFIE IS TOMORROW'S BIOMETRIC PROFILE, Think Privacy, poster paper, 2016, https://ahprojects.com/think-privacy/

My own experiments and interactions with technology initially framed this research, but also acted as methodological tools to produce knowledge and participation.
It seemed honest to initiate work based on my own experiences with computational and communication technologies while using myself as an experimental subject. In my research, I am asking many questions relating to the issues brought by surveillance technologies: What are the responsibilities and liabilities of tech corporations? Of governments? Of common users? What type of resistance can be offered by machine learning systems when they are also a core technology built upon systems of distrust, transgression, and control? Can users free themselves from a system that feeds on their personal information to fuel a data mining industry? And if so, can they do it while surviving the culture of online production and labour they have been indoctrinated into?

In beginning to investigate these questions, I have familiarized myself with the concept of digital capitalism, introduced in March 1999 by historian of information and communications Daniel Schiller. In this work, he discusses the impact of the democratization of the internet, and its implications for agents such as government, military, and educational institutions in the proliferation of a neoliberal system sourced in data.23 Today, almost a quarter-century later, cyberspace has dramatically expanded its boundaries beyond institutional infrastructure to infiltrate the private lives of billions of users within the realm of what activist Shoshana Zuboff calls Surveillance Capitalism: an economy founded on data mining and undisclosed activities.24 It is within this politico-economic paradigm that my own research investigates the interconnected relationship between internet users, digital surveillance systems, and new technologies, while ultimately proposing a counter-rhetoric to systems constructed upon artifices.

23 Daniel Schiller, Digital Capitalism: Networking the Global Market System, MIT Press, 1999.
24 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Profile Books, 2019.

In this thesis, the term artifice refers to the idea that algorithmically generated content is fundamentally deceiving by nature, and creates systems of mistrust between users and technology. However, I do not suggest that there is something authentic, original, or unpolluted about non-digital content; rather, the new media age has amplified the artificiality that surrounds our experiences of the world. A recent example of the degree to which these artifices manifest today is the phenomenon of deepfakes.25 Another manifestation of that notion of mistrust is the concept of algorithmic anxiety discussed by Patricia de Vries and Willem Schinkel around authenticity and deception within algorithmically generated content.26 Their article on algorithmic anxiety is studied in further depth in my methodology section, along with the work of Adam Harvey and Sterling Crispin, who both investigate various masks and camouflages as a response to surveillance technologies.

In this thesis paper, I use the term resistance as a way to embody how I refuse and confront surveillance technologies in various series of experiments and artworks. However, the work produced within this research does not intend to solve the issues brought by surveillance operations, but rather aims to reduce the opacity of the terrain in which they manifest.
I am also primarily using the technical and more transparent term machine learning systems, which suggests the function of the technology, rather than the more opaque and trendy term artificial intelligence, which according to Kate Crawford goes in and out of fashion in keeping with scientific trends.27 Machine learning systems are large series of algorithms that make predictions or decisions based on sample data, also known as training data. In this research, machine learning systems are trained on various series of images to predict or generate a new series of images. I am for the most part using a specific kind of machine learning system called generative models, which are further discussed in the Attempts section of this paper.

25 In synthetic media, it is the algorithmic process of replacing someone's face with another person's facial features. This phenomenon publicly emerged in 2018 with a viral deepfake video of President Barack Obama giving a fake speech.
26 Patricia de Vries and Willem Schinkel, "Algorithmic anxiety: Masks and camouflage in artistic imaginaries of facial recognition algorithms," Big Data & Society, January–June 2019.
27 Kate Crawford, The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021, 8-9.

Considering the level of privilege associated with access to contemporary technologies, my research focuses primarily on the democratization of digital experiences within URL terrains of Western societies. Furthermore, my own experiences with technology are certainly different from those had by people of color, for instance. Identifying as a white person, I am not exposed to the same level of inequality as minority communities, but I nevertheless attempt to investigate the effects of such systems of control within a series of experiments relative to my own experiences with surveillance technology. Yet, in acknowledging that online experiences are both distinct and unique to each user, my research hopes to investigate the trajectory following the global increase of mediated experiences, which now occur in everyday synchronized-network activities. Finally, instead of adopting a techno-dystopian approach and condemning new technologies, my research seeks ways in which machine learning systems can be put at the service of resistance, and used as potential counter-surveillance apparatuses, or as means to imagine non-dystopian futures.

>Methodology

In my practice, Isaac Asimov's 1956 short story The Last Question acts both as a conceptual reference for my research and as a methodological tool, in that it investigates the meaning of asking questions of machine learning systems. In Asimov's story, a question unanswered and fundamentally deemed unanswerable by humanity is asked of a universe-scale computer system called Multivac, which, in response, provides the same answer over and over again: "INSUFFICIENT DATA FOR MEANINGFUL ANSWER."28 By the end of the story, humanity's godlike descendants, who in time merge with Multivac's latest form, called AC, ask the same question before uniting with the computer and disappearing forever.
The AC is once more unable to provide an answer, but goes on reflecting until time, space, and all life vanish, ultimately declaring a chilling "LET THERE BE LIGHT!"29 Multiple interpretations of this well-known story have emerged since it was first released, but the overarching moral presumes that once everything ends, there is rebirth; the story can therefore start all over again in a repeating cycle. The end then becomes the beginning of something else, with a new series of questions and inspirational answers. In the context of this research, Asimov's vision is interpreted as a set of instructions that suggests persistently going back to the original question until it exhausts itself. Consequently, the methodology of this research follows a series of steps initiated by a question followed by interpretable data, ultimately manifesting itself in an artwork which introduces an opportunity to return to the initial question, or to a new set of inquiries.

28 Isaac Asimov, The Last Question (When the World Ends), 1956, PDF file, 3, last accessed November 2021, https://www.physics.princeton.edu/ph115/LQ.pdf
29 Asimov, The Last Question (When the World Ends), 9.

In initiating questions directed at A.I.-powered technologies, this research methodology is characterized, on the one hand, by a process of inquiry, and on the other by a collection of interpretable data. While posing questions to machine learning systems serves as the starting point for this research, the answers provided ultimately manifested in data-based sculptural pieces. The generated output for these questions most frequently takes the form of a large number of images called a "dataset", which then informs the tangible forms and materials of the artwork. For instance, a trained dataset could manifest itself in a video work, a 3D-printed object, or an engraved image, in the way that it interrelates with the dataset itself, but also responds to the original question. The resulting art objects become a vehicle, or a form of material translation, for the computational answer to manifest itself in a conversation between the digital and physical. Finally, the work emerges in multimedia installations that circle back to the initial question through a language of images, hence initiating new series of inquiries and material explorations.

Specifically, the methods by which this research is conducted are deeply grounded in an iterative process that loops from conceptual explorations, sourced in the global architecture of new technologies, to material investigations revealed in data-formed artworks, and back again. Accordingly, these interrogatory systems allow this research to consider material investigations not as formally answering any initial question, but as translating current problematics with technologies and how they could potentially unfold in the future. In short, while navigating the speculative realm, the art pieces of this research suggest potential directions rather than definite trajectories.

Working with both data mining and image processing is intrinsically part of my methods. I attempt to mimic some of the methodology of the algorithmic processes that I am researching by mining, scraping, and classifying publicly available images. Rather than using this data to perfect facial recognition technology, I am attempting to algorithmically generate a new series of original images that would further confuse facial recognition.
Consequently, I am replicating some of the methods operated by facial recognition software companies but reversing their means. I am then putting myself into a machine-collaborator position, where I provide non-biometric data – such as images of theatrical masks – to an algorithm that will potentially challenge systems of facial recognition. These methods are discussed in more detail in the section covering the production of the artwork Monitoring, but I would like to clarify here my relationship with the use of artificial intelligence as both a medium and a tool for art making in my practice.

In approaching this new way of producing art, I have considered machine learning systems, and more particularly generative models, as a medium with their own code, aesthetic, history, and processes. Considering that art making with machine learning systems is a more recent practice, it would seem more appropriate to consider the technology as more of a tool assisting artists or leading to the outcome of the work. However, I have instead imagined my relationship with the medium of machine learning systems as more of a collaboration in conversation with the technology, through a back-and-forth data-fed exchange. In curating or selecting specific materials for generating data, I was anticipating a certain result and collaborating with the technology in order to put meaning into my investigations.

All the materials and installation strategies in my work refer either to the culture of display cultivated online, or to the infrastructure and appearances of most online platforms, such as Facebook, for instance. The use of the color blue in my own artwork, for example, mirrors the predominance of the most used color on the social web today, and acts as a thread that connects common internet experiences and refers to a collective and familiar space. Further, the language and terminology of visual merchandising is the primary and most recurrent methodological tool through which the artwork of this research is demonstrated, most fundamentally because images today cannot exist outside of that economy. I would also like to mention here that Emily Carr University's large series of white walls, nooks, and gallery spaces have been the perfect work environments for my installations and have operated as an extension of the work itself. My process consistently starts with selecting a space to which my art pieces will respond. This element is important in that I want to create a sense of embodiment with the viewer in the environment that the work occupies. Selecting and then curating a gallery space with materials allows me to have a better sense of control in relation to my subject of exploration, and to deliberately create art pieces that fully respond to the environment they inhabit.

My visual language is sourced in theoretical references that support part of the decision-making and gestures of the work produced in both my material and installation investigations. They also operate as a source of inspiration for producing and generating new ideas. Four major academic references have directed and assisted the methodology of this research. First of all, Kate Crawford's Atlas of AI, published in 2021, has given me structure for understanding the deep political implications of machine learning systems in contemporary societies, while providing a framework for this research demonstrated in generative outputs.
According to Crawford, the extractive strategies adopted by today's tech giants are at the core of a planetary network of centralized powers profiting from the data of internet users. In considering these politics of extraction, this research intends to mimic data mining strategies while critically training machine learning systems – which would normally operate as surveillance agents – to unmask these methods. In light of Crawford's reports that classification and recognition are at the foundation of what machine learning systems accomplish today within a context of online surveillance, we might suggest that applying similar, or reconfigured, algorithms to an art-making framework has the potential to subvert such systems, and could even generate tangible apparatuses of defiance and confrontation, such as 3D-printed masks. Critically training machine learning systems also means training on objects, such as theatrical masks, which do not possess a biometric signature. While these masks reference biometric attributes such as mouths or noses, they do not convey quantifiable body measurements and calculations related to human characteristics. Conventionally, software companies that specialize in big data analytics would use biometric elements such as human faces to be trained in systems of recognition and classification; in this research, non-interpretable biometric masks are utilized in generative systems that imagine, rather than classify.

The masks generated by and during this research can be understood as positioned against the broad political framework that implicates the machine learning systems described by Crawford, but also as objects drawn on post-humanist theories, such as Donna Haraway's 1985 essay A Cyborg Manifesto. In this piece, Haraway explores the cyborg as a hybrid of machine and organism – at once a creature of social reality and a creature of fiction – while proposing to deconstruct the cyborg myth about transgressed boundaries and dangerous possibilities. Haraway positions herself against masculinist techno-utopian enhancements of the body by modern technologies. Thus, she opposes dualistic forms of understanding such as mind/body, nature/culture, male/female, primitive/civilized. Her main intention is to transcend the constructed body conceived by patriarchal society; she therefore offers to look at bodies not as normative, westernized entities, but rather as part of a collective consciousness. The cyborg is therefore not a quest for fragmentation between mind and body, but rather a biological development that evolves in a complex system, in which technology is not presented as an upgrade. Like Haraway, this research understands that science fiction only perpetuates a myth that foresees human bodies as pieces of hardware. Rather than being seen as augmenting devices, the 3D-printed masks generated for this research are understood as apparatuses reprocessing human-like features toward protection and transfiguration, which ultimately transgresses gender and signatures.

Additionally, researcher and activist Shoshana Zuboff examines contemporary neoliberal economies of data exploitation, as well as their concealed implication and oppressive application within today's mediated experiences. In her latest book, The Age of Surveillance Capitalism, published in 2019, she analyzes the instrumentalization of behavioral data initiated by America's tech giants.
Zuboff suggests that these companies have provoked a new political regime of surveillance capitalism that echoes totalitarian systems and profits from users to anticipate their needs. Furthermore, Zuboff observes that the power of Surveillance Capitalism stems from cyberspaces and has direct implications for the architecture of control of the masses. In aligning with and building from Shoshana Zuboff's analysis, my research recognizes this contrived geopolitical economy founded on data mining, black box algorithms, and undisclosed activities, while proposing counter-strategies of representation and protectiveness.30 Although behavioral data is at the core of the issues Zuboff discusses, this research focuses on data that is either not interpretable by surveillance systems or without value in an economy of readable and interpretable images. Therefore, the artwork produced for this thesis exists outside biometric values and suggests reconfiguration, rather than monetization.

30 Black box algorithms are part of artificial intelligence systems whose data and metadata are not visible to users or other interested parties.

Finally, the article "Algorithmic anxiety: Masks and camouflage in artistic imaginaries of facial recognition algorithms," published in 2019 by Patricia de Vries and Willem Schinkel, serves as a helpful framework for my methodology in the making of my own algorithmically generated masks, but also in analyzing the work of artists such as Sterling Crispin, Zach Blas, and Adam Harvey, who also center some of their projects around the face. De Vries and Schinkel suggest that technological advancements have placed the face at the center of political battlefields today and created what they call "algorithmic normativities," defined as a socio-technical capture of the face.31 They claim that the face is becoming a political landscape accentuated by the selfie culture, which artists such as Harvey, Blas, or Crispin attempt to transcend through the refusal of the classic artistic tradition inherent to portraiture. These three artists are in various ways reinventing camouflage strategies to imagine masks or fabric patterns as "cryptographic material."32 Like them, I have understood my own masks as being a sort of encrypted barrier resisting facial recognition algorithms. For instance, Harvey's 2017 project HyperFace, in which he created a false-face camouflage pattern overwhelming computer vision, resembles how my own dysmorphic masks intend to thwart similar systems.33 De Vries and Schinkel understand the notion of camouflage as a way to feature or highlight the unrecognizable:

"[…] camouflage does not so much pertain to complete invisibility, but rather to becoming unrecognizable. Camouflage involves both revealing and concealing (Leach, 2006: 244). It is thus a tactic of invisibility through visibility."34

31 Patricia de Vries and Willem Schinkel, "Algorithmic anxiety," Big Data & Society, January–June 2019, 2.
32 De Vries and Schinkel, "Algorithmic anxiety," 4.
33 Adam Harvey, HyperFace, https://ahprojects.com/hyperface/

This notion is important in relation to my own masks, as I similarly did not attempt to accomplish invisibility but rather wished to impersonate a camouflage type of mask, which was later characterized in a fully disguised avatar.
While this notion can somehow seem contradictory, it is important to underline that my goal was not to become invisible in this process of camouflage, but rather to become unrecognizable to facial recognition algorithms. By doing so, my own experiments created a sense of empowerment through the representation and personification of an avatar confronting what Crispin describes as the "Technological Other."35

34 De Vries and Schinkel, "Algorithmic anxiety," 5.
35 Sterling Crispin, Data-masks: Biometric Surveillance Masks Evolving in the Gaze of the Technological Other, 2014, last accessed March 2022, http://www.sterlingcrispin.com/Sterling_Crispin_Datamasks_MS_Thesis.pdf

>Attempts

i. Featuring Extraction

Featuring Extraction is a 12'x8' sculptural piece presented in the context of the "State of Practice" exhibition at Emily Carr University in September 2021. It is the final work produced during my first year of material research and installation strategies. Through the use of a protest banner, this piece operates within the critical structure of refusal work and questions the political framework in which algorithmically generated images are produced.

Figure 1: Featuring Extraction, plywood, vinyl banner, paint, tape, 12'x8', 2021.

Within the context of digital capitalism, this work understands machine learning systems as an extractive industry supported by multimillion-dollar tech companies, and proposes to challenge systems of digital surveillance such as facial recognition.36 In recreating the hoarding of an excavation site, this installation situates the work within the territory of colonial extraction embodied by lithium, data, and computational technologies. Similarly, interventions made using spray paint and wheat-pasted aerial images of major global lithium mining sites were applied in order to parallel or mimic excavation sites. The grid-like images were displayed on a 10' vinyl banner and created by a generative model based on a dataset composed of social media users' selfies using augmented reality filters.37 38 This dataset, consisting of more than 1,200 images, was first run through a facial recognition algorithm. Then, all the images not recognized as faces were fed to an algorithm generating new faces and features out of images composed of bits of both AR filters and faces. This installation was a way to explore AR filters as having the potential to serve as a form of protection against facial recognition, and to imagine new faces out of the canvas of recognizable biometrics and surveillance tactics. Finally, Featuring Extraction draws upon the artificiality of both digital and physical spaces, and confronts structures of power embedded in machine learning systems while proposing images that suggest, in the form of a protest banner, resisting the gaze of contemporary technologies. Featuring Extraction was one of my first experiments with machine learning systems; it acts as an introduction to the work presented in this thesis support paper, and prompts the point of departure for the more significant work of my research investigations.

36 Kate Crawford, The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021, 15.
37 Generative models are unsupervised machine learning algorithms that can be used to generate new examples from an original dataset. In my research, I use generative models to re-imagine faces, or masks.
38 Augmented reality filters are enhanced effects superimposed on real-time images. They are widely popular on social media platforms and can manifest as futuristic glasses or cartoon characters.

ii. Monitoring

Envisioning augmented reality filters as online protective devices that counter facial recognition, this research almost naturally led me to ponder the possibility of creating tangible masks to operate as physical, bodily protection for "in real life" individuals. While thinking about individuals as brands or products, I turned to the theoretical context of manifested artifices outlined by the revisited theories of Guy Debord, as discussed in The Spectacle 2.0. In Chapter 1, Briziarelli and Armano describe this revived concept as such:

The Spectacle 2.0, as the name suggests, takes cues from the evolution of web media from the first generation (so called 1.0) of bounded environments in which users were constrained in utilizing the products. […] Both the shift in the political economic model of production and the new participatory perspective materialized via web 2.0 based applications have created a social and cultural milieu allowing the formation and exchange of users-generated content in the social media.39

39 Marco Briziarelli and Emiliana Armano, The Spectacle 2.0: Reading Debord in the Context of Digital Capitalism, London: University of Westminster Press, 2017, 34.

It is within this specific context of proliferation and transaction of images that the disguises of the commedia dell'arte presented themselves to me both as visual cues and as conceptual references initiating a new body of work, where individuals can be reimagined as theatrical devices inducing a sense of personification and protectiveness.

Similar to the method I employed for compiling the social media image datasets applied to my piece Featuring Extraction, I "scraped" thousands of images depicting various types of protective masks, with the exception here of using Google.ca as a new terrain for collecting data.40 By relying on deep neural networks associated with issues of privacy, Google is known for being both at the forefront and the origin of today's surveillance capitalism structure.41 42 By scraping images out of Google, known to be the most democratized search engine in the world, I intend to apply an extractive methodology similar to the one this multi-billion-dollar company has been profiting from for a decade, with the intention of reversing the means of image collection (a rough sketch of this process follows below).43 In using "theatrical mask" as the original keyword search for this data collection, my attention is directed toward a very specific type of mask, which proposes an extended artifice. As opposed to a non-character type of disguise, theatrical masks personify users with the identity of a persona. One becomes Harlequin, Pantalone, or Zanni, while protecting one's own anonymity. This suggests a new impersonated identity.

40 Image scraping is a way to automate the pulling of a large number of images out of a URL page.
41 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Profile Books, 2019, 10-22.
42 Jason Tanz, "Soon We Won't Program Computers. We'll Train Them Like Dogs," www.wired.com, May 17th 2016, last accessed October 2021, https://www.wired.com/2016/05/the-end-of-code/
43 The top 500 sites on the web, www.alexa.com, last accessed October 2021, https://www.alexa.com/topsites
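As a rough illustration of the two data-collection methods described above – scraping publicly available images, and filtering them through a face detector as in Featuring Extraction – the sketch below uses requests, BeautifulSoup, and OpenCV. The URL and folder names are hypothetical placeholders, and any real scraping must respect each site's terms of service; this is a minimal sketch of the general approach, not the exact tooling used in the project.

```python
# A minimal sketch of image scraping plus face-detection filtering,
# assuming requests, beautifulsoup4, and opencv-python are installed.
# The URL and folder names are hypothetical placeholders.
import os
import cv2
import requests
from bs4 import BeautifulSoup

page = requests.get("https://example.com/theatrical-masks")
soup = BeautifulSoup(page.text, "html.parser")

# Pull every <img> tag's source and save it locally.
os.makedirs("dataset", exist_ok=True)
for i, img in enumerate(soup.find_all("img")):
    src = img.get("src")
    if src and src.startswith("http"):
        with open(f"dataset/{i:04d}.jpg", "wb") as f:
            f.write(requests.get(src).content)

# Keep only the images in which a standard detector finds NO face --
# the inverse of the usual surveillance pipeline, as in Featuring Extraction.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
for name in os.listdir("dataset"):
    image = cv2.imread(f"dataset/{name}")
    if image is None:
        continue
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) > 0:              # recognized as a face: discard
        os.remove(f"dataset/{name}")
```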
When discussing persona and masks, de Vries and Schinkel tell us:

[…] the concept of person comes from the Latin persona, denoting a theatrical mask. Less well known is that persona is a more complex concept altogether. It signifies movement and sound, a sounding through the face, literally a form of per sonare. The theatrical concept of the persona stands for both the mask and for the part played, but also for the face.44

44 Patricia de Vries and Willem Schinkel, "Algorithmic anxiety," Big Data & Society, January–June 2019, 5.

Accordingly, I am approaching masks as being part of the construction of a persona revolving around the face of a theatrical character. I am not hiding behind the mask, but rather imagining myself as a new, algorithmically reprocessed avatar confronting facial recognition. These theatrical masks serve my research in the sense that they exemplify the very same conflict found in today's culture of online representation through the self-merchandising of individuals on the social web. By extension, publishing self-representative images online, such as selfies, while performing constructed personae can be understood as a sort of protection and enhancement of the self by way of masking. Augmented reality filters, for instance, can be seen as a manifestation of that culture. My own datasets of masks further embody a notion of camouflage, described by de Vries and Schinkel as "a form of unrecognizability", which will later manifest as a counter-strategy to the anxiety provoked by facial recognition algorithms.45

45 De Vries and Schinkel, "Algorithmic anxiety," 6.

While compiling this new dataset of "theatrical masks", I also meticulously collected and classified other types of masks, such as face masks, beauty masks, light therapy masks, VR masks, gas masks, sleeping masks, goggle masks, respiratory masks, hole masks, and various other types of face shields. Each of these somehow falls into the category of defensive devices and constitutes an ideal material to reimagine in relation to protectiveness. The notion of defensive devices as objects cascades across various economies and practices, all of which serve as a kind of layer or threshold between humans and the world around them.

After compiling multiple datasets of these various types of masks, two selected datasets were trained against one another on the online platform Playform, which now enables artists to explore artificial intelligence in a no-code environment. The two datasets were trained through a generative model called Creative Morph, which, as indicated by its name, morphs the two datasets into one final series of images. The first dataset, composed of theatrical masks and labeled Inspiration, was used as a reference for each training session, while the second dataset, Influence, was composed of various types of masks. Some examples of these training episodes would then read as follows: theatrical masks against VR masks, theatrical masks against hole masks, or even theatrical masks against respiratory masks. While the Inspiration dataset informs the shapes and contours of the results, the Influence dataset iteratively morphs the results toward its own set and determines their colors. This process then allowed me to generate a final series of unique images of a new genre of masks, merging both the theatrical and protective properties of the masks for each training session.
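Playform's Creative Morph and Freeform models are proprietary and exposed only through a no-code interface, so the sketch below is a generic stand-in rather than the platform's actual method: a minimal generative adversarial network in PyTorch, trained on a single folder of mask images (the analogue of a Freeform session; Creative Morph additionally conditions results on a second "Influence" set, which is beyond this sketch). The "dataset" folder is a hypothetical placeholder.

```python
# A minimal GAN sketch in PyTorch -- a generic stand-in for the kind of
# generative model Playform exposes, not its actual (proprietary) method.
# It trains a generator on a folder of mask images so it can hallucinate
# new, mask-like images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])
# ImageFolder expects dataset/<category>/*.jpg; the folder is hypothetical.
data = DataLoader(datasets.ImageFolder("dataset", transform=transform),
                  batch_size=32, shuffle=True)

generator = nn.Sequential(            # latent vector -> 64x64 RGB image
    nn.ConvTranspose2d(100, 256, 4, 1, 0), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
)
discriminator = nn.Sequential(        # image -> real/fake score
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 8), nn.Flatten(), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for epoch in range(50):
    for real, _ in data:
        b = real.size(0)
        fake = generator(torch.randn(b, 100, 1, 1))
        # The discriminator learns to separate real masks from generated ones.
        d_loss = loss(discriminator(real), torch.ones(b, 1)) + \
                 loss(discriminator(fake.detach()), torch.zeros(b, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # The generator learns to fool the discriminator.
        g_loss = loss(discriminator(fake), torch.ones(b, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In this framing, combining two mask categories in the same training folder is what "confuses" the model into generating images that reference both sets at once, as described for the combined trainsets below.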
A second series of investigations also took place within the same platform, using a similar generative model called Freeform, which, as opposed to the Creative Morph model, requires only one dataset of images to train. This process amalgamates a sequence of images from a single collection, which also results in unique images of masks. This generative model allowed me to train a unique genre of masks as inspiration, instead of using an inspiration and an influence simultaneously. By doing so, each type of mask generates new images using only itself as a reference. This process allowed each type of mask to be created in a closed circuit, referencing its own category and corresponding features. However, multiple categories of masks can then be combined within the same dataset in order to "confuse" the algorithm and generate images referencing two or more categories in the same trained dataset. In many cases, this technique successfully generated images that had the potential to create distinctive masks, while proposing potential protective outcomes. For instance, the trainset of "theatrical masks + hole masks" visually stood out from this series of investigations and initiated the first physical piece in a new body of work, which considers data as its central material.

Figure 2: th/ho trainset, Freeform Generative Model, 2021.

In training these image datasets, I am interested in researching what kind of answers machine learning systems can provide regarding potential strategies of resistance toward surveillance technologies. In considering that training a generative model is posing a metaphorical question to the machine, this research investigates the following inquiry:
This piece takes on the form of a video diptych: two monitor screens operate a two-minute loop of the moving-image outcomes of the “th/ho” dataset. The video’s slow motion reveals the process of computational generative images which, contrary to common assumption, is a time-consuming, almost organic procedure which embodies a mechanical method of seeking formation and formulation of data. Both screens featured in Monitoring are attached to the wall at a seven-foot height using a mounting monitor arm, which creates an intimidating sensation both inherent to the authority of technology and as a result of the imposing gaze of the monitors viewed by the audience from below. However, by borrowing from the semiotics of portraiture and 23 through the use of a diptych display, this piece tends to be more conversational than it is intimidating, whereby the images are presented to the audience as portraits gazing back at them. Figure 3: Monitoring, 2-min video loop installation, 7’ x 3’, monitors, mounting arm, cords, 2021. This work also proposes to look at how, in both an algorithmic and anthropomorphic manner, data-fed machine learning systems can reconfigure human features. As a result, we understand that the algorithmically generated masks have taken these features into consideration and suggest a human presence within a computational process. The conglomerated faces and masks embody that presence and seem to acknowledge human beings in the machine. Incorporating metadata and microscope-like types of imagery, each video in Monitoring is framed by an overly zoomed-in, blurry detailed strip of the morphing sequence. By exposing the transformative growth of the mask inherent to that digital process, this visual strategy reveals the underlying architecture and artifice of the images. 24 Finally, both monitors are connected through a series of blue power cords referencing ethernet cables and hyperlinks. These suggest the nervous system of the internet, where data is connected through a rhizome-like network, in which blue is the default color. According to Dom Hennequin, the color mimics the predominance of blue in the “real world” (e.g., ocean, sky), and is primarily used to create a familiarity within digital spaces.46 As in Hito Steyerl’s 2014 video installation Liquidity Inc, I have been using the color blue as a metaphor for digital information, IT infrastructure and interfaces. The cables also provide some weight to the piece and accentuate the bodily presence of the overall installation, which follows the bracket mounting support mimicking human arms, and the two monitors resembling human eyes. The theatricality of the piece imitates an altar-like type of display in symmetry, composition, and presentation strategies, hereby accentuating the notion of artificiality within this dysmorphic sculptural work. Monitoring ultimately offered me the opportunity to create a series of prototypes imagined as apparatuses confronting recognition technology, which is discussed in the next chapter. Dom Hennequin, “Why is Blue the Internet’s Default Color”, www.envato.com, Jan 9th 2018, Last accessed November 2021, https://envato.com/blog/blue-internet-default-color/ 46 25 iii. Prototypes Prototypes is a series of algorithmically generated masks based on four images extracted from the video piece Monitoring, formerly composed of the “th/ho” dataset. 
These 3D-printed prototypes were originally presented in December 2021 at Emily Carr University in a small gallery space commonly referred to as the Nook, along with four video pieces that will be discussed later in this paper.

Figure 4: Prototype 01, 02, 03, 04, algorithmically generated masks, masking tape, 19"x6", 2021.

As opposed to the widely known "V for Vendetta" theatrical masks – which are today associated with hacktivism and civil disobedience – my own prototypes are not imagined as symbols of protest.47 They manifest as camouflage to be worn solely in front of facial recognition technologies in a series of enclosed experiments in my own studio.

47 Nick Thompson, "Guy Fawkes mask inspires Occupy protests around the world," CNN World, 5 November 2011, last accessed March 2022, https://www.cnn.com/2011/11/04/world/europe/guy-fawkesmask/

Consequently, they are different from activist-type masks such as Zach Blas' Facial Weaponization Suite (2012-14), which the artist describes as "intersect[ing] with social movements' use of masking as an opaque tool of collective transformation that refuses dominant forms of political representation."48 My own prototypes also engage differently with facial recognition than the surveillance work of American artist Sterling Crispin. While Crispin aims to reverse the gaze on machine vision through his Data-Masks (2013-15) project, my own masks intend to thwart facial recognition algorithms by accentuating dysmorphic features vaguely resembling the human face. This coincides with my initial intention of using generative models, because they generate uncanny facial features that have the potential to further confuse recognition technologies. This proved accurate when I later experimented with Runway ML's recognition algorithms, as discussed in the next chapter.

Additionally, Crispin describes his own masks as "animistic deities, brought out of the algorithmic spirit-world of the machine and into our material world, ready to tell us their secrets, or warn us of what's to come."49 Crispin's definition parallels some of my discoveries in the making of my own prototype masks, which can be described as ghostly faces. Accordingly, in giving the "th/ho" dataset a tangible reality, these prototypes have inexorably increased the eerie dimensions of the algorithmically generated images, which somehow was not expected. I became more and more aware of the presence of a so-called Ghostly Image, or Aesthetic of Absence, in the field of generative models.50

48 Zach Blas, "Facial Weaponization Suite," zachblas.info, last accessed March 2022, https://zachblas.info/works/facial-weaponization-suite/
49 Sterling Crispin, Data-masks, 2013, last accessed March 2022, http://www.sterlingcrispin.com/datamasks.html
50 Spirit Media, MIT Media Lab 2018, last accessed January 2021, https://spirits.media.mit.edu/index.html

Although not central to my own research, I have started to interrogate the computational tools I am using in order to understand what makes these uncanny images symptomatic of machine learning aesthetics. The making of these prototypes led me to ask: what gives algorithmically generated images such an uncanny, even threatening, nature?
Being familiar with concepts such as the Ghostly Image and the Uncanny Valley, I was not surprised by these discoveries in themselves, but the degree to which the masks translated a certain eeriness was unexpected.51 In understanding that the ghostly aspect is inherent to the algorithmic process itself, I realized that the transfer from digital to physical exaggerated the threatening effect these uncanny images can have on an audience. While this over-exaggerated ghostlike presence was not initially anticipated in the masks, I decided to welcome what the algorithm presented to me. While my initial intention in imagining these defensive artifacts was to think first and foremost about resisting facial recognition apparatuses, the spookiness of these masks can itself be recognized as a form of protection. Their ghostly aspect, while it can seem threatening to some, can also be interpreted as a metaphorical fortification of the user’s precious biometrics, in which case their role is to build a system of preservation and camouflage. Yet, in deciding to confront the gaze of the camera, and in intending to create a space for resistance, I wish to oppose the oppression of recognition technologies. This aspect of the work took form in another series of experiments, introducing an avatar embodying a fictional character for invading machine learning systems while staging a laboratory for capturing images, which is discussed in the next chapter.

51 In 1970, Japanese researcher and roboticist Masahiro Mori described this effect within his own domain of study as the “Uncanny Valley”. His theory has proven increasingly relevant in recent years, as the booming field of robotics produces uncanny creatures resembling science fiction characters from Mori’s era.

iv. Avatar

In Chapter 4 of The Atlas of AI, Kate Crawford describes the various biases inherent to facial recognition as an “epistemic machinery.”52 In understanding the ways in which this oppressive technology primarily reinforces systems of inequity affecting people of color, as described in Joy Buolamwini’s research at MIT and featured in the 2020 Netflix documentary Coded Bias, this research understands the act of wearing a mask as a gesture against the extensive oppression of surveillance technologies.53

52 Kate Crawford, The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021, 131.
53 Shalini Kantayya, Coded Bias, 7th Empire Media, 2020, https://www.netflix.com/title/81328723

Figure 5: Still from the Avatar performance with the “YOLOv3” object recognition algorithm, 2021.

This new body of work is constructed around a series of experiments with recognition technology, through a fully embodied experience and the formation of an avatar that serves as a subject for investigation. The artificial environment, personified by a virtual blue background, delimits this exploratory space and establishes a shielded territory for interacting with recognition technologies. While mimicking cinematic and imaginary representations of cyberspace, this virtual laboratory operates as a safe, controlled environment. The color blue here refers to online interfaces’ default color, thus creating a meta-space for this avatar to exist in.
I engage in impersonating an avatar embodied by a translucent, or semi-transparent, poncho and a prototype mask; in this process I wish to remain anonymous, but I also aim to create a full physical persona that further embodies ideas of protectiveness, simulation, and artificiality. While transparent textures are less easily distinguishable to the technology, the whole costume intends to confuse the algorithm in its recognition of a human body or figure. In performing for the recognition algorithm, I create a moment of confrontation, allowing for simultaneous vulnerability and protection, and thus determining the appointed costume for this series of experiments. When discussing the work of social anthropologist Tim Ingold, De Vries and Schinkel tell us:

‘the mask is not a disguise intended to hide the identity of the bearer’ (2000: 123). Rather, practices of masking and camouflage intervene in the way the self becomes visible in relation to the self, others and to its environment in the first place. To avoid being captured by recognition algorithms, camouflage provides a way to vanish in the background, to nonidentity.54

54 Patricia de Vries and Willem Schinkel, Algorithmic anxiety: Masks and camouflage in artistic imaginaries of facial recognition algorithms, Big Data & Society, January–June 2019, 6.

In my experiments with facial recognition technologies, I do not intend to hide myself from the world but rather to embody a new impersonated character, unrecognizable to facial recognition yet paradoxically visible and significant to myself and to my audience. The avatar becomes an empowered character directly facing the gaze of surveillance algorithms. The online platform Runway ML has served as an ideal terrain for investigating recognition technology, given its accessibility and popularity among content creators in the machine learning community. I first experimented with their facial recognition model but, as anticipated, it did not recognize my own face while I performed with the mask, because the mask’s biometrics are different from my own. I then quickly moved to one of their object recognition models, humorously named “YOLOv3”, to discover that it would, for the most part, not recognize my avatar as a person but as either “a dog, a sheep, a donut, a horse, a teddy bear, a bird, or a cow”. Not knowing that this object recognition algorithm had been trained on such a series of objects and animals, I was surprised to receive such amusing feedback. Additionally, these ridiculous results seemed to mock the threatening aspect of recognition technologies, thereby demystifying their application, which ultimately served this research in creating a sense of empowerment. This demystification embodies my larger research questions and resistance strategies.
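Runway ML hosts YOLOv3 as a ready-made model, and the same underlying detector is openly available; a minimal sketch of this kind of detection experiment, assuming a local OpenCV setup rather than Runway’s actual pipeline, could look as follows. The file paths (the standard COCO label list and the publicly released Darknet config and weights) and the still-frame name are assumptions for illustration.

```python
import cv2          # OpenCV with the dnn module
import numpy as np

# Standard COCO label list and the publicly released YOLOv3 files;
# paths are assumptions for a local setup, not Runway ML's hosted model.
with open("coco.names") as f:
    classes = [line.strip() for line in f]

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("avatar_still.png")  # hypothetical frame from the performance
class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)

# Announce each detection the way the installation's voices do.
if len(class_ids) == 0:
    print("I don't recognize this object")
else:
    for class_id, score in zip(np.array(class_ids).flatten(),
                               np.array(scores).flatten()):
        print(f"I see {classes[int(class_id)]} ({float(score):.0%})")
```

The mislabels reported above (“a dog, a sheep, a donut, a horse, a teddy bear, a bird, or a cow”) are all categories from COCO, the dataset the standard YOLOv3 release is trained on: the model has no concept of a masked performer in a translucent poncho, so it snaps the unfamiliar silhouette onto the nearest of its learned classes.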
Figure 6: Avatar 01, 02, 03, 04, LCD display screens, digitally engraved acrylic sheets, mounting arms, electrical cables, 2021.

Every interaction with the “YOLOv3” algorithm was then screen-recorded and edited into four video pieces corresponding to each of the four masks. The final videos were meant to be seen as documentation of my interaction with recognition technology, and to represent a flux of images and glitches embodying the tension and dissonance created within such complex systems. By performing through the avatar, gazing at the camera, and shaping movement with my whole body, I intended to provoke object recognition systems, but also to distress the technology itself. This performance acts as a way to confront facial recognition systems and embodies an idea of resistance inherent to the ways my own disguise and movements deceive the accuracy of the algorithm. Ultimately, the video pieces revealed my own interaction with recognition technologies, but also introduced a sense of novelty into the work by highlighting the absurdity and imperfection of the technology.

The four video pieces were then installed in sequence, along with their four respective masks, in an installation entitled Attempts at Invading Machine Learning Systems, which took place at the Nook gallery in December 2021. Similar to the piece Monitoring, they were exhibited as sculptural works mimicking a bodily form, thus giving them a sense of ghostly presence. The video pieces were displayed on four 7” monitor screens attached to mounting arms, each fixed onto the wall through a blue acrylic sheet engraved with shadowy, indiscernible biometric patterns. All four monitor pieces were then plugged into power outlets located on both adjacent walls of the Nook through an extensive network of blue power cables, which emphasized the corporeal nature of the work. While the central objective of this installation was to reveal the inaccuracy of the recognition algorithm with a certain sense of novelty, the work was also perceived as intimidating, an effect accentuated by the eeriness of both the images and the audio component of the work. To some extent, this installation has the ability to mock surveillance technologies, but also the faculty to remind the audience of their sinister nature.

Figure 7: attempts_at_invading_machine_learning_systems, LCD display screens, digitally engraved acrylic sheets, mounting arms, electrical cables, algorithmically generated masks, masking tape, 2021.

The sound of this installation was composed of glitch noises matching the video defects, along with ambient instruments and recordings of 3D printers in motion. It also consisted of both male and female robotic voices addressing what the algorithm did or did not recognize. Sentences such as “I see a bird” or “I don’t recognize this object” were embedded in the soundtrack as an audio reminder of the algorithmic experiments, which ultimately only increased the haunting effect of the video.
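The paper does not specify how these robotic voices were produced; purely as an illustration of the mapping from detection output to spoken line, a sketch using the offline pyttsx3 text-to-speech library (an assumed stand-in, not the installation’s actual toolchain) might look like this.

```python
import pyttsx3  # offline text-to-speech; an assumed stand-in for the actual voices

def announce(labels, engine):
    """Speak one line per detection, echoing the installation's phrasing."""
    if not labels:
        engine.say("I don't recognize this object")
    for label in labels:
        engine.say(f"I see a {label}")
    engine.runAndWait()

engine = pyttsx3.init()
# engine.getProperty("voices") lists the platform's voices, which could be
# used to alternate between male and female speakers as in the soundtrack.
# Hypothetical detections, e.g. collected from the YOLOv3 loop sketched earlier:
announce(["bird", "teddy bear"], engine)
announce([], engine)
```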
The overall installation aimed to exhibit my own investigations with facial recognition software, thereby creating a space for resisting such systems, in order to sensitize an audience to the predominance of these invisible forces. Today’s lack of agency and transparency regarding surveillance technologies is highly symptomatic of systems of mistrust embedded in mediated experiences. Ultimately, Attempts at Invading Machine Learning Systems reveals the imperceptible forces associated with recognition technologies, rendering an anxiety-driven mise en scène that not only mimics these apparatuses but amplifies their effects. In doing so, this work aims to unveil opaque and authoritative systems of surveillance operating in the dark.

This installation concluded my tangible experiments with facial recognition before I turned to investigating the significance of surveillance technologies in cyberspace. Attempts at Invading Machine Learning Systems left me wondering whether unrecognizability and, simultaneously, visibility could be achieved by using a digital avatar. Navigating a brand-new terrain, I decided to experiment with three-dimensional animated environments in order to better infiltrate and examine surveillance issues within cyberspace.

v. Beta

This new body of work, entitled Beta, is a considerable shift from my previous work produced in this research, and it does not follow the same methodological structure of investigating machine learning systems. Instead, Beta emerged from my assumption that virtual environments would not necessarily expose me to facial recognition algorithms. As mentioned in my positionality statement, I have long imagined some of my favorite virtual environments as ideal territories for refuge, in which my own biometrics would not be instrumentalized. However, despite tech giant Meta’s latest statement on shutting down facial recognition within its Facebook platform, recent articles have shown that Meta’s and Microsoft’s metaverse environments are in fact subjecting users to new and improved forms of facial recognition through the use of virtual reality masks or augmented reality glasses.55 56 57 These technologies are becoming normalized within virtual environments and are increasingly incorporating biometric sensors, putting users in an even more vulnerable position.58 Other menacing technologies, such as iris recognition for VR and AR headsets, are also under development by tech companies like Apple. It is important to note that these new headsets are required to enter metaverse environments such as Meta’s Horizon Worlds, which cannot simply be accessed from a phone, a tablet, or a computer.59

55 Toby Tremayne and Ryan Gill, “We need to Kick Big Tech out of the Metaverse”, wired.com, July 7, 2021, last accessed March 2022: https://www.wired.co.uk/article/metaverse-big-tech
56 Jerome Pesenti, “An update on our Use of Face Recognition”, fb.com, November 2, 2021, last accessed March 2022: https://about.fb.com/news/2021/11/update-on-use-of-face-recognition/
57 Khari Johnson, “Facebook Drops Facial Recognition to Tag People in Photos”, wired.com, November 2, 2021, last accessed February 2022: https://www.wired.com/story/facebook-drops-facial-recognition-tag-people-photos/
58 Magnopus UK, “Biometrics Level Up VR And Provide The Next Leap Forward In Human/Computer Interaction”, medium.com, May 5, 2021, last accessed March 2022: https://medium.com/xrlo-extended-reality-lowdown/biometrics-level-up-vr-and-provide-the-next-leap-forward-in-human-computer-interaction-293c03983f15
59 Khari Johnson, “Facebook Drops Facial Recognition to Tag People in Photos”, wired.com, November 2, 2021, last accessed February 2022: https://www.wired.com/story/facebook-drops-facial-recognition-tag-people-photos/

This has prompted me to question metaverse environments and initiated a new body of work that investigates the implications of surveillance technologies within virtual environments promoted as safe. Shoshana Zuboff describes what she calls “the right to sanctuary” as “the human need for a space of inviolable refuge which has persisted in civilized societies from ancient times.” She also suggests that this right is now “under attack as surveillance capitalism creates a world of ‘no exit’ with profound implications for the human future at this new frontier of power.”60 It is within this context that my new body of work investigates virtual environments as spaces for refuge, while acknowledging that the instrumentalization of biometrics follows us everywhere.

60 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Profile Books, 2019, 21.

Figure 8: Still from Beta, three-dimensional animated movie, 2022.
While addressing their journey venturing into the potential of reinventing oneself within virtual spaces, American curator and writer Legacy Russell said, “I searched for opportunities to immerse myself in the potential of refusal.”61 In my own practice, Beta serves as an opportunity for refusing systems of surveillance within immersive environments. By wearing an algorithmically generated mask as a symbol of resistance, my own avatar seeks protectiveness. The didactic approach of the work, through the use of a critical monologue, embodies a strategy of resistance and provides the viewer with extensive information about the implications of surveillance technologies within virtual environments.

61 Legacy Russell, Glitch Feminism: A Manifesto, 2020, 14.

I was heavily influenced by Zach Blas’ 2017 video installation im here to learn so :)))))), in which he resurrected Tay, an A.I. chatbot developed by Microsoft, as a reanimated 3D avatar. Blas’ video piece humorously pastiches contemporary internet cultures and sassy language through a monologue describing the fabrication of a dysmorphic avatar. In my own work, I have followed a similar strategy, using a critical monologue that follows the construction of my own avatar. I also became increasingly interested in the photorealistic renderings of digital avatars manifesting in the video work of CGI artist Ed Atkins. I was taken by the high-definition texture and humanlike details portrayed in his avatars, and the way they simultaneously conveyed a feeling of unease but also a certain sense of attachment. Atkins’ HD avatars have been described by new media researcher Francesco Spampinato as “hyper-real melancholic self-portraits, avatars that talk about loneliness and illness,” which, according to the author, trigger an “audio-visual illusion of transcendence,” suggesting that we can feel for and by a computer-generated character.62 The effect Atkins’ avatars had on me became important in the making of Beta, considering that I was ultimately interested in creating a sense of empathy for my own avatar and his struggle to find a place to exist.

62 Francesco Maria Spampinato, “Ed Atkins: Melancholic Avatars in HD”, 2014.

Considering that many individuals create their own cyber avatars with the intention of escaping recognition technologies, my work responds to these experiences and illustrates the process of fabricating a digital self. Phenomena such as escapism or techno-utopianism are not new to the internet. These ideologies have been part of early online video game experiences, from MMORPGs to online multimedia platforms such as Second Life.63 However, the metaverse that big tech corporations are building seems to replicate expanded systems of surveillance within cartoonish, friendly, emoji-esque gaming environments, which tends to be misleading. This idea was recently promoted by Meta CEO Mark Zuckerberg in a video presentation entitled “The Metaverse and How We’ll Build It Together”, released in October 2021. In this promotional video, Zuckerberg introduces his new company and its respective mediated environments as “fun” and “safe.”64

Figure 9: Beta, multi-channel video installation, video projectors, prototype masks, projector stands, 2022.
63 Second Life is an online multimedia platform that allows people to create an avatar for themselves and live a second life in an online virtual world.
64 Meta, “The Metaverse and How We’ll Build It Together”, YouTube video, 1:17:26, October 18, 2021, last accessed March 2022, https://www.youtube.com/watch?v=Uvufun6xer8&ab_channel=Meta

Beta is a three-dimensional immersive exhibition that considers the ways in which structures of control are embedded in the environments known as the metaverse, and it intends to depict uncanny environments contrasting with those suggested by metaverse platforms such as Horizon Worlds or Decentraland. Furthermore, in questioning the definition of the metaverse, as presented by tech giants Microsoft and Meta, as an idealistic place promoting escapism and techno-utopianism, this hypnotic environment unveils opaque and authoritative systems of surveillance embedded within virtual environments regulated by major tech companies.

Beta was exhibited in the context of my thesis exhibition. The RBC New Media Gallery, located on the second floor of Emily Carr University, presented itself as an ideal venue to expand the original video experience from the two-dimensionality of the screen into a fully embodied three-dimensional environment. The work manifests through a three-channel video projection in a space that allows the audience to be surrounded by the projected images. The exhibition takes the form of a hypnotic and empathic dreamscape shaped by a large scope of emotional nuances: dislocation of both space and time, but also fear, vulnerability, embodiment, and compassion. The three asynchronous video channels occupy all of the RBC gallery walls, further increasing the overall immersive experience. Reproducing an omnipresent surveillance atmosphere, this strategy gives the impression of constantly being watched within the space; one video could be playing in front of the viewer while another starts behind them.

Additionally, five of my previous prototype masks were attached to tripods at various heights in the gallery space. This visual strategy intensifies the haunting presence of the work in a space populated by new sculptural, bodily objects. Each sculpture faces one of the projection screens and invites the viewer to look at the video through the masks. In doing so, the viewer identifies with these uncanny beings and shares a collective experience in a space occupied by fabricated and non-fabricated bodies. The placement of the sculptures also draws the audience toward the center of the gallery in order to be fully immersed in the work.

Furthermore, the transcendent musical score accentuates the emotional relation between the avatar and the viewer. The tone, rhythm, and choreography of the sound component are mesmerizing, bringing the audience in and out of spatial awareness while suggesting attention and care for the avatar. The monologue is composed of an A.I.-generated voice intertwined with my own. This audio strategy aims to disorient the viewer and to put the narrative voice itself in question. Where does it come from? The artist? The avatar? Both? In doing so, I remind the audience of the fabricated nature of the avatar while also humanizing it. Ultimately, this strategy allowed me to accentuate a sense of empathy for the avatar, which paralleled my own personal conflict in constructing an online self that resembles me but also differs from me.
The overall experience of the exhibition creates both a feeling of unease relating to the effects of algorithmic anxiety and a sense of agency regarding issues with facial recognition in virtual environments today, by suggesting the role and impact of facial recognition on users online.

This first attempt at investigating immersive spaces like the metaverse, or similar virtual 3D-rendered worlds, as both a subject for experiment and a new medium has been very liberating for my practice. The limitations I encountered with previous photographic or sculptural work were transcended in working with 3D modeling and animation. Spatial awareness has always been important to my practice, and the ability 3D software provided for constructing my own spaces has radically changed the way I approach art making. Working with virtual video pieces conjured a new language for articulating my subject of investigation and allowed me to engage with my ideas in a more straightforward way.

>Artistic References

In addition to grounding this research in theoretical thinking, artists such as Hito Steyerl and Zach Blas play important roles as inspirational resources, but also as a foothold for my own investigation in new media. Hito Steyerl’s early video work, for instance, initiated a strong interest in experimenting with video content as an art form. In her 2013 video essay How Not to Be Seen: A Fucking Didactic Educational .MOV File, Steyerl humorously pastiches the Monty Python’s Flying Circus sketch of the same name to reveal the danger of surveillance technologies. Steyerl attempts to demystify contemporary systems of control by suggesting absurd protective strategies that demonstrate the capabilities of earth observation satellites and the artifices embedded in digital images. Throughout the piece, Steyerl proposes a didactic approach to mock the gaze of global observation systems and to suggest a sense of resistance.

In her work, Steyerl experiments with speculative fiction and first-person narrative while investigating cinematic experiences that transcend the documentary form into comprehensive video essays. In grounding her work in contemporary political contexts, Steyerl is interested in anticipating potential outcomes regarding the impact of today’s mediated experiences within the culture of image proliferation. In mimicking the form of an educational video, Steyerl hosts a staged performance for the viewer while providing informational content organized into different chapters. She guides the viewer’s experience into an uncanny digital territory supported by documentary footage, virtual environments, and constructed studio sets. Steyerl first transports her audience to a fictional space where reality is absorbed by the artificiality and architecture of the video form. The viewers are then pulled away from that fabricated reality and forced to recognize the artifices constructed by the artist. For instance, Steyerl’s material use of green screens acts as literal backgrounds, but also as metaphorical dividers that reveal the construction of digital artifacts. Steyerl thereby adopts green screens as symbols for the artificiality of surveillance imagery and the deceptive reality of such technologies.

In addition to articulating a language of illusion and deception, Steyerl concentrates on exploratory strategies against surveillance and communication technologies to evoke a sense of resistance toward invisible power systems.
While some of these strategies can be seen as inefficient, or even inadequate against apparatuses of surveillance, they remain a way of understanding contemporary technologies while deflating the threatening effect they inflict on individuals today. This aspect of Steyerl’s work has increasingly influenced my own research and introduced opportunities for resisting systems of control in forthcoming video performances.

Similar to Steyerl’s, my research investigations strive to intervene in systems of knowledge that inform issues regarding privacy and representation, while adopting playful and didactic strategies. For instance, in deconstructing the absurdity of images produced via machine learning systems, my video work refers to the construction of algorithmically generated images by pointing to their process, but also highlights the uncanniness and absurdity of generative images. Additionally, my research intends to replicate the intimidating and at times humorous tone articulated in Steyerl’s video work, which is inherent to the technology but also provoked by the bizarre imagery produced by generative models. Furthermore, Steyerl’s strategies of seeking invisibility resemble the role of the physical masks generated in my own research, in that they do not unquestionably offer suitable protective apparatuses against surveillance technologies, but rather propose unusual forms of representation that exist through a language of reconfiguration and playfulness proper to both irony and novelty. This language allows for a decontextualization of the terrain of the work, but also for challenging the critical rapport between humans and technologies. Additionally, the use of fictional spaces and fabricated backgrounds constitutes a material choice that allows for hosting performative and staged experimentation in both Steyerl’s and my own video work. The materiality of these digitally fabricated spaces embodies the boundless walls of the fictional realm and operates as a territory in which resistance and critical thinking can effortlessly manifest.

Finally, Hito Steyerl’s video essay suggests that counter-surveillance strategies, whether they are efficient or not, provoke a sense of empowerment and embodiment, but also inform the audience about privacy issues while proposing alternative ways of self-representation, which is further explored in my own research.

Like Steyerl, Zach Blas investigates the politics of surveillance technologies and attempts to reveal the infrastructure embedded in computer vision systems. In his 2014–2016 installation and performance work entitled Face Cages, Blas collaborated in a research-based project with three queer artists: Micha Cárdenas, Elle Mehrmand, and Paul Mpagi Sepuya. While his previous project, Facial Weaponization Suite, initiated early explorations of the biases of biometrics associated with surveillance technologies and stigmatized groups, Face Cages extends his research into a visual laboratory that focuses on biometric diagrams associated with what the artist defines as “Queer Opacity”. Describing his project as “a dramatization of the abstract violence of the biometric diagram,” Blas generated face cages mapped out of the four participating artists’ biometric signatures.65 Resembling torture devices such as handcuffs, or prison bars for the face, his fabricated metal face cages make a direct reference to slavery and medieval apparatuses of torture.
Blas recontextualizes yesterday’s systems of control and domination in a new form of metal face cages, which are ultimately worn in endurance performances captured on video. The final work materializes in a gallery installation regrouping the four metal face cages along with their respective performance videos.

65 Zach Blas, “Face-Cages”, zachblas.info, last accessed November 2021, https://zachblas.info/works/face-cages/

Zach Blas is an American visual artist and writer based in London, UK, who investigates the politics of technologies and queerness while engaging with contemporary issues regarding the ethics and biases of inbuilt computational frameworks and devices. Blas’s performative project reacts to the current authority of new technologies and their impact on stigmatized groups within Western societies. The point of departure is personal to the American artist, who identifies with the queer community, and it unfolds into a universal experience dictated by the dominance of surveillance systems. This process of starting from the personal realm to reach a collective understanding of today’s digital hegemony is what makes this collaborative project so successful. In engaging with data visualization, Blas’s project manifests in tangible objects that highlight the importance of representing the unseen, precisely because understanding invisible forms of power reveals what we are unconsciously subjected to. Blas’s Face Cages thus act as an illustration of the invisible and immaterial forces that instrumentalize biometrics. In this way, Blas’s cages embody a sense of revelation rather than an exercise in masking. The tension in the work, described as “Queer Opacity” by the artist, sets a stage for resistance and, paradoxically, a desire for visibility. Transparency and opacity oppose one another but also complement each other in a work that highlights the tension between visibility and invisibility within mediated environments. The work culminates in an endurance performance that demonstrates the stigmatization of the queer community, in which the artists’ gazes are directed at the audience in order to induce a sense of empowerment and resistance.

Like Blas, my own research takes an interest in biometrics, but rather than using signature profiles as a starting point for the work, it uses images of various types of masks as initial data input. These images do not possess a biometric signature and allow for unrecognizability. My research proposes to adopt masks as references to the human face, but also as anonymous devices that do not reveal individualities but rather conglomerate human features into regenerated wearables, which ultimately afford a sense of protection.

Finally, in operating against forms of biometric control and exploitation, my research parallels Blas’s and comments on the intertwined relationship between humans and technology, as well as the ever-growing presence of digital artifices around us.

>Conclusion

Artifices can be seen as protective devices offering a sense of defensiveness, but also as forms of resistance operating within mediated systems. As suggested in this thesis support paper, the emergence of surveillance technologies capitalizing on individuals’ biometrics can press today’s users of communication tools, devices, and platforms to opt out of such systems. In considering the current proliferation of surveillance technology on the social web, this research addresses the unethical relationship between active users and structures of control.
In concentrating on my own relationship with such systems and interfaces, I have established a practice of investigating models of resistance through my own experiences, which are, like those of millions, invariably subjected to invisible forces induced by a syndrome of algorithmic anxiety. Ultimately, I hope that my artwork allows for a sense of agency in the viewer, as much as it did for me in the making of this work, while seeking to demystify systems of control and dominance.

In regarding the relationship between humans and technology within a context of artmaking and cultural production, my research is characterized by a process of learning and disseminating knowledge. In supporting various directions of investigation, the continuous learning process also constitutes a fundamental element of the work. Uncovering the tools, materials, traditions, and contexts of new technologies, such as machine learning systems, while simultaneously producing a responsive artwork, remains the most significant challenge of my research. Ultimately, contradictory ideas such as visibility versus invisibility, or presence versus absence, do not oppose one another or cancel each other out, but rather create a tension necessary for opening a space for resistance and refusal. Using facial recognition technologies in order to respond to problems associated with privacy in some way reappropriates a language of subversion that allows for reversing the gaze on systems of power and revealing their strategies. Finally, these explorations contribute to raising awareness toward extractive systems and surveillance technologies, while questioning the application of artificial intelligence and its potential future outcomes.

>Post-Defence Reflection

In light of the exceedingly rich conversation that unfolded during my defence, I would like to use this post-defence reflection to clarify the nuanced balance between techno-utopianism and techno-dystopianism in my work, and how I have approached these two notions over my two years of research at Emily Carr University. During my first year of investigation, I came across some relatively threatening terms, like data mining, facial recognition, or systems of control. I quickly realized that I was confronted with the sinister nature of my terrain of investigation, and I started thinking about ways in which these systems and technologies could be resisted through camouflage apparatuses. As a reaction to the menacing aspect of these systems, I began to develop, without necessarily realizing it, a more techno-dystopian understanding of and relationship to these practices, because both camouflage and resistance presume a danger or external attack. Consequently, the work I started to produce initiated a process of active defensiveness responding to an existing menace in the “in real life” world. However, I have also started to recognize how much facial recognition software is adopted and circulated in an experience of desire and play, through social media platforms and filter applications, which induce a two-way relationship between users and technology. In this context, the interrelationship is not an external attack directed at users, but rather more of a conversation or exchange with facial recognition technology.
Furthermore, there is also a notion of desire, and almost of pleasure, in investigating virtual spaces as places of refuge when IRL experiences seem to be under attack by systems of control and surveillance, which can be understood as a more techno-utopian relationship with technology. In a piece like Beta, for instance, I started to build a relationship of empathy and an aesthetic of delight and connection, which brought a more nuanced understanding of terms such as techno-utopianism and techno-dystopianism. I think that over these two years of research I have slowly started to reconcile the two concepts, and ultimately to produce work that does not adopt one over the other, but rather dilutes their opposition through conversational art pieces rather than confrontational ones.

Finally, I would like to end this post-defence reflection by quoting the words of WIRED Global Editorial Director Gideon Lichfield as a closing statement to this thesis:

To me, the answer begins with rejecting the binary. Both the optimist and pessimist views of tech miss the point. The lesson of the last 30-odd years is not that we were wrong to think tech could make the world a better place. Rather, it’s that we were wrong to think tech itself was the solution—and that we’d now be equally wrong to treat tech as the problem. It’s not only possible, but normal, for a technology to do both good and harm at the same time.66

66 Gideon Lichfield, “Welcome to the New WIRED”, wired.com, January 3, 2022, last accessed April 2022: https://www.wired.com/story/welcome-to-the-new-wired/

>Bibliography

Abrams, Loney. “You’re Being Watched: Trevor Paglen on How Machine-Made Images Are Policing Society & Changing Art History.” artspace.com. September 15, 2017. Last accessed October 2021. https://www.artspace.com/magazine/interviews_features/qa/trevor-paglen-on-how-machine-made-images-are-asserting-power-over-society-54992

Asimov, Isaac. The Last Question (When the World Ends). 1956. PDF file. Last accessed November 2021. https://www.physics.princeton.edu/ph115/LQ.pdf

Blas, Zach. “Face-Cages.” zachblas.info. Last accessed November 2021. https://zachblas.info/works/face-cages/

Blas, Zach. “Facial Weaponization Suite.” zachblas.info. Last accessed March 2022. https://zachblas.info/works/facial-weaponization-suite/

Briers, Anna. “Conflict In My Outlook.” Exhibition essay. The University of Queensland, Brisbane. August 2020. Last accessed January 2022. https://www.conflictinmyoutlook.online/exhibition-essay

Briziarelli, Marco and Armano, Emiliana (eds). The Spectacle 2.0: Reading Debord in the Context of Digital Capitalism. London: University of Westminster Press. 2017.

Chang, Alvin. “The Facebook and Cambridge Analytica scandal, explained with a simple diagram.” vox.com. May 2, 2018. Last accessed January 2022. https://www.vox.com/policy-and-politics/2018/3/23/17151916/facebook-cambridge-analytica-trump-diagram

Chen, Brian X. “Are Targeted Ads Stalking You? Here’s How to Make Them Stop.” nytimes.com. August 15, 2018. Last accessed January 2022. https://www.nytimes.com/2018/08/15/technology/personaltech/stop-targeted-stalker-ads.html

Counts, Aisha. “Trust in Big Tech is in free fall, according to a new survey.” protocol.com. October 4, 2021. Last accessed February 2022. https://www.protocol.com/bulletins/trust-big-tech-facebook
Crawford, Kate. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. 2021.

Crispin, Sterling. Data-masks: Biometric Surveillance Masks Evolving in the Gaze of the Technological Other. 2014. Last accessed March 2022. http://www.sterlingcrispin.com/Sterling_Crispin_Datamasks_MS_Thesis.pdf

de Vries, Patricia and Schinkel, Willem. “Algorithmic anxiety: Masks and camouflage in artistic imaginaries of facial recognition algorithms.” Big Data & Society. January–June 2019.

Find Biometrics. “Apple’s VR Headset Could Support Iris Biometrics: Kuo.” findbiometrics.com. March 23, 2021. Last accessed March 2022. https://findbiometrics.com/apples-vr-headset-could-support-iris-biometrics-kuo-73202101/

Haraway, Donna. “A Cyborg Manifesto: Science, Technology, and Socialist Feminism in the Late Twentieth Century.” In Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledge. 1991.

Hennequin, Dom. “Why is Blue the Internet’s Default Color.” envato.com. January 9, 2018. Last accessed November 2021. https://envato.com/blog/blue-internet-default-color/

Hill, Kashmir. “The Secretive Company That Might End Privacy as We Know It.” nytimes.com. January 18, 2020. Last accessed January 2022. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

Johnson, Khari. “Facebook Drops Facial Recognition to Tag People in Photos.” wired.com. November 2, 2021. Last accessed February 2022. https://www.wired.com/story/facebook-drops-facial-recognition-tag-people-photos/

Kantayya, Shalini. Coded Bias. 7th Empire Media. 2020. https://www.netflix.com/title/81328723

Knibbs, Kate. “Selfies are now the most popular genre of photo; in related news everyone’s the worst.” digitaltrends.com. June 20, 2013. Last accessed April 2021. https://www.digitaltrends.com/social-media/selfies-are-now-the-most-popular-genre-of-picture-and-in-related-news-everyones-the-worst/

Lichfield, Gideon. “Welcome to the New WIRED.” wired.com. January 3, 2022. Last accessed April 2022. https://www.wired.com/story/welcome-to-the-new-wired/

Magnopus UK. “Biometrics Level Up VR And Provide The Next Leap Forward In Human/Computer Interaction.” medium.com. May 5, 2021. Last accessed March 2022. https://medium.com/xrlo-extended-reality-lowdown/biometrics-level-up-vr-and-provide-the-next-leap-forward-in-human-computer-interaction-293c03983f15

Meta. “The Metaverse and How We’ll Build It Together.” YouTube video, 1:17:26. October 18, 2021. Last accessed March 2022. https://www.youtube.com/watch?v=Uvufun6xer8&ab_channel=Meta

Pesenti, Jerome. “An update on our Use of Face Recognition.” fb.com. November 2, 2021. Last accessed March 2022. https://about.fb.com/news/2021/11/update-on-use-of-face-recognition/

Russell, Legacy. Glitch Feminism: A Manifesto. 2020.

Schiller, Daniel. Digital Capitalism: Networking the Global Market System. MIT Press. 1999.

Spampinato, Francesco Maria. “Ed Atkins: Melancholic Avatars in HD.” 2014.

Spirit Media. MIT Media Lab. 2018. Last accessed January 2021. https://spirits.media.mit.edu/index.html

Tanz, Jason. “Soon We Won’t Program Computers. We’ll Train Them Like Dogs.” wired.com. May 17, 2016. Last accessed October 2021. https://www.wired.com/2016/05/the-end-of-code/

The Canadian Press. “B.C., Alberta, Quebec watchdogs order Clearview AI to stop using facial recognition tool.” cbc.ca. December 14, 2021. Last accessed January 2022. https://www.cbc.ca/news/canada/clearview-ai-facial-recognition-1.6286016

“The top 500 sites on the web.” alexa.com. Last accessed October 2021. https://www.alexa.com/topsites

Thompson, Nick. “Guy Fawkes mask inspires Occupy protests around the world.” CNN World. November 5, 2011. Last accessed March 2022. https://www.cnn.com/2011/11/04/world/europe/guy-fawkes-mask/

Tremayne, Toby and Gill, Ryan. “We need to Kick Big Tech out of the Metaverse.” wired.com. July 7, 2021. Last accessed March 2022. https://www.wired.co.uk/article/metaverse-big-tech

Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books. 2019.