AI and 'Being Human'
Authors: Mariam Khaled and Mahima Jain
Format: Desktop Documentary
Duration: 14′ 36″
Published: June 2025
This is what our regular ChatGPT outings look like:
You said:
What does it mean to be human?
ChatGPT said:
To be human is to experience self-awareness, complex emotions, and the capacity for reason, while engaging in cultural, social, and creative practices.
You said:
what does it mean to be human in the face of technologies like artificial intelligence?
ChatGPT said:
In the face of technologies like artificial intelligence (AI), being human involves reflecting on and valuing our unique capacities for empathy, creativity, and ethical reasoning, which distinguish us from machines and shape our interactions with technology.
This seemingly neutral dialogue already demonstrates how AI systems, like ChatGPT, draw from dominant epistemologies that often marginalize or erase other ways of knowing and being (Benjamin, 2019; Noble, 2018). The video essay situates these encounters not merely as isolated instances of bias, but as symptomatic of the larger systemic tendencies of AI to racialize and dehumanize, reinforcing global hierarchies of knowledge and power.
In the face of technologies like ChatGPT, being human involves reflecting on and valuing unique capacities for empathy, creativity, and ethical reasoning, which distinguish the human from the machine and therefore shape human interactions with technology (Floridi, 2014; Turkle, 2011; Brey, 2008; Boddington, 2017; Coeckelbergh, 2021). However, the correspondence with ChatGPT shown above fails to embrace cultures, races, and forms of humanity other than the “standard,” which is often presented and portrayed as white (Cave & Dihal, 2020). This raises the question of where the different nuances of life and humanity disappear to in technologies like ChatGPT.
As such, the video takes the stance that AI displays cultural bias, as Blackwell puts it: “AI is shaped by culturally-specific imaginaries and implemented through the situated action of cultural agents, including the engineers that create algorithms, the workers ‘hidden’ within the elaborate technological costumes, and business people who deploy rhetoric and showmanship to package and sell the resulting spectacle as ‘AI’” (2021). AI is rooted in the culture it is produced in, and hence carries its inherent biases, prejudices, and colonial identity. It is not only derived from and informed by the culture of its creators; it also informs the culture in which it is consumed.
The video essay is seen from the personal screens of the authors, and through their discussions, their interaction with AI, and the exploration of digital subjectivities. As such, the video uses a decolonial and critical lens to interrogate how AI systems both narrow definitions of humanness and perpetuate the dehumanization of racialized Others through colonial and systemic violence (Benjamin, 2019). By doing so, the video essay focuses on how technology has been a tool of colonialism, and how AI continues that project. It hence examines how AI technologies shape experiences of humanity by exposing how dehumanization is perpetuated at multiple levels: in their production, through the exploitation of labor and extraction of data, which the video stages through layered desktop sequences showing invisible infrastructures; in their development and operations, where colonial and racialized logics are encoded into algorithms, made tangible in the video through dialogues with ChatGPT that flatten or erase cultural complexities; and in their consumption, where these logics are normalized in everyday encounters, which the video confronts by performing moments of rupture, ambivalence, and resistance on screen. Technology giants in the West routinely rely on labor and data from the Global South, which often remain invisible at each of these stages. The video highlights how AI continues to perpetuate historical power dynamics, racialization, and colonial ideologies, reinforcing inequities by assigning value based on the color of one’s skin, culture, or nationality. As such, the authors critique popular representations of AI as “objective” and “neutral” by highlighting, through personal experiences, the human labor and colonial biases embedded within these technologies.
The aim is to ask: how to be human in the face of AI? And how would you make an AI beyond humanity?
The starting point for the video is to challenge the viewer’s perspectives on AI by exploring notions and practices of the technology from the Global South (Cave & Dihal, 2020; Patrick & Huggins, 2023). As such, the video focuses on how colonial divides and subjects are being reproduced by AI technologies and companies, but also how Indigenous and cultural practices are attempting to resist such divides (Noble, 2018; Mignolo, 2011), which is best shown through the performativity of the video.
While some AI tools are employed in the making of the video, they are applied with a critical and self-reflexive lens, aiming to juxtapose aesthetics and values of human production with AI production through the artistic integration of visuals and audio. As such, there is a heightened focus on how to navigate both the AI tools and the different interpretations they offer, while remaining critical toward the outcome (Bender et al., 2021).
The video departs from the premise that AI is not only a technological system but also a cultural artifact that shapes and reflects dominant understandings of humanity. By exploring how AI perpetuates specific ways of imagining what it means to be human, the project centers the following research question: What does it mean to be human in a time with Artificial Intelligence?
Following Ruha Benjamin (2019, 2024), Safiya Noble (2018), and Joy Buolamwini (2023), among many other prominent voices in critical AI, the video situates itself as a work critically engaged with AI’s sociocultural implications, bringing in perspectives on humanity from Moroccan and Indian culture. The research question therefore brings together literature, as well as Indigenous knowledge, from different disciplines in a contemporary interpretation of their impact on understanding humanity through the lens of AI.
The initial ideas and frameworks of the video emerged from the reading group called AnotherAI at the Department of Digital Design and Information Studies at Aarhus University. The focus is to challenge our own perspectives on AI by exploring perceptions and practices from the Global South. It focuses on how colonial divisions and subjects are reproduced by AI technologies and companies, but also on how indigenous and cultural practices resist such divisions (Kursuskatalog - Aarhus University, n.d.). Rooted in postcolonial technology studies and technofeminism, the approach deliberately challenges dominant narratives of AI and technology as universal, neutral, and Western-led innovations. These conventional discourses, often prevalent in critical AI studies and mainstream design fields, tend to overlook or marginalize perspectives from the Global South, framing them primarily as passive users or sites of technological deficit. By foregrounding alternative ways of being with technology inspired by Global South epistemologies, the project exposes how these narratives erase the histories, contributions, and resistances of Global South communities in the development, circulation, and critique of AI systems. By incorporating the concept of ‘slowness’ (Andersen & Cox, 2023) into the experience, the video raises speculation about alternative frameworks for designing and engaging with technology, advocating for practices that resist the speed, efficiency, and extractive logics of current AI systems. This research builds upon existing critiques from scholars such as Virginia Eubanks (2018) and Ruha Benjamin (2019), and advances the conversation by emphasizing the need for more inclusive and equitable technological development. 
As such, the video not only contributes to the discourse on AI’s societal impacts but also aligns with broader efforts to decolonize technology and explore how technofeminist and post-colonial perspectives can reshape contemporary interactions with digital tools.
Importantly, working from Global South and feminist perspectives brings specific epistemic and political commitments to the project. These perspectives challenge dominant techno-scientific imaginaries rooted in Western rationality, efficiency, and universality, foregrounding instead relational, situated, and pluriversal ways of knowing and being with technology. This implies not only critiquing AI’s embedded racial, colonial, and patriarchal logics but also engaging in speculative practices that reimagine AI beyond extractive, exploitative, and dehumanizing paradigms.
The project acknowledges its location within academic and technological infrastructures of the Global North, and seeks to navigate these tensions by centering the authors’ own hybrid identities, experiences, and the embodied knowledge they carry from Moroccan, Indian, and diasporic contexts. By doing so, the work attempts to resist extractivist tendencies often found in research on the Global South, instead fostering situated engagements that recognize the complexities and contradictions of working within and against colonial systems.
Situated within the fields of Science and Technology Studies (STS), Postcolonial Theory, and Critical AI, the video essay ultimately probes into the use and effect of metaphors in shaping understandings of AI. Rather than treating metaphors solely as objects of analysis, the video essay employs them as situated, embodied, and performative tools within its autoethnographic framework. Metaphors become both narrative and visual strategies that allow the authors to surface and confront the racializing, dehumanizing, and colonial dynamics embedded in AI systems. Through desktop recordings, screen interactions, and voiceovers, these metaphors materialize as part of the authors’ lived, affective encounters with AI, turning abstract critiques into visceral, relational experiences. This approach activates metaphors as critical devices that expose how AI systems operate on personal, epistemic, and systemic levels, while reflecting on how these violences are encountered, internalized, and resisted by the authors themselves. The project takes its point of departure in STS (Danholt & Gad, 2021) due to its focus on the intersection between technology, society, and (decolonial) subjectivity. This approach allows for a nuanced analysis of the social and visual construction of technology, emphasizing the role of language, histories, and culture in shaping perceptions of AI (Pinker, 2008), and thus enabling a deeper exploration of how such systems contribute to broader epistemologies within cultural contexts (Forsythe, 2002).
In addition, the project employs autoethnography as a core method to reflexively position the authors within the central inquiry of the video: What does it mean to be human in a time with AI? Autoethnography grounds the exploration in the authors' lived experiences and positionalities, enabling an engagement with AI technologies as they are encountered in daily life. This method reveals how racializing, dehumanizing, and colonial dynamics manifest not only systemically but intimately, shaping subjectivities and everyday encounters. Autoethnography foregrounds the embodied and affective dimensions of these interactions, positioning the authors’ own screens, bodies, and dialogues as critical sites for examining how AI produces, regulates, and denies certain forms of humanity.
Using autoethnography enables an exploration of a rich yet contradictory relationship with the selves and the world generated by AI systems (Williams, 2015). Through this, the authors enact a political resistance towards the technologies they critique (Kideckel in Reed-Danahay, 1997), by confronting the modes of control and power such technologies exert. In doing so, the project aligns with McCarthy & Wright’s (2004) assertion that “we don’t just use technology; we live with it,” embracing a co-living, co-producing (Jasanoff, 2004) relationship with the colonial perspectives of AI. This is operationalized through the desktop documentary as genre and the video essay as mode, which offers a space to stage, perform, and interrogate these encounters.
The autoethnographic lens also extends to critique the practices of academic knowledge production and artistic intervention themselves, reflexively interrogating how the authors’ methods are shaped by the institutional and cultural logics of the Global North. By foregrounding the messiness, contradictions, and affective tensions of working from within spaces entangled in colonial, capitalist, and technocratic systems, the project challenges modes of detached critique or universalizing artistic intervention that risk reproducing extractive gestures. In this way, the video essay not only critiques AI systems but also opens a space to question the politics of critique itself, emphasizing the need for situated, embodied, and accountable practices of knowledge-making and intervention.
The desktop documentary format amplifies the autoethnographic critique by leveraging the screen as both site and medium of inquiry. This approach draws on Galibert-Laîné’s (2020) concept of netnographic cinema as a cultural interface, wherein the desktop becomes a performative space that makes visible the entanglements between users, algorithms, and interfaces. Through the layering of browser windows, chat logs, and screen recordings, the video essay materializes the aesthetics of the netnographic encounter, reflecting on how modes of seeing, knowing, and narrating are mediated by platform-specific interfaces. Additionally, following Anger and Lee’s (2023) argument in Suture Goes Meta, the project critically deploys strategies of meta-suture to foreground the ruptures between user, machine, and screen, making the audience hyper-aware of the fractures inherent in both AI’s outputs and the video’s own narrative form. These frameworks support the project’s methodological ambition to use the desktop documentary not as a transparent window, but as a reflexive, layered site where critique, speculation, and self-implication are staged simultaneously.
The desktop documentary’s affordance of making and presenting narrative simultaneously becomes central to this methodology (Kiss, 2021). By practicing what Kiss describes as “purposeful deformative criticism,” the video disrupts dominant narratives about AI systems, reflecting the performative nature of its critique. Transparency, identified by Kiss as a key characteristic of desktop documentary, resonates here, as the critique of AI’s lack of transparency parallels the form’s capacity to expose its own constructedness.
These theoretical frameworks are translated into the video essay’s aesthetic and narrative strategies through personal desktop recordings, screen captures of interactions with ChatGPT, and fragmented visual compositions that deliberately expose the constructedness of the medium. The inclusion of ruptures, distortions, and layered visuals performs a “deformative critique” (Anger & Lee, 2023; Kiss, 2021), unsettling narratives of AI as seamless and objective. The authors’ cultural references, affective reactions, and moments of ambivalence are foregrounded through voiceovers, annotations, and abrupt shifts between personal reflection and critical commentary. This allows the video to render critiques of AI’s racializing and dehumanizing logics not only through argumentation but through the viewer’s embodied and disjointed experience, making the political personal and the personal political.
An important methodological consideration is the deliberate engagement with multiple AI platforms, including ChatGPT, Google Gemini, RunwayML, StableDiffusion, and AIImageGenerator. Each platform embodies different technological imaginaries, user demographics, and modes of interaction, which influence how “being human” is represented, negotiated, and contested within them. Their differing use becomes a deliberate choice to expose how AI systems, depending on their design and audience, frame subjectivities differently: shaping, disciplining, and sometimes flattening complex identities in distinct yet interconnected ways. ChatGPT and Google Gemini operate as conversational agents with wide, generalist audiences, often reflecting and reinforcing Western-centric linguistic and cultural norms under the guise of neutrality and inclusiveness. In contrast, RunwayML and StableDiffusion, used primarily within artistic and creative communities, offer affordances for speculative visual and narrative interventions, while still carrying biases rooted in their training data. By juxtaposing these platforms, the video critically explores how different AI systems frame subjectivities, racialized bodies, and forms of knowledge differently depending on their intended audiences and operational logics. This layered approach allows the video to expose not only the cultural inscriptions embedded in these tools but also to reflect on the shifting, often contradictory, positionalities the authors occupy when engaging with them, highlighting the unstable ways in which AI technologies produce and discipline ideas of humanity.
By centering these personal encounters with AI, the video essay allows the authors to expose the subtle yet pervasive ways in which AI systems enact dehumanization, from flattening complex identities to reproducing stereotypes and colonial narratives. These moments of interaction are not treated as anecdotal but become central analytic scenes where AI’s racializing and dehumanizing logics are confronted and disrupted through critical reflexivity, desktop documentary aesthetics, and performative acts.
The outcome of this project is a video essay that critically intervenes in dominant narratives about AI by situating these systems within the lived, embodied, and messy realities of human experience. The video offers a reflexive, autoethnographic engagement with AI platforms, not only critiquing their racializing and dehumanizing logics but also exposing how such logics entangle with the authors’ own positionalities, practices, and encounters. Through the use of the desktop documentary format, the video performs a situated critique that is both affective and analytic, making visible the complex entanglements between users, interfaces, and the broader epistemic regimes of AI.
Drawing on Galibert-Laîné’s (2020) idea of netnographic cinema as a cultural interface, the video uses the desktop as both a representational and epistemic site to critically engage with AI and its conditions of encounter. Following Anger and Lee (2023), it employs meta-suture techniques to foreground ruptures between human and machine, unsettling viewers by exposing contradictions and resisting the aestheticization of critique.
In terms of methodological contribution, the project expands desktop documentary’s potential as both a research method and a mode of critical AI intervention. By combining autoethnography with desktop documentary’s deformative and reflexive affordances (Kiss, 2021), the video not only documents encounters with AI but actively performs them, making the processes of critique, doubt, complicity, and resistance visible and visceral. This approach seeks to move beyond textual or purely analytic critique, proposing desktop documentary as a space for speculative, affective, and situated modes of knowledge production that foreground positionality, opacity, and relationality.
Furthermore, by juxtaposing engagements across multiple platforms (ChatGPT, Google Gemini, RunwayML, StableDiffusion, and AIImageGenerator), the video essay generates a layered and comparative exploration of how different AI platforms discipline, constrain, and imagine “being human” in divergent yet overlapping ways. This comparative approach highlights the ways in which platform-specific interfaces and user cultures participate in shaping the racial, colonial, and gendered imaginaries embedded in AI systems, while also reflecting on how the authors’ own engagements shift depending on the platform and their role as user, critic, or creator.
Ultimately, the project contributes with both a critical video artifact and a methodological provocation for researchers working at the intersection of critical AI studies, postcolonial theory, and visual ethnography. It proposes desktop documentary, informed by netnographic cinema and autoethnographic reflexivity, as a generative space for interrogating the politics of AI while remaining attentive to the embodied, affective, and fractured ways such systems are lived, resisted, and reimagined.
AI won’t replace humans — but humans with AI will replace humans without AI. (2023, August 4). Harvard Business Review. https://hbr.org/2023/08/ai-wont-replace-humans-but-humans-with-ai-will-replace-humans-without-ai
Andersen, C. U., & Cox, G. (2023). Toward a minor tech. A Peer-Reviewed Journal About, 12(1), 5–9. https://doi.org/10.7146/aprja.v12i1.140431
Anger, J., & Lee, K. B. (2023). Suture goes meta: Desktop documentary and its narrativization of screen-mediated experience. Quarterly Review of Film and Video, 40(5), 1–18. https://doi.org/10.1080/10509208.2022.2033066
Anderson, J., & Rainie, L. (2018, December 10). Artificial intelligence and the future of humans. Pew Research Center. https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
Benjamin, R. (2024). Imagination: A manifesto. W. W. Norton & Company.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.
Bender, E. M. (2024). Resisting dehumanization in the age of “AI.” Current Directions in Psychological Science, 33(2), 114–120. https://doi.org/10.1177/09637214231217286
Bitter, A. (2024, April 3). Amazon’s Just Walk Out technology relies on hundreds of workers in India watching you shop. Business Insider. https://www.businessinsider.com/amazons-just-walk-out-actually-1-000-people-in-india-2024-4?op=1
Boddington, P. (2017). The ethics of artificial intelligence. Journal of Ethics, 21(2), 91–104.
Brey, P. (2008). Human enhancement and personal identity. Ethics and Information Technology, 10(2–3), 93–99. https://doi.org/10.1007/s10676-008-9164-6
Buolamwini, J. (2023). Unmasking AI: My mission to protect what is human in a world of machines. Random House
Buolamwini, J., & Gebru, T. (2018, January 21). Gender Shades: Intersectional accuracy Disparities in commercial gender classification. PMLR. https://proceedings.mlr.press/v81/buolamwini18a.html
Cave, S., & Dihal, K. (2020). The whiteness of AI. Philosophy & Technology, 33(4), 685–703. https://doi.org/10.1007/s13347-020-00415-6
Coeckelbergh, M. (2021). AI ethics and human flourishing. Oxford University Press.
Danholt, P., & Gad, C. (2021). Technological mediation and subjectivity in science and technology studies. Science, Technology, & Human Values, 46(1), 30–52.
DSpace. (n.d.). https://theses.ubn.ru.nl/items/745e1950-8ea8-4369-b0ee-4b217f33a241
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford University Press.
Forsythe, D. E. (2002). Engineering knowledge: The construction of knowledge in artificial intelligence. MIT Press.
Galibert-Laîné, C. (2020). Netnographic cinema as a cultural interface. Iluminace, 32(2), 53–70. https://doi.org/10.58193/ilu.1665
Gondwe, G. (2023). CHATGPT and the Global South: How are journalists in sub-Saharan Africa engaging with generative AI? Online Media and Global Communication, 2(2), 228–249. https://doi.org/10.1515/omgc-2023-0023
Graham, M., & Sengupta, A. (2017, October 5). We’re all connected now, so why is the internet so white and western?The Guardian. https://www.theguardian.com/commentisfree/2017/oct/05/internet-white-western-google-wikipedia-skewed
Grosser, B. (2024, May 4). ORDER OF MAGNITUDE [Video]. Vimeo. https://vimeo.com/333795857
Huffington, A. (2024, January 22). How AI can help humans become more human. TIME. https://time.com/6565048/ai-help-humans-become-more-human/
Jasanoff, S. (Ed.). (2004). States of knowledge: The co-production of science and the social order. Routledge.
Kiss, M. (2021, May 16). Desktop documentary: From artefact to artist(ic) emotions - NECSUS. NECSUS. Retrieved January 7, 2025, from https://necsus-ejms.org/desktop-documentary-from-artefact-to-artistic-emotions/
Kong, Y. (2022). Are “Intersectionally fair” AI algorithms really fair to women of color? A philosophical analysis. 2022 ACM Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3531146.3533114
Kursuskatalog - Aarhus universitet. (n.d.). https://kursuskatalog.au.dk/da/course/125246/Aktuelt-forskningsemne-En-anden-kunstig-intelligens
Lanthimos, Y. (Director). (2023). Poor things [Film]. Searchlight Pictures.
Lentz, C. (2023, December 27). Meet Erin Reddick of ChatBlackGPT — The Girls Innovation Club. The Girls Innovation Club. https://www.thegirlsinnovationclub.org/blog/tgic-innovator-erin-reddick-chatblackpgt
McCarthy, J., & Wright, P. (2004). Technology as experience. MIT Press.
Mignolo, W. (2011). The darker side of western modernity: Global futures, decolonial options. Duke University Press.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
Patrick, S., & Huggins, A. (2023, August 15). The term “Global South” is surging. It should be retired. Carnegie Endowment for International Peace. https://carnegieendowment.org/posts/2023/08/the-term-global-south-is-surging-it-should-be-retired?lang=en
Pinker, S. (2008). The stuff of thought: Language as a window into human nature. Viking Press.
Reed-Danahay, D. (Ed.). (1997). Autoethnography: Rewriting the self and the social. Altamira Press.
Roessler, P., Pengl, Y., Marty, R., Titlow, K. S., & Van De Walle, N. (2022). The cash crop revolution, colonialism and economic reorganization in Africa. World Development, 158, 105934. https://doi.org/10.1016/j.worlddev.2022.105934
@butthatsmyopinion. (n.d.). [Video]. TikTok. https://www.tiktok.com/@butthatsmyopinion/video/7353755280821505326
Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.
West, D. M., & Allen, J. R. (2018, April 24). How artificial intelligence is transforming the world. Brookings. https://www.brookings.edu/articles/how-artificial-intelligence-is-transforming-the-world/
Williams, K. (2015). An anxious alliance. In Proceedings of The Fifth Decennial Aarhus Conference on Critical Alternatives (CA '15). Aarhus University Press, Aarhus N, 121–131. https://doi.org/10.7146/aahcc.v1i1.21146
Will machines become more intelligent than humans? (n.d.). Caltech Science Exchange. https://scienceexchange.caltech.edu/topics/artificial-intelligence-research/machines-more-intelligent-than-humans
All reviews refer to the original research statement which has been edited in response to what follows
Review 1: Invite resubmission with major revisions of practical work and/or written statement.
AI & 'BEING HUMAN' aligns closely with the desktop documentary mode and its ability to use screen-capturing software to document and critically engage with the evolving ways in which we inhabit the online landscape. This time, the authors Mariam Khaled and Mahima Jain take on the challenge of engaging with the growing presence of an element that transforms our desktops in a profound way – AI – focusing specifically on its cultural and racial biases and the way in which it perpetuates dehumanization.
A welcome contribution of this essay is its autoethnographic dimension, which, for me, implicitly builds upon the “netnographic” impulse in desktop documentaries (Galibert-Laîné 2020). This approach utilizes the desktop format for investigating online cultures as well as our responsibility and vulnerability in approaching them as researchers, artists, and users at the same time. Here, the authors’ approach enables us to experience AI as both a social context we cannot entirely escape and one that, when confronted directly, we may also destabilize, defamiliarize, and potentially even overturn.
To these ends, the authors employ several strategies. Their clever use of prompts in Gemini and ChatGPT exposes the biases and limitations of generative AI models, while their on-camera presence adds a layer of embodied response, allowing viewers to witness their real-time reactions to the AI-generated content. The Google Meet dialogue in which one of the authors reflects on her decision to keep the desktop space as personal as possible presents another moment of vital negotiation of one’s place in relation to technology. Overall, gestures such as these are necessary in a struggle to reinterrogate what it means to be human – a struggle that is likely to remain urgent for years to come.
However, I would argue that to be truly effective, the authorial perspective indicated by the essay needs to become more pronounced, self-assured, and cohesive. While moments of self-reflexivity are present, they fail to be “sutured” into a distinct and consistent voice and style. A stronger authorial presence would move beyond merely presenting encounters that illustrate AI’s racializing and dehumanizing tendencies and instead offer deeper reflections and arguments on what these encounters mean. Further, a better way of linking the individual segments and motifs together would make the essay more coherent, while streamlining and adjusting the rhythm of certain moments, as well as introducing more varied choices of sounds and music, would prevent the intended “slowness” from occasionally slipping into unnecessary tedium.
As it stands, the video essay feels like a well-informed synthesis of contemporary research and critique on AI and dehumanization, peppered with moments of playful self-reflexivity – which is fine. Nevertheless, I believe that a more clearly articulated research perspective and a few formal and structural refinements would make the essay’s call for a critical yet distinctively human engagement with technology more impactful.
Review 2: Invite resubmission with major revisions of practical work and/or written statement.
There are some very effective and evocative aesthetic moments within this piece, which highlight well the desktop documentary aspects of its making.
There is also a broad exploration of a critique of AI, particularly in the context of race, with a focus on the lens of the Global South. The accompanying statement highlights the role of autoethnography as a lens that the authors use to frame the work; however, it remains unclear how that autoethnographic lens is being used to further draw out the significant arguments of the research question. This is an area that would benefit from further exploration in the written work: for example, what are the aspects of studying in this context, of making work in this context? How are the Global South and feminist lenses reflected upon after the making and doing? Does the autoethnography extend to questions of the academy, or artistic interventions?
With such a wide-ranging approach to AI being critiqued, the autoethnographic becomes an interesting lens from which to situate this work, but it needs more reflection in the accompanying statement to make this a significant new contribution, either in its reflections on the wider discourses of technology or on the state of the video essay.
One area I would have been interested in understanding from the written work was the choice of platforms that were explored. ChatGPT, for example, has a different audience to RunwayML; did that have implications for the type of being human, e.g. an artist, an academic, a student, or a technology gig worker? Do these differentiations matter?