
Interview — Maya Indira Ganesh


Maya Indira Ganesh is a researcher, educator, and writer who works at the intersection of AI and digital technologies, culture, and society. She is an Associate Director at the Leverhulme Centre for the Future of Intelligence (LCFI) at the University of Cambridge, UK. Her first book, Auto-correct, is based on her PhD research on AI, ethics, and driverless cars. Maya is also an invited speaker, curatorial advisor, and writer with arts and cultural organisations including Transmediale (2018-2019), AI Anarchies (2022), and Ether’s Bloom (2023). Maya was a Rockefeller Foundation Bellagio Centre Resident Fellow on AI (2019) and has won awards from the Media Cultures of Simulation (MECS) Institute for Advanced Study at Leuphana University, Lüneburg (2018), Digital Earth/Hivos (2020), and the Mellon-Sawyer Seminar on Histories of AI (2021).

Shikha Aleya (SA): Thank you for generously agreeing to this interview, Maya, we’re happy to have you here! A swift first question to help set up context. What are your first thoughts when you consider the term digital intimacy, and in those first thoughts, what is the connection you make with sexuality?

Maya Indira Ganesh (MIG): We can think of digital intimacy in a few different ways: intimacy through chat and language, or through sharing videos and images with other people via the internet, mobiles, and messaging. This happens through every possible digital platform we have. It can be casual or serious, long-term or fleeting, paid or unpaid. Digital intimacy can be with another person. And then there’s intimacy with bots. On that point, what started as something considered slightly shameful, or evidence of people being fooled, as in the 2015 Ashley Madison scandal, has now become normalised, with people finding spiritual, sexual, and personal intimacy with systems like ChatGPT. The Ashley Madison scandal was when a website for married people to have affairs was hacked and data from it, including users’ identities, was released. The hack made visible the imbalance between the number of men and the number of scripted chatbots: there weren’t enough women on the site to have intimate chats with men, and many of the men didn’t know they were having ‘sexy chats’ with bots.

I think the connection to sexuality lies in how much sexuality is about disembodiment, about language and media. In some ways you can think about this not just as disembodiment, but as the idea that our physical bodies extend beyond the confines of our skin.

I did some research about ten years ago about how queer communities of activists in Nairobi use digital tech; here digital intimacy becomes a matter of survival and negotiation. For these activists the digital sphere provides a vital space for a social, emotional, and sexual life that is often criminalized or dangerous in the physical world. However, this intimacy is fraught with the tension between visibility and anonymity. For example, I observed practices of graduated anonymity where individuals would share seemingly benign but sensitive personal information online to build trust before ever meeting in person. While this facilitates connection, it also introduces risks of entrapment or blackmail. The same tools used for intimacy can be weaponized for lateral surveillance by family or community members.

To me, the connection between digital intimacy and sexuality is fundamentally ambivalent. It is a space that allows for the construction of a visible community and the experience of desire. But it is deeply entangled with the threat of exposure and violence. It changes our notion of distance, allowing us to bridge gaps between others and ourselves. We must remain critical of how these platforms are designed and who they exclude.

SA: The ‘effects of AI in the lived environment’ is a phrase used in a report you have co-written, published last year, AI in the Street: Lessons from everyday encounters with AI innovation. Though this was in the context of everyday physical environments, the term ‘lived environment’ made me think of the term ‘lived experience’. So, to begin here, would you say that there are multiple lived environments in the digital world, depending upon who, or what, is inhabiting or passing through these? What role does this play in human relationships and intimacy?

MIG: I would agree that there are multiple lived environments online. These environments are not neutral backdrops. When we speak of the lived environment, whether it is the physical street or a digital platform, we are talking about a space where power, visibility, and vulnerability intersect. These spaces are controlled by whoever controls a platform, app, or even just the device itself.

First, there is the environment of graduated anonymity and lateral surveillance: a digital lived environment that offers a vital space for a social, emotional, and sexual life that is often criminalized in the physical world. However, this space can be monitored by family and community members as well as the state. For years people have been ‘creeping’ on others online, seeing who is talking to whom or commenting on whose post. Being tagged in photos can reveal who you were with.

The issue is that people trust the default setting of everything always being turned ‘on’: full visibility, full data capture. So intimacy here involves a constant negotiation of visibility: you want some people to see you but not others. This was very apparent in the early days of social media, but since then people have found ways to evade lateral surveillance and to be selectively visible to the people they want to be seen by.

Second, there is the environment shaped by a fundamental lack of mutual recognition. In our AI in the Street report, we found that the introduction of AI creates a sense of detachment for everyday publics. People feel surveilled by smart infrastructure, like billboards with cameras, but have no reciprocal relationship, no way to look back. This treats people as aggregates and statistics rather than individuals. The technology becomes machine-facing rather than human-facing, turning urban space from a site of communal dwelling into a transactional space where humans are merely markets or obstacles. This failure of two-way engagement erodes trust in both the technology and local governance.

SA: Computational ethics plays an important part in your work and explorations. Please help us understand this better. Also, is this connected to the concept of machine learning, which I loosely understand as the ability of a machine to pick up on patterns of users, that is, human behaviour, learn from them, take decisions based on this, and so also perpetuate those behaviours? Would this line of inquiry take us to everyday spaces in human relationships that now need a fresh look?

MIG: Computational ethics is an approach that embeds values into computational systems to drive their decision-making. This automates decision-making, which is not in itself a bad thing. In fact, we need it and use it all the time, whether that is a values-driven choice to automatically regulate energy use in a building, or to decide what kinds of content we view or allow on social media. It has become very common to censor certain words or ideas online, which is why people use shorthand or codes: spelling things in ways that machine systems cannot recognise, like ‘sixual’ for sexual, or ‘PDF files’ for paedophiles (referring to revelations from the Jeffrey Epstein emails), or simply using the watermelon emoji. My concern is that this transforms ethics from a social negotiation of human values into a valuation based on the processing of data. By framing social morality and everyday ethics as a set of programmable rules, we risk creating a moral navigation system that offloads difficult societal questions onto machines. This treats AI systems as individual moral agents capable of reason rather than as technologies that humans make, use, and adapt. And we’re seeing more of these discussions of AI as individual moral agents as their language capacities increase.

SA: In this passage through an environment where the lines between known and unknown blur, is there a pause state possible? Is it even desirable, like a reset? A sort of – who am I including, digital me? And correspondingly, who are we – as including digital us?

MIG: A complete reset or a clean exit from these systems is a fantasy. In my view, there is no “outside” to return to because the digital is not a separate realm; it is the very social, cultural, and affective infrastructure we inhabit. Even when we’re utterly alone in some remote and beautiful natural environment, we pick up our phones to document it. Perhaps dream states are solitary, but even there we can capture data about our sleep, then share and analyse it. That said, it remains intriguing to ask: what remains outside of the digital? Why? Or, if we struggle to find it, then how do we consider what it means to be almost always digital?

Some people see a reset as a detox or switching off. This is not always a bad thing; people are trying to get their children away from digital technologies as awareness grows of how they affect young people’s cognitive development. For some people there is no outside because their work is entirely controlled by an app, whether they are a place-based gig worker, like a delivery person, or a micro-task worker online.

I think what’s more interesting is this ambivalence of refusing to ever be fully reconciled with the digital. It allows us to sit with the discomfort of our entanglement rather than seeking a purified existence that does not exist. I think a lot of my research and teaching and writing is about trying to create a sort of “padded room” where people can experience the realization of the complexity of these systems.

SA: A big thank you – you’ve given us a lot to think about! A last question, for some take-home thoughts. Are there some personal insights you would like to share about your own evolving sense of self and identity, that have come to you through your work in this field?

MIG: That is a lovely question to end on. My sense of self has been profoundly shaped by crossing boundaries. I’ve moved between activism, the academy, and the work of cultural practice, and between analysing the “fantasy” of the digital and AI and the messy reality of the world. I often recall the computer scientist Phil Agre’s description of the vertigo he felt when trying to read outside his field: the overwhelming realization of the difficulty of interdisciplinary study and practice, which is less an outcome than a way of being in the world. I feel that vertigo. In my current role at a predominantly science university, and as a humanities and social science scholar working on AI, I sometimes find myself in rooms with peers who ask why we need to study the human or social dimensions of AI at all, believing it is “all about the models”. Holding my ground as a feminist scholar whose knowledge is different from, but equal to, technical expertise has been a significant part of my evolving identity. It’s not a bad or challenging fight; it’s just something that has to be done. I try to cultivate humour, an appreciation for irony, and the core strength to deal with discomfort and difference.

Cover image by David Johnson Portraits