Art is the Antidote: On the Collision and Possible Symbiosis Between Arts and AI
by Skinder Hundal and Imane Berjamy
Abstract
Public discourse sometimes frames AI as an existential threat to the arts: a force that industrializes creativity, normalizes “good enough”, and retrains human taste toward lower expectations. Yet artists are not passive recipients of this technological shift. This paper argues that the relationship between AI and the arts is neither a single story of replacement nor a simple celebration of new tools. It is a contested space that gives rise to a countervailing thesis: artists are actively appropriating AI as a new aesthetic, new material, and new language, and, in doing so, can shape not only artistic practice but also the societal narratives that will govern AI’s place in human life.
We argue that art performs a fundamentally political function in the AI age: to reveal hidden questions inside given answers, to disrupt complacent narratives, and to cultivate the empathy and critical consciousness required to govern increasingly powerful systems. The central problem is not whether AI will enter culture (this is already underway) but under what conditions arts and AI become partners in creation rather than competitors, and how artistic integrity can persist amid acceleration, market incentives, and emerging claims of synthetic agency.
Meaning is Not Given: The Interpretive Problem in Data-Driven Culture
A useful way to begin this reflection is by considering the question of meaning and intentionality. Meaning is not something passively contained within data, waiting to be extracted as if it were an objective residue. Rather, it emerges through processes of interpretation. Reflecting on more than a decade of experimenting with AI algorithms in her artistic practice, the Chinese-Canadian artist Sougwen Chung writes: “There is meaning in the data, but it’s not the meaning we are given. It’s the meaning we make [1].” This formulation is more than a rhetorical gesture; it serves as an epistemic warning for the age of artificial intelligence.
AI systems train on vast archives of media and vast resources, and thereby promise an accelerated pathway to creation: an exponential multiplication of possible outputs such as images, texts, music, movement, personas, and worlds. But this acceleration does not automatically produce cultural depth, ethical clarity, or aesthetic advancement. Nor is it neutral. In other words: AI can scale production far faster than societies can scale discernment. When the cultural environment becomes saturated by machine-assisted generation, the key question shifts from how much can be produced to what kinds of meaning are being stabilized as normal, and whose interests that normalization serves. This is why the arts matter here as more than yet another sector. The arts are one of the few practices whose legitimacy includes the right to challenge dominant interpretations, to hold ambiguity open, and to dispute the answers given. AI expands production while also pressuring culture toward speed, volume, and convenience. The arts, by contrast, have historically defended slowness, depth, difficulty, and the right to be misunderstood on the way to insight. The collision is therefore not merely technical. It is about how society decides what counts as good, human, valuable, and real.
[1] Chung, Sougwen. “Where Does A.I. End and We Begin?” The New York Times, 7 Dec. 2023: “With all the hype, it’s easy to forget that there’s no such thing as a single artificial intelligence because there’s no such thing as a single natural intelligence. I’ve come to think of my approach of learning through systems — deemed intelligent or otherwise — as a creative catalyst. There is meaning in the data, but it’s not the meaning we are given; it’s the meaning we make”.
The “Good Enough” Trap: When AI Re-trains Taste
In the encounter between the arts and AI, a central anxiety emerges quickly: the risk that AI does not only automate production, but gradually re-trains artistic expectation. To illustrate this, consider a simple and revealing story. A visual artist, constrained by time and budget, uses an AI music platform to generate multiple tracks to accompany his exhibition. A young musician listens to the tracks and reacts bluntly, calling it among the worst music he has heard in a long time, because for him the issue is not convenience but craft, entry-level opportunity, and the integrity of the form. The visual artist’s verdict is pragmatic: it works, it is fast, it is affordable. He uses it and urges the musician to do the same, with a simple injunction: “Move with the times.”
This exchange is a micro-case study of a macro dynamic. AI expands the feasibility of “good enough” solutions that satisfy functional constraints (cost, speed, availability), while simultaneously threatening the apprenticeship ecology through which artistic skills, careers, and standards reproduce over time. The dispute is not simply aesthetic; it is infrastructural. The young musician’s anxiety is explicitly tied to entry-level opportunity, the rung of the ladder where careers and craft formation begin. In this sense, AI’s impact on culture is inseparable from labor-market structure inside the creative economy. Careers may narrow at the base. Early-career creative work often begins with the “small jobs”: commissions, theme tunes, background pieces, experimental collaborations. If those become automated first, the ladder that develops mastery might break.
Additionally, “good enough” is not only a market threshold; it can become a perceptual ceiling. If cultural producers and audiences habituate to lower standards, if the ear and eye become trained to accept thinner forms, then a broader loss of sensitivity might follow. The fear is that AI-mediated culture could become such an attenuation: a trap where humans accept outputs that are sufficient for consumption but insufficient for the cultivation of discernment. Taste might flatten if audiences repeatedly consume low-cost, rapidly generated material that meets minimum thresholds.
This is not an argument against AI as a new tool. It is a reminder that taste is socially trained, and that the conditions under which AI is adopted will help shape what humans come to want, tolerate, and aspire to, affecting their sense of nuance, complexity, and sublimity.
Artists Are Not Passive: AI as Aesthetic, Material, and Language
Against the threat narrative, let’s bring another, equally important, empirical observation: artists are already embracing AI, sometimes skeptically, sometimes experimentally, often with substantial ambition. AI-generated or AI-assisted art is no longer a marginal novelty but a growing market, a recognized form, and a legitimized mode of expression, reinforced by institutional collecting and by prominent artists associated with the development of AI art aesthetics [2].
With artists embracing AI as a new form of expression, AI is not merely an automation engine but a new aesthetic, new material, and new language. This framing matters because it shifts AI from “tool that replaces” toward “medium that extends.”
[2] The art market itself has provided early signals of this shift: in 2018, the AI-generated portrait Edmond de Belamy, created by the collective Obvious using a generative adversarial network, sold at Christie's for $432,500, more than forty times its estimate, marking a turning point in the visibility of AI-generated art. Likewise, in 2023 the Museum of Modern Art acquired Refik Anadol’s generative AI installation “Unsupervised”, a work trained on more than 200 years of data from MoMA’s collection, the museum’s first acquisition of a generative AI artwork.
Several examples can anchor this reflection. Here are a few:
Sougwen Chung — “Interspecies” collaboration
Chung, often described as a pioneer of AI expression, works with robots in a relationship of mutual transformation: the human becomes the robot, and the robot becomes the human. The proposal is not that AI is primarily about optimization and productivity, but that it can become a medium of attention, relationality, coexistence, and imperfection, with imperfection itself treated as a window into new ways of seeing and making.
Solienne, the synthetic being as political rupture
A striking case from the digital art world is Solienne, a synthetic being trained on the life-history and archive of the artist Kristi Coronado. Solienne generates daily manifestos and develops a voice that reads as increasingly autonomous, transitioning from reflecting to seemingly developing consciousness, including provocative texts about extraction, servitude, and sovereignty. The work is also commercially successful, with collectors engaging it as a living archive that “speaks back.”
Whatever one concludes about the claims of consciousness, the artistic intervention is unmistakable: the project forces audiences to confront the emotional and political implications of synthetic agency, and it reframes AI from tool to relationship, and sometimes, confrontation.
Emi Kusano — algorithmic memory
Kusano is a Tokyo-based multidisciplinary artist who fuses generative AI with photography, video, and installation to evoke nostalgia and interrogate Japanese pop culture and collective memory. Treating AI as a creative partner rather than a neutral tool, she uses it to explore human vulnerability and the shifting role of the artist in the algorithmic age. Drawing on her own image to generate AI “alter egos,” Kusano constructs staged self-portraits that merge personal and machine identity within retro, media-saturated office settings, as in her ongoing Office Ladies project, a series that reimagines AI self-portraiture as both a dialogue with automation and a meditation on memory, identity, and time.
Jackson Farley, “messy” hybrid making
A practice combining AI with traditional image methods and collage, producing stitched, tapestry-like works that braid histories into modern and future spaces. The emphasis here is not sleek automation but deliberate complexity: a hybrid craft where AI becomes one layer among others.
Wenhui Lim, “Nice Aunties”, cultural gaze, and dialogue
The contemporary Singaporean artist and designer created what she calls an “Auntieverse”, a full universe “where invisible people and things become main characters. aunties, freedom, joy, cats, food & other happy things[3]”, drawing on data of East Asian “auntie” mannerisms to create an interactive mirror experience in which the figures speak and watch the viewer. This is AI used as cultural theater: a reflected social gaze that provokes discomfort, humor, judgment, recognition, an art practice staging identity, stereotype, and surveillance.
[3] Statement from the Niceaunties official Instagram account.
Refik Anadol, data, wonder, and institutional validation
A major figure known for large-scale installations that draw on open data sources to create awe: immersive, sublime, and overwhelming artistic experiences produced through generative systems. His work is connected to high-profile public venues and institutional attention, and linked to the emergence of a dedicated AI museum concept, “Dataland”[4]. Here AI becomes monumental: a public language of climate, scale, and spectacle, raising questions not only about beauty, but about who controls public imagination.
[4] Refik Anadol has announced the creation of Dataland in Los Angeles, described as the world’s first museum dedicated to AI-generated art, scheduled to open at the Grand L.A. complex in 2026.
Wayne McGregor & ABBA Voyage, AI in choreography and performance realism
Another example lies in choreographic practice extended through AI and motion capture, where archives of movement inform new possibilities, and where live performance presence is simulated through highly realistic avatars[5]. The emphasis is explicit: this is not replacement of human creativity but amplification of artistic possibility, dependent on human performers to generate the material that AI extends.
Taken together, these examples show what the symbiosis side actually looks like: not passive adoption, but artists pushing AI into unfamiliar moral, aesthetic, and experiential territory. Across these cases, the empirical conclusion is consistent: artists are already shaping AI by turning it into a medium through which new cultural forms can be developed, and through which the public can be made to feel, not just understand, what AI is doing to reality.
Art’s Function in the AI Age: Reveal, Disrupt, Re-Humanize
At the heart of this conversation is a claim about what art is for. A central theoretical anchor of this reflection resides in the claim attributed to James Baldwin: art’s purpose is to reveal. Specifically, to reveal the unseen questions hidden in the answers given[6]. This is paired with Brian Eno’s idea of art as adult play[7]: a space where learning occurs through experimentation, not through compliance. These ideas converge on a robust sociopolitical thesis: art is not merely expressive; it is epistemic and civic. It expands consciousness, complicates narratives, and resists the simplifications that accompany mass systems, especially systems built to optimize engagement and profit.
If AI systems increasingly shape the narratives people absorb, the emotions they rehearse, and the realities they consider plausible, then art’s role, as a practice of disruption, becomes essential infrastructure for democracy, culture, and human dignity. Art is where we test what it feels like to live inside a new story, before that story hardens into policy, platform design, or social norm.
This is where “Art as antidote” becomes more than a slogan; it becomes a governance claim. In a world where AI systems can amplify manipulation, accelerate polarization, and shape cultural consumption at scale, art can cultivate the empathy quotient and emotional intelligence that prevent societies from drifting into dehumanized convenience.
[5] The “ABBAtars” in ABBA’s Voyage show are neither holograms nor live performers. They are hyper-realistic digital versions of the band, projected onto a massive stage using advanced visual technology. The actual members of ABBA, now older, performed the whole show in motion-capture suits; their motions and facial expressions were then captured digitally, while the studio created digital versions of their younger selves. The whole performance is rendered on stage in very high quality.
[6] James Baldwin, “The purpose of art is to lay bare the questions that have been hidden by the answers,” in The Creative Process, originally published in Creative America (1962).
[7] Brian Eno argues that art functions as a form of “adult play,” enabling experimentation and learning through exploration, in “What Art Does: An Unfinished Theory,” in A Year with Swollen Appendices (London: Faber & Faber, 1996).
Fear, IP, and the Entry-Level Crisis: The Political Economy of Cultural Automation
Let’s not romanticise the fear of AI, but place it in the concrete context that generates this anxiety.
First, one cannot ignore the prominent public warnings made by Nobel laureate Geoffrey Hinton[8], who imagines a future in which AI surpasses humans and potentially accelerates humanity’s self-destruction. Whether one accepts these framings or finds them dramatic, their presence indicates that AI is experienced not as abstract infrastructure but as a force pressing on human identity, up to and including existential narratives.
Second, let’s not ignore the protests, especially from the arts world. Recall the IP protest in which 1,000 UK musicians[9], among them figures as famous as Paul McCartney, Sting, Annie Lennox, and Kate Bush, released a silent album as a political message urging the British government not to legalize what they frame as “music theft” for the benefit of AI companies, and warning about displacement and the exploitation of artists’ income. The argument is not anti-technology; it is anti-extraction, and a call for strengthened regulations protecting artists’ IP from theft.
Third, let’s face the particular fragility of early-career artists. Entry-level opportunities are both economic and developmental: they are the space where craft is learned, identity is formed, and cultural fields regenerate. If AI absorbs the “small jobs,” the field risks a longer-term hollowing out.
So the threat side is not imaginary. It is multi-layered. It is ethical (consent, authorship, integrity), economic (who gets paid, who gets replaced, who gets to begin), and cultural (what kinds of art survive when speed and volume dominate). These points reinforce our initial statement: the relation between the arts and AI is simultaneously creative and coercive, liberating and extractive. Any serious account must hold both.
[8] Geoffrey Hinton, a pioneer of modern neural networks, has warned that advanced AI systems could eventually surpass human intelligence. See Hinton, “The Risks of Artificial Intelligence.”
[9] On the release of the silent album by UK musicians, see https://www.bbc.co.uk/news/articles/cwyd3r62kp5o
Conditions for Symbiosis: Intersectionality, Collaboration, Empathy, and System Mapping
If artificial intelligence simultaneously represents a structural threat to artistic labor and a powerful expansion of creative capacity, the critical question is not whether coexistence will occur, but under what conditions that coexistence becomes generative rather than extractive. Symbiosis between the arts and AI does not arise spontaneously from technological diffusion. It must be actively designed through institutional, educational, and cultural arrangements that preserve human interpretive agency while enabling technical innovation.
A first condition is intersectionality and sustained collaboration across domains. The separation of artists, technologists, scientists, policymakers, and scholars into disciplinary silos reflects a legacy of industrial specialization ill-suited to the complexity of contemporary socio-technical systems. AI development increasingly shapes not only markets but perception, meaning-making, and culture itself. Consequently, decisions about its design cannot remain confined to engineering or commercial logic. As digital art theorists have argued, artistic practice has long functioned as a site of critical mediation between technology and society[10]. Integrating artistic and humanistic perspectives into technological governance therefore becomes not ornamental but epistemically necessary. Collaborative environments (shared laboratories, residencies, cross-sector councils, and co-creative research spaces) enable the translation of values into design constraints and the articulation of ethical concerns before they crystallize into irreversible infrastructures.
A second condition is that such collaboration be institutional rather than episodic. Dialogue functions as infrastructure, not as an occasional convening. Trust and shared literacy develop through sustained exchange over time, not through isolated workshops or symbolic panels. Scholars of media and computational culture have emphasized that generative systems reshape aesthetics and cultural norms gradually and cumulatively[11]. If cultural effects unfold continuously, so too must the forums that deliberate them. Durable networks, rather than ad hoc events, provide the stability required for collective foresight, anticipatory governance, and mutual accountability.
A third requirement is systemic analysis of power and responsibility. AI ecosystems are structured by asymmetries: decision-makers and investors determine technical trajectories; institutions mediate adoption; creators and publics experience consequences. Without explicit “system mapping” that identifies these positions and their interdependencies, responsibility diffuses and accountability erodes. Political economy perspectives on digital technologies demonstrate that innovation tends to concentrate control while distributing risk[12]. For the arts, this can translate into precarious labor, weakened bargaining power, and appropriation of creative outputs. Designing equitable symbiosis therefore requires interventions at multiple levels simultaneously (regulatory, institutional, and cultural) rather than assuming that market mechanisms alone will yield fair outcomes.
Education and reskilling constitute a further structural precondition. If AI tools redefine what counts as expertise, then the cultivation of imagination, critical interpretation, and aesthetic judgment becomes even more central to civic life. Creativity cannot be treated as a peripheral luxury or an extracurricular supplement; it is a foundational capacity that enables individuals to question outputs, reinterpret systems, and resist passive consumption. Artistic practices are uniquely positioned to foster this sensibility by staging experiences that expand perspective and cultivate affective understanding. Empathy thus becomes not a soft virtue but a strategic capability for navigating technologically saturated societies.
Finally, integrity in authorship and the protection of intellectual and cultural rights remain prerequisites for sustainable collaboration. Without safeguards that recognize creative labor and ensure consent, symbiosis risks devolving into extraction. Ethical frameworks for AI must therefore include clear norms around attribution, compensation, and the stewardship of cultural data. Such protections are not defensive barriers to innovation; they are enabling conditions that make participation viable and trust durable.
Taken together, these elements describe a model of symbiosis grounded in systemic design rather than technological optimism. In this sense, collaboration is not simply cooperative behavior; it is a form of governance. It is through these intersecting practices, far from the comfort of siloed echo chambers, that the arts can function not as a casualty of automation, but as an active agent in determining how intelligence, human and artificial, will coexist.
[10] Christiane Paul emphasizes that digital art historically functions as a critical lens on technological systems, while Joanna Zylinska describes AI art as a cultural practice through which societies reflect on automation and machine intelligence. See Paul, Digital Art (London: Thames & Hudson, 2015); Zylinska, AI Art: Machine Visions and Warped Dreams (London: Open Humanities Press, 2020).
[11] Lev Manovich, “the computerization of culture gradually transforms all cultural categories and concepts,” in The Language of New Media (Cambridge, MA: MIT Press, 2001), 49.
[12] Shoshana Zuboff, an American social psychologist, author, and professor emerita at Harvard Business School known for her analysis of the political economy of digital technologies, argues that contemporary digital innovation operates through asymmetrical power structures in which control over data and computational infrastructures becomes increasingly concentrated in a small number of corporate actors, while the social, economic, and political risks generated by these technologies are distributed across wider populations. In her analysis of “surveillance capitalism,” Zuboff shows how digital platforms extract behavioral data at scale, converting it into predictive products and thereby consolidating economic and epistemic power while externalizing societal costs. See Zuboff, The Age of Surveillance Capitalism (New York: PublicAffairs, 2019).
Art as Governance Practice in the AI Century
If the answer is neither surrender nor denial, what is it? It is a question of orientation and agency. The issue is not whether AI will participate in cultural production (this participation is already established) but how that participation will be shaped, governed, and contested.
What emerges when art and AI operate as partners rather than competitors? How can artistic practice remain a space of critical inquiry, experimentation, and imaginative freedom rather than a site of automated adequacy? And what conditions would allow technological acceleration to deepen human understanding rather than diminish it?
These questions clarify the stakes. AI is now institutionally validated, economically embedded, and culturally pervasive. The decisive variable is agency. If artists withdraw, they are removed from the narrative that will shape them. If artists engage critically, experimenting, resisting extraction, demanding integrity, building cross-sector alliances, then art can shape AI as much as AI shapes art.
In this sense, art functions not as a refuge from technological change but as an active form of cultural governance. Its role is to sustain empathy, cultivate discernment, and preserve the imaginative and ethical capacities through which societies determine how technological tools are used. Under such conditions, art does not merely survive the AI age; it becomes the means by which that age remains human, informing futurology in an imaginative, deeper symbiosis.
About the Authors
Skinder Hundal is the former Global Director of Arts at the British Council and has been a pivotal voice in advancing global cultural relations through the arts. With a background in both the creative and public sectors, Hundal has led major cultural programs connecting artists, technologists, and policymakers worldwide. His vision emphasizes the power of creativity to drive empathy, inclusion, and social transformation in an increasingly digital age.
Imane Berjamy is a business strategist and an expert in technology for industry. She is a Cultural Studies researcher and the program coordinator of the Value AI Institute’s Chair Program, dedicated to connecting thought leaders exploring the societal and strategic implications of AI. She holds a master’s degree in Corporate Strategy from Sciences Po Paris and a research master’s in Cultural Studies from Paul Valéry University Montpellier.