
Why A.I. is (already) making art

Science-fiction author Ted Chiang published an essay in The New Yorker titled “Why A.I. Isn’t Going to Make Art,” in an effort to expose the inherent limitations of generative artificial intelligence (Gen AI) in the artistic realm. He argues that art is the result of a multiplicity of choices, which are deeply personal and intentional.
For instance, in the context of fiction writing, every word in a story represents a choice made by the writer, one that reflects the author’s experience. Large language models (LLMs), by contrast, simply extrapolate patterns from pre-existing data to generate outputs that tend to be average or bland, because they lack the decision-making that the artistic process requires.
Even when prompted with a highly detailed query, LLMs can only respond based on statistical patterns. The same is true for visual art. While Gen AI models like Midjourney or DALL-E can generate images based on text prompts, the prompt cannot capture the thousands of decisions that an artist makes in the creative process. This lack of decision-making is what, according to Chiang, constitutes the fundamental difference between human creativity and AI’s (simulated) creativity.
Ted Chiang also criticizes the idea that AI could democratize creativity by reducing the amount of effort involved in making art. Indeed, if the creative process is inherently tied to the countless choices made by the artist, that decision-making effort is essential to producing work that is meaningful and worthy of attention. Hence, even if it can assist artists in certain tasks, Gen AI is unlikely to revolutionize art the way photography did. That is because, even though a photo can be taken with the simple click of a button, a photographer must make a great many decisions to master photography as an art form. Gen AI lacks such decision-making capacity and is therefore, in Chiang’s view, incapable of producing art.
The debate around whether machines can actually create art is as old as the advent of machines themselves. The arguments against it often circle back to the claim that machines lack the intentionality necessary for true artistic expression. However, these arguments might overlook the potential role that Gen AI could play in the realm of creativity.
The automation of creativity is not a new phenomenon. It can be traced back to the 1960s in France, with the founding of the Oulipo (Ouvroir de Littérature Potentielle) group by Raymond Queneau and François Le Lionnais. Queneau’s “Cent mille milliards de poèmes” (1961) exploited the combinatorial possibilities of ten sonnets, each of whose fourteen lines can be swapped with the corresponding line of any other sonnet, yielding 10^14, or one hundred trillion, possible poems. The work was a landmark in the exploration of automated creativity, demonstrating how a mechanical process could be harnessed to produce a practically inexhaustible number of literary works.
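To make the arithmetic concrete, here is a minimal sketch of the combinatorial principle (using placeholder strings rather than Queneau’s actual verses): with ten interchangeable variants for each of the fourteen line positions, the number of distinct poems is 10 multiplied by itself fourteen times.

```python
import random

# Combinatorial principle behind "Cent mille milliards de poèmes":
# ten interchangeable variants for each of a sonnet's fourteen lines.
# The line texts below are placeholders, not Queneau's actual verses.
NUM_SONNETS = 10   # ten source sonnets, i.e. ten variants per line position
NUM_LINES = 14     # a sonnet has fourteen lines

# variants[line_position][sonnet_index] -> one candidate line of verse
variants = [
    [f"line {pos + 1}, variant {s + 1}" for s in range(NUM_SONNETS)]
    for pos in range(NUM_LINES)
]

# Total number of distinct poems: 10 choices per line, 14 independent lines.
total = NUM_SONNETS ** NUM_LINES
print(total)  # 100000000000000, i.e. one hundred trillion

# Assemble one poem by choosing a variant independently for each line position.
poem = [random.choice(line_variants) for line_variants in variants]
print("\n".join(poem))
```

The point of the sketch is simply that the generative mechanism is trivial; the scale comes entirely from independent choices compounding across positions.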
Two decades later, the ALAMO (Atelier de Littérature Assistée par la Mathématique et les Ordinateurs) group was born, dedicated to the exploration of computer-assisted literature. The group pioneered new ways of combining human creativity with algorithmic processes, for example by generating poems via algorithmic permutations, creating branching narrative paths determined by mathematical formulas, or developing software tools capable of producing literary works (the so-called “littéraciels”).
Neither Oulipo nor ALAMO regarded the machine as an “autonomous artist”; rather, they saw it as a valuable collaborator in the creative process. The idea underpinning these automated literary works was that their value lies not solely in the algorithms that generate them, but also, and perhaps above all, in their capacity to trigger emotional resonance in the human mind. Their significance, in other words, lies not in the machine that generates the text, but in the human who interprets it. The same idea can be applied to AI-generated art.
While Ted Chiang is right to say that Gen AI lacks intentionality and decision-making capabilities, his claim that AI-generated art is devoid of meaning may overlook the impact that such art can have on its audience. The interpretation of art is a deeply subjective experience. If Queneau’s mechanical permutations can produce poetry that resonates with readers, so too can AI-generated works evoke emotions, provoke thought, and even inspire new creations.
Chiang’s concerns about the attribution of false meaning and significance to AI-generated content are understandable. Yet they should not lead us to categorically reject the potential for AI to contribute to the artistic process. Instead, the debate on whether generative AI can produce art might benefit from a more nuanced approach. While AI may not “intend” to express meaning in the way humans do, it nonetheless plays a role in the artistic process by producing, albeit unintentionally, works that can be meaningful to human audiences. In particular, if we acknowledge that art is not (just) about the creator’s intent but (also) about the viewer’s experience, then AI can in fact make art: art that is derived from a vast pool of human culture, interpreted by human minds, and appreciated for its ability to resonate, challenge, and inspire humanity.


