Issue 1/2024


ArtGPT

Editorial


Everyone is talking about AI. Since various text and image generators were unleashed on humanity, discussions have been raging on many levels: whether this completely undermines the value of human creativity; whether it is gradually making human actors superfluous in more and more areas; whether it opens the door to counterfeiting and deep fakes; and whether it encourages a potentially authoritarian regime that turns us all into technological subjects. However, questions like these are just the tip of an iceberg that has in fact been growing for decades and is only now coming to the attention of a wider public. One could also say that the various – and indeed very different – AI applications have reached such a critical mass that hardly any area of society remains unaffected by them.
It is precisely this “massing”, in relation to the arts and the cultural field, that we want to look at in more detail in this issue. The focus here is less on the problem of whether the use of AI will at some point render everything human obsolete. This warning, which has been sounded time and again since the dawn of the machine age, reflects little more than a desperate desire to hold on to the exceptional status of Homo sapiens. In contrast, we want to explore the broader question of the extent to which human, non-human and systemic forms of intelligence and creativity have always been much more closely interwoven than is generally assumed. The topic here is therefore also the more variable human-machine couplings that underlie the current scenario – in a discursive setting that often enough oscillates between thoughtless euphoria (regarding the use of innovative apps in everyday life) and exaggerated panic reactions (AI as a malicious dictator).
The contributions in this issue, entitled “ArtGPT” in reference to the infamous text generator ChatGPT, attempt to leave this dualistic view behind. Right at the beginning, Anuradha Vikram asks to what extent artists who make use of various AI applications occupy a kind of middle position between authorial users and vicarious agents forced into blunt passivity. Vikram also addresses the copyright concerns that are being voiced with increasing frequency in relation to the training of AI models on billions of unauthorized data sets. This concern is also taken up in a contribution by a ten-strong US collective of authors, which highlights and critically discusses the specific impact on cultural professionals.
A roundtable with the artist Manu Luksch, the computer scientist Arthur Flexer and the author Thomas Raab also explores these effects, focusing on current art production in the context of AI from different perspectives. How can generative processes be used for artistic purposes in general without prematurely throwing all criteria of creativity overboard? Are established cultural techniques such as sampling, appropriation and remixing undergoing a fundamental revision here? And what possibly stereotypical aesthetics does artistic work with AI promote?
In the conversation with AI artist Beth Frey included here, what becomes evident is that “generic” art – as opposed to genuinely “generative” art that profoundly transforms what has gone before – is gaining the upper hand. In his essay, Clemens Apprich explains the extent to which this can be traced back to the theoretical model of thought underlying much of today’s AI. According to Apprich, the implementation of the so-called “connectionist” approach in neural networks has led to a prioritization of inductive methods (as opposed to other logical methodologies). This can also be seen in the artistic results in question, which sometimes lack central features of creativity, such as intuition or the deliberate use of methods against their intended purpose. The latter is also emphasized by Louis Chude-Sokei, who recalls the history of “dysfunctional” uses of autonomous systems, especially in African-American culture, which often led to surprising, sometimes historically groundbreaking results.
Questions of power and violence, whether spoken or unspoken, are always inscribed in the breakthrough of new technologies. In their interview, Yannick Fritz and Meredith Whittaker, President of the Signal Foundation, focus on the concentration of power in the hands of the big tech companies as an effect of the expansion of AI. As Whittaker states, what is increasingly being overlooked is, among other things, the concrete working conditions to which the many thousands of people working in the AI sector are subject. Against this backdrop, it is important to keep in mind the extent to which the history of computer science is intertwined with the continuous expansion of economic, social and political technologies of power. To this, Anthony Downey adds the equally relevant military aspect. According to his essay, this can be paradigmatically traced in the development of algorithmic systems of machine vision, which has been closely associated with (neo-)colonial endeavors.
From all of this, it should become clear what major transformations have already been triggered by the spread of AI. “ArtGPT” attempts to make these transformations comprehensible, starting from the field of art and progressing through various interdisciplinary realms, without granting any single (e.g. social or economic) weighting sole primacy in the process.