Machine Visions

Roberto Alonso Trillo, Peter Nelson, Marek Poliks (Composer)

Research output: Non-textual form › Exhibition

Abstract

This exhibition explores how machine learning tools are being integrated into artistic practice. The works on show are the result of a two-year exploration of how machine learning can be used to synthesise music and 3D objects. Since the project began, online visual culture has reacted to and absorbed a host of new techniques, from image recognition to style transfer, to natural language synthesis and, more recently, the text-to-image pipelines offered by tools such as Midjourney and DALL·E. Underneath these rapidly evolving creative toolkits lies a common computational approach: a dataset, a neural network, and a newly synthesised output based on the features the network can identify in the original dataset. As the utility of these tools and the quality of their results improve, they have spawned a series of cultural debates: who ‘owns’ the collective cultural databases on which these systems are trained, and who therefore owns the works that these systems generate? Is there a tipping point at which the balance between human creative input and automated machine output shifts so far that we no longer consider the human to be the author of the work? In her overview of modern visual communication, Johanna Drucker notes that representational strategies evolve historically with changes in technological production, from the relationship between 16th-century developments in optics and Renaissance painting to mechanised assembly lines and the industrial geometric abstractions of modernist artists such as Paul Klee and Wassily Kandinsky. Considered within this broader trajectory, what we are witnessing is human creativity once again adapting to a paradigm shift, namely that of automation and artificial intelligence. It would be difficult to produce a definitive exhibition of how machine learning is changing the creative process, simply because these techniques are being integrated so quickly and across such a wide range of applications and tools. Instead, this exhibition presents a bespoke exploration of three techniques: synthesising three-dimensional shapes, synthesising music, and synthesising human motion. We present artworks, sound installations, and musical performances made with these tools, alongside educational panels explaining the machine learning approaches behind the works. We hope that this exhibition can make a modest contribution to the rapidly evolving conversation around machine learning, artificial intelligence, and creativity.
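
The shared pipeline described above (a dataset, a neural network that learns its features, and a newly synthesised output) can be sketched in a few lines of code. The sketch below is purely illustrative and is not the tooling used in the exhibition: it trains a tiny NumPy autoencoder on a toy dataset of noisy points on a circle, then decodes perturbed latent codes to synthesise new points that resemble, but do not copy, the training data. All names, sizes, and parameters are assumptions chosen for the example.

# A minimal, illustrative sketch (not the exhibition's actual tooling): a tiny
# NumPy autoencoder learns the features of a toy dataset and then synthesises
# new output by decoding perturbed latent codes.
import numpy as np

rng = np.random.default_rng(0)

# 1. Dataset: noisy points on a unit circle.
angles = rng.uniform(0.0, 2.0 * np.pi, size=(512, 1))
data = np.hstack([np.cos(angles), np.sin(angles)]) + 0.05 * rng.normal(size=(512, 2))

# 2. Neural network: a 2 -> 8 -> 1 -> 8 -> 2 autoencoder with tanh activations.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
W3, b3 = rng.normal(scale=0.5, size=(1, 8)), np.zeros(8)
W4, b4 = rng.normal(scale=0.5, size=(8, 2)), np.zeros(2)
lr = 0.05

def forward(x):
    h1 = np.tanh(x @ W1 + b1)
    z = np.tanh(h1 @ W2 + b2)   # one-dimensional latent code
    h2 = np.tanh(z @ W3 + b3)
    out = h2 @ W4 + b4          # reconstruction of the input
    return h1, z, h2, out

# Train by backpropagating the mean squared reconstruction error.
for step in range(2000):
    h1, z, h2, out = forward(data)
    d_out = 2.0 * (out - data) / len(data)
    dW4, db4 = h2.T @ d_out, d_out.sum(0)
    d_h2 = (d_out @ W4.T) * (1.0 - h2 ** 2)
    dW3, db3 = z.T @ d_h2, d_h2.sum(0)
    d_z = (d_h2 @ W3.T) * (1.0 - z ** 2)
    dW2, db2 = h1.T @ d_z, d_z.sum(0)
    d_h1 = (d_z @ W2.T) * (1.0 - h1 ** 2)
    dW1, db1 = data.T @ d_h1, d_h1.sum(0)
    for param, grad in [(W1, dW1), (b1, db1), (W2, dW2), (b2, db2),
                        (W3, dW3), (b3, db3), (W4, dW4), (b4, db4)]:
        param -= lr * grad      # in-place gradient descent update

# 3. Synthesis: perturb learned latent codes and decode them into new points
#    that resemble, but do not copy, the training data.
_, z, _, _ = forward(data)
new_z = z[:8] + 0.1 * rng.normal(size=(8, 1))
synthesised = np.tanh(new_z @ W3 + b3) @ W4 + b4
print(synthesised)

The same dataset-network-output pattern scales, with very different architectures, to the image, music, 3D-shape, and motion synthesis techniques discussed in the exhibition.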
Original language: English
Publisher: Osage Gallery
Media of output: Other
Publication status: Published - 19 Nov 2022

Scopus Subject Areas

  • Arts and Humanities (miscellaneous)
  • Music
  • Artificial Intelligence
