Not Art Yet: Algorithms as Contemporary Assembly Lines
In June 2015, Alexander Mordvintsev, Christopher Olah and Mike Tyka published a post on the Google Research blog about a visualization tool written to gain insight into how artificial neural networks process their data. Their specific focus was to better understand how the interconnected layers of an artificial neural network communicate and refine the information each of them develops – what gets created when, and how it is then adapted while being passed from one layer to the next. They describe a self-feeding system whose feedback loop's initial layers tend to highlight and produce visual artefacts (e.g. by strengthening contours), and whose later, “higher level” layers use these earlier computations' results to further manifest visual ciphers. They note that “[b]y itself, [this system] doesn’t work very well”, but that by manually inserting a visual bias (a specific animal, a building, etc.), the algorithm is able to manifest these objects – even from the neutrality of random visual noise. To this end, all layers work serially to gravitate towards specific visual preferences – they curate. Trained on crowdsourced Big Data imagery, the algorithm thus visualizes what the virtual neural network imagines things to look like. The blog post uses the image of a dumbbell as an example of a neural network's semantic error: the AI mistakenly seems to always include a lifting person's hand, presumably because its training images rarely show dumbbells without one. Understanding the AI's projected semantics is crucial to its development, with misinterpreted data revealing what the algorithm has actually learned.
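The feedback loop described above can be sketched as gradient ascent on the input image. The following is a minimal, self-contained toy in NumPy – not the Google team's implementation – assuming a single layer with fixed random weights as a stand-in for a trained network; the function name `dream` and all parameters are illustrative:

```python
import numpy as np

# Toy sketch of DeepDream-style amplification: repeatedly nudge the input
# "image" so that a chosen layer's activations grow stronger, reinforcing
# whatever patterns the layer already responds to.

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))          # toy layer: 64-pixel "image" -> 16 features

def activations(image):
    return np.maximum(W @ image, 0.0)      # ReLU responses of the layer

def dream(image, steps=50, lr=0.01):
    """Gradient ascent on the input to maximize the layer's response."""
    image = image.copy()
    for _ in range(steps):
        a = activations(image)
        # gradient of 0.5 * ||relu(W x)||^2 with respect to x is W.T @ relu(W x)
        grad = W.T @ a
        image += lr * grad / (np.abs(grad).mean() + 1e-8)   # normalized step
    return image

noise = rng.random(64)                     # start from random visual noise
before = np.linalg.norm(activations(noise))
after = np.linalg.norm(activations(dream(noise)))
print(before, after)                       # the layer responds more strongly after "dreaming"
```

In a real network the same ascent is run against a deep layer of a trained model, which is why recognizable objects (animals, buildings) emerge from noise once that layer's learned biases are amplified.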
Ultimately though, the team wonders in its blog post “whether neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general”. Might the algorithm described here have more potential than your average Photoshop filter? Could it be used for artistic processes in a way that matters aesthetically – or even societally, as is expected from contemporary art? Could the underlying algorithm constitute a self-learning, self-adaptive system similar to contemporary art practices, or even be related to the cultural, non-teleological drive of our species?
I would like to elaborate on art as a contingent process more closely related to developers researching their topic (here: neural networks) than to the mechanical application of the results (here: an algorithmic visualization method) of any previously contingent process. Although this case initially produces precanonized visual data, the highly volatile nature of canonization means that anything new quickly becomes established – and hence loses its freshness. The attribution of "newness" is therefore less relevant when found within an object than when constituting a process's target. Mechanically applying an algorithm stands far apart from a definition of art that cares about finding individual parameterizations for quality, since there is no longer any need to gain a personal understanding of the process's quality attributes. Calling an algorithm's output "art" turns out to be similar to calling a Fordist assembly line's output "art": it is the consequence of treating art as a mechanistic, teleological dynamic.
Art and Craft
While working on my PhD thesis on the effects of digital culture on the expectations of contemporary painting, it became clear that a purely image-analytical approach would not facilitate a clear understanding of any visual medium's attribution. Instead, although understood by some as a “fool’s errand”, I realized that the thesis required a clear-cut definition of contemporary art as its foundation – without it, discussing media specifics seemed inappropriate: how could a specific medium be discussed without first knowing the system it operates within?
I ultimately defined art as a process towards understanding individual quality attributions – resulting in its native media independence and highly volatile nature: artistic practices are, most of all, dynamics with only temporary validity. Media-specific thinking in terms of e.g. painting, dancing, or cooking is not required to discuss art ontologically, and doesn't necessarily help in understanding artistic processes. Instead, it can even block the view on what art can be.
Craft, on the other hand, denotes the mechanical, latently repetitive application of a known process towards a preknown, teleological quality (a "goal"). Just like art, craft is a volatile dynamic – changing not only its object, but also the craftsman's individual mental and physical knowledge of it (something not necessarily the case with algorithms). Repeatedly doing something might feel like cloning actions, but actually yields different results because of the operation's impact on both object and subject.
I understand art and craft as the two apparent sides of a Möbius strip – locally distinct, yet forming one continuous surface: every action within the artistic process (whether mental, physical, digital, algorithmic, etc.) is in constant individual and societal flux between the poles of art and craft, with neither being able to exist without the other. Making a painting can fulfill art's criterion as long as the painting process's qualities are unclear to its author. Cooking a soup can qualify as well, as long as its "quality" hasn't individually been clearly understood. If instead it is known (which is usually the case once a “recipe” has been formulated), the process qualifies as craft.
Image: Art and Craft as one Dynamic, by Christian Bazant-Hegemark, 2015
Both terms' modernist narratives therefore need to be transcended: there is no generally attestable beauty in art's objects or processes, and there is no hierarchy between art and craft. Calling something "art" or "craft" does not elevate or diminish it.
Assembly Lines
Canonization dynamics are highly relevant for understanding the connection between art and craft: what is initially new and fresh (precanonized) quickly becomes canonized on individual and/or societal levels (not necessarily influencing each other). Once individually canonized, mechanical repetition can be established to turn an artistic process into (individual) craft; this establishing process can itself usually be understood as art: mechanizing/streamlining an open process requires an understanding that usually doesn't match the initial artistic process.
Image: General Canonization Dynamic, by Christian Bazant-Hegemark, 2015
The Google Research blog post includes algorithmically produced imagery which at first seems visually fresh and precanonized. Skimming through more than just a few of these "deep dream" images (e.g. at http://psychic-vr-lab.com/deepdream/) helps to understand their algorithmic base: strong brilliance and contrast, strengthened contours, and a preference for curves. Like a Photoshop filter, the neural network's visualization operates on adapted output from some initial image input. What makes the neural network's algorithm stand out is its capacity for a curatorial preference, used to implement an allegedly individual goal – to have it operate autonomously. This must not be mistaken for agency though – it's still the human user who chooses the visual bias, not the algorithm itself.
Let's compare this consistent, algorithmically produced and guaranteed likeness to the serial consistency offered by Fordist production lines. Do they constitute art? Isn't it rather the process (potentially resulting in the definition of said assembly line) that can qualify as such – because it is there that newness (precanonization) is aimed for? Shouldn't such autonomous agency be treated as art's defining factor? How do algorithms satisfy these parameters?
Removing humans from production processes will always result in minimized contingency, thereby purifying any algorithmic process and output. But isn't art (and culture) exactly about contingency? Can't craft be translated to algorithmic form precisely because of its lack of contingency – and isn't that the reason why art doesn't yet exist in algorithmic form?
Image: Craft and Contingency, by Christian Bazant-Hegemark, 2015
Art and craft form a circular topology in which they continuously mutate into each other – neither can exist without the other, and both happen only because of the other. The question of “which came first” constitutes another chicken/egg causality dilemma.
Where Fordist mass production's core is the production line, post-Fordism extends it to embrace a later generation's individualization and fragmentation – in a way, Fordism establishes and parallelizes craft processes, into which post-Fordism tries to insert "art" as an illusion of contingency (industrially produced deckle edges on books are an easy example of this).
Today's Big Data algorithms then natively blend Fordist and post-Fordist strategies to implement even more streamlined production environments; by using masses of anonymized user data, Big Data appears to be smart without requiring a human workforce: artificial intelligence. By offering an unexpected update on non-negotiable bartering agreements (free services instead of monetary payment, in return for unlimited usage of user data), Big Data's AI is based on the unlimited harvesting of individual data. This leads to the production of objects without physicality, creating an environment that's virtual except for its economic worth. All of this seems to only be possible because of a societal change in economic expectations: we don't need physical goods as much as even just ten years ago – lives have become far more virtual than expected (in 2015, Amazon.com's biggest profit gain came not from the sale of physical objects, but through AWS, its virtual cloud computing service). So many physical domains have been replaced by virtual ones that Star Trek's holodeck can easily be understood as the (virtual) invention of a pre-virtual mind: communication, education, leisure activities; movies, books, music, letters, money; transportation, physical storage spaces, etc.
In this global economy, humans are no longer the limiting factor: platform efficiency can be improved through the parallelization and virtualization of processes (hence AWS' success), transcending health insurance or social security payments – and many other constraints of physical existence. Algorithms are a new workforce that only requires humans to be established – but not to be operated. The algorithmic creation of algorithms would diminish this requirement for human agency even further – and it is only then that we'd have to discuss art as an algorithmic process.
In a way, the autonomous agency that makes humans such excellent problem solvers is what still keeps us far apart from machines. It's also what drives culture, and why art matters: as a species, we survive because we expand our cultural norms.
Algorithms
Since they are not teleological, artistic processes are always pre- or post-Fordist – they can't easily be assembly-lined, whether their medium is painting or the team effort of creating a video game. Only craft can be Fordist (e.g. the repetitious creation of video game assets), and therefore be represented in algorithmic form: algorithms constitute craft – it's only their creation that can qualify as art. Algorithms can be understood as artistic processes once they are able to exist according to their own agency.
Postmodernist individualism severely disrupts the Fordist production line: in addition to individualized (physical) mass products (individually engraved iPods, custom covers for smartphones, etc.), contemporary digital mass products tend to contain virtual elements that implement their individualization natively – something an earlier era of desktop computing didn't have to account for. Search engines and cloud-based services (archives, document editing, mail services, etc.) are all natively virtual platforms, replacing a previous generation's need for their physical predecessors.
Because it works along a predefined process, Google's deep dream algorithm can clearly be located as craft – any deterministic algorithm with predefined goals can only count as such. It's only self-developing, self-modifying algorithms that could implement artistic processes – and with their individualized agenda, their own agency, they most probably would. Their media will be as varied as their results: for that is the single attribute uniting art through the millennia – the striving for the unknown, for the precanonized; for what is culturally essential for later generations but doesn't yet exist: fire, the wheel, telegraph and phone, the touch paradigm, etc. Once machines are able to adapt themselves individually in a currently unknown mode of agency, we will be able to discuss the difference between humans and machines for real. For now, agency in algorithms is mostly faked by processing global data sets.
Ultimately, algorithms will gradually replace all mechanical, repetitive labor (sorting, assessing, driving etc.):
"Interconnectedness and increasing computational power will continue to automate work and outsource any job that can be standardized. New businesses are employing fewer employees, while manufacturing is moving to an increased use of robots." (http://jarche.com/2015/06/turmoil-and-transition/)
Algorithms continuously dig deeper into traditionally analog domains as these slowly get digitalized (the Google Books Library Project digitalizing preexisting objects; the Internet of Things digitalizing previously nonexistent attributes of traditionally analog objects). For now though, art can well be defined as whatever can't be produced algorithmically.
An algorithm designing and implementing a process similar to an artist entering an empty studio – creating meaning from nothing, rephrasing and permuting semantics, highlighting individual preferences. Autonomous algorithms with an individual agenda.
Art not only as culture, but as humanity's defining driving force. Once code is able to create art, we will have to rethink what it means to be human.
About the author: After working for Rockstar Vienna as a programmer for six years, Christian Bazant-Hegemark studied Fine Arts at the Academy of Fine Arts (Vienna/Austria), and is currently awaiting the viva voce examination for his PhD thesis on the influence of digital culture on contemporary painting (Elisabeth von Samsonow, Felicitas Thun-Hohenstein). He lives and works in Vienna/Austria. You can check out his work at www.bazant-hegemark.com, and follow his research on contemporary painting at this blog, www.beyondmimesis.tumblr.com.