[iDC] notes on media remix

Lev Manovich lev at manovich.net
Wed Apr 19 04:22:44 EDT 2006

Greetings to everybody

My Pratt Manhattan gallery lecture earlier this month was my first public
presentation of some ideas on media remix that I have been developing lately; a
long text, Understanding Metamedia, which goes into detail, will be posted on
my web site next month.

I am not sure whether I succeeded in presenting the ideas clearly this
time - but for now, I wanted to add to the discussion two text statements
which summarize what I tried to convey in the lecture and what I am trying
to develop in more detail in the forthcoming text.

I am giving another talk in NYC this coming Saturday, April 22, where I will
try to approach these ideas again from a somewhat different POV than in the
previous talk.


The first segment is from my article "Abstraction and Complexity" (2003):

One result of the shift from separate representational and inscription media
to the computer metamedium is the proliferation of hybrid images - images that
combine traces and effects of a variety of media. Think of a typical
magazine spread, a TV advertisement, or a home page of a commercial web site:
maybe a figure or a face of a person against a white background, some computer
elements floating behind or in front, some Photoshop blur, funky Illustrator
typography, and so on. (Of course, looking at Bauhaus graphic design we
can already find some hybridity, as well as a similar treatment of space combining
2D and 3D elements - yet because a designer had to deal with a number of
physically distinct media, the boundaries between elements in different
media were sharply defined.)

This leads us to another effect - the liberation of the techniques of a
particular medium from its material and tool specificity. Simulated in
software, these techniques can now be freely applied to visual, spatial, or
audio data that has nothing to do with the original medium. In addition to
populating the tool palettes of various software applications, these
virtualized techniques came to form a separate type of software - filters.
You can apply reverb (a property of sound when it propagates in particular
spaces) to any sound wave; apply a depth-of-field effect to a 3D virtual
space; apply blur to type, and so on.
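To make the idea of a virtualized technique concrete, here is a minimal sketch (a hypothetical illustration, not part of the original argument) of a blur written as a pure function over a 2D grid of numbers. Once the technique is just an algorithm over pixel values, it no longer cares what the data "originally" was - a photograph, rendered type, or a synthetic image can all be passed through the same function:

```python
def box_blur(pixels, radius=1):
    """Average each pixel with its neighbors within `radius`.

    `pixels` is any 2D list of numbers - the filter is indifferent
    to whether the raster began as photography, type, or graphics.
    """
    h, w = len(pixels), len(pixels[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += pixels[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

# A crude 5x5 bitmap of the letter "T" (1.0 = ink, 0.0 = background):
letter_t = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

blurred = box_blur(letter_t)  # "blur applied to type"
```

Applying the same `box_blur` to a scanned photograph or a 3D render requires no change to the code - which is the point: the technique has been detached from any one medium.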

The last example is quite significant in itself: simulation of media
properties and interfaces in software has not only made possible the
development of numerous separate filters but also whole new areas of media
culture such as motion graphics (animated type which exists on its own or
combined with abstract elements, video, etc). By allowing the designers to
move type in 2D and 3D space, and filter it in arbitrary ways, After Effects
has affected the Gutenberg universe of text at least as much, if not more,
than Photoshop affected photography.


The second segment comes from the new long text Understanding Metamedia,
which will be available shortly. In this segment the idea of media
remixability is developed in relation to the visual language of moving images.
However, just as I tried to do in the lecture, I am working to apply the
idea of media remixability to other areas of digital media.


The use of After Effects is closely identified with a particular type of
moving images which became commonplace in large part because of this
software - "motion graphics." Concisely defined by Matt Frantz in his Master's
thesis as "designed non-narrative, non-figurative based visuals that change
over time," motion graphics today include film and television titles, TV
graphics, dynamic menus, the graphics for mobile media content, and other
animated sequences. Typically motion graphics appear as parts of longer
pieces: commercials, music videos, training videos, narrative and
documentary films, interactive projects.
While motion graphics definitely exemplify the changes that took place
during the software revolution of the 1990s, these changes are broader.
Simply put, the result of this revolution is a new hybrid visual language of
moving images in general. This language is not confined to particular media
forms. And while today it manifests itself most clearly in non-narrative
forms, it is also often present in narrative and figurative sequences.
For example, a music video may use live action while also employing
typography and a variety of transitions done with computer graphics
(example: video for Go by Common, directed by Convert / MK12 / Kanye West,
2005). Or it may embed the singer within an animated painterly space (video
for Sheryl Crow's Good Is Good, directed by Psyop, 2005).
mix typography, stylized 3D graphics, moving design elements, and video
(Itsu for Plaid, directed by Pleix collective, 2002).
In some cases, the juxtaposition of different media is clearly visible
(examples: music video for Don't Panic by Coldplay; main title for The
Inside by Imaginary Forces, 2005). In other cases, a sequence may move
between different media so quickly that the shifts are barely noticeable
(GMC Denali "Holes" commercial by Imaginary Forces, 2005). In yet other
cases, a commercial or a movie title may feature continuous action shot on
video or film, with the image periodically shifting from a more
natural to a highly stylized look.
While the particular aesthetic solutions vary from one piece to the next and
from one designer to another, they all share the same logic: the appearance
of multiple media simultaneously in the same frame. Whether these media are
openly juxtaposed or almost seamlessly blended together is less important
than the fact of this co-presence itself.
Today such hybrid visual language is also common to a large proportion of
short "experimental" (i.e. non-commercial) films being produced for media
festivals, the web, mobile media devices, and other distribution platforms.
A large percentage of the visuals created by VJs and Live Cinema artists
are also hybrid, combining video, layers of 2D imagery, animation, and
abstract imagery generated in real time. (For examples, consult The VJ Book,
VJ: Live Cinema Unraveled, or web sites such as www.vjcentral.com and
www.live-cinema.org.) In the case of feature narrative films and TV
programs, while they still rarely mix different graphical styles within
the same frame, many now feature highly stylized aesthetics which would
previously be identified with illustration rather than filmmaking - for
instance, the TV series CSI, George Lucas's latest Star Wars films, or Robert
Rodriguez's Sin City.

What is the logic of this new hybrid visual language? This logic is one of
remixability: not only of the content of different media or simply their
aesthetics, but their fundamental techniques, working methods, and
assumptions. United within the common software environment, cinematography,
animation, computer animation, special effects, graphic design, and
typography have come to form a new metamedium. A work produced in this new
metamedium can use all techniques which were previously unique to these
different media, or any subset of these techniques.
If we use the concept of "remediation" to describe this new situation, we
will misrepresent this logic - or the logic of media computing in general.
The computer does not "remediate" particular media. Instead, it simulates
all media. And what it simulates are not surface appearances of different
media but all the techniques used for their production and all the methods
of viewing and interaction with the works in these media.
Once all types of media met within the same digital environment - and this
was accomplished by the middle of the 1990s - they started interacting in
ways that could never have been predicted or even imagined before. For
instance, while particular media techniques continue to be used in relation
to their original media, they can also be applied to other media. (This is
possible because the techniques are turned into algorithms, all media is
turned into digital data stored in compatible file formats, and software is
designed to read and write files produced by other programs.) Here are a few
examples: motion blur is applied to 3D computer graphics, computer generated
fields of particles are blended with live action footage to give it an enhanced
look, a virtual camera is made to move around the virtual space filled with
2D drawings, flat typography is animated as though it were made from a
liquid-like material (the liquid simulation coming from the computer graphics field),
and so on. And while this "cross-over" use by itself constitutes a
fundamental shift in media history, today a typical short film or a sequence
may combine many such pairings within the same frame. The result is a
hybrid, intricate, complex, and rich visual language - or rather, numerous
languages that share the basic logic of remixability.
I believe that "media remixability," which begins around the middle of the 1990s,
constitutes a new fundamental stage in the history of media. It manifests
itself in different areas of culture, not only in moving images - although
the latter offer a particularly striking example of this new logic at
work. Here software such as After Effects became a Petri dish where computer
animation, live cinematography, graphic design, 2D animation and typography
started to interact together, creating new hybrids. And as the examples
mentioned above demonstrate, the results of this process of remixability are
new aesthetics and new media species which cannot be reduced to the sum of the
media that went into them. Put differently, the interactions of different
media within the same software environment produce new cultural species.

