In cooperation with the University for the Creative Arts, Farnham, the symposium examines the interactions between animation and audio from a scientific perspective. Researchers and artists were asked to submit contributions on the subject of Synaesthetic Syntax: Sounding Animation/Visualising Audio. This scientific/artistic survey is kicked off by the media artist Rose Bond, who offers insights into her artistic work in public spaces, followed by panel discussions on the topics of “Hearing Color Seeing Sound,” “In Front of Your Eyes and Ears,” and “The Kinaesthetics of Music and Vision.”
Synaesthetic Syntax – Keynote: Sounding Together – Choreographing the Unpredictable
Given the theme “Synaesthetic Syntax: Sounding Animation – Visualizing Sound,” this keynote will address the synesthetic pairing of animation and music. Rose Bond will take a cursory look at the experimental coupling of sonic vocabulary and abstract animation in the work of Hans Richter and Viking Eggeling, jump ahead to Stan VanDerBeek’s expanded cinema era in the 60s and 70s, and then fast-forward to three 21st-century sound/sight collaborations with symphonies. Her goal is to raise questions and evolve some notions about the ‘experimental’ and the political nature of expanded cinema. The latter half of the talk will address ‘syntax’ as in structures and ordered arrangements in what she likes to call the visual choreography of multi-screen animated projection. Rose Bond will concentrate on a recent work for Luciano Berio’s Sinfonia that was set to premiere on the 14th of March 2020. Two days before the opening, gatherings of 250+ people were banned. This keynote marks the first time excerpts of her animation will be screened publicly.
Synaesthetic Syntax – Expanded Cinema and Para Animation: More than Audio and Visual
Cinema communicates through the audio and the visual and rarely exploits or examines the possibility of operating beyond this twofold sensory relationship. This presentation proposes that in many works of expanded and experimental cinema, perceptual overspill exceeds the two more distant senses of seeing and hearing to encompass the proximity of touch, and that this third sense is also central to synaesthesia in cinema. Sound is received through vibrations that come into contact with membranes in the ear, and in this regard it is the sense of touch that enables us to hear. Certain works of expanded cinema deploy technologies of sound reproduction in ways that stimulate the whole body as a listening organ. In Bicycle Tyre Track (2012), Vicky Smith cycled along a 16mm strip of clear film with greased bicycle wheels, leaving black blocks and lines that both looked and sounded noisy. When the piece was performed at Tate Britain (2014), the bass was amplified to the degree that, upon projection, the tracks generated a pounding rumble that could be heard through the feet. If conventional cinema tends to disregard the potential of sound to touch our whole bodies, the visual is even less exploited for its capacity to contact us at a physical level. Nevertheless, some theories and practices do propose the visual in terms of its tactile qualities. Filmic imagery that is highly textural and emphasizes the material substrate of grain and exposure, for example, is difficult to see and so promotes a tactile gaze. Such imagery encourages a multisensory bodily relationship between the viewer and the image (Marks 2000: 171–2). Annabel Nicolson’s Matches (1975) is an example of haptic looking that lies at the extremities of expanded cinema. The piece demands absolute darkness, broken only by the flicker of several matches lit by performers who read for the brief duration of the flame.
In considering a nomadic relationship to space, Deleuze and Guattari suggest that stretches of snow and sand are ‘smooth’; navigated at close range, they become haptic. In contrast to these bright expanses, where reflective light makes long-range sight difficult, Matches works with the material of darkness. When the light is extinguished, the visual space is re-framed to become smooth and tactile. Further, the role of flicker and intermittent seeing in Matches gives the work a quality of para-animation. The haptic qualities of Matches and other pieces by Nicolson will be discussed alongside Smith’s works, in terms of how consideration of the role of touch in the audio/visual hierarchy of cinema produces novel forms of animation. The haptic will be proposed as a means to stimulate a rethinking of the relationships between the senses and as a way of opening up new readings of expanded cinema. Deleuze, G. and Guattari, F. (1987). ‘The Smooth and the Striated’, A Thousand Plateaus: Capitalism and Schizophrenia, trans. Brian Massumi, Minneapolis: University of Minnesota Press, 474–500. Marks, L. (2000). The Skin of the Film: Intercultural Cinema, Durham, NC: Duke University Press.
Synaesthetic Syntax – Presence and Interaction in Synaesthetic Space
In this 20-minute lecture, Alexander Stublić will introduce the interactive VR project “Innerland,” which moves between synesthesia and multi-perspective narration. The art project is based on the piano concerto Opus 25 by Viktor Ullmann (1898–1944). It consists of a technically complex new recording of the piano concerto in 3D sound and a virtual reality media art installation. In the interactive installation “Innerland,” genre-spanning film and sound recordings made during the 3D music production are combined to create dreamlike scenes. In the medium of the game engine, a visual language is created for the cinematic presentation of a concert, one that enables multi-perspective views. Together with the viewer’s free choice of point of view in VR, an alternative spatial narrative emerges. Documentary/biographical scenes form a separate walk-in background to a largely abstract image-art space experience, in which the work establishes a strong thematic reference to early abstract film works of the 1920s and 1930s, the period in which the piano concerto was also created. Objects, colours, movements, and sound are directly intertwined, as all instruments were recorded individually and can thus be freely grouped. Viewers decide for themselves how they want to attend the music performance. Alexander Stublić’s involvement with abstract experimental film began about 20 years ago. The aspects mentioned will be shown in extracts from the installation.
Synaesthetic Syntax – A Hidden Order – Revealing Connections Between Geometry and Music Through Harmony and Mathematics
A Hidden Order is an award-winning project that explores the relationship between music and the visual arts. The project has been performed or exhibited across the globe, including at the Saatchi Gallery in London, the Abu Dhabi Art Fair, Istanbul, Miami, Seoul, the Athr Gallery in Jeddah, Saudi Arabia, and the Aga Khan Museum in Toronto. At the core of the project is Sama Mara’s groundbreaking method that translates directly between music and visual art. The system reveals a one-to-one correspondence between rhythm and pattern, while musical notes are translated into colour. This is made possible by drawing upon the mathematical foundations of both music and pattern and applying geometric principles found in traditional Islamic arts and modern mathematical fields, including fractal geometry and aperiodic tiling. The method of translation was originally implemented in a computer program written in Processing, making possible a real-time translation from music to visual art for live performances, animations, an interactive platform, and high-quality digital prints. The reverse translation, from visual art to music, was implemented by hand and was a core part of the creative process that led to the original showing of A Hidden Order in 2014, made in collaboration with composer Lee Westwood. The program is currently being recoded in C++ and TouchDesigner, enabling a far richer and more captivating visualisation. For Expanded Animation 2020, Sama Mara presents the core aspects of the method at the heart of A Hidden Order, shares a selection of the works, and discusses aspects of the unique creative processes that were explored. For example, it is possible to design and create geometric patterns entirely through musical composition, to create musical motifs from a geometric pattern, and to translate back and forth between the two mediums during the creative process to arrive at a final work that is at once a musical piece and a visual artwork.
We are also able to apply musical compositional devices directly to a visual counterpart: the development of themes, the layering of motifs, and the structuring of the final composition, all so common in musical composition, have equivalent visual representations.
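The abstract does not disclose the project’s actual mapping, but the idea of an invertible, one-to-one translation between notes and colour can be sketched generically. The following Python sketch is an illustration only, not Sama Mara’s method: it assigns each of the twelve pitch classes its own hue on the colour wheel and shows that the translation can be reversed exactly, so a motif can round-trip from music to colour and back.

```python
import colorsys

def pitch_class_to_rgb(midi_note, saturation=0.8, value=0.9):
    # Map the 12 pitch classes onto the colour wheel: one hue per semitone.
    hue = (midi_note % 12) / 12.0
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
    return tuple(round(c * 255) for c in (r, g, b))

def rgb_to_pitch_class(rgb):
    # Invert the mapping: recover the pitch class from the hue alone.
    h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    return round(h * 12) % 12

motif = [60, 64, 67, 72]                       # C major arpeggio, MIDI numbers
colours = [pitch_class_to_rgb(n) for n in motif]
recovered = [rgb_to_pitch_class(c) for c in colours]
```

Because the hue step per semitone is coarse (1/12 of the wheel), the inversion survives the rounding to 8-bit RGB; a real system like A Hidden Order would additionally encode rhythm as geometric pattern, which this toy omits.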
Giusy Caruso, Bavo Van Kerrebroeck & Pieter Jan Maes
Synaesthetic Syntax – PIANO PHASE for Two Pianists in VR
The digital revolution across contemporary culture is leading towards a new way of thinking about and situating artistic practice, one that contributes to removing the boundaries between art and science/technology (Tanaka, 2014). Consequently, the interaction with technology in the performing arts is increasingly gaining prominence as a catalyst for the transformation of traditional aesthetic definitions of musical experience (Paine, 2002; Frederickson, 1989). The application of cutting-edge technological tools is driving an aesthetic renewal and, beyond classical approaches, artists and researchers are increasingly stimulated to achieve new performative actions and expressions (Vanoeveren, 2018; Leman, 2016). How does the impact of technology modify and renew the aesthetic and creative approach to music interpretation and practice?
To what extent does the interaction with cutting-edge technology affect and potentiate performers’ musical expression?
How can technological and immersive musical experiences enhance the communication of music and the engagement of the contemporary audience? To explore ways to apply interactive and immersive technology in music performance, this project proposes a (re)interpretation of the contemporary piece “Piano Phase” (1967), originally written for two pianos (or piano and tape) by the American composer Steve Reich. The multimedia performance was conceived at the ASIL lab in Ghent, as part of the research on musical expressivity and interaction with technology conducted at the Institute for Psychoacoustics and Electronic Music (IPEM) of Ghent University. The minimalist piece challenges two pianists to perform a repetitive twelve-note melodic figure using Reich’s “phasing” technique. Starting in sync, the two pianists gradually shift the twelve successive notes against each other, with occasional re-alignments. Reich’s artistic challenge was to experience and show the process of phasing and dephasing in music performance. This perspective was the triggering idea that led to conceiving a multimedia, innovative, immersive performance for piano and electronics in VR. A pianist was asked to perform the piece in real time by interacting with the previously recorded sound and with her avatar projected in VR. The performance presents an immersive audio-visual experience for listeners, performed by one live, ‘real’ pianist interacting with her virtual counterpart. Both pianists are placed inside a virtual environment, which can be accessed through a VR head-mounted display. The virtual counterpart is created using motion capture and audio recording transformed into a controlled animation during the performance. The live, ‘real’ pianist was originally also motion-tracked and animated in the VR environment, but for this performance only the hands will be tracked and visualised using a Leap Motion controller.
The live pianist plays on a digital keyboard generating MIDI output, from which her tempo and relative phase in the piece are tracked. This tempo and phase information is then used to dynamically couple and control the virtual pianist using the Kuramoto model for non-linearly coupled oscillators. The “mirroring process” of a pianist playing together with her avatar in virtual reality pursues both a scientific and an artistic aim. The project has several scientific goals. First, we want to investigate the differences in subjective experience, kinematics, and musical output when playing with real or virtual musical partners. Second, we want to create controlled conditions allowing us to investigate underlying principles of musical interaction from the viewpoint of coordination dynamics. The artistic goal is to experiment with creative ways to integrate technology into music performance, in this case by a re-interpretation of existing repertoire for piano and electronics in augmented reality. The aim is to boost innovation in music performance practice and to create new artistic and interactive performer-machine formats that enhance the communication of contemporary music to the audience by triggering immersive experiences in VR. Documentation: a demo recorded at ASIL LAB – De Krook, Ghent: https://youtu.be/GlVaMPCotzM
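The Kuramoto coupling mentioned above can be illustrated in a few lines. In this minimal Python sketch (a generic two-oscillator Kuramoto model, not the project’s actual implementation; the tempo and coupling values are invented), the virtual pianist’s phase is continuously pulled toward the live pianist’s phase, so an initial offset shrinks toward a small steady-state lag:

```python
import math

def kuramoto_step(theta_v, omega_v, theta_live, K, dt):
    # One Euler step of d(theta_v)/dt = omega_v + K * sin(theta_live - theta_v)
    return (theta_v + (omega_v + K * math.sin(theta_live - theta_v)) * dt) % (2 * math.pi)

def wrapped_gap(a, b):
    # Smallest angular distance between two phases, in [0, pi]
    d = (a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

omega_live = 4.0 * math.pi   # live pianist: steady 2 cycles/s
omega_v = 4.2 * math.pi      # virtual pianist: slightly faster on its own
K, dt = 8.0, 0.01            # coupling strength and time step
theta_live, theta_v = 0.0, math.pi / 2   # start a quarter-cycle apart

initial_gap = wrapped_gap(theta_v, theta_live)
for step in range(2000):
    theta_live = (omega_live * step * dt) % (2 * math.pi)   # tracked from MIDI in practice
    theta_v = kuramoto_step(theta_v, omega_v, theta_live, K, dt)
final_gap = wrapped_gap(theta_v, theta_live)
```

With K much larger than the tempo detuning, the two oscillators phase-lock; lowering K below the detuning would let them drift apart again, which is precisely the phasing/dephasing continuum that Reich’s piece stages.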
Synaesthetic Syntax – Towards a “Live Synaesthetic Visualisation”? Considerations in Artistically Visualised Sound
The concept of live audio visualisation and its related techniques are commonplace in today’s art and media world. Owing to its technical accessibility, many art-music and popular-music performances use generated light, lasers, and projections that follow the rhythm or other modalities of the performed music. However, a gap still exists between the perceived aesthetic multi-sensory quality of live visualisations and that of pre-made animations created to visually reflect the qualities of music. This gap is mostly due to the fact that many live visualisation techniques rely on translating a single musical parameter (such as rhythm, pitch, or chord type) into a single visual parameter (brightness, colour, etc.), usually by means of mathematical correspondences. Furthermore, pre-made animations have the advantage of careful consideration during the creative process, making more integrated audiovisual approaches possible. While a performance with live visualisation can be prepared in a similar way, such intricacy is usually not possible in improvised performances. In this lecture, Umut Eldem will introduce the problem of “Live Synaesthetic Visualisation”: a possible live visualisation method that would take synaesthetic syntax and cross-modal correspondences as the starting point for translating multiple modalities of musical information, as an alternative to mapping singular mathematical correspondences. Such a live visualisation is valuable in both prepared and improvised audiovisual performances, as the spontaneity of the musician is reflected in the visual response. This results in a more integrated experience between the senses, and in an audiovisual unity that is traditionally captured in pre-animated visual music. The experience of shapes of different sizes, textures, and visual movement, along with the experience of colour, is present in certain cases of sound-colour “strong synaesthesia” [Martino, G., and Marks, L. E. 2001. “Synesthesia: Strong and Weak.” Current Directions in Psychological Science 10 (2): 61–65]. There also exists a correlation between synaesthesia and cross-modal associations to a certain degree, within the parameters of pitch height, timbre, brightness, and visual shape [Ward, J., Huckstep, B., and Tsakanikos, E. 2006. “Sound-Colour Synaesthesia: To What Extent Does It Use Cross-Modal Mechanisms Common to Us All?” Cortex 42 (2): 264–80]. Taking these correspondences and the qualities of synaesthesia as a starting point, it becomes possible to construct flexible live visualisation tools that are multi-modal (having multiple musical elements correspond to multiple visual elements), perception-based (rather than built on mathematical correspondences), and intuitive. Such an approach creates new and interesting possibilities for audiovisual performance. The lecture will introduce examples of historical visualisation methods, from the development of colour organs to current software. Possible tools for such multi-modal visualisation will be discussed, and audiovisual examples derived from these methods will be presented. Such a discussion will hopefully create new perspectives for people from both aural and visual disciplines.
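The contrast between single-parameter and multi-modal mapping can be made concrete with a toy example. This Python sketch is a hypothetical illustration, not Eldem’s proposed tool: it maps three musical parameters at once onto several visual parameters, loosely following the cross-modal correspondences cited above (higher pitch reads as higher and lighter, louder as larger, brighter timbre as a sharper shape).

```python
def visual_frame(midi_pitch, loudness, spectral_brightness):
    # Multi-modal mapping: several musical inputs drive several visual outputs
    # at once, instead of one-to-one "rhythm -> brightness" style translation.
    # midi_pitch in 0..127, loudness and spectral_brightness in 0..1.
    pitch_norm = midi_pitch / 127.0
    return {
        "y_position": pitch_norm,                       # pitch height -> elevation
        "lightness": 0.3 + 0.6 * pitch_norm,            # higher pitch -> lighter colour
        "size": 10 + 90 * loudness,                     # loudness -> scale
        "corners": 3 + round(9 * spectral_brightness),  # dull -> roundish, bright -> spiky
    }

# A quiet, dull low note versus a loud, bright high note:
calm = visual_frame(midi_pitch=45, loudness=0.2, spectral_brightness=0.1)
intense = visual_frame(midi_pitch=96, loudness=0.9, spectral_brightness=0.8)
```

A perception-based tool would replace these invented linear formulas with empirically measured correspondence curves, but the structure, many inputs feeding many outputs in parallel, is the point of the sketch.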
Synaesthetic Syntax – Algorithmic Conflation and Re-Configuration of Audiovisual Space and Movement in the Series of Experiments With Financial Data Audio-Visualisations as Immersive Artworks
This paper/presentation introduces the motivation, main concepts, and practical implementation results for a series of audio-visual experiments that map the time-sequenced change of multiple values (financial time-series data) into an immersive, sensorium-intensive, real-time ‘audio-visualization’ artwork. It is intended as an interactive VR/AR installation and audio-visual performance interface. Cognitive-neuroscience-led research into rhythm and movement in music predates the emergence of virtual reality technologies and has flourished over the last decades. Questions that originated within the domains of philosophy and music theory can now additionally be investigated using computing- and AI-assisted experimental approaches. The tacit knowledge exchange between academics, industry, and creators has carried concepts, tools, and algorithms from specialized, niche research labs and teams into new DIY and off-the-shelf tools for sonic and visual sequencing, sampling, real-time effect modification, and more, affecting contemporary academic and popular music and audio-visual creation. The project is motivated by an interest in investigating the phenomenology of the somewhat speculative concept of induced or ‘artificial synaesthesia’, whether as a consequence of hypermedia (Bolter, 1991) or of creating specific conditions in which ‘Adults Can Be Trained to Acquire Synesthetic Experiences’ (Bor, D., Rothen, N., Schwartzman, D. et al., 2015). It proposes a framework for spatially expressive ‘synthetic anisotropy instruments’ for the VR/AR staging of live data: a theoretical and practical investigation into the continuum between immersive analytics and VR/AR artwork.
The investigative aspects of the project are embedded in the emerging research area of immersive analytics, which can be considered a fusion of recent developments in visualization, auditory displays, computing, and machine learning; the field has developed in an ad-hoc way, and there have been recent efforts to elaborate its definition and propose an organizing framework for further research (Skarbez, Polys, Ogle, North, Bowman, 2019). The project draws an analogy to simulated anisotropy: an algorithmically simulated counterpart of the phenomenon known in physics, chemistry, microfabrication, and neuroscience as the direction-varying properties of materials, tissue, and space. In what follows, the terms ‘ensemble visualization’ and ‘ensemble data’ are used in the data-visualization sense of ‘concrete distributions of data, in which each outcome can be uniquely associated with a specific run or set of simulation parameters’ (Obermaier and Joy, 2014). Although there is a range of historical and more recent examples of 3D visualization and ‘auditory displays’ of financial or stock data intended to enhance professional trading interfaces, this project tries to bridge the extremes of unaesthetic usability and ‘sublime dysfunctionality’ within the aesthetic experience. Various time-zoom scales reveal the phases of past, historical trends and emphasize the position of the current trade execution as a spatially expressive metaphor. Spectators can activate (via touch screen) further levels of complexity, ranging from observing single-value to multiple-value relationships. The progressing or visitor-triggered mode shifts challenge the audio-visual sensorium: the experience of conflation within multiple reference systems plays with the visitor’s perceptual effort of ‘sense-making’. This is achieved by the juxtaposition of spatially organised visual and sonic cues (perspective, sharpness/blur manipulation), audio panning and timbral modulation, spatial effects, etc.
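As a concrete illustration of how a financial time series can be staged as sound, the following Python sketch (a generic example, not the artwork’s actual pipeline; the pitch range and pan scheme are invented for illustration) maps price values to MIDI pitch and playback position to stereo pan, so that a rising trend is heard as rising pitch travelling across the stereo field:

```python
def sonify_series(prices, low_midi=48, high_midi=84):
    # Normalise the series into a MIDI pitch range and pan left-to-right over
    # time: value -> pitch, position in the series -> stereo position.
    lo, hi = min(prices), max(prices)
    span = (hi - lo) or 1.0          # avoid division by zero for flat series
    events = []
    for i, p in enumerate(prices):
        pitch = low_midi + (high_midi - low_midi) * (p - lo) / span
        pan = -1.0 + 2.0 * i / max(len(prices) - 1, 1)   # -1 = left, +1 = right
        events.append((round(pitch), round(pan, 2)))
    return events

events = sonify_series([100, 101, 103, 102, 106])
```

Each `(pitch, pan)` tuple would then be sent to a synthesizer or spatial audio engine; a multi-value ‘ensemble’ version would run one such mapping per data stream with distinct timbres.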
João Pedro Oliveira
Synaesthetic Syntax – Gesture Interaction Between Sound and Image
This presentation analyzes several possibilities of interaction between image movement and sound from the perspective of gestural and textural relations. Departing from theorists such as Hatten, Smalley, Wishart, and Chion, who have analyzed the ideas of gesture and texture in music, we present several examples from the cinema repertoire (Hitchcock, Lawrence, Kubrick, Reggio, Tarkovsky), as well as some of the author’s experimental videos, in which specific gestures in the image interrelate with sound/music gestures in ways that can carry multiple meanings or go beyond the direct translation of one into the other. Gesture will be analyzed structurally, in relation to its energetic potential, emotion induction, movement, and meaning/metaphor. Other concepts analyzed by the authors mentioned above, such as causality, energy-motion-trajectory, articulation of a continuum, hierarchical levels, and disturbances, will also be applied to the sound-image relation.
Synaesthetic Syntax – A Hypothesis-Based Approach to Visual Synthesizer Design
“With Romanticism and the rise of instrumental music, composers developed a language that allowed them to connect directly with human emotion, one in which an idea and its expression were one. Music’s content is perceived entirely through its tonally moving forms.” (Herzog) All of the other arts, painting most of all, shared in an envy of music, so that in 1888 the critic Walter Pater famously wrote: “all art constantly aspires towards the condition of music.” Modernist art arose in part from this search for principles that would function something like the basso continuo in music (Herzog). For many artists, it was also driven by the search for a new visual aesthetic, something that had been in the air since Newton’s Opticks was published in 1704. By 1912, the poet and art critic Guillaume Apollinaire was predicting that modern art was about to become “an art that would be to painting…what music is to literature” (p. 197). And in his 1923 essay The Future of Painting, Willard Huntington Wright argued that modern art was less about painting than about light and movement, and more a performance art, like music. Many modern artists wrote in specific terms about how painting and music inform and complement each other. In this paper, I describe how I use insights from painters such as Paul Klee, Leopold Survage, Georgia O’Keeffe, and Karl Gerstner as hypotheses to guide the design of a visual synthesizer. Kandinsky, for example, proposed that amplitude in music relates to thickness of line in painting: “The pressure of the hand upon the bow corresponds perfectly to the pressure of the hand upon the pencil” (Kandinsky, p. 618). I apply such hypotheses to creating visual interpretations of musical passages. The resulting clips play a role much like that of the pencil tests produced by animators while conceiving and developing a film. Synthesizer inventor Robert Moog was clear about the challenges of instrument design.
“Music-making requires both the musician and the listener to function at the very limits of their perceptive and cognitive capabilities. Therefore a musical instrument has to be as effective as possible in translating the musician’s gestures into the sonic contours that he is envisioning. When he performs, the musician feels his instrument respond as he hears the sounds that it produces. In terms of modern information theory, the musician-instrument system contains a multiplicity of complex feedback loops.” (Pinch and Trocco, p. vi) This project engages a similar sensibility around the design of synthesizers for our eyes. Apollinaire, Guillaume. “On the Subject in Modern Painting” (1912), in Apollinaire on Art, L. C. Breunig (ed.), 1960, p. 197. Kandinsky, Wassily. “Point and Line to Plane” (1926), in Kandinsky: Complete Writings on Art, K. C. Lindsay and P. Vergo (eds.), 1994, p. 618. Pater, Walter. “The School of Giorgione,” Fortnightly Review, 1888. Pinch, Trevor, and Frank Trocco. Analog Days: The Invention and Impact of the Moog Synthesizer, 2002, p. vi. Wright, Willard Huntington. The Future of Painting, 1923.
Synaesthetic Syntax – Physical Presence and Material Desire: Eric Dyer’s Sculptural and Performative Animation Art Practice
Animation is still a young art form with many underexplored avenues of expression. Although the lay person’s view of animation is still predominantly as entertainment on screens, it can also be used for participatory sculpture, immersive kinetic environments, live performance, and other experiences of exploration and discovery. I draw on these expanded ideas of animation through works based on one of the keystones of animation: the zoetrope, or “wheel of life.” This paper focuses on two branches of my art practice: sculptural animation and performance animation. My initial forays into zoetrope-based art-making began as a rebellion against my then-usual process of making work at a computer screen. I needed to touch and feel the animation again, so when I discovered, around the turn of the millennium, that digital video cameras could replace zoetrope slits, I began to make films from spinning paper sculptures. Moving beyond process alone, materiality also became critical to my art’s final manifestation. With the help of engineers, I developed systems that synthesized modern microcomputers, sensors, and LEDs with older optical toys and pre-cinematic technologies, producing new animated sculptures and installations that demand very specific types of attention on the part of their “audience,” requiring direct participation in order to make the works “come to life.” The public’s wonder-filled reaction to the work and their participation with it seem related to my deep need to create it. I believe this is because the developed world has very recently experienced a dramatic shift: work, play, and socializing formerly involved our bodies in motion, our collected senses, and our physical presence, whereas today these activities can be, and often are, accomplished remotely, virtually, and with our nearly static selves, seated and/or staring at screens. Perhaps we are collectively feeling the loss of physicality and tactility, leaving us with a craving for human connections to our world.
And with fabricated realities so common in traditional media, witnessing material objects coming to life in a fantastic way reawakens our sense of wonder. I have also been experimenting with live performance, spinning zoetrope discs live, like a DJ spins records, with a camera fed to a projection instead of a needle and amp. The collective experience of performer and audience is another facet of human connection, another way to answer these cravings. This burgeoning practice mashes up elements of magic lantern storytelling, DJing, animation, and improvisation. I am currently collaborating with composer-musician Rudresh Mahanthappa, Director of Princeton University’s Jazz Program. Contemporary jazz performance weaves together intuitive interpersonal communication, artistic individuality, and creating in the moment; applying such parameters to my method of visual performance has been an exciting challenge and a welcome counterpoint to the often precise and calculated processes of making animated art. My work in these areas helps reveal the numerous artistic paths zoetrope art could take: an expansive territory of expressive potential stretches out in front of this tactile, participatory, mandala-like form of motion art. In this paper, I will present my processes, reflections, and completed sculptural and performative animated artworks.
Birgitta Hosea (SW/UK) is an artist, filmmaker, and researcher in expanded animation. Exhibitions include the Venice and Karachi Biennales; the Oaxaca and Chengdu Museums of Contemporary Art; InspiralLondon; and Hanmi Gallery, Seoul. She has a solo exhibition at ASIFAKeil, Vienna, in April 2020. Included in the archives of Tate Britain and the Centre d’Art Contemporain, Paris, she has been awarded […]
Juergen Hagler (AT) studied art education, experimental visual design, and cultural studies at the University for Art and Design in Linz, Austria. He currently works as a professor of Computer Animation and Animation Studies in the Digital Media department at the Hagenberg Campus of the University of Applied Sciences Upper Austria. Since 2014 he a […]
Rose Bond (CA/US) produces work at the juncture of expanded cinema, experimental animation, and experiential design. Her large-scale animated installations navigate the allegories of place and illuminate urban space. Recent work includes multi-screen projections for avant-garde composers Olivier Messiaen‘s Turangalîla-Symphonie and Luciano Berio’s Sinfonia. With roots in frame-by-frame, hand-drawn animation, she now focuses on public […]
Dr. Vicky Smith (UK) is an experimental animator. Her practice-driven Ph.D. pursued an inquiry into relations between the body, technology, and materials with regard to experimental film and modernist theory.
Alexander Stublic (DE) studied media theory, philosophy, and media art at the HfG Karlsruhe. With new techniques, 3D objects, and VR, he emphasizes the penetration of simulation into the real in his most recent work. In addition to exhibitions at the ZKM | Karlsruhe and the Kunsthalle Baden-Baden, he took part in international festivals such […]
Sama Mara (UK) is an award-winning Visual Artist and Geometer. Having completed his bachelor’s degree in Music and Visual Arts at Brighton University in 2004, he went on to study for his MA in Traditional Arts at the Prince’s School of Traditional Arts in London, 2010. His research has been guided by his two-decades-long inquiry […]
Giusy Caruso (BE) is an artist-researcher, musicologist, and professional concert pianist oriented towards a futuristic approach to music performance that connects art and science. Her research explores, on one side, novel forms of human-machine interaction, in particular the role of cutting-edge technology in the analysis of gesture and sound in piano playing […]
Umut Eldem (BE) took part in several interdisciplinary projects as part of his composition studies at the Royal Conservatoire of Antwerp, most notably Word-Composition, Dance-Composition, and Sketch the Sound. For these projects, the Composition Department collaborated with the Word Department, the Dance Department, and the Royal Academy of Fine Arts. Eldem has participated in the Kolla Festival and […]
Jānis Garančs: Initially trained in classical arts, Jānis Garančs (LV) works in technological and algorithmic art genres – interactive multimedia installations and performances, virtual reality, video, digital print, and mixed-material painting. His artworks are characterized by their inspiration from various concepts of contemporary theoretical physics and of communication and evolution theories. The works can be a mixture of dystopian, […]
João Pedro Oliveira (US) is a Professor at the Department of Music at the University of California, Santa Barbara.
Fred Collopy (US) is Professor Emeritus of Design and Innovation at Case Western Reserve University. He has published in the areas of visual instrument design, managing as designing, forecasting methodology, and information technology.
Eric Dyer: Artist and filmmaker Eric Dyer (US) brings animation into the physical world with his sequential sculptures and installations. His work has been widely exhibited at events and venues such as the Smithsonian National Gallery of Art, Ars Electronica, the London International Animation Festival, the screens of Times Square, and the Cairo and Venice Biennales. He […]