So much of the collaborative work between Laurie and me has involved some element of media that it seemed natural for us to create a performance piece that was fairly media-rich and tech-heavy. I have a running love-hate relationship with technology. The “hate” part arises mostly because I am often operating the devices during the presentation, and I’m always a bit nervous that something will come unplugged, the software will freeze, a breaker will be thrown, and so on. If I’m also performing, it becomes even more fraught with anxiety. Yet I continue to add these elements to my work. With each new piece of software I learn and each new MIDI controller I buy, I dig myself deeper into this type of work.
We have shared the stage on numerous occasions, and we’ve worked together on other people’s projects. But the collaborative productions of our own, initiated by one or the other of us, have all made use of these digital embellishments: Women are Water (2012), Hetaerae (2013), Tales of Lost Southtown (2014), Serpientes y Escaleras (2015), and what I’m calling the Invoc8 Trilogy (2016).
Invocation continues along this technology trajectory, and I thought I’d add a bit about those elements. I should preface all of this by mentioning that even though I’ve been engaged in real-time video projection for about six years, I’m completely self-taught. My experience comes almost exclusively from using my own equipment and the handful of programs I have purchased over the years. Trial and error, and the intense classroom of just going out there and doing it in front of an audience: this is a wonderful way to learn. (I would like to add that there are some amazingly talented people in this town doing this sort of work on a much larger scale, using devices I probably wouldn’t recognize. San Antonio is a high-tier convention town, which means the A/V presentation industry is quite healthy here, and those folks are the real pros.)
The performance dates for Invocation haven’t been set. But we know we have at least three months to fine-tune things, so all this might be slightly modified.
What the audience will see are the two of us, seated. In front of each of us will be a music stand and a microphone. On a table between us are two fishbowls, as well as a MIDI controller. Projected images will be displayed above us, and flanking us will be speakers (to amplify our voices as well as various audio elements). When the performance begins, we will take turns randomly picking a number from a fishbowl for the other. That number is associated with a short piece of prose the other will read. It’s also associated with a numbered button on the MIDI controller which, when pressed, projects an animated image above us and triggers a looped audio clip. Once the short piece has been read, we switch roles and another number is chosen. A different button is pressed, a new image appears, and a new audio clip begins playing while the previous one continues. And so on, until a dense soundscape is running and building to accompany the short stories.
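For the technically curious, the cue logic is simple enough to sketch in a few lines of Python. This is only an illustration of the structure; the eight numbered slips and the clip labels are placeholders, not the actual material from Invocation:

```python
# A minimal sketch of the fishbowl cue logic: each drawn number maps to
# one button, which triggers an image clip and adds an audio loop that
# keeps playing for the rest of the piece.
import random

PIECES = list(range(1, 9))   # numbered slips in the fishbowl (assumed count)
active_loops = []            # audio clips accumulate once triggered

fishbowl = PIECES[:]
random.shuffle(fishbowl)

for turn, number in enumerate(fishbowl, start=1):
    # Pressing button `number` on the controller would fire
    # image clip N and audio loop N together.
    active_loops.append(f"loop {number}")
    print(f"Turn {turn}: read piece {number}; projected image {number}; "
          f"now looping: {', '.join(active_loops)}")
```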
In an attempt to keep things as simple as possible, we’re using only one computer, a MacBook Pro. Our USB audio interface handles both the microphone inputs and the master audio outputs feeding our PA. The video setup is simple: a Thunderbolt-to-VGA dongle sends the visuals from the computer to our 3000-lumen projector. The MIDI controller (we’re using the APC mini, because of the handy sliders) is linked to both Ableton Live (which runs our sound clips and processes our microphone signals) and Resolume Arena (which processes the images and allows a certain degree of mapping to the projection surface).
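On the Mac, both programs can listen to the same MIDI hardware at once, which is what makes the single-controller setup possible. As a hedged sketch, here’s one way to eavesdrop on what the grid buttons send, using the mido library (pip install mido python-rtmidi); the port name is an assumption and will vary by machine:

```python
# Inspect the APC mini's button messages. Ableton and Resolume each map
# the same note number to their own clip slot, so one press fires both.
import mido

APC_PORT = "APC MINI"  # placeholder; check mido.get_input_names() on your machine

with mido.open_input(APC_PORT) as inport:
    for msg in inport:
        # The grid buttons arrive as note_on messages with nonzero velocity.
        if msg.type == "note_on" and msg.velocity > 0:
            print(f"Button {msg.note} pressed -> Ableton clip + Resolume clip")
```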
Even though the APC mini’s sliders are linked to the volume controls of several of the audio channels in Ableton (allowing for slight adjustments during the performance), the plan is really that everything, images and audio alike, is automated in such a way that all we will be doing is pressing a single button each time we draw a number from the fishbowl.
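Under the hood, those sliders arrive as MIDI control change messages, which Live’s MIDI-map mode binds to track volume. Here’s a small illustrative sketch of the scaling involved; the CC range is the commonly reported one for the APC mini’s faders, so treat it as an assumption and verify against your own unit:

```python
# Illustrative only: Live's MIDI-map mode does this binding internally.
# CC numbers 48-56 are the commonly reported fader range on the APC mini.
FADER_CCS = range(48, 57)

def cc_to_gain(cc: int, value: int):
    """Scale a 0-127 fader value to a 0.0-1.0 linear gain."""
    if cc not in FADER_CCS:
        return None          # not one of the faders; ignore
    return value / 127.0

# A fader nudged to about three-quarters of its travel:
print(cc_to_gain(48, 96))    # -> roughly 0.76
```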
One of the nice things about Ableton is a setting that allows a looped sound’s volume automation to keep running as the loop repeats. A new piece of sound can therefore begin loud and assertive, then drop in volume and merge with the general audio bed as the performer begins to read.
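Conceptually, this is just a volume envelope that runs longer than the audio loop it controls. Here’s a rough numpy illustration, with made-up numbers, of a one-second loop repeating under a longer fade-to-bed envelope:

```python
# A short loop repeats while a longer envelope fades it from assertive
# to bed level. Pure numpy, no audio I/O; all values are illustrative.
import numpy as np

SR = 44100
loop = 0.5 * np.sin(2 * np.pi * 220 * np.arange(SR) / SR)  # 1 s placeholder loop

REPEATS = 8
repeated = np.tile(loop, REPEATS)

# Envelope spans all repeats: full volume at the start, settling at 25%
# (roughly -12 dB) by the third pass, then holding there.
env = np.interp(
    np.arange(repeated.size),
    [0, 3 * loop.size, repeated.size - 1],
    [1.0, 0.25, 0.25],
)
bedded = repeated * env
print(f"peak of first pass: {np.abs(bedded[:loop.size]).max():.2f}, "
      f"peak of last pass: {np.abs(bedded[-loop.size:]).max():.2f}")
```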
Probably most people would think that much, if not all, of this could be handled by a tech crew. But part of the fun is learning how to make all this happen. And adding to the growing cache of very specific gear.
GEAR LIST.
Editing and performance:
MacBook Pro, Retina 13-inch, 3 GHz, 16 GB
Resolume Arena 5.0.1
Ableton Live 9 Suite
Focusrite Scarlett 2i2
Akai APC mini
NEC NP500 3000-lumen LCD projector
Behringer Europort PPA500BT PA system
Pair of Shure SM48 microphones
For field recording:
Zoom H4N
Pair of MXL 603S microphones
Pair of Greenten piezo contact microphones
Sony MDR-V6 headphones
Images acquired via various cameras.