Robotic instruments that could be programmed to play music, respond to human musicians, and even improvise were a source of fascination for Steven Kemper during his graduate student days at the University of Virginia, where he studied music composition and computer technology.
To bring his machine-music vision to life, Kemper and colleagues Scott Barton and Troy Rogers founded Expressive Machines Musical Instruments (EMMI) and began designing the Poly-tangent Automatic (multi)Monochord, also known as “PAM.”
This stringed instrument’s pitches are controlled by tangents — the equivalent of fingers — each of which is driven by a solenoid. Messages are sent from a computer over USB to an Arduino microcontroller, which switches the solenoids on and off.
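That control chain — computer messages in, solenoid switching out — can be sketched in a few lines. The following Python simulation is purely illustrative: the two-byte message format (tangent index plus on/off state) and the tangent count are assumptions, not EMMI’s actual protocol.

```python
# Minimal simulation of the PAM control chain: a computer sends
# per-tangent on/off messages, and a controller flips solenoids.
# The message format (index byte + state byte) is an assumption
# for illustration, not EMMI's documented protocol.

NUM_TANGENTS = 8  # assumed tangent count

def make_message(tangent: int, on: bool) -> bytes:
    """Pack a tangent command into two bytes: index, state."""
    return bytes([tangent, 1 if on else 0])

class SolenoidController:
    """Stands in for the Arduino: decodes messages, switches solenoids."""
    def __init__(self, num_tangents: int):
        self.solenoids = [False] * num_tangents

    def handle(self, message: bytes) -> None:
        tangent, state = message[0], message[1]
        self.solenoids[tangent] = bool(state)

controller = SolenoidController(NUM_TANGENTS)
controller.handle(make_message(3, True))   # press tangent 3
controller.handle(make_message(3, False))  # release it
```

On the real instrument, the Arduino side would read these bytes from its serial port and drive the solenoids through transistor outputs; the sketch above only models the message-to-state logic.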
PAM also can receive data from musical and gestural input devices — such as a MIDI keyboard, joystick or mouse — or from environmental sensors, allowing it to improvise its own music based on the programmer’s parameters and instructions.
It might seem like a lot of work just to create music — something human musicians can do just fine on their own. For Kemper and other robotic music researchers, however, it’s not a matter of robotic instruments replacing the human variety. Rather, it’s about finding new ways to make music — and perhaps new forms of music as well.
“These instruments are not superior to human performers,” Kemper, now an assistant professor of music technology at Rutgers University, told TechNewsWorld. “They just provide some different possibilities.”
Since PAM, EMMI has created a variety of instruments, and each one has its own set of possibilities and strengths. All the instruments can be programmed to play in multiple genres and situations, and musicians have begun to incorporate them into performances and recordings.
“These instruments can improvise based on structures we determine or by listening to what performers are playing,” said Kemper. “We work with the free improv aesthetic and [our instruments] don’t fit into a particular musical genre. It’s improvising based on any decisions the performers make.”
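Improvising “based on structures we determine” can be as simple as a rule set that answers incoming pitches within a constrained pitch collection. The sketch below is a toy illustration of that idea — the pitch set, the nearest-note rule, and the 20 percent “surprise jump” are all invented for this example, not EMMI’s actual algorithms.

```python
import random

# Toy structure-driven improviser: listen to a performer's pitch
# (as a MIDI note number) and answer within an allowed pitch set.
# All rules and constants here are invented for illustration.

PITCH_SET = [60, 62, 64, 67, 69]  # assumed constraint: C major pentatonic

def respond(heard_pitch, rng=random):
    """Answer with the nearest allowed pitch, with an occasional
    random jump elsewhere in the set to keep the response lively."""
    nearest = min(PITCH_SET, key=lambda p: abs(p - heard_pitch))
    if rng.random() < 0.2:           # 20% of the time: surprise jump
        return rng.choice(PITCH_SET)
    return nearest

# Each heard note gets an in-set answer:
phrase = [respond(p) for p in [61, 65, 70]]
```

Swapping in a different pitch set or response rule changes the instrument’s “personality” without touching the hardware — which is roughly what Kemper means by deciding the structures while leaving the moment-to-moment choices open.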
Creating New Worlds of Music
Using robotic instruments and instrumentation, musician Chico MacMurtrie and his Amorphic Robot Works crew have envisioned entirely new models for performance and musicianship, such as the Robotic Opera, which MacMurtrie created in the early ’90s with computer scientists Rick Sayer and Phillip Robertson and composer Bruce Darby.
“Rick and Phillip were working in a multitasking language known as Formula Forth,” MacMurtrie told TechNewsWorld. “At this time, it meant a lot of hard coding. Bruce created a lot of the tunings for the musical machines and wrote the compositions for them. Phillip adapted the machines to Bruce’s compositions, and human musicians played the more complicated elements live on the musical machines.”
After the opera concluded, MacMurtrie became fascinated with the possibility of machines playing all by themselves, with minimal input from human musicians or programmers.
“We really started to concentrate on how to teach the machines to strike the drums and strum the strings,” explained MacMurtrie. “Some of the time the machines used a closed loop system to get more sophisticated things to happen with precision. However, the majority of the machines ran with simple on/off control, and the workload [involved] creating layer upon layer of sequences which overlapped and added to the complex nature of what the sound would become.”
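The layering MacMurtrie describes — simple on/off control made complex by stacking overlapping sequences — can be sketched as boolean patterns merged into a single trigger track. The patterns and step count below are invented examples, not his actual sequences.

```python
# Sketch of layered on/off sequencing: each layer is a short
# repeating on/off pattern, and layers are OR-ed together into
# one trigger track per machine. Patterns here are invented.

def layer(pattern, length):
    """Tile a short on/off pattern out to `length` steps."""
    return [pattern[i % len(pattern)] for i in range(length)]

def combine(*layers):
    """A step triggers the machine if any layer is 'on' there."""
    return [int(any(step)) for step in zip(*layers)]

steps = 8
strike_a = layer([1, 0, 0, 0], steps)  # strikes every 4th step
strike_b = layer([0, 0, 1], steps)     # strikes every 3rd step
track = combine(strike_a, strike_b)    # -> [1, 0, 1, 0, 1, 1, 0, 0]
```

Because the two patterns repeat at different lengths, their overlap drifts over time, which is one way a handful of simple on/off layers can add up to the “complex nature” of the resulting sound.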
In recent years, MacMurtrie’s work with robots has evolved and broadened to create an even richer artistic landscape. The Amorphic Landscape, for instance, is a large-scale robotic installation and performance.
“Central to the 20-meter-long Amorphic Landscape is an organic environment engineered to provide both a physical and narrative structure for hundreds of individual robots,” said MacMurtrie. “This environment, however, is more than a passive context for its robotic inhabitants. The landscape is, itself, a robotic form capable of movement and transformation.”
Another recent project, The Robotic Church, is composed of 35 computer-controlled 12- to 15-foot pneumatic sculptures forming a “Society of Machines” that explores the origin of communication through rhythm.
“While responding to computer language, they are anthropopathic in nature and channel air to activate their inner biology,” explained MacMurtrie. “The evolutionary path towards machines with more kinetic abilities has led to the creation of a Society of Machines with their own language and expression.”
The Future Is Now
Robotic instruments — and what designers, performers and audiences expect from them — are evolving and changing. No longer seen as competitors with human musicians, they are instead seen as an integral part of all kinds of music-making.
“There are many artists, musicians, and scientists and roboticists working in musical machines these days, and a wide range of things are being created,” said MacMurtrie. “I personally don’t feel they will ever be better than humans, because of the emotional aspect, because of the creative act. The piece will never sound exactly the same when played by a human. Machines will be able to create compositions we have never heard before, but ultimately the programmer will be there in the process.”
What it comes down to, perhaps, is the process of creating and being creative with whatever instruments or technologies are on hand. Making music, after all, has always been an interaction of humans with the technologies they create to facilitate their art.
“It’s a new way to make music,” Eric Singer, artistic director with the League of Electronic Musical Urban Robots, told TechNewsWorld.
“When a new technology comes along, it pushes the possibilities of what you can do,” he observed. “Robots are adding to that. They’re visually interesting, though the sound is the most important thing. They can be interactive. They can go places that traditional instruments might not be able to go.”