I grew up as an artsy+nerdy kid, singing in choir, playing in band, as comfortable with a soldering iron, fixing or hacking an old radio or electronic organ, as with chord progressions or improvising harmonies on the fly. In high school, I sang in every choir, played in every band, and did theater and speech. I also kept a keen eye on technology, especially music technology.
My original goal in going to conservatory in 1973 was to become a band/choir teacher, double-majoring in trombone and voice, with education and techniques courses for choir and band certification. But something fateful happened: I discovered that my music school had an electronic music and recording studio. Around that time, at the urging of my trombone teacher, I became a voice major. But what really happened is that I became a de facto major in a recording and electronic music program that my music school didn’t have (yet). I spent every available minute in those studios, also doing location recordings, editing tapes, soldering patch cords, and reading, reading, reading every book and journal I could find on audio, acoustics, recording, and electronic music.
I loved the studio work so much that in 1976, I ended up dropping out to become a sound engineer for about five years. I did lots of outdoor and indoor sound reinforcement gigs, some system designs, lots of building, installing, and repairing, and some studio work as well, both as an engineer and as a singer. All the while I was building up and working feverishly in my own home studio, collecting (or building) a good variety of synthesizers, recording gear, and effects devices. I made lots of music. But the more I worked as a sound engineer, the more I realized there was math and science I needed to know to be as creative, and valuable, as possible.
So I went back to school in 1981, this time in electrical engineering (EE), but also finished my music degree in the process. In pretty much every course I took in my EE program, I was asking myself how it applied to sound, acoustics, and music. I finished with honors, and even though I was now dual-degreed, I still knew there was much more that I didn’t know. So I applied to graduate schools, got into Stanford University, and found myself in the holy city (for nerds like me): the Center for Computer Research in Music and Acoustics, known as CCRMA (pronounced like "karma").
There, I got to work with brilliant people like John Chowning (the inventor of FM sound synthesis, and a pioneer of spatial sound and compositional uses of sound synthesis), DSP guru Julius O. Smith, Chris Chafe, Dexter Morrill, Max Mathews (the father of computer music), John Pierce (former Bell Labs Executive Director of Communications Sciences), and many, many others. I worked on physical modeling and new performance interfaces, created countless new software programs (thanks, NeXT Machine!) for all sorts of things, and researched and developed physics-based voice models for singing synthesis, which became the topic of my Ph.D. thesis.
CCRMA taught me so much about so many topics, but possibly the most important thing was that art, science, math, and engineering can (and should) be linked together. I observed that students who study this way learn differently, and better, and create amazing and novel things just as part of their coursework. Pretty much all of the curricular elements of CCRMA are STEAM (science, technology, engineering, arts, math) in nature: math, music, physics, psychoacoustics, engineering(s), and other technical/design/art areas are woven together tightly and constantly.
When I moved to Princeton University in 1996, I got to take over a course that Ken Steiglitz (EE/CS) and Paul Lansky (Music) had created, called "Transforming Reality Through Computer." It was really an applied DSP course, but with musical examples and projects. For quite a while I had been teaching a CCRMA short course every summer with Xavier Serra, called "Introduction to Spectral (Xavier) and Physical (Perry) Modeling." My ten lectures had grown into a fairly formal introduction, then a set of notes, and eventually book chapters; I added a couple of chapters on spectrum analysis and a couple more on applications, and it became the book Real Sound Synthesis for Interactive Applications. That book and course were my first "scratch-built" STEAM curriculum, cross-listed in CS, EE, and Music at Princeton. The focal topic of the book is sound effects synthesis for games, VR, movies, etc. That topic also earned me a National Science Foundation (NSF) CAREER grant.
At Princeton I also introduced a course called "Human Computer Interface Technology," developed jointly with Ben Knapp and Dick Duda at San Jose State University (they got an NSF grant for this), Chris Chafe and Bill Verplank at CCRMA, and other faculty at the University of California, Davis, and the Naval Postgraduate School in Monterey. The emphasis at Stanford and Princeton was on creating NIMEs (New Interfaces for Musical Expression): putting sensors on anything and everything to make new expressive sound and music controllers. Another STEAM course was born.
I had so many wonderful STEAM students at both CCRMA and Princeton, but I’ll just name a couple. One undergraduate student, Ajay Kapur, took my courses at Princeton, and my life has never been the same. Ajay went on to get his Ph.D. from the University of Victoria with my former grad student George Tzanetakis, then talked me into teaching some at CalArts, joining his Machine Robot Orchestra, and co-founding Kadenze (more on that later). One of my Princeton graduate students, Ge Wang, is now faculty at CCRMA, and co-founded SMule, arguably the most successful participatory music app in the world, with hundreds of millions of users making music solo or in groups. SMule co-founders Jeff Smith and Ge talked me into joining the company as an advisor and consultant, which I still do today. The important thing about these stories is that if not for STEAM, these companies could not have been dreamed up, nor could they have found and hired the new employees they needed to grow.
I continued to weave musical and artistic examples into all of my teaching and student advising. The next major new STEAM curriculum creation was the Princeton Laptop Orchestra, founded in 2005 by Dan Trueman (a former grad student who then joined the music faculty at Princeton) and myself. This course combined art, programming, live performance (some of it live coding in front of an audience!), engineering, listening, recording and studio techniques, and much more. Dan and I begged and cajoled around the Princeton campus to get money to get it off the ground, getting funds from Music, CS, the Dean of Engineering, the Freshman Seminar Fund, the Sophomore Experience Fund, and other sources to put together an ensemble of 15 "instruments" consisting of a laptop, a 6-channel hemispherical speaker, amps, and controllers. Result? BIG Success. As just one example of hundreds, here is a quote from a PLOrk member, a female undergraduate music major, a cellist who had never programmed before:
"However, when everything worked the way it was supposed to, when my spontaneous arrangement of computer lingo transformed into a musical composition, it was a truly amazing experience. The ability to control duration and pitch with loops, integers, and frequency notation sent me on a serious power trip… This is so much better than memorizing French verbs."
Within a year or so, we had applied for and won a $250,000 MacArthur Digital Learning Initiative grant, allowing PLOrk to build custom six-channel speakers with integrated amps; buy more laptops, controllers, and hardware; and grow to 45 total seats in the orchestra. We also toured, played Carnegie Hall, hosted and worked with world-famous guest artists, and inspired a horde of new laptop orchestras (LORks) around the world. Dan also worked on modifying the Princeton undergrad music curriculum to incorporate PLOrk courses, and I worked to ensure that some of the PLOrk course sequence would count for Princeton CS and Engineering credit.
For his Ph.D. thesis in Computer Science at Princeton, Ge Wang created a new programming language called ChucK. It was designed from the ground up to be real-time, music/audio-centric, and super-friendly to inputs from external devices ranging from trackpads and tilt sensors to joysticks and music keyboards. ChucK was the native teaching language of PLOrk, then of SLOrk (the Stanford Laptop Orchestra, formed by Wang when he became a CCRMA faculty member), and of many other LORks. It was, and still is, used for teaching beginning programming in a number of art schools and other contexts.
A few years ago, Ajay Kapur and I won an NSF grant for "A Computer Science Curriculum for Arts Majors" at the California Institute of the Arts. We crafted the curriculum and taught it with careful assessments to make sure the art students were really learning the CS concepts. We iterated on the course, and it became a book (by Ajay, me, Spencer Salazar (another Princeton undergrad, now Ge’s grad student at CCRMA), and Ge). The course also became a massive open online course (MOOC) whose first offering garnered over 40,000 enrolled students.
Now to Kadenze, the company that Ajay, I, and others co-founded and launched a year ago. Kadenze’s focus is to bring arts and creative-technology education to the world by assembling the best teachers, topics, and schools online. My Real Sound Synthesis topic is a Kadenze course offered by Stanford. The CalArts ChucK course is there, as are courses on music tools, other programming languages, and even machine learning, all created for artists and nerds who want to use technology to be creative.
The genesis of Kadenze is absolutely STEAM. Artists need to know technical concepts. They need to program, build, solder, design, test, and use technology in their art-making. Engineers and scientists can benefit greatly from knowing more about art and design. Cross-fertilizing the two is good, but it’s my feeling that having both in one body is the best of all. Not all students need to earn multiple degrees as I did (one in music, one or more in EE), but everyone I mentioned in this short "STEAM teaching autobiography" is considered both an artist and a scientist by those around them. They do concerts and/or create multimedia artworks. They research and publish papers. They create both technology-based works of art and artistic works of code, design, and technology. The "Renaissance Person" can and should be. We need many, many more.
Specialization is necessary to garner expertise, but striving to become a skilled multidisciplinary generalist creates a whole person who can create, cope, build, refine, test, and put ideas into practice. Plus, such people can explain difficult concepts to novices and carry the magic of combining art and technology to others. In other words, they are good teachers, too.
That’s been my goal in life, and I think I’m succeeding (so far).
ACM Fellow Perry R. Cook is Professor (Emeritus) of Computer Science, with a joint appointment in Music, at Princeton University. He also serves as Research Coordinator and IP Strategist for SMule, and is co-founder and executive vice president of Kadenze, an online arts/technology education startup.