I believe that programming languages and artificial intelligence algorithms have a tremendous capacity to augment human creativity, and I explore this through music. Although I also sometimes compose music without the involvement of a machine, much of my current musical work has focused on using the computer as a partner in the composition process rather than simply as a tool for more standard music production. I have also recently been experimenting with sound manipulation and visualizations.
Algorithmic Compositions using Kulitta
Kulitta is a framework for algorithmic and automated composition that I developed as the subject of my dissertation at Yale University. It (or “she”) is the main subject of my ongoing work in music and artificial intelligence. Kulitta uses a combination of generative grammars and geometric models from music theory to break down complex composition tasks into an iterative process. GitHub repository: https://github.com/donya/KulittaCompositions
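The grammar-driven part of this process can be sketched in miniature. The following is a toy Haskell example of iterative rewriting, not Kulitta's actual rule set: Kulitta's real grammars are probabilistic and parameterized, whereas here a phrase deterministically expands into chord symbols.

```haskell
-- A minimal sketch of grammar-based generation with a toy rule set
-- (NOT Kulitta's actual grammar): nonterminals expand step by step
-- until only chord symbols remain.
data Sym = Phrase | T | S | D | Chord String
  deriving (Eq, Show)

-- One rewrite step. These rules are illustrative placeholders.
rewrite :: Sym -> [Sym]
rewrite Phrase      = [T, S, D, T]   -- tonic, subdominant, dominant, tonic
rewrite T           = [Chord "I"]
rewrite S           = [Chord "IV"]
rewrite D           = [Chord "V"]
rewrite c@(Chord _) = [c]            -- terminals are left unchanged

-- Apply rewriting until the sequence stops changing (a fixed point).
generate :: [Sym] -> [Sym]
generate syms =
  let next = concatMap rewrite syms
  in if next == syms then syms else generate next
```

Here `generate [Phrase]` terminates with the chord sequence I–IV–V–I; in Kulitta, analogous abstract sequences are then realized as concrete pitches and voicings by later stages.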
Etude by Kulitta (PDF score, listen on SoundCloud) – composed by Kulitta and performed by me on piano. This piece illustrates Kulitta’s capacity for handling performance constraints as well as its stylistic blending capabilities, mixing models and rules for classical music (some of which are derived from Bach chorales) with models for jazz harmony.
Vesicularia (watch on YouTube, listen on SoundCloud) – created by playing phrases generated by Kulitta through analog and digital synthesizers with real-time manipulation of the synthesizer parameters. This piece uses two kinds of generative grammars in Kulitta: one for melodic motion (heard in the first part of the piece), and another for modeling harmony. Vesicularia was performed at Electronic Music Midwest 2016 at Lewis University. Visuals were created algorithmically.
Tourmaline (watch on YouTube, listen on SoundCloud) – a three-movement work created with Kulitta and with digital synthesizers. Each movement utilizes different features of Kulitta. This piece was part of the Paul Hudak Memorial Symposium Listening Room at Yale University in April 2016. Visuals were generated algorithmically.
Creative Coding with Processing
Since joining the Center for Creative Computation at SMU, I have been using Processing to teach students how to program. I have also created audio/visual programs in Processing that have been showcased on a four-screen display outside the Center. GitHub repository: https://github.com/donya/CreativeCoding
Pentatonic Piano Rectangles (watch on YouTube) – a screen capture of a program of mine that was on display in October outside the Center for Creative Computation at Southern Methodist University. Three agents wander around the screen, choosing combinations of musical rectangles to activate, each of which produces a particular pitch. The screen periodically rearranges itself into new shapes. This project makes use of a MIDI-handling library I have been developing for Processing.
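To give a sense of the rectangle-to-pitch idea, here is a small hypothetical sketch of one way an object's index could map into a pentatonic scale. The scale choice, octave wrapping, and names are my own illustration, not the actual program's code (which is in Processing).

```haskell
-- C major pentatonic scale, as semitone offsets within an octave.
-- This particular scale and mapping are illustrative assumptions.
pentatonic :: [Int]
pentatonic = [0, 2, 4, 7, 9]

-- Map a rectangle's index to a MIDI note number, wrapping through
-- octaves so any number of rectangles stays within the scale.
rectPitch :: Int -> Int
rectPitch i =
  let (octave, degree) = i `divMod` length pentatonic
  in 60 + 12 * octave + pentatonic !! degree   -- 60 = middle C
```

Because every index lands on a pentatonic degree, any combination of rectangles the agents activate remains consonant, which is what makes a wandering-agent design musically forgiving.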
Abstract Sound I (watch on YouTube) – stochastically generated visual patterns appear and trigger sounds created from manipulated recorded audio. Some of the sounds are very strong and clear, while others are more subtle with microtonal differences, which leads to interesting sonic interactions.
In the last several years, my original musical work outside of Kulitta has focused on sound manipulation and the creation of gradually evolving sonic scenes. I have also previously worked on more traditional-style scores for acoustic and digital instruments. The following piece illustrates a combination of these approaches.
Fantasy for Bottles (watch on YouTube, listen on SoundCloud) – written for a collection of glass bottle virtual instruments I developed using the Euterpea library for Haskell. I wrote the note-level score for this manually and then rendered it to audio using the Euterpea-based virtual instruments. Visuals were created algorithmically.
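A note-level score of this kind can be thought of as structured data before it is rendered to audio. The following is a self-contained toy representation in that spirit, with pitches and durations as plain values; it mirrors Euterpea's idea of notes with durations but deliberately avoids reproducing Euterpea's actual API, and the example melody is arbitrary.

```haskell
-- A note-level score as (pitch, duration) pairs: MIDI pitch number and
-- duration in whole notes. A toy stand-in, not Euterpea's Music type.
type Dur  = Rational
type Note = (Int, Dur)

-- An arbitrary three-note fragment: C, E, G.
score :: [Note]
score = [(60, 1/4), (64, 1/4), (67, 1/2)]

-- Total duration of a melodic line, by summing note durations.
totalDur :: [Note] -> Dur
totalDur = sum . map snd
```

Working at this level is what makes hand-written scores and rendering interchangeable: the same note list can be auditioned, transformed, or sent to a virtual instrument for synthesis.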