The MUSICA project I’m involved in includes a component focused on interactive jazz. In exploring approaches to algorithmic jazz, or “algo-jazz,” I started working with pattern-based generation for melodies and parts for other instruments, such as walking bass. Some of my early strategies are documented in my post about an Infinite Jazz Music Generator in Haskell. The pieces on this page were created as part of my exploration of those topics, and development of this series is ongoing. My use of the computer ranges from letting it be fairly autonomous in writing the score to being more of a compositional partner.
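The pattern-based walking bass idea can be sketched roughly like this. This is a hypothetical, dependency-free illustration in plain Haskell rather than the actual MUSICA/Euterpea code (all names here are my own, and the tiny random-number generator stands in for whatever randomness source the real programs use): root on beat one, a couple of chord tones in the middle, and a chromatic approach tone leading into the next bar's root.

```haskell
-- Hypothetical sketch of pattern-based walking bass generation.
-- Pitches are plain MIDI note numbers to keep the example self-contained.

type Seed = Int

-- A tiny linear congruential generator so the sketch has no dependencies.
nextRand :: Seed -> (Int, Seed)
nextRand s = let s' = (s * 1103515245 + 12345) `mod` 2147483648
             in (s' `mod` 1000, s')

-- One bar of walking bass in 4/4: root on beat 1, two chord tones
-- (3rd, 5th, or b7), then a chromatic approach into the next root.
walkBar :: Seed -> Int -> Int -> ([Int], Seed)
walkBar s root nextRoot =
  let (r1, s1)   = nextRand s
      (r2, s2)   = nextRand s1
      chordTones = [root + 4, root + 7, root + 10]
      t1         = chordTones !! (r1 `mod` 3)
      t2         = chordTones !! (r2 `mod` 3)
      approach   = if even r2 then nextRoot - 1 else nextRoot + 1
  in ([root, t1, t2, approach], s2)

-- Walk through a whole progression of chord roots.
walkingBass :: Seed -> [Int] -> [Int]
walkingBass _ []          = []
walkingBass _ [r]         = [r]
walkingBass s (r:r':rs)   =
  let (bar, s') = walkBar s r r'
  in bar ++ walkingBass s' (r':rs)
```

In Euterpea the resulting pitch list would then be turned into a `Music` value and written out as MIDI; here the output is just the list of notes.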
The original inspiration for this series was actually a Processing program for creating neon-looking algorithmic art. A family member commented that it looked like an avant-garde jazz album cover generator. The MUSICA project needed jazz algorithms and the art demanded music! If you want to hear how this series started, check out this video on YouTube. You can also take a look at a web-compatible version of the original Processing sketch.
Album on Bandcamp
You can get a collection of the tracks listed here as uncompressed WAV files on Bandcamp:
Compositions are listed in reverse chronological order.
Tonkotsu Ramen (July 2019): a slight modification of the band from “The Cube Squared” applied to a lead sheet of an old, human-made piece of mine called Ramen Noodles. Ending fade-out added in a DAW after exporting to MIDI.
The Cube Squared (July 2019): I wondered what “The Cube” would sound like as bossa nova. The computer answered my question. Tempo changes were added in a DAW.
The Cube (July 2019): experimenting with syncopation and jumps in the bassline. Composed in a text editor, and then the tracks were imported into a DAW to assign instruments. The title refers to the sculpture known as The Cube at Astor Place in NYC.
Hypnotize (April 2019): chill algo-jazz using Euterpea/Haskell and a new type system and formalism for the generative strategies used in my earlier algo-jazz pieces. There are no human-made notes in this, and there is no high-level arranging either. It’s a single, solid generation that can also continue indefinitely without ever repeating. It is the most hands-off piece in the series so far. Post-code human influence in this was the choice of random seed to kick it off (which was chosen pretty haphazardly) and some “production” sorts of choices: selecting virtual instruments, a bit of track-level volume shifting to match the instruments’ behavior, adding reverb, and fading the start/end. The visualization was made with Processing using a Java MIDI handler I wrote a couple years ago to catch events from the music and spawn shapes.
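The “pick a seed and let it run forever” idea maps naturally onto Haskell’s lazy lists. Here is a hypothetical sketch of that shape (the scale, names, and generator are my own illustration, not the piece’s actual code): an infinite stream of pitches determined entirely by the starting seed, so the only musical decision left to the human is which seed to use.

```haskell
-- Hypothetical sketch: an infinite, seed-determined melody as a lazy list.

type Seed = Int

-- Deterministic seed-to-seed step (a simple linear congruential generator).
step :: Seed -> Seed
step s = (s * 1103515245 + 12345) `mod` 2147483648

-- A small pitch set to draw from (MIDI note numbers, D Dorian-ish).
scale :: [Int]
scale = [62, 64, 65, 67, 69, 71, 72]

-- An infinite melody: lazily map each successive seed to a scale tone.
-- Consumers take as much as they need; the stream itself never ends.
infiniteMelody :: Seed -> [Int]
infiniteMelody s = map pick (iterate step (step s))
  where pick n = scale !! (n `mod` length scale)
```

Because the stream is pure and deterministic, the same seed always reproduces the same music, which is what makes a single “generation” reproducible while still being able to run indefinitely.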
Ugly Purse Dog (February 2019): algo-bebop generated with models using Euterpea/Haskell. Algorithmically generated parts for all instruments, including drums. Large sections were arranged at a high level in a DAW. Some small edits were made to the hi-hats where the drum part enters from and exits into silence, to smooth the transitions. The title is inspired by a certain type of small canine that I periodically see poking out of trains and subways in NYC.
Lobophyllia II (December 2018): an algo-jazz adaptation of an old piece of mine. This one is about 50% algorithmic. It uses output from a Python-based interactive bossa nova program I made that was given a lead sheet for the main progressions in the original Lobophyllia. Certain melodies and bass riffs were taken from the original piece and mingled with the algorithmic components. Unlike 33rd & 5th (further down on this page), this piece uses the algorithms as a composition tool rather than showcasing them in a more isolated sense.
Phyllorhiza (December 2018): algorithmically generated parts for everything except the drums. Some parts were printed out as sheet music and recorded in through a ROLI Seaboard Block. The algorithms used were implemented in Haskell/Euterpea.
33rd & 5th (September 2018): 100% algorithmic material from an infinite jazz generator implemented with Haskell/Euterpea. The parts were output to MIDI files (separately, one per part) and then placed in a DAW and assigned virtual instruments. There are no human-added or human-edited notes in this. It’s basically just one long algorithmic run where I decided when to bring in each instrument and added a delay line to one of them.