Jazzy Beach Critters

Last modified: Jan 16, 2020 @ 4:44 pm

Jazzy Beach Critters is a proof-of-concept demonstration of functional models for real-time music generation applied to a game scene. The demo uses two kinds of sound: sound effects triggered by critter and user actions, and generated music. Each critter represents a musical part (solo, harmony, or bass) and emits notes corresponding to the notes it plays. The music produced by the group changes as the critters' moods change, but it does so while preserving harmonic and metrical coherence. The user can interact by petting a critter (making it happier), poking a critter (making it angrier), dropping food for the critters to eat, and clicking on the beach ball to change the overall style. Happy critters play in key with each other, while angry critters become chromatic/atonal.
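As a concrete illustration of the happy/angry distinction, the following Haskell sketch chooses a pitch based on a critter's mood: happy critters are constrained to the current key, while angry critters may select any chromatic pitch. This is a minimal sketch only; the Mood type, the major-scale assumption, and the function names are hypothetical and not taken from the project's actual code.

```haskell
import System.Random (randomRIO)

-- Hypothetical mood type; the real project may model moods differently.
data Mood = Happy | Angry deriving (Eq, Show)

type Pitch = Int  -- MIDI note number

-- Pitch classes of a major scale built on the given root pitch class.
majorScale :: Int -> [Int]
majorScale root = map (\step -> (root + step) `mod` 12) [0,2,4,5,7,9,11]

-- Choose one pitch in a fixed register. Happy critters resample until
-- the pitch lies in the key; angry critters accept any chromatic pitch.
choosePitch :: Mood -> Int -> IO Pitch
choosePitch mood keyRoot = do
  p <- randomRIO (48, 72)
  case mood of
    Angry -> return p
    Happy
      | (p `mod` 12) `elem` majorScale keyRoot -> return p
      | otherwise -> choosePitch mood keyRoot
```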

Jazzy Beach Critters was a collaboration with Christopher N. Burrows, who designed the game scene in Unity and C#. The generative algorithms are implemented in Haskell. Music is generated at the level of individual notes, which trigger single-note samples within the Unity framework. All of the music is generated stochastically, and all sound synthesis takes place within the game in real time. Unlike many other procedural soundtrack implementations for games, this scene contains no pre-recorded musical passages.
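Note-level generation of this kind can be organized around a small event type: each stochastic decision yields one note event, which the game engine renders by triggering the corresponding single-note sample. The sketch below is an assumed interface for illustration only; the NoteEvent fields, pitch ranges, and generateNote function are not the project's actual API.

```haskell
import System.Random (randomRIO)

data Part = Solo | Harmony | Bass deriving (Eq, Show)

-- One generated note; the game engine would render it by triggering
-- a single-note sample at the given pitch and loudness.
data NoteEvent = NoteEvent
  { part     :: Part
  , pitch    :: Int     -- MIDI note number
  , velocity :: Int     -- loudness, 0..127
  , duration :: Double  -- length in beats
  } deriving (Show)

-- Rough registers for each part (assumed values).
registerOf :: Part -> (Int, Int)
registerOf Solo    = (60, 84)
registerOf Harmony = (52, 72)
registerOf Bass    = (36, 52)

-- Stochastically generate one note for a part. Durations are whole
-- multiples of an eighth note so events stay on a metrical grid.
generateNote :: Part -> IO NoteEvent
generateNote p = do
  pit     <- randomRIO (registerOf p)
  vel     <- randomRIO (60, 110)
  eighths <- randomRIO (1, 4 :: Int)
  return (NoteEvent p pit vel (fromIntegral eighths * 0.5))
```

A stream of such events could then be serialized and sent to the game (for example, over a local socket), so that every sound heard in the scene comes from individual note decisions rather than pre-recorded passages.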

Generative Models for Music

The models for generative music are derived from my existing work on generative jazz and other interactive music. The same models have been used to produce a number of stand-alone compositions and to create interactive systems for live performance. Because these models have already driven real-time interactive systems in which the machine responds to a human musician's melodies, a scene like Jazzy Beach Critters could be extended to let the user interact more deeply with the soundtrack. For more information on these generative models for music, see the following: