Music ecosystem feedback loop

Two ecosystems, each feeding the other:

  1. An ecosystem of generative/adaptive melodic elements.  
  2. An ecosystem of apps to compose/listen/mutate those elements.

The first ecosystem is one of musical material, where a human-guided algorithm spawns mutations and hybrids of existing melodies, which become elements of new pieces and themselves become inputs to the same algorithm.

Natural selection occurs as preferred melodies have more musical offspring (variations) over time, and musical family trees arise that trace the genealogy of parts within new pieces. Musical elements "escape the lab" into the wild, recombining their generative seeds with other such elements in entirely unforeseen musical settings.
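
To make the loop concrete, here is a minimal Swift sketch of mutate/hybridize/select, with human preference standing in for fitness. The Melody type, its operators, and the genealogy bookkeeping are illustrative placeholders, not Coord's actual code:

    import Foundation

    // Illustrative only: a toy melody that remembers its parents.
    struct Melody {
        var pitches: [Int]        // MIDI note numbers
        var parents: [UUID]       // genealogy: IDs of the melodies it derives from
        let id = UUID()
    }

    // Mutation: nudge one randomly chosen note by a small interval.
    func mutate(_ m: Melody, maxStep: Int = 2) -> Melody {
        var pitches = m.pitches
        guard !pitches.isEmpty else { return m }    // nothing to vary
        let i = Int.random(in: 0..<pitches.count)
        pitches[i] += Int.random(in: -maxStep...maxStep)
        return Melody(pitches: pitches, parents: [m.id])
    }

    // Hybrid: one-point crossover between two melodies.
    func hybridize(_ a: Melody, _ b: Melody) -> Melody {
        let cut = min(a.pitches.count, b.pitches.count) / 2
        return Melody(pitches: Array(a.pitches.prefix(cut)) + Array(b.pitches.dropFirst(cut)),
                      parents: [a.id, b.id])
    }

    // "Natural selection" stays in human hands: the melodies a user keeps
    // become the inputs to the next round of variations.
    func nextGeneration(keeping preferred: [Melody]) -> [Melody] {
        var offspring = preferred.map { mutate($0) }
        if preferred.count >= 2 {
            offspring.append(hybridize(preferred[0], preferred[1]))
        }
        return preferred + offspring
    }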

The second ecosystem is one of software components. Music composition/production apps enable creation of adaptive melodies, and generative analysis of existing ones. Those elements become moving parts within larger musical pieces, where they undergo further mutation in adaptive/interactive listening apps. These in turn provide new inputs to the first ecosystem (of musical elements). 

Currently this takes the form of a desktop app that controls parts within Ableton Live. The app is written in Swift (on macOS) and communicates with Live via OSC (Open Sound Control) and Max-for-Live. Pieces created in this form are ready for adaptive playback by other software, such as location-based apps, data auralization, fitness apps, mobile entertainment, and game engines. In each case the musical output is affected by user actions or by parameters that control the relative influence of musical inputs as the larger piece unfolds in time.
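
As a rough sketch of that OSC plumbing (hand-encoding one float message and sending it over UDP with Apple's Network framework), the snippet below is illustrative only: the address pattern "/coord/morph" and port 9000 are invented placeholders, not Coord's actual message scheme.

    import Foundation
    import Network

    // OSC strings are null-terminated and padded to a 4-byte boundary.
    func oscPadded(_ bytes: [UInt8]) -> [UInt8] {
        var out = bytes + [0]
        while out.count % 4 != 0 { out.append(0) }
        return out
    }

    // Build a single-float OSC message: address, type tag ",f", big-endian float.
    func oscMessage(address: String, value: Float) -> Data {
        var packet = oscPadded(Array(address.utf8))
        packet += oscPadded(Array(",f".utf8))
        packet += withUnsafeBytes(of: value.bitPattern.bigEndian) { Array($0) }
        return Data(packet)
    }

    // Send one control value to a Max-for-Live device listening on UDP port 9000.
    let connection = NWConnection(host: "127.0.0.1", port: 9000, using: .udp)
    connection.start(queue: .main)
    connection.send(content: oscMessage(address: "/coord/morph", value: 0.75),
                    completion: .contentProcessed { error in
                        if let error = error { print("OSC send failed: \(error)") }
                    })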

The two ecosystems drive each other: content creation and music consumption become part of a continuum in which music evolves not just in the hands of trained composers but also through the actions of users and the musical results they navigate. For composers the attraction is music that has a life of its own, spawning other music as it encounters new applications and influences. For listeners the attraction is music that adapts to personal situations and never wears out from repeated hearings.

Human-guided, computer-aided music variations

Coord assumes that the most compelling music is (and will be) created by humans, not computers. So it aims to leverage and animate a composer’s output, rather than replace that composer.

    Ableton Live MIDI clips placed onto Coord's GUI to create a landscape of morphing melodic variations.

This approach selectively injects “moving parts” into a larger piece of music that is otherwise left undisturbed. Here the bass and lead parts are varied in real time by the user, as a form of interactive improvisation. The software does not evaluate the output; it is entirely up to the user to steer the algorithm and to home in on preferred results. 
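
A simplified sketch of that kind of steering (not Coord's actual morphing algorithm): treat the user's control as a morph value between two aligned variations of a part and let it decide, note by note, which variation is heard. The NoteEvent type and steer function below are illustrative stand-ins:

    // Illustrative stand-ins, not Coord's types.
    struct NoteEvent {
        var beat: Double      // position within the loop, in beats
        var pitch: Int        // MIDI note number
        var velocity: Int
    }

    // morph = 0 plays variation A untouched, 1 plays variation B; values in
    // between pick each note probabilistically, so the hand-off is gradual
    // and the user decides by ear which settings to keep.
    func steer(_ a: [NoteEvent], _ b: [NoteEvent], morph: Double) -> [NoteEvent] {
        precondition(a.count == b.count, "variations must align note-for-note")
        return zip(a, b).map { na, nb in
            Double.random(in: 0..<1) < morph ? nb : na
        }
    }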

Theory paper published

Very happy that the Journal of Mathematics and Music has published a peer-reviewed paper on the theory underlying the algorithmic approach used by the software described on this site.

The paper describes a self-similar map of rhythmic coherence and syncopation, across all nested combinations of:

  • repetition of anticipation
  • anticipation of repetition

This map collapses a seemingly large number of branching possibilities onto a concise set, a fractal-like space of interrelated (non-fractal) rhythms.  Anticipation figures, and repetitions of those figures, are mapped onto the odd binomial coefficients as arranged on Pascal’s triangle, and from there onto the Sierpinski gasket.
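
As a quick illustration of the Pascal's-triangle side of that correspondence (just the well-known parity pattern, not the paper's mapping of anticipation figures): C(n, k) is odd exactly when the binary digits of k are a subset of those of n, so marking the odd entries draws the Sierpinski gasket.

    // Print the parity of Pascal's triangle: odd entries form the Sierpinski gasket.
    // By Lucas's theorem, C(n, k) is odd iff (k & n) == k.
    let rows = 16
    for n in 0..<rows {
        let line = (0...n).map { k in (k & n) == k ? "*" : " " }.joined(separator: " ")
        print(String(repeating: " ", count: rows - n) + line)
    }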

With that, one can enumerate/characterize an axis of rhythmic structure, forming a new starting point for algorithmic analysis/composition.

The article is here.

Many thanks to the journal's Editors-in-Chief Thomas Fiore and Clifton Callender for their guidance and engagement, and to Dr. Callender for proposing the above depiction of the rhythmic components.

Application ported to Swift

The Coord app and its underlying code libraries have been ported to Apple's programming language Swift. Previously the code was in the closely related language Objective-C, which is in turn closely related to Smalltalk.

Objective-C was the programming language used on the NeXT Computer, which is what originally lured me into software development for purposes of making algorithmic music. Smalltalk (VisualWorks, Squeak) has been the dev platform at several of my jobs. Swift (along with some Objective-C) is what I use at my current position in Zurich. Coord was prototyped in Smalltalk before its initial implementation in Objective-C.

Quite a bit of refactoring and redesign happened along with the port, and I expect this will shorten the time and effort required to build a number of related proof-of-concept apps:

  1. Desktop music composition (DAW companion app)
  2. Location-based playback app that uses the adaptive music created by the desktop app
  3. Re-development of Chromat, a color-based music authoring app for iPhone
  4. Multi-user apps where collective actions drive generative music output

(I'm often asked if/when there will be a Windows version of Coord. Because Swift/Obj-C is the most productive platform for me, I'm locked into the Mac and iPhone as deployment platforms. Until/unless there is interest enough to move beyond proof-of-concept demos, it's unlikely I can allocate time for a Java or C++ port. If a partnership ever arises that brings additional dev resources, the code is already structured in such a way that a Windows port should be very straightforward.)