Music is what happens when a listener makes emotional and structural sense of sound patterns, and vice versa in the case of a musician. It exists only when someone is actually listening, remembering, or playing.

Splicing an intelligent algorithm into that real-time experience is very different from offline approaches such as deep learning. Coord exploits a connection between number theory and musical grammar to co-process, in real time, the interplay between musical patterns and human perception and action. It narrows the gap between making sense of music and actually making music.


Generate original music by improvising variations and morphing between riffs and melodies, and navigate by ear through a spectrum of musical compositions. These algorithms exploit what makes music different from any other data, engaging musical structure in real time rather than mining data that merely happens to be music, engaged by someone else at some other time.
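To make the idea of morphing between riffs concrete, here is a minimal illustrative sketch, not Coord's actual algorithm: it blends two equal-length riffs by interpolating their MIDI pitches and snapping each intermediate note to a scale, so every in-between riff remains playable. The scale, riffs, and function names are all hypothetical choices for the example.

```python
# Hypothetical sketch (not Coord's algorithm): morph between two riffs by
# interpolating MIDI pitch values and snapping each result to a scale.

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def snap_to_scale(pitch, scale=C_MAJOR):
    """Move a MIDI pitch to the nearest pitch whose pitch class is in the scale."""
    for offset in range(12):
        for candidate in (pitch - offset, pitch + offset):
            if candidate % 12 in scale:
                return candidate
    return pitch

def morph(riff_a, riff_b, steps):
    """Return `steps` intermediate riffs blending riff_a into riff_b."""
    assert len(riff_a) == len(riff_b), "riffs must align note-for-note"
    frames = []
    for s in range(1, steps + 1):
        t = s / (steps + 1)  # interpolation fraction, strictly between 0 and 1
        frame = [snap_to_scale(round((1 - t) * a + t * b))
                 for a, b in zip(riff_a, riff_b)]
        frames.append(frame)
    return frames

riff_a = [60, 64, 67, 72]   # C-E-G-C arpeggio
riff_b = [62, 65, 69, 74]   # D-F-A-D arpeggio
frames = morph(riff_a, riff_b, steps=3)
```

Pitch interpolation is only one axis a real system might morph along; rhythm, harmony, and phrase structure are others, and the snapping step here stands in for the much richer grammatical constraints the text alludes to.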

Details of the theory, algorithms, and approach are given in these journal and conference papers. A more informal overview can be found in this article on Medium.