I was reading an article on Hacker News about Musical User Interfaces, and it made me wonder why current music tools don't let you code your song. By coding a song I mean writing a song as a program. Imagine your DAW provides an IDE of some kind. You have a Lisp-like REPL, a set of domain-specific objects (tracks, loops, samples, filters, etc.), and some kind of time-based flow construct. Add sound sources, set their play times, and apply functions (i.e., filters) to your objects at whatever times you choose.
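To make the idea concrete, here is a minimal sketch of what such a DSL might look like, written in Python rather than Lisp for readability. Every name in it (Track, Sample, at, lowpass) is invented for illustration; nothing here corresponds to a real DAW API.

```python
# Hypothetical sketch of a song-as-code DSL. All names are invented.
from dataclasses import dataclass, field

@dataclass
class Sample:
    name: str

@dataclass
class Event:
    time: float                 # seconds from song start
    sample: Sample
    effects: list = field(default_factory=list)

@dataclass
class Track:
    name: str
    events: list = field(default_factory=list)

    def at(self, time, sample):
        """Schedule a sample at an absolute time; return the event
        so filters can be attached to it."""
        ev = Event(time, sample)
        self.events.append(ev)
        return ev

def lowpass(cutoff_hz):
    """A filter is just a description here; a real engine would apply it."""
    return ("lowpass", cutoff_hz)

# Build a two-bar drum track entirely in code.
kick, hat = Sample("kick"), Sample("hat")
drums = Track("drums")
for bar in range(2):
    drums.at(bar * 2.0, kick)                    # one kick per bar
    for step in range(4):
        drums.at(bar * 2.0 + step * 0.5, hat)    # hats every half beat
drums.events[0].effects.append(lowpass(800))     # filter only the first kick

print(len(drums.events))  # 10 scheduled events
```

The point is not the specific syntax but that the whole arrangement is ordinary data built by ordinary code, so it can be looped over, parameterized, and diffed like any other program.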
What would be the advantage of coding a song? I can think of a few potential benefits. Precise control over every element of the song, since you would specify exact values in code rather than relying on presets. Once filters, sequences, and other effects are expressed in code, they can be stored easily for re-use. And with code you get all the advantages of programming tools: version control, robust configuration management, maybe even bug checking.
By coding songs you would also be able to do things that are currently hard or impossible in visually oriented DAWs. You would have access to all the features of a programming language, such as iteration, complex functions, and algorithmically generated sequences. Plugins become libraries that you can call as needed. There is no reason you couldn't still have visual tools; in this case they would simply reflect your code.
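Algorithmic sequences are where code pays off most clearly. As a hedged sketch (the function names are my own, not from any particular tool), here are two patterns that are tedious to click together by hand but trivial to generate: a multi-octave arpeggio and a Euclidean rhythm.

```python
# Hypothetical examples of algorithmically generated sequences.

def arpeggio(root, intervals, octaves):
    """Repeat a set of intervals (in semitones) across several octaves.
    Notes are MIDI note numbers; 60 is middle C."""
    notes = []
    for octave in range(octaves):
        for semitones in intervals:
            notes.append(root + 12 * octave + semitones)
    return notes

def euclid(pulses, steps):
    """Spread `pulses` hits as evenly as possible over `steps` slots
    (a Euclidean rhythm), via the modular construction."""
    return ["x" if (i * pulses) % steps < pulses else "." for i in range(steps)]

# A C-minor arpeggio (root, minor third, fifth) rising over three octaves.
pattern = arpeggio(60, [0, 3, 7], 3)
print(pattern)          # [60, 63, 67, 72, 75, 79, 84, 87, 91]

# Three hits spread over eight steps.
print("".join(euclid(3, 8)))  # x..x..x.
```

Change one argument and the whole sequence regenerates; doing the same with a piano-roll editor means redrawing every note.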
I know things like this already exist in various DAWs, but they appear to be limited to scripting: the visual UI is primary, while I'm arguing for the code to be primary. I can't help but think that the default skeuomorphic UIs are in some ways holding things back. All those virtual knobs aren't that easy to work with anyway. I'd like to see a musical IDE. Code is in motion just like music, so it doesn't seem like a big stretch to me.