
Stephen Tetley wrote:
Hi Mike
There used to be some slides available commenting on Haskore's CSound interface, but they seem to have vanished (I saved a copy rendered to PDF while they were still up). As with all slide presentations, there's a lot of reading between the lines to get a good picture after the fact:
http://www.nepls.org/Events/16/abstracts.html#hudak http://plucky.cs.yale.edu/cs431/HasSoundNEPLS-10-05.ppt -- vanished
This looks like a great resource. Maybe Dr. Hudak can get me a copy. He clearly has the experience to implement a CSound "compiler" as gracefully as anyone could.
Maybe you're doomed to frustration, though, trying to implement your system in Haskell. For the argument I'm about to make, I'd say a working programming language gives you two things: its syntax and semantics, and its libraries plus a repertory of prior art. Stateful programming clearly carries some syntactic burden in Haskell, but stateful monadic programming has the compensating benefit of 'stratification': you have precise control over what state lives where. It's a matter of taste whether you prefer Python's flexibility or Haskell's, erm, 'locality' (precision?).
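To make the 'stratification' point concrete, here is a minimal sketch (entirely invented for this mail, not from any real library): a little score-numbering pass whose counter state is visible in its type, so nothing outside this one monadic computation can touch it.

    import Control.Monad.State

    -- A hypothetical numbering pass: the Int counter is the *only*
    -- state here, and the type says so.
    type Numbering a = State Int a

    freshId :: Numbering Int
    freshId = do
        n <- get
        put (n + 1)
        return n

    -- Tag each note name with a unique id; the state never escapes.
    numberNotes :: [String] -> [(Int, String)]
    numberNotes notes = evalState (mapM tag notes) 0
      where
        tag note = do
            i <- freshId
            return (i, note)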
As for the second half of what you get from a programming language, your system description frames what you want to do with an emphasis on dynamic aspects. This seems a good way off from the prior art in Haskell. For instance, there are Haskell synthesizers: George Giorgidze's YampaSynth and Jerzy Karczmarczuk's Clarion (actually in Clean, but near enough). In both you build signal-processing modules, unit generators of course; YampaSynth does this with arrows, Clarion with infinite streams. With either you would build a synthesizer statically from unit generators and somehow run it to produce sounds [1].
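To give a flavour of the infinite-stream style, here is a toy sketch of my own (not Clarion's actual code; the names and numbers are arbitrary): a signal is just an infinite lazy list of samples, and unit generators are ordinary functions over such lists.

    -- A signal is an infinite list of samples.
    type Signal = [Double]

    sampleRate :: Double
    sampleRate = 44100

    -- A naive sine oscillator at a fixed frequency.
    osc :: Double -> Signal
    osc freq = [ sin (2 * pi * freq * t / sampleRate) | t <- [0..] ]

    -- Combining two unit generators pointwise, e.g. ring modulation.
    ringMod :: Signal -> Signal -> Signal
    ringMod = zipWith (*)

    -- One second of a 440 Hz tone modulated by a 3 Hz sine.
    demo :: Signal
    demo = take 44100 (ringMod (osc 440) (osc 3))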
There is also the prior art of embedded hardware description languages: Lava, Hydra, Wired, Gordon Pace's HeDLa, and soon Kansas Lava. One could view constructing synthesizers from unit generators as usefully analogous to constructing circuits, and certainly, if you are 'compiling' to another system to do the real work (in your case CSound), the papers on Wired, HeDLa, and Kansas Lava have useful insights on 'offshoring' compilation. But again, these systems bear strongly on the 'static description' of a circuit rather than its dynamic operation.
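To show what that 'offshoring' might look like, here is a throwaway sketch of a deep embedding compiled to CSound-flavoured orchestra text. It is purely illustrative: the opcode usage, argument lists, and naming scheme are all simplified, not a faithful CSound backend.

    -- A tiny unit-generator language as a data type...
    data UGen
        = Osc Double Double     -- amplitude, frequency
        | Mix UGen UGen

    -- ...compiled to (schematic) CSound orchestra code. Signal names
    -- are derived from each node's position in the tree, so they are
    -- unique without any name-supply state.
    compile :: UGen -> String
    compile ug = unlines (body ++ ["    out " ++ top])
      where
        (top, body) = go 1 ug

        go :: Int -> UGen -> (String, [String])
        go n (Osc amp freq) =
            let v = "a" ++ show n
            in  (v, ["    " ++ v ++ " oscil " ++ show amp
                          ++ ", " ++ show freq ++ ", 1"])
        go n (Mix l r) =
            let (vl, bl) = go (2 * n) l
                (vr, br) = go (2 * n + 1) r
                v        = "a" ++ show n
            in  (v, bl ++ br ++ ["    " ++ v ++ " = " ++ vl ++ " + " ++ vr])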
Thanks for this detailed review. I will investigate these things. My system sits halfway between a low-level signal-processing language like CSound and a high-level music description language like Hudak's Haskore. My work as a composer will be done at the highest level possible, which means thinking in terms of "notes": things that go "boo" at a certain time, place, frequency, amplitude, timbre, etc. But I want to express things beyond, say, MIDI, like indicating that a group of notes should be played legato, which doesn't just mean "play them individually with no separation between notes" but actually means "modify the CSound instrument's behavior at the points where notes connect." So in one small breath I can say "make this legato," and at the low level the elves scurry around like mad, rearranging code, swapping out instruments, merging notes, and so on. I also have a bad case of Not Invented Here syndrome: seriously, I want to use this system for experimental composition, by which I mean that any crazy idea I dream up can be implemented by adding to or modifying my system, and that predisposes me to write it myself.
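Concretely, here is the kind of rewrite I have in mind for legato, with a note representation invented just for this example: each note is stretched so that it overlaps its successor, leaving it to the (suitably modified) instrument to cross-fade at the join.

    -- An invented note representation; times in seconds.
    data Note = Note { start :: Double, dur :: Double, pitch :: Double }

    -- Stretch every note (except the last) so it ends a little after
    -- its successor begins.
    legato :: Double -> [Note] -> [Note]
    legato _       []  = []
    legato overlap ns  = zipWith stretch ns (tail ns) ++ [last ns]
      where
        stretch n next = n { dur = (start next - start n) + overlap }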
If neither of those 'genres' is close to what you really want to do, then the Haskell prior art is running a bit thin. You could look at dataflow: the dynamic PD / Max systems are often described as dataflow systems. There are some outposts of dataflow programming in Haskell: Gordon Pace has an embedding of Lustre available from his homepage, and there has been work on dataflow programming via comonads. There is also reactive programming, but musical examples there are thin on the ground (nonexistent?), so it might be a long haul to come up with something.
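For a taste of the comonadic approach, here is a minimal rendering after Uustalu and Vene's "The Essence of Dataflow Programming" (my own simplification; note it uses the look-ahead suffix comonad, where their causal dataflow uses a history-carrying variant):

    -- An infinite stream and its comonad operations.
    data Stream a = Cons a (Stream a)

    extract :: Stream a -> a
    extract (Cons x _) = x

    -- All suffixes of a stream: at each position, the "rest" of the signal.
    duplicate :: Stream a -> Stream (Stream a)
    duplicate s@(Cons _ xs) = Cons s (duplicate xs)

    -- Run a stream function at every position.
    extend :: (Stream a -> b) -> Stream a -> Stream b
    extend f = mapS f . duplicate
      where
        mapS g (Cons x xs) = Cons (g x) (mapS g xs)

    -- A two-sample averaging filter written as a comonadic cell...
    avg2 :: Stream Double -> Double
    avg2 (Cons x (Cons y _)) = (x + y) / 2

    -- ...lifted to a whole-signal smoother.
    smooth :: Stream Double -> Stream Double
    smooth = extend avg2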
But all of this sounds well worth studying. Thanks, Mike