
Reaktor has a few limitations though.
1. It's virtually impossible to debug the thing! (I.e., if your synth doesn't work... good luck working out why.)
2. It lacks looping capabilities. For example, you cannot build a variable-size convolution block - only a fixed-size one. (Unless you want to draw *a lot* of wires!) If you look through the standard library, you'll find no end of instruments that hack around this by abusing voice polyphony to crudely simulate looping... but it's not too hot.
Would be nice if I could build something in Haskell that overcomes these (a sketch of the variable-size convolution is below). OTOH, does Haskell have any way to talk to the audio hardware?
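To make the looping point concrete, here's a minimal Haskell sketch of a variable-size convolution, assuming signals are modelled as plain lazy lists of samples (my assumption for illustration, not anything Reaktor provides):

    -- Direct-form FIR convolution. The kernel can be any length,
    -- chosen at run time; no wires to draw. Assumes a non-empty kernel.
    convolve :: [Double] -> [Double] -> [Double]
    convolve kernel = go (replicate (length kernel) 0)
      where
        go _    []       = []
        go hist (x:rest) =
          let hist' = x : init hist          -- newest sample first
          in  sum (zipWith (*) kernel hist') : go hist' rest

    -- e.g. a 64-tap moving average: convolve (replicate 64 (1/64)) input

The point is just that the kernel size is an ordinary value, so "looping" falls out of recursion for free.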
Nyquist is a music language from a while back that I liked (in theory, not so much in practice). It has a functional model in that "instruments" are just functions that return sample streams, so there's no difference between signal-processing and signal-generating functions (or orchestra and score, for the csounders out there). This was made efficient by lazily evaluating the streams, plus some custom hackery - it would, e.g., notice that one signal going into (+) had ended and simply unlink the entire operation from the call graph - and some GC hackery to reclaim sound blocks quickly.

The other interesting feature was "behaviours", which were just dynamically scoped variables that would propagate down to the nearest function that cared to interpret them. E.g. (transpose 5 (seq (melody 1) (melody 2))) would transpose the tones generated by (melody) by setting a dynamic variable that would later be read by the underlying oscillators or whatever melody used. "seq" used the same mechanism to shift or scale the beginnings and endings of notes.

It was also more powerful than csound-, supercollider-, or reaktor-style languages, because in those you have to compile a static call graph (the "orchestra") and then "play" it with signals dynamically (the "score"), whereas in Nyquist there's no orchestra/score division. The disadvantage is that the immutable signals made it hard to implement real-time synthesis. The practical problem was that it was implemented in a hacked-up XLisp, which was primitive and hard to debug. Added to that, you had to be careful to wrap signals in functions or macros so the eager interpreter wouldn't evaluate the whole signal and kill performance.

To bring this back to Haskell: at the time I wondered if a more natural implementation might be possible in Haskell, seeing as it's lazy by default (a sketch of the stream model is below). Not sure how to implement the behaviours, though (which were simply macros around a let of *dynamic-something*) - a possible encoding is sketched below as well. I'm sure people have done plenty of signal processing, and there's always haskore... but what about a sound-generation language like csound or clm or nyquist? It could fit in nicely below haskore.

Also, as another Reaktor user, I agree it would have been so much nicer were it simply a real language. Drawing those boxes and lines, with the complete lack of abstraction (beyond copy and paste), is a real pain. Supercollider is more promising in that way, but less polished of course. It also has that two-level imperative model, though, where you create and modify your signal graph with imperative techniques, then run it, rather than your program *being* the signal graph.
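On the stream model: here's roughly how "instruments are just functions returning sample streams" might look in Haskell. This is a minimal sketch of my own (the Signal type, osc, etc. are names I've made up for illustration, not Nyquist's API); note there's no generator/processor split, and laziness means only the samples actually consumed ever get computed:

    type Signal = [Double]   -- a signal is just a lazy sample stream

    sampleRate :: Double
    sampleRate = 44100

    -- A generator: an infinite sine wave.
    osc :: Double -> Signal
    osc hz = [sin (2 * pi * hz * t / sampleRate) | t <- [0 ..]]

    -- A processor: the same kind of function, with no special status.
    gain :: Double -> Signal -> Signal
    gain g = map (* g)

    -- Mixing. zipWith (+) would stop when either input ends; Nyquist's
    -- (+) instead unlinked the finished input and kept going, which
    -- here is just another equation:
    mix :: Signal -> Signal -> Signal
    mix (x:xs) (y:ys) = (x + y) : mix xs ys
    mix xs     []     = xs
    mix []     ys     = ys

    -- "Play" one second of a tiny patch.
    patch :: Signal
    patch = take 44100 (gain 0.5 (mix (osc 440) (osc 660)))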
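As for the behaviours: one possible encoding (a guess on my part, not derived from Nyquist) is a Reader environment, so the dynamically scoped variables become an implicitly threaded environment that leaf functions may interpret and combinators may locally adjust. Env, tone, and sq below are hypothetical names (sq rather than seq, to avoid the Prelude clash):

    import Control.Monad.Reader

    -- The dynamic environment; just transposition here.
    newtype Env = Env { envTranspose :: Double }

    type Behaviour a = Reader Env a
    type Signal      = [Double]

    -- A leaf that interprets the environment: an oscillator that reads
    -- the current transposition (in semitones).
    tone :: Double -> Behaviour Signal
    tone hz = do
      semis <- asks envTranspose
      let hz' = hz * 2 ** (semis / 12)
      pure [sin (2 * pi * hz' * t / 44100) | t <- [0 .. 44099]]

    -- A behaviour transformer adjusts the environment for everything
    -- beneath it, like Nyquist setting a dynamic variable.
    transpose :: Double -> Behaviour a -> Behaviour a
    transpose n = local (\e -> e { envTranspose = envTranspose e + n })

    -- Sequential composition: one stream after the other.
    sq :: Behaviour Signal -> Behaviour Signal -> Behaviour Signal
    sq a b = (++) <$> a <*> b

    -- (transpose 5 (seq (melody 1) (melody 2))) becomes:
    example :: Signal
    example = runReader (transpose 5 (sq (tone 440) (tone 660))) (Env 0)

The obvious gap versus true dynamic scoping is that the environment type is fixed up front, whereas Nyquist could introduce new *dynamic-something* variables ad hoc.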