
My primary claim for success is that the representation of Picture in Haven type checks and doesn't appeal to IO; IO only creeps in when we attempt to render a Picture.
You did something much more meaningful to me than what you say here.
Thanks. ;-)
[...]
I think what you did in Haven (based on memories of our conversations at the time and looking at your slides just now) is substantively different. You gave a precise, complete, and tractably simple *denotation* to your types. Complete enough to define the correctness of the rendering process.
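
To make the two ideas in play concrete: below is a minimal sketch, in plain Haskell, of a Picture representation that is an ordinary pure data type, plus a meaning function giving each Picture a simple mathematical denotation against which a renderer can be judged. The constructors (`Disc`, `Over`, `Translate`), the RGBA colour model, and the toy ASCII renderer are all illustrative assumptions, not Haven's actual API.

```haskell
-- A minimal sketch, NOT Haven's actual types: a pure Picture representation,
-- a denotation for it, and IO confined to the rendering step.
type Point  = (Double, Double)
type Colour = (Double, Double, Double, Double)   -- RGBA, purely for illustration

data Picture
  = Disc Colour Double          -- a filled disc of the given radius, centred at the origin
  | Over Picture Picture        -- the first picture composited over the second
  | Translate Point Picture     -- one representative affine transform

-- The mathematical object a Picture denotes: a function from points of the
-- plane to colours.  A renderer is correct exactly when the pixels it
-- produces agree with this function (up to sampling).
type Image = Point -> Colour

transparent :: Colour
transparent = (0, 0, 0, 0)

meaning :: Picture -> Image
meaning (Disc c r)               = \(x, y) -> if x*x + y*y <= r*r then c else transparent
meaning (Over top bot)           = \p      -> meaning top p `overC` meaning bot p
meaning (Translate (dx, dy) pic) = \(x, y) -> meaning pic (x - dx, y - dy)

-- Simplified compositing: the top colour wins unless it is fully transparent.
overC :: Colour -> Colour -> Colour
overC top@(_, _, _, a) bot = if a > 0 then top else bot

-- IO appears only here, at the boundary: a toy "renderer" that samples the
-- denotation on a coarse grid and prints it as ASCII art.
render :: Picture -> IO ()
render pic = mapM_ putStrLn
  [ [ if alpha (meaning pic (x, y)) > 0 then '#' else '.' | x <- [-10 .. 10] ]
  | y <- [-10 .. 10] ]
  where
    alpha (_, _, _, a) = a
```

Everything above `render` is ordinary pure Haskell, so a question such as whether `Over` is associative becomes a question about plain functions on the plane rather than about a rendering back end.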
Does the Haven API live up to your goal of semantic purity for a vector graphics library? If not, where specifically does it fall short?
Yes, if my understanding about denotational precision and completeness is correct. Is it?
Yes and no. "Yes" in the sense that every type in Haven had a simple definition using Haskell's type system, and I used these types to specify signatures for a relatively complete set of 2D geometry functions. "No" in the sense that I never bothered to implement the various geometric functions directly in Haskell; I depended on the underlying implementation to do so. For simple things like points, lines, and affine transforms, I don't think this should be too controversial, but it's a bit less clear for clipping and constructive area geometry on complicated Bezier paths.

From a library user's point of view there isn't much distinction between what I did and a pure implementation, but I can't really claim it's a rigorous or complete semantics without a pure reference implementation, and there's obviously no way to prove claims such as "Shapes form a monoid" without giving a direct definition of the composition and clipping operators. -Antony
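
For concreteness, here is one form such a pure reference definition could take: a hypothetical sketch that denotes a Shape by the set of points it covers. Haven's actual Shape, built from Bezier paths, is much richer; the names and the point-set model here are illustrative only.

```haskell
-- A hypothetical pure reference semantics for the monoid claim: denote a
-- Shape by the set of points it covers.  This only shows the shape of the
-- argument, not Haven's real Bezier-path machinery.
type Point = (Double, Double)

newtype Shape = Shape { covers :: Point -> Bool }

-- Composition is union of the covered point sets.
instance Semigroup Shape where
  Shape f <> Shape g = Shape (\p -> f p || g p)

-- The empty shape covers nothing; the monoid laws then reduce to the
-- corresponding laws of (||) and False.
instance Monoid Shape where
  mempty = Shape (const False)

-- Clipping one shape against another is intersection of point sets.
clip :: Shape -> Shape -> Shape
clip (Shape f) (Shape g) = Shape (\p -> f p && g p)
```

With a direct definition of this sort, "Shapes form a monoid" becomes something to prove by equational reasoning rather than something to assert about an opaque rendering back end, which is exactly the gap described above.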