Functional programming for processing of large raster images

Recently Vo Minh Thu wondered if Haskell (or, I generalize, functional programming) can be of much use for computer graphics programming. I'd like to point out a project that has shown experimentally that functional programming is of great help for processing large raster images (24-bit PPM files). The paper describing the system has been accepted for `Science of Computer Programming' and is in press:

    Combining Partial Evaluation and Staged Interpretation in the Implementation of Domain-Specific Languages
    Christoph A. Herrmann, Tobias Langhammer

The code for the system is available online:
    http://infosun.fmi.uni-passau.de/cl/metaprog/imagefilter2-2005-10-30.tgz

The previous [slightly obsolete] version of the paper, presented at the MetaOCaml'04 workshop, is available online:
    http://www.fmi.uni-passau.de/forschung/mip-berichte/MIP-0410.html

The paper includes a few pictures and a table of benchmarks, so we can observe the benefits. Now, with offshoring and native MetaOCaml fully implemented, we can gain even bigger benefits.

In this project, functional programming does what it is probably best suited for: developing a `compiler' -- a compiler from a filter specification to quite efficient code. The code is specialized to all available static information. To be precise, the authors implement an interpreter -- which is then specialized to the source program, so the interpretative overhead is removed and the code is optimized. Staging an interpreter automatically gives a compiler. Futamura projections are not only fascinating, but useful, too.
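To give a feel for the specialization idea in Haskell terms (a minimal sketch of my own; the project itself uses MetaOCaml to generate code, and the Expr type and the filter below are made-up examples, not the paper's DSL):

    -- A naive interpreter traverses the filter expression for every pixel.
    data Expr = Pixel              -- the current pixel value
              | Lit Double
              | Add Expr Expr
              | Mul Expr Expr

    eval :: Expr -> Double -> Double
    eval Pixel     p = p
    eval (Lit c)   _ = c
    eval (Add a b) p = eval a p + eval b p
    eval (Mul a b) p = eval a p * eval b p

    -- The "staged" version traverses the expression once, up front, and
    -- returns a closed function; the per-pixel work no longer inspects Expr.
    compile :: Expr -> (Double -> Double)
    compile Pixel     = \p -> p
    compile (Lit c)   = \_ -> c
    compile (Add a b) = let fa = compile a; fb = compile b in \p -> fa p + fb p
    compile (Mul a b) = let fa = compile a; fb = compile b in \p -> fa p * fb p

    -- e.g. a hypothetical brightness filter, specialized once, then mapped over pixels
    brighten :: Double -> Double
    brighten = compile (Add (Mul (Lit 1.2) Pixel) (Lit 0.05))

This is only the first step of what the paper does; true staging (MetaOCaml) emits residual code that an ordinary compiler can then optimize further.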

Thanks for pointing this out, although I knew that kind of answer via
papers about Pan.
It means I'll have to improve my compiler-writing knowledge :)
mt
2006/6/21, oleg@pobox.com
Recently Vo Minh Thu wondered if Haskell (or, I generalize, functional programming) can be of much use for computer graphics programming.
I'd like to point out a project that has shown experimentally that functional programming is of great help for processing large raster images (24-bit PPM files). The paper describing the system has been accepted for `Science of Computer Programming' and is in press:
Combining Partial Evaluation and Staged Interpretation in the Implementation of Domain-Specific Languages Christoph A. Herrmann, Tobias Langhammer
The code for the system is available online http://infosun.fmi.uni-passau.de/cl/metaprog/imagefilter2-2005-10-30.tgz
The previous [slightly obsolete] version of the paper, presented at the MetaOCaml'04 workshop, is available online http://www.fmi.uni-passau.de/forschung/mip-berichte/MIP-0410.html
The paper includes a few pictures and a table of benchmarks, so we can observe the benefits. Now, with offshoring and native MetaOCaml fully implemented, we can gain even bigger benefits.
In this project, functional programming does what it is probably best suited for: developing a `compiler' -- a compiler from a filter specification to quite efficient code. The code is specialized to all available static information. To be precise, the authors implement an interpreter -- which is then specialized to the source program, so the interpretative overhead is removed and the code is optimized. Staging an interpreter automatically gives a compiler. Futamura projections are not only fascinating, but useful, too.

I think the issue wasn't using functional programming for large image processing, it was using Haskell. OCaml is notoriously fast and strict. Haskell/GHC is... lazy. Everyone knows that laziness is supposed to be a virtue. In practice, though, I'm one of the people who either can't wrap their heads around it or just find themselves having to fight it from the start. Should we all switch to OCaml? I wish I had a criterion to determine when to use Haskell and when to use OCaml. On Jun 21, 2006, at 8:15 AM, minh thu wrote:
Thanks for pointing this out, although I knew that kind of answer via papers about Pan. It means I'll have to improve my compiler-writing knowledge :) mt
2006/6/21, oleg@pobox.com
: Recently Vo Minh Thu wondered if Haskell (or, I generalize, functional programming) can be of much use for computer graphics programming.
I'd like to point out a project that has shown experimentally that functional programming is of great help for processing large raster images (24-bit PPM files). The paper describing the system has been accepted for `Science of Computer Programming' and is in press:

Hello Joel, Wednesday, June 21, 2006, 1:24:05 PM, you wrote:
I think the issue wasn't using functional programming for large image processing, it was using Haskell. OCaml is notoriously fast and strict. Haskell/GHC is... lazy.
+1 :)
Everyone knows that laziness is supposed to be a virtue. In practice, though, I'm one of the people who either can't wrap their heads around it or just find themselves having to fight it from the start.
i think that i just don't notice the cases where i use lazy evaluation in Haskell - it just silently works. just for example - my program has a file list which can contain, say, 100,000 files. it is split into sublists by file extension, and each sublist is split again into 100-item chunks. each sub-sub-list is then processed in an inner loop. all that is done via lazy list evaluation, and i don't even think about how it actually works - for me it's just a split+mapM_ combination
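a rough sketch of that kind of pipeline (hypothetical names and helpers, not the actual code; it assumes a process :: FilePath -> IO () action):

    import Data.List (groupBy, sortBy)
    import Data.Ord (comparing)
    import System.FilePath (takeExtension)

    -- split a (possibly huge) list into chunks of n items, produced on demand
    chunksOf :: Int -> [a] -> [[a]]
    chunksOf _ [] = []
    chunksOf n xs = let (h, t) = splitAt n xs in h : chunksOf n t

    processAll :: (FilePath -> IO ()) -> [FilePath] -> IO ()
    processAll process files =
        mapM_ (mapM_ (mapM_ process))      -- innermost loop runs over each chunk
              [ chunksOf 100 grp           -- each sublist split into 100-item chunks
              | grp <- groupBy sameExt (sortBy (comparing takeExtension) files) ]
      where
        sameExt a b = takeExtension a == takeExtension b

the grouping into chunks is produced lazily, just ahead of the mapM_ that consumes it.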
Should we all switch to OCaml? I wish I had a criterion to determine when to use Haskell and when to use OCaml.
i never tried to work in it but i've read the books. for me it's just too irregular and uncomfortable a language. from the theoretical point of view, Haskell has some stronger facilities (because it was created much later), such as polymorphic recursion and type classes. although OCaml is much later than the original ML and contains numerous extensions, they are more pragmatic and not really integrated into the language (imho). Haskell, on the other side, grows with new high-level concepts that are moved from the academic fields right into real work (say, STM and GADTs).

the language of my dreams is a strict Haskell with the ability to explicitly specify lazy evaluation where needed. it was discussed on this list several months ago (search for "strict Haskell").

one possibly interesting variant of this "strict Haskell" can be the Clean language - it's also lazy by default, afaik, but contains better features to specify strictness, a compiler that generates fast code (at the level of OCaml/jhc), plus an IDE with GUI libs. compared to Haskell, it seems somewhat like "Visual Basic" compared to plain Basic. again, i haven't tried it (and it's not free, afair), so look himself.

here is the list of files i downloaded from the site but haven't yet tried. the first is the compiler, the others are docs (and it's a HUGE set of docs):

HsCleanAll2.0.2.zip [http://aszt.inf.elte.hu/~fun_ver/2003/software/HsCleanAll2.0.2.zip]
CleanBookI.pdf 1-up [ftp://ftp.cs.kun.nl/pub/Clean/papers/cleanbook/CleanBookI.pdf]
CleanBookI2up.pdf 2-up [ftp://ftp.cs.kun.nl/pub/Clean/papers/cleanbook/CleanBookI2up.pdf]
III.1.ProgramDevelopment.pdf [ftp://ftp.cs.kun.nl/pub/Clean/papers/cleanbook/III.1.ProgramDevelopment.pdf]
III.2.ProgStylesParadigms.pdf [ftp://ftp.cs.kun.nl/pub/Clean/papers/cleanbook/III.2.ProgStylesParadigms.pdf]
III.3.Efficiency.pdf [ftp://ftp.cs.kun.nl/pub/Clean/papers/cleanbook/III.3.Efficiency.pdf]
object-io.pdf [ftp://ftp.cs.kun.nl/pub/Clean/supported/ObjectIO.1.2/doc/tutorial.pdf]

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Bulat Ziganshin wrote:
one possibly interesting variant of this "strict Haskell" can be the Clean language - it's also lazy by default, afaik, but contains better features to specify strictness, a compiler that generates fast code (at the level of OCaml/jhc), plus an IDE with GUI libs. compared to Haskell, it seems somewhat like "Visual Basic" compared to plain Basic. again, i haven't tried it (and it's not free, afair), so look himself.
Yourself... ... You will find that Clean IS FREE under LGPL, although commercial versions exist as well. Don't spread dubious "truths". Jerzy Karczmarczuk

Hello Jerzy, Wednesday, June 21, 2006, 4:32:40 PM, you wrote:
one possibly interesting variant of this "strict Haskell" can be the Clean language - it's also lazy by default, afaik, but contains better features to specify strictness, a compiler that generates fast code (at the level of OCaml/jhc), plus an IDE with GUI libs. compared to Haskell, it seems somewhat like "Visual Basic" compared to plain Basic. again, i haven't tried it (and it's not free, afair), so look yourself.
... You will find that Clean IS FREE under LGPL, although commercial versions exist as well.
can you write more about Clean and your experience with it? i don't know the exact LGPL rules - can it be used to develop commercial software, and which parts (if any) are omitted in the LGPL version? it will be very interesting to hear about the Clean programming environment compared with well-known RADs - .NET, Eclipse, Delphi. can you enlighten me? -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

hi Bulat,
briefly:
with the GPL you have to make your code GPL if it uses GPLed code.
with the LGPL you don't have to make your code LGPL/GPL if it only links
against an LGPLed library.
regards,
mt
2006/6/21, Bulat Ziganshin
Hello Jerzy,
Wednesday, June 21, 2006, 4:32:40 PM, you wrote:
one possibly interesting variant of this "strict Haskell" can be the Clean language - it's also lazy by default, afaik, but contains better features to specify strictness, a compiler that generates fast code (at the level of OCaml/jhc), plus an IDE with GUI libs. compared to Haskell, it seems somewhat like "Visual Basic" compared to plain Basic. again, i haven't tried it (and it's not free, afair), so look yourself.
... You will find that Clean IS FREE under LGPL, although commercial versions exist as well.
can you write more about Clean and your experience with it? i don't know the exact LGPL rules - can it be used to develop commercial software, and which parts (if any) are omitted in the LGPL version?
it will be very interesting to hear about the Clean programming environment compared with well-known RADs - .NET, Eclipse, Delphi. can you enlighten me?
-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Joel Reymont wrote:
I think the issue wasn't using functional programming for large image processing, it was using Haskell. OCaml is notoriously fast and strict. Haskell/GHC is... lazy.
Everyone knows that laziness is supposed to be a virtue. In practice, though, I'm one of the people who either can't wrap their heads around it or just find themselves having to fight it from the start.
Perhaps laziness is more "foundational", in that you can write

    if2 c x y = if c then x else y

However:

1) What's the advantage of being able to define if2?

2) Many control constructs, like >>=, foldl', map etc, don't require laziness because they already take a function as argument rather than a plain expression. In fact, I can't think of any useful user-defined control constructs that actually use laziness in the same way as if2, and the presence of laziness in foldr, foldl etc just seems to be a real pain, leading to excessive heap consumption. (Counterexample?)

3) "Lazy lists as glue" can easily be replaced by force/delay lists + an extension to pattern matching where pattern matching against [a] forces the argument and the syntax [h|t] is used as in Prolog, instead of h:t. (This would also free : to be used for "with type" or "with partial type" instead of ::)

4) Other examples of the utility of laziness can turn out to be impractical chimeras. For example, the famous repmin replaces two traversals of a tree with the dubious "advantage" of traversing it "only once" while building up a cluster of expensive thunks instead, and since the thunks effectively encode the structure of the tree, evaluating them effectively constitutes the second traversal. So nothing is gained except the difficulty of understanding the all-too-clever code (very bad software engineering practice imho), more heap consumption, and more time consumption.
Should we all switch to OCaml? I wish I had a criteria to determine when to use Haskell and when to use OCaml.
Everything else about Haskell is so great and well thought out (eg type classes, no side effects, higher rank polymorphism, existentials) that it seems a pity to throw all this away just because of one unfortunate feature.

An alternative would be to write a converter that reads a file of Haskell source which is to be interpreted strictly and outputs a file of lazy Haskell source with strictness annotations everywhere, with desugaring of [a] into a force/delay representation (*). (Also in general, !x could mean "force x" ie x(), and ~x could mean \()->x (in value syntax) or ()->x (in type syntax), and !x could be allowed in patterns also (when the type at that position is ~x).) For example,

    foo : ~Int -> Int     -- also taking the opportunity to replace :: by :
    foo !x = x

would desugar to

    foo :: (() -> Int) -> Int
    foo cx = case cx () of x -> x

The main challenge I think would be in converting bindings in let and where to appropriate case bindings to ensure that, as far as possible, redundant bindings for a given path of control flow are not made, and analysing mutually recursive bindings to ensure that everything is bound before being used.

Some issues would need to be resolved about what the user can expect. For example, whole-program optimization could allow some expressions to be optimized out (eg by splitting a function returning a pair into two functions, one for each element) that would otherwise be non-terminating, whereas with lazy Haskell there is an easy rule that (effectively) expressions are only evaluated when needed, regardless of optimization.

Then an interesting investigation would be to see how easy it is to port lazy Haskell to the new, totally strict, language, and whether there are any situations where existing code has used laziness (for algorithmic reasons or just for efficiency) without being aware of it.

(*) So that eventually a new compiler could be written to take full advantage of the side-effect-free nature of the language plus the strictness, which means that values can be accessed without having to go through the "if unevaluated then evaluate, store, and return else return" check for each unoptimized access. Because OCaml and SML have side effects, which limit optimizations due to possible aliasing etc, it might be possible to compile such a language to code which is certainly no slower, and possibly *even faster* than OCaml!!! :-)

Regards, Brian.

-- Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com
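To make the force/delay idea concrete, here is a minimal sketch of how such lists could look (my own illustration of the proposal, written in today's Haskell, where the explicit thunks are of course redundant since fields are lazy anyway; in a genuinely strict language they would do the real work):

    -- the spine is an ordinary constructor, the tail is an explicit thunk
    data FDList a = Nil | Cons a (() -> FDList a)

    fdMap :: (a -> b) -> FDList a -> FDList b
    fdMap _ Nil         = Nil
    fdMap f (Cons x tl) = Cons (f x) (\() -> fdMap f (tl ()))

    fdTake :: Int -> FDList a -> [a]
    fdTake n _ | n <= 0 = []
    fdTake _ Nil        = []
    fdTake n (Cons x tl) = x : fdTake (n - 1) (tl ())

    ones :: FDList Int
    ones = Cons 1 (\() -> ones)    -- an "infinite" list: the tail is delayed

The [h|t] pattern syntax mentioned above would then just be sugar for matching Cons and forcing the tail thunk.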

Hi, I happen to like laziness, because it means that when I'm not thinking about performance, I don't have to think about evaluation order _at all_. And since my computer is a 750Mhz Athlon with Hugs, I never find any need to worry about performance :) If it ever becomes an issue I can move to GHC or buy a faster computer without too much hassle.
1) What's the advantage of being able to define if2?
What about &&, || ? Should they be "built in"? What about and, which is just && a lot of times - should that be lazy? At what point do you say no? Should I be able to define implies correctly?
3) "Lazy lists as glue" can easily be replaced by force/delay lists + an extension to pattern matching where pattern matching against [a] forces the argument and the syntax [h|t] is used as in Prolog, instead of h:t (This would also free : to be used for "with type" or "with partial type" instead of ::) That seems like more "thought" when writing the program, maybe its worth it, maybe its not, but it doesn't seem as "neat" as what we already have.
4) Other examples of the utility of laziness can turn out to be impractical chimeras. For example, the famous repmin replaces two traversals of a tree with the dubious "advantage" of traversing it "only once" while building up a cluster of expensive thunks instead, and since the thunks effectively encode the structure of the tree, evaluating them effectively constitutes the second traversal. So nothing is gained except the difficulty of understanding the all-too-clever code (very bad software engineering practice imho), more heap consumption, and more time consumption.
Laziness doesn't have to be exploited in complex ways - minimum = head . sort is a nice example. isSubstr x y = any (isPrefix x) (tails y) is another one. Often, just by stating a definition, laziness gives you the performance for free. Of course, if you wanted to think harder (and I never do), you could write better-performing and strict-safe versions of these, but again it's more effort.
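Spelled out as a small runnable sketch (assuming isPrefix was meant to be Data.List.isPrefixOf):

    import Data.List (sort, tails, isPrefixOf)

    -- with a lazy sort, only enough of the sorting is performed to deliver the head
    minimum' :: Ord a => [a] -> a
    minimum' = head . sort

    -- x occurs in y iff x is a prefix of some suffix of y
    isSubstr :: Eq a => [a] -> [a] -> Bool
    isSubstr x y = any (x `isPrefixOf`) (tails y)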
The other thing you lose when moving to strictness is the ability to inline functions arbitrarily - consider:

    if2 c t f = if c then t else f

Consider the expression:

    if2 True 1 undefined

Now let's inline and expand it: in Haskell we get 1, which matches the evaluation. In strict Haskell the inlining is now invalid, and that's quite a useful optimisation to make. While it seems that compilers can get round this, my concern is for the poor programmer - this nice property of viewing functions as just "replace this with that" has disappeared.

I suspect that in years to come, lazy languages will also have the upper hand when it comes to theorem proving and formal reasoning, but I guess that's a matter for future consideration. While laziness may not be all good, it's certainly not all bad :)

Thanks
Neil

On Wednesday, 21 June 2006 at 21:30, Brian Hulley wrote:
Perhaps laziness is more "foundational", in that you can write
if2 c x y = if c then x else y
However:
1) What's the advantage of being able to define if2?
Well, you can easily define your own control structures (when, unless etc). This feature is widely used in combinator libraries. Also, in a non-strict language recursive definitions are not limited to function types. Users of parser combinators heavily rely on this feature. Just try to define/use parsing combinators in a strict language. The problem with eager evaluation is that it is too eager (expressions are sometimes evaluated too early), just as the problem with lazy evaluation is that it is too lazy (evaluations sometimes happen too late). Cheers, Ralf
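A small sketch of the recursive-value point (my own illustration, a stripped-down version of the usual Hutton/Meijer-style combinators, not any particular library):

    newtype Parser a = Parser { runParser :: String -> [(a, String)] }

    char :: Char -> Parser Char
    char c = Parser $ \s -> case s of
      (x:xs) | x == c -> [(x, xs)]
      _               -> []

    (<|>) :: Parser a -> Parser a -> Parser a
    p <|> q = Parser $ \s -> case runParser p s of
      [] -> runParser q s
      r  -> r

    follows :: Parser a -> Parser b -> Parser b
    follows p q = Parser $ \s ->
      [ (b, s'') | (_, s') <- runParser p s, (b, s'') <- runParser q s' ]

    succeed :: a -> Parser a
    succeed x = Parser $ \s -> [(x, s)]

    -- a grammar written as a recursive *value*: as ::= 'a' as | empty
    as :: Parser ()
    as = (char 'a' `follows` as) <|> succeed ()

Non-strictness lets `as` refer to itself on the right-hand side; in an eager language the definition would have to be eta-expanded or wrapped in an explicit delay before it could even be written down.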

Hello Ralf, Thursday, June 22, 2006, 12:02:43 AM, you wrote:
limited to function types. Users of parser combinators heavily rely on this feature. Just try to define/use parsing combinators in a strict language.
C++ Boost contains a parsing combinator library. part of it is, of course, a lazy evaluation sub-library :) -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Ralf Hinze wrote:
Also, in a non-strict language recursive definitions are not limited to function types. Users of parser combinators heavily rely on this feature. Just try to define/use parsing combinators in a strict language.
Anyone care to comment on what goes wrong with parser combinators in an eager language? Is it mainly a space usage problem (where the lazy version might essentially perform a depth-first-search, while the eager version is breadth-first)? Or is there something else I'm missing? As a reference, back when I was trying to understand monads, I ported the parser combinators from the Hutton and Meijer paper to perl... http://sleepingsquirrel.org/monads/parser/monad_parser.txt Thanks, Greg Buchholz

On 6/22/06, Greg Buchholz
Ralf Hinze wrote:
Also, in a non-strict language recursive definitions are not limited to function types. Users of parser combinators heavily rely on this feature. Just try to define/use parsing combinators in a strict language.
Anyone care to comment on what goes wrong with parser combinators in an eager language? Is it mainly a space usage problem (where the lazy version might essentially perform a depth-first-search, while the eager version is breadth-first)? Or is there something else I'm missing?
Slide 22 ("Combinator Libraries") of http://research.microsoft.com/~simonpj/papers/haskell-retrospective/ shows that in an eager language, you have to make the argument explicit, which destroys the Parser abstraction. Indeed I rolled my own sort of monads and made my own parser combinators in C# and they were a lot like your Perl combinators: very imperative and verbose (~10x more code than Haskell for the same parser), instead of clean and declarative like BNF or Haskell parser combinators. Jared.
As a reference, back when I was trying to understand monads, I ported the parser combinators from the Hutton and Meijer paper to perl...
http://sleepingsquirrel.org/monads/parser/monad_parser.txt
Thanks,
Greg Buchholz
-- http://www.updike.org/~jared/ reverse ")-:"

On Jun 21, 2006, at 3:30 PM, Brian Hulley wrote:
Joel Reymont wrote:
I think the issue wasn't using functional programming for large image processing, it was using Haskell. OCaml is notoriously fast and strict. Haskell/GHC is... lazy.
Everyone knows that laziness is supposed to be a virtue. In practice, though, I'm one of the people who either can't wrap their heads around it or just find themselves having to fight it from the start.
Perhaps laziness is more "foundational", in that you can write
if2 c x y = if c then x else y
[snip some conversation...] For those who haven't seen this already, here is a presentation by Simon PJ in which he discusses his views on laziness (among other things): http://research.microsoft.com/~simonpj/papers/haskell-retrospective/HaskellRetrospective.pdf Takeaway point about laziness: "Laziness keeps you honest" by not allowing you to slip in side effects. Bonus takeaway: read Wadler's papers :-) Rob Dockins Speak softly and drive a Sherman tank. Laugh hard; it's a long way to the bank. -- TMBG

| Everything else about Haskell is so great and well thought out (eg type
| classes, no side effects, higher rank polymorphism, existentials) it seems a
| pity to throw all this away just because of one unfortunate feature

I thought it might be worth mentioning that GHC (well, the HEAD, which will become 6.6) supports "bang patterns". See http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/BangPatterns

Bang patterns make it much more convenient to write a strict function. E.g.

    f (x, !y) = ...

is strict both in the pair (of course) but also in the second component of the pair, y. You can also use them in lets:

    let !x = <rhs> in <body>

which will evaluate <rhs> before <body>.

It's an experimental feature, and I'm interested to know how useful, or otherwise, it turns out to be.

Simon
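A small, compilable illustration of the feature (my own example, not from Simon's message):

    {-# LANGUAGE BangPatterns #-}

    -- strict in the pair and in its second component
    addSnd :: (Int, Int) -> Int -> Int
    addSnd (x, !y) acc = x + y + acc

    -- bang patterns on accumulators avoid building a chain of (+) thunks
    mean :: [Double] -> Double
    mean = go 0 (0 :: Int)
      where
        go !s !n []     = s / fromIntegral n
        go !s !n (v:vs) = go (s + v) (n + 1) vs

The strict let form from the message would be written the same way, e.g. let !s' = s + v in go s' (n + 1) vs.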

On Thu, Jun 22, 2006 at 09:22:34AM +0100, Simon Peyton-Jones wrote:
http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/BangPatterns
Bang patterns make it much more convenient to write a strict function. E.g f (x, !y) = ... is strict both in the pair (of course) but also in the second component of the pair, y.
i am ecstatic to hear that :). if it really means that 'y' will be fully evaluated (not just to top-level normal form, but whatever the name for that is - the way OCaml evaluates expressions), it's something i have been missing so much that i was thinking of switching back to a strict language again. will upgrade as soon as i can, thanks! m.

hi,
2006/6/22, Simon Peyton-Jones
It's an experimental feature, and I'm interested to know how useful, or otherwise, it turns out to be.
Couldn't you predict it (both in terms of the programmers and the compiler writers)? thanks, vo minh thu

I'm afraid the _meta_ programming aspect of the image processing project may be overlooked. Joel Reymont wrote:
I think the issue wasn't using functional programming for large image processing, it was using Haskell. OCaml is notoriously fast and strict. Haskell/GHC is... lazy.
Well, in the raster image processing project, a dialect of OCaml was used to _generate_ code. If the authors used offshoring (which I think they did later), the generated code was in C. That C code can be used stand-alone or be linked with some other C (Java, etc) code. Our FFT paper did exactly that: our MetaOCaml program produced C code, which we plugged as-is into the FFTW testing framework (written in pure C) for benchmarking. By 'plugged' above I mean moving the C code file from one directory to another. We used both GCC and Intel C compilers for benchmarking. Offshoring can also produce Fortran code. It's quite feasible to generate Verilog, so we could program an FPGA and get image processing even faster.

The generator doesn't have to be a speed demon; it is not that relevant whether the generator is lazy or strict. What really matters is whether the generator can easily be judged correct. It matters immensely that the generated code has correctness properties; at the least, it should be well-formed and well-typed (and preferably have some other properties, like space bounds). It is disheartening to see errors when compiling the generated code (as happens with some other generators), because it is very difficult to trace these errors back to the generator.

Here's a reference to another, quite large and very real project, http://www.spiral.net/ , which generates the fastest ever FFT, DCT, etc. codes for a variety of architectures. The source language is a DSL for linear algebra. The point is that highest-performance computing nowadays is all about code generation/meta-programming. And in this area, functional programming and Haskell have a definite advantage.

Usually once a year somebody starts a discussion on the merits of the functional/lazy paradigms, etc., in an applicative context, and it is quite good. People compare Haskell and OCaml; from time to time somebody says that - apparently - Clean has better handling of strictness issues [saying at the same time that he/she doesn't use Clean...]; people divide into different philosophical branches; some complain that it would be nice to have a strict Haskell; others say that they don't care, and that what is important is the provable/enforced correctness, and laziness is helpful. People say that laziness permits one to consider the conditionals as functions, others ask what for, and the discussion is usually quite interesting.

And here apparently I am one of the rare people - I am not proud of it, rather quite sad - who defends laziness as an *algorithmisation tool*, which makes it easy and elegant to construct co-recursive codes: circular programs, run-away infinite streams, hidden backtracking etc. In the domain of images this can be helpful for making filters, especially infinite-response, recursive filters. For two-dimensional data this might be clumsy, but for 1D, for example for sound generation/processing, you may transform a recurrential equation yielding Y out of X:

    Y[n+1] = a*X[N+1] + b*Y[n]

usually (imperatively) implemented as a loop, into a stream definition:

    filtr a b x@(x0:xq) = y
      where y  = (x0:yq)
            yq = a*xq + b*y

with (*) and (+) conveniently overloaded (or replaced by specific obvious ops). In such a way you can program in 2 - 6 lines some quite exquisite musical instruments (for example the Karplus-Strong "guitar", or a flute), construct reverberation filters, make ever-rising Shepard/Risset paradoxical sounds, etc. etc. With laziness it is a sheer pleasure and fun; without, a pain. If you wish, find my PADL talk on it...

In this context, I found Clean more helpful than Haskell, for ONE reason. Clean has a primitive datatype: unboxed, spine-lazy but head-strict lists. The co-recursion works, as the construction of the tail is postponed, but there is no pollution of the space by thunks - unevaluated list *elements*. This I really do miss in Haskell... But perhaps I simply don't know how to obtain a similar behaviour?

For image processing (rather: constructing, but with incremental algorithms) the Clean usage of unique arrays was for me more natural than the monadic stuff in Haskell, but this is probably just a question of style. I agree that here the laziness is secondary...

Jerzy Karczmarczuk

jerzy.karczmarczuk@info.unicaen.fr wrote: [snip]
you may transform a recurrential equation yielding Y out of X: Y[n+1] = a*X[N+1] + b*Y[n] usually (imperatively) implemented as a loop, into a stream definition: filtr a b x@(x0:xq) = y where y = (x0:yq) yq = a*xq + b*y
Can you explain how this transformation was accomplished? I don't see how

    yq = a * xq + b * y

relates to

    Y[n+1] = a*X[n+1] + b*Y[n]    -- (assuming the X[N+1] was a typo)

since y is a longer list than yq, but Y[n] is an earlier element than Y[n+1], so it seems that the function is multiplying b by a later factor than it should. So:

1) Someone reading the code needs to do a lot of work to try to recover the original equation.
2) Wouldn't an imperative loop, using the original equation directly, have made everything much simpler?
3) Therefore laziness has led to obfuscated code.
with (*) and (+) conveniently overloaded (or replaced by specific obvious ops).
In such a way you can program in 2 - 6 lines some quite exquisite musical instruments (for example the Karplus-Strong "guitar", or a flute), construct the reverberation filters, make ever-rising Shepard/Risset paradoxical sounds, etc. etc. With laziness it is a sheer pleasure and fun, without - a pain. If you wish, find my PADL talk on it...
In this context, I found Clean more helpful than Haskell, for ONE reason. Clean has a primitive datatype: unboxed, spine-lazy but head-strict lists. The co-recursion works, as the construction of the tail is postponed, but there is no pollution of the space by thunks - unevaluated list *elements*. This I really do miss in Haskell... But perhaps I simply don't know how to obtain a similar behaviour?
If you only needed the head-strict aspect, something like

    data HSList a = Empty | Cons !a (HSList a)

(GHC also has unboxed types, so perhaps something like

    data HSList = Empty | Cons Double# HSList

but see the restrictions on their use at http://www.haskell.org/ghc/docs/latest/html/users_guide/primitives.html )

Regards, Brian.

-- Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com

Brian Hulley wrote:
jerzy.karczmarczuk@info.unicaen.fr wrote:
you may transform a recurrential equation yielding Y out of X: Y[n+1] = a*X[n+1] + b*Y[n] usually (imperatively) implemented as a loop, into a stream definition: ... Can you explain how this transformation was accomplished? I don't see how yq = a * xq + b * y relates to Y[n+1] = a*X[n+1] + b*Y[n] -- (assuming the X[N+1] was a typo)
since y is a longer list than yq but Y[n] is an earlier element than Y[n+1], so it seems that the function is multiplying b by a later factor than it should.
Sure, y[n] is earlier, so defining the tail by the stream itself refers each element to its predecessor. No problemo, Baby, as Terminator 2 used to say - whose relevance to our problem is obvious, since he also couldn't terminate him(it)self...

Let's write the stream construction once more. Starting with x=(x0:xq) and parameters a and b, assuming that for n<0 you take zero:

    y  = (a*x0:yq)    -- Here I was in error, "a" missing
    yq = a*xq + b*y

with, of course, a*xq meaning map (a*) xq, and x+y meaning zipWith (+) x y.

    y0 = a*x0

Look yourself:

    y1 = a*x1 + b*y0
    y2 = a*x2 + b*y1, etc.

So, first, this is correct, element by element. Don't tell me you have never seen the assembly of all non-negative integers as an infinite list thereof:

    integs = 0 : (integs + ones) where ones = 1:ones

it is quite similar (conceptually).

y IS NOT a longer list than yq, since co-recursive equations without limiting cases apply only to *infinite* streams. Obviously, the consumer of such a stream will generate a finite segment only, but that is his/her/its problem, not that of the producer.
So: 1) Someone reading the code needs to do a lot of work to try to recover the original equation 2) Wouldn't an imperative loop, using the original equation directly, have made everything much simpler? 3) Therefore laziness has led to obfuscated code.
1. Such codes are not meant to be thrown at an unprepared audience. Either the reader knows something about stream processing, or some introduction is necessary.

2. Are we speaking about assembly-style, imperative programming, or about functional style? Please write your loop in Haskell or any other fun. language, and share with us its elegance. Please note that for stream-oriented applications, such as the sound processing I mentioned, this "n" index has no meaning whatsoever; it just denotes a position within the stream. Index-less style is more compact, with fewer redundant entities.

3. Full disagreement. There is NOTHING obfuscated here; on the contrary, the full semantics is in front of your eyes, it requires only some reasoning in terms of infinite lists. See point (1).

Thanks. Jerzy Karczmarczuk
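For reference, a directly runnable form of the corrected definition, with the overloaded (*) and (+) written out as map and zipWith (my expansion of the above, not Jerzy's original code):

    filtr :: Num a => a -> a -> [a] -> [a]
    filtr a b (x0:xq) = y
      where
        y  = a * x0 : yq
        yq = zipWith (+) (map (a *) xq) (map (b *) y)
    filtr _ _ [] = []

    -- e.g. the impulse response of the filter:
    --   take 5 (filtr 0.5 0.5 (1 : repeat 0))
    --     == [0.5, 0.25, 0.125, 0.0625, 0.03125]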

Jerzy Karczmarczuk wrote:
Brian Hulley wrote:
jerzy.karczmarczuk@info.unicaen.fr wrote:
you may transform a recurrential equation yielding Y out of X: Y[n+1] = a*X[n+1] + b*Y[n] usually (imperatively) implemented as a loop, into a stream definition: ... Can you explain how this transformation was accomplished?
y = (a*x0:yq) -- Here I was in error, "a" missing yq = a*xq + b*y
with, of course, a*xq meaning map (a*) xq, and x+y meaning zipWith (+) x y
y0 = a*x0 Look yourself: y1 = a*x1 + b*y0 y2 = a*x2 + b*y1, etc. So, first, this is correct, element by element.
Don't tell me you have never seen the assembly of all non-negative integers as an infinite list thereof:
integs = 0 : (integs + ones) where ones = 1:ones
it is quite similar (conceptually).
Thanks for the explanation.
y IS NOT a longer list than yq, since co-recursive equations without limiting cases, apply only to *infinite* streams. Obviously, the consumer of such a stream will generate a finite segment only, but it is his/her/its problem, not that of the producer.
I still don't understand this point, since y = (a*x0 : yq) so surely by induction on the length of yq, y has 1 more element?
2. Are we speaking about assembly-style, imperative programming, or about functional style? Please write your loop in Haskell or any other fun. language, and share with us its elegance.
Starting with:

    Y[n+1] = a*X[n+1] + b*Y[n]

    filtr a b (x0:xs) = y0 : filtr' xs y0
      where
        y0 = a * x0
        filtr' (x_n1 : xs) y_n = y_n1 : filtr' xs y_n1
          where y_n1 = a * x_n1 + b * y_n

In a sense I'm still using a lazy stream, but the difference is that I'm not making any use of the "evaluate once then store for next time" aspect of laziness, so the above code could be translated directly to use the force/delay streams I mentioned before. Also, the original formula now appears directly in the definition of filtr', so it can be understood without an initiation into stream processing. (Piotr - I see my original wording "imperative loop" was misleading - in the context of functional programming I tend to just use this to mean a simple recursive function)
3. Full disagreement. There is NOTHING obfuscated here, on the contrary, the full semantics is in front of your eyes, it requires only some reasoning in terms of infinite lists. See point (1).
The same could be said for all obfuscated code: it's always fully visible but just requires reasoning to understand it! :-) Still I suppose perhaps the word "obfuscated" was a bit strong and certainly with your explanation, which you mentioned as a prerequisite in answer to my point 1), I now understand your original code also, but not without making some effort. Best regards, Brian. -- Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com

2006/6/22, Brian Hulley
Jerzy Karczmarczuk wrote:
Brian Hulley wrote: [snip] y IS NOT a longer list than yq, since co-recursive equations without limiting cases, apply only to *infinite* streams. Obviously, the consumer of such a stream will generate a finite segment only, but it is his/her/its problem, not that of the producer.
I still don't understand this point, since y = (a*x0 : yq) so surely by induction on the length of yq, y has 1 more element?
y and yq are infinite... mt

minh thu wrote:
2006/6/22, Brian Hulley
: Jerzy Karczmarczuk wrote:
Brian Hulley wrote: [snip] y IS NOT a longer list than yq, since co-recursive equations without limiting cases, apply only to *infinite* streams. Obviously, the consumer of such a stream will generate a finite segment only, but it is his/her/its problem, not that of the producer.
I still don't understand this point, since y = (a*x0 : yq) so surely by induction on the length of yq, y has 1 more element?
y and yq are infinite...
But how does this change the fact that y still has 1 more element than yq? yq is after all, not a circular list. I don't see why induction can't just be applied infinitely to prove this. Regards, Brian. -- Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com

On 2006-06-22 at 15:16BST "Brian Hulley" wrote:
minh thu wrote:
y and yq are infinite...
But how does this change the fact that y still has 1 more element than yq? yq is after all, not a circular list.
infinity+1 = infinity
I don't see why induction can't just be applied infinitely to prove this.
because (ordinary) induction won't go that far. -- Jón Fairbairn Jon.Fairbairn at cl.cam.ac.uk

Jon Fairbairn wrote:
On 2006-06-22 at 15:16BST "Brian Hulley" wrote:
minh thu wrote:
y and yq are infinite...
But how does this change the fact that y still has 1 more element than yq? yq is after all, not a circular list.
infinity+1 = infinity
Surely this is just a mathematical convention, not reality! :-)
I don't see why induction can't just be applied infinitely to prove this.
because (ordinary) induction won't go that far.
I wonder why? For any finite list yq, |y| == |yq| + 1 So considering any member yq (and corresponding y) of the set of all finite lists, |y| == |yq| + 1 Couldn't an infinite list just be regarded as the maximum element of the (infinite) set of all finite lists? Regards, Brian. -- Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com

On 2006-06-22 at 15:45BST "Brian Hulley" wrote:
Jon Fairbairn wrote:
infinity+1 = infinity
Surely this is just a mathematical convention, not reality! :-)
I'm not sure how to answer that. The only equality worth talking about on numbers (and lists) is the mathematical one, and it's a mathematical truth, not a convention.
I don't see why induction can't just be applied infinitely to prove this.
because (ordinary) induction won't go that far.
I wonder why? For any finite list yq, |y| == |yq| + 1 So considering any member yq (and corresponding y) of the set of all finite lists, |y| == |yq| + 1
But the infinite lists /aren't/ members of that set. For infinite lists the arithmetic is different. |y| == |yq| +1 == |yq| If you don't use the appropriate arithmetic, your logic will eventually blow up.
Couldn't an infinite list just be regarded as the maximum element of the (infinite) set of all finite lists?
It can be, but that doesn't get it into the set. -- Jón Fairbairn Jon.Fairbairn at cl.cam.ac.uk

Brian Hulley wrote:
Couldn't an infinite list just be regarded as the maximum element of the (infinite) set of all finite lists?
Brian, I will say something you might find acrimonious and impolite, but it is serious, you might find this in some philosophical works. If you are right, then YOU JUST PROVED THE EXISTENCE OF GOD. ========================================= More seriously... Perhaps you (and possibly Piotr Kalinowski) would look up some materials on intuitionism in mathematics, on the constructive theory of sets, etc. Jerzy Karczmarczuk

Actually Brian's intuition is right on target. One way to define an infinite list is as the limit of an infinite chain of partial lists (which, in domain theory, is essentially how all elements are defined). A chain is a sequence a1 <= a2 <= ... <= an, where <= is the domain ordering. A partial list is any list ending in _|_ (i.e. bottom). So, for example, the infinite list of ones can be defined as the limit of the following chain:

    _|_  <=  1 : _|_  <=  1 : 1 : _|_  <=  1 : 1 : 1 : _|_  <=  ...

To verify that this is a chain, remember that (:) is right associative, and _|_ <= x for all x. Or, another way to look at this, is that the infinite list is the LUB (least upper bound) of the infinite set of all of these partial (but finite) lists. That explanation corresponds most closely with Brian's intuition.

If anyone thinks that this explanation is baroque, I should also point out that in a pragmatic sense this idea forms the basis for doing inductive proofs on programs generating infinite lists (as described in my book, as well as in many other sources; a small sketch follows the quoted text below).

-Paul

jerzy.karczmarczuk@info.unicaen.fr wrote:
Brian Hulley wrote:
Couldn't an infinite list just be regarded as the maximum element of the (infinite) set of all finite lists?
Brian, I will say something you might find acrimonious and impolite, but it is serious, you might find this in some philosophical works. If you are right, then YOU JUST PROVED THE EXISTENCE OF GOD. ========================================= More seriously... Perhaps you (and possibly Piotr Kalinowski) would look up some materials on intuitionism in mathematics, on the constructive theory of sets, etc.
Jerzy Karczmarczuk
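Following up on Paul's point about inductive proofs over infinite lists, here is a sketch of the standard tool (the "approximation lemma"; my illustration, not taken from the message or the book):

    -- approx n xs is the n-th partial list in the chain approximating xs;
    -- two lists are equal iff (approx n) agrees on them for every n.
    approx :: Int -> [a] -> [a]
    approx n _ | n <= 0 = undefined        -- plays the role of _|_
    approx _ []         = []
    approx n (x:xs)     = x : approx (n - 1) xs

    ones :: [Int]
    ones = 1 : ones
    -- approx 3 ones = 1 : 1 : 1 : undefined, one of the partial lists in the chain above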

Paul Hudak wrote:
Actually Brian's intuition is right on target. One way to define an infinite list is as the limit of an infinite chain of partial lists (which, in domain theory, is essentially how all elements are defined).
as the answer to Brian Hulley conjecture criticized by myself:
Couldn't an infinite list just be regarded as the maximum element of the (infinite) set of all finite lists?
If you are right, then YOU JUST PROVED THE EXISTENCE OF GOD.
Perhaps his intuition is right, but there are fundamental differences:

A. Between the chain of partial lists and the set of finite lists.
B. Between a limit and the maximum element of a set.

OK, I think that this subject has matured enough to rest in peace... Jerzy Karczmarczuk

Jerzy Karczmarczuk wrote:
OK, I think that this subject matured enough to rest in peace...
I would have to agree with that, although...
Couldn't an infinite list just be regarded as the maximum element of the (infinite) set of all finite lists?
Perhaps his intuition is right, but there are fundamental differences - A. Between the chain of partial lists and the set of finite lists
Well, each partial list is finite. Of course it isn't the set of ALL finite lists, but it is the set of all those finite lists that approximate the given infinite list.
B. Between a limit and the maximum element of a set.
But the limit of a chain IS the maximal element of the set of all elements comprising the chain, since the LUB, in the case of a chain, is unique, and thus we don't have to worry about choosing the "least" element (i.e. it reduces to the upper bound, or maximal element). So I'd say that Brian has at least come close to discovering God :-) -Paul

On Fri, 2006-06-23 at 09:38 -0400, Paul Hudak wrote: . . .
But the limit of a chain IS the maximal element of the set of all elements comprising the chain, since the LUB, in the case of a chain, is unique, and thus we don't have to worry about choosing the "least" element (i.e. it reduces to the upper bound, or maximal element).
There must be something additional going on, since in general the fact that an ordered subset of a set has a LUB in the set does not imply that the LUB is in the subset. For example, the subset {1 - 1/n : n in N+} of Q has LUB = 1, but 1 is not an element of the subset. It would seem that while the infinite list is the LUB of the chain of finite lists, it is not itself a member of the chain of finite lists. So, what am I missing? -- Bill Wood

Bill Wood wrote:
On Fri, 2006-06-23 at 09:38 -0400, Paul Hudak wrote:
But the limit of a chain IS the maximal element of the set of all elements comprising the chain, since the LUB, in the case of a chain, is unique, and thus we don't have to worry about choosing the "least" element (i.e. it reduces to the upper bound, or maximal element).
There must be something additional going on, since in general the fact that an ordered subset of a set has a LUB in the set does not imply that the LUB is in the subset. For example, the subset {1 - 1/n : n in N+} of Q has LUB = 1, but 1 is not an element of the subset. It would seem that while the infinite list is the LUB of the chain of finite lists, it is not itself a member of the chain of finite lists. So, what am I missing?
I don't think you missed anything, just provided more detail than I thought was needed. As the LUB is indeed an infinite object, it is not in the set of finite, partial lists. But that's pretty common in domain theory, and is analogous to the example that you gave. -Paul

"Brian Hulley"
But how does this change the fact that y still has 1 more element than yq? yq is after all, not a circular list.
infinity+1 = infinity
Surely this is just a mathematical convention, not reality! :-)
Not even that. Infinity isn't a number, and it doesn't really make sense to add one to it.
Couldn't an infinite list just be regarded as the maximum element of the (infinite) set of all finite lists?
The point of infinity is that there is no maximum :-) -k -- If I haven't seen further, it is by standing in the footprints of giants

On Thu, 2006-06-22 at 15:16 +0100, Brian Hulley wrote: . . .
But how does this change the fact that y still has 1 more element than yq? yq is after all, not a circular list. I don't see why induction can't just be applied infinitely to prove this.
The set of all non-negative integers has "one more element" than the set of all positive integers, however they have the same cardinality, aleph-null. This phenomenon is the hallmark of infinite sets. -- Bill Wood

Bill Wood wrote:
On Thu, 2006-06-22 at 15:16 +0100, Brian Hulley wrote: . . .
But how does this change the fact that y still has 1 more element than yq? yq is after all, not a circular list. I don't see why induction can't just be applied infinitely to prove this.
The set of all non-negative integers has "one more element" than the set of all positive integers, however they have the same cardinality, aleph-null. This phenomenon is the hallmark of infinite sets.
Therefore the list of non-negative integers is longer than the list of positive integers. I agree they have the same cardinality but this doesn't mean they have the same length. Regards, Brian. -- Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com

On Thu, Jun 22, 2006 at 03:32:25PM +0100, Brian Hulley wrote:
Bill Wood wrote:
On Thu, 2006-06-22 at 15:16 +0100, Brian Hulley wrote: . . .
But how does this change the fact that y still has 1 more element than yq? yq is after all, not a circular list. I don't see why induction can't just be applied infinitely to prove this.
The set of all non-negative integers has "one more element" than the set of all positive integers, however they have the same cardinality, aleph-null. This phenomenon is the hallmark of infinite sets.
Therefore the list of non-negative integers is longer than the list of positive integers. I agree they have the same cardinality but this doesn't mean they have the same length.
Are you saying that some of the (0,1,2,3,4,5,...), (1,2,3,4,5,...) and (1-1,2-1,3-1,4-1,5-1,...) lists have different lengths?

Therefore the list of non-negative integers is longer than the list of positive integers. I agree they have the same cardinality but this doesn't mean they have the same length.
Are you saying that some of the (0,1,2,3,4,5,...), (1,2,3,4,5,...) and (1-1,2-1,3-1,4-1,5-1,...) lists have different lengths?
Q: Which list is longer, [0..] or [1..] ?

A: MU! (see http://en.wikipedia.org/wiki/Mu_%28negative%29 ) I am un-asking the question. They don't have length. Length only makes sense for lists with [] in them, and infinite lists do not use [].

Jared.

P.S. If you still don't believe me, this code should put this mystery to rest:

    length2 x y = f 0 0 x y
      where f a b [] []         = (a, b)
            f a b [] (y:ys)     = f a (b+1) [] ys
            f a b (x:xs) []     = f (a+1) b xs []
            f a b (x:xs) (y:ys) = f (a+1) (b+1) xs ys

    length2 [0..] [1..]

Feel free to get back to us with the results!

-- http://www.updike.org/~jared/ reverse ")-:"

Stepan Golosunov wrote:
On Thu, Jun 22, 2006 at 03:32:25PM +0100, Brian Hulley wrote:
Bill Wood wrote:
On Thu, 2006-06-22 at 15:16 +0100, Brian Hulley wrote: . . .
But how does this change the fact that y still has 1 more element than yq? yq is after all, not a circular list. I don't see why induction can't just be applied infinitely to prove this.
The set of all non-negative integers has "one more element" than the set of all positive integers, however they have the same cardinality, aleph-null. This phenomenon is the hallmark of infinite sets.
Therefore the list of non-negative integers is longer than the list of positive integers. I agree they have the same cardinality but this doesn't mean they have the same length.
Are you saying that some of the (0,1,2,3,4,5,...), (1,2,3,4,5,...) and (1-1,2-1,3-1,4-1,5-1,...) lists have different lengths?
I'd say that the list [0,1,2,3,4,5,..] is longer than [1,2,3,4,5,..], and that [1-1,2-1,3-1,4-1,5-1,..], ie [0,1,2,3,4,..], is the same length as [1,2,3,4,5,..], assuming that they all grow at the same rate. I think of them as physically growing objects, which at any moment have a finite length, not as some kind of fixed "infinite" structure.

I don't believe in the abstract mathematical notion of infinity. I just see it as a kind of hack that makes conventional mathematics workable to some extent, although as everyone knows there are plenty of contradictions in it. This doesn't mean that these contradictions reflect reality - just that maths hasn't yet reached a true understanding of reality imho. For example, why do people accept that infinity == infinity + 1 ? Surely this expression is just ill-typed. infinity can't be a number. Maths was developed before programming languages, so perhaps the concept of well-typed expressions was not known when infinity was invented.

Still, as others have pointed out in offlist emails, I can't really take this line of reasoning further without providing an alternative system, and to be honest, I don't have one (yet :-) ). If I was going to try to re-write the foundations of mathematics, I'd start by re-reading the book "On the origin of objects" by Brian Cantwell Smith, which blows apart many of the things that logical systems of thought often seem to take for granted about the nature of reality. For example he talks about particularity and individuality, and the idea that something can be a particular entity without being an individual, so (this is just my thought now) perhaps one of the problems with (infinity + 1) is that it is an attempt to add a particular number which is not an individual number, namely infinity, to an individual number, so there is no "individual component" of infinity to add the 1 onto...

In any case, if I just use the subset of Haskell equivalent to strict evaluation with force/delay lists (as in my alternative definition of the filter function), I can reason about my programs without entering the areas of maths that I don't believe in, which is one more reason for my desire to have a totally strict version of Haskell ;-)

Best regards (also thanks to everyone else who replied),
Brian.

-- Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com

On Thu, 2006-06-22 at 20:13 +0100, Brian Hulley wrote: . . .
filter function), I can reason about my programs without entering the areas of maths that I don't believe in, which is one more reason for my desire to have a totally strict version of Haskell ;-)
This may also explain why those who *do* believe in some of those maths resist the move to totally strict Haskell :-). -- Bill Wood

On 22/06/06, Brian Hulley
This doesn't mean that these contradictions reflect reality - just that maths hasn't yet reached a true understanding of reality imho.
Well, I for instance believe that contradiction IS the true nature of reality... ;)
For example, why do people accept that infinity == infinity + 1 ? Surely this expression is just ill-typed. infinity can't be a number.
This equation is just a shortcut, so I can't see how it can be ill-typed. It means something like: if you add one element to an infinite list, will it be longer? I found the explanation in terms of defining a bijection between the two lists very appealing (along with the "metaphor" of taking one element at a time from both lists and never being left with one of the lists empty, which was demonstrated here as well). Seems I don't understand your problem with infinity after all :) Regards, Piotr Kalinowski -- Intelligence is like a river: the deeper it is, the less noise it makes

Piotr Kalinowski wrote:
On 22/06/06, Brian Hulley
wrote: ... This doesn't mean that these contradictions reflect reality - just that maths hasn't yet reached a true understanding of reality imho.
Well, I for instance believe that contradiction IS the true nature of reality... ;)
Perhaps I should have said: are the *particular* contradictions found in maths relevant to reality? (Obviously they are in the sense that maths as an artifact of human endeavour is just as much part of reality as anything else and reflects aspects of the current and past human condition in the way it's formulated, and in turn possibly influences how we perceive reality...)
For example, why do people accept that infinity == infinity + 1 ? Surely this expression is just ill-typed. infinity can't be a number.
This equation is just a shortcut, so I can't see how can it be ill-typed. It means something like: if you add one element to an infinite list, will it be longer?
What does your intuition say about this?
I found the explanation in terms of defining bijection between both lists very appealing (along with a "metaphor" of taking one element at a time from both lists and never being left with one of the lists empty, which was demonstrated here as well).
But this explanation might just be vapid sophistry. Do you *really* want to trust it? Especially when the explanation makes use of the physical notion of "taking one element at a time from both lists". Can we really rely on intuitions about taking an element from an infinite list when we are trying to prove something counter-intuitive about adding an element to an infinite list?
Seems I don't understand your problem with infinity after all :)
Just that there is a conflict with intuition no matter which option you choose: if I think that the list would be longer, I have to reject any proof to the contrary, but then my intuitions about valid proof are confounded, whereas if I accept the proof, my intuition about physical objects is confounded: if the list doesn't get longer, then where *is* the thing I added to it? Did it just disappear? So for these reasons I find that infinity is a troublesome concept. Regards, Brian. -- Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com

On 23/06/06, Brian Hulley
This equation is just a shortcut, so I can't see how can it be ill-typed. It means something like: if you add one element to an infinite list, will it be longer?
What does your intuition say about this?
It won't be longer. How can it be? It's already infinite ;) It's like throwing things into a bottomless hole and expecting it to get more "full."
But this explanation might just be vapid sophistry. Do you *really* want to trust it?
I perceive it as a way to explain to beginner students where the bijection idea comes from. That's all it means to me. I suppose the whole idea is to start from something intuitive and then extend it to the completely counter-intuitive notion of being infinite.
Just that there is a conflict with intuition no matter which option you choose: if I think that the list would be longer, I have to reject any proof to the contrary, but then my intuitions about valid proof are confounded, whereas if I accept the proof, my intuition about physical objects is confounded: if the list doesn't get longer, then where *is* the thing I added to it? Did it just disappear? So for these reasons I find that infinity is a troublesome concept.
I suppose infinity can't be totally intuitive in the end. We are not used to handling infinite objects, and intuition as such was not developed to handle them. Regards, Piotr Kalinowski -- Intelligence is like a river: the deeper it is, the less noise it makes

Brian Hulley wrote:
Piotr Kalinowski wrote:
On 22/06/06, Brian Hulley
wrote: For example, why do people accept that infinity == infinity + 1 ? Surely this expression is just ill-typed. infinity can't be a number.
This equation is just a shortcut, so I can't see how it can be ill-typed. It means something like: if you add one element to an infinite list, will it be longer?
What does your intuition say about this?
Well, Archimedes thought: "..in the palimpsest we found Archimedes doing just that. He compared two infinitely large sets and stated that they have an equal number of members. No other extant source for Greek mathematics has that." http://news-service.stanford.edu/news/2002/november6/archimedes-116.html -- Chris

2006/6/22, Brian Hulley
minh thu wrote:
2006/6/22, Brian Hulley
: Jerzy Karczmarczuk wrote:
Brian Hulley wrote: [snip] y IS NOT a longer list than yq, since co-recursive equations without limiting cases, apply only to *infinite* streams. Obviously, the consumer of such a stream will generate a finite segment only, but it is his/her/its problem, not that of the producer.
I still don't understand this point, since y = (a*x0 : yq) so surely by induction on the length of yq, y has 1 more element?
y and yq are infinite...
But how does this change the fact that y still has 1 more element than yq? yq is, after all, not a circular list. I don't see why induction can't just be applied infinitely to prove this.
Maybe I'm wrong, anyway: induction can be used to prove a property; we claim that the property is true for any finite i. So what's the property that you want to prove by induction? You say 'by induction on the length of yq'.. but yq is just y (modulo the "a*xq + b*"). It's exactly the same with ones = 1:ones. Is the left "ones" longer than the right one? mt
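To make the comparison concrete, here is that definition as a runnable snippet (the evaluation comments are only illustrations, not quoted from anyone's message):

ones :: [Integer]
ones = 1 : ones

-- Any finite prefix is observable:
--   take 5 ones        ==> [1,1,1,1,1]
--   take 5 (1 : ones)  ==> [1,1,1,1,1]
-- but asking for a length never terminates:
--   length ones        ==> <<loops>>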

minh thu wrote:
Maybe I'm wrong, anyway: induction can be used to prove a property; we claim that the property is true for any finite i. So what's the property that you want to prove by induction? You say 'by induction on the length of yq'.. but yq is just y (modulo the "a*xq + b*").
It's exactly the same with ones = 1:ones
Is the left "ones" longer than the right one?
Thanks for pointing this out - I'd somehow missed the fact that yq was also defined in terms of y, so of course the idea of length becomes meaningless in this context (at least as far as trying to compare the length of yq with that of y) Best regards, Brian. -- Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com

On Jun 22, 2006, at 10:16 AM, Brian Hulley wrote:
minh thu wrote:
2006/6/22, Brian Hulley
: Jerzy Karczmarczuk wrote:
Brian Hulley wrote: [snip] y IS NOT a longer list than yq, since co-recursive equations without limiting cases, apply only to *infinite* streams. Obviously, the consumer of such a stream will generate a finite segment only, but it is his/her/its problem, not that of the producer. I still don't understand this point, since y = (a*x0 : yq) so surely by induction on the length of yq, y has 1 more element? y and yq are infinite...
But how does this change the fact that y still has 1 more element than yq? yq is after all, not a circular list. I don't see why induction can't just be applied infinitely to prove this.
Induction doesn't apply to co-inductive objects, such as infinite lists AKA streams. In particular, the "length" of an infinite list is undefined, much as the "size" of an infinite set is undefined. The only thing you can discuss, a la Cantor, is cardinality. In both cases, as mentioned by another poster, it is aleph-null. <aside> Every few months a discussion arises about induction and Haskell datatypes, and I feel compelled to trot out this oft-misunderstood fact about Haskell: 'data' declarations in Haskell introduce co-inductive definitions, NOT inductive ones. Induction, in general, does not apply to ADTs defined in Haskell; this is in contrast to similar-looking definitions in, e.g., ML. This is a common source of confusion, especially for mathematically-inclined persons new to Haskell. Does anyone know of a good reference which clearly explains the difference and its ramifications? I've never been able to find a paper on the topic that doesn't dive head-first into complicated category theory (which I usually can't follow) ... </aside> Rob Dockins Speak softly and drive a Sherman tank. Laugh hard; it's a long way to the bank. -- TMBG
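A small illustration of the point (my own sketch, not from Rob's message): a definition that looks inductive nonetheless has an infinite inhabitant in Haskell, so reasoning that covers only the finite, total values misses part of the type.

data Nat = Zero | Succ Nat

-- In a strict language this type contains only the finite naturals;
-- in Haskell the following productive definition is also an inhabitant:
infinity :: Nat
infinity = Succ infinity

-- A function defined by structural recursion diverges on it ...
toInt :: Nat -> Integer
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n

-- ... while a function that inspects only a finite part is fine:
isPositive :: Nat -> Bool
isPositive Zero     = False
isPositive (Succ _) = True
-- isPositive infinity ==> True; toInt infinity never terminates.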

On Thu, Jun 22, 2006 at 11:06:38AM -0400, Robert Dockins wrote:
<aside> Every few months a discussion arises about induction and Haskell datatypes, and I feel compelled to trot out this oft-misunderstood fact about Haskell: 'data' declarations in Haskell introduce co- inductive definitions, NOT inductive ones. Induction, in general, does not apply to ADTs defined in Haskell; this is in contrast to similar-looking definitions in, eg, ML. This is a common source of confusion, especially for mathematically-inclined persons new to Haskell. Does anyone know of a good reference which clearly explains the difference and its ramifications? I've never been able to find a paper on the topic that doesn't dive head-first into complicated category theory (which I usually can't follow) ... </aside>
I think that's untrue, from a certain point of view. A convenient semantics for Haskell data types includes values for non-termination (_|_, or "bottom"), partial values (containing _|_) and infinite values, with a termination ordering -- a complete partial order (cpo for short). The infinite values are needed for the "complete" part: they arise as the limits of ascending chains of partial values. (The semantics of ML types has extra values too, but in a different place: the partial functions in the -> type.) You can do induction over Haskell data types, as long as your predicate is well-behaved on limits (which conjunctions of equations are), and also satisfied by _|_. There's a good introduction to this sort of reasoning in "Introduction to Functional Programming using Haskell" by Bird (or the first edition by Bird and Wadler). It works because Haskell 'data' definitions yield both an initial fixed point (with respect to strict functions) and a terminal fixed point (with respect to arbitrary functions), and moreover these are usually the same. The former is inductive, the latter co-inductive. They differ only when the definition is strict in the recursive type, as in data Nat = Zero | Succ !Nat The initial fixed point is the natural numbers plus _|_. The terminal fixed point has those elements plus an infinity. The former corresponds to what Haskell provides. So actually Haskell data types are always inductive, and usually also co-inductive.
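For what it's worth, the proof technique in Bird's book that makes this concrete is the approximation lemma; a rough sketch from memory (so treat the details with care):

approx :: Integer -> [a] -> [a]
approx n _ | n <= 0 = undefined          -- the bottom approximation
approx _ []         = []
approx n (x:xs)     = x : approx (n-1) xs

-- Two (possibly partial or infinite) lists are equal iff their
-- approximations agree for every n, so an equation between infinite
-- lists reduces to ordinary induction on the natural number n.
-- For instance, with ones = 1 : ones, one shows that
-- approx n (1 : ones) == approx n ones for all n, which is the precise
-- sense in which consing another 1 onto ones changes nothing.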

On Thu, Jun 22, 2006 at 05:44:58PM +0100, I wrote:
It works because Haskell 'data' definitions yield both an initial fixed point (with respect to strict functions) and a terminal fixed point (with respect to arbitrary functions), and moreover these are usually the same. The former is inductive, the latter co-inductive. They differ only when the definition is strict in the recursive type, as in
data Nat = Zero | Succ !Nat
On second thoughts, I think the fixed points coincide in these cases too.

hi !
2006/6/22, Brian Hulley
So:
1) Someone reading the code needs to do a lot of work to try to recover the original equation.
2) Wouldn't an imperative loop, using the original equation directly, have made everything much simpler?
3) Therefore laziness has led to obfuscated code.
The way I see it:
1/ But why do you call it the 'original equation'? The original thing is expressed informally in your head, then turned into a formula or an algorithm. Here I find the first one-liner really readable (although I think it misses Y[0] = X[0]). But the loop is really readable too, for an imperative-enabled mind.
2/ For me, a list or a loop is quite the same thing (in this case) (although a loop is more general, since you can use the base index in weird ways).
3/ See 1/ and 2/.
Jerzy: do you know a way to express such a computation but with elements depending on time (in about the same way as languages such as Esterel), i.e. depending on IO? Paul Hudak uses a Channel in his book Haskell SOE... but is there another way? Thanks, vo minh thu

minh thu wrote: /about the stream algorithms, lazy style/
do you know a way to express such a computation but with elements depending on time (in about the same way as languages such as Esterel), i.e. depending on IO? Paul Hudak uses a Channel in his book Haskell SOE... but is there another way?
Frankly, I don't see it well. The co-recursive, lazy constructions are based on the fact that it is the consumer who unrolls the list and forces the reduction. So the unevaluated closures should somehow contain the latent data, or rather refer to them. Synchronous processing is another story; here the producer is really active. Esterel until version 4 demanded that all 'circuits' which handled the data flows be *acyclic*; now it is more flexible, so you can do a lot of "own-tail-eating snakes" with it, but differently. I believe that the ways of producing intricate streams in such languages, or in Lustre, are somehow based on continuation mechanisms. The paper on Esterel, etc.: ftp://ftp-sop.inria.fr/esterel/pub/papers/foundations.pdf gives you an example in Lustre

X[n+1] = U[n+1]*sin(X[n] + S[n+1]-S[n])
S[n+1] = cos(S[n]+U[n+1])

in a form remarkably analogous to what I did:

node Control(U:float) returns X:float
var S:float
let
  X = 0.0 -> U*sin(pre(X)+S-pre(S));
  S = 1.0 -> cos(pre(S)+U);
tel

So, I would say that this is obviously a live domain. The language Signal can oversample, and produce output "faster" than the input, but I have never followed the details. Perhaps some YamPa-ladins who read this list could shed some light on reactive stream processing? They use Arrows, a generalization (and twist, not compatible) of Monads, so there is obviously *some* relation to continuations here... But I am a Perfect Non-Specialist. Jerzy Karczmarczuk
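One low-tech option (my own sketch; not synchronous programming in the Esterel sense, just the lazy-stream-from-IO trick that the SOE Channel example also relies on) is to let the input stream come lazily from IO, e.g. via interact, so that a recurrence of the shape Y[n] = a*X[n] + b*Y[n-1] is driven by input arriving over time:

recFilter :: Double -> Double -> [Double] -> [Double]
recFilter _ _ []      = []
recFilter a b (x0:xq) = y
  where y = a * x0 : zipWith (\x yp -> a * x + b * yp) xq y

-- One sample per line on stdin; each output line can be produced as
-- soon as the corresponding input line is available (modulo buffering).
main :: IO ()
main = interact (unlines . map show . recFilter 0.5 0.5 . map read . lines)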

On Thu, Jun 22, 2006 at 01:24:32PM +0200, Jerzy Karczmarczuk wrote:
I believe that ways of producing intricate streams by such languages or Lustre are somehow based on continuation mechanisms. The paper on Esterel, etc. : ftp://ftp-sop.inria.fr/esterel/pub/papers/foundations.pdf
gives you an example in Lustre
X[n+1] = U[n+1]*sin(X[n] + S[n+1]-S[n])
S[n+1] = cos(S[n]+U[n+1])
in a form remarkably analogous as I did:
node Control(U:float) returns X:float
var S:float
let
  X = 0.0 -> U*sin(pre(X)+S-pre(S));
  S = 1.0 -> cos(pre(S)+U);
tel
For comparison, here's a version using arrows (except that the U stream is shifted forward, so its first value is used):

class ArrowLoop a => ArrowCircuit a where
  delay :: b -> a b b

control :: ArrowCircuit a => a Float Float
control = proc u -> do
  rec let x' = u * sin (x + s' - s)
          s' = cos (s + u)
      x <- delay 0 -< x'
      s <- delay 1 -< s'
  returnA -< x

One can plug in various implementations of ArrowCircuit. For stream processors, delay is just cons, and the computation is equivalent to the infinite list version. Another implementation uses continuations.
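For reference, here is one plausible spelling of that "infinite list version" (my own sketch, with plain lazy lists as the streams, so delay b becomes (b :)):

import Data.List (zipWith4)

controlList :: [Float] -> [Float]
controlList us = xs
  where
    xs  = 0 : xs'                                       -- x <- delay 0 -< x'
    ss  = 1 : ss'                                       -- s <- delay 1 -< s'
    xs' = zipWith4 (\u x s s' -> u * sin (x + s' - s)) us xs ss ss'
    ss' = zipWith  (\s u -> cos (s + u)) ss us

-- e.g. take 5 (controlList (repeat 1.0)) yields the first five outputs.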

jerzy.karczmarczuk@info.unicaen.fr wrote:
apparently - Clean has better handling of strictness issues [saying at the same time that he/she doesn't use Clean...]
Uhm... well... and does it? From what I've heard, Clean has the same mechanism as Haskell, which is the 'seq' primitive. Clean just adds some syntactic sugar to make functions strict in some arguments. If that's the only difference, I'm quite happy with the False-guard idiom (and maybe even more happy with !-patterns).
And here apparently I am one of rare people - I am not proud of it, rather quite sad, who defends laziness as an *algorithmisation tool*, which makes it easy and elegant to construct co-recursive codes. Circular programs, run-away infinite streams, hidden backtracking etc.
And don't forget the Most Unreliable Method to Compute Pi! That would be plain impossible without lazy evaluation. (Nice blend of humor and insight in that paper, by the way.)
In this context, I found Clean more helpful than Haskell, for ONE reason. Clean has a primitive datatype: unboxed, spine-lazy but head-strict lists.
If I understand correctly, you'd get the same in GHC by defining
*> data IntList = Nil | Cons I# IntList
though it is monomorphic, and you'd get the same semantics from
*> data List a = Nil | Cons !a (List a)
Now it is polymorphic and it may even get unpacked. Udo. -- If your life was a horse, you'd have to shoot it.
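A compilable sketch of the two declarations (constructor names altered slightly so both fit in one module; the UNPACK pragma is GHC-specific and only a likely, not guaranteed, optimisation):

data List a  = Nil  | Cons !a (List a)
data IntList = INil | ICons {-# UNPACK #-} !Int IntList

-- Head-strict but spine-lazy: the elements are forced as the cells are
-- built, yet infinite lists remain perfectly usable.
iones :: IntList
iones = ICons 1 iones

takeI :: Int -> IntList -> [Int]
takeI n _ | n <= 0   = []
takeI _ INil         = []
takeI n (ICons x xs) = x : takeI (n-1) xs
-- takeI 3 iones ==> [1,1,1]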

G'day all. Quoting oleg@pobox.com:
Recently Vo Minh Thu wondered if Haskell (or, I generalize, functional programming) can be of much use for computer graphics programming.
As others have pointed out, it's Haskell (and its laziness) that he perceived to be the problem. However, I'd like to take the discussion up one more level.

I hope that Thu doesn't mind me saying this, but off-list, I got a better idea of the program that he was trying to write. He's not interested in writing an image processing system. In that sense, raster images are actually not a central data structure. The scenario is quite typical:
- A decent, ambitious idea about a program or suite of programs that someone would like to write.
- A good idea about how to write something simple to start with, at least as a proof of concept, possibly to be refactored into something more sophisticated later.
- Some thinking, exploratory code or back-of-the-envelope playing around reveals some data structure to be extremely important to get right for the simple system, even though it's likely to play a small, non-central role in the "real" system. There's no obvious, clean, efficient way to implement this data structure in Haskell.

I think that a lot of people here have been in precisely this position. Often there's a good substitute (e.g. I'll wager that a lot of GHC had already been written before FastPackedString became necessary; String turns out to be fine to start things off), but sometimes there isn't.

I have a suspicion that this is related to an old complaint, namely that programmers are trained to think about algorithms and data structures and not so much about APIs. An efficient algorithm or data structure is indeed a fine thing. But for 99.9% of cases, for overall program efficiency, nothing beats a well-designed API. If your program has an efficiency problem, you simply remove the offending code and put in new code which conforms to the same API. As if by magic, everything works much as it did, only more efficiently. This advice is triply important for a data structure or algorithm which may not play a central role in the final program.

Now I don't know if there are any decent references on API design. This gives the general idea, though: http://lcsd05.cs.tamu.edu/slides/keynote.pdf

With my agile programming hat on, however, I'd argue that if you're writing an application (as opposed to an ISO standard), the most important thing is to write something, anything, that WORKS and put it behind a very thin API layer to start with, even if it's a quick and dirty choice of implementation. Both the implementation and the API can be changed as they need to, and simply having the API there (even if it's bad) will encourage you to modify said API instead of digging into the innards of what you're trying to protect. Eventually, you'll find the true set of operations that you need.

(If you are writing an ISO standard, of course, then you need to take more care. The "rule of threes" states that a) you need to examine at least three systems to find out what you need in the API, and b) you need to use the API at least three times to ensure that it's reusable. Nobody claimed this stuff was easy.)
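Just to illustrate the "thin API layer" point for the raster-image case (every name below is invented for the example; the quick-and-dirty representation is the whole point, and it can be swapped out later without touching callers):

module Image
  ( Image          -- abstract: the constructor is not exported
  , imageWidth
  , imageHeight
  , pixelAt
  , fromPixels
  ) where

import Data.Word (Word8)

type RGB = (Word8, Word8, Word8)

-- Deliberately naive representation: a list of rows.  Enough to get the
-- rest of the program working; profiling may later suggest unboxed arrays.
data Image = Image
  { imageWidth  :: Int
  , imageHeight :: Int
  , pixelRows   :: [[RGB]]
  }

fromPixels :: Int -> Int -> [[RGB]] -> Image
fromPixels = Image

pixelAt :: Image -> Int -> Int -> RGB
pixelAt img x y = pixelRows img !! y !! x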
In this project, functional programming does what it is probably best suited: to develop a `compiler' -- a compiler from a filter specification to quite efficient code.
Or, in even more generality: http://c2.com/cgi/wiki?AlternateHardAndSoftLayers Functional programming is great for implementing the "soft" layers where the smart and flexible bits lie. It may not be as great for implementing the "hard" layers where the dirty bit hackery and tight loops lie (though it might be okay for prototyping that part). Over time, anything in the soft layer which turns out to have efficiency issues can migrate to the hard layer as required. As Henry James famously pointed out, "all writing is rewriting". That's especially true of software. Cheers, Andrew Bromage

2006/6/23, ajb@spamcop.net
G'day all.
hey,
Quoting oleg@pobox.com:
Recently Vo Minh Thu wondered if Haskell (or, I generalize, functional programming) can be of much use for computer graphics programming.
As others have pointed out, it's Haskell (and its laziness) that he perceived to be the problem.
However, I'd like to take the discussion up one more level.
Actually, I think my problem was at that upper level.
I hope that Thu doesn't mind me saying this, but off-list, I got a better idea of the program that he was trying to write. He's not interested writing an image processing system. In that sense, raster images are actually not a central data structure.
No probs at all. [snip remaining] Thank you for sharing that! vo minh thu
participants (24)
- ajb@spamcop.net
- Bill Wood
- Brian Hulley
- Bulat Ziganshin
- Chris Kuklewicz
- Greg Buchholz
- Jared Updike
- Jerzy Karczmarczuk
- jerzy.karczmarczuk@info.unicaen.fr
- Joel Reymont
- Jon Fairbairn
- Ketil Malde
- Matthias Fischmann
- minh thu
- Neil Mitchell
- oleg@pobox.com
- Paul Hudak
- Piotr Kalinowski
- Ralf Hinze
- Robert Dockins
- Ross Paterson
- Simon Peyton-Jones
- Stepan Golosunov
- Udo Stenzel