
On 7/16/21 8:27 AM, Olaf Klinke wrote:
> Hi,
> My program BAli-Phy implements probabilistic programming with models written as Haskell programs.
Dear Benjamin,
last time you announced BAli-Phy, I pestered you with questions about semantics. In the meantime there was a discussion [1] on this list about desirable properties of probabilistic languages and monads in general. One desirable property of any probabilistic language is that defining a distribution and then mapping a constant function over it should cost the same as returning the constant directly. Can you say anything about that?
Cheers, Olaf
[1]
https://mail.haskell.org/pipermail/haskell-cafe/2020-November/132905.html
On Fri, 16 Jul 2021, Benjamin Redelings wrote:
Hi Olaf,
Are you asking if
run $ (const y) <$> normal 0 1
has the same cost as
run $ return y
for some interpreter `run`?
Yes, the cost is the same. In do-notation, we would have
run $ do
  x <- normal 0 1
  return $ const y x
Since `const y` never forces `x`, no time is spent evaluating `run $ normal 0 1`. That is basically what I mean by saying that the language is lazy.
-BenRI
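The behavior Benjamin describes can be sketched in plain GHC Haskell (my own minimal illustration using `IO`, not BAli-Phy's actual implementation; `normal` here is a stand-in whose value diverges if ever forced, so reaching the final `print` proves the draw was never evaluated):

```haskell
-- A stand-in for `normal 0 1`: the drawn value is a bottom thunk,
-- so any attempt to force it would raise an error.
normal :: Double -> Double -> IO Double
normal _ _ = return (error "the sample was forced!")

-- Mapping a constant over the distribution, as in the thread:
constSample :: Int -> IO Int
constSample y = do
  x <- normal 0 1          -- x is bound, but only as a thunk
  return (const y x)       -- const y never forces x

main :: IO ()
main = constSample 42 >>= print  -- prints 42; the error is never raised
```

Because `>>=` in `IO` passes the drawn value along without forcing it, `const y` discards the thunk unevaluated and only the constant is returned.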
Awesome! That is something you cannot have with (random-number-)state-based implementations, as far as I know, because `x <- normal 0 1` at least splits the random number generator. Hence running the above ten thousand times, even without evaluating the `x`, has a non-negligible cost.

So how did you implement the laziness you described above? Do you have thunks just like in Haskell?

Last time I remarked that the online documentation contains no proper definition of the model language. Some examples with explanations of individual lines are not enough, IMHO. That appears not to have changed since the last release. So why don't you include a list of keywords and built-in functions and their meaning/semantics?

Regards, Olaf
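For contrast, the state-based style Olaf mentions can be sketched like this (my own illustration of a generator-threading sampler using `System.Random`; this is an assumption about the style being discussed, not anyone's actual implementation). Here every bind threads the generator, so the generator step spent on an ignored draw is still paid for as soon as sampling continues:

```haskell
import System.Random (StdGen, mkStdGen, randomR)

-- A generator-threading sampler: every bind advances the generator,
-- whether or not the drawn value is used.
newtype Gen a = Gen { runGen :: StdGen -> (a, StdGen) }

instance Functor Gen where
  fmap f (Gen m) = Gen $ \g -> let (x, g') = m g in (f x, g')

instance Applicative Gen where
  pure x = Gen $ \g -> (x, g)
  Gen mf <*> Gen mx = Gen $ \g ->
    let (f, g1) = mf g
        (x, g2) = mx g1
    in (f x, g2)

instance Monad Gen where
  Gen m >>= k = Gen $ \g ->
    let (x, g1) = m g       -- the generator is advanced here,
    in runGen (k x) g1      -- even if k ignores x entirely

-- Crude stand-in for a normal draw: consumes one generator step.
normalG :: Gen Double
normalG = Gen (randomR (0, 1))

-- Mapping a constant still threads the generator through the draw:
constSample :: Int -> StdGen -> (Int, StdGen)
constSample y = runGen (fmap (const y) normalG)

main :: IO ()
main = do
  let g0      = mkStdGen 2021
      (v, g1) = constSample 42 g0
  print v   -- 42, as expected
  -- Unlike the lazy interpreter, drawing again from g1 forces the
  -- generator step spent on the ignored draw.
  print (fst (randomR (0, 9 :: Int) g1))
```

The returned value is still the constant, but the output generator `g1` depends on the discarded draw, so its work cannot be skipped once any later sample is requested.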