looking for deep learning explanation a la Lämmel's map-reduce paper

Dear Cafe,

I like this paper (R. Lämmel in SCP 2008) very much https://userpages.uni-koblenz.de/~laemmel/MapReduce/ both for contents and for style, and I wonder if there's something similar ("Haskell for design recovery", "rigorous description", "executable specification") on deep learning. Mainly for understanding/teaching, without all the hype. (But of course, an accelerate-* implementation would be nice.)

Yes, I know that hyperbole *is* the main thing in this area.

Happy New Year - J.

... I wonder if there's something similar ("Haskell for design recovery", "rigorous description", "executable specification") on deep learning.

Perhaps this (although the goal is different): https://blog.jle.im/entries/series/+practical-dependent-types-in-haskell.htm... - J

Although the analogies are somewhat stretched, there is this: http://colah.github.io/posts/2015-09-NN-Types-FP/

Tom

On Wed, Jan 3, 2018 at 11:03 AM, Johannes Waldmann <johannes.waldmann@htwk-leipzig.de> wrote:
Dear Cafe,
I like this paper (R. Lämmel in SCP 2008) very much https://userpages.uni-koblenz.de/~laemmel/MapReduce/ both for contents and for style -
and I wonder if there's something similar ("Haskell for design recovery", "rigorous description", "executable specification") on deep learning.
Mainly for understanding/teaching without all the hype. (But of course, an accelerate-* implementation would be nice.)
Yes I know that hyperbole *is* the main thing in this area.
Happy New Year - J.

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.
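As a footnote to the Colah post mentioned above: its core analogy, that a layer is just a function and a network is a composition of layers, can be written down directly in Haskell. The following is a minimal illustrative sketch only (all names — `Layer`, `runLayer`, `runNet`, `relu` — are made up here, not from that post or any library), in the "executable specification" spirit the thread asks about:

```haskell
-- A layer is a function from input vector to output vector;
-- a network is the left-to-right composition of its layers.
-- Illustrative only: plain lists, no performance claims.

type Vector = [Double]

-- A fully connected layer: one weight row per output, plus biases.
data Layer = Layer { weights :: [[Double]], biases :: Vector }

-- Rectified-linear activation.
relu :: Double -> Double
relu x = max 0 x

-- Apply one layer: affine map followed by the activation.
runLayer :: Layer -> Vector -> Vector
runLayer (Layer ws bs) xs =
  [ relu (b + sum (zipWith (*) w xs)) | (w, b) <- zip ws bs ]

-- Apply a whole network: fold the layers over the input.
runNet :: [Layer] -> Vector -> Vector
runNet layers input = foldl (flip runLayer) input layers

main :: IO ()
main = print (runNet [Layer [[1, 2], [3, 4]] [0, 0]] [1, 1])
```

Training (backpropagation) is of course the part such a sketch leaves out, and is where the dependent-types series linked earlier goes further.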
participants (2)
- Johannes Waldmann
- Tom Nielsen