
My knowledge of functional programming is pretty much limited to Haskell, Scheme, and a smattering of Common Lisp. Are there languages other than Haskell that explicitly use monads? How about "not so explicitly"? === Gregory Woodhouse gregory.woodhouse@sbcglobal.net "The universe is not required to be in perfect harmony with human ambition." --Carl Sagan

On Thu 2005-11-24 at 12:36 -0800, Gregory Woodhouse wrote:
My knowledge of functional programming is pretty much limited to Haskell, Scheme, and a smattering of Common Lisp. Are there languages other than Haskell that explicitly use monads? How about "not so explicitly"?
=== Gregory Woodhouse gregory.woodhouse@sbcglobal.net
Lambda the Ultimate recently had a post where people discussed monads in different languages; I haven't looked closely myself, though, so I can't provide a summary: http://lambda-the-ultimate.org/node/view/1128 Regards, Anders Höckersten

Gregory Woodhouse
My knowledge of functional programming is pretty much limited to Haskell, Scheme, and a smattering of Common Lisp. Are there languages other than Haskell that explicitly use monads? How about "not so explicitly"?
Java    http://www.ccs.neu.edu/home/dherman/code/monads/JavaMonads.tar.gz
Joy     http://permalink.gmane.org/gmane.comp.lang.concatenative/1506
OCaml   https://mailman.rice.edu/pipermail/metaocaml-users-l/2005-March/000057.html
Perl    http://sleepingsquirrel.org/monads/monads.html
Prolog  http://logic.csci.unt.edu/tarau/research/PapersHTML/monadic.html
Python  http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/439361
Ruby    http://moonbase.rydia.net/mental/writings/programming/monads-in-ruby/00intro...
Scheme  http://www.ccs.neu.edu/home/dherman/research/tutorials/monads-for-schemers.t...

Please respond with any language implementations I've missed.
-- Shae Matijs Erisson - http://www.ScannedInAvian.com/ - Sockmonster once said: You could switch out the unicycles for badgers, and the game would be the same.

Shae Matijs Erisson wrote:
Gregory Woodhouse writes:

My knowledge of functional programming is pretty much limited to Haskell, Scheme, and a smattering of Common Lisp. Are there languages other than Haskell that explicitly use monads? How about "not so explicitly"?
Java http://www.ccs.neu.edu/home/dherman/code/monads/JavaMonads.tar.gz Joy http://permalink.gmane.org/gmane.comp.lang.concatenative/1506 OCaml https://mailman.rice.edu/pipermail/metaocaml-users-l/2005-March/000057.html Perl http://sleepingsquirrel.org/monads/monads.html Prolog http://logic.csci.unt.edu/tarau/research/PapersHTML/monadic.html Python http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/439361 Ruby http://moonbase.rydia.net/mental/writings/programming/monads-in-ruby/00intro... Scheme http://www.ccs.neu.edu/home/dherman/research/tutorials/monads-for-schemers.t...
Yes, there are plenty. But none of them capture it quite as well as Haskell, IMHO, because the Haskell overloading makes it look particularly nice and succinct. As for the second question about "not so explicitly": you could say that any language uses an implicit monad. It's where you throw in all the effects. -- Lennart
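For a concrete taste of what that overloading buys (a throwaway sketch, not code from the thread): with the Prelude's Maybe monad, the failure case is threaded through do-notation with no visible plumbing at all.

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- (>>=) short-circuits on Nothing, so the failure checks are invisible here
example :: Maybe Int
example = do
  a <- safeDiv 10 2      -- Just 5
  b <- safeDiv 20 a      -- Just 4
  return (a + b)         -- Just 9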

Lennart Augustsson:
Shae Matijs Erisson wrote:
Gregory Woodhouse writes:

My knowledge of functional programming is pretty much limited to Haskell, Scheme, and a smattering of Common Lisp. Are there languages other than Haskell that explicitly use monads? How about "not so explicitly"?
Java http://www.ccs.neu.edu/home/dherman/code/monads/JavaMonads.tar.gz Joy http://permalink.gmane.org/gmane.comp.lang.concatenative/1506 OCaml ... etc. ...
Nobody mentioned the most obvious example: the language most similar to Haskell, whose authors simply prefer somewhat lower-level mechanisms, although monads *can* be used as well if you wish. Clean http://www.cs.ru.nl/~clean/ Such Haskell-implemented monads as list/nondeterminism or Maybe carry over without any changes. State/IO use the unique types. Jerzy Karczmarczuk

Shae Matijs Erisson wrote:
Please respond with any language implementations I've missed.
C++ http://www.cc.gatech.edu/~yannis/fc++/New1.5/lambda.html#monad Jim

Shae Matijs Erisson wrote:
Gregory Woodhouse writes:

My knowledge of functional programming is pretty much limited to Haskell, Scheme, and a smattering of Common Lisp. Are there languages other than Haskell that explicitly use monads? How about "not so explicitly"?
Java http://www.ccs.neu.edu/home/dherman/code/monads/JavaMonads.tar.gz Joy http://permalink.gmane.org/gmane.comp.lang.concatenative/1506 OCaml https://mailman.rice.edu/pipermail/metaocaml-users-l/2005-March/000057.html Perl http://sleepingsquirrel.org/monads/monads.html Prolog http://logic.csci.unt.edu/tarau/research/PapersHTML/monadic.html Python http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/439361 Ruby http://moonbase.rydia.net/mental/writings/programming/monads-in-ruby/00intro... Scheme http://www.ccs.neu.edu/home/dherman/research/tutorials/monads-for-schemers.t...
Please respond with any language implementations I've missed.
The Java implementation is actually a little baroque because it doesn't
make use of generics. In June when there was a question here at Penn
about how critical type-classes were to the use of Monads, I put
together this quick example:
interface Monad<A> {
    // return :: b -> m b
    public <B> Monad<B> unit(B e);
    // (>>=) :: m b -> (b -> m c) -> m c
    public <B, C> Monad<C> bind(Monad<B> e, MonadBindFun<B, C> f);
}

interface MonadBindFun<A, B> {
    public Monad<B> fun(A x);
}

class OptionMonad<A> implements Monad<A> {
    public <B> Monad<B> unit(B e) {
        return new Some<B>(e);
    }
    public <B, C> Monad<C> bind(Monad<B> e, MonadBindFun<B, C> f) {
        if (e instanceof Some) {
            Some<B> eb = (Some<B>) e;
            return f.fun(eb.get());
        } else if (e instanceof None) {
            return new None<C>();
        } else {
            throw new Error();
        }
    }
}

public class None<A> extends OptionMonad<A> implements Monad<A> {
    public None() { }
}

public class Some<A> extends OptionMonad<A> implements Monad<A> {
    A x = null;
    public Some(A x) {
        this.x = x;
    }
    public A get() { return x; }
}
Just a few minutes ago I wrote, but haven't tested the following:
import java.util.*;

public class ListMonad<A> implements Monad<A> {
    List<A> list = null;

    private ListMonad(List<A> list) {
        this.list = list;
    }

    public <B> ListMonad<B> unit(B e) {
        List<B> nlist = new LinkedList<B>();
        nlist.add(e);
        return new ListMonad<B>(nlist);
    }

    public <B, C> Monad<C> bind(Monad<B> e, MonadBindFun<B, C> f) {
        if (e instanceof ListMonad) {
            ListMonad<B> be = (ListMonad<B>) e;
            List<List<C>> reslists = new LinkedList<List<C>>();
            for (B x : be.list) {
                Monad<C> res = f.fun(x);
                if (res instanceof ListMonad) {
                    ListMonad<C> ce = (ListMonad<C>) res;
                    reslists.add(ce.list);
                } else {
                    throw new Error();
                }
            }
            // concatenate the per-element result lists
            List<C> flat = new LinkedList<C>();
            for (List<C> x : reslists) {
                flat.addAll(x);
            }
            return new ListMonad<C>(flat);
        } else {
            throw new Error();
        }
    }
}
I also threw together a slightly cleaner version of these examples using
Scala, but I'm sure the Scala folks could do better than me.

On Saturday, 26 November 2005 at 03:56, Geoffrey Alan Washburn wrote:
[lots of code]
It's interesting to note how verbose Java is in comparison to Haskell, at least, concerning this monad stuff. Best wishes, Wolfgang

Wolfgang Jeltsch wrote:
On Saturday, 26 November 2005 at 03:56, Geoffrey Alan Washburn wrote:
[lots of code]
It's interesting to note how verbose Java is in comparison to Haskell, at least, concerning this monad stuff.
I'd agree. However, my original point was that my version that uses generics is still significantly less verbose than the version without. I think much of the overhead is that Java simply doesn't do as much type inference for you. Scala can do much better still because it has first-class functions and algebraic data types ("case classes").
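For comparison with the Java above, roughly the same thing transcribed into Haskell (my own sketch, not code from the thread; the Functor and Applicative instances are only there so it compiles with current GHC):

data Option a = None | Some a
  deriving Show

instance Functor Option where
  fmap _ None     = None
  fmap f (Some x) = Some (f x)

instance Applicative Option where
  pure = Some
  None   <*> _ = None
  Some f <*> x = fmap f x

instance Monad Option where
  None   >>= _ = None        -- the None branch of the Java bind
  Some x >>= f = f x         -- the Some branch of the Java bind

-- The Java ListMonad corresponds to the Prelude's list monad, whose bind
-- is essentially concatMap:
listExample :: [Int]
listExample = [1, 2, 3] >>= \x -> [x, 10 * x]   -- [1,10,2,20,3,30]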

Geoffrey Alan Washburn
Scala can do much better still because it has first-class functions and algebraic data types ("case classes").
Comments on http://lambda-the-ultimate.org/node/view/1136 include links to Scala http://scala.epfl.ch/examples/files/simpleInterpreter.html and XSLT rumors http://www.biglist.com/lists/xsl-list/archives/200303/msg00422.html There's also Oleg's http://okmij.org/ftp/Computation/monadic-shell.html "at the level of UNIX programming, all i/o can be regarded monadic." -- Shae Matijs Erisson - http://www.ScannedInAvian.com/ - Sockmonster once said: You could switch out the unicycles for badgers, and the game would be the same.

--- Shae Matijs Erisson writes:

There's also Oleg's http://okmij.org/ftp/Computation/monadic-shell.html "at the level of UNIX programming, all i/o can be regarded monadic."
Interesting. I had been thinking about I/O and operating systems as a different(?) framework for trying to explain the concept of monads: if you think about a write statement in an imperative language, it doesn't actually perform the requested operation, it just asks the OS to perform the action "later". With respect to pipes, remember that files are really fictions, too, and when you write to a pipe the data is stored somewhere (most likely in memory) and then read in a lazy fashion.

Maybe this is a different topic, but exploring concurrency in Haskell is definitely on my "to do" list, though it is really a bit of a puzzle. One thing I've been thinking lately is that in functional programming the process is really the wrong abstraction (computation is reduction, not a sequence of steps performed in temporal order). But what is concurrency if there are no processes to "run" concurrently? I've been thinking about action systems and non-determinism, but am unsure how the pieces really fit together.
===
Gregory Woodhouse

Greg Woodhouse writes:

Maybe this is a different topic, but exploring concurrency in Haskell is definitely on my "to do" list, though it is really a bit of a puzzle. One thing I've been thinking lately is that in functional programming the process is really the wrong abstraction (computation is reduction, not a sequence of steps performed in temporal order). But what is concurrency if there are no processes to "run" concurrently? I've been thinking about action systems and non-determinism, but am unsure how the pieces really fit together.
Maybe dataflow languages like Lucid Synchrone, Esterel and Lustre? http://en.wikipedia.org/wiki/Dataflow_language http://en.wikipedia.org/wiki/Synchronous_programming_language For dataflow in Haskell, see Uustalu & Vene's comonads paper: http://lambda-the-ultimate.org/node/view/988 I do wish I could find the source code from this paper online, I'd rather not type it in by hand. Do researchers usually manually transcribe code? -- Shae Matijs Erisson - http://www.ScannedInAvian.com/ - Sockmonster once said: You could switch out the unicycles for badgers, and the game would be the same.
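I don't have the paper's source either; the following is only a back-of-envelope sketch of the general idea (a simple stream comonad, my own code rather than the authors'; the causal/LV streams in the paper are richer):

data Stream a = Cons a (Stream a)

class Comonad w where
  extract :: w a -> a                     -- the value "here"
  extend  :: (w a -> b) -> w a -> w b     -- run a local rule at every position

instance Comonad Stream where
  extract (Cons x _)     = x
  extend f s@(Cons _ xs) = Cons (f s) (extend f xs)

-- A dataflow "node": average of the current and the next sample.
smooth :: Fractional a => Stream a -> a
smooth (Cons x (Cons y _)) = (x + y) / 2

-- Wiring the node over the whole signal is just 'extend'.
runSmooth :: Fractional a => Stream a -> Stream a
runSmooth = extend smooth

-- small helpers for trying it out
fromList :: [a] -> Stream a
fromList (x:xs) = Cons x (fromList xs)
fromList []     = error "fromList: needs an infinite list"

takeS :: Int -> Stream a -> [a]
takeS 0 _           = []
takeS n (Cons x xs) = x : takeS (n - 1) xs

-- takeS 4 (runSmooth (fromList ([1..] :: [Double])))  ==>  [1.5,2.5,3.5,4.5]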

Shae Matijs Erisson wrote:
I do wish I could find the source code from this paper online, I'd rather not type it in by hand. Do researchers usually manually transcribe code?
Both Evince and Acrobat Reader can copy text. Jim

On Sat, 2005-11-26 at 18:43 +0100, Shae Matijs Erisson wrote:
Greg Woodhouse writes:

Maybe this is a different topic, but exploring concurrency in Haskell is definitely on my "to do" list, though it is really a bit of a puzzle. One thing I've been thinking lately is that in functional programming the process is really the wrong abstraction (computation is reduction, not a sequence of steps performed in temporal order). But what is concurrency if there are no processes to "run" concurrently? I've been thinking about action systems and non-determinism, but am unsure how the pieces really fit together.
Maybe dataflow languages like Lucid Synchrone, Esterel and Lustre? http://en.wikipedia.org/wiki/Dataflow_language http://en.wikipedia.org/wiki/Synchronous_programming_language
I also think dataflow is a very nice model for concurrency in pure functional languages. You might want to look at some of the work done by Edward Lee's group in the EE Dept. at Berkeley. They developed a generalization of dataflow called "token flow" which is quite nice. It became one of the basic models supported by their heterogeneous simulation systems, Ptolemy and Ptolemy II. I seem to recall that one student at a Texas university did a PhD thesis on dataflow for Haskell and then went to work on token flow in Lee's group.

The one downside I found to using dataflow was that most software people seem to be uncomfortable with the lack of identifiable processes doing significant bits of work. I guess if they're not floundering around in mutual exclusion, semaphores, deadlock detection and all the other manifestations of unmanaged complexity, they don't feel they've *accomplished* anything (BTW I grew up on Dijkstra, Hoare and Hanson, so I can get away with saying this :-). Interestingly enough, and perhaps obvious in retrospect, I often found hardware designers to be very comfortable with dataflow computations.

Does anyone know whether much research has been done on using dataflow for concurrency in pure functional computation? How does it mesh with evaluation strategies? And, of course, the flip side: has anyone looked at re-examining real-world problems, conventionally handled with coordinating/communicating processes, as dataflow computations?

The latter consideration may be the toughest to deal with. I was working on a project to handle PHY-level communication algorithms with dataflow, and I had a great deal of difficulty with the comma-detect function of XAUI. I made some progress, but I never found a satisfactory solution. On the other hand, I did work out a fairly nice dataflow algorithm for CRC32. -- Bill Wood
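For what it's worth, the lazy-list idiom already gives a tiny taste of dataflow in Haskell (a throwaway example of my own, not related to Ptolemy or token flow): each definition is a node, and the lists are the wires between them.

-- an integrator node and a feedback loop, written as a dataflow network
integrate :: Num a => [a] -> [a]
integrate = scanl (+) 0

fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)   -- feedback through two delays

main :: IO ()
main = do
  print (take 5 (integrate [1, 2, 3, 4, 5]))   -- [0,1,3,6,10]
  print (take 8 fibs)                          -- [0,1,1,2,3,5,8,13]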

Hello Bill, Sunday, November 27, 2005, 1:25:59 AM, you wrote:

BW> The one downside I found to using dataflow was that most software people
BW> seem to be uncomfortable with the lack of identifiable processes doing
BW> significant bits of work. I guess if they're not floundering
BW> around in mutual exclusion, semaphores, deadlock detection and all the
BW> other manifestations of unmanaged complexity, they don't feel they've
BW> *accomplished* anything (BTW I grew up on Dijkstra, Hoare and Hanson, so
BW> I can get away with saying this :-). Interestingly enough, and perhaps
BW> obvious in retrospect, I often found hardware designers to be very
BW> comfortable with dataflow computations.

Dataflow computers have been known since at least the 1960s as a possible alternative to the von Neumann architecture, with its bottleneck of only one operation executed at a time. They are very natural for chip designers, because the real internal structure of a processor is dataflow; for external users (assembler programmers) the von Neumann architecture is emulated. -- Best regards, Bulat mailto:bulatz@HotPOP.com

Maybe this is a different topic, but exploring concurrency in Haskell is definitely on my "to do" list, though it is really a bit of a puzzle. One thing I've been thinking lately is that in functional programming the process is really the wrong abstraction (computation is reduction, not a sequence of steps performed in temporal order). But what is concurrency if there are no processes to "run" concurrently? I've been thinking about action systems and non-determinism, but am unsure how the pieces really fit together.
Concurrency in Haskell is handled by forking the execution of IO actions, which are indeed sequences of steps to be performed in a temporal order. There are some elegant constructions in that direction, not least of which is STM, a system for transactional thread communication, on top of which one can implement channels and various other concurrency abstractions.

STM allows one to insist that certain restricted kinds of actions relating to thread communication (in the STM monad) occur atomically with respect to other threads. These transactions can create, read and write to a special kind of mutable variable called a TVar (transactional variable). They can also ask to block the thread they are in and be retried later when one of the TVars they observed changes. There is additionally an operator `orElse`: if a and b are STM transactions, then (a `orElse` b) is a transaction in which a is executed, and if it retries, then b is attempted, and if that retries, then the whole transaction retries. The first transaction not to retry gets to return its value. The operator `orElse` is associative, and has retry as an identity. The paper describing STM in more detail is here: http://research.microsoft.com/Users/simonpj/papers/stm/index.htm

Perhaps more related to what you were thinking about, with Parallel Haskell there's a mechanism for parallel computation to be used in a similar fashion to seq, called par. The expression x `par` (y `seq` (x + y)), when evaluated, will first spark a parallel process for evaluating x (up to the top-level data constructor), while in the main task y `seq` (x + y) is computed, so the evaluation of y proceeds, and the task potentially blocks until x finishes being computed, and then the sum is returned. Sparking x will create the potential for x to be computed in another thread on a separate processor, if one becomes idle, but if that doesn't happen, x will simply be computed on the same processor as y when it is needed for the sum.

Hope this is somewhat interesting and perhaps somewhat answers your question :) - Cale
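A small sketch of both mechanisms described above, assuming GHC with the stm and parallel packages installed (the bank-account example and the names transfer/parSum are mine, not from the thread):

import Control.Concurrent.STM
import Control.Parallel (par, pseq)

-- STM: an atomic transfer between two TVars.  'check' calls 'retry' when the
-- balance is too low, so the transaction blocks and is automatically re-run
-- when 'from' changes.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)
  writeTVar from (balance - amount)
  toBalance <- readTVar to
  writeTVar to (toBalance + amount)

-- par/pseq: spark x for parallel evaluation while the main thread works on y.
-- (The message above writes it with seq; pseq just pins the evaluation order.)
parSum :: Integer -> Integer -> Integer
parSum x y = x `par` (y `pseq` (x + y))

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  atomically (readTVar a) >>= print   -- 70
  atomically (readTVar b) >>= print   -- 30
  print (parSum (sum [1 .. 1000]) (product [1 .. 10]))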

Hello Greg, Saturday, November 26, 2005, 8:25:38 PM, you wrote:

GW> Maybe this is a different topic, but exploring concurrency in Haskell
GW> is definitely on my "to do" list, though it is really a bit of a puzzle.
GW> One thing I've been thinking lately is that in functional programming
GW> the process is really the wrong abstraction (computation is reduction,
GW> not a sequence of steps performed in temporal order). But what is
GW> concurrency if there are no processes to "run" concurrently? I've been
GW> thinking about action systems and non-determinism, but am unsure how
GW> the pieces really fit together.

For pure functional computations, concurrency is just one of the IMPLEMENTATION mechanisms, and it doesn't appear in the DEFINITIONS of the abstractions. -- Best regards, Bulat mailto:bulatz@HotPOP.com

Bulat Ziganshin:
For pure functional computations, concurrency is just one of the IMPLEMENTATION mechanisms, and it doesn't appear in the DEFINITIONS of the abstractions.
Well, there are formal aspects of the specification of concurrency as well. Do you claim that no language has the right to demand *abstractly* that evaluating

runtwo (proc1) (proc2)

mean: launch the two concurrently and process further the first outcome?

Jerzy Karczmarczuk
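Just to make the abstract demand concrete in Haskell terms, a rough sketch of such a runtwo (my own code for a hypothetical operator; exception handling is glossed over):

import Control.Concurrent (forkIO, killThread)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Launch both actions concurrently; return whichever finishes first.
runtwo :: IO a -> IO a -> IO a
runtwo proc1 proc2 = do
  result <- newEmptyMVar
  t1 <- forkIO (proc1 >>= putMVar result)
  t2 <- forkIO (proc2 >>= putMVar result)
  first <- takeMVar result   -- block until one of them delivers
  killThread t1              -- cancel both; killing a finished thread is harmless
  killThread t2
  return first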

Hello jerzy, Sunday, November 27, 2005, 3:49:07 PM, you wrote:
For pure functional computations, concurrency is just one of the IMPLEMENTATION mechanisms, and it doesn't appear in the DEFINITIONS of the abstractions.
jkiuf> Well, there are formal aspects of the specification of concurrency as well.
jkiuf> Do you claim that no language has the right to demand *abstractly* that
jkiuf> evaluating
jkiuf> runtwo (proc1) (proc2)
jkiuf> mean: launch the two concurrently and process further the first outcome?

For the SPECIFICATION of a pure functional computation? ;) -- Best regards, Bulat mailto:bulatz@HotPOP.com

--- Bulat Ziganshin writes:

Hello Greg,

For pure functional computations, concurrency is just one of the IMPLEMENTATION mechanisms, and it doesn't appear in the DEFINITIONS of the abstractions.
I suppose it depends a bit on the question you're asking. A
multiprocessor, considered as a whole, might be a platform upon which
you wish to implement a functional language. And in a certain sense,
what you do with those processors is an implementation issue. But what
I'm after is compositionality. I have in mind message based physically
distributed systems, where individual components can be thought of as
having well-defined semantics from which the semantics of the system as
a whole can be defined. It's not at all clear (to me, anyway) how to do
this. In a physically distributed system, it seems natural to think of
the other processors, together with the bus(es) or network interfaces
as providing part of the environment, and this leads naturally to the
idea of using a theoretical tool like monads or continuations to model
one of these "components" -- but that doesn't (obviously, at least)
lead to compositional semantics because of the obvious asymmetry.
By way of background, a project I had been working on (until the
project was cancelled) was something I dubbed an "interface compiler".
I had developed a number of HL7 interfaces in a traditional imperative
language (HL7, or Health Level 7, is an application protocol used in
healthcare). These "interfaces" were virtually identical in most
respects, so I set out to build a generic engine that would abstract
away from the details of each interface. I was successful and easily
re-implemented the interfaces I had already written using the new
engine. But a little reflection led me to conclude that this template
driven approach was really higher order programming in disguise
(another factor leading to my renewed interest in functional
programming). Okay, that's fine as far as it goes, but it suffers from
a severe limitation: the computational model is a single network node
communicating with its environment. There is no obvious way (in
functional terms, at least) to go from the semantics of the subsystems
running on each node to the semantics of the system as a whole. An idea
that I've considered, but not really attempted to elaborate, is to
generate code for the whole system *as a unit*. In retrospect, I see
that this is essentially an attempt to move to the setting you
describe, in which concurrency is simply a design issue.
I have not yet read Misra's monograph (I hope I got that right -- I'm
visiting family and away from my library), but I'm attracted to the
idea that concurrency should not be a design issue and, by
extension(?), that the process is not fundamental. (After all, is it
not an artifact of the operating system?) This strikes a chord with me,
because computation in functional languages is a matter of reduction,
not sequential execution of statements (commands, really). I've been
attracted to reactive systems because they, too, seem to provide a path
to moving beyond the process abstraction, and because I've been working
on TCP/IP based applications for years, and find it all quite
fascinating. But, in a fundamental sense, reactive systems seem to
represent a step in the *opposite* direction. After all, the
appropriate program logic here seems to be temporal logic -- hardly
natural from a functional perspective!
I should apologize (no longer in advance) for the "stream of
consciousness" nature of this post. Think of it as an attempt to pull
together a few seemingly (or maybe not so seemingly) unrelated threads
from my earlier posts.
===
Gregory Woodhouse

(I'm going to do a lazy permute on your stream of consciousness; hope it terminates :-).

I think the Rubicon here is the step from one to many -- one function/procedure to many, one thread to many, one processor to many, ... . Our favorite pure functions are like the Hoare triples and Dijkstra weakest preconditions of the formal methods folks in that the latter abstract from the body of a procedure to the input-output relation it computes; both the function and the abstracted procedure are atomic and outside of time. After all, aren't "referential transparency" and "copy rule" all about replacing a function body with its results?

Well, as soon as there are two or more functions/procedures in the same environment, the prospect of interaction and interference arises, and our nice, clean, *comprehensible* atemporal semantics get replaced by temporal logic, path expressions, trace specifications, what have you. Some notion of process is inevitable, since now each computation must be treated as an activity over time in order to relate events that occur during the execution of one computation with the events of another.

Functional programming gives us the possibility of using algebra to simplify the task of reasoning about single programs. Of course, non-functional procedures can also be reasoned about algebraically, since a procedure P(args) that hammers on state can be adequately described by a pure function f_P :: Args -> State -> State. The problem is, of course, that the state can be large.

But the functional paradigm offers some hope for containing the complexity in the world of many as it does in the world of one. I think combining formalisms like Hoare's CSP or Milner's CCS with computations gives us the possibility of doing algebra on the temporal event sequences corresponding to their interactions; the hope is that this is simpler than doing proofs in dynamic or temporal logic. Using functional programs simplifies the algebraic task by reducing the size of the set of events over which the algebra operates -- you consider only the explicitly shared parameters and results, not the implicitly shared memory that can couple non-functional procedures.

It is conceivable that you can get your compositionality here as well. Suppose we package computations with input-output parameter specifications and CSP-like specifications of the pattern of event sequences produced when the computation executes. It may be possible to reason about the interactions of the event sequences of groups of packages, determine the event sequences over non-hidden events provided by the composite system, etc.

As far as Bulat's comment goes, I'm mostly in agreement. My dataflow view was really driven by the intuition that a functional program can be described by a network of subfunctions linking outputs to inputs; cross your eyes a little and voila! A dataflow network. And if we're smart enough to make a compiler do that, why bother the programmer? But you're not talking about analyzing a function into a parallel/concurrent/distributed implementation; rather, you're interested in synthesizing a temporal process out of interacting computations. The temporal aspect won't go away. And that's the problem. -- Bill Wood
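To make the f_P :: Args -> State -> State remark concrete, a throwaway Haskell sketch (the State type and the deposit "procedure" are invented for illustration):

newtype State = State { balance :: Int }
  deriving Show

-- An imperative procedure deposit(amount) that updates a global balance,
-- described instead as a pure function from the old state to the new one.
deposit :: Int -> State -> State
deposit amount (State b) = State (b + amount)

-- Sequencing two calls is ordinary function composition, so we can reason
-- equationally: deposit 5 . deposit 3  ==  deposit 8  on this State type.
main :: IO ()
main = print (deposit 5 (deposit 3 (State 100)))   -- State {balance = 108}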

On Nov 27, 2005, at 2:36 PM, Bill Wood wrote:
(I'm going to do a lazy permute on your stream of consciousness; hope it terminates :-).
I think the Rubicon here is the step from one to many -- one function/procedure to many, one thread to many, one processor to many, ... . Our favorite pure functions are like the Hoare triples and Dijkstra weakest preconditions of the formal methods folks in that the latter abstract from the body of a procedure to the input-output relation it computes; both the function and the abstracted procedure are atomic and outside of time.
Right. And the key to working with Hoare triples is that they obey a natural composition law. I can go from P and Q to P;Q as long as the pre- and post-conditions "match up". It's less clear that such a simple logic is even possible for concurrent programs, particularly in a deterministic setting.
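The composition law in question is the usual sequencing rule of Hoare logic; the triples compose exactly when the postcondition of the first command is the precondition of the second:

\[
  \frac{\{P\}\; S_1 \;\{Q\} \qquad \{Q\}\; S_2 \;\{R\}}
       {\{P\}\; S_1 ;\, S_2 \;\{R\}}
\]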
After all, aren't "referential transparency" and "copy rule" all about replacing a function body with its results?
Is that really a "copy rule" or is it a requirement that the program obey some type of compositional semantics? Put a little differently, I think your terminology here is a bit leading. By "copy rule", I think you have in mind something like beta reduction -- but with respect to whom? If there is some kind of agent doing the copying that we think of as a real "thing", isn't that a process?
Well, as soon as there are two or more functions/procedures in the same environment, the prospect of interaction and interference arises, and our nice, clean, *comprehensible* atemporal semantics get replaced by temporal logic, path expressions, trace specifications, what have you.
Right, because our execution threads become little lambda universes interacting with their respective environments (i.e., communicating)
Some notion of process is inevitable, since now each computation must be treated as an activity over time in order to relate events that occur during the execution of one computation with the events of another.
You may be right, but I suppose I'm stubborn and haven't quite given up yet. Think about temporal and dynamic logic as being, in some sense, alternative program logics. They are both useful, of course, but differ in where the "action" is. For temporal logic, the primary dimension is time. Will this condition always hold? Will it hold at some time in the future? But in dynamic logic, the "action" is program composition. Even so, if you write [alpha]P, then you assert that P is satisfied after every execution (in time?) of alpha, but you do not otherwise reason about program execution. In terms of Kripke ("possible worlds") semantics, what is your accessibility relationship?
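For the record, the usual Kripke-style reading of the dynamic-logic box takes the accessibility relation \(R_\alpha\) to be just the input-output relation of the program \(\alpha\):

\[
  s \models [\alpha]P \quad\Longleftrightarrow\quad
  \forall s'.\; (s, s') \in R_\alpha \Rightarrow s' \models P
\]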
Functional programming gives us the possibility of using algebra to simplify the task of reasoning about single programs. Of course, non-functional procedures can also be reasoned about algebraically, since a procedure P(args) that hammers on state can be adequately described by a pure function f_P :: Args -> State -> State. The problem is, of course, that the state can be large.
Right, but Kripke semantics gives us a language in which to talk about how state can change. Better, can subsystems be combined in such a way that state in the larger system can naturally be understood in terms of state in the subsystems?
But the functional paradigm offers some hope for containing the complexity in the world of many as it does in the world of one. I think combining formalisms like Hoare's CSP or Milner's CCS with computations gives us the possibility of doing algebra on the temporal event sequences corresponding to their interactions; the hope is that this is simpler than doing proofs in dynamic or temporal logic. Using functional programs simplifies the algebraic task by reducing the size of the set of events over which the algebra operates -- you consider only the explicitly shared parameters and results, not the implicitly shared memory that can couple non-functional procedures.
But isn't this true because interaction between the "pieces" is more narrowly constrained? An algebraic analog might be a semidirect product of groups. Unlike the special case of direct products, there is some interference here, but it is restricted to inner automorphisms (conjugation).
It is conceivable that you can get your compositionality here as well. Suppose we package computations with input-output parameter specifications and CSP-like specifications of the pattern of event sequences produced when the computation executes. It may be possible to reason about the interactions of the event sequences of groups of packages, determine the event sequences over non-hidden events provided by the composite system, etc.
As far as Bulat's comment goes, I'm mostly in agreement. My dataflow view was really driven by the intuition that a functional program can be described by a network of subfunctions linking outputs to inputs; cross your eyes a little and voila! A dataflow network.
And I quite liked the data flow concept. That may be what I'm looking for, too, but I need to let it "sink in" a bit.
And if we're smart enough to make a compiler do that, why bother the programmer?
Good question. In fact compiler design has really influenced my thinking here. We can eliminate tail recursion automatically, so why bother the programmer? Redundant reads from a provably unchanged variable can be eliminated, so why bother the programmer? We can even optimize (some) loops for parallel execution on a multiprocessor -- something which is perhaps a bit more on point.
But you're not talking about analyzing a function into a parallel/concurrent/distributed implementation; rather, you're interested in synthesizing a temporal process out of interacting computations.
Not exactly. I'm thinking about them as dual aspects of the same problem: analysis and synthesis. You may recall that I suggested that "programs" for a distributed system might be compiled as a whole, much as a modern compiler might generate code capable of using the possibilities of parallelism in the target architecture. But it seems to me that a satisfactory theory ought to provide some insight into how the pieces fit together, too. Just knowing how to generate them isn't enough.
The temporal aspect won't go away. And that's the problem.
I agree with you -- on both counts.
-- Bill Wood
=== Gregory Woodhouse gregory.woodhouse@sbcglobal.net "And the end of all our exploring will be to arrive where we started And know the place for the first time" -- T.S. Eliot
participants (12):
- Anders Höckersten
- Bill Wood
- Bulat Ziganshin
- Cale Gibbard
- Geoffrey Alan Washburn
- Greg Woodhouse
- Gregory Woodhouse
- jerzy.karczmarczuk@info.unicaen.fr
- Jim Apple
- Lennart Augustsson
- Shae Matijs Erisson
- Wolfgang Jeltsch