
... reasonably efficient transitive closure for Data.Graph ...
Well, since we have

    type Graph = Array Vertex [Vertex]

efficiency is already questionable: we cannot quickly check whether an edge is present, and we cannot quickly get all predecessors of a vertex. But perhaps we don't need to. The underlying assumptions (the cost of elementary operations) of http://www.cs.hut.fi/~enu/thesis.html are not immediately clear.

The cited Agda library uses

    newtype Graph n e = Graph (Map n (Map n e))

Another option is to store back edges as well, as in
https://github.com/haskell-works/relation/blob/master/src/Data/Relation/Inte...

FGL fuses these two maps:

    type GraphRep a b = IntMap (Context' a b)
    type Context' a b = (IntMap [b], a, IntMap [b])

https://hackage.haskell.org/package/fgl-5.7.0.1/docs/src/Data.Graph.Inductiv...

This saves some look-ups (when you need both the successors and the predecessors of the very same node), but it can't be used for relations, where the start and end of an edge have different types. For transitive closure, though, these types would agree anyway.

What to make of this? Perhaps Data.Graph should be moved out of containers: the design space is just too large to single out one specific point. If we keep it, it would be very nice to document the complexity of the algorithms. Section 7 of the King/Launchbury paper (cited in the API docs) claims "DFS in O(V+E)", backed up by experiments. That seems to be the main motivation for this library (DFS, with SCC as the application). It's not clear whether the underlying design should be recommended for a general graph library.

- Johannes
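
To make the "perhaps we don't need to" point concrete, here is a minimal sketch (the name `closure` is mine, not part of Data.Graph) that computes the reflexive-transitive closure using only the existing API: one DFS per vertex, with no edge-membership test and no predecessor query. That is O(V*(V+E)) work, and since the result can itself have on the order of V^2 edges, the adjacency-array representation is arguably not the bottleneck for this particular task.

    import Data.Array (bounds)
    import Data.Graph (Graph, buildG, vertices, reachable)

    -- Sketch: reflexive-transitive closure with one DFS per vertex.
    -- 'reachable g v' returns every vertex reachable from v, including
    -- v itself, so the result also contains the loop edges (v, v).
    closure :: Graph -> Graph
    closure g =
      buildG (bounds g)
        [ (v, w) | v <- vertices g, w <- reachable g v ]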
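
For comparison, a sketch of the "store back edges as well" idea; the names (Rel, insertEdge, hasEdge, predecessors) are mine and not the API of the linked relation package. Keeping one Map per direction makes edge membership and predecessor queries logarithmic, at the cost of storing every edge twice.

    import Data.Map.Strict (Map)
    import qualified Data.Map.Strict as Map
    import Data.Set (Set)
    import qualified Data.Set as Set

    -- A relation stored in both directions (hypothetical type).
    data Rel a b = Rel
      { succs :: Map a (Set b)  -- forward edges
      , preds :: Map b (Set a)  -- back edges
      }

    -- Insert an edge into both maps, keeping them consistent.
    insertEdge :: (Ord a, Ord b) => a -> b -> Rel a b -> Rel a b
    insertEdge x y (Rel s p) =
      Rel (Map.insertWith Set.union x (Set.singleton y) s)
          (Map.insertWith Set.union y (Set.singleton x) p)

    -- Edge membership in O(log n).
    hasEdge :: (Ord a, Ord b) => a -> b -> Rel a b -> Bool
    hasEdge x y r = maybe False (Set.member y) (Map.lookup x (succs r))

    -- All predecessors of a node, also in O(log n) lookup time.
    predecessors :: (Ord a, Ord b) => b -> Rel a b -> Set a
    predecessors y r = Map.findWithDefault Set.empty y (preds r)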