
Sophie Taylor
I don't see why not, other than possible duplication of effort when it comes to some of the basic algorithms.
Speaking of which, what policies are there on bringing new dependencies into GHC, both compile-time and run-time (e.g. possible SMT solver support)?
We are generally fairly conservative about adding new dependencies of either type, for a variety of reasons.

In the case of runtime dependencies the associated costs are fairly clear: they would either (a) make GHC harder for users to use (in the case of mandatory dependencies) or (b) make the behavior of the compiler harder to follow (in the case of optional dependencies discovered at runtime).

There are also costs in the case of compile-time dependencies, although they may not be as easy to see.

First, in order to maintain a reproducible revision history, GHC includes all dependent libraries as submodules and ships them with source distributions. These submodules carry a small but non-negligible cost to developers due to idiosyncrasies in how they are handled by both git and Phabricator. Moreover, we need to periodically bump these submodules, which inevitably brings integration issues that require coordination with upstream to fix.

There is also a significant synchronization overhead in getting upstream maintainers to release new library versions before a GHC release. While this generally affects only the release manager, for that person it is a significant cost and does tend to slow down the release cycle.

Finally, dependencies of the `ghc` library affect users of tooling which links against it (e.g. ghc-mod). Specifically, since we can only link against a single version of a given package at a time, such tooling packages are forced to link against whatever version `ghc` depends upon. This means that their users may not get bugfixes, and it can constrain install plans, sometimes to the point where no plan is possible.
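To make that last point concrete, here is a sketch of a `.cabal` file for a hypothetical tool that links against the compiler; the package name and version bounds are invented for illustration and don't come from any actual release.

```cabal
-- Hypothetical tool linking against the compiler via the `ghc` library.
-- The name and all version bounds here are illustrative only.
name:           my-ghc-tool
version:        0.1.0.0
build-type:     Simple
cabal-version:  >=1.10

executable my-ghc-tool
  main-is:          Main.hs
  default-language: Haskell2010
  -- `ghc` was itself built against one fixed version of containers. Since
  -- only one version of a package can be linked into the executable, the
  -- containers bound below is unsatisfiable whenever the installed ghc
  -- library was built against, say, containers-0.5.*: no install plan exists.
  build-depends:
      base        >= 4   && < 5
    , ghc
    , containers  >= 0.6
```

In that situation the tool has no real choice but to relax its bound and forgo the newer containers API until `ghc` itself is built against it, which is exactly the constraint described above.

Cheers,

- Ben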