
I have now spent several days(!) making only some of my packages compatible with both GHC-6.4.1 and GHC-6.8.2. The amount of adaptation work has increased with every GHC update, partly because the number of installed packages keeps growing. I will hardly be able to manage this work for GHC-6.10; many packages will then become 'outdated', perhaps only weeks after their release.

Some people wonder why I don't simply upgrade. There are many reasons. You can easily fall into a gap between dependent packages that have not yet been updated to GHC-6.8.2 and others that already have been but are not backwards compatible. Compiler versions also differ in usability, bugs and annoyances. Namely, GHC-6.4.1 introduced spurious warnings about apparently superfluous imports, and a bug that made one of my modules uncompilable because the compiler ran out of memory; GHC-6.6 replaced working filename completion with only partially working identifier completion (it was certainly not a good idea to remove the old behaviour completely before the new one worked reliably, but it happened and we have to cope with it); GHC-6.8.1 had a compilation bug. So after investing much time in upgrading, you may find that your programs no longer work or that usability has decreased considerably, and you have the choice of waiting for the next compiler release, trying to compile the HEAD version from the repository yourself (good luck!), or rolling everything back to the old version. Even if the compiler only gets better in terms of features, you might still decide not to upgrade because the newer version consumes more memory or is slower due to the additional features it must handle.

Every GHC update so far has forced me to recompile my packages and has broken some code, whether through new class instances, modules being replaced by newer ones, or modules being shifted between packages. Sometimes the update helped improve the code, either when the compiler emitted new warnings or when internal functions were changed and I became aware that I had been using internal functions. But it is very hard to get a library compiled on different compiler versions, not to mention different compilers. This is especially nasty if you work in an institute (like the universities I worked at in the past) with many machines and very different software installations. We have some Solaris machines here with GHC-5, which I do not administer, Linux machines with GHC-6.4.1, GHC-6.6.1, and so on. I cannot simply push patches around with darcs, because every machine needs a separate package adaptation.

It was said that Cabal would also work with GHC-6.2. I didn't get it running and then switched to GHC-6.4. It was said that multiple versions of GHC can be installed on the same machine. That is somewhat true, but e.g. runhaskell cannot be told which actual GHC binary to use, and thus it is not possible to run Cabal with a compiler, or a compiler version, different from the one to be used for the package.

I decided to upgrade to Cabal-1.2, which also required installing filepath. I know that installation could be simplified by cabal-install, which has even more dependencies, so I cancelled that installation project. Then I equipped my Cabal files with a switch on splitBase, which merely duplicates the globally known information that the former base-1.0 package has been split into base-2.0 or base-3.0 plus satellites. It doesn't give the user any new value, but it costs the package maintainer a lot of time.
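[For readers who never had to write one: the splitBase switch mentioned here is a conditional in the .cabal file, roughly like the following sketch. The exact list of split-out packages to name in the first branch (containers, directory, etc.) depends on which modules the package actually uses.]

    flag splitBase
      description: Choose the new smaller, split-up base package.

    library
      if flag(splitBase)
        -- new world: base-3.0 plus whichever satellite packages are needed
        build-depends: base >= 3, containers, directory
      else
        -- old world: the monolithic base
        build-depends: base < 3

[Every maintainer repeats a variant of this in every package, which is exactly the duplication of globally known information complained about above.]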
I wonder whether it would have been simpler to ship GHC-6.8 with a base-1.0 package, or to provide one on Hackage, that just re-exports the old modules under the known names. This would allow using packages that are in different states of adaptation, and it would reduce the amount of work for package maintainers considerably. I also predict that the switches for different package arrangements in Cabal files will grow in the future, eventually becoming error-prone and unmaintainable. How many GHC versions do you have installed simultaneously in order to test them all?

Don't misunderstand me. I embrace tidying up the libraries, but I urge that it be done in a more compatible manner. Deprecated packages do not need to be banned from the internet. It is not necessary to force programmers to adapt to changes immediately; it is better to provide ways of making the changes later, when the time has come, in a smooth manner. I thought it was a good idea to adapt to FunctorM in GHC-6.4 quickly instead of rolling my own class. Then, two GHC releases later, this module disappeared, replaced by Traversable. I thought it was good style to rewrite code from List.lookup to FiniteMap in GHC-6.0; by GHC-6.4 FiniteMap had already disappeared, replaced by Data.Map. Why is it necessary to make working libraries obsolete so quickly? I thought using standard modules would be more reliable (because of more testers and more possible maintainers) than using custom modules. If libraries change so quickly, this forces programmers to fork off into their own modules.

On Thu, 2008-02-21 at 08:12 +0100, Henning Thielemann wrote:
It was said that Cabal would also work with GHC-6.2. I didn't get it running and then switched to GHC-6.4. It was said that multiple versions of GHC can be installed on the same machine. That is somewhat true, but e.g. runhaskell cannot be told which actual GHC binary to use, and thus it is not possible to run Cabal with a compiler, or a compiler version, different from the one to be used for the package.
It's always possible to:

    ghc-6.4.1 --make Setup.hs -o setup
    ./setup configure ...etc

rather than using whatever ghc runghc/runhaskell finds on the $PATH. I keep 3 versions of ghc installed this way to test Cabal and other stuff.
I decided to upgrade to Cabal-1.2, which also required installing filepath. I know that installation could be simplified by cabal-install, which has even more dependencies, so I cancelled that installation project. Then I equipped my Cabal files with a switch on splitBase, which merely duplicates the globally known information that the former base-1.0 package has been split into base-2.0 or base-3.0 plus satellites. It doesn't give the user any new value, but it costs the package maintainer a lot of time. I wonder whether it would have been simpler to ship GHC-6.8 with a base-1.0 package, or to provide one on Hackage, that just re-exports the old modules under the known names.
We know this issue is a mess. We've discussed it at length:
http://hackage.haskell.org/trac/ghc/wiki/PackageCompatibility

Sadly, at the moment it is impossible to supply a base-1.0 with ghc-6.8, because packages cannot re-export modules, and even if they could, ghc and cabal would have no way to figure out whether a particular program was intended to use one or the other.
This would allow using packages that are in different states of adaptation, and it would reduce the amount of work for package maintainers considerably. I also predict that the switches for different package arrangements in Cabal files will grow in the future, eventually becoming error-prone and unmaintainable. How many GHC versions do you have installed simultaneously in order to test them all?
Personally, I have 3. :-)
Don't misunderstand me. I embrace tidying up the libraries, but I urge that it be done in a more compatible manner.
So do I. Tell us what you think about the suggestions on the PackageCompatibility page above.
Deprecated packages do not need to be banned from the internet. It is not necessary to force programmers to adapt to changes immediately; it is better to provide ways of making the changes later, when the time has come, in a smooth manner. I thought it was a good idea to adapt to FunctorM in GHC-6.4 quickly instead of rolling my own class. Then, two GHC releases later, this module disappeared, replaced by Traversable. I thought it was good style to rewrite code from List.lookup to FiniteMap in GHC-6.0; by GHC-6.4 FiniteMap had already disappeared, replaced by Data.Map. Why is it necessary to make working libraries obsolete so quickly?
Though the advantage of more packages is that we can have (and there is) a compatibility package for the old FiniteMap.
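[Such a compatibility layer is essentially a thin wrapper over the successor library. A minimal sketch, with a hypothetical module name and covering only three of the old functions; the real compatibility package covers much more of the old API:]

    module Data.FiniteMap.Compat
      ( FiniteMap, emptyFM, addToFM, lookupFM ) where

    import qualified Data.Map as Map

    -- The old FiniteMap type, implemented by its successor Data.Map.
    type FiniteMap key elt = Map.Map key elt

    emptyFM :: FiniteMap key elt
    emptyFM = Map.empty

    -- Note the old argument order: the map comes first.
    addToFM :: Ord key => FiniteMap key elt -> key -> elt -> FiniteMap key elt
    addToFM fm key elt = Map.insert key elt fm

    lookupFM :: Ord key => FiniteMap key elt -> key -> Maybe elt
    lookupFM fm key = Map.lookup key fm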
I thought using standard modules would be more reliable (because of more testers and more possible maintainers) than using custom modules. If libraries change so quickly, this forces programmers to fork off into their own modules.
Duncan

Duncan Coutts wrote:
On Thu, 2008-02-21 at 08:12 +0100, Henning Thielemann wrote:
It was said that Cabal would also work with GHC-6.2. I didn't get it running and then switched to GHC-6.4. It was said that multiple versions of GHC can be installed on the same machine. That is somewhat true, but e.g. runhaskell cannot be told which actual GHC binary to use, and thus it is not possible to run Cabal with a compiler, or a compiler version, different from the one to be used for the package.
It's always possible to:

    ghc-6.4.1 --make Setup.hs -o setup
    ./setup configure ...etc
rather than using whatever ghc runghc/runhaskell finds on the $PATH. I keep 3 versions of ghc installed this way to test Cabal and other stuff.
I decided to upgrade to Cabal-1.2, which also required installing filepath. I know that installation could be simplified by cabal-install, which has even more dependencies, so I cancelled that installation project. Then I equipped my Cabal files with a switch on splitBase, which merely duplicates the globally known information that the former base-1.0 package has been split into base-2.0 or base-3.0 plus satellites. It doesn't give the user any new value, but it costs the package maintainer a lot of time. I wonder whether it would have been simpler to ship GHC-6.8 with a base-1.0 package, or to provide one on Hackage, that just re-exports the old modules under the known names.
We know this issue is a mess. We've discussed it at length:
http://hackage.haskell.org/trac/ghc/wiki/PackageCompatibility
Sadly, at the moment it is impossible to supply a base-1.0 with ghc-6.8, because packages cannot re-export modules, and even if they could, ghc and cabal would have no way to figure out whether a particular program was intended to use one or the other.
Duncan's right, and we do plan to tackle this problem in 6.10. But there's something practical that the community could do *right now* to make everyone's life easier. We've talked about this before at various times, but I thought I'd mention it again in case it inspires anyone to stand up and volunteer to spearhead the effort.

The idea is to have a group of people who manage "distributions" of Haskell software. The idea would be similar to how GNOME works, where they have a collection of software components bundled together, tested and released as a coherent unit. Each individual component is maintained separately and has its own release cycle, but the distribution managers bundle up a set of mutually-compatible components and call it "GNOME version x.xx", releasing new distributions on a time-based cycle.

The advantage of doing this would be that someone can easily get a version of a package that is compatible with the other packages they already have. Cabal/Hackage would know which distribution your installation is based on, and it would automatically grab the right version of the package you need. When upgrading a distribution you do it all at once: no problems with upgrading packages piecemeal and getting into a mess with multiple versions of dependencies.

I think doing this would deliver a system that "just works" for the majority of users most of the time. But it needs people to drive it and make it happen.

Cheers,
Simon

Hello Simon,

Thursday, February 21, 2008, 5:43:00 PM, you wrote:

these are two orthogonal questions
The idea is to have a group of people who manage "distributions" of Haskell software.
i don't think we need that many programmers just to add a single "HLP compatibility" field to Cabal :))) HLP-compatible libraries will assemble into gems automagically, you see? ;) so it becomes mainly an advertising problem. it would be great if ghc/hugs/nhc continued to be released each autumn. this would allow most of the libraries to be ported by December, and the result could then be advertised as the Haskell-20xx gem. actually, we might even recommend tagging libraries (starting with base) with their year of release as the major version, so that less thinking is required - just use GHC-2008 with ByteString-2008, HTTP-2008 and so on (while HTTP-2008 may also remain compatible with a few previous gems, say starting from Haskell-2006)
I wonder whether it would have been simpler to ship GHC-6.8 with a base-1.0 package, or to provide one on Hackage, that just re-exports the old modules under the known names
the problem here is not a lack of gems but the fluctuation of the base lib. don't overlook that he needs to use the same code with ghc 6.4...6.8. the only 100% solution to this problem proposed so far is freezing base

--
Best regards,
 Bulat                            mailto:Bulat.Ziganshin@gmail.com

On Thu, 2008-02-21 at 14:43 +0000, Simon Marlow wrote:
I think doing this would deliver a system that "just works" for the majority of users most of the time. But it needs people to drive it and make it happen.
In particular I think we need the infrastructure to keep the time required by distro/platform maintainers to a minimum, otherwise it can easily turn into a full time job.

I'd like to see an infrastructure where we can define subsets of hackage packages using fully automatic quality tests. Then further subsets defined by human review standards and consistent sets of packages that are tested together.

All this testing etc is based on the idea that we have clients like cabal-install (and other special-purpose clients) doing tests and analyses and reporting back to the hackage server. The hackage server should not be doing any heavy processing, just managing the information.

Here are a number of ideas for properties used to define hackage subsets:

* can satisfy all of its dependencies from within hackage (there are a couple of packages that depend on later versions of packages than exist within hackage)
* package can actually build on at least some platform
* package 'distcheck's ok, meaning it can generate a tarball that builds
* haddock docs build

These are all pretty basic. For a platform release we want this property:

* all packages within the subset can satisfy all their dependencies consistently and can build against them

This is different from the first property (that a solution exists to build each package individually): it requires that there is a solution to build all the packages in the set using only a single version of each package, and further that this solution does indeed build (see the sketch after this message).

Then for higher quality we also want:

* builds on windows, linux, macos
* builds with -Wall without too high a volume of warnings
* follows the package versioning policy
* uses bounded range deps for dependencies that follow the PVP
* has sufficient haddock documentation
* certain % test coverage and tests work

All of these can be checked automatically using suitable clients (mostly extensions of cabal+cabal-install) and hackage reporting. Some properties obviously require human review, like api quality and test quality (as opposed to quantity). But if packages can first be filtered down using these automated tests, then putting together a platform release becomes much more manageable.

Duncan
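[Here is the sketch referred to above: a minimal Haskell model, with all types hypothetical and a plain predicate standing in for Cabal's version ranges, of the two platform-release properties: a single version per package name, and every dependency satisfiable within the set.]

    import qualified Data.Map as Map
    import Data.Map (Map)

    type Name    = String
    type Version = [Int]

    -- A dependency: a package name plus a test standing in for a version range.
    data Dep = Dep Name (Version -> Bool)

    data Pkg = Pkg { pkgName :: Name, pkgVersion :: Version, pkgDeps :: [Dep] }

    -- A consistent set picks exactly one version per package name.
    toConsistentSet :: [Pkg] -> Maybe (Map Name Version)
    toConsistentSet = foldr insert (Just Map.empty)
      where
        insert _ Nothing  = Nothing
        insert p (Just m) = case Map.lookup (pkgName p) m of
          Just v | v /= pkgVersion p -> Nothing   -- two versions of one package
          _ -> Just (Map.insert (pkgName p) (pkgVersion p) m)

    -- Every dependency of every package must be satisfiable inside the set.
    closedUnderDeps :: Map Name Version -> [Pkg] -> Bool
    closedUnderDeps set = all (all ok . pkgDeps)
      where ok (Dep n inRange) = maybe False inRange (Map.lookup n set)

    -- Both properties together (whether the set then actually *builds* is an
    -- empirical question for the build clients).
    selfContained :: [Pkg] -> Bool
    selfContained pkgs = case toConsistentSet pkgs of
      Nothing  -> False
      Just set -> closedUnderDeps set pkgs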

Hello Duncan,

Friday, February 22, 2008, 2:28:15 AM, you wrote:
In particular I think we need the infrastructure to keep the time required by distro/platform maintainers to a minimum, otherwise it can easily turn into a full time job.
I'd like to see an infrastructure where we can define subsets of hackage packages using fully automatic quality tests. Then further subsets defined by human review standards and consistent sets of packages that are tested together.
Duncan, i like your concrete and detailed plan, but let's look again at WHAT we want to achieve. i think our goal is two-sided:

1) assemble the GEMS of packages that are guaranteed to work together
2) have some set of "good packages" that meet certain quality standards and are therefore recommended for use

let's consider a practical situation. in Oct 2007 ghc 6.8 arrives with LibX 1.0 bundled. authors of other libs start rewriting their libs to work with LibX 1.0. then, in Dec 2007, LibX 2.0 arrives and some libs are upgraded to take advantage of it. they become incompatible with the libs still using LibX 1.0 and we have a problem

the package versioning policy (i prefer to call it the HLP) allows us to watch over this problem. since every installed library defines exact version ranges for its dependencies, it should be easy to determine that LibY 1.0 and LibZ 1.0 cannot be used together because they rely on different (and incompatible) versions of LibX

but the next goal we want to reach is to PREVENT such situations as much as possible, and this is a social problem. in order to solve it we should limit the freedom of libraries that want to be "good citizens" to use new library versions. when ghc 6.8 arrives, all the libraries bundled with it - with their concrete VERSIONS - become the base for this year's HSL (haskell std libs) set. i.e., LibX 1.0 becomes part of HSL-2008. LibX 2.0 may arrive next month, but this cannot change the situation - once LibX 1.0 has been included in HSL-2008, it stays there

currently the October GHC version is more of a testbed and the December version is the practical vehicle, so i would propose the following scheme: from Oct to Dec developers port their libs to the new GHC (and HSL!) version, and in December we "officially" present HSL-2008 as the platform for practical haskell development in the next year. it's important that the main libs that form the foundation for application programming are updated within a limited time; say, GUI developers can't upgrade to ghc 6.8 before the GUI lib has been ported. so, in Dec 2007 we can advertise HSL-2008 which includes, aside from the core libs, GUI+DB+Web libs with their concrete versions/features, and this becomes the standard set of libs for application developers in 2008. then we can say not "once we had a Streams/Binary/FRP lib but it's lost in time", but "for 2008, we offer GUI, serialization and DB libs that really work with the current GHC version"

while it would be great to upgrade the main libs before December, this doesn't mean that HSL-2008 is frozen at that moment. it continues to grow by accepting new libs and bug-fixed versions of included libs (keeping the same API). the only thing prohibited is incompatible versions of the same libs (say, LibX 2.0 if LibX 1.0 is already there)

and here we come to the second goal mentioned at the beginning - distinguishing between high-quality and so-so libs. here we can establish some set of requirements and check them. we can provide a 1-5 star system or anything else, BUT even if some library completely lacks docs, doesn't work on windows or even fails its unit tests, it still belongs to the HSL-2008 gem if it works with the HSL-2008 libs and no previous incompatible version of it is already included in HSL-2008.
moreover, these conditions may be checked automatically when a package is uploaded to Hackage, so a library may be nominated AUTOMATICALLY as part of the HSL-2008 standard (!), although for practical purposes i think it's better that Hackage just OFFERS this and the package author AGREES to it when his package becomes ready for this honor

This rule doesn't prohibit the quick development of libraries and changing their interfaces every month. these new versions will just be considered "research" ones, with the user responsible for coordinating their versions. i recall Don's phrase "it's easy to use a newer ByteString version, just recompile the base lib" :)

So, hackage should be able to:

1) test that an uploaded library MAY be included in the HSL-20xx gem
2) provide a checkbox to make this decision
3) show, for each library, to which HSLs it belongs
4) allow filtering the library list to show the libs in a specified HSL
5) -.- the libs that are compatible with a specified HSL (including libs whose checkbox (2) was not checked)

list (5) is good if you just need to see all libs compatible with your setup, while list (4) provides the libs that are guaranteed to be compatible with any FUTURE version of the given HSL

next, all core libs should obey the HLP and have versions >=1.0. 0.xxx version numbers should be left for experiments without any compatibility guarantees. the same applies to libs on Hackage: i propose to consider any lib uploaded there which has a version >=1.0 and uses ranges for its dependencies (i.e. has finite lower and upper bounds) as a library obeying the HLP

overall, i propose to treat the library version as a technical field designed for computers. if a library developer has his own, non-HLP versioning policy, he can put these "human versions" into the "description" field. this restriction will allow us to track version dependencies automatically without adding new bureaucratic Cabal fields

two more notes: first, Hackage may check that each uploaded library doesn't use the same module names as existing libs, and propose to include it in HSL-xxx only if it doesn't reuse module names of libs that are already in the given HSL

second, the HLP doesn't specify whether new library versions may add *new* modules. if we allow this, then library functionality may be extended without losing any bit of compatibility with existing software

--
Best regards,
 Bulat                            mailto:Bulat.Ziganshin@gmail.com
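[As a rough illustration of the automatic checks Bulat proposes, here is a sketch under assumed types; none of this is actual Hackage code. A library is nominatable for an HSL if its version is >= 1.0, its dependencies are bounded, and its module names do not clash with modules already in the HSL.]

    import qualified Data.Set as Set

    type ModuleName = String
    type Version    = [Int]

    data Candidate = Candidate
      { modules     :: [ModuleName]
      , version     :: Version
      , boundedDeps :: Bool   -- do all deps have finite lower and upper bounds?
      }

    -- Nomination test for one HSL, represented by the set of module names
    -- of all libraries already included in it.
    nominatable :: Candidate -> Set.Set ModuleName -> Bool
    nominatable c hslModules =
         version c >= [1,0]                               -- obeys the HLP
      && boundedDeps c                                    -- deps use real ranges
      && not (any (`Set.member` hslModules) (modules c))  -- no module-name clash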

Bulat,

You mention the problem of having a new version of a library with an incompatible change appear where a previous version of that library was included in a platform release. I think a good model to follow here is GNOME's release model.

They do major releases on a 6-month schedule. Major releases can introduce new features, for example GNOME 2.20.0. Then after each major release they have a number of minor releases, e.g. 2.20.1, 2.20.2, 2.20.3. These are exclusively bug-fix releases, including updated versions of the packages that were in the original 2.20.0. Those updates are only allowed to fix bugs, not change any APIs.

The corresponding concept for a Haskell platform release is that every package follows the package versioning policy, and minor/bug-fix platform releases are only allowed to include updates that are API compatible (a one-line sketch of this rule follows the quoted text below). This is the same way that libraries included with GHC work: the library APIs should be the same between GHC 6.8.0 and 6.8.2. We have only very rarely deviated from that policy.

Duncan

On Fri, 2008-02-22 at 17:00 +0300, Bulat Ziganshin wrote:
Hello Duncan,
Friday, February 22, 2008, 2:28:15 AM, you wrote:
In particular I think we need the infrastructure to keep the time required by distro/platform maintainers to a minimum, otherwise it can easily turn into a full time job.
I'd like to see an infrastructure where we can define subsets of hackage packages using fully automatic quality tests. Then further subsets defined by human review standards and consistent sets of packages that are tested together.
Duncan, i like your concrete and detailed plan, but let's look again at WHAT we want to achieve. i think our goal is two-sided:
1) assemble the GEMS of packages that are guaranteed to work together
2) have some set of "good packages" that meet certain quality standards and are therefore recommended for use
let's consider a practical situation. in Oct 2007 ghc 6.8 arrives with LibX 1.0 bundled. authors of other libs start rewriting their libs to work with LibX 1.0. then, in Dec 2007, LibX 2.0 arrives and some libs are upgraded to take advantage of it. they become incompatible with the libs still using LibX 1.0 and we have a problem
the package versioning policy (i prefer to call it the HLP) allows us to watch over this problem. since every installed library defines exact version ranges for its dependencies, it should be easy to determine that LibY 1.0 and LibZ 1.0 cannot be used together because they rely on different (and incompatible) versions of LibX
but the next goal we want to reach is to PREVENT such situations as much as possible, and this is a social problem. in order to solve it we should limit the freedom of libraries that want to be "good citizens" to use new library versions. when ghc 6.8 arrives, all the libraries bundled with it - with their concrete VERSIONS - become the base for this year's HSL (haskell std libs) set. i.e., LibX 1.0 becomes part of HSL-2008. LibX 2.0 may arrive next month, but this cannot change the situation - once LibX 1.0 has been included in HSL-2008, it stays there
currently the October GHC version is more of a testbed and the December version is the practical vehicle, so i would propose the following scheme: from Oct to Dec developers port their libs to the new GHC (and HSL!) version, and in December we "officially" present HSL-2008 as the platform for practical haskell development in the next year. it's important that the main libs that form the foundation for application programming are updated within a limited time; say, GUI developers can't upgrade to ghc 6.8 before the GUI lib has been ported. so, in Dec 2007 we can advertise HSL-2008 which includes, aside from the core libs, GUI+DB+Web libs with their concrete versions/features, and this becomes the standard set of libs for application developers in 2008. then we can say not "once we had a Streams/Binary/FRP lib but it's lost in time", but "for 2008, we offer GUI, serialization and DB libs that really work with the current GHC version"
while it would be great to upgrade the main libs before December, this doesn't mean that HSL-2008 is frozen at that moment. it continues to grow by accepting new libs and bug-fixed versions of included libs (keeping the same API). the only thing prohibited is incompatible versions of the same libs (say, LibX 2.0 if LibX 1.0 is already there)
and here we come to the second goal mentioned at the beginning - distinguishing between high-quality and so-so libs. here we can establish some set of requirements and check them. we can provide a 1-5 star system or anything else, BUT even if some library completely lacks docs, doesn't work on windows or even fails its unit tests, it still belongs to the HSL-2008 gem if it works with the HSL-2008 libs and no previous incompatible version of it is already included in HSL-2008. moreover, these conditions may be checked automatically when a package is uploaded to Hackage, so a library may be nominated AUTOMATICALLY as part of the HSL-2008 standard (!), although for practical purposes i think it's better that Hackage just OFFERS this and the package author AGREES to it when his package becomes ready for this honor
This rule doesn't prohibit the quick development of libraries and changing their interfaces every month. these new versions will just be considered "research" ones, with the user responsible for coordinating their versions. i recall Don's phrase "it's easy to use a newer ByteString version, just recompile the base lib" :)
So, hackage should be able to
1) test that an uploaded library MAY be included in the HSL-20xx gem
2) provide a checkbox to make this decision
3) show, for each library, to which HSLs it belongs
4) allow filtering the library list to show the libs in a specified HSL
5) -.- the libs that are compatible with a specified HSL (including libs whose checkbox (2) was not checked)
list (5) is good if you just need to see all libs compatible with your setup, while list (4) provides the libs that are guaranteed to be compatible with any FUTURE version of the given HSL
next, all core libs should obey the HLP and have versions >=1.0. 0.xxx version numbers should be left for experiments without any compatibility guarantees. the same applies to libs on Hackage: i propose to consider any lib uploaded there which has a version >=1.0 and uses ranges for its dependencies (i.e. has finite lower and upper bounds) as a library obeying the HLP
overall, i propose to treat the library version as a technical field designed for computers. if a library developer has his own, non-HLP versioning policy, he can put these "human versions" into the "description" field. this restriction will allow us to track version dependencies automatically without adding new bureaucratic Cabal fields
two more notes: first, Hackage may check that each uploaded library doesn't use the same module names as existing libs, and propose to include it in HSL-xxx only if it doesn't reuse module names of libs that are already in the given HSL
second, the HLP doesn't specify whether new library versions may add *new* modules. if we allow this, then library functionality may be extended without losing any bit of compatibility with existing software
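[To pin down the compatibility rule Duncan describes above the quoted text: under the PVP, the major version is the first two components of the version number, and a minor platform release may pick up a package update only if the major version is unchanged. A one-line sketch:]

    -- May a minor platform release pick up this package update?
    apiCompatible :: [Int] -> [Int] -> Bool
    apiCompatible old new = take 2 old == take 2 new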

Bulat Ziganshin wrote:
[...excellent proposal snipped...]

I think this is a very good idea.

Cheers
Ben

On Thu, 21 Feb 2008, Duncan Coutts wrote:
On Thu, 2008-02-21 at 08:12 +0100, Henning Thielemann wrote:
It was said that Cabal would also work with GHC-6.2. I didn't get it running and then switched to GHC-6.4. It was said that multiple versions of GHC can be installed on the same machine. That is somewhat true, but e.g. runhaskell cannot be told which actual GHC binary to use, and thus it is not possible to run Cabal with a compiler, or a compiler version, different from the one to be used for the package.
It's always possible to:

    ghc-6.4.1 --make Setup.hs -o setup
    ./setup configure ...etc
rather than using whatever ghc runghc/runhaskell finds on the $PATH. I keep 3 versions of ghc installed this way to test Cabal and other stuff.
Nice idea.
I decided to upgrade to Cabal-1.2, which also required installing filepath. I know that installation could be simplified by cabal-install, which has even more dependencies, so I cancelled that installation project. Then I equipped my Cabal files with a switch on splitBase, which merely duplicates the globally known information that the former base-1.0 package has been split into base-2.0 or base-3.0 plus satellites. It doesn't give the user any new value, but it costs the package maintainer a lot of time. I wonder whether it would have been simpler to ship GHC-6.8 with a base-1.0 package, or to provide one on Hackage, that just re-exports the old modules under the known names.
We know this issue is a mess. We've discussed it at length:
http://hackage.haskell.org/trac/ghc/wiki/PackageCompatibility
Sadly, at the moment it is impossible to supply a base-1.0 with ghc-6.8, because packages cannot re-export modules, and even if they could, ghc and cabal would have no way to figure out whether a particular program was intended to use one or the other.
I can't follow you here. I think it must be possible to provide a base-1.1 which exports the same modules as base-1.0 but gets them from other packages. It could be considered the last version of the base-1 series and the transition to base-2.0.
Don't misunderstand me. I embrace tidying up the libraries, but I urge that it be done in a more compatible manner.
So do I. Tell us what you think about the suggestions on the PackageCompatibility page above.
With respect to "4. Allow packages to re-export modules": Is it a good idea to include the versioning in the language? I see no need for it. I thought the re-exporting of modules could be done by 'ghc-pkg'.

Btw, I had problems with hidden packages in GHC-6.4.1. They still interfere with other package versions, and thus I have to unregister all versions of a package but one in order to get anything compiled. Is this a known issue of GHC-6.4.1, or am I expecting the wrong behaviour from ghc-pkg? I thought the exposed version is visible in GHCi and without a -package option, whereas the hidden but registered packages can be imported with -package and thus by Cabal.

I find the solution "4.3 Don't rename base" the best one. Is this the way for GHC-6.10?
Deprecated packages do not need to be banned from the internet. It is not necessary to force programmers to adapt to changes immediately; it is better to provide ways of making the changes later, when the time has come, in a smooth manner. I thought it was a good idea to adapt to FunctorM in GHC-6.4 quickly instead of rolling my own class. Then, two GHC releases later, this module disappeared, replaced by Traversable. I thought it was good style to rewrite code from List.lookup to FiniteMap in GHC-6.0; by GHC-6.4 FiniteMap had already disappeared, replaced by Data.Map. Why is it necessary to make working libraries obsolete so quickly?
Though the advantage of more packages is that we can have (and there is) a compatibility package for the old FiniteMap.
Now a package that provides FunctorM for GHC>=6.6 and Traversable for GHC<6.6 would be great.
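[The FunctorM half of such a compatibility package could be a small CPP shim. A minimal sketch, with a hypothetical module name, only the fmapM method of the original class, and instances for just two sample types:]

    {-# LANGUAGE CPP #-}
    module Data.FunctorM.Compat (FunctorM(..)) where

    #if __GLASGOW_HASKELL__ < 606
    -- On older GHCs the real class still exists: simply re-export it.
    import Data.FunctorM (FunctorM(..))
    #else
    -- On newer GHCs, reconstruct the old class on top of Data.Traversable.
    import qualified Data.Traversable as T

    class FunctorM f where
      fmapM :: Monad m => (a -> m b) -> f a -> m (f b)

    instance FunctorM [] where
      fmapM = T.mapM

    instance FunctorM Maybe where
      fmapM = T.mapM
    #endif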

On Thu, 2008-02-21 at 16:19 +0100, Henning Thielemann wrote:
Sadly, at the moment it is impossible to supply a base-1.0 with ghc-6.8, because packages cannot re-export modules, and even if they could, ghc and cabal would have no way to figure out whether a particular program was intended to use one or the other.
I can't follow you here. I think it must be possible to provide a base-1.1 which exports the same modules as base-1.0 but gets them from other packages. It could be considered the last version of the base-1 series and the transition to base-2.0.
Ok, it would be possible to have two distinct base packages (though there is a restriction in ghc that would need to be lifted first). It would not be a very helpful situation, however, since base-1.0:Prelude.Int /= base-2.0:Prelude.Int, so you'd have to have completely separate stacks of packages for each version of base. Our package infrastructure is just not up to dealing with that at the moment. We have enough problems when we upgrade bytestring and then have packages depending on packages that were built against different versions of that package. For example see: http://hpaste.org/5803
Btw, I had problems with hidden packages in GHC-6.4.1. They still interfere with other package versions, and thus I have to unregister all versions of a package but one in order to get anything compiled. Is this a known issue of GHC-6.4.1, or am I expecting the wrong behaviour from ghc-pkg?
It's a known issue in GHC-6.4.1; it's fixed in 6.4.2. I would recommend you upgrade, but you've already told us the long list of reasons why that is impractical.
I thought the exposed version is visible in GHCi and without a -package option, whereas the hidden but registered packages can be imported with -package and thus by Cabal.
Yes, in 6.4.2 and later.

Duncan
participants (5): Ben Franksen, Bulat Ziganshin, Duncan Coutts, Henning Thielemann, Simon Marlow