On Tue, Oct 20, 2015 at 2:24 PM Ivan Perez <ivan.perez@keera.co.uk> wrote:
On 20/10/15 19:47, Mike Meyer wrote:
On Tue, Oct 20, 2015 at 1:35 PM Gregory Collins <greg@gregorycollins.net> wrote:
The point Johan is trying to make is this: if I'm thinking of using Haskell, then I'm taking on a lot of project risk to get a (hypothetical, difficult to quantify) X% productivity benefit. If choosing it actually costs me a (real, obvious, easy to quantify) Y% tax because I have to invest K hours every other quarter fixing all my programs to cope with random/spurious changes in the ecosystem and base libraries, then unless we can clearly convince people that X >> Y, the rationale for choosing to use it is degraded or even nullified altogether.

So I'll rephrase a question I asked earlier that never got an answer: if I'm developing a commercial project based on ghc and some ecosystem, what would possibly cause me to change either the ghc version or any part of the ecosystem every other quarter? Or ever, for that matter?
I don't know about them, but I can tell you my personal experience.

If GHC and all libraries were perfect, free from bugs and fully optimized, then you'd be right: there would be no reason to change.

But if you ever hit a bug in GHC or a library that has been fixed in a later version, or if you want an improvement made to either, you may have to update the compiler.

Library creators/maintainers do not always keep their libraries compatible with very old or very new versions of the compiler. In an ecosystem like ours, with three versions of the compiler in simultaneous use, each with different language features and changed base APIs, compatibility requires a lot of work.
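
To make this concrete, here is a minimal sketch of the kind of compatibility shim I mean (the module and the choice of lookupEnv are only an illustration): base gained System.Environment.lookupEnv in 4.6.0.0, which shipped with GHC 7.6, so a library that also supports older compilers ends up guarding it with CPP:

{-# LANGUAGE CPP #-}
module Compat (lookupEnv') where

#if MIN_VERSION_base(4,6,0)
import System.Environment (lookupEnv)
#else
import System.Environment (getEnvironment)
#endif

-- One function, several compiler generations: on base >= 4.6 we just
-- re-export lookupEnv; on older bases we emulate it by searching the
-- whole environment. (Illustrative shim, not from any real library.)
lookupEnv' :: String -> IO (Maybe String)
#if MIN_VERSION_base(4,6,0)
lookupEnv' = lookupEnv
#else
lookupEnv' name = fmap (lookup name) getEnvironment
#endif

Multiply that by every changed function and every supported compiler, across all your libraries, and the maintenance cost adds up quickly.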

This problem is transitive: if you depend on (a new version of a library that depends on)* a new version of base or a new language feature, you may have to update GHC. If you do not have the resources to backport those fixes and improvements, you'll be forced to update. In large projects you are likely to use hundreds of auxiliary libraries, so this is very likely to happen.
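
As a made-up illustration of that transitivity: suppose my .cabal file contains

    build-depends: base >= 4.6 && < 4.9
                 , somelib >= 2.0   -- somelib is hypothetical

and somelib-2.0 itself requires base >= 4.8. Since base is tied to the compiler, my project now effectively builds only with GHC 7.10 or later, even though nothing in my own code asked for the newer compiler.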

I recently had to do this for one library because I could only compile it with a newer version of GHC. This project had 30K lines of Haskell split across dozens of libraries and a few commercial projects in production. It meant fixing, recompiling, packaging and testing everything again, which takes days, and it's not unattended work :( It could easily happen again if I depend on anything that stops compiling with this version of GHC because someone considers it "outdated" or does not have the resources to maintain two versions of their library.

Does that more or less answer your question?

Not really. IIUC, your fundamental complaint is that the cost of tracking changes to the Haskell ecosystem outweighs any potential gains from using Haskell. But the choices that lead you to needing to track those changes don't make sense to me.

For instance, you talk about compatibility requiring a lot of work, which I presume means between projects. Yes, having to swap out ecosystems and toolsets when you change projects can be a PITA, but even maintaining each environment by hand is less work than trying to keep all your projects compatible across multiple environments. So why do that, especially when you have tools like virtual environments and stack to take away the pain of juggling multiple environments?
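
For example, a stack.yaml of a handful of lines pins both the compiler and the whole package set to one snapshot (lts-3.11 is only an example; that snapshot ships GHC 7.10.2):

    # stack.yaml: the resolver pins GHC and every package version
    resolver: lts-3.11
    packages:
    - '.'

Once that is committed, the project builds the same way next quarter no matter what happens on Hackage in the meantime.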

And yes, if some part of the ecosystem has a bug you need fixed and an update delivers the fix, updating is one option. But it also comes with a cost: you need to verify that the update didn't introduce any new bugs while fixing the old one, and you have to deal with possible changes in the API. And as you note, if that forces you to update some other part of the ecosystem, all that work propagates to those other parts. It indeed adds up to a lot of work; enough that I have to question whether it's really less work than backporting a fix, or even developing one from scratch.

Over a couple of decades of building commercial projects in the P languages, when faced with the alternatives you outline here, updating anything major was never the choice if more than one person was actively writing code. That was true even with languages that put a priority on not breaking old code precisely to minimize the cost of such an update.

Maybe there's something I'm missing about Haskell that makes fixing somebody else's code take a lot more resources than it does in other languages. If so, then that, not the changing ecosystem, is the real argument against Haskell.