Channel9 Interview: Software Composability and the Future of Languages

No doubt many of you will have seen the interview[1] on Channel9 with Anders Hejlsberg, Herb Sutter, Erik Meijer and Brian Beckman. These are some of Microsoft's top language gurus, and they discuss the future evolution of programming languages. In particular they identify composability, concurrency and FP as being important trends. However their focus is on borrowing features of FP and bringing them into mainstream imperative languages; principally C#.

Naturally the subject of Haskell comes up repeatedly throughout the interview. Disappointingly they characterize Haskell as being an impractical language, only useful for research. Erik Meijer at one point states that programming in Haskell is too hard and compares it to assembly programming! Yet the interviewees continually opine on the difficulty of creating higher-level abstractions when you can never be sure that a particular block of imperative code is free of side effects. If there were ever a case of the answer staring somebody in the face...

I found this interview fascinating but also exasperating. It's a real shame that no reference was made to STM in Haskell. I don't know why the interviewer doesn't even refer to the earlier Channel9 interview with Simon Peyton Jones and Tim Harris - it appears to be the same interviewer.

Still, it's nice to see that ideas from Haskell specifically and FP generally are gaining more and more ground in the mainstream programming world. It also highlights some of the misconceptions that still exist and need to be challenged, e.g. the idea that Haskell is too hard or is impractical for real work.

[1] http://channel9.msdn.com/Showpost.aspx?postid=273697

impractical language, only useful for research. Erik Meijer at one point states that programming in Haskell is too hard and compares it to assembly programming!
He brings up a very good point. Using a monad lets you deal with side effects but also forces the programmer to specify an exact ordering. This *is* a bit like making me write assembly language. I have to write:
do { x <- getSomeNum
   ; y <- anotherWayToGetANum
   ; return (x + y) }
even if the computations of x and y are completely independent of each other. Yes, I can use liftM2 to hide the extra work (or fmap) but I had to artificially impose an order on the computation. I, the programmer, had to pick an order. OK, maybe "assembly language" is a bit extreme (I get naming, allocation and garbage collection!) but it is primitive and overspecifies the problem. Tim Newsham http://www.thenewsh.com/~newsham/
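(For concreteness, the liftM2 version alluded to above might look like this; getSomeNum and anotherWayToGetANum are placeholder actions invented for the example:)

```haskell
import Control.Monad (liftM2)

-- Placeholder actions standing in for two independent computations.
getSomeNum :: IO Int
getSomeNum = return 2

anotherWayToGetANum :: IO Int
anotherWayToGetANum = return 3

-- liftM2 hides the intermediate names, but the two actions are still
-- sequenced left-to-right by (>>=) under the hood.
sumBoth :: IO Int
sumBoth = liftM2 (+) getSomeNum anotherWayToGetANum

main :: IO ()
main = sumBoth >>= print
```

The ordering is merely hidden, not removed, which is exactly the point being made.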

Tim Newsham wrote:
I have to write:
do { x <- getSomeNum
   ; y <- anotherWayToGetANum
   ; return (x + y) }
even if the computations of x and y are completely independent of each other.
I too have really missed a "parallel composition" operator to do something like the above. Something like

  do { { x <- getSomeNum || y <- anotherWayToGetANum }
     ; return (x+y) }

Actually, that syntax is rather hideous. What I would _really_ like to write is

  do { (x,y) <- getSomeNum || anotherWayToGetANum
     ; return (x+y) }

I would be happy to tell Haskell explicitly that my computations are independent (like the above), to expose parallelization opportunities. Right now, not only can I NOT do that, I am forced to do the exact opposite, and FORCE sequentiality.

Jacques
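(A combinator with that shape is definable today, though for a plain Monad it can only document the independence, not exploit it; this is a sketch, with the two actions as invented placeholders:)

```haskell
import Control.Monad (liftM2)

-- A hypothetical "independent composition" combinator. The type says
-- the two actions yield a pair; a plain Monad instance still has to
-- pick an order internally (here: left first).
(|||) :: Monad m => m a -> m b -> m (a, b)
a ||| b = liftM2 (,) a b

getSomeNum, anotherWayToGetANum :: IO Int
getSomeNum = return 2
anotherWayToGetANum = return 3

main :: IO ()
main = do
  (x, y) <- getSomeNum ||| anotherWayToGetANum
  print (x + y)
```

To actually exploit the independence one would need parallel machinery (e.g. Control.Parallel for pure values) rather than a Monad-only combinator.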

Jacques Carette wrote:
Tim Newsham wrote:
I have to write:
do { x <- getSomeNum
   ; y <- anotherWayToGetANum
   ; return (x + y) }
even if the computations of x and y are completely independent of each other.
I too have really missed a "parallel composition" operator to do something like the above. Something like
do { { x <- getSomeNum || y <- anotherWayToGetANum }
   ; return (x+y) }
Actually, that syntax is rather hideous. What I would _really_ like to write is

  do { (x,y) <- getSomeNum || anotherWayToGetANum
     ; return (x+y) }
I would be happy to tell Haskell explicitly that my computations are independent (like the above), to expose parallelization opportunities. Right now, not only can I NOT do that, I am forced to do the exact opposite, and FORCE sequentiality.
Jacques
What is wanted is a specific relaxation of the ordering required by the Monad's structure. For pure computation Control.Parallel.Strategies may be helpful. If what was wanted was to keep sequencing but lose binding then the new Control.Applicative would be useful. It almost looks like we want your pair combinator:

  do { (x,y) <- parallelPair getSomeNum anotherWayToGetANum
     ; return (x+y) }

This is principled only in a Monad that can supply the same "RealWorld" to both operations passed to parallelPair. After they execute, this same "RealWorld" is the context for the "return (x+y)" statement. This ability to run three computations from the same "RealWorld" seems (nearly) identical to backtracking in a nondeterministic monad, which is usually exposed by a MonadPlus instance.

The use of pairs looks a lot like the arrow notation. And

  parallelPair a b = a &&& b

looks right for arrows. And since monads are all arrows this works, but Kleisli implies ordering like liftM2. For specific Monads you can write instances of a new class which approximate the semantics you want.
import Control.Arrow
import Data.Char
import Control.Monad
import Control.Monad.State
import System.IO.Unsafe

type M = State Int

main = print $ runState goPar 65   -- should be ((65,'A'),65)

opA :: (MonadState Int m) => m Int
opA = do i <- get
         put (10+i)
         return i

opB :: (MonadState Int m) => m Char
opB = do i <- get
         put (5+i)
         return (chr i)

goPar :: State Int (Int,Char)
goPar = opA `parallelPair` opB

class (Monad m) => MonadPar m where
  parallelPair :: m a -> m b -> m (a,b)

instance MonadPar (State s) where
  parallelPair a b = do s <- get
                        let a' = evalState a s
                            b' = evalState b s
                        return (a',b')

-- No obvious way to run the inner monad (without more machinery),
-- so we have to resort to ordering
instance (Monad m) => MonadPar (StateT s m) where
  parallelPair a b = do s <- get
                        a' <- lift $ evalStateT a s
                        b' <- lift $ evalStateT b s
                        return (a',b')

-- Reader and Writer work like State

-- Use unsafeInterleaveIO to make a and b lazy and unordered...
instance MonadPar IO where
  parallelPair a b = do a' <- unsafeInterleaveIO a
                        b' <- unsafeInterleaveIO b
                        return (a',b')

k :: State Int b -> Kleisli (State Int) a b
k op = Kleisli (const op)

runK :: Kleisli (State Int) a a1 -> (a1, Int)
runK kop = runState (runKleisli kop undefined) 65

go :: State Int a -> (a,Int)
go op = runK (k op)

kab :: Kleisli (State Int) a (Int, Char)
--kab = k opA &&& k opB
kab = proc x -> do a <- k opA -< x
                   b <- k opB -< x
                   returnA -< (a,b)

On Friday 26 January 2007 22:14, Tim Newsham wrote:
impractical language, only useful for research. Erik Meijer at one point states that programming in Haskell is too hard and compares it to assembly programming!
He brings up a very good point. Using a monad lets you deal with side effects but also forces the programmer to specify an exact ordering. This *is* a bit like making me write assembly language. I have to write:
do { x <- getSomeNum
   ; y <- anotherWayToGetANum
   ; return (x + y) }
even if the computations of x and y are completely independent of each other. Yes, I can use liftM2 to hide the extra work (or fmap) but I had to artificially impose an order on the computation. I, the programmer, had to pick an order.
Humm. While I can accept that this is a valid criticism of Haskell's monadic structure for dealing with I/O, I fail to see how it could drive a decision to prefer an imperative language like C#, where every statement has this property (overspecification of evaluation order). The only mainstream-ish general-purpose language I know of that attempts to address this problem head-on is Fortress. (Although, to be honest, I don't know enough about Fortress to know how it handles I/O, or even whether it is an actual improvement over the situation in Haskell.)
Ok, maybe "assembly language" is a bit extreme (I get naming, allocation and garbage collection!) but it is primitive and overspecifies the problem.
Tim Newsham http://www.thenewsh.com/~newsham/

Humm. While I can accept that this is a valid criticism of Haskell's monadic structure for dealing with I/O, I fail to see how it could drive a decision to prefer an imperative language like C#, where every statement has this property (overspecification of evaluation order).
True.. perhaps his objection was related to having a bulky syntax (one line per action, if one is not using a higher-order function to combine actions) rather than having an order-of-evaluation rule in the language and letting the programmer (mostly) ignore it (sometimes to his own peril). Tim Newsham http://www.thenewsh.com/~newsham/

Hello Tim, Saturday, January 27, 2007, 10:23:31 PM, you wrote:
Humm. While I can accept that this is a valid criticism of Haskell's monadic structure for dealing with I/O, I fail to see how it could drive a decision to prefer an imperative language like C#, where every statement has this property (overspecification of evaluation order).
True.. perhaps his objection was related to having a bulky syntax (one
in *practice*, C++ compilers can reorder statements. is this true for Haskell compilers? -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Tuesday 30 January 2007 19:02, Bulat Ziganshin wrote:
Hello Tim,
Saturday, January 27, 2007, 10:23:31 PM, you wrote:
Humm. While I can accept that this is a valid criticism of Haskell's monadic structure for dealing with I/O, I fail to see how it could drive a decision to prefer an imperative language like C#, where every statement has this property (overspecification of evaluation order).
True.. perhaps his objection was related to having a bulky syntax (one
in *practice*, C++ compilers can reorder statements. is this true for Haskell compilers?
Well... I think most reordering occurs very late in the process, during instruction selection. These reorderings are very fine-grained, very local in scope and are really only (supposed to be!) done when the reordering can be shown to have no effect on the outcome of the computation. I'd be very surprised to see a C or C++ compiler reordering something like function calls. (Although, with gcc I believe there's a flag where you can explicitly mark a function as being side-effect free. I can see a compiler perhaps moving calls to such functions around. But really, how's that any better than what we've got in Haskell?). Caveat: I have only a passing knowledge of the black art of C/C++ compiler construction, so I could be wrong. Rob Dockins

Hello Tim, Saturday, January 27, 2007, 6:14:01 AM, you wrote:
He brings up a very good point. Using a monad lets you deal with side effects but also forces the programmer to specify an exact ordering.
1. it's just a *syntax* issue. at least, ML's solution can be applied:

  x <- .y + .z

where "." is an explicit dereferencing operator (readIORef)

2. it bites me too. it's why i say that C++ is a better imperative language than Haskell. there are also many other similar issues, such as the lack of good syntax for "for", "while", "break" and other well-known statements, the inability to use "return" inside a block, and so on -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com
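(The ML-style reading can be approximated today with a lifted operator over readIORef, though the syntax stays heavier; the operator name below is made up for illustration:)

```haskell
import Data.IORef (IORef, newIORef, readIORef)
import Control.Monad (liftM2)

-- A lifted (+): dereference both refs and add, in one expression.
(.+.) :: IORef Int -> IORef Int -> IO Int
y .+. z = liftM2 (+) (readIORef y) (readIORef z)

main :: IO ()
main = do
  y <- newIORef 10
  z <- newIORef 32
  x <- y .+. z     -- plays the role of ML's  x <- .y + .z
  print x          -- prints 42
```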

On Wed, Jan 31, 2007 at 02:46:27AM +0300, Bulat Ziganshin wrote:
2. it bites me too. it's why i say that C++ is better imperative language than Haskell.
there are also many other similar issues, such as lack of good syntax for "for", "while", "break" and other well-known statements,
On the other hand you have an ability to define your own control structures.
inability to use "return" inside of block and so on
"inability" is an exaggeration - you can use the ContT monad transformer, which even allows you to choose how "high" you want to jump. But you probably already know this and just want to point that it is cumbersome? Best regards Tomasz
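(For the record, a ContT-based early exit might be sketched like this; validate and its rules are invented for the example:)

```haskell
import Control.Monad (when)
import Control.Monad.Cont (runContT, callCC)

-- 'exit' aborts the rest of the callCC block with the given result,
-- much like an early 'return' in C.
validate :: Int -> IO String
validate n = flip runContT return $ callCC $ \exit -> do
  when (n < 0)   $ exit "negative"
  when (n > 100) $ exit "too big"
  return "ok"

main :: IO ()
main = mapM_ (\n -> validate n >>= putStrLn) [-1, 50, 200]
```

Nesting several callCC scopes is what gives the choice of how "high" to jump.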

Hello Tomasz, Wednesday, January 31, 2007, 12:01:16 PM, you wrote:
there are also many other similar issues, such as lack of good syntax for "for", "while", "break" and other well-known statements,
On the other hand you have an ability to define your own control structures.
i have a lot, but their features are limited, both in terms of automatic lifting and overall syntax. let's consider

  while (hGetBuf h buf bufsize == bufsize)
    crc := updateCrc crc buf bufsize
    break if crc==0
  print crc

how can this be expressed in Haskell, without losing clarity?
inability to use "return" inside of block and so on
"inability" is an exaggeration - you can use the ContT monad transformer, which even allows you to choose how "high" you want to jump. But you probably already know this and just want to point that it is cumbersome?
don't know and don't want to use such a hard way. there is a simpler solution, but it still requires writing more boilerplate code than C:

  res <- doSomething
  if isLeft res
    then return $ fromLeft res
    else do let (Right x) = res
            ...

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Bulat Ziganshin wrote:
there are also many other similar issues, such as lack of good syntax for "for", "while", "break" and other well-known statements,
On the other hand you have an ability to define your own control structures.
i have a lot, but their features are limited, both in terms of automatic lifting and overall syntax. let's consider
while (hGetBuf h buf bufsize == bufsize)
  crc := updateCrc crc buf bufsize
  break if crc==0
print crc
I guess that the crc is a simple fold over the single bytes:

  crc xs = foldl' (\crc word8 -> crc `xor` word8) 0 xs

You do not need xs to be an inefficient String; Data.ByteString.Lazy gives you single-byte access (f.i. via a fold) but internally reads stuff in a chunked way, just like you now manually do with hGetBuf. Lazy evaluation is very handy for separating the two concerns of reading the chunks of bytes and presenting them in a non-chunked way. Regards, apfelmus
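(Concretely, the byte-at-a-time fold over a chunk-reading lazy ByteString might look like this; the xor "crc" is a toy stand-in for a real CRC, and the file name is invented:)

```haskell
import qualified Data.ByteString.Lazy as L
import Data.Bits (xor)
import Data.Word (Word8)

-- Toy "crc": a strict left fold over individual bytes. The lazy
-- ByteString performs chunked reads internally; the fold never sees them.
toyCrc :: L.ByteString -> Word8
toyCrc = L.foldl' xor 0

main :: IO ()
main = do
  bytes <- L.readFile "input.dat"   -- hypothetical input file
  print (toyCrc bytes)
```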

Hello apfelmus, Wednesday, January 31, 2007, 10:38:00 PM, you wrote:
while (hGetBuf h buf bufsize == bufsize)
  crc := updateCrc crc buf bufsize
  break if crc==0
print crc
I guess that the crc is a simple fold over the single bytes:
yes, but it is written in efficient C. and anyway it is a monadic operation, because the buffer contents are changed on each call. so what is your translation? :)
in a chunked way, just like you now manually do with hGetBuf. Lazy evaluation is very handy for separating those the two concerns of reading the chunks of bytes and presenting them in a non-chunked way.
i need to write MONADIC code. these one-liners are just examples. i can paste whole functions of mine that do much more - read files, simultaneously compress and decompress data using C libs, pass data buffers back and forth between C threads, update GUI state, and so on -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Wed, Jan 31, 2007 at 07:46:15PM +0300, Bulat Ziganshin wrote:
Wednesday, January 31, 2007, 12:01:16 PM, you wrote:
there are also many other similar issues, such as lack of good syntax for "for", "while", "break" and other well-known statements,
On the other hand you have an ability to define your own control structures.
i have a lot, but their features are limited, both in terms of automatic lifting and overall syntax. let's consider
while (hGetBuf h buf bufsize == bufsize)
  crc := updateCrc crc buf bufsize
  break if crc==0
print crc
A direct translation could look like this:

  whileM c b = do { x <- c; when x (b >> whileM c b) }

  f h buf = flip runContT return $ do
    callCC $ \break -> do
      flip execStateT 0 $ do
        whileM (liftIO (liftM (== bufsize) (hGetBuf h buf bufsize))) $ do
          do crc <- get
             crc' <- liftIO (updateCrc crc buf bufsize)
             put crc'
          crc <- get
          when (crc == 0) (lift (break crc))
          liftIO (print crc)

Which, admittedly, is much more lengthy. If we assume that hGetBuf, updateCrc and print can work in any MonadIO, and we define

  inContT x = flip runContT return x

then it becomes slightly shorter:

  inContT $ callCC $ \break -> do
    flip execStateT 0 $ do
      whileM (liftM (== bufsize) (hGetBuf h buf bufsize)) $ do
        do crc <- get
           crc' <- updateCrc crc buf bufsize
           put crc'
        crc <- get
        when (crc == 0) (lift (break crc))

Let's define:

  modifyM f = do x <- get
                 x' <- f x
                 put x'

and change the order of parameters in updateCrc. We get:

  inContT $ callCC $ \break -> do
    flip execStateT 0 $ do
      whileM (liftM (== bufsize) (hGetBuf h buf bufsize)) $ do
        modifyM (updateCrc buf bufsize)
        crc <- get
        when (crc == 0) (lift (break crc))
        print crc
how can this be expressed in Haskell, without losing clarity?
I think it's quite clear what it does.
"inability" is an exaggeration - you can use the ContT monad transformer, which even allows you to choose how "high" you want to jump. But you probably already know this and just want to point that it is cumbersome?
don't know and don't want to use such a hard way.
I gave an example above. You can "break" with a return value, so it seems it's what you want.
there is a simpler solution, but it still requires to write more boilerplate code than C:
res <- doSomething
if isLeft res
  then return $ fromLeft res
  else do let (Right x) = res
          ...
Not simpler, but easier... and uglier. Somehow I don't like to solve problems on the level of programming language syntax. Best regards Tomasz

Hello Tomasz, Thursday, February 1, 2007, 1:15:39 PM, you wrote:
while (hGetBuf h buf bufsize == bufsize)
  crc := updateCrc crc buf bufsize
  break if crc==0
print crc
inContT $ callCC $ \break -> do
  flip execStateT 0 $ do
    whileM (liftM (== bufsize) (hGetBuf h buf bufsize)) $ do
      modifyM (updateCrc buf bufsize)
      crc <- get
      when (crc == 0) (lift (break crc))
      print crc
how can this be expressed in Haskell, without losing clarity?
I think it's quite clear what it does.
first: it's longer than the original. what we can learn here is that imperative languages have built-in support for "monadic" features, including automatic lifting and continuations. OTOH, of course, they don't support type inference. so in one environment we need to explicitly declare types, while in the other we need to explicitly specify lifting operations

second: unfortunately, current Haskell libs are defined in terms of the IO monad, not MonadIO. while this issue, i hope, will be addressed in the future, i write programs right now :) -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

while (hGetBuf h buf bufsize == bufsize)
  crc := updateCrc crc buf bufsize
  break if crc==0
print crc
inContT $ callCC $ \break -> do
  flip execStateT 0 $ do
    whileM (liftM (== bufsize) (hGetBuf h buf bufsize)) $ do
      modifyM (updateCrc buf bufsize)
      crc <- get
      when (crc == 0) (lift (break crc))
      print crc
first: it's longer than the original.
is it, though? what makes it longer are features that the original doesn't have, I think. so how about a less ambitious translation, with crc in an MVar and a while-loop that can be broken from the body as well as the condition:

  while (hGetBuf h buf bufsize .==. (return bufsize)) $ do
    crc =: updateCrc crc buf bufsize
    breakIf ((val crc) .==. (return 0)) `orElse` do
      printM (val crc)
      od

using definitions roughly like this

  while c b = do { r <- c; when r (b >>= flip when (while c b)) }
  continueIf c m = c >>= \b-> if b then od else m
  breakIf c m = c >>= \b-> if b then return False else m
  orElse = ($)
  od :: Monad m => m Bool
  od = return True
  x .==. y = liftM2 (==) x y
  printM x = x >>= print
  v =: x = do { rx <- x; swapMVar v rx }
  val = readMVar

not that I like that style;-) Claus

Hello Claus, Thursday, February 1, 2007, 6:34:23 PM, you wrote:
is it, though? what makes it longer are features that the original doesn't have,
and what i don't need :)
I think. so how about a less ambitious translation, with crc in an MVar and a while-loop that can be broken from the body as well as the condition:
while (hGetBuf h buf bufsize .==. (return bufsize)) $ do
  crc =: updateCrc crc buf bufsize
  breakIf ((val crc) .==. (return 0)) `orElse` do
    printM (val crc)
    od
your solution is just to make a lifted copy of each and every pure operation. so one should define 2^n operations, where n is the number of arguments -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

while (hGetBuf h buf bufsize .==. (return bufsize)) $ do
  crc =: updateCrc crc buf bufsize
  breakIf ((val crc) .==. (return 0)) `orElse` do
    printM (val crc)
    od
your solution is just to make a lifted copy of each and every pure operation. so one should define 2^n operations, where n is the number of arguments
ah, I thought the problem at hand was breaking out of a while loop. but if you look closely, I think you'll find it to be ~2 operations, the monadic one, and possibly a pure one to be lifted. one can always lift pure arguments via return, and use the fully lifted monadic operation, as I did in the example code (you were talking about imperative programming, after all?-). if one were to go down that route, one would probably want to overload literals, such as (Num a,Monad m) => Num (m a) for fromInteger, rather than writing (return 0) everywhere, and as usual, the obligatory superclasses would get in the way and would have to be ignored, and Bool isn't overloaded, .. all the usual suspects for hampering embedded DSLs in Haskell. Claus
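(That literal overloading could be sketched like this; the instance below lifts literals and arithmetic into IO only, to sidestep the questions a fully general (Num a, Monad m) => Num (m a) instance would raise:)

```haskell
import Control.Monad (liftM2)

-- Sketch: numeric literals and operators lifted into IO, so monadic
-- results can be combined without writing 'return' around constants.
instance Num a => Num (IO a) where
  fromInteger = return . fromInteger
  (+) = liftM2 (+)
  (-) = liftM2 (-)
  (*) = liftM2 (*)
  abs = fmap abs
  signum = fmap signum

main :: IO ()
main = do
  n <- (return 40 :: IO Int) + 2   -- the literal 2 becomes an IO Int
  print n
```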

Claus Reinke wrote:
while (hGetBuf h buf bufsize == bufsize)
  crc := updateCrc crc buf bufsize
  break if crc==0
print crc
inContT $ callCC $ \break -> do
  flip execStateT 0 $ do
    whileM (liftM (== bufsize) (hGetBuf h buf bufsize)) $ do
      modifyM (updateCrc buf bufsize)
      crc <- get
      when (crc == 0) (lift (break crc))
      print crc
first: it's longer than the original.
The above version required passing break explicitly. I can pack that into a Reader. The actual semantics of the while loop from 'c' are then more closely followed. This allows:
run_ = runner_ testWhile_

runner_ m = runRWS (runContT m return) NoExit_ (17::Int)

testWhile_ = while_ (liftM (>10) get) innerWhile_

innerWhile_ = do
  v <- get
  tell_ [show v]
  when' (v==20) (tell_ ["breaking"] >> breakW_)
  if v == 15 then put 30 >> continueW_
             else modify pred
The result is
((),20,["17","16","15","30","29","28","27","26","25","24","23","22","21","20","breaking"])
Where there is the benefit over C of putting the break or continue in a sub-function. The full code (for the two versions) is:

-- By Chris Kuklewicz, BSD-3 license, February 2007
-- Example of pure "while" and "repeat until" looping constructs using
-- the monad transformer library. Works for me in GHC 6.6
--
-- The underscore version is ContT of RWS and this works more
-- correctly than the non-underscore version of RWST of Cont.
--
-- Perhaps "Monad Cont done right" from the wiki would help?
import Control.Monad.Cont
import Control.Monad.RWS
import Control.Monad.Error
import Control.Monad.ST
import System.IO.Unsafe
import Data.STRef

-- Note that all run* values are the same Type
main = mapM_ print [run,run2,run_,run2_]

run,run_,run2,run2_ :: MyRet ()
run = runner testWhile
run2 = runner testRepeatUntil
run_ = runner_ testWhile_
run2_ = runner_ testRepeatUntil_

runner_ m = runRWS (runContT m return) NoExit_ (17::Int)
runner m = (flip runCont) id (runRWST m NoExit (17))

testRepeatUntil_ = repeatUntil_ (liftM (==17) get) innerRepeatUntil_
testRepeatUntil = repeatUntil (liftM (==17) get) innerRepeatUntil

innerRepeatUntil_ = tell_ ["I ran"] >> breakW_
innerRepeatUntil = tell ["I ran"] >> breakW

testWhile_ = while_ (liftM (>10) get) innerWhile_
testWhile = while (liftM (>10) get) innerWhile

-- innerWhile_ :: ContT () (T_ (Exit_ () Bool Bool)) ()
innerWhile_ = do
  v <- get
  tell_ [show v]
  when' (v==20) (tell_ ["breaking"] >> breakW_)
  if v == 15 then put 30 >> continueW_
             else modify pred

innerWhile = do
  v <- get
  tell [show v]
  when' (v==20) (tell ["breaking"] >> breakW)
  if v == 15 then put 30 >> continueW
             else modify pred

-- The Monoid restriction means I can't write an instance, so use tell_
tell_ = lift . tell

-- Generic definitions
getCC :: MonadCont m => m (m a)
getCC = callCC (\c -> let x = c x in return x)

getCC' :: MonadCont m => a -> m (a, a -> m b)
getCC' x0 = callCC (\c -> let f x = c (x, f) in return (x0, f))

when' :: (Monad m) => Bool -> m a -> m ()
when' b m = if b then (m >> return ()) else return ()

-- Common types
type MyState = Int
type MyWriter = [String]
type MyRet a = (a,MyState,MyWriter)

-- RWST of Cont Types
type T r = RWST r MyWriter MyState
type Foo r a = T (Exit (MyRet r) a a) (Cont (MyRet r)) a
type WhileFunc = Foo () Bool
type ExitFoo r a = Foo r a   -- (Exit r a a) (Cont r) a
type ExitType r a = T (Exit r a a) (Cont r) a
data Exit r a b = Exit (a -> ExitType r b) | NoExit

-- ContT of RWS Types
type T_ r = RWS r MyWriter MyState
type ExitType_ r a = ContT r (T_ (Exit_ r a a)) a
data Exit_ r a b = Exit_ (a -> ExitType_ r b) | NoExit_

-- Smart destructor for Exit* types
getExit (Exit loop) = loop
getExit NoExit = (\ _ -> return (error "NoExit"))
getExit_ (Exit_ loop) = loop
getExit_ NoExit_ = (\ _ -> return (error "NoExit"))

-- I cannot see how to lift withRWS, so use local
-- Perhaps "Monad Cont done right" from the wiki would help?
withLoop_ loop = local (\r -> Exit_ loop)

-- withRWST can change the reader Type
withLoop loop = withRWST (\r s -> (Exit loop,s))

-- The condition is never run in the scope of the (withLoop loop)
-- continuation. I could have invoked (loop True) for normal looping
-- but I decided a tail call works as well. This decision has
-- implications for the non-underscore version, since the writer/state
-- can get lost if you call (loop _).
while_ mCondition mBody = do
  (proceed,loop) <- getCC' True
  let go = do check <- mCondition
              when' check (withLoop_ loop mBody >> go)
  when' proceed go

while mCondition mBody = do
  (proceed,loop) <- getCC' True
  let go = do check <- mCondition
              when' check (withLoop loop mBody >> go)
  when' proceed go

repeatUntil_ mCondition mBody = do
  (proceed,loop) <- getCC' True
  let go = do withLoop_ loop mBody
              check <- mCondition
              when' (not check) go
  when' proceed go

repeatUntil mCondition mBody = do
  (proceed,loop) <- getCC' True
  let go = do withLoop loop mBody
              check <- mCondition
              when' (not check) go
  when' proceed go

-- breakW :: WhileFunc a
breakW_ = ask >>= \e -> getExit_ e False >> return undefined
breakW = ask >>= \e -> getExit e False >> return undefined

-- continueW :: WhileFunc a
continueW_ = ask >>= \e -> getExit_ e True >> return undefined
continueW = ask >>= \e -> getExit e True >> return undefined

Bulat Ziganshin wrote:
2. it bites me too. it's why i say that C++ is better imperative language than Haskell. there are also many other similar issues, such as lack of good syntax for "for", "while", "break" and other well-known statements, inability to use "return" inside of block and so on
You forgot to mention 'goto' ;-) Ben

Hello Neil, Friday, January 26, 2007, 8:13:43 PM, you wrote:
evolution of programming languages. In particular they identify composability, concurrency and FP as being important trends. However their focus is on borrowing features of FP and bringing them into mainstream imperative languages; principally C#.
afaik, C# borrows one feature after another from the FP world - it has limited type inference, anonymous functions, lazy evaluation (or not?). that is why i prefer C# to Java (if i ever use one of these) - Java hasn't changed so radically. but really i haven't tried C# yet, and i expect that it has all the problems of a "language created by committee" - it combines features from many different worlds, so it should be hard to master and hard to use features from the various worlds together -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

C# [..] has all the problems of "language created by committee"
Whereas Haskell has all the benefits of a language created by committee! Actually, wasn't C# largely created by one man, Anders Hejlsberg? - Neil

Hello Neil, Wednesday, January 31, 2007, 11:09:06 AM, you wrote:
C# [..] has all the problems of "language created by committee"
Whereas Haskell has all the benefits of a language created by committee! Actually, wasn't C# largely created by one man, Anders Hejlsberg?
C# 1.0 may be a nice language, but since then it has borrowed features here and there. it's like OCaml, which tries to join FP and OOP together

Haskell suffers from the "committee problem" in its extensions. for example, the declaration styles for regular "data" and GADTs are different, because the people creating the first and the second had different stylistic preferences -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com
participants (10)

- apfelmus@quantentunnel.de
- Benjamin Franksen
- Bulat Ziganshin
- Chris Kuklewicz
- Claus Reinke
- Jacques Carette
- Neil Bartlett
- Robert Dockins
- Tim Newsham
- Tomasz Zielonka