RE: [Haskell-cafe] develop new Haskell shell?

From: haskell-cafe-bounces@haskell.org [mailto:haskell-cafe-bounces@haskell.org] On Behalf Of Graham Klyne
Did you see [http://nellardo.com/lang/haskell/hash/] ?
Google also finds some links to code.
#g --
Marc Weber wrote:
Hi.
Who wants to try developing a new shell with me?
Also: http://www.cse.unsw.edu.au/~dons/h4sh.html

Who wants to try developing a new shell with me?
And (in Clean):
Rinus Plasmeijer and Arjen van Weelden. A functional shell that operates on typed and compiled applications. In Varmo Vene and Tarmo Uustalu, editors, Advanced Functional Programming, 5th International Summer School, AFP 2004, University of Tartu, Revised Lectures, volume 3622 of Lecture Notes in Computer Science, pages 245-272, Tartu, Estonia, August 2004. Springer.
http://www.cs.ru.nl/A.vanWeelden/index.php?p=publications
-- Johan

johanj:
Who wants to try developing a new shell with me?
And (in Clean):
Rinus Plasmeijer and Arjen van Weelden. A functional shell that operates on typed and compiled applications. In Varmo Vene and Tarmo Uustalu, editors, Advanced Functional Programming, 5th International Summer School, AFP 2004, University of Tartu, Revised Lectures, volume 3622 of Lecture Notes in Computer Science, pages 245-272, Tartu, Estonia, August 2004. Springer
Funny this should come up. We've just had several submissions to work on a functional shell for the google summer of code. Here's a bit of a summary of what's been done in Haskell I prepared a while back.
http://www.cse.unsw.edu.au/~pls/thesis-topics/functionalshell.html
-- Don

Donald Bruce Stewart wrote:
Funny this should come up. We've just had several submissions to work on a functional shell for the google summer of code.
Here's a bit of a summary of what's been done in Haskell I prepared a while back.
http://www.cse.unsw.edu.au/~pls/thesis-topics/functionalshell.html
Looking at the brief description of the Esther shell, I was struck by the question: why not just use Haskell directly, i.e. by extending something like GHCi to allow interactive definition of functions/values and an operator to map filenames to functions?

I was reminded of http://users.ipa.net/~dwighth/smalltalk/byte_aug81/design_principles_behind_... and in particular the following principle:

    Operating System: An operating system is a collection of things that don't fit into a language. There shouldn't be one.

Regards, Brian.

On Wed, 10 May 2006, Donald Bruce Stewart wrote:
Funny this should come up. We've just had several submissions to work on a functional shell for the google summer of code.
Here's a bit of a summary of what's been done in Haskell I prepared a while back.
http://www.cse.unsw.edu.au/~pls/thesis-topics/functionalshell.html
My background is more shells than FP, so I'm not sure what to make of this - if we're talking about doing things for educational purposes, or serious attempts to make a viable alternative to ... something. At any rate, for anyone thinking about writing a UNIX shell, here are two items that might be worth reading:

http://www.faqs.org/faqs/unix-faq/shell/csh-whynot/

A rant about the failings of csh. Particularly note the first topic about file descriptors - if you think the UNIX file descriptor system (the numbers, dup2(), etc.) is quaint but not worth taking very seriously, then you have a lot in common with other people interested in higher level languages (Bill Joy wrote csh) but probably should not be writing a UNIX shell. Tom C. has been doing this rant longer than he has been doing Perl.

http://cm.bell-labs.com/sys/doc/rc.pdf

The Plan 9 shell. Plan 9 comes from the inventors of UNIX, and its shell is one of the few really good ones. There's a UNIX implementation, whose author also collaborated on "es", which is another pretty interesting shell; I see it's mentioned on the UNSW web page.

Plus a bonus one for the functional connection - I can't find any detailed information about it, but several years ago the next generation Amiga was going to have an FP shell. Here's an article, not in English, sorry, but isn't Italian a beautiful language!

http://www.quantum-leap.it/default_frame.asp?id=30

Donn Cave, donn@drizzle.com

Donn Cave wrote:
On Wed, 10 May 2006, Donald Bruce Stewart wrote:
Funny this should come up. We've just had several submissions to work on a functional shell for the google summer of code.
Here's a bit of a summary of what's been done in Haskell I prepared a while back.
http://www.cse.unsw.edu.au/~pls/thesis-topics/functionalshell.html
My background is more shells than FP, so I'm not sure what to make of this - if we're talking about doing things for educational purposes, or serious attempts to make a viable alternative to ... something.
Here is an example of the kind of thing you could have with a pure interactive Haskell shell:

    newtype FileName = FileName String -- for example
    newtype DirName = DirName String
    newtype Shell a = Shell (IO a) deriving (Monad, MonadIO)

    ls :: Shell [FileName]
    cat :: FileName -> [FileName] -> Shell ()

    -- run a command with the current directory set to DirName
    withDir :: DirName -> Shell a -> Shell a

    -- catenate all files in a specified directory
    catenate outputFile dir = withDir dir $ ls >>= cat outputFile

Of course the above could no doubt be improved but surely it is already far easier to understand and much more powerful than the idiosyncratic text based approach used in UNIX shells (including rc).

To extend the example further, with a simple function to split a filename into a name and an extension, we could rename all .txt files to .hs files with:

    split :: FileName -> (String, String)
    unsplit :: (String, String) -> FileName
    rename :: FileName -> FileName -> Shell ()

    rename extFrom extTo files = do
        let candidates = filter (\(_,ext) -> ext==extFrom) (map split files)
        mapM_ (\f@(n,_) -> rename (unsplit f) (unsplit (n, extTo))) candidates

    % ls >>= rename "txt" "hs"

So although the above may be educational, I think it would also be truly useful and practical. It would also save a lot of trouble if everything was done in Haskell instead of using all those UNIX commands, because then people would only need to learn one language to be able to do anything with their computing environment.

Regards, Brian.
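[Editorial note: the split/unsplit helpers above are left abstract in the post. Here is a minimal, runnable sketch of what they might look like, simplifying FileName to a plain String and splitting on the last dot; the definitions are this editor's guesses, not the original author's.]

```haskell
-- Hypothetical stand-ins for the split/unsplit helpers sketched above.
-- FileName is simplified to a plain String here.
type FileName = String

-- Split a filename into (name, extension) at the last dot.
split :: FileName -> (String, String)
split f = case break (== '.') (reverse f) of
            (revExt, '.' : revName) -> (reverse revName, reverse revExt)
            _                       -> (f, "")   -- no dot: empty extension

-- Inverse of split.
unsplit :: (String, String) -> FileName
unsplit (n, "")  = n
unsplit (n, ext) = n ++ "." ++ ext

main :: IO ()
main = do
  print (split "Main.hs")          -- ("Main","hs")
  print (split "archive.tar.gz")   -- ("archive.tar","gz")
  print (unsplit ("Main", "hs"))   -- "Main.hs"
```

Splitting on the last dot means "archive.tar.gz" gives extension "gz", which matches how renif/rename would want to treat multi-dot names.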

Brian Hulley wrote:
rename extFrom extTo files = do
    let candidates = filter (\(_,ext) -> ext==extFrom) (map split files)
    mapM_ (\f@(n,_) -> rename (unsplit f) (unsplit (n, extTo))) candidates
% ls >>= rename "txt" "hs"
I see I've used the same name twice... ;-) It should be:

    ren extFrom extTo files = do
        let candidates = filter (\(_,ext) -> ext==extFrom) (map split files)
        mapM_ (\f@(n,_) -> rename (unsplit f) (unsplit (n, extTo))) candidates

    % ls >>= ren "txt" "hs"

Of course a better choice of primitive commands would give a more powerful shell interface eg using:

    renif extFrom extTo fileName =
        case split fileName of
            (n, ext) | ext == extFrom -> rename fileName (unsplit (n, extTo))
            _ -> return ()

    % ls >>= mapM_ (renif "txt" "hs")

Regards, Brian.

At Thu, 11 May 2006 23:05:14 +0100, Brian Hulley wrote:
Of course the above could no doubt be improved but surely it is already far easier to understand and much more powerful than the idiosyncratic text based approach used in UNIX shells (including rc).
The idea of representing unix pipes as monads has been around for a
while -- but what most people fail to account for is that many (most?)
real-world shell scripts also need to deal with return values and
stderr. Even standard unix shells are pretty terrible in this regard
-- so if we could do it *better* than standard shells -- that could be
pretty compelling.
Here are some simple examples of things to handle, starting with
failures in a pipeline:
$ aoeu | cat -n ; echo $?
bash: aoeu: command not found
0
$
Sweet! A successful return code even though there is clearly a
failure. Bash 3.x *finally* added, set -o pipefail -- which would
cause the above to return an error. Unfortunately, there is no way to
tell which part of the pipeline failed, or any way to attempt recovery
of the part that failed.
Often times, a program is run for its return code, not its output:
if /usr/bin/test -f /etc/motd ; then
echo "you have an /etc/motd" ;
fi
And, there are also times when you want to do something with output,
and something else with the return code.
if cat -n /etc/motd > /tmp/numbered ; then
echo "you have an /etc/motd" ;
fi
This is tricky because many programs will not terminate until you have
consumed all the output. Because of haskell's laziness, it is very
easy to deadlock. Shell is also pretty weak in this regard, you can
either get the return code or the output of a command, but you have to
go through gyrations to get both. For example,
$ echo `cat aoeu` ; echo $?
cat: aoeu: No such file or directory
0
$
How do I check that `cat aoeu` has returned successfully? I think you
have to use an intermediate file:
if ! cat aoeu > /tmp/tmpfile ; then
    echo "An error occurred reading aoeu"
fi
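[Editorial note: in Haskell this particular pain point can be handled directly with the System.Process library, whose readProcessWithExitCode returns the exit code, stdout, and stderr together, so no intermediate file is needed. A small sketch, running the failing `cat aoeu` from the example above through sh:]

```haskell
-- A sketch of getting output *and* exit status in one call, using
-- readProcessWithExitCode from the System.Process library.
import System.Exit (ExitCode(..))
import System.Process (readProcessWithExitCode)

main :: IO ()
main = do
  -- Run a command and inspect everything it produced at once.
  (code, out, err) <- readProcessWithExitCode "sh" ["-c", "cat aoeu"] ""
  case code of
    ExitSuccess   -> putStr out
    ExitFailure n -> putStrLn ("cat failed with status " ++ show n
                               ++ ": " ++ err)
```

No gyrations required: the exit code and both output streams are ordinary values, ready for pattern matching and recovery logic.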

It would also be wise to look at occam and erlang and see if they have any useful ideas. And, of course, Windows PowerShell.
And scsh (Scheme shell, pretty full featured these days): http://www.scsh.net/ Jared.
j.
-- http://www.updike.org/~jared/ reverse ")-:"

Jared Updike wrote:
It would also be wise to look at occam and erlang and see if they have any useful ideas. And, of course, Windows PowerShell.
And scsh (Scheme shell, pretty full featured these days): http://www.scsh.net/
At http://jaortega.wordpress.com/2006/05/16/not-your-parents-shell/ there is an interesting blog post about scsh and a new frontend for it called Commander S.

Jeremy Shaw wrote:
At Thu, 11 May 2006 23:05:14 +0100, Brian Hulley wrote:
Of course the above could no doubt be improved but surely it is already far easier to understand and much more powerful than the idiosyncratic text based approach used in UNIX shells (including rc).
The idea of representing unix pipes as monads has been around for a while -- but what most people fail to account for is that many (most?) real-world shell scripts also need to deal with return values and stderr. Even standard unix shells are pretty terrible in this regard -- so if we could do it *better* than standard shells -- that could be pretty compelling. [snip lots of examples and other interesting points]
Some other possibilities are:
1) Every command returns a pair consisting of result and return code
2) Use exceptions instead of stderr
3) Use a more complicated monad
It may still be a good idea to take the top 20 unix utils and code them as native haskell functions and see how far that goes. I know there are some existing libraries that deal with basic stuff like mv, etc. Has anyone implemented grep, find, etc?
This is also how I would start, because it would allow all the control flow / ease of use issues to be explored just using GHCi / Hugs etc before tackling the problem of how to get binaries to interface with the shell.

Regards, Brian.
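[Editorial note: possibility 3 above, "a more complicated monad", could mean IO plus explicit failure. Here is a hypothetical sketch of such a Shell monad; all names (Shell, abort) are invented for illustration, not from any existing library.]

```haskell
-- Hypothetical Shell monad: IO plus explicit failure, so a command's
-- "return code" becomes an ordinary Either value that aborts the pipeline.
newtype Shell a = Shell { runShell :: IO (Either String a) }

instance Functor Shell where
  fmap f (Shell m) = Shell (fmap (fmap f) m)

instance Applicative Shell where
  pure = Shell . pure . Right
  Shell mf <*> Shell mx = Shell $ do
    ef <- mf
    case ef of
      Left e  -> pure (Left e)
      Right f -> fmap (fmap f) mx

instance Monad Shell where
  Shell m >>= k = Shell $ do
    ea <- m
    case ea of
      Left e  -> pure (Left e)   -- a failed stage short-circuits the rest
      Right a -> runShell (k a)

-- Fail with a message, analogous to a non-zero exit status.
abort :: String -> Shell a
abort = Shell . pure . Left

main :: IO ()
main = do
  r <- runShell (abort "aoeu: command not found" >>= \_ -> pure (0 :: Int))
  print r   -- Left "aoeu: command not found"
```

Unlike `aoeu | cat -n`, the later stage never runs and the failure is visible in the result, not lost in `$?`.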

"Brian" == Brian Hulley writes:

Brian> Some other possibilities are:
Brian> 1) Every command returns a pair consisting of result and return
Brian> code

IMHO the distinction between a command's output (to stdout and stderr) and its return code is one of the faults in UNIX shells. Nothing but the log should be written to stdout by a command, and stderr should be unnecessary if we use exceptions (I'm not quite sure).

Brian> 2) Use exceptions instead of stderr

Instead of stderr and the return code. The return code of `test' is in fact its result.

Brian> 3) Use a more complicated monad
It may still be a good idea to take the top 20 unix utils and code them as native haskell functions and see how far that goes. I know there are some existing libraries that deal with basic stuff like mv, etc. Has anyone implemented grep, find, etc?
Brian> This is also how I would start because it would allow all the
Brian> control flow/ ease of use issues to be explored just using GHCi
Brian> / Hugs etc before tackling the problem of how to get binaries
Brian> to interface with the shell.

-- WBR, Max Vasin.

On 2006-05-12, Max Vasin wrote:
"Brian" == Brian Hulley
writes:
Brian> Some other possibilities are:
Brian> 1) Every command returns a pair consisting of result and return code
IMHO the distinction between command's output (to stdout and stderr) and its return code is one of the faults in UNIX shells. Nothing, but log should be written to stdout by command, and stderr should be useless if we use exceptions (I'm not quite sure).
You have failed to grasp the problem domain and the composability provided. Requiring names for output is worse than requiring names for functions.

-- Aaron Denney -><-

I have only been skimming this thread so sorry if this was already posted:

http://www.webcom.com/~haahr/es/es-usenix-winter93.html

es is a shell roughly based on rc but with higher order functions and a functional nature in general. It is quite interesting and could serve as inspiration.

John
-- John Meacham - ⑆repetae.net⑆john⑈

On 2006-05-12, Jeremy Shaw wrote:
At Thu, 11 May 2006 23:05:14 +0100, Brian Hulley wrote:
Of course the above could no doubt be improved but surely it is already far easier to understand and much more powerful than the idiosyncratic text based approach used in UNIX shells (including rc).
The idea of representing unix pipes as monads has been around for a while -- but what most people fail to account for is that many (most?) real-world shell scripts also need to deal with return values and stderr. Even standard unix shells are pretty terrible in this regard -- so if we could do it *better* than standard shells -- that could be pretty compelling.
Here are some simple examples of things to handle, starting with failures in a pipeline:
$ aoeu | cat -n ; echo $?
bash: aoeu: command not found
0
$
Sweet! A successful return code even though there is clearly a failure. Bash 3.x *finally* added, set -o pipefail -- which would cause the above to return an error. Unfortunately, there is no way to tell which part of the pipeline failed, or any way to attempt recovery of the part that failed.
See also the "pipestatus/PIPESTATUS" arrays in e.g. zsh and ksh. Maybe it's in bash too these days. -- Aaron Denney -><-

On Thu, 11 May 2006, Brian Hulley wrote: ...
-- catenate all files in a specified directory
catenate outputFile dir = withDir dir $ ls >>= cat outputFile
So, you would apply this like catenate "result" "/etc/stuff" ? String literals need quotes?
Of course the above could no doubt be improved but surely it is already far easier to understand and much more powerful than the idiosyncratic text based approach used in UNIX shells (including rc).
(cd /etc/stuff; cat * > result) ?
renif extFrom extTo fileName =
    case split fileName of
        (n, ext) | ext == extFrom -> rename fileName (unsplit (n, extTo))
        _ -> return ()
% ls >>= mapM_ (renif "txt" "hs")
$ for a in *.txt; do mv $a $(basename $a .txt); done

?

Not saying the UNIX shell is a rich and well structured programming environment, and maybe FP is a good direction for that problem. But don't underestimate it: the principles behind it are sharp, and while I think you could expect to win on complex data structures, you can't afford to lose on simple commands, because that's where most of the action is.

Hm. Not to pick at the details too much, but you know "cat" is actually a standard UNIX command, that writes to standard output and has no output file parameter? What's up with the new parameter in your version - was it not going to be workable the way it was?

Donn Cave, donn@drizzle.com

Donn Cave wrote:
On Thu, 11 May 2006, Brian Hulley wrote: ...
-- catenate all files in a specified directory
catenate outputFile dir = withDir dir $ ls >>= cat outputFile
So, you would apply this like catenate "result" "/etc/stuff" ? String literals need quotes?
Yes - why not? Also, on Windows for example, filenames can have spaces so quotes are needed anyway with any shell at the moment if such filenames are used. However if this was a real problem it *might* be possible to relax the need for quotes by using a much more complicated parsing algorithm that could take into account the types of expected args and coerce the appropriate unquoted lexemes/expressions into strings, but I don't know if this would really be worth the trouble, and it would introduce ambiguity eg is etc/stuff a filename or an arithmetic expression?
Of course the above could no doubt be improved but surely it is already far easier to understand and much more powerful than the idiosyncratic text based approach used in UNIX shells (including rc).
(cd /etc/stuff; cat * > result)
Well the problem here is that the command leaves you in /etc/stuff so you have to remember this when you subsequently execute another command. The advantage of withDir is that the original directory is restored afterwards, which might make it easier to write modular scripts. In any case you could also make a cd command in Haskell, and write:

    cd "etc/stuff" >> ls >>= cat "result"
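[Editorial note: withDir itself is easy to sketch over plain IO with System.Directory and bracket; the original directory is restored even if the action throws. Brian's Shell wrapper is dropped here for simplicity.]

```haskell
import Control.Exception (bracket)
import System.Directory (getCurrentDirectory, setCurrentDirectory)

-- Run an action with the working directory temporarily changed;
-- the original directory is restored afterwards, even on exceptions.
withDir :: FilePath -> IO a -> IO a
withDir dir act =
  bracket getCurrentDirectory              -- remember where we were
          setCurrentDirectory              -- always go back there
          (\_ -> setCurrentDirectory dir >> act)

main :: IO ()
main = do
  before <- getCurrentDirectory
  inside <- withDir "/" getCurrentDirectory
  after  <- getCurrentDirectory
  print (inside, before == after)
```

On a POSIX system the action observes "/" as its working directory, and the caller's directory is unchanged afterwards, which is exactly the modularity advantage over a bare cd.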
?
renif extFrom extTo fileName = case split fileName of (n, ext) | ext == extFrom -> rename fileName (unsplit (n, extTo)) _ -> return ()
% ls >>= mapM_ (renif "txt" "hs")
$ for a in *.txt; do mv $a $(basename $a .txt); done
Well someone had to define the meaning of basename so if we make the definition of renif similarly built-in the comparison is between

    ls >>= mapM_ (renif "txt" "hs")

and

    for a in *.txt; do mv $a $(basename $a .txt); done

So the Haskell command is shorter, easier to read, and more re-usable, because mapM_ (renif "txt" "hs") can be used anywhere that supplies a list of files whereas "for a in *.txt" doesn't make the source of the list explicit. Do they come from the current directory? What if some other list of files should be used?
? Not saying the UNIX shell is a rich and well structured programming environment, and maybe FP is a good direction for that problem. But don't underestimate it, the principles behind it are sharp, and while I think you could expect to win on complex data structures, you can't afford to lose on simple commands, because that's where most of the action is.
From the above even the simple commands are easier in Haskell. The only drawback is the need to put quotes round filenames/paths but imho this doesn't seem like a major problem compared to the ease with which complex commands can be built up and the advantage of only having to learn one universal language.
Hm. Not to pick at the details too much, but you know "cat" is actually a standard UNIX command, that writes to standard output and has no output file parameter? What's up with the new parameter in your version - was it not going to be workable the way it was?
I forgot about this. You could define cat in Haskell as:

    cat :: [FileName] -> Shell String

and have another command analogous to > to write a string into a file, say "into":

    into :: FileName -> String -> Shell ()

Then you could catenate all files in the current directory into a file called "result" by:

    ls >>= cat >>= into "result"

(Same as cat * > result.)

On balance I think that while some UNIX commands may be slightly shorter, the shortness comes at the expense of the assumptions they have to make about the kinds of things you want to do, eg cat * works well if the only possible source of files is the current directory, but doesn't work at all if you want to create a list of files from some other operation (unless you create a temporary directory with symlinks etc, but that easily degenerates into a very complicated mess compared to Haskell).

Regards, Brian.
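[Editorial note: over plain IO the ls/cat/into trio sketched above is tiny. This version uses String for FileName and System.Directory; ls here lists plain files in the current directory, a simplification of Brian's Shell-typed version.]

```haskell
import Control.Monad (filterM)
import System.Directory (doesFileExist, getDirectoryContents)

-- List the plain files in the current directory, like Brian's ls.
ls :: IO [FilePath]
ls = getDirectoryContents "." >>= filterM doesFileExist

-- Concatenate the contents of many files, like cat * .
cat :: [FilePath] -> IO String
cat files = concat <$> mapM readFile files

-- Write a string into a file: the analogue of "> result".
into :: FilePath -> String -> IO ()
into = writeFile

main :: IO ()
main = do
  writeFile "a.txt" "hello "
  writeFile "b.txt" "world"
  cat ["a.txt", "b.txt"] >>= into "result"
  readFile "result" >>= putStrLn   -- hello world
```

The pipeline `ls >>= cat >>= into "result"` then reads exactly as in the post; a real shell would want strict reading of large files rather than readFile's laziness.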

Brian Hulley wrote:
Donn Cave wrote:
(cd /etc/stuff; cat * > result)
Well the problem here is that the command leaves you in /etc/stuff so you have to remember this when you subsequently execute another command.
No it doesn't. The parentheses around the command sequence cause it to run in a subshell with its own private working directory.
Well someone had to define the meaning of basename so if we make the definition of renif similarly built-in the comparison is between
ls >>= mapM_ (renif "txt" "hs")
and
for a in *.txt; do mv $a $(basename $a .txt); done
This comparison is unfair because basename is a much more generic operation than renif. The Haskell code should be something like

    glob "*.txt" >>= mapM_ (\a -> mv a (basename a ".txt" ++ ".hs"))
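[Editorial note: the basename used here can be sketched as a pure suffix-stripping function, mirroring the two-argument form of UNIX basename(1). glob and mv are not shown; glob would wrap a globbing library. This definition is an illustration, not Ben's.]

```haskell
import Data.List (isSuffixOf)

-- Strip a suffix from a filename, like basename(1) with a second argument
-- (directory components are not handled in this simplified sketch).
basename :: FilePath -> String -> FilePath
basename f suf
  | suf `isSuffixOf` f = take (length f - length suf) f
  | otherwise          = f

main :: IO ()
main = do
  putStrLn (basename "notes.txt" ".txt")            -- notes
  putStrLn (basename "notes.txt" ".txt" ++ ".hs")   -- notes.hs
```

Because it is generic suffix stripping, the same helper serves any rename pattern, which is Ben's point about it being more reusable than a baked-in renif.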
So the Haskell command is shorter, easier to read, and more re-usable, because mapM_ (renif "txt" "hs") can be used anywhere that supplies a list of files whereas "for a in *.txt" doesn't make the source of the list explicit. Do they come from the current directory? What if some other list of files should be used?
This makes no sense. Bash has its own set of rules. The for statement iterates over a list, which in this case is generated by a glob. If you want something else, you use the appropriate construct. The body of the for loop is just as reusable as the corresponding Haskell code.

My reaction to this thread is the same as Donn Cave's: even after reading through the whole thread, I don't understand what a Haskell shell is supposed to be. It feels like people are more interested in capturing territory for Haskell than in solving any actual problem. For simple commands and pipes, the bash syntax is perfect. For anything nontrivial, I use some other language anyway. I long ago wrote a Perl script to do a far more general form of the renaming example you gave above. As far as I know, the only reason people write nontrivial /bin/sh scripts is that it's the only scripting language that's universally available on Unix systems. Even Perl isn't deployed everywhere. A Haskell shell is never going to be ubiquitous, and Haskell syntax is inferior to bash syntax for 99% of the command lines I type.

On the other hand, I'm entirely in favor of extending Haskell with functions like glob :: String -> IO [String]. That would be useful.

-- Ben

On Fri, 12 May 2006, Ben Rudiak-Gould wrote:
... For simple commands and pipes, the bash syntax is perfect. For anything nontrivial, I use some other language anyway. I long ago wrote a Perl script to do a far more general form of the renaming example you gave above. As far as I know, the only reason people write nontrivial /bin/sh scripts is that it's the only scripting language that's universally available on Unix systems.
I have a blind spot here due to a visceral dislike of Perl, but I do think there's a slim chance that a really well designed language could be useful in that niche - roughly speaking, non-trivial shell scripts. You're right, I wouldn't be able to use it at work, just like "rc" or, for that matter, Haskell, but still I'd love to see it happen. I just think "really well designed" is a tall order, and the notion that you can get there by just dropping Haskell into this application domain is an absurdity on the order of Edgar Rice Burroughs' fantasy of Tarzan appearing out of the jungle and being appointed chief of the Waziri. Donn Cave, donn@drizzle.com

Ben Rudiak-Gould wrote:
My reaction to this thread is the same as Donn Cave's: even after reading through the whole thread, I don't understand what a Haskell shell is supposed to be.
I'd like one as a scripting environment, a bit like scsh, just strongly typed and easier on the eyes. Haskell as an interactive shell would be a nightmare indeed; having to type 'system "foo"' instead of simply 'foo' for everyday commands just won't cut it.

On the other hand, as soon as a script has at least some programming logic in it, bash (or anything similar) soon becomes a huge PITA. Just think of all the different quotes and how difficult it is to write a script that doesn't go bonkers if it encounters a filename with a space in it (or a parenthesis, a bracket, an asterisk or any of a myriad special chars I forgot). Haskell shines here; in a combinator library no quoting is necessary and the typechecker will detect most blunders in the equivalent code. Besides, Cabal could benefit from a good file manipulation library, as could a lot of other programs.
I long ago wrote a Perl script to do a far more general form of the renaming example you gave above.
So did I, but I don't want to experience that ever again. Anyway, for complex renaming, there's always mmv.
On the other hand, I'm entirely in favor of extending Haskell with functions like glob :: String -> IO [String]. That would be useful.
Yes, of course. More specific types would be a good thing, though. Representing both file names and globs by strings will soon reproduce the mess of quote chars that makes sh such a bad programming language.

Udo.
--
It is explained that all relationships require a little give and take. This is untrue. Any partnership demands that we give and give and give and at the last, as we flop into our graves exhausted, we are told that we didn't give enough.
-- Quentin Crisp, "How to Become a Virgin"

Udo Stenzel wrote:
I'd like one as a scripting environment, a bit like scsh, just strongly typed and easier on the eyes. Haskell as interactive shell would be a nightmare indeed, having to type 'system "foo"' instead of simply 'foo' for everyday commands just won't cut it.
This seems to be your only objection. It might be solvable by making some rule that an identifier that's used in a value position would be automatically bound to a function/value found by instantiating to a binary in the file system if it's not already bound, and there would need to be some rules about how binaries would work to cooperate with the Haskell type system.

Another approach, to allow GHCi to be used as a shell immediately (given the right module with useful commands like ls, cat etc, which could be written right now), would be to just have a shorter name for "system", eg what about:

    % #"foo"

Just think: three extra characters but an infinity of new possibilities..... :-)

Regards, Brian.
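[Editorial note: the "shorter name for system" idea is a one-liner with System.Process (System.Cmd in the 2006 standard libraries). A sketch; note that a bare prefix `#"foo"` isn't actually valid Haskell application syntax, so in GHCi one would write `(#) "foo"` or use # infix.]

```haskell
-- A short alias for running external commands from GHCi.
import System.Exit (ExitCode(..))
import System.Process (system)

(#) :: String -> IO ExitCode
(#) = system

main :: IO ()
main = do
  code <- (#) "true"   -- run the POSIX `true` command via the shell
  print code
```

This keeps everyday commands down to a couple of extra characters while leaving the full Haskell language available around them.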

On Fri, 12 May 2006, Brian Hulley wrote:
Udo Stenzel wrote:
I'd like one as a scripting environment, a bit like scsh, just strongly typed and easier on the eyes. Haskell as interactive shell would be a nightmare indeed, having to type 'system "foo"' instead of simply 'foo' for everyday commands just won't cut it.
This seems to be your only objection. It might be solvable by making some rule that an identifier that's used in a value position would be automatically bound to a function/value found by instantiating to a binary in the file system if it's not already bound, and there would need to be some rules about how binaries would work to cooperate with the Haskell type system.
What about the parameters - certainly there's little point in relieving me of the bother of quoting a command name, if I have to quote each parameter? Donn Cave, donn@drizzle.com

Donn Cave wrote:
On Fri, 12 May 2006, Brian Hulley wrote:
Udo Stenzel wrote:
I'd like one as a scripting environment, a bit like scsh, just strongly typed and easier on the eyes. Haskell as interactive shell would be a nightmare indeed, having to type 'system "foo"' instead of simply 'foo' for everyday commands just won't cut it.
This seems to be your only objection. It might be solvable by making some rule that an identifier that's used in a value position would be automatically bound to a function/value found by instantiating to a binary in the file system if it's not already bound, and there would need to be some rules about how binaries would work to cooperate with the Haskell type system.
What about the parameters - certainly there's little point in relieving me of the bother of quoting a command name, if I have to quote each parameter?
My idea of a Haskell shell would be that everything in the computer would be visible as a strongly typed monadic value, so for example, instead of typing

    $ ghc -c -O1 Main.hs

ghc would appear in the shell as if it was a normal Haskell function with this type:

    ghc :: GHCOptions -> [FileName] -> Shell ()

where GHCOptions would be a record. For each binary, there would be default options, so from Haskell you could type:

    Shell> ghc ghcDefaultOptions{link=False, opt=1} ["Main.hs"]

It might even be possible to make a syntactic extension to Haskell so that any function whose first argument is a record could be called with the record brackets immediately after the function name (ie with an implicit default record based on the name of the function before the opening brace), so the above could be written as:

    Shell> ghc{link=False, opt=1} ["Main.hs"]

There would have to be some specification somewhere to tell the binder what the type of the binary (and its options) was etc.

Regards, Brian.
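[Editorial note: a hypothetical GHCOptions record plus a renderer down to ordinary argv strings shows how such a typed call would bottom out in an exec of the real binary. All names here (GHCOptions, ghcDefaultOptions, renderOpts) are invented for illustration.]

```haskell
-- Hypothetical typed options for ghc, rendered to command-line flags.
data GHCOptions = GHCOptions
  { link :: Bool   -- False corresponds to the -c flag
  , opt  :: Int    -- optimisation level, -O<n>
  } deriving Show

ghcDefaultOptions :: GHCOptions
ghcDefaultOptions = GHCOptions { link = True, opt = 0 }

-- Turn the record back into the argv the real binary expects.
renderOpts :: GHCOptions -> [String]
renderOpts o =
  (if link o then [] else ["-c"]) ++ ["-O" ++ show (opt o)]

main :: IO ()
main = print (renderOpts ghcDefaultOptions { link = False, opt = 1 })
  -- ["-c","-O1"]
```

The shell would then pass `renderOpts opts ++ files` to the actual ghc executable; the typed record is just a front end over the untyped argv.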

"Brian Hulley" wrote:
Donn Cave wrote:
On Fri, 12 May 2006, Brian Hulley wrote:
Udo Stenzel wrote:
I'd like one as a scripting environment, a bit like scsh, just strongly typed and easier on the eyes. Haskell as interactive shell would be a nightmare indeed, having to type 'system "foo"' instead of simply 'foo' for everyday commands just won't cut it.
This seems to be your only objection. It might be solvable by making some rule that an identifier that's used in a value position would be automatically bound to a function/value found by instantiating to a binary in the file system if it's not already bound, and there would need to be some rules about how binaries would work to cooperate with the Haskell type system.
What about the parameters - certainly there's little point in relieving me of the bother of quoting a command name, if I have to quote each parameter?
My idea of a Haskell shell would be that everything in the computer would be visible as a strongly typed monadic value, so for example, instead of typing
$ ghc -c -O1 Main.hs
ghc would appear in the shell as if it was a normal Haskell function with this type:
ghc :: GHCOptions -> [FileName] -> Shell ()
I and a fellow student have implemented something along those lines. We walk through the $PATH and write a small stub definition for each program. This is then compiled and loaded using hs-plugins. The result is that you can access for example ghc from the command line. The type of the automatically generated functions is e.g.:

    cat :: Program String String

which is the best type you can give without a lot of manual labour (it really ought to be [Word8]). You can then combine programs (and standard Haskell functions) like so:

    cat >|< map toUpper

where

    (>|<) :: (Cmd c1, Cmd c2, Marshal t, Marshal i, Marshal o)
          => c1 i t -> c2 t o -> Command i o

and

    instance Cmd Program ...
    instance Cmd (->) ...

So the interface is not monadic but more similar to arrow composition. This is probably a bad idea since you need to use something like xargs to run a command on each item in the input.

As others (Donn) have pointed out, having to write (in our syntax) e.g.

    ssh -."l" #"jansborg" #"remote.mdstud.chalmers.se"

gets old really quickly for interactive use, so I don't think a Haskell shell is really useful other than for scripting. Basic job control and tab completion for programs and files (but not normal Haskell bindings) is implemented. The code is available here:

    http://www.mdstud.chalmers.se/~jansborg/haskal.tar.gz

but please note that it is not at all finished, likely quite buggy, completely undocumented and not really well thought through.

/Mats
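[Editorial note: stripped of the Marshal/Cmd machinery, the >|< combinator Mats describes is essentially Kleisli composition. A stand-alone sketch over IO, with a `fun` lifter playing the role of the `Cmd (->)` instance; these simplified definitions are not from the haskal code.]

```haskell
import Data.Char (toUpper)

-- A stripped-down Command: both "programs" and pure functions become
-- i -> IO o, and >|< is just Kleisli composition.
type Command i o = i -> IO o

(>|<) :: Command i t -> Command t o -> Command i o
(f >|< g) i = f i >>= g

-- Lift a pure function, standing in for the Cmd (->) instance.
fun :: (i -> o) -> Command i o
fun f = pure . f

main :: IO ()
main = (fun (map toUpper) >|< fun reverse) "hello" >>= putStrLn   -- OLLEH
```

A real Program would run an external process and marshal its stdin/stdout through the i and o types; the composition operator stays the same.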

Ben Rudiak-Gould wrote:
Brian Hulley wrote:
Well someone had to define the meaning of basename so if we make the definition of renif similarly built-in the comparison is between
ls >>= mapM_ (renif "txt" "hs")
and
for a in *.txt; do mv $a $(basename $a .txt); done
This comparison is unfair because basename is a much more generic operation than renif. The Haskell code should be something like
glob "*.txt" >>= mapM_ (\a -> mv a (basename a ".txt" ++ ".hs")) [rearranged] On the other hand, I'm entirely in favor of extending Haskell with functions like glob :: String -> IO [String]. That would be useful.
Why assume all filenames are strings? Is it not better to make a distinction between a file and a directory? Why fix everything down to the IO monad? In any case, the Haskell above is still just as short as the UNIX command.
So the Haskell command is shorter, easier to read, and more re-usable, because mapM_ (renif "txt" "hs") can be used anywhere that supplies a list of files whereas "for a in *.txt" doesn't make the source of the list explicit. Do they come from the current directory? What if some other list of files should be used?
This makes no sense. Bash has its own set of rules.
But who wants to waste their life learning them? :-)
The for statement iterates over a list, which in this case is generated by a glob. If you want something else, you use the appropriate construct. The body of the for loop is just as reusable as the corresponding Haskell code.
Ok perhaps I was being a little bit unfair. ;-)
My reaction to this thread is the same as Donn Cave's: even after reading through the whole thread, I don't understand what a Haskell shell is supposed to be. It feels like people are more interested in capturing territory for Haskell than in solving any actual problem. For simple commands and pipes, the bash syntax is perfect.
But it's surely just an accident of historical development. Now that we've got Haskell, why bother with old crusty stuff that's awkward and idiosyncratic?
For anything nontrivial, I use some other language anyway.
Why not always just use Haskell?
A Haskell shell is never going to be ubiquitous
At this rate it's never even going to get a chance...
, and Haskell syntax is inferior to bash syntax for 99% of the command lines I type.
Well perhaps this is just a matter of personal preference. Certainly it's good that everyone can use whatever they prefer. I personally disagree that Haskell syntax is inferior, except perhaps for the need to use quotes but that is imho a very minor distraction. Much more important is that by using the same language for shell + program development, whatever that language is, people could concentrate on solving problems instead of having to continually adapt themselves to the different mindsets of the different communities which develop various modes of interaction with a computer. Regards, Brian.
participants (14)
- Aaron Denney
- Bayley, Alistair
- Ben Rudiak-Gould
- Brian Hulley
- Donn Cave
- dons@cse.unsw.edu.au
- Jared Updike
- Jeremy Shaw
- Johan Jeuring
- John Hamilton
- John Meacham
- Mats Jansborg
- Max Vasin
- Udo Stenzel