
Further to the previous message about modules and imports, what would be *really* useful would be if importing were in some sense “just a function” in IO or some other monad, and therefore if modules were represented as some Haskell data structure. That would obviously make the top-level ambient context of a Haskell module into a sort of monadic action, so it'd be a bit weird, tho at the moment who's to say what it is anyway? It's certainly not first class to the language, or able to be programmed, as it stands. I suppose this sounds a little like I'm interested in homoiconicity in some sense, and that's probably not far from the truth.

The main reason I'd find this useful is similar to what Sylvain was saying about storing modules of code in databases. I have a system that produces “representations” as Text and then sometimes compiles them as Haskell source (and it's meta-recursive, in that there are many of these little pieces of executed code that then compile others' outputs, etc). Even if we don't get programmatic control over actual compilation, or over which versions of libraries to include, having programmatic control over where the data for modules' sources comes from would be fantastic. Given that importing is currently just some magic declaration outside the language's programmable constructs, tho, I can't see that happening without some major reworking. Is there appetite for such a thing?

One of the things a little strange about the programming industry is how quick we are to invent entirely new languages rather than modify existing ones and beat them into a more flexible shape that can accommodate our new requirements. I realise it's not always possible, but surely if we all got together and worked on it we could do it? Really, I probably don't know enough about the internal workings of Haskell and GHC to know if this is crazy or partly reasonable, but crazy ideas are sometimes fantastic, so I'm asking anyway.

It would be *glorious* if stack (or cabal, or whatever package manager we use) were itself programmable, in the sense that it could be called from within Haskell through a typed API, data could drive which packages it includes, and its type could be a function whose range is IO ByteString, providing the compiled binary. I guess where I'm driving with all this is that it'd be amazing if more of the language, the package system, the build tools, and dependencies were first class and/or had an API, just as the GHC API exists; being able to manipulate these things from programs would be great and would simplify a lot of currently complex stuff. The only way to do that at the moment is to leave Haskell's type safety and deal with the textual interfaces in System.Process et al. (I'm actually using System.Process.Streaming, but that's neither here nor there.)

I do have a further idea I've been pondering for a long while that's even more crazy than these two, but I might leave that for later, if at all; it has to do with the programmability of syntax, or the lack thereof. Essentially, my system aims at keeping everything (source, executables, results of all kinds) separated into small pieces, with results cached so that I don't have to continually recompile and recompute things that have already been compiled and computed before.
It shares some of the same ideas as Unison in the content-hashing sense, tho not the caching at the AST level, because Haskell doesn't work like that. And I'm not interested in building a monolithic Unison/Smalltalk-style “image/database/world” that has to have everything imported into it; rather the other way around: little programs that use parsing, do one thing each, and have typed interfaces between them that glue them all together as and when needed. Apologies that it's so hand-wavey. Maybe there's a better place or way to discuss such ideas? If so, let me know. Thanks!
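P.S. To make the hand-waving slightly more concrete, the kind of API I'm imagining would look something like this. This is purely hypothetical: none of these types or functions exist in GHC or any library, and every name below is invented for illustration.

    {-# LANGUAGE EmptyDataDecls #-}
    import Data.ByteString (ByteString)
    import Data.Text (Text)

    data Module     -- a first-class value representing a compiled module
    data BuildPlan  -- packages, versions, flags: ordinary data

    -- importing as "just a function": the source is plain Text, so it
    -- could come from disk, a database, or anywhere else
    importModule :: Text -> IO Module
    importModule = undefined

    -- the package manager as a typed API whose range is IO ByteString,
    -- yielding the compiled binary
    buildPackage :: BuildPlan -> [Module] -> IO ByteString
    buildPackage = undefined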

On Wed, Dec 4, 2024 at 9:31 AM julian getcontented.com.au <julian@getcontented.com.au> wrote:
One of the things a little strange about the programming industry is how quick we are to invent entirely new languages rather than modify existing ones and beat them into a more flexible shape that can accommodate our new requirements. I realise it's not always possible, but surely if we all got together and worked on it we could do it?
Generally at the price of breaking backward compatibility, which is already a major issue with GHC and would get much worse here.
It would be *glorious* if stack (or cabal, or whatever package manager we use) were itself programmable, in the sense that it could be called from within Haskell through a typed API, data could drive which packages it includes, and its type could be a function whose range is IO ByteString, providing the compiled binary.
The Cabal library is already largely programmable in this sense, although since it drives system tools you have the problem that inputs and outputs are disk files coming from or going to those tools. Changing this requires changing the system tools, or reimplementing them, which would be prohibitively difficult and subject to regular breakage.

--
brandon s allbery kf8nh
allbery.b@gmail.com
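[For illustration, the standard custom Setup.hs mechanism already shows this Cabal-as-a-library angle; a minimal sketch, where the hook body is just a placeholder:

    import Distribution.Simple (UserHooks (..), defaultMainWithHooks, simpleUserHooks)

    -- Setup.hs: the build is driven from ordinary Haskell. Hooks receive
    -- the parsed package description and build configuration as typed
    -- values, tho inputs and outputs remain disk files, as noted above.
    main :: IO ()
    main = defaultMainWithHooks simpleUserHooks
      { postBuild = \_args _buildFlags _pkgDescr _localBuildInfo ->
          putStrLn "post-build hook: could post-process build products here"
      }
]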

what would be *really* useful would be if importing were in some sense “just a function” in IO or some other monad, and therefore if modules were represented as some Haskell data structure.
You can do something like this using the GHC API. A program that depends on
the `ghc` library can parse and compile a module, and then look up module
bindings by name and (unsafely) coerce them into normal values. See `hint` (
https://hackage.haskell.org/package/hint) for some inspiration.
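For instance, a minimal sketch along those lines using hint, where the
module file, module name, and binding ("Plugin.hs", "Plugin", "greet")
are all made up:

    import Language.Haskell.Interpreter

    -- Load a module from source, bring its bindings into scope, and pull
    -- one of them out at a chosen type.
    main :: IO ()
    main = do
      r <- runInterpreter $ do
        loadModules ["Plugin.hs"]
        setTopLevelModules ["Plugin"]
        interpret "greet" (as :: String -> String)
      case r of
        Left err    -> print err
        Right greet -> putStrLn (greet "world")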
I was playing around with this sort of thing because I wanted to use
Haskell modules as plugins for a program that I wrote. I got it working -
the program could load and run the plugin module - but decided to go back
to a fraught stringly-typed interface between two separately compiled
processes.
One reason was that the dynamic loading approach was too slow, and I wasn't
interested in figuring out how to make it faster.
Another reason was that my program is intended for use in certain Haskell
projects, and it's unlikely that the GHC compiler embedded in the program
will match the version that the user has for their project. I didn't want
to allow the user to write code that compiles with their GHC but not when
loaded into my program (e.g. if their GHC version is newer and they use a
new feature).

Yeah, thanks for the suggestions; I explored the GHC API and hint a few years ago (four, actually). I even contributed to GHC's linker to allow hint to do multi-layered interpretation, back when I was using it to build a distributed, Cloud Haskell-backed service that was the precursor to the system I'm building now. Sadly, Cloud Haskell has been pretty much abandoned in the last few years, so that possibility went away. And hint is primarily aimed at interpretation, not compilation; I realise you're suggesting hint as an example of how the GHC API could be used, not necessarily saying to use hint itself, but I'm interested in compilation, not interpretation.
If I understand what you're suggesting, I think it'd become pretty tedious if every single module we wrote had to import the GHC API just to import other modules. I wasn't intending that imports be dynamic, just that it'd be nice if an import were more explicit about exactly what it means, in the sense that other Haskell expressions are, and thus more first class. That would allow using Text as source, and obtaining that source from a database or some other place just as easily as off disk. (Something like the sketch below is roughly what's achievable today.)
https://github.com/haskell-hint/hint/issues/68
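A rough sketch of that “source from a database” flow as it can be approximated today, going through a temp file because GHC's loading path wants files on disk; fetchSource is a hypothetical stand-in for a real database query:

    {-# LANGUAGE OverloadedStrings #-}
    import qualified Data.Text as T
    import qualified Data.Text.IO as T
    import Language.Haskell.Interpreter
    import System.FilePath ((</>))
    import System.IO.Temp (withSystemTempDirectory)

    -- Stand-in for fetching a module's source from a database.
    fetchSource :: IO T.Text
    fetchSource = pure $ T.unlines
      [ "module Plugin where"
      , "greet :: String -> String"
      , "greet n = \"hello, \" ++ n"
      ]

    main :: IO ()
    main = withSystemTempDirectory "mods" $ \dir -> do
      src <- fetchSource
      let path = dir </> "Plugin.hs"
      T.writeFile path src         -- round-trip through disk
      r <- runInterpreter $ do
        loadModules [path]
        setTopLevelModules ["Plugin"]
        interpret "greet" (as :: String -> String)
      either print (\greet -> putStrLn (greet "cafe")) r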

Nix is an example of a language with first class imports: `import` is a plain function that returns the "module", which is just a record like any other. Based on my experience, it's not all that great. Of course that could be because the nix language itself is weak, with dynamic typing and no user-defined data types. But the main effect is that it's hard to tell who imports what or where some value comes from, because an import may syntactically occur in any expression, and the module may then be passed around arbitrarily far before being used. As opposed to haskell, where you just look at the top of the file and a single step takes you to the definition.

An advantage is that a "module" can of course just be a function which returns a module, which is similar to how I imagine ML functors being, except all in the value language. It's used to conveniently inject a bunch of variables into scope. Since there are no user-defined data types, though, none of the traditional "define a hash map with built-in hash function" type stuff.

I've (very) occasionally wanted a parameterized module in haskell. The closest analog is just a normal record, and you can then use RecordWildCards like a local import (a sketch follows below). It would be nice for DSLs if records and modules could be unified that way, so you could have both local imports and parameterized modules. I haven't really come up with a use for it outside of one particular DSL though. I gather there's a deep history for this in ML, and I recall some dependently typed languages like Agda or Idris have unified modules and records to a greater degree than haskell.

For the GHC API, I have a program that uses it to implement a REPL which is then the main text-oriented UI. I originally wanted to use dynamic loading to be able to write and insert code at runtime, but never got that going. Instead it's reasonably fast to rebuild, relink, and reload the program, which seems like a less finicky way to go about it, and I settle for the REPL, which compiles fragments to modify the state, but those fragments are not themselves persistent unless I paste them into a module somewhere.
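A sketch of that record-as-parameterized-module pattern; all the names here are invented:

    {-# LANGUAGE RecordWildCards #-}
    import Data.Char (toUpper)
    import Data.List (intercalate)

    -- A "module" is just a record of operations; building it with
    -- different fields plays the role of an ML functor application.
    data StringOps = StringOps
      { joinWith :: String -> [String] -> String
      , clean    :: String -> String
      }

    plainOps :: StringOps
    plainOps = StringOps
      { joinWith = intercalate
      , clean    = filter (/= ' ')
      }

    shoutyOps :: StringOps
    shoutyOps = plainOps { clean = map toUpper . filter (/= ' ') }

    -- RecordWildCards acts as a local import: the record's fields come
    -- into scope as ordinary names for the rest of the function.
    render :: StringOps -> [String] -> String
    render StringOps{..} = joinWith ", " . map clean

    -- render shoutyOps ["a b", "c"]  ==>  "AB, C"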

Personally, what I thought of immediately would be using dlopen() to import custom .so files (optionally located in database blobs?). Or perhaps the serialisable Free Function of a Category, which looked interesting and inspired a project: https://www.youtube.com/watch?v=xZmPuz9m2t0

Ta-ta

On Sun, Dec 8, 2024 at 1:12 PM Dan Dart
Personally, what I thought of immediately would be using dlopen() to import custom .so files (optionally located in database blobs?)
This used to be possible with the plugins package, but I think it's bitrotted now. Supposedly there are ghc-lib functions that can do it, but I don't know if you can apply them to a BLOB or if it has to be a disk file.

--
brandon s allbery kf8nh
allbery.b@gmail.com
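[One concrete option in that direction is GHC's own runtime linker, exposed via the ghci package's GHCi.ObjLink. A rough sketch; the object file, the exported symbol, and its type are all hypothetical, the object must have been built by a GHC matching the host's ABI, and, as suspected above, it does have to be a file on disk:

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.Ptr (FunPtr, castPtrToFunPtr)
    import GHCi.ObjLink
      (ShouldRetainCAFs (RetainCAFs), initObjLinker, loadObj, lookupSymbol, resolveObjs)

    -- Assume Plugin.o was built with a matching GHC and contains
    -- "foreign export ccall plugin_entry :: Int -> Int".
    foreign import ccall "dynamic"
      callEntry :: FunPtr (Int -> Int) -> Int -> Int

    main :: IO ()
    main = do
      initObjLinker RetainCAFs
      loadObj "Plugin.o"          -- has to be a file on disk
      ok <- resolveObjs
      if not ok
        then putStrLn "linking failed"
        else do
          msym <- lookupSymbol "plugin_entry"
          case msym of
            Nothing -> putStrLn "symbol not found"
            Just p  -> print (callEntry (castPtrToFunPtr p) 20)
]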
participants (5)
- Brandon Allbery
- Dan Dart
- Evan Laforge
- Isaac Elliott
- julian getcontented.com.au