
On Wed, Apr 21, 2021 at 08:45:03AM -0400, Mario wrote:
> If your main goal is a defensive position you shouldn't be in an open field to begin with. You should encamp on a high ground, with plenty of nearby lumber to build ramparts, and a secure source of fresh water to withstand a siege.
> That kind of position limits your maneuverability, admittedly, but it's much easier to protect.
On the topic of whether safety can/should be in the language or be implemented via containerisation, much depends on the details of how the language goes about sandboxing.

Some decades back I used Safe Tcl to limit what untrusted scripts could execute in a server process. With Safe Tcl one populates the untrusted interpreter with a very limited set of verbs, plus optional aliases to wrapper verbs that run in a separate trusted interpreter. The wrapper verbs can inspect the arguments of restricted commands and allow only appropriate access. Similar features are likely available by embedding Lua or another similar interpreter where it is possible to explicitly expose only a specific language dialect. Such designs tend to be robust.

On the other hand, I've always been sceptical of the sandboxing in Java: it was much too complex to be obviously correct, and the long history of bugs in Java applet sandboxing justified the scepticism. Fortunately we no longer use Java applets, though JavaScript security in browsers continues to be challenging.

So it should not be surprising that I'd also be sceptical of any strong safety claims from Safe Haskell. It seems unlikely to be simple enough to be "obviously correct". This does not mean that it can't be a useful element of defense in depth, but it is difficult to envision how it would be sufficient on its own.

The challenge with Safe Haskell, unlike with Safe Tcl, is that there is no hard separation between the untrusted and trusted components of the program: they are separated only by static analysis, not by distinct execution engines with a controlled communication channel, as with Safe Tcl and its slave interpreters. (A small example of what that compile-time boundary amounts to is in the postscript below.)

For Safe Haskell to be safe, I'd want to see internal virtualisation, with untrusted code using a stripped-down RTS that is incapable of performing external I/O, and all unsafe operations proxied into a separate trusted RTS running separately compiled code. All unsafe operations in the untrusted RTS would then need to be RPC calls into the trusted RTS, roughly the "controlled channel" shape sketched in the second example below.

The difficulty, though, is that unlike the case with Tcl, many basic Haskell libraries that export safe pure interfaces internally use unsafe interfaces (e.g. raw memory access) for performance reasons. So it is rather unclear how to expose a usable untrusted subset of the language except through static analysis (type-level guarantees) of the sort that I'm reluctant to trust as a result of its high complexity.

So while I take no position on whether Safe Haskell should or should not be abandoned, I agree that it is a valid question: the problem is rather non-trivial, and it is not clear whether a sufficiently robust design is actually possible to implement.

-- 
Viktor.
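
P.S. To make the "static analysis only" point concrete, the boundary Safe Haskell draws is roughly the compile-time check below. The module and function names are made up for illustration; the pragmas and the "import safe" syntax are the actual Safe Haskell features, if I recall them correctly.

    {-# LANGUAGE Safe #-}
    -- Plugin.hs: a module we might accept from an untrusted contributor.
    -- Under -XSafe, GHC refuses to compile it if it imports anything not
    -- marked Safe/Trustworthy, so for example this line would be rejected:
    --
    --     import System.IO.Unsafe (unsafePerformIO)  -- error under -XSafe
    --
    module Plugin (summarise) where

    import Data.Char (toUpper)
    import Data.List (intercalate)

    -- The host only ever evaluates this pure function, so what remains is
    -- resource exhaustion (CPU, memory, non-termination), which the types
    -- do not bound -- hence the need for defense in depth.
    summarise :: [String] -> String
    summarise = intercalate ", " . map (map toUpper)

    -- Host.hs: trusted code.  "import safe" insists that Plugin really is
    -- a Safe module; with -fpackage-trust its package must be trusted too.
    {-# LANGUAGE Trustworthy #-}
    module Main (main) where

    import safe Plugin (summarise)

    main :: IO ()
    main = putStrLn (summarise ["alpha", "beta"])

All of the enforcement happens inside one compiler and one RTS, which is exactly the part I find hard to trust at this level of complexity.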
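And the "everything proxied through a controlled channel" idea is, within a single process, roughly the shape below; in the design I described, the interpreter would live in a separately compiled trusted RTS and each request would be an actual RPC. This is only a sketch, and every name in it is hypothetical.

    {-# LANGUAGE GADTs #-}
    -- Channel.hs: untrusted code can only *describe* the operations it
    -- wants as data; the trusted side inspects each request and decides
    -- whether to honour it, much like a Safe Tcl alias to a wrapper verb
    -- in the trusted interpreter.
    module Channel (Request (..), Script (..), runTrusted) where

    import System.Environment (lookupEnv)

    -- The only operations the untrusted side may request.
    data Request a where
      GetEnvVar :: String -> Request (Maybe String)
      Emit      :: String -> Request ()

    -- An untrusted "program" is pure data: a result, or a request paired
    -- with a continuation.  It holds no IO of its own.
    data Script a where
      Done :: a -> Script a
      Ask  :: Request b -> (b -> Script a) -> Script a

    -- Only the trusted side turns a Script into IO, vetting every
    -- argument first (here, an allow-list of environment variables).
    runTrusted :: Script a -> IO a
    runTrusted (Done x)  = pure x
    runTrusted (Ask r k) = perform r >>= runTrusted . k
      where
        perform :: Request b -> IO b
        perform (GetEnvVar name)
          | name `elem` ["LANG", "TZ"] = lookupEnv name
          | otherwise                  = pure Nothing      -- deny the rest
        perform (Emit msg) = putStrLn ("[untrusted] " ++ msg)

The catch is the one I mentioned: the untrusted side still needs the ordinary libraries to build its Script values, and those libraries reach for unsafe primitives internally, so the separation is never as clean as it is with two Tcl interpreters.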