
On 22.04.21 at 22:36, Sven Panne wrote:
> On Thu, 22 Apr 2021 at 21:29, Joachim Durchholz <jo@durchholz.org> wrote:
>> True, but the semantics behind each syscall can be horrendously complex. [...]
> That's correct, but a sandbox doesn't need to implement all of it. Checking that e.g. only something below a given directory can be opened (perhaps e.g. only for reading) is relatively easy,
That's exactly what I mean: the API is deceptively simple, but the actual semantics is pretty complicated, even open-ended. Things that can complicate the picture:
- symlinks
- mount -o remount /dir
- a subdirectory is prohibited via its NFS path, but it happens to live on the local machine, so you have to remember to prohibit both paths
- NFS in general can allow mapping
- and then there's VFS, which offers even more mapping options
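
To make the symlink item concrete, here's a minimal sketch of the kind of path filter a sandbox might implement (the function names and the /sandbox/data prefix are made up for illustration):

    /* Hypothetical sandbox rule: allow open() only below /sandbox/data. */
    #include <limits.h>
    #include <stdlib.h>
    #include <string.h>

    /* Naive: checks the name the caller passed in, not what it resolves to.
     * A symlink /sandbox/data/evil -> /etc passes this test. */
    int allowed_naive(const char *path)
    {
        return strncmp(path, "/sandbox/data/", 14) == 0;
    }

    /* Better, but still not race-free: canonicalise first, then compare. */
    int allowed_canonical(const char *path)
    {
        char resolved[PATH_MAX];
        if (realpath(path, resolved) == NULL)
            return 0;
        return strncmp(resolved, "/sandbox/data/", 14) == 0;
    }

And even the realpath() variant only inspects a snapshot of the filesystem; nothing stops the path from being re-pointed between the check and the actual open(), which is the atomicity problem again.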
> prohibiting creation of sockets, limiting the amount of memory which can be mmapped (and how), etc. etc.
These things can indeed be managed at the OS level. Though in practice it's surprisingly hard to close all loopholes. And attackers think in terms of loopholes.
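
To give one concrete example of the OS-level approach: "prohibiting creation of sockets" can be done with a seccomp-BPF filter. This is a minimal sketch, not a hardened sandbox - it assumes Linux on x86-64, omits the architecture check a real filter needs, and the function name is made up:

    #include <errno.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>
    #include <stddef.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>

    /* Install a filter that makes socket(2) fail with EPERM and lets
     * every other syscall through unchanged. */
    int deny_sockets(void)
    {
        struct sock_filter filter[] = {
            /* load the syscall number */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, nr)),
            /* if it is socket(2): return EPERM, else: allow */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_socket, 0, 1),
            BPF_STMT(BPF_RET | BPF_K,
                     SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .len    = (unsigned short)(sizeof(filter) / sizeof(filter[0])),
            .filter = filter,
        };

        /* mandatory, so the filter can't be abused around setuid programs */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0)
            return -1;
        return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
    }

The filter itself is the easy part; the hard part is deciding, for the hundreds of remaining syscalls and their argument combinations, which ones are still "safe" to let through.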
> I can hardly imagine what could be considered "safe" for a program which can use all of e.g. POSIX.
Well, that's exactly my point: sandboxes live at the POSIX level (actually, at the level of the operating system), and that's a huge and complicated surface to harden.
>> That's why you can have a sandbox and this still doesn't protect you from symlink timing attacks on /tmp [...]
> Well, if it is *your* sandbox and some processes from outside the sandbox can change its contents arbitrarily, then you have more security issues than simple symlink attacks.
I'm not sure what you mean. The sandboxed program can run such a symlink attack on its own - assuming it officially has access to /tmp. Of course, sandbox makers have been made aware of this attack and are hardening their sandboxes against it. The point isn't this particular attack; it's that seemingly simple APIs can offer very unexpected loopholes, just by not providing atomicity. I simply don't believe it's possible to build a reliable sandbox: 20 years of JavaScript hardening attempts have proved that it's possible to make attacks harder, but we still see Pwn2Own successes.
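
For reference, the attack I have in mind is the classic check-then-use race on a shared directory. A sketch (the helper names are made up):

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* VULNERABLE: there is a window between lstat() and open() in which
     * an attacker can drop a symlink at the path, e.g.
     * /tmp/report -> /home/victim/.profile, and the open() follows it. */
    int create_report_racy(const char *path)
    {
        struct stat st;
        if (lstat(path, &st) == 0)      /* "doesn't exist yet?" ...        */
            return -1;                  /* ... the answer is already stale */
        return open(path, O_WRONLY | O_CREAT, 0600);
    }

    /* Better: let the kernel do check and create in one atomic step.
     * With O_CREAT, O_EXCL also refuses a symlink at the final component. */
    int create_report_atomic(const char *path)
    {
        return open(path, O_WRONLY | O_CREAT | O_EXCL | O_NOFOLLOW, 0600);
    }

Nothing in the open(2) signature hints that the first version is broken; you have to know to reach for O_EXCL (or mkstemp()) - that's what I mean by the API being deceptively simple.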
>> Except that there is no such thing as an inherently safe syscall interface; there are unsafe ways to use it.
> And that's exactly the reason why you don't give the full power of all syscalls to a sandboxed program.
Which is exactly the point at which sandbox makers get pressured into adding yet another feature to work around a restriction, on the grounds that "but THIS way of doing it is safe" - which it often enough isn't. The semantics is too complex; it's riddled with aliasing and atomicity problems.
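
A concrete example of how much machinery it takes to close even one of those loopholes: to make "only open files below this directory" hold up against symlinks and "..", Linux eventually grew openat2(2) with RESOLVE_BENEATH. A sketch, assuming Linux >= 5.6 and headers that define SYS_openat2 (the function name is made up; there is no glibc wrapper, so it goes through syscall(2)):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/openat2.h>      /* struct open_how, RESOLVE_* */
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Open 'name' relative to dirfd, letting the *kernel* enforce that
     * resolution never escapes dirfd and never follows a symlink. */
    int open_beneath(int dirfd, const char *name)
    {
        struct open_how how = {
            .flags   = O_RDONLY,
            .resolve = RESOLVE_BENEATH | RESOLVE_NO_SYMLINKS,
        };
        return (int) syscall(SYS_openat2, dirfd, name, &how, sizeof(how));
    }

Usage would be opening the allowed directory once (open("/sandbox/data", O_RDONLY | O_DIRECTORY)) and routing every file access through a helper like this; anything that tries to escape simply fails. That this needed a new syscall (Linux 5.6, 2020) illustrates the point: the "relatively easy" restriction wasn't easy to get right at the POSIX level.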
>> And that's where language-based safety can help. [...]
> Only if *all* of your program is written in that single language, which is hardly the case for any non-toy program: sooner or later you call out to a C library, and then all bets are off.
That's why the approaches are complementary. They can't replace each other.
> In general: I think all security-related discussions are futile unless one precisely defines what is considered a threat and what is considered to be safe. And I think we agree to disagree here. :-)
Actually I agree with that. I just disagree with the idea that making syscall-level sandboxes has a better ROI than making language checkers.