Perhaps you are correct.
That said: the retpoline-style mitigation can only recover the performance of normal pipelining / branch prediction if you statically know the common jump targets, which quickly pushes you toward whole-program compilation strategies like type-directed defunctionalization.
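For intuition, here is a minimal Haskell sketch of what defunctionalization buys you (all names are illustrative, not GHC output): the higher-order call compiles to an indirect jump with an unknown target, while the defunctionalized version dispatches by case analysis over a closed data type, so every target is statically known and ordinary branch prediction applies.

    module Defunc where

    -- Higher-order version: the call to `f` is an indirect jump,
    -- because the target is only known at run time.
    mapHO :: (Int -> Int) -> [Int] -> [Int]
    mapHO f = map f

    -- Defunctionalized version: every function that may be passed is
    -- reified as a constructor of a closed data type...
    data Fn = Increment | Scale Int

    -- ...and application becomes first-order case analysis, whose
    -- branch targets are statically known.
    apply :: Fn -> Int -> Int
    apply Increment n = n + 1
    apply (Scale k) n = k * n

    mapDefunc :: Fn -> [Int] -> [Int]
    mapDefunc f = map (apply f)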
Either way:
1) the attacks require remote code execution.
2) the data exfiltration risk only matters if there's both remote code execution and a communication channel to exfiltrate with.
On consumer-facing desktops / laptops, the best immediate mitigation is to make sure you're using Firefox 57.0.4 (already out) and/or Chrome >= 64 (due out later this month). JavaScript in browsers is a remote code execution environment by design! There is a very simple mitigation in the JavaScript case: e.g. Firefox is reducing the resolution of its high-precision JS timer to 20 microseconds, which AFAICT is a tad too coarse for the applicable timing side channels.
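For intuition only, here is a tiny Haskell sketch (hypothetical helper, not Firefox code) of what reducing timer resolution to 20 microseconds means: timestamps get rounded down to a 20 µs grid, so two events that land in the same bucket become indistinguishable to the attacker's timer.

    module CoarseTimer where

    -- Illustrative only: round a timestamp (in seconds) down to a
    -- 20-microsecond grid.
    coarsen :: Double -> Double
    coarsen t = fromIntegral (floor (t / bucket) :: Integer) * bucket
      where
        bucket = 20e-6  -- 20 microseconds

    -- Two events less than 20 µs apart may fall into the same bucket,
    -- which is what defeats the fine-grained timing measurement.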
On the server end of things:
Don't allow unauthorized code execution / remote code execution! The usual advice still applies: don't allow code injection, buggy C parsers, or return-oriented / buffer-overflow code injection hijinks.
Security is about defense in depth. This new class of attacks just means that remote code execution, where the attacker knows how to interpret the memory layout of the target process, is game over.
I guess this attack does increase the value proposition of system-configuration tools that whitelist the set of processes a system is expected to run.
Will we see attacks that masquerade as system benchmarking / microbenchmarking tools?
Point being: yes, it's a new, very powerful attack. But that does not mean separate compilation and good performance for higher-order programming languages are now disallowed. It just means there's more science and engineering to be done!
> The only impacted code is the code which should already
> be engineered to be side channel resistant... which already needs to be
> written in a way that has constant control flow and memory lookup.
As far as I understand, that's not really true. If you have a process which has secrets that you do not want to leak to arbitrary other code running on the same CPU, then not only do you need to avoid indirect branches in your side-channel-resistant part (as is the case today), but the *rest* of the program also should not contain indirect branches (assuming the presence of gadgets which make memory leaking possible). So even if your crypto library uses no indirect branches and is side-channel resistant, that is no longer enough: if you link it into a program where other parts of the program have indirect branches, then those branches can potentially be used to leak the crypto keys.
So in general, you need to apply mitigations for this attack if you, at any time, store secrets in process memory that you do not want to be leaked. And since this is a hardware bug, "leaked" means potentially leaked to arbitrary users: privilege separation provided by the OS does not really help here, so in theory it may even be possible to leak secrets from JavaScript running in a browser sandbox, for example.
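To make that concrete, a minimal Haskell sketch (all names hypothetical): the comparison itself is branch-free with a fixed memory access pattern, but an unrelated higher-order call elsewhere in the same process still compiles to an indirect jump, which is exactly the kind of branch a Spectre variant-2 attacker can mistrain.

    module Gadget where

    import Data.Bits (xor, (.|.))
    import Data.Word (Word8)

    -- Side-channel-resistant comparison: no data-dependent control
    -- flow, no data-dependent memory lookups.
    constantTimeEq :: [Word8] -> [Word8] -> Bool
    constantTimeEq xs ys =
      length xs == length ys &&
      foldl (.|.) 0 (zipWith xor xs ys) == 0

    -- Elsewhere in the same process: an ordinary higher-order call.
    -- The call to `handler` is an indirect jump, even though this code
    -- never touches the secret, and that is the branch whose prediction
    -- can be poisoned.
    dispatch :: (Int -> IO ()) -> Int -> IO ()
    dispatch handler event = handler event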
Indeed. It's worth noting that the discussed cases where you can recover the perf benefits of branch / jump prediction only work in the context of a first-order and/or whole-program compilation approach. The GHC RTS and design are not compatible with those approaches today.
I suspect you could get them to work in a whole-program optimizing compiler like MLton, or a hypothetical compiler for Haskell with a different RTS representation.
Note that both GCC and LLVM will be learning this retpoline technique.
_______________________________________________
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs