
On Thu, Sep 24, 2020 at 09:15:19PM +0200, Dominik Bollmann wrote:
> Thank you for the input and hints, Viktor and Brent. I appreciate it!
> I'll try to come up with a better algorithm.
Good luck. Indeed, once an efficient algorithm is implemented, the bulk of the runtime goes into the I/O and deserialisation of the input values. With `getContents` and `readInt` from Data.ByteString.Lazy.Char8 the runtime for 100 million ints was ~6.8 seconds, while with `stdin` and `readInt` from Data.ByteString.Streaming it was ~25s; a more efficient `readInt` replacement for streaming ByteStrings brings that down to ~5.5s.

A loop in C using `scanf("%" PRIu64, &n)` decodes 100M ints in ~10s on the same machine, which is slower than the Haskell code that does the same decoding and also solves the exercise. This is likely because stdio(3) is not particularly efficient, and scanf(3) has to reparse the format string on every call.

In any case, this clearly points at reading and converting the ASCII decimals as the dominant cost in this problem.

--
Viktor.
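P.S. In case it helps, here's a minimal sketch of the lazy-ByteString reading loop described above. It assumes whitespace-separated decimal ints on stdin, and the `sum` at the end is just a hypothetical stand-in for whatever the exercise actually computes:

    import Data.Char (isSpace)
    import qualified Data.ByteString.Lazy.Char8 as L

    -- Peel Ints off the front of the (lazily read) input, skipping
    -- the whitespace between them, until no more can be parsed.
    readInts :: L.ByteString -> [Int]
    readInts s =
      case L.readInt (L.dropWhile isSpace s) of
        Just (n, rest) -> n : readInts rest
        Nothing        -> []

    main :: IO ()
    main = do
      input <- L.getContents
      print (sum (readInts input))  -- stand-in for the real computation

The key point is that `readInt` parses the digits directly out of the ByteString buffer, with no intermediate String allocation, which is why this sort of loop is so much cheaper than String-based parsing.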