
On Fri, Sep 14, 2018 at 10:11:46PM +0200, Olaf Klinke wrote:
> Is the result of parsing Unix seconds to LocalTime even well-defined? A LocalTime could be in any time zone, while Unix epoch is relative to a certain time-point in UTC. Hence the parse result must depend on what time zone the LocalTime is referring to.
It's even worse: unix time may also refer to the host's local timezone, although these days almost everything sets the RTC to UTC and corrects for the local timezone on the fly (simply because this allows for somewhat sane handling of DST and such). Also, unix time may represent either actual seconds elapsed since the epoch, or "logical" seconds since the epoch (ignoring leap seconds, such that midnight is always a multiple of 86400 seconds).

However, once you pick a convention for the latter, you can convert unix timestamps to some "local time without timezone info" data structure (e.g. LocalTime); whether you assume UTC or any other timezone only becomes relevant when you want to convert that data structure into one that carries timezone info explicitly (like ZonedTime) or implicitly (like UTCTime).

https://hackage.haskell.org/package/time-1.9.2/docs/Data-Time.html provides a nice overview of the various date/time types defined in the `time` package and what they mean. The relevant ones here are:

- UTCTime: idealized (ignoring leap seconds) absolute moment in time
- LocalTime: moment in time, in some unspecified timezone
- ZonedTime: moment in time, in a specific timezone

The "morally correct" way of parsing unix time into anything would be to go through LocalTime first, then either convert it to UTCTime if the unix timestamp may be interpreted as being in UTC, or attach the correct timezone, yielding a ZonedTime. All of this assumes that you can afford to ignore leap seconds one way or another.
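
For concreteness, here's a minimal sketch of that route using the `time` package. The function names (unixToLocalTime and friends) are my own, and it assumes the "logical seconds" convention, i.e. every day is exactly 86400 seconds and leap seconds are ignored:

    import Data.Time

    -- Unix timestamp ("logical" seconds since 1970-01-01 00:00:00,
    -- leap seconds ignored) to a LocalTime, with no timezone assumed yet.
    unixToLocalTime :: Integer -> LocalTime
    unixToLocalTime s = LocalTime day (timeToTimeOfDay (secondsToDiffTime secs))
      where
        (days, secs) = s `divMod` 86400
        day = addDays days (fromGregorian 1970 1 1)

    -- If the timestamp may be interpreted as being in UTC, convert accordingly:
    unixToUTCTime :: Integer -> UTCTime
    unixToUTCTime = localTimeToUTC utc . unixToLocalTime

    -- If the timestamp is known to be wall-clock time in some other timezone,
    -- attach that timezone to get a ZonedTime:
    unixToZonedTime :: TimeZone -> Integer -> ZonedTime
    unixToZonedTime tz = flip ZonedTime tz . unixToLocalTime

If the UTC interpretation is what you want anyway, posixSecondsToUTCTime from Data.Time.Clock.POSIX should get you there more directly; going through LocalTime just makes the "which timezone?" decision explicit.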