
Kent Karlsson wrote:
Default definitions may be inefficient, but in my opinion, default definitions for approximate operations should not give drastically lower accuracy, and should certainly not violate any other reasonable expectations (for example, that sin x returns x for x close to 0).
I agree. The idea of giving defaults was that class methods are not always independent, so no harm is done by defining some in terms of others, but there is no obligation to supply them. This thinking did not take into account problems of numerical accuracy with types that have fixed-size floating-point representations. If a default definition can have seriously bad numerical behaviour then it should never be used, and I see no point in supplying it. To do so would just encourage users to accept the definition and get bad results. --brian
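
To make the numerical point concrete, here is a minimal Haskell sketch; the class Approx and the method expm1 are hypothetical, not part of any proposal in this thread. The default defines expm1 x as exp x - 1, which is mathematically correct but suffers catastrophic cancellation for x near 0 in a fixed-size floating-point type, so an instance that silently accepts the default gets the bad behaviour.

    -- Hypothetical class: the default is mathematically equivalent to the
    -- intended operation but numerically poor near 0.
    class Floating a => Approx a where
      -- expm1 x should compute exp x - 1 accurately, even for tiny x.
      expm1 :: a -> a
      expm1 x = exp x - 1     -- default: catastrophic cancellation for small x

    instance Approx Double    -- silently accepts the default

    main :: IO ()
    main = do
      let x = 1e-15 :: Double
      print (expm1 x)         -- on IEEE doubles, roughly 1.11e-15: off by about 11%
      print x                 -- the true value of expm1 x is approximately x here

This is exactly the situation described above: the default produces an answer, but one with much lower accuracy than a dedicated definition for Double, and a user who accepts it unknowingly gets bad results.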