
On 14.06.20 at 12:25, Henning Thielemann wrote:
> On Sun, 14 Jun 2020, Joachim Durchholz wrote:
>>>> On 12.06.20 at 21:51, Henning Thielemann wrote:
>>>>> It could have its use to convert legacy code in legacy languages to shiny new Haskell code. :-)
>>>> Well, the AI would have to know what "shiny" code is.
>>> I thought that this is the point of the AI approach and the training.
>> In that case, the trainer needs to know it. Plus the trainer needs to know how to make the AI recognize it.
>> Otherwise, you'll just get a C++/VB/Cobol program written in Haskell.
> This would still be helpful. Imagine we transcode LAPACK to Haskell. It would still be FORTRAN or C encoded in Haskell with the same old FORTRAN API, but we could easily generalize the code to any floating-point number type that any Haskell library provides, e.g. numbers with extended precision, decimal numbers, or interval arithmetic.
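
For concreteness: the kind of generalization Henning has in mind might look
like the following sketch (my own toy version of a DNRM2-style scaled
Euclidean norm, not actual LAPACK code), written once for any number type
with Floating and Ord instances:

  -- Sketch of a BLAS DNRM2-style norm: keep a running scale and a
  -- scaled sum of squares to avoid overflow/underflow, then combine.
  norm2 :: (Floating a, Ord a) => [a] -> a
  norm2 = finish . foldl step (0, 1)
    where
      step (scale, ssq) x
        | ax == 0    = (scale, ssq)
        | ax > scale = (ax, 1 + ssq * (scale / ax) ^ (2 :: Int))
        | otherwise  = (scale, ssq + (ax / scale) ^ (2 :: Int))
        where ax = abs x
      finish (scale, ssq) = scale * sqrt ssq

  -- e.g. norm2 [3, 4] :: Double == 5.0, and the same source also
  -- instantiates at Float or at a library-provided extended type.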
No, you wouldn't be able to generalize. Well, sort of; some things will work. But there are corner cases where e.g. Float and Double behave differently. For example, your tables of interpolation points may round differently, so the algorithms that use these tables need per-precision adjustments.

I haven't looked into LAPACK itself, but I know that floating point is infamous for the amount of fiddly detail that you have to consider, sometimes separately for each bit width.

Fun fact: numeric algorithms can break, i.e. produce incorrect results, if you do intermediate calculations with additional mantissa bits. This hurt the 8087 badly: it kept intermediates on an internal register stack with 80-bit extended precision, because more precision, so why not? The catch is double rounding: round once to extended precision and once more when storing to 64 bits, and the result can differ from rounding the exact value directly.

So I tend to be scared if somebody claims to generalize floating-point code. Either because that person knows too little, or because that person knows so much more than me that they actually know how to deal with these complications :-)

Regards,
Jo
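
P.S.: A tiny example of the "interpolation points round differently" hazard.
This is standard Haskell, and the exact outcome depends on your libm; the
comments show what common IEEE setups produce. In exact arithmetic
sin (pi/6) is exactly 1/2, so a table entry built from it sits right on a
comparison boundary, and Float and Double can land on opposite sides of it:

  main :: IO ()
  main = do
    -- In exact arithmetic both lines would print False.
    print (sin (pi / 6 :: Float)  < 0.5)  -- typically False: rounds to 0.5
    print (sin (pi / 6 :: Double) < 0.5)  -- typically True: just below 0.5

An interpolation routine that branches on such a comparison takes different
paths at different precisions, which is exactly the kind of per-width
adjustment I mean.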