Optimization flag changing result of code execution

I was trying to solve a computational problem from James P. Sethna's book Statistical Mechanics: Entropy, Order Parameters, and Complexity [1]. The problem is on page 19 of the linked PDF and is titled "Six degrees of separation". For it I came up with this code: http://hpaste.org/84114 It runs fine when compiled with -O0 and consistently yields an answer around 10, but with -O1 and -O2 it consistently gives an answer around 25. Can somebody explain what is happening here?
[1] http://pages.physics.cornell.edu/~sethna/StatMech/EntropyOrderParametersComp...
Azeem

Hey Azeem,
have you tried running the same calculation using rationals? There are some
subtleties to writing numerically stable code using floats and doubles,
where simple optimizations change the order of operations in ways that
*significantly* change the result. In this case it looks like you're
averaging the averages, which I *believe* can get pretty nasty in terms of
numerical precision. Rationals would be a bit slower, but you could then
sort out which number is more correct.
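
A minimal, self-contained sketch of this kind of effect (not code from Azeem's paste): reordering a Double sum changes the answer, while the same sum carried out exactly over Rational does not.

    import Data.List (foldl')

    -- Floating-point addition is not associative, so the order in which a
    -- sum is accumulated changes the Double result; exact Rational
    -- arithmetic does not care about the order.
    main :: IO ()
    main = do
      let xs = [1e16, 1, -1e16, 1] :: [Double]
          ys = map toRational xs
      print (foldl' (+) 0 xs)                           -- one accumulation order
      print (foldl' (+) 0 (reverse xs))                 -- another order, different Double
      print (fromRational (foldl' (+) 0 ys) :: Double)  -- exact sum, rounded once: 2.0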

Hi Carter,
Thank you for your help, but I can confirm that this is not due to floating-point errors. My own hunch is that it is due to the way I am using random number generation from System.Random.MWC. To check it I wrote a version of the mkNetwork function using random number generation from System.Random, and it works fine with optimizations turned on. So any ideas why optimizations are messing with System.Random.MWC?
Azeem
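
For comparison, a rough sketch of drawing the same kind of index pairs with System.Random instead of mwc-random; `l` here is just a placeholder for the network size used in the paste, and the sizes are made up.

    import System.Random (randomRIO)
    import qualified Data.Vector as V

    -- Draw a random pair of node indices in [0, l-1] with System.Random.
    randomPair :: Int -> IO (Int, Int)
    randomPair l = do
      a <- randomRIO (0, l - 1)
      b <- randomRIO (0, l - 1)
      return (a, b)

    main :: IO ()
    main = V.replicateM 10 (randomPair 100) >>= print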

Perhaps the problem is in withSystemRandom, which uses unsafePerformIO?
Does the problem persist if you seed your program with some predefined
seed?
Roman
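
A minimal sketch of what seeding with a predefined seed might look like, using mwc-random's initialize (explicit seed words) or create (the built-in default seed); the seed values below are arbitrary.

    import qualified System.Random.MWC as R
    import qualified Data.Vector.Unboxed as U
    import Data.Word (Word32)

    main :: IO ()
    main = do
      -- A fixed seed makes every run reproducible, so the -O0 and -O2
      -- builds can be compared on identical random streams.
      gen <- R.initialize (U.fromList [1, 2, 3, 4, 5 :: Word32])
      -- alternatively: gen <- R.create   (mwc-random's fixed default seed)
      x <- R.uniformR (0, 99 :: Int) gen
      print x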

Nope, that isn't the case either. Even if I make use of the default seed through create, the problem still remains. The problem seems to be in the generation of a vector of (a,a), i.e. in the part

V.generateM ((round $ p*(fromIntegral $ l*z)) `div` 2) (\i -> R.uniformR ((0,0), (l-1,l-1)) gen)

on line 16. Thanks again.
Azeem
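
A stripped-down reproduction of that call, assuming `l`, `z`, and `p` play the roles they appear to in the paste (the values below are made up), with a fixed seed so the output of the two builds can be diffed directly.

    import qualified System.Random.MWC as R
    import qualified Data.Vector as V

    main :: IO ()
    main = do
      gen <- R.create                 -- mwc-random's fixed default seed
      let l = 100 :: Int              -- number of nodes (arbitrary here)
          z = 4   :: Int              -- edges per node (arbitrary here)
          p = 1.0 :: Double           -- fraction of shortcuts (arbitrary here)
          n = round (p * fromIntegral (l * z)) `div` 2
      edges <- V.generateM n (\_ -> R.uniformR ((0, 0), (l - 1, l - 1)) gen) :: IO (V.Vector (Int, Int))
      -- Compare this output from the -O0 and -O2 builds; with a fixed seed
      -- the vectors should be identical if the generator is behaving.
      print (V.take 5 edges)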

I've tried to run your program and I get approximately the same results regardless of optimization level. Which versions of GHC, mwc-random, vector, and primitive are you using?

Aleksey Khudyakov
By approximate, do you mean you are getting Monte Carlo noise or floating-point noise? If the latter, then that's reasonable; if the former, then that's worrying.
Dominic
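
One rough way to tell the two apart, sketched here with a placeholder estimator rather than the actual simulation: rerun with several seeds and compare the seed-to-seed spread (Monte Carlo noise) with the difference between the -O0 and -O2 builds on the same seed (floating-point noise).

    import qualified System.Random.MWC as R
    import qualified Data.Vector.Unboxed as U
    import Data.Word (Word32)

    -- `estimate` stands in for whatever quantity the real simulation computes.
    estimate :: R.GenIO -> IO Double
    estimate gen = do
      xs <- mapM (\_ -> R.uniformR (0, 1 :: Double) gen) [1 .. 1000 :: Int]
      return (sum xs / 1000)

    main :: IO ()
    main = do
      -- Seed-to-seed spread is of the order of the statistical error;
      -- differences between two builds run on the *same* seed should be
      -- many orders of magnitude smaller if it is only rounding noise.
      results <- mapM (\s -> R.initialize (U.singleton s) >>= estimate) [1 .. 5 :: Word32]
      print results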

Difficult to say. I got values around 10 both with and without optimizations, so most likely it's MC noise. I was using GHC 7.6.2 and the latest vector/primitive/mwc-random; I didn't try to reproduce the bug with the versions Azeem ul-Hasan is using.
participants (5)
- Aleksey Khudyakov
- Azeem -ul-Hasan
- Carter Schonwald
- Dominic Steinitz
- Roman Cheplyaka