
Daniel Fischer wrote:
If the algorithm - including dt - is prescribed, fine, but I wonder what sort of deviation physicists would consider acceptable. For dt = 0.01, k = 2000 we have a relative error of about 2*10^(-5), is that within accepted bounds or not? (Any physicists hang about here?)
This is becoming explicitly off-topic... I didn't follow this thread; I had just one glimpse at the equations used, I had the impression that I saw the standard Euler extrapolation instead of something more stable, such as Verlet, and I switched off, having other stuff to do.

Now, mind you, *no physicist* will tell you a priori whether this or that relative error is acceptable or not. A relative error of 2*10^(-5) looks nice, but if you simulate a system in order to see whether it is stable or not, then it won't suffice. For example: is the Solar system eternal? On the other hand, if your system is inherently chaotic and ergodic, and if you are not interested in KAM tori or other islands of stability, then the simulation is much more tolerant: one chaos is essentially equivalent to another, and the details of the trajectory are irrelevant. Then small errors are not critical at all.

What some people do:

1. They compute the initial energy.
2. They solve the differential equations using some *good* methods.
3. After some steps they stop for a moment, scratch their heads, and recompute the energy. If it has changed a bit, they RENORMALIZE the velocity vectors in such a way that the energy *remains* constant unconditionally (see the sketches after this message).

Jerzy Karczmarczuk
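
Below is a minimal Haskell sketch of the velocity-Verlet step mentioned above, with a plain forward-Euler step for comparison. The one-dimensional harmonic "spring" and the constants k = 2000, dt = 0.01 are borrowed from the quoted numbers purely as an illustration; the names (verletStep, eulerStep, springAccel) and the whole setup are assumptions, not the code actually discussed in the thread.

  -- One velocity-Verlet step for a 1D particle whose acceleration
  -- depends only on position.
  type State = (Double, Double)        -- (position, velocity)

  verletStep :: (Double -> Double)     -- acceleration a(x)
             -> Double                 -- time step dt
             -> State -> State
  verletStep accel dt (x, v) = (x', v')
    where
      a  = accel x
      x' = x + v * dt + 0.5 * a * dt * dt
      a' = accel x'
      v' = v + 0.5 * (a + a') * dt

  -- The plain forward-Euler step, which drifts in energy much faster
  -- for oscillatory problems.
  eulerStep :: (Double -> Double) -> Double -> State -> State
  eulerStep accel dt (x, v) = (x + v * dt, v + accel x * dt)

  -- Illustrative harmonic oscillator: unit mass, k = 2000, dt = 0.01.
  springAccel :: Double -> Double
  springAccel x = -2000 * x

  trajectory :: Int -> State -> [State]
  trajectory n s0 = take n (iterate (verletStep springAccel 0.01) s0)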
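
And a similarly hedged sketch of the "renormalization" recipe in the numbered list: record the initial energy, integrate with whatever good method you like, and every so often rescale the velocity so the total energy returns to its recorded value. The mass, the potential and all names here are again illustrative assumptions, not the thread's code.

  data Particle = Particle { pos :: Double, vel :: Double }

  mass :: Double
  mass = 1.0

  potential :: Double -> Double        -- e.g. the same spring, (1/2) k x^2
  potential x = 0.5 * 2000 * x * x

  energy :: Particle -> Double
  energy p = 0.5 * mass * vel p * vel p + potential (pos p)

  -- Rescale the velocity so the total energy returns to the target e0
  -- recorded at the start.  Rescaling can only work while the required
  -- kinetic energy is non-negative and the current one is non-zero;
  -- otherwise the particle is returned unchanged.
  renormalize :: Double -> Particle -> Particle
  renormalize e0 p
    | ekin <= 0 || ekinWanted < 0 = p
    | otherwise                   = p { vel = vel p * sqrt (ekinWanted / ekin) }
    where
      ekin       = 0.5 * mass * vel p * vel p
      ekinWanted = e0 - potential (pos p)

In practice one would record e0 = energy initialState once at the start and apply renormalize e0 every few hundred integration steps.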