
Oh, sorry. Statistics is down the hall =)
There is another often-used method which might be worth considering. If you don't need the absolute accuracy that the "boxcar" moving average gives, then perhaps an exponentially weighted moving average (aka an exponential filter) will do. It has the advantage of being easier to code, requiring less CPU power (normally), and needing only one variable for long-term storage. There's a good description at: http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average

Simply put, you save only the result, not the individual samples. To update it with a new sample, you first remove the "equivalent" of one sample, then add your new sample. How do you remove the oldest sample? You can't, since you didn't save it. Instead you remove 1/n'th (for averaging over n samples) of the previous result, then add the new sample.

If you think about it, the longer a sample has been part of the result, the less effect it has on the result - exponentially less. This is why I said it isn't as accurate as the boxcar method. But for many situations (most of the ones I've run into) it suffices, and indeed it can be better, since it dampens (filters) step changes.

As an equation:

    average_new = average_old - (average_old / n) + (sample_new / n)

Or, tweaking for efficiency:

    average_new = average_old + ((sample_new - average_old) / n)

Note: You need to use floating-point numbers for the calculation and the resulting average, even if the samples are integers. Also, choosing a binary number for n should help speed up the division, if the math pack takes advantage of it.

I realize that this doesn't directly address the question which Alex posed, but I thought that perhaps it might be of interest as an alternative.

John

John Souvestre - New Orleans LA - (504) 454-0899