[Algorithms] Truncate then average? Or average then truncate?
From: Robin G. <rob...@gm...> - 2010-05-28 07:24:13
Got a bit of a brain block, and I'd welcome someone with a clearer picture of the correct thing to do.

I have a hardware controller that I'm writing firmware for, specifically debouncing button presses. That's gone fairly well by using a word for each sample of the key state. An interrupt timer fires and I sample the button states, one bit per key, dumping the current state into a ring buffer of N words. At key-scanning time I AND all the words in the buffer together, and if any of the bits in each "column" are set I register it as a keypress (rough sketch below). So I get an instant reaction to a keydown and an N-iteration delay before the columns clear, so I can register a keyup. This deals with noisy bounces on key presses and releases just fine.

I am now attempting to "debounce" an analog input. The ADC reads the input and returns a 10-bit value, the bottom two bits of which are subject to noise, but that's fine as I only need a value in the range 0..127. So my current effort is to read the ADC several times, sum the readings, divide by the number of samples, and then drop the bottom 3 bits (second sketch below). This works great for most values in the range, except for the smallest ones - 0, 1, 2, ... As I move the controller down towards the smallest values, the readings exhibit the noise and jump around out of order. Values where a lot of bits flip, like the boundary between 63 and 64, also let the noise propagate up into the final value. Technically, there should be no reason for even the smallest numbers to show the noise - only the bottom two bits are noisy and we're removing three bits from the final value.

In what order should I be doing this, and why? Why am I amplifying the noise to the point where it's a problem?

- Robin Green.
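[First sketch: a minimal version of the key-scan scheme described above, for illustration only. NUM_SAMPLES, read_key_bits() and the 16-key word width are assumed names/values, and it presumes active-low buttons (pressed = 0, the usual pull-up wiring), which is what makes the AND give an instant keydown and a keyup only after N clean samples; with active-high inputs the same window test would be an OR.]

#include <stdint.h>

#define NUM_SAMPLES 8   /* N words in the ring buffer; the real N is not given */

extern uint16_t read_key_bits(void);   /* placeholder for the actual port read */

static volatile uint16_t key_ring[NUM_SAMPLES]; /* one bit per key, one word per sample */
static volatile uint8_t  ring_index;

/* Timer interrupt: capture the raw key bits into the ring buffer. */
void sample_keys_isr(void)
{
    key_ring[ring_index] = read_key_bits();
    ring_index = (uint8_t)((ring_index + 1) % NUM_SAMPLES);
}

/* Scan time: AND all N words together.  With active-low buttons, any pressed
   sample pulls its column to 0 immediately (instant keydown); the column only
   reads 1 again after N consecutive released samples (delayed keyup). */
uint16_t scan_keys(void)
{
    uint16_t acc = 0xFFFF;
    for (uint8_t i = 0; i < NUM_SAMPLES; i++)
        acc &= key_ring[i];
    return acc;   /* bit = 0: key considered down, bit = 1: key up */
}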
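[Second sketch: the average-then-truncate read described for the analog input, again only as an illustration of the steps in the post. NUM_ADC_SAMPLES and read_adc() are assumed names; the sample count is not given in the post.]

#include <stdint.h>

#define NUM_ADC_SAMPLES 4               /* assumed sample count */

extern uint16_t read_adc(void);         /* placeholder: raw 10-bit result, 0..1023 */

/* Sum several readings, take the integer average, then drop the bottom
   3 bits to get the 7-bit value (0..127) the controller needs. */
uint8_t read_control(void)
{
    uint16_t sum = 0;
    for (uint8_t i = 0; i < NUM_ADC_SAMPLES; i++)
        sum += read_adc();              /* 4 * 1023 fits comfortably in 16 bits */

    uint16_t avg = sum / NUM_ADC_SAMPLES;   /* integer average, still 0..1023 */
    return (uint8_t)(avg >> 3);             /* keep the top 7 of the 10 bits */
}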