Question on the ethics of averaging on a display

Started by TimB, May 23, 2021, 11:31 AM


TimB


Hi all

I'm thinking of expanding my range of temp meters. It's really just code changes, but anyway.

One product I want to make is a reference meter. My meter displays to 0.1°C. Internally it works in 0.02°C to 0.03°C steps due to the resolution of the ADC.

If I want to claim 0.01°C resolution and achieve it using some form of averaging, displaying digits that step in 0.01 increments, would that be dishonest? I'm sure others are doing this already, if I read the data sheets right.

In all honesty, working to 0.01°C is very hard and you are at the mercy of so many factors outside your control.

One thing I note is that at that level I'm getting very stable results with minimal averaging.

BTW, any ideas on how to increase resolution with averaging? What are the best averaging algorithms that fill in the missing steps?
 

TimB


https://www.silabs.com/documents/public/application-notes/an118.pdf

trastikata

Quote from: TimB on May 23, 2021, 11:36 AM
https://www.silabs.com/documents/public/application-notes/an118.pdf

When I started "playing" with ADC ICs, the first document I read on increasing ADC resolution by oversampling was this:

http://ww1.microchip.com/downloads/en/appnotes/doc8003.pdf


To answer your question about the ethics: virtually all digital sensors nowadays work by oversampling, and the manufacturers have no trouble labelling them as 20-bit ADC resolution, 22-bit ADC, etc.

If you read the theory of ADCs you will see that, for example, the 24-bit delta-sigma ADCs are actually 1-bit ADCs with oversampling and decimation   ;).
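
As a rough illustration (not code from those app notes; read_adc() here is just a placeholder for whatever the real 12-bit ADC read routine is), the oversample-and-decimate recipe in C: to gain n extra bits of resolution you sum 4^n samples and right-shift the sum by n, which only works if the signal carries at least about 1 LSB of noise.

#include <stdint.h>

/* Placeholder 12-bit ADC read - substitute the real driver call. */
extern uint16_t read_adc(void);

/* Gain 'extra_bits' of resolution by summing 4^extra_bits samples and
   right-shifting the sum by extra_bits (decimation). */
uint32_t oversample(uint8_t extra_bits)
{
    uint32_t sum = 0;
    uint32_t samples = 1UL << (2 * extra_bits);   /* 4^extra_bits */

    for (uint32_t i = 0; i < samples; i++)
        sum += read_adc();

    return sum >> extra_bits;   /* result is 12 + extra_bits bits wide */
}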

As long as your math is correct, you are not misleading the client.


RGV250

Hi Tim,
I see this a lot with multimeters etc.: they quote resolution and accuracy separately. This is from a Fluke 117; as you can see, the resolution is very fine but the accuracy specification allows for small differences.
[Attached image: resolution.jpg - Fluke 117 resolution and accuracy specifications]

Bob

John Drew

That's a very interesting article, Tim.
Re your first question, I notice some instruments distinguish between resolution and accuracy.
Unless you can measure against a standard across a range of readings, it's probably a stretch to claim 0.01 deg accuracy, but resolution is a different matter which you could legitimately claim.
Cheers
John

Bob and I were typing at the same time. Looks like we were saying similar things.

TimB

Thanks for the info. As I suspected, people are claiming resolution. To me the main aspect of a system to worry about is repeatability, not accuracy, as I can add accuracy with calibration, even if it requires many steps.

An example is the use of Class B, Class A, 1/3 DIN and 1/10 DIN probes: the only thing the class shows is that, out of the box, the probe is within x of the standard. I'm sure all they do is make a batch of platinum elements, check them and sort them according to how close they are. They are not intrinsically more stable or linear. You just gain the ability to swap probes and know they will match. Since any time I change a probe I should get it re-calibrated, it makes no odds what the class is, although I do prefer Class A.

My real issue now is that the system is so stable. I put my kit in calibration mode to show the highest resolution and the lowest digit hardly moves. When it does, it's due to external influences, like my hand being a few cm from the probe tip.
It will read, say, 19.04, then a little later 19.02, then 18.99.
Oversampling, doing the 'oversampling and decimation' thing, is not going to generate the intermediate digits. Also, the chip itself is limited in how often it will give a result; I think it's 16 Hz, although I may be wrong. It's doing many times more conversions internally due to the 50/60 Hz rejection.

From the Microchip app note: "However, another criterion for a successful enhancement of the resolution is that the input signal has to vary when sampled." In my case it doesn't.

I need to be able to interpolate on the fly between two readings based on very little data.

It's almost like I have to do loads of logging, then constantly look at the data over, say, the previous 5 seconds and find a trend, then insert values into the stream based on the trend and finally display it: introducing changes where there are none.


For example, I have these values:

19.00
19.00
19.00
19.00
19.20
19.20
19.50
19.50
19.50

I need either to draw an imaginary line through the data, work out the steps between the points and display them, or to use the previous data to generate a trend and display the in-between steps based on my projected trend line.
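
For what it's worth, a rough sketch in C of the first option (the names and fixed-point scaling are mine, just for illustration): hold the readings as hundredths of a degree and linearly interpolate the displayed value between the previous and current logged reading across the sample interval.

#include <stdint.h>

/* Readings in hundredths of a degree, e.g. 19.00 C -> 1900.
   t_num/t_den is how far we are through the sample interval (0..1),
   so the display can step in 0.01 increments between real samples. */
int32_t interp_display(int32_t prev, int32_t curr,
                       uint16_t t_num, uint16_t t_den)
{
    return prev + ((curr - prev) * (int32_t)t_num) / (int32_t)t_den;
}

/* Example: prev = 1900 (19.00), curr = 1920 (19.20), halfway through
   the interval: interp_display(1900, 1920, 1, 2) returns 1910 (19.10). */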

If there is any help on this I would really appreciate it.

Cheers

Tim

trastikata

Quote from: TimB on May 23, 2021, 01:41 PM
I need either to draw an imaginary line through the data, work out the steps between the points and display them, or to use the previous data to generate a trend and display the in-between steps based on my projected trend line.
If there is any help on this I would really appreciate it.

Any interpolation between the current and the previous data to generate intermediate steps will be less accurate than the current data.

The only correct software solution is oversampling and decimation. What is the chip you are using as a front-end for the TC, if it's not a secret - do you have digital control over its settings? Maybe you can increase the gain of the IA and use it to increase the resolution by limiting the range?

TimB


The chip is the MAX31865. The maximum sample rate is around 16 Hz. But as I said, and as is pointed out in the document you posted: "However, another criterion for a successful enhancement of the resolution is that the input signal has to vary when sampled."

I am hardly getting any changes, so I have to work over a long time range: either running a linear interpolation on a ring buffer of previous logs, or using previous data to essentially look ahead.

Maths is never my strong point, so I need to learn how to do cubic interpolation and run that on, say, a 32-sample ring buffer to make it a 64-entry ring, then sample that to display my result.

The issue is that I cannot step in 0.03 or 0.02 steps; it has to be 0.01 steps.


 

TimB


Actually, I find that if I just have a ring buffer and average it, it produces the results I want.  :o
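
For anyone following along, a minimal sketch in C of that kind of ring-buffer average (the buffer length and names are only illustrative; readings are held in hundredths of a degree):

#include <stdint.h>

#define RING_LEN 32                      /* illustrative buffer length */

static int32_t ring[RING_LEN];           /* assumed pre-filled before use */
static uint8_t ring_pos = 0;
static int32_t ring_sum = 0;

/* Push a new raw reading and return the running average of the
   last RING_LEN readings. */
int32_t ring_average(int32_t reading)
{
    ring_sum -= ring[ring_pos];          /* drop the oldest sample */
    ring[ring_pos] = reading;            /* store the new one */
    ring_sum += reading;
    ring_pos = (ring_pos + 1) % RING_LEN;
    return ring_sum / RING_LEN;
}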

trastikata

Quote from: TimB on May 23, 2021, 03:38 PM
I need to learn how to do cubic interpolation and run that on, say, a 32-sample ring buffer to make it a 64-entry ring, then sample that to display my result.

If I remember my math correctly, for cubic (spline) interpolation you need to set up and solve a large system of equations (including the first and second derivatives too). 32 points is way too much for PIC computation. I would say leave the spline interpolation.

Linear interpolation and extrapolation, especially across that many points, could give you a significant error compared to the accuracy you have now, but it is easy to implement.

Another way is using a weighted average, giving more weight to the newest readings in the circular buffer. For example, the oldest reading gets only a 0.005 weight and the newest gets, say, a 0.95 weight, but the sum of all the coefficients should be 1. There are fancier variations of filtering using weights, if required.
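
A rough sketch in C of that sort of weighting, using made-up fixed-point weights that sum to 1 (i.e. 256/256), with the newest readings weighted most heavily:

#include <stdint.h>

#define WLEN 8

/* Example weights in 1/256ths, oldest first; they total 256 (= 1.0). */
static const uint16_t weight[WLEN] = { 4, 8, 12, 20, 28, 40, 60, 84 };

/* buf[0] is the oldest reading, buf[WLEN-1] the newest,
   in hundredths of a degree. */
int32_t weighted_average(const int32_t buf[WLEN])
{
    int32_t acc = 0;
    for (uint8_t i = 0; i < WLEN; i++)
        acc += buf[i] * (int32_t)weight[i];
    return acc / 256;                    /* divide by the weight total */
}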

If your PIC has a Fixed Voltage Reference, it is noisy enough to serve as a random noise source, and a better solution might be to use it to generate and add random noise to the front-end ADC readings so that you can oversample and decimate. Remember, all you need is random noise, even 1 LSB, added to the original signal (a rough sketch in C follows these steps).
- Average, say, 512 internal ADC readings of VREF and call it BaseNoiseLevel.
- At each front-end ADC reading, take a new VREF reading and subtract it from BaseNoiseLevel - that is your random noise, generated using the internal ADC.
- Add that difference, as bits, to the front-end ADC reading to create the random noise.
- Oversample and decimate the "noisy" front-end ADC readings to get intermediate values.
- Because the PIC's Fixed Voltage Reference might drift, which would create a systematic error, it is advisable to refresh BaseNoiseLevel regularly.
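
Purely as an illustration of those steps (a sketch in C, not tested code; read_fvr_adc() and read_frontend() are placeholder names for the real internal-ADC and front-end read routines):

#include <stdint.h>

/* Placeholder hardware access - replace with the real driver calls. */
extern uint16_t read_fvr_adc(void);    /* internal ADC reading of the FVR  */
extern int32_t  read_frontend(void);   /* front-end (e.g. RTD chip) reading */

static int32_t base_noise_level;

/* Average 512 internal ADC readings of VREF to get BaseNoiseLevel.
   Call at start-up and refresh regularly to track FVR drift. */
void refresh_base_noise_level(void)
{
    int32_t sum = 0;
    for (uint16_t i = 0; i < 512; i++)
        sum += read_fvr_adc();
    base_noise_level = sum / 512;
}

/* One dithered sample: the difference between BaseNoiseLevel and a fresh
   VREF reading is the random noise added to the front-end reading. */
int32_t dithered_sample(void)
{
    int32_t noise = base_noise_level - (int32_t)read_fvr_adc();
    return read_frontend() + noise;
}

/* Oversample and decimate the dithered samples: to gain extra_bits of
   resolution, sum 4^extra_bits samples and right-shift by extra_bits. */
int32_t oversampled_reading(uint8_t extra_bits)
{
    int32_t sum = 0;
    uint32_t n = 1UL << (2 * extra_bits);
    for (uint32_t i = 0; i < n; i++)
        sum += dithered_sample();
    return sum >> extra_bits;
}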