Thermocouple and RTD linearisation question

On Fri, 4 Oct 2019 08:56:09 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 03/10/2019 22:31, John Larkin wrote:
On Thu, 3 Oct 2019 10:56:23 +0100, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

On 03/10/2019 03:53, Bill Sloman wrote:
On Thursday, October 3, 2019 at 2:50:07 AM UTC+10, jla...@highlandsniptechnology.com wrote:

In real life, 3rd order mostly works.

In John Larkin's experience, this may be true. It probably doesn't generalise.

He has a point in this instance though. There is little evidence that
the N=10 solutions used by the ITS-1990 are any better than N=3 or N=4.

Evaluate the individual terms for T=-270 in the low range polynomial to
see what I mean. I am old school where polynomials are concerned: I like
to see nice, well-behaved convergence towards a solution.

Put another way, you should be worried when d[n]/d[n+1] < 270.

I reckon d[n] for n>5 are not improving things in any meaningful sense
but have to be included because of the way the fitting was done.

FWIW, Excel charting can get plausible-looking fits out to N=6 (which is
as far as it permits users to go), in the sense that the resulting
coefficients d[n] tend towards zero faster than (1/270)^n.

I am mystified how they managed to calibrate this at all when there are
so few well behaved triple points available for precision spot checks.

Commercial thermocouples are usually specified for 0.5 to 5 C
accuracy. Numbers like +-2C +-0.75% are typical. Ref junction sensing
and amplifier errors add to that. We're dealing with microvolts.

In real life, reasonably close to room temp, they are usually a lot
better than their specs.

Table lookup and interpolation is easy. The numbers are available in
tables.
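
(For illustration, a minimal sketch of the lookup-and-interpolate step in
Python. The table values here are rounded type-T numbers quoted from
memory; take the real ones from the NIST tables before trusting them.)

    import bisect

    # (EMF in microvolts, temperature in degC); values are illustrative.
    TABLE_UV = [-5603.0, -3379.0, 0.0, 4279.0, 9288.0, 14862.0, 20872.0]
    TABLE_C  = [-200.0,  -100.0,  0.0, 100.0,  200.0,  300.0,   400.0]

    def uv_to_degc(uv):
        """Convert EMF to temperature by linear interpolation in the table."""
        i = bisect.bisect_right(TABLE_UV, uv) - 1
        i = max(0, min(i, len(TABLE_UV) - 2))   # clamp to the table ends
        frac = (uv - TABLE_UV[i]) / (TABLE_UV[i + 1] - TABLE_UV[i])
        return TABLE_C[i] + frac * (TABLE_C[i + 1] - TABLE_C[i])

A real instrument uses a much finer table, which shrinks the
interpolation error between entries.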

That is probably why no-one has noticed that some of the N=10 fitted
polynomial coefficients in that standards document are complete crap.

Do you happen to have a table for type K? I'd be interested in doing a
proper polynomial fit to what the correct result ought to look like.

Here's a type T table, voltage to temperature. The NMR people like T
because it covers their range well. I derived this from the NIST
polynomials with a PowerBasic program, generating a file for a 68K
cross assembler.

https://www.dropbox.com/s/ftlmjrqt27rp529/L350_Type_T.txt?dl=0

I probably have a type K somewhere. We've done every known type.

I recall doing the RTD tables by typing the numbers from an Omega
handbook. There aren't many points for the ref junction correction.




--

John Larkin Highland Technology, Inc

lunatic fringe electronics
 
On Fri, 4 Oct 2019 04:05:25 -0700 (PDT), whit3rd <whit3rd@gmail.com>
wrote:

On Thursday, October 3, 2019 at 7:24:43 PM UTC-7, jla...@highlandsniptechnology.com wrote:
On Thu, 3 Oct 2019 17:45:50 -0700 (PDT), whit3rd <whit3rd@gmail.com
wrote:

On Thursday, October 3, 2019 at 4:35:56 AM UTC-7, jla...@highlandsniptechnology.com wrote:

whit3rd <whit3rd@gmail.com> wrote:

On Wednesday, October 2, 2019 at 3:08:04 PM UTC-7, John Larkin wrote:

[about thermocouple reference junctions with controlled temperature]

If you heat the reference junction, all the incoming thermocouple pair
transitions to copper must be at the same temp. The heater creates
thermal gradients.

Why heat a block, and control its temperature, and try to keep all the
tc-to-copper junctions at that temp, when you could just leave the
block at ambient and measure it?

Because an uninsulated block might be subject to thermal gradients from any
and all exterior sources. Uncontrolled variables like that are what
the design is intended to eliminate or minimize.

Once it's insulated, heating is inexpensive.

Not if you have to heat the whole world to the same temperature, which
is what you'll need to do to eliminate gradients and heat flow along
the wires.

You insulate so you do NOT heat the whole world; that's why you use
insulation. It has insulating properties. This never 'eliminates' heat
flow, just limits it to within your tolerances. Gradients on the wire are
why you actively heat, of course. So, that's solved already.

How can you keep the interior of the box at a constant temperature
without a sensor? What's the advantage of then adding heaters and
power supplies and power amps, other than avoiding a little math?

Few if any boxes with ADC capabilities are lacking in power supplies, and the
heater requirements are so crude, you could even use AC. No 'added' power supply.

How many heater watts for, say, a 16 channel t/c acquisition system?


Every semiconductor junction is a sensor, and it only takes an error
amplifier, and one calibration trial, to find the thermal setpoint.

The advantage would be lessened computation and conversion requirements.

"Lessened computation" only appeals to the math-phobic. I'd rather
have a half page of code, than design a thermally insulated, heated
junction box, and design the heater controller.

Our t/c products allow customers to have their own reference junction
location and configuration, which they sometimes use. Sometimes it's
literally a bump in a cable. We only ask them to add an RTD out there,
which we will process. They seem to like that. If we asked them to
heat the junction box to an accurate temperature, they would
rightfully think us insane.

A microprocessor doesn't mind doing math "repeatedly."

Or any other unnecessary task. Angry Birds doesn't annoy a cellphone.

How would you propose to measure the t/c voltage and convert to
temperature without a processor?

This one has the ref junction RTD on the same PC board as everything
else.

https://www.dropbox.com/s/noapeoyk10iayoz/Chimera_Ref_Junction.JPG?raw=1



--

John Larkin Highland Technology, Inc

lunatic fringe electronics
 
On Friday, October 4, 2019 at 9:40:39 AM UTC-7, jla...@highlandsniptechnology.com wrote:

> How many heater watts for, say, a 16 channel t/c acquisition system?

Three.

> "Lessened computation" only appeals to the math-phobic.

Not true; roundoff errors for things like high-order polynomials are NOT
minor concerns. I'm not math-phobic, so I often propagate an error estimate
through the formulae. The maybe-four-digit accuracy of a thermocouple
requires the coefficients to be specified to seven or eight decimal places.
It's a tad ugly.

I'd rather have a half page of code, than design a thermally insulated, heated
junction box, and design the heater controller.

The polynomial coefficients from NIST had some typing errors; you might want
to review those 'half page of code' items after looking here:

<https://srdata.nist.gov/its90/corrections/corrections.html>

Your anguish at putting a sock over the thing is noted.
Anguish at a design of electronics... well, this is a support group
for that, we'll all help.
 
On 10/2/19 6:07 PM, John Larkin wrote:
On Wed, 2 Oct 2019 13:36:09 -0700 (PDT), whit3rd <whit3rd@gmail.com
wrote:

On Wednesday, October 2, 2019 at 7:55:06 AM UTC-7, jla...@highlandsniptechnology.com wrote:
On Wed, 02 Oct 2019 14:45:49 +0100, Peter <nospam@nospam9876.com
wrote:

I am building a board with an ADC on it which is to measure these two
sensor types.

The tricky bit is the linearisation.

The big error source is usually the reference junction.

It doesn't need to be. If your case is always cool, the
reference junction can be an insulated glob with a heater
that always stays at one setpoint. That cuts out half the calculation
overhead, and all it takes is a thermostat (very simple electronics).

It isn't a surface-mount off-the-shelf jellybean, though. And it takes a few seconds
to come on-line.

If you heat the reference junction, all the incoming thermocouple pair
transitions to copper must be at the same temp. The heater creates
thermal gradients.

And the copper has to be the same. Different copper wire alloys can
easily have 100-nV/K TC slopes.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 10/2/19 12:37 PM, Martin Brown wrote:
On 02/10/2019 17:04, Peter wrote:

  Martin Brown <'''newspam'''@nezumi.demon.co.uk> wrote:

Most likely these days just store the coefficients.

I can find the polynomial coefficients for EJKRST here

https://paginas.fe.up.pt/saic/Docencia/ci_lem/Ficheiros/an043.pdf

for both directions. I just need them for B and N.

Depending on the range of temperatures your sensor is expected to
encounter, you can choose the right coefficients. How many you need
depends on how accurate you want the calibration to be.

I am trying to support the full documented temp range for each type.

However I have had no luck yet finding a *single* resistance to
temperature equation for the RTD.

It won't hurt much if you use the high range polynomial for temperatures
a little below zero or the low range one for room temperatures. It
really matters which you use when you get to very hot or very cold. The
two range calibration methods should agree pretty well in their overlap.

Providing a bit of hysteresis so you only swap to high range at +30C and
to low range at -10C would be a reasonable compromise. I haven't checked
what maximum error that would incur (you ought to though).
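
(A sketch of that hysteresis in Python, with the thresholds suggested
above; poly_low and poly_high stand for the two coefficient sets and are
hypothetical names.)

    def make_range_selector():
        """Swap fit ranges with hysteresis: go high above +30 C, go low
        below -10 C, otherwise stay on whichever range was last used."""
        state = {"range": "low"}
        def select(t_estimate_degc):
            if t_estimate_degc > 30.0:
                state["range"] = "high"
            elif t_estimate_degc < -10.0:
                state["range"] = "low"
            return state["range"]   # caller picks poly_low or poly_high
        return select

The point of the hysteresis is just to avoid chattering between two
polynomials that disagree slightly in the overlap region.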

Tables can be found for all these so one could generate a lookup
table.

I know that in principle one can generate a polynomial for almost any
curve, and these curves being monotonic, it is even easier. If you
want to fit 10 points exactly, you need a polynomial with 10 (11?)
terms. How to do this, I don't know, but clearly it is well known.

There is an unfortunate tendency for engineers to overfit their
calibration data and get a polynomial that fits all their calibration
points exactly and oscillates wildly at all points in between.

Cubic splines are the bomb for this sort of job.
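
(A minimal sketch with SciPy's interpolating spline; table_t and table_v
stand for columns lifted from the published tables, and the sine is only
a stand-in curve.)

    import numpy as np
    from scipy.interpolate import CubicSpline

    table_t = np.linspace(-200.0, 400.0, 61)    # assumed table, degC
    table_v = np.sin(table_t / 400.0)           # stand-in for the EMF column

    v_of_t = CubicSpline(table_t, table_v)      # passes through every point
    t_fine = np.linspace(-200.0, 400.0, 1001)
    v_fine = v_of_t(t_fine)                     # smooth between the knots

Unlike a single high-order polynomial, the spline cannot oscillate wildly
between the table points.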

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Fri, 4 Oct 2019 12:18:47 -0700 (PDT), whit3rd <whit3rd@gmail.com>
wrote:

On Friday, October 4, 2019 at 9:40:39 AM UTC-7, jla...@highlandsniptechnology.com wrote:

How many heater watts for, say, a 16 channel t/c acquisition system?

Three.

"Lessened computation" only appeals to the math-phobic.

Not true; roundoff errors for things like high-order polynomials are NOT
minor concerns. I'm not math-phobic, so I often propagate an error estimate
through the formulae. The maybe-four-digit accuracy of a thermocouple
requires the coefficients to be specified to seven or eight decimal places.
It's a tad ugly.

We almost always use table lookup and interpolation at runtime. Some of
our uPs don't have hardware float.

I'd rather have a half page of code, than design a thermally insulated, heated
junction box, and design the heater controller.

The polynomial coefficients from NIST had some typing errors; you might want
to review those 'half page of code' items after looking here:

https://srdata.nist.gov/its90/corrections/corrections.html

Your anguish at putting a sock over the thing is noted.
Anguish at a design of electronics... well, this is a support group
for that, we'll all help.

Anguish? I don't recall much anguish. Our thermocouple and RTD gadgets
(both acquisition and simulation) all work.

Probably the worst parts were in the RTD simulators. It's nontrivial,
sometimes a real PITA, to accurately simulate a floating resistor.
Some people use a lot of real resistors and relays, but that has
issues too.
 
On Fri, 4 Oct 2019 15:27:24 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 10/2/19 6:07 PM, John Larkin wrote:
On Wed, 2 Oct 2019 13:36:09 -0700 (PDT), whit3rd <whit3rd@gmail.com
wrote:

On Wednesday, October 2, 2019 at 7:55:06 AM UTC-7, jla...@highlandsniptechnology.com wrote:
On Wed, 02 Oct 2019 14:45:49 +0100, Peter <nospam@nospam9876.com
wrote:

I am building a board with an ADC on it which is to measure these two
sensor types.

The tricky bit is the linearisation.

The big error source is usually the reference junction.

It doesn't need to be. If your case is always cool, the
reference junction can be an insulated glob with a heater
that always stays at one setpoint. That cuts out half the calculation
overhead, and all it takes is a thermostat (very simple electronics).

It isn't a surface-mount off-the-shelf jellybean, though. And it takes a few seconds
to come on-line.

If you heat the reference junction, all the incoming thermocouple pair
transitions to copper must be at the same temp. The heater creates
thermal gradients.


And the copper has to be the same. Different copper wire alloys can
easily have 100-nV/K TC slopes.

I didn't really want to know that.
 
On 10/4/19 3:18 PM, whit3rd wrote:
On Friday, October 4, 2019 at 9:40:39 AM UTC-7, jla...@highlandsniptechnology.com wrote:

How many heater watts for, say, a 16 channel t/c acquisition system?

Three.

"Lessened computation" only appeals to the math-phobic.

Not true; roundoff errors for things like high-order polynomials are NOT
minor concerns.

It's only a problem if you're forming the polynomials as sums of powers.

You can approximate x**n using a polynomial of order n-2 to a relative
accuracy of about 2**-n. This is a cute result that follows directly from
the explicit formula for Chebyshev polynomials--they have a maximum
amplitude of 1, and the leading coefficient of Tn is 2**(n-1):

Tn(x) = 2**(n-1) x**n + (terms of order n-2 and lower)

so x**n = Tn(x)/2**(n-1) + (terms of order n-2 and lower).

Now |Tn(x)| <= 1, so you can approximate x**n by a polynomial of order
n-2 to within a relative error of 2**(1-n). In other words, the basis set
{x**0...x**N} itself is horribly ill-conditioned for large N.
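
(You can watch that happen numerically; a sketch with NumPy's Chebyshev
helpers, dropping the top Chebyshev term of x**10:)

    import numpy as np
    from numpy.polynomial import chebyshev as C

    n = 10
    c = C.poly2cheb([0.0] * n + [1.0])   # Chebyshev expansion of x**n (exact)
    trunc = c.copy()
    trunc[n] = 0.0                       # drop the T_n term, leaving order n-2
    x = np.linspace(-1.0, 1.0, 10001)
    err = np.max(np.abs(C.chebval(x, c) - C.chebval(x, trunc)))
    print(err, 2.0 ** (1 - n))           # both come out 1/512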

Long ago I wrote a program to generate polynomial and rational
approximations based on the Chebyshev recurrence. (I didn't invent the
idea, I found it in a book.)

For an analytically-known function, you sample it on the Chebyshev
abscissae and perform an N-point FFT. That gets you the Chebyshev
coefficients up to order N-1.
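
(In modern NumPy that whole sample-and-transform step is one call; a
sketch, with exp as the stand-in function:)

    import numpy as np
    from numpy.polynomial import chebyshev as C

    coeffs = C.chebinterpolate(np.exp, 15)  # c_0..c_15 from Chebyshev points
    x = np.linspace(-1.0, 1.0, 1001)
    print(np.max(np.abs(C.chebval(x, coeffs) - np.exp(x))))  # near roundoff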

The key benefit of this is that the Chebyshev polynomials are
orthonormal, so you can immediately see the effect of truncating the big
long Chebyshev expansion: the worst-case error is less than the sum of
the absolute values of the terms you neglect.

You can also use the Chebyshev orthogonality relation to transform an
Nth order polynomial to an Nth order rational function, which is often
better-behaved, especially with respect to oscillations between fit points.

You make the formal equation

  sum_{n=0 to N} a_n T_n(x)  =  [ sum_{m=0 to M} b_m T_m(x) ] / [ sum_{k=0 to N-M} c_k T_k(x) ]

where c_0 = 1.

You then multiply through by the denominator and recursively apply the
Chebyshev orthogonality condition

  integral_{-1 to 1} (2/pi) T_m(x) T_n(x)/sqrt(1 - x**2) dx  =  1 (n = m), 0 (otherwise)

and that gets you the b_m and c_k. (You don't have to compute the
integral because the orthogonality condition tells you that you can
ignore most of the contributions, and how big the others are. IIRC you
wind up with a tridiagonal matrix, and that's super easy to solve.)

You can do the same thing with reasonably smooth experimental data, e.g.
thermocouple coefficients, if you use lots more than N points and use a
least-squares cubic spline fit to resample the data at the Chebyshev
abscissae.

Once you have the Chebyshev coefficients, you evaluate them using
Clenshaw's rule, which is nearly as efficient as the usual Horner method
for evaluating polynomials in explicit powers of x.
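
(A sketch of Clenshaw's rule for a Chebyshev series sum_k c[k]*T_k(x),
written out directly:)

    def clenshaw(x, c):
        """Evaluate sum_k c[k]*T_k(x) by the Clenshaw recurrence."""
        b1 = b2 = 0.0
        for ck in reversed(c[1:]):
            b1, b2 = 2.0 * x * b1 - b2 + ck, b1
        return x * b1 - b2 + c[0]

A couple of multiply-adds per term, about the same cost as Horner, and
numerically well behaved.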

Pretty simple, and works great.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Friday, October 4, 2019 at 10:58:51 AM UTC-4, jla...@highlandsniptechnology.com wrote:
On Fri, 4 Oct 2019 05:42:02 -0700 (PDT), George Herold
gherold@teachspin.com> wrote:

On Friday, October 4, 2019 at 3:56:22 AM UTC-4, Martin Brown wrote:
On 03/10/2019 22:31, John Larkin wrote:
On Thu, 3 Oct 2019 10:56:23 +0100, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

On 03/10/2019 03:53, Bill Sloman wrote:
On Thursday, October 3, 2019 at 2:50:07 AM UTC+10, jla...@highlandsniptechnology.com wrote:

In real life, 3rd order mostly works.

In John Larkin's experience, this may be true. It probably doesn't generalise.

He has a point in this instance though. There is little evidence that
the N=10 solutions used by the ITS-1990 are any better than N=3 or N=4.

Evaluate the individual terms for T=-270 in the low range polynomial to
see what I mean. I am old school where polynomials are concerned: I like
to see nice, well-behaved convergence towards a solution.

Put another way, you should be worried when d[n]/d[n+1] < 270.

I reckon d[n] for n>5 are not improving things in any meaningful sense
but have to be included because of the way the fitting was done.

FWIW, Excel charting can get plausible-looking fits out to N=6 (which is
as far as it permits users to go), in the sense that the resulting
coefficients d[n] tend towards zero faster than (1/270)^n.

I am mystified how they managed to calibrate this at all when there are
so few well behaved triple points available for precision spot checks.

Commercial thermocouples are usually specified for 0.5 to 5 C
accuracy. Numbers like +-2C +-0.75% are typical. Ref junction sensing
and amplifier errors add to that. We're dealing with microvolts.

In real life, reasonably close to room temp, they are usually a lot
better than their specs.

Table lookup and interpolation is easy. The numbers are available in
tables.

That is probably why no-one has noticed that some of the N=10 fitted
polynomial coefficients in that standards document are complete crap.

Do you happen to have a table for type K? I'd be interested in doing a
proper polynomial fit to what the correct result ought to look like.

The older CRC handbooks had thermocouple tables.
(not sure about newer editions)
George H.

--
Regards,
Martin Brown


Omega has some nice stuff.

The tables do change now and then, every few decades maybe.

Note that there are two standard platinum RTD tables, the usual 385
curve and the pure-platinum lab grade 392.

It's easy to linearize a platinum (or nickel!) RTD in hardware, but
that's mostly unnecessary these days.
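
(For the math route, the usual starting point is the Callendar-Van Dusen
form; a sketch with the standard IEC 60751 "385" Pt100 coefficients:)

    # Callendar-Van Dusen with the IEC 60751 "385" platinum coefficients.
    R0 = 100.0          # ohms at 0 C
    A = 3.9083e-3
    B = -5.775e-7
    C = -4.183e-12      # the cubic term applies only below 0 C

    def rtd_ohms(t_degc):
        """Pt100 resistance at t_degc; one formula, C forced to 0 above 0 C."""
        c = C if t_degc < 0.0 else 0.0
        return R0 * (1.0 + A * t_degc + B * t_degc ** 2
                     + c * (t_degc - 100.0) * t_degc ** 3)

Going the other way, ohms to degrees, is just the quadratic formula above
0 C; below 0 C you iterate or fall back to a table.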



--

John Larkin Highland Technology, Inc

lunatic fringe electronics

Many years ago I went through this linearization exercise for an application using an embedded processor. While thumbing through an OMEGA temperature products handbook, I noticed the linearization tables for their thermocouples. I bet that they are still available, and also for RTDs. IMHO, a lookup table is the easiest & fastest way to go.
 
On 2019-10-03, Peter <nospam@nospam9876.com> wrote:
This is really interesting.

I too noticed that the last coefficients, around E-30, are bound to be
just silly. You would need to use double floats for them and almost
anything done with that is defying physical reality :)

The last terms are probably covering for rounding error in the
regression computation, and the few before that covering for imprecision
in the input data.

--
When I tried casting out nines I made a hash of it.
 
On 5/10/19 5:32 am, Phil Hobbs wrote:
On 10/2/19 12:37 PM, Martin Brown wrote:
On 02/10/2019 17:04, Peter wrote:

  Martin Brown <'''newspam'''@nezumi.demon.co.uk> wrote:

Most likely these days just store the coefficients.

I can find the polynomial coefficients for EJKRST here

https://paginas.fe.up.pt/saic/Docencia/ci_lem/Ficheiros/an043.pdf

for both directions. I just need them for B and N.

Depending on the range of temperatures your sensor is expected to
encounter, you can choose the right coefficients. How many you need
depends on how accurate you want the calibration to be.

I am trying to support the full documented temp range for each type.

However I have had no luck yet finding a *single* resistance to
temperature equation for the RTD.

It won't hurt much if you use the high range polynomial for
temperatures a little below zero or the low range one for room
temperatures. It really matters which you use when you get to very hot
or very cold. The two range calibration methods should agree pretty
well in their overlap.

Providing a bit of hysteresis so you only swap to high range at +30C
and to low range at -10C would be a reasonable compromise. I haven't
checked what maximum error that would incur (you ought to though).

Tables can be found for all these so one could generate a lookup
table.

I know that in principle one can generate a polynomial for almost any
curve, and these curves being monotonic, it is even easier. If you
want to fit 10 points exactly, you need a polynomial with 10 (11?)
terms. How to do this, I don't know, but clearly it is well known.

There is an unfortunate tendency for engineers to overfit their
calibration data and get a polynomial that fits all their calibration
points exactly and oscillates wildly at all points in between.


Cubic splines are the bomb for this sort of job.

Exactly what I said two days ago.

Use interpolating splines, however, since they pass through the control
points, not B-splines (despite their advantage of smooth derivatives).

CH
 
On Friday, October 4, 2019 at 4:32:05 PM UTC-4, John Larkin wrote:
On Fri, 4 Oct 2019 15:27:24 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 10/2/19 6:07 PM, John Larkin wrote:
On Wed, 2 Oct 2019 13:36:09 -0700 (PDT), whit3rd <whit3rd@gmail.com
wrote:

On Wednesday, October 2, 2019 at 7:55:06 AM UTC-7, jla...@highlandsniptechnology.com wrote:
On Wed, 02 Oct 2019 14:45:49 +0100, Peter <nospam@nospam9876.com
wrote:

I am building a board with an ADC on it which is to measure these two
sensor types.

The tricky bit is the linearisation.

The big error source is usually the reference junction.

It doesn't need to be. If your case is always cool, the
reference junction can be an insulated glob with a heater
that always stays at one setpoint. That cuts out half the calculation
overhead, and all it takes is a thermostat (very simple electronics).

It isn't a surface-mount off-the-shelf jellybean, though. And it takes a few seconds
to come on-line.

If you heat the reference junction, all the incoming thermocouple pair
transitions to copper must be at the same temp. The heater creates
thermal gradients.


And the copper has to be the same. Different copper wire alloys can
easily have 100-nV/K TC slopes.

I didn't really want to know that.

That's the point of the isothermal connections. As long as everything is isothermal other than the junction you are measuring, it doesn't matter what the other wires are; it all falls out of the equations. So for all the joints other than the isothermal connection, either keep them the same materials or keep them at the same temperatures.

--

Rick C.

+ Get 2,000 miles of free Supercharging
+ Tesla referral code - https://ts.la/richard11209
 
Martin Brown wrote:

He has a point in this instance though. There is little evidence that
the N=10 solutions used by the ITS-1990 are any better than N=3 or N=4.

Just use cubic splines instead of high-order polynomials. Otherwise, if
you really have to, at least choose the nodes properly (Chebyshev, if
possible). <greta mode on>A set of equidistant points.
This is all wrong. How dare you.</>

Best regards, Piotr
 
On 05/10/2019 06:52, Jasen Betts wrote:
On 2019-10-03, Peter <nospam@nospam9876.com> wrote:
This is really interesting.

I too noticed that the last coefficients, around E-30, are bound to be
just silly. You would need to use double floats for them and almost
anything done with that is defying physical reality :)

The last terms are probably covering for rounding error in the
regression computation, and the few before that covering for imprecision
in the input data.

Some of them are not even convergent over their claimed range of
validity! I find it rather worrying that they used N=10 polynomial
fitting combined with a numerical algorithm that was incapable of
delivering accurate results.

Excel's own charting polynomial works OK on the same dataset out to N=6.

Here is the result of applying that more stable polynomial fit, N=4 and
forced through zero, to the table of data in the link given elsewhere:

y = 3.0828574638606400E-11 x^4 - 9.1070253343747400E-08 x^3
  + 3.1204293829190100E-05 x^2 + 3.9589043715018600E-02 x
R² = 9.9999981693967300E-01

It remains numerically stable for N=5 and N=6 but makes very little
improvement to the quality of the fit.

Note that the above coefficients are decreasing fast enough that the
resulting series converges to a sensible answer even at T=-270:
successive terms are ~1000x smaller, so the non-linearity corrections
are convergent and sensible.

ITS-1990 table gives

emf units: mV
range: -270.000, 0.000, 10
0.000000000000E+00
0.394501280250E-01
0.236223735980E-04
-0.328589067840E-06
-0.499048287770E-08
-0.675090591730E-10
-0.574103274280E-12
-0.310888728940E-14
-0.104516093650E-16
-0.198892668780E-19
-0.163226974860E-22

The above are clearly total gibberish when T=-270!
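
(A quick check of that claim, in Python, using the coefficients exactly
as listed above:)

    d = [ 0.000000000000E+00,  0.394501280250E-01,  0.236223735980E-04,
         -0.328589067840E-06, -0.499048287770E-08, -0.675090591730E-10,
         -0.574103274280E-12, -0.310888728940E-14, -0.104516093650E-16,
         -0.198892668780E-19, -0.163226974860E-22]

    T = -270.0
    for n, dn in enumerate(d):
        print(n, dn * T ** n)   # individual terms reach a few hundred mV
    print(sum(dn * T ** n for n, dn in enumerate(d)))  # yet ~ -6.45 mV total

The high-order terms are individually enormous and nearly cancel, which
is exactly the sort of catastrophic cancellation complained about above.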

--
Regards,
Martin Brown
 
This app note
http://www.ti.com/lit/an/sbaa189/sbaa189.pdf
suggests that simple linear interpolation with just 32 segments gets
you to within about 0.05C.
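
(The scheme, sketched: pick 33 break points across the span, store the
end points, and interpolate linearly between them. The curve here is a
made-up monotonic stand-in, not real thermocouple data.)

    import numpy as np

    def true_t(uv):                     # hypothetical EMF-to-degC curve
        return 25.0 * np.asarray(uv) ** 0.97

    uv_knots = np.linspace(0.0, 20.0, 33)        # 32 segments
    t_knots = true_t(uv_knots)

    uv = np.linspace(0.0, 20.0, 5001)
    t_pwl = np.interp(uv, uv_knots, t_knots)     # piecewise-linear lookup
    print(np.max(np.abs(t_pwl - true_t(uv))))    # worst-case segment error

Placing break points where the curvature is worst, rather than evenly,
buys extra accuracy for the same table size.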
 
On 2019-10-05 09:08, Martin Brown wrote:
On 05/10/2019 06:52, Jasen Betts wrote:
On 2019-10-03, Peter <nospam@nospam9876.com> wrote:
This is really interesting.

I too noticed that the last coefficients, around E-30, are bound to be
just silly. You would need to use double floats for them and almost
anything done with that is defying physical reality :)

The last terms are probably covering for rounding error in the
regression computation, and the few before that covering for imprecision
in the input data.

Some of them are not even convergent over their claimed range of
validity! I find it rather worrying that they used N=10 polynomial
fitting combined with a numerical algorithm that was incapable of
delivering accurate results.

Excel's own charting polynomial works OK on the same dataset out to N=6.

Here is the result of applying that more stable polynomial fit, N=4 and
forced through zero, to the table of data in the link given elsewhere:

y = 3.0828574638606400E-11 x^4 - 9.1070253343747400E-08 x^3
  + 3.1204293829190100E-05 x^2 + 3.9589043715018600E-02 x
R² = 9.9999981693967300E-01

It remains numerically stable for N=5 and N=6 but makes very little
improvement to the quality of the fit.

Note that the above coefficients are decreasing fast enough that the
resulting series converges to a sensible answer even at T=-270:
successive terms are ~1000x smaller, so the non-linearity corrections
are convergent and sensible.

ITS-1990 table gives

emf units: mV
range: -270.000, 0.000, 10
  0.000000000000E+00
  0.394501280250E-01
  0.236223735980E-04
 -0.328589067840E-06
 -0.499048287770E-08
 -0.675090591730E-10
 -0.574103274280E-12
 -0.310888728940E-14
 -0.104516093650E-16
 -0.198892668780E-19
 -0.163226974860E-22

The above are clearly total gibberish when T=-270!

It's just plain stupid to use a long polynomial in powers of T, on
account of the creeping linear dependence of the basis functions.
Search for "Hilbert matrix".

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 2019-10-05 02:05, Clifford Heath wrote:
On 5/10/19 5:32 am, Phil Hobbs wrote:
On 10/2/19 12:37 PM, Martin Brown wrote:
On 02/10/2019 17:04, Peter wrote:

  Martin Brown <'''newspam'''@nezumi.demon.co.uk> wrote:

Most likely these days just store the coefficients.

I can find the polynomial coefficients for EJKRST here

https://paginas.fe.up.pt/saic/Docencia/ci_lem/Ficheiros/an043.pdf

for both directions. I just need them for B and N.

Depending on the range of temperatures your sensor is expected to
encounter, you can choose the right coefficients. How many you need
depends on how accurate you want the calibration to be.

I am trying to support the full documented temp range for each type.

However I have had no luck yet finding a *single* resistance to
temperature equation for the RTD.

It won't hurt much if you use the high range polynomial for
temperatures a little below zero or the low range one for room
temperatures. It really matters which you use when you get to very
hot or very cold. The two range calibration methods should agree
pretty well in their overlap.

Providing a bit of hysteresis so you only swap to high range at +30C
and to low range at -10C would be a reasonable compromise. I haven't
checked what maximum error that would incur (you ought to though).

Tables can be found for all these so one could generate a lookup
table.

I know that in principle one can generate a polynomial for almost any
curve, and these curves being monotonic, it is even easier. If you
want to fit 10 points exactly, you need a polynomial with 10 (11?)
terms. How to do this, I don't know, but clearly it is well known.

There is an unfortunate tendency for engineers to overfit their
calibration data and get a polynomial that fits all their calibration
points exactly and oscillates wildly at all points in between.

Cubic splines are the bomb for this sort of job.

Exactly what I said two days ago.

Pure chronological snobbery. ;)

Use interpolating splines, however, since they pass through the control
points, not B-splines (despite their advantage of smooth derivatives).

CH

You want to use least-squares cubic splines to fit data, though--both X
and Y of each knot are fit parameters. That gives you a useful amount
of noise rejection without causing artifacts the way a long polynomial
does. Nelder-Mead works well for finding the best fit. I usually run
an interpolating spline through a subset of the data points for an
initial guess.
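
(A sketch of that recipe with SciPy: the knot x and y values are the free
parameters, Nelder-Mead minimizes the squared residuals, and an
interpolating guess through the data seeds it. The noisy sine is
stand-in data.)

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 200)
    y = np.sin(x) + 0.01 * rng.standard_normal(x.size)  # stand-in data

    n_knots = 8
    kx0 = np.linspace(0.0, 10.0, n_knots)
    ky0 = np.interp(kx0, x, y)            # initial guess through the data

    def cost(p):
        kx, ky = p[:n_knots], p[n_knots:]
        if np.any(np.diff(kx) <= 0.0):    # knots must stay strictly ordered
            return 1e9
        return float(np.sum((CubicSpline(kx, ky)(x) - y) ** 2))

    fit = minimize(cost, np.r_[kx0, ky0], method="Nelder-Mead",
                   options={"maxiter": 20000})

Eight knots, sixteen parameters: well within what Nelder-Mead handles
comfortably.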

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 6/10/19 4:28 am, Phil Hobbs wrote:
On 2019-10-05 02:05, Clifford Heath wrote:
On 5/10/19 5:32 am, Phil Hobbs wrote:
On 10/2/19 12:37 PM, Martin Brown wrote:
On 02/10/2019 17:04, Peter wrote:

  Martin Brown <'''newspam'''@nezumi.demon.co.uk> wrote:

Most likely these days just store the coefficients.

I can find the polynomial coefficients for EJKRST here

https://paginas.fe.up.pt/saic/Docencia/ci_lem/Ficheiros/an043.pdf

for both directions. I just need them for B and N.

Depending on the range of temperatures your sensor is expected to
encounter, you can choose the right coefficients. How many you need
depends on how accurate you want the calibration to be.

I am trying to support the full documented temp range for each type.

However I have had no luck yet finding a *single* resistance to
temperature equation for the RTD.

It won't hurt much if you use the high range polynomial for
temperatures a little below zero or the low range one for room
temperatures. It really matters which you use when you get to very
hot or very cold. The two range calibration methods should agree
pretty well in their overlap.

Providing a bit of hysteresis so you only swap to high range at +30C
and to low range at -10C would be a reasonable compromise. I haven't
checked what maximum error that would incur (you ought to though).

Tables can be found for all these so one could generate a lookup
table.

I know that in principle one can generate a polynomial for almost any
curve, and these curves being monotonic, it is even easier. If you
want to fit 10 points exactly, you need a polynomial with 10 (11?)
terms. How to do this, I don't know, but clearly it is well known.

There is an unfortunate tendency for engineers to overfit their
calibration data and get a polynomial that fits all their calibration
points exactly and oscillates wildly at all points in between.


Cubic splines are the bomb for this sort of job.

Exactly what I said two days ago.

Pure chronological snobbery. ;)


Use interpolating splines, however, since they pass through the control
points, not B-splines (despite their advantage of smooth derivatives).

CH

You want to use least-squares cubic splines to fit data,

Yes, when fitting measured data. But we were talking about
manufacturer's theoretical curves. By definition they are not noisy
data, they're supposed to be definitive. If you're not going to
calibrate to a better standard, your curves should hit all the points.

CH
 
On Sunday, October 6, 2019 at 8:56:59 AM UTC+11, Clifford Heath wrote:
On 6/10/19 4:28 am, Phil Hobbs wrote:
On 2019-10-05 02:05, Clifford Heath wrote:
On 5/10/19 5:32 am, Phil Hobbs wrote:
On 10/2/19 12:37 PM, Martin Brown wrote:
On 02/10/2019 17:04, Peter wrote:

  Martin Brown <'''newspam'''@nezumi.demon.co.uk> wrote:

Most likely these days just store the coefficients.

I can find the polynomial coefficients for EJKRST here

https://paginas.fe.up.pt/saic/Docencia/ci_lem/Ficheiros/an043.pdf

for both directions. I just need them for B and N.

Depending on the range of temperatures your sensor is expected to
encounter, you can choose the right coefficients. How many you need
depends on how accurate you want the calibration to be.

I am trying to support the full documented temp range for each type.

However I have had no luck yet finding a *single* resistance to
temperature equation for the RTD.

It won't hurt much if you use the high range polynomial for
temperatures a little below zero or the low range one for room
temperatures. It really matters which you use when you get to very
hot or very cold. The two range calibration methods should agree
pretty well in their overlap.

Providing a bit of hysteresis so you only swap to high range at +30C
and to low range at -10C would be a reasonable compromise. I haven't
checked what maximum error that would incur (you ought to though).

Tables can be found for all these so one could generate a lookup
table.

I know that in principle one can generate a polynomial for almost any
curve, and these curves being monotonic, it is even easier. If you
want to fit 10 points exactly, you need a polynomial with 10 (11?)
terms. How to do this, I don't know, but clearly it is well known.

There is an unfortunate tendency for engineers to overfit their
calibration data and get a polynomial that fits all their calibration
points exactly and oscillates wildly at all points in between.


Cubic splines are the bomb for this sort of job.

Exactly what I said two days ago.

Pure chronological snobbery. ;)


Use interpolating splines, however, since they pass through the control
points, not B-splines (despite their advantage of smooth derivatives).

CH

You want to use least-squares cubic splines to fit data,

Yes, when fitting measured data. But we were talking about
manufacturer's theoretical curves. By definition they are not noisy
data, they're supposed to be definitive. If you're not going to
calibrate to a better standard, your curves should hit all the points.

Sadly, they are defined by experimental data, which is always noisy.

And since they are printed numbers of finite length, there is also rounding error.

--
Bill Sloman, Sydney
 
On 05/10/2019 18:30, Phil Hobbs wrote:
On 2019-10-05 09:08, Martin Brown wrote:
On 05/10/2019 06:52, Jasen Betts wrote:
On 2019-10-03, Peter <nospam@nospam9876.com> wrote:
This is really interesting.

I too noticed that the last coefficients, around E-30, are bound to be
just silly. You would need to use double floats for them and almost
anything done with that is defying physical reality :)

The last terms are probably covering for rounding error in the
regression computation, and the few before that covering for imprecision
in the input data.

Some of them are not even convergent over their claimed range of
validity! I find it rather worrying that they used N=10 polynomial
fitting combined with a numerical algorithm that was incapable of
delivering accurate results.

Excel's own charting polynomial works OK on the same dataset out to N=6.

Here is the result of applying that more stable polynomial fit, N=4 and
forced through zero, to the table of data in the link given elsewhere:

y = 3.0828574638606400E-11 x^4 - 9.1070253343747400E-08 x^3
  + 3.1204293829190100E-05 x^2 + 3.9589043715018600E-02 x
R² = 9.9999981693967300E-01

It remains numerically stable for N=5 and N=6 but makes very little
improvement to the quality of the fit.

Note that the above coefficients are decreasing fast enough that the
resulting series converges to a sensible answer even at T=-270:
successive terms are ~1000x smaller, so the non-linearity corrections
are convergent and sensible.

ITS-1990 table gives

emf units: mV
range: -270.000, 0.000, 10
   0.000000000000E+00
   0.394501280250E-01
   0.236223735980E-04
  -0.328589067840E-06
  -0.499048287770E-08
  -0.675090591730E-10
  -0.574103274280E-12
  -0.310888728940E-14
  -0.104516093650E-16
  -0.198892668780E-19
  -0.163226974860E-22

The above are clearly total gibberish when T=-270!

It's just plain stupid to use a long polynomial in powers of T, on
account of the creeping linear dependence of the basis functions. Search
for "Hilbert matrix".

+1

I know that and you know that, but unfortunately many engineers do not.
A useful sanity check: if the high-order coefficients are not decreasing
faster than (1/MAXRANGE)^N, you are in serious trouble.

I am astonished that the ITS-1990 contains such complete and utter junk.
Even back then it was not beyond the wit of man to rescale the problem
onto -1 to 1 and then use Chebyshev polynomials or, better, rationals.

I suspect somehow we did the same course (or the same course material).

I have never been all that keen on splines for wide range calibration
since most of the systems I needed to calibrate had well behaved
deviations from ideal (at least if you did the calibration properly).
There were a very limited number of nearly monoisotopic elements to
choose from and we needed 6 sig fig calibration from 7amu to 250amu.

Actually, I was curious and just modelled it in Excel, using the tables
referenced elsewhere to determine the fit. I think what they did was
almost certainly a least-squares Chebyshev fit (which converges OK),
then converted the coefficients to engineering polynomials (which do
not). I can reproduce their curve shape almost exactly, except that my
early Gaussian-shaped misfit residual is centred on 145C rather than
169C. The rest of the curve is relatively well behaved apart from the
extreme end.

Because of the non-linear behaviour near absolute zero and near the
melting point, fitting a hockey-stick curve concentrates power in one of
the highest available polynomial coefficients, ~T6 in this case.

ANOVA shows there is almost no point in going beyond T5.

Term  Residual   Coeff_N        ChiSq          MaxError
V     raw data                  1445199.38      54.886
T0    Vres0      27.98034061    357227.8113     27.98034061
T1    Vres1      27.72757892       248.24666     0.821919528
T2    Vres2      -0.515081267       72.94431375  0.374740382
T3    Vres3      -0.312621842        2.039427885 0.077484025
T4    Vres4       0.020342186        1.760139883 0.094468194
T5    Vres2       0.038069482        0.774091766 0.056627856

where T0 = 1, T1 = x = (2T/1372) - 1, and T[n] = 2x*T[n-1] - T[n-2]

The T4 term isn't doing much good either. The worst case deviation can
be improved by deleting it but with a larger least squares error.
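
(That recurrence, turned into a least-squares fit; a sketch assuming
arrays T_degc and V_mv hold the published table. The straight line here
is only a placeholder for the real EMF column.)

    import numpy as np

    T_degc = np.linspace(0.0, 1372.0, 138)    # assumed table temperatures
    V_mv = 0.041 * T_degc                     # placeholder EMF data

    x = 2.0 * T_degc / 1372.0 - 1.0           # rescale onto [-1, 1]
    N = 5
    A = np.empty((x.size, N + 1))
    A[:, 0] = 1.0                             # T0
    A[:, 1] = x                               # T1
    for n in range(2, N + 1):
        A[:, n] = 2.0 * x * A[:, n - 1] - A[:, n - 2]  # 2x*T[n-1] - T[n-2]

    coeffs, *_ = np.linalg.lstsq(A, V_mv, rcond=None)
    print(coeffs, np.max(np.abs(V_mv - A @ coeffs)))

A direct linear least-squares solve on the Chebyshev design matrix lands
on the true optimum, which an iterative solver may miss.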


If you push too hard and use N=10 this is what happens:

Term  Residual   Coeff_N        ChiSq          MaxError
V     raw data                  1445199.38      54.886
T0    Vres0      27.97517699    357230.2563     27.97517699
T1    Vres1      27.72827116       250.5769622   0.817448158
T2    Vres2      -0.526050181       72.76262183  0.374846308
T3    Vres3      -0.311791205        1.991300561 0.086330497
T4    Vres2       0.006474322        1.882873757 0.091531603
T5    Vres2       0.039344529        0.878821435 0.052193998
T6    Vres2      -0.029108975        0.149404482 0.030724305
T7    Vres2       0.003952056        0.127703384 0.033284386
T8    Vres2       0.00969461         0.096817906 0.025513741
T9    Vres2      -0.006133809        0.071203158 0.019940176
T10   Vres2       0.007501157        0.032705474 0.012619076

Excel's solver doesn't get the absolute optimum least-squares solution.

T7 through 10 offer no worthwhile improvement in fit whatsoever.

For some calibration purposes equal ripple is better.

--
Regards,
Martin Brown
 
