Thermocouple and RTD linearisation question

On 05/10/2019 22:56, Clifford Heath wrote:
On 6/10/19 4:28 am, Phil Hobbs wrote:
On 2019-10-05 02:05, Clifford Heath wrote:
On 5/10/19 5:32 am, Phil Hobbs wrote:
On 10/2/19 12:37 PM, Martin Brown wrote:
On 02/10/2019 17:04, Peter wrote:

  Martin Brown <'''newspam'''@nezumi.demon.co.uk> wrote:

Most likely these days just store the coefficients.

I can find the polynomial coefficients for types E, J, K, R, S and T here

https://paginas.fe.up.pt/saic/Docencia/ci_lem/Ficheiros/an043.pdf

for both directions. I just need them for B and N.

Depending on the range of temperatures your sensor is expected to
encounter, you can choose the right coefficients. How many you need
depends on how accurate you want the calibration to be.

I am trying to support the full documented temp range for each type.

However I have had no luck yet finding a *single* resistance to
temperature equation for the RTD.

It won't hurt much if you use the high range polynomial for
temperatures a little below zero or the low range one for room
temperatures. It really matters which you use when you get to very
hot or very cold. The two range calibration methods should agree
pretty well in their overlap.

Providing a bit of hysteresis, so you only swap to the high range at
+30C and to the low range at -10C, would be a reasonable compromise. I
haven't checked what maximum error that would incur (you ought to,
though).
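
Something like this, as a sketch in Python; the +30C/-10C thresholds
follow the suggestion above, and the coefficient sets are placeholders
rather than values from any standard:

    def poly(coeffs, x):
        """Evaluate sum(c[n] * x**n) by Horner's rule."""
        acc = 0.0
        for c in reversed(coeffs):
            acc = acc * x + c
        return acc

    LOW  = [0.0, 3.95e-2, 2.4e-5]   # placeholder low-range coefficients
    HIGH = [0.0, 3.94e-2, 3.1e-5]   # placeholder high-range coefficients

    class TwoRangeConverter:
        """Swap to the high range above +30 and back below -10, so
        readings hovering near the seam don't chatter between ranges."""
        def __init__(self):
            self.high = False

        def convert(self, x):
            t = poly(HIGH if self.high else LOW, x)
            if t > 30.0:
                self.high = True
            elif t < -10.0:
                self.high = False
            return t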

Tables can be found for all these so one could generate a lookup
table.

I know that in principle one can generate a polynomial for almost any
curve, and these curves being monotonic, it is even easier. If you
want to fit 10 points exactly, you need a polynomial with 10 terms
(degree 9). How to do this I don't know, but clearly it is well known.

That is an unfortunate tendency of engineers: to overfit their
calibration data and get a polynomial that fits all their calibration
points exactly but oscillates wildly at all points in between.


Cubic splines are the bomb for this sort of job.

Exactly what I said two days ago.

Pure chronological snobbery. ;)


Use interpolating splines, however, since they pass through the
control points; not B-splines (despite their advantage of smooth
derivatives).
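
A sketch in Python, assuming scipy is available; the knots are a few
approximate type K table values, just for illustration:

    import numpy as np
    from scipy.interpolate import CubicSpline

    # A handful of (temperature C, emf mV) pairs standing in for a
    # published thermocouple table; real tables are much denser.
    t_c  = np.array([-50.0, 0.0, 50.0, 100.0, 150.0, 200.0])
    e_mv = np.array([-1.889, 0.0, 2.023, 4.096, 6.138, 8.138])

    # CubicSpline is an interpolating spline: it passes through every
    # knot exactly, unlike a least-squares/smoothing spline or a
    # B-spline fit to noisy data.
    spline = CubicSpline(t_c, e_mv)
    print(spline(25.0))   # emf between the table knots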

CH

You want to use least-squares cubic splines to fit data.

Yes, when fitting measured data. But we were talking about
manufacturer's theoretical curves. By definition they are not noisy
data, they're supposed to be definitive. If you're not going to
calibrate to a better standard, your curves should hit all the points.

I'm not quite sure how they determined these tables, but they must
ultimately be referenced back to a handful of reliable triple-point
references. I did check the fit using Chebyshev polynomials on the
range -270 to 0, and the results suggest that the published
polynomials were derived as Chebyshev fits and then converted to
divergent power-series polynomials!

--
Regards,
Martin Brown
 
On Mon, 07 Oct 2019 17:02:00 +0100, Peter <nospam@nospam9876.com>
wrote:

May I now take this great and very educating discussion back to the
RTD topic?

Everywhere I look, I see the same Callendar van Dusen equations, but
with cryptic references to more accurate equations used by the
standards bodies.

Then we have the two RTD types: 385 and 392. The 385 curve seems to
be the commercial one and the 392 coefficient set seems to be a lab
standard. This covers that topic well:

http://www.acromag.com/wp-content/uploads/2019/06/RTD_Temperature_Measurement_917A.pdf

The first thing I don't get is which is supposed to be the exact
resistance-temperature relationship - the tables or the equations?

Presumably they are about the same. The polynomials were no doubt
derived from a finite number of point measurements. But then so were
the tables.


I reckon it must be the tables, so simply interpolating them would be
the safest way.

It's easy.

Otherwise, there are two forms of the CVD equation: one for below 0C
and one for above 0C. It's easy to select which one; you switch them
above/below 100 ohms.

Is it really the case that a simple cubic equation represents the
behaviour of the metal?

No, but it's close enough. Even 2nd order is pretty good in the
human-compatible temperature range. Super-accurate RTDs are expensive
and generally not practical; there are lots of other errors in
temperature measurement.
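
A sketch of that in Python for a 385-curve Pt100, using the IEC 60751
Callendar-Van Dusen coefficients; the Newton iteration below 0C is a
convenience of this sketch, not anyone's published method:

    import math

    R0 = 100.0                                   # ohms at 0C
    A, B, C = 3.9083e-3, -5.775e-7, -4.183e-12   # IEC 60751, 385 curve

    def r_from_t(t):
        """Resistance (ohms) from temperature (C), CVD equation."""
        if t >= 0.0:
            return R0 * (1 + A*t + B*t*t)
        return R0 * (1 + A*t + B*t*t + C*(t - 100.0)*t**3)

    def t_from_r(r):
        """Temperature (C) from resistance; branch at 100 ohms (0C)."""
        if r >= R0:
            # Above 0C the quadratic inverts in closed form.
            return (-A + math.sqrt(A*A - 4*B*(1 - r/R0))) / (2*B)
        # Below 0C there is no closed form; a few Newton steps do.
        t = (r/R0 - 1.0) / A                     # linear first guess
        for _ in range(5):
            f  = r_from_t(t) - r
            df = R0 * (A + 2*B*t + C*(4*t**3 - 300.0*t*t))
            t -= f / df
        return t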






--

John Larkin Highland Technology, Inc

lunatic fringe electronics
 
May I now take this great and very educating discussion back to the
RTD topic?

Everywhere I look, I see the same Callendar van Dusen equations, but
with cryptic references to more accurate equations used by the
standards bodies.

Then we have the two RTD types: 385 and 392. The 385 curve seems to
be the commercial one and the 392 coefficient set seems to be a lab
standard. This covers that topic well:

http://www.acromag.com/wp-content/uploads/2019/06/RTD_Temperature_Measurement_917A.pdf

The first thing I don't get is which is supposed to be the exact
resistance-temperature relationship - the tables or the equations?

I reckon it must be the tables, so simply interpolating them would be
the safest way.

Otherwise, there are two forms of the CVD equation: one for below 0C
and one for above 0C. It's easy to select which one; you switch them
above/below 100 ohms.

Is it really the case that a simple cubic equation represents the
behaviour of the metal?
 
On 07/10/2019 17:02, Peter wrote:
May I now take this great and very educating discussion back to the
RTD topic?

Everywhere I look, I see the same Callendar van Dusen equations, but
with cryptic references to more accurate equations used by the
standards bodies.

Then we have the two RTD types: 385 and 392. The 385 curve seems to
be the commercial one and the 392 coefficient set seems to be a lab
standard. This covers that topic well:

http://www.acromag.com/wp-content/uploads/2019/06/RTD_Temperature_Measurement_917A.pdf

The first thing I don't get is which is supposed to be the exact
resistance-temperature relationship - the tables or the equations?

I reckon it must be the tables, so simply interpolating them would be
the safest way.

The tables are derived from a master calibration done against however
many precisely defined reference temperatures the standards body uses.
Otherwise, there are two forms of the CVD equation: one for below 0C
and one for above 0C. It's easy to select which one; you switch them
above/below 100 ohms.

Is it really the case that a simple cubic equation represents the
behaviour of the metal?

The cubic is good enough for most practical purposes and if you intend
to use anything higher order you really need to know what you are doing
because some of the stuff in the reference documents is gibberish.

On 0 to 1372C, Type K:

linear      error +0.8C/-0.6C
quadratic   error +0.45C/-0.45C *
cubic       error +/-0.1C
10th order  error +/-0.05C
(that's a lot of extra work for no real gain in accuracy)

By a strange quirk of fate the residual error on the quadratic fit for
this is an almost perfect sine wave with zeroes at 0,686,1372. Fitting
that instead of a cubic term gets within 0.05C for the full range.
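
Fitting that quadratic-plus-sine model is still a linear least-squares
problem, since the sine term enters linearly; a sketch in Python, with
synthetic stand-in data where the real NIST type K table should be
loaded:

    import numpy as np

    # t in C over 0..1372, v = type K emf in mV (placeholder values;
    # substitute the real table).
    t = np.linspace(0.0, 1372.0, 200)
    v = 3.95e-2*t + 1.0e-5*t**2              # stand-in for the table

    # Basis: t, t^2, and one full sine cycle with zeros at 0, 686 and
    # 1372, matching the shape of the quadratic fit's residual.
    X = np.column_stack([t, t**2, np.sin(np.pi*t/686.0)])
    coef, *_ = np.linalg.lstsq(X, v, rcond=None)
    print(coef, np.abs(v - X @ coef).max())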

Interpolating on the tables is safe enough and is probably what a lot of
kit does since anything that relies on evaluating those crazy divergent
10th order polynomial fits would be doomed to fail spectacularly.
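
Linear interpolation on a monotonic table is a one-liner; a sketch
assuming the table has been loaded as parallel arrays (placeholder
values shown):

    import numpy as np

    mv    = np.array([0.0, 2.023, 4.096, 6.138, 8.138])   # emf, mV
    deg_c = np.array([0.0, 50.0, 100.0, 150.0, 200.0])    # temperature

    # Voltage-to-temperature: interpolate the inverse table directly,
    # which is safe because emf is monotonic in temperature.
    print(np.interp(3.000, mv, deg_c))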

--
Regards,
Martin Brown
 
On 10/7/19 5:21 AM, Martin Brown wrote:
On 05/10/2019 18:30, Phil Hobbs wrote:
On 2019-10-05 09:08, Martin Brown wrote:
On 05/10/2019 06:52, Jasen Betts wrote:
On 2019-10-03, Peter <nospam@nospam9876.com> wrote:
This is really interesting.

I too noticed that the last coefficients, around E-30, are bound to be
just silly. You would need to use double floats for them and almost
anything done with that is defying physical reality :)

The last terms are probably covering for rounding error in the
regression computation, and the few before that covering for
imprecision in the input data.

Some of them are not even convergent over their claimed range of
validity! I find it rather worrying that they used N=10 polynomial
fitting combined with a numerical algorithm that was incapable of
delivering accurate results.

Excel's own charting polynomial fit works OK on the same dataset out to N=6.

Here is the result of applying that more stable polynomial fit N=4
and forced through zero to the table of data in the link given
elsewhere:

y = 3.0828574638606400E-11*x^4 - 9.1070253343747400E-08*x^3
  + 3.1204293829190100E-05*x^2 + 3.9589043715018600E-02*x
R² = 9.9999981693967300E-01

It remains numerically stable for N=5 and N=6 but makes very little
improvement to the quality of the fit.

Note that the above coefficients are decreasing fast enough that the
resulting series converges to a sensible answer even at T=-270.
Successive terms are ~1000x smaller, so the non-linearity corrections
are convergent and sensible.

ITS-1990 table gives

emf units: mV
range: -270.000, 0.000, 10
   0.000000000000E+00
   0.394501280250E-01
   0.236223735980E-04
  -0.328589067840E-06
  -0.499048287770E-08
  -0.675090591730E-10
  -0.574103274280E-12
  -0.310888728940E-14
  -0.104516093650E-16
  -0.198892668780E-19
  -0.163226974860E-22

The above are clearly total gibberish when T=-270!
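
That is easy to verify term by term; a sketch in Python using the
coefficients quoted above:

    # Size of each term of the ITS-90 type K series (-270..0 C range)
    # at t = -270. In a well-behaved series successive terms would
    # shrink steadily; here the high-order terms grow to hundreds of
    # mV and must cancel to produce an answer of around -6.5 mV.
    b = [ 0.000000000000E+00,  0.394501280250E-01,  0.236223735980E-04,
         -0.328589067840E-06, -0.499048287770E-08, -0.675090591730E-10,
         -0.574103274280E-12, -0.310888728940E-14, -0.104516093650E-16,
         -0.198892668780E-19, -0.163226974860E-22]
    t = -270.0
    for n, bn in enumerate(b):
        print(n, bn * t**n)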

It's just plain stupid to use a long polynomial in powers of T, on
account of the creeping linear dependence of the basis functions.
Search for "Hilbert matrix".

+1

I know that and you know that but unfortunately engineers do not. A
useful sanity check is that if the high order coefficients are not
decreasing faster than (1/MAXRANGE)^N you are in serious trouble.

I am astonished that the ITS-1990 contains such complete and utter junk.
Even back then it was not beyond the wit of man to rescale the problem
onto -1 to 1 and then use Chebyshev polynomials or, better, rationals.
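
For what it's worth, numpy will do exactly that rescaling for you; a
sketch with placeholder data:

    import numpy as np

    t = np.linspace(-270.0, 0.0, 200)
    v = 3.95e-2*t + 2.4e-5*t**2        # placeholder for the real table

    # Chebyshev.fit maps the data interval onto [-1, 1] internally
    # (the `domain` attribute) and fits in the Chebyshev basis, which
    # stays well conditioned at high degree.
    fit = np.polynomial.Chebyshev.fit(t, v, deg=6)
    print(fit(-135.0))

    # Converting to a power-series polynomial undoes the good
    # conditioning -- apparently the step the published tables took.
    print(fit.convert(kind=np.polynomial.Polynomial))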

I suspect somehow we did the same course (or the same course material).

I have never been all that keen on splines for wide range calibration
since most of the systems I needed to calibrate had well behaved
deviations from ideal (at least if you did the calibration properly).
There were a very limited number of nearly monoisotopic elements to
choose from and we needed 6 sig fig calibration from 7amu to 250amu.

Actually I was curious and just modelled it in Excel, using the tables
referenced elsewhere to determine the fit, and I think what they did
was almost certainly a least-squares Chebyshev fit (which converges
OK) whose coefficients were then converted to engineering power-series
polynomials, which do not. I can reproduce their curve shape almost
exactly, except that my early Gaussian-shaped misfit residual is
centred on 145C rather than 169C. The rest of the curve is relatively
well behaved apart from the extreme end.

It stems from the non-linear behaviour near absolute zero and near the
melting point: fitting a hockey-stick curve concentrates power in one
of the highest available polynomial coefficients, ~T6 in this case.

ANOVA shows there is almost no point in going beyond T5.

        Coeff_N        ChiSq          MaxError
V  raw data            1445199.38     54.886
T0 Vres0   27.98034061    357227.8113    27.98034061
T1 Vres1   27.72757892    248.24666      0.821919528
T2 Vres2   -0.515081267   72.94431375    0.374740382
T3 Vres3   -0.312621842   2.039427885    0.077484025
T4 Vres4   0.020342186    1.760139883    0.094468194
T5 Vres5   0.038069482    0.774091766    0.056627856

where x = (2T/1372) - 1 and the Chebyshev terms follow T0 = 1, T1 = x,
T[n] = 2x*T[n-1] - T[n-2].

The T4 term isn't doing much good either. The worst case deviation can
be improved by deleting it but with a larger least squares error.
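
Evaluating a series in that basis is just the three-term recurrence; a
sketch in Python using the N=5 coefficients from the table above:

    def cheb_eval(coeffs, T, t_max=1372.0):
        """Evaluate sum(c[n]*T_n(x)) with x = 2*T/t_max - 1, using
        the recurrence T[n] = 2x*T[n-1] - T[n-2]."""
        x = 2.0*T/t_max - 1.0
        tn_m2, tn_m1 = 1.0, x                  # T0 and T1
        total = coeffs[0] + coeffs[1]*x
        for c in coeffs[2:]:
            tn = 2.0*x*tn_m1 - tn_m2
            total += c*tn
            tn_m2, tn_m1 = tn_m1, tn
        return total

    c = [27.98034061, 27.72757892, -0.515081267,
         -0.312621842, 0.020342186, 0.038069482]
    print(cheb_eval(c, 686.0))                 # emf in mV at mid-range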


If you push too hard and use N=10 this is what happens:

        Coeff_N        ChiSq          MaxError
V   raw data           1445199.38     54.886
T0  Vres0   27.97517699    357230.2563    27.97517699
T1  Vres1   27.72827116    250.5769622    0.817448158
T2  Vres2   -0.526050181   72.76262183    0.374846308
T3  Vres3   -0.311791205   1.991300561    0.086330497
T4  Vres4   0.006474322    1.882873757    0.091531603
T5  Vres5   0.039344529    0.878821435    0.052193998
T6  Vres6   -0.029108975   0.149404482    0.030724305
T7  Vres7   0.003952056    0.127703384    0.033284386
T8  Vres8   0.00969461     0.096817906    0.025513741
T9  Vres9   -0.006133809   0.071203158    0.019940176
T10 Vres10  0.007501157    0.032705474    0.012619076

Excel's solver doesn't get the absolute optimum least-squares solution.

T7 through 10 offer no worthwhile improvement in fit whatsoever.

For some calibration purposes equal ripple is better.

This is the sort of case where the rational Chebyshev approximation is
super useful. The estimable Forman Acton, in "Numerical Methods that
Work", talks about the folly of trying to fit functions that don't have
polynomial-type geometries, e.g. things with asymptotes. Even if the
difficult place is outside the fit range, it still drives polynomials batty.

The procedure I outlined upthread will often produce nearly the minimax
rational function.
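
As a taste of why rationals behave better near awkward geometry: a
Padé approximant is not the minimax rational fit, but it is the
easiest rational approximation to construct; a Python sketch:

    import numpy as np
    from scipy.interpolate import pade

    # Taylor coefficients of exp(x) about 0; pade() converts them to
    # a ratio of two quadratics. A rational function can follow
    # pole-like or asymptotic behaviour that drives a polynomial of
    # the same total order batty.
    taylor = [1.0, 1.0, 1/2, 1/6, 1/24]
    p, q = pade(taylor, 2)          # numerator, denominator (poly1d)
    x = 1.5
    print(p(x)/q(x), np.exp(x))     # ~4.43 vs 4.48 from 5 coefficients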

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
https://hobbs-eo.com
 
jlarkin@highlandsniptechnology.com wrote

Otherwise, there are two forms of the CVD equation: one for below 0C
and one for above 0C. It's easy to select which one; you switch them
above/below 100 ohms.

Is it really the case that a simple cubic equation represents the
behaviour of the metal?

No, but it's close enough. Even 2nd order is pretty good in the
human-compatible temperature range. Super-accurate RTDs are expensive
and generally not practical; there are lots of other errors in
temperature measurement.

I guess one could check this by evaluating the equation but I read of
a 3K error at +3K which would be highly relevant in a cryogenic
application.

Interpolating the table would be a lot better - if one assumes the
table is definitive.
 
On Mon, 07 Oct 2019 21:24:57 +0100, Peter <nospam@nospam9876.com>
wrote:

jlarkin@highlandsniptechnology.com wrote

Otherwise, there are two forms of the CVD equation: one for below 0C
and one for above 0C. It's easy to select which one; you switch them
above/below 100 ohms.

Is it really the case that a simple cubic equation represents the
behaviour of the metal?

No, but it's close enough. Even 2nd order is pretty good in the
human-compatible temperature range. Super-accurate RTDs are expensive
and generally not practical; there are lots of other errors in
temperature measurement.


I guess one could check this by evaluating the equation but I read of
a 3K error at +3K which would be highly relevant in a cryogenic
application.

Interpolating the table would be a lot better - if one assumes the
table is definitive.

RTDs tend to go to hell at liquid helium temps, just where you want
milliKelvin accuracy. All sorts of things get weird below about 20K.

We used Lakeshore silicon diodes for the Cebaf cryo stuff. They have
some magical recipe. Below 20K the carriers "freeze out" and the
voltage drop skyrockets, which is great.
 
On Friday, October 4, 2019 at 10:53:48 AM UTC-4, jla...@highlandsniptechnology.com wrote:
On Fri, 4 Oct 2019 08:56:09 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 03/10/2019 22:31, John Larkin wrote:
On Thu, 3 Oct 2019 10:56:23 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 03/10/2019 03:53, Bill Sloman wrote:
On Thursday, October 3, 2019 at 2:50:07 AM UTC+10, jla...@highlandsniptechnology.com wrote:

In real life, 3rd order mostly works.

In John Larkin's experience, this may be true. It probably doesn't generalise.

He has a point in this instance though. There is little evidence that
the N=10 solutions used by the ITS-1990 are any better than N=3 or N=4.

Evaluate the individual terms for T=-270 in the low range polynomial to
see what I mean. I am old school where polynomials are concerned: I
like to see nice, well-behaved convergence towards a solution.

Put another way, you should be worried when d[n]/d[n+1] < 270.

I reckon d[n] for n>5 are not improving things in any meaningful sense
but have to be included because of the way the fitting was done.

FWIW Excel charting can get plausible looking fits out to N=6 (which is
as far as it permits users to go). That is in the sense that they have
coefficients d[n] that tend towards zero faster than (1/270)^n.

I am mystified how they managed to calibrate this at all when there are
so few well behaved triple points available for precision spot checks.

Commercial thermocouples are usually specified for 0.5 to 5 C
accuracy. Numbers like +-2C +-0.75% are typical. Ref junction sensing
and amplifier errors add to that. We're dealing with microvolts.

In real life, reasonably close to room temp, they are usually a lot
better than their specs.

Table lookup and interpolation is easy. The numbers are available in
tables.

That is probably why no-one has noticed that some of the N=10 fitted
polynomial coefficients in that standards document are complete crap.

Do you happen to have a table for type K? I'd be interested in doing a
proper polynomial fit to what the correct result ought to look like.


Here's a type T table, voltage to temperature. The NMR people like T
because it covers their range well.
I use type T for magnetic applications because it's the least magnetic
of the TC pairs. Using thin wire helps too, obviously.

George H.
I derived this from the NIST
polynomials with a PowerBasic program, generating a file for a 68K
cross assembler.

https://www.dropbox.com/s/ftlmjrqt27rp529/L350_Type_T.txt?dl=0

I probably have a type K somewhere. We've done every known type.

I recall doing the RTD tables by typing the numbers from an Omega
handbook. There aren't many points for the ref junction correction.




--

John Larkin Highland Technology, Inc

lunatic fringe electronics
 
On 07/10/2019 21:24, Peter wrote:
jlarkin@highlandsniptechnology.com wrote

Otherwise, there are two forms of the CVD equation: one for below 0C
and one for above 0C. It's easy to select which one; you switch them
above/below 100 ohms.

Is it really the case that a simple cubic equation represents the
behaviour of the metal?

No, but it's close enough. Even 2nd order is pretty good in the
human-compatible temperature range. Super-accurate RTDs are expensive
and generally not practical; there are lots of other errors in
temperature measurement.


I guess one could check this by evaluating the equation but I read of
a 3K error at +3K which would be highly relevant in a cryogenic
application.

The equation as cast in Centigrade gets pretty ropey down at T=-270,
and the RTD doesn't help things, as its dV/dT gets smaller as you
approach absolute zero, which magnifies a small error in voltage
converted to temperature. You can do slightly better at low
temperatures by recasting the problem so that you fit a polynomial in
absolute T, i.e. in Kelvin.

Fitting in T/Centigrade, the values are subject to numerical
cancellation in the cryogenic regime, just where things get
interesting.

The difference is not huge, but if you are intending to use it for
cryogenics it could matter. Better sensors are available for the task.

Chebyshev fit, 0 to -270C: chi-squared 0.0125, max error 22.5mV
Kelvin polynomial fit, 3 to 273K: chi-squared 0.000428, max error 3.2mV

It gets the error in temperature down to 2.5K worst case at the expense
of being subject to cancellation errors near normal fridge temperatures.
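
The cancellation is easy to demonstrate: evaluate the same cubic at
single precision about a Celsius origin and again about a Kelvin
origin. A sketch with made-up coefficients of roughly
Callendar-Van Dusen size:

    import numpy as np

    c_cels = [1.0, 3.9083e-3, -5.775e-7, -4.183e-12]  # powers of t(C)

    # Re-centre the polynomial exactly, in double: p(t) -> p(T-273.15).
    p = np.polynomial.Polynomial(c_cels)
    p_kelv = p(np.polynomial.Polynomial([-273.15, 1.0]))

    def horner32(coeffs, x):
        """Horner evaluation with float32 rounding at every step."""
        acc = np.float32(0.0)
        for a in reversed(list(coeffs)):
            acc = acc*np.float32(x) + np.float32(a)
        return acc

    exact = p(-270.0)                             # float64 reference
    print(horner32(c_cels, -270.0) - exact)       # Celsius origin
    print(horner32(p_kelv.coef, 3.15) - exact)    # Kelvin origin
    # The Celsius form sums large alternating terms near absolute
    # zero, so it typically loses about an order of magnitude more.
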
Interpolating the table would be a lot better - if one assumes the
table is definitive.

Not an entirely safe assumption. The table is derived.
--
Regards,
Martin Brown
 
On 07/10/2019 18:18, Phil Hobbs wrote:
On 10/7/19 5:21 AM, Martin Brown wrote:

For some calibration purposes equal ripple is better.

This is the sort of case where the rational Chebyshev approximation is
super useful. The estimable Forman Acton, in "Numerical Methods that
Work", talks about the folly of trying to fit functions that don't have
polynomial-type geometries, e.g. things with asymptotes. Even if the
difficult place is outside the fit range, it still drives polynomials batty.

Another blast from the past. I used to like that book too.

The procedure I outlined upthread will often produce nearly the minimax
rational function.

In this case a considerable improvement at the low temperature end is
possible simply by a shift of temperature origin from Centigrade to
Kelvin so that the rather curved non-linear bit near absolute zero is
not being computed as the small difference between large numbers.

I'm a little surprised by just how effective the shift of origin is:
it gains nearly an order of magnitude decrease in the residuals, at
standard spreadsheet precision, compared with the Chebyshev fit. The
RTD still isn't the method of choice down there, but that's another
story.

The moral of the story, which seems not to have been followed in this
case: always look at a plot of the residuals after fitting any
function.

--
Regards,
Martin Brown
 
On Mon, 7 Oct 2019 19:39:53 -0700 (PDT), George Herold
<gherold@teachspin.com> wrote:

On Friday, October 4, 2019 at 10:53:48 AM UTC-4, jla...@highlandsniptechnology.com wrote:
On Fri, 4 Oct 2019 08:56:09 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 03/10/2019 22:31, John Larkin wrote:
On Thu, 3 Oct 2019 10:56:23 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 03/10/2019 03:53, Bill Sloman wrote:
On Thursday, October 3, 2019 at 2:50:07 AM UTC+10, jla...@highlandsniptechnology.com wrote:

In real life, 3rd order mostly works.

In John Larkin's experience, this may be true. It probably doesn't generalise.

He has a point in this instance though. There is little evidence that
the N=10 solutions used by the ITS-1990 are any better than N=3 or N=4.

Evaluate the individual terms for T=-270 in the low range polynomial to
see what I mean. I am old school where polynomials are concerned: I
like to see nice, well-behaved convergence towards a solution.

Put another way, you should be worried when d[n]/d[n+1] < 270.

I reckon d[n] for n>5 are not improving things in any meaningful sense
but have to be included because of the way the fitting was done.

FWIW Excel charting can get plausible looking fits out to N=6 (which is
as far as it permits users to go). That is in the sense that they have
coefficients d[n] that tend towards zero faster than (1/270)^n.

I am mystified how they managed to calibrate this at all when there are
so few well behaved triple points available for precision spot checks.

Commercial thermocouples are usually specified for 0.5 to 5 C
accuracy. Numbers like +-2C +-0.75% are typical. Ref junction sensing
and amplifier errors add to that. We're dealing with microvolts.

In real life, reasonably close to room temp, they are usually a lot
better than their specs.

Table lookup and interpolation is easy. The numbers are available in
tables.

That is probably why no-one has noticed that some of the N=10 fitted
polynomial coefficients in that standards document are complete crap.

Do you happen to have a table for type K? I'd be interested in doing a
proper polynomial fit to what the correct result ought to look like.


Here's a type T table, voltage to temperature. The NMR people like T
because it covers their range well.
I use type T for magnetic applications because it's the least magnetic
of the TC pairs. Using thin wire helps too, obviously.

George H.

Good point. Our thermocouples were located close to the sample, near
the center of an X-MHz superconductive NMR magnet.

We controlled the heater that blew air on to the sample, and the t/c
sampled the air close to the sample, all in a dewar.




--

John Larkin Highland Technology, Inc

lunatic fringe electronics
 
On Tuesday, October 8, 2019 at 8:27:49 AM UTC-4, Martin Brown wrote:
On 07/10/2019 21:24, Peter wrote:

jlarkin@highlandsniptechnology.com wrote

Otherwise, there are two forms of the CVD equation: one for below 0C
and one for above 0C. It's easy to select which one; you switch them
above/below 100 ohms.

Is it really the case that a simple cubic equation represents the
behaviour of the metal?

No, but it's close enough. Even 2nd order is pretty good in the
human-compatible temperature range. Super-accurate RTDs are expensive
and generally not practical; there are lots of other errors in
temperature measurement.


I guess one could check this by evaluating the equation but I read of
a 3K error at +3K which would be highly relevant in a cryogenic
application.

The equation as cast in Centigrade gets pretty ropey down at T=-270,
and the RTD doesn't help things, as its dV/dT gets smaller as you
approach absolute zero, which magnifies a small error in voltage
converted to temperature. You can do slightly better at low
temperatures by recasting the problem so that you fit a polynomial in
absolute T, i.e. in Kelvin.

Fitting in T/Centigrade, the values are subject to numerical
cancellation in the cryogenic regime, just where things get
interesting.

The difference is not huge, but if you are intending to use it for
cryogenics it could matter. Better sensors are available for the task.

Chebyshev fit, 0 to -270C: chi-squared 0.0125, max error 22.5mV
Kelvin polynomial fit, 3 to 273K: chi-squared 0.000428, max error 3.2mV

I know little of various curve fitting schemes. But having some basis
in 'real' physics would seem like an obvious first step.
(Such as using absolute temperature and not some arbitrary zero.)

George H.
It gets the error in temperature down to 2.5K worst case at the expense
of being subject to cancellation errors near normal fridge temperatures.

Interpolating the table would be a lot better - if one assumes the
table is definitive.

Not an entirely safe assumption. The table is derived.
--
Regards,
Martin Brown
 
On 09/10/2019 01:23, George Herold wrote:
On Tuesday, October 8, 2019 at 8:27:49 AM UTC-4, Martin Brown wrote:
On 07/10/2019 21:24, Peter wrote:

I guess one could check this by evaluating the equation but I read of
a 3K error at +3K which would be highly relevant in a cryogenic
application.

The equation as cast in Centigrade gets pretty ropey down at T=-270,
and the RTD doesn't help things, as its dV/dT gets smaller as you
approach absolute zero, which magnifies a small error in voltage
converted to temperature. You can do slightly better at low
temperatures by recasting the problem so that you fit a polynomial in
absolute T, i.e. in Kelvin.

Fitting in T/Centigrade, the values are subject to numerical
cancellation in the cryogenic regime, just where things get
interesting.

The difference is not huge, but if you are intending to use it for
cryogenics it could matter. Better sensors are available for the task.

Chebyshev fit, 0 to -270C: chi-squared 0.0125, max error 22.5mV
Kelvin polynomial fit, 3 to 273K: chi-squared 0.000428, max error 3.2mV

I know little of various curve fitting schemes. But having some basis
in 'real' physics would seem like an obvious first step.
(Such as using absolute temperature and not some arbitrary zero.)

With infinite machine precision they would all give the same answer,
but in practice, even at double precision, how you evaluate things
matters.

Numerical analysis of data fitting for calibration can be fickle.

There are always trade-offs to be made in practice.

--
Regards,
Martin Brown
 
