Math is hard

In comp.arch.fpga robert bristow-johnson <rbj@audioimagination.com> wrote:
On 1/21/14 12:18 PM, Rob Gaddi wrote:
(snip)
Not sure I'm following you there, this could be a problem with my
understanding. When I hear people talking about spline fitting, to my
mind that's a series of piecewise polynomial approximations.

that's normally what i thought. however, when i look up the math for
"cubic splines" it doesn't seem to be exactly the same as fitting either
3rd-order Lagrange or 3rd-order Hermite polynomials. i would have
expected it to come out as one or the other.

Fit N-1 cubic sections to N points, you need 4N-4 parameters.
Going through the points at either end, is 2N-2. Continuous
first and second derivative at the N-2 splice points, 2N-4 more.
That leaves two more, which are the boundary conditions at the
end points.
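[Editor's note: glen's bookkeeping can be checked numerically. A sketch using scipy's natural cubic spline, not from the thread; the sample points and the exp(-x) test values are arbitrary choices for illustration.]

```python
# Check the constraint count for a natural cubic spline:
# N points -> N-1 cubic pieces -> 4N-4 coefficients.
# Interpolation at both ends of each piece:         2N-2 equations
# Continuous y' and y'' at the N-2 interior knots:  2N-4 equations
# "Natural" boundary condition y'' = 0 at the ends:    2 equations
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.5, 4.0, 5.0])   # N = 5 points
y = np.exp(-x)                             # arbitrary sample values

cs = CubicSpline(x, y, bc_type='natural')

# The spline passes through all N points...
assert np.allclose(cs(x), y)
# ...its first and second derivatives match across the interior knots...
eps = 1e-9
for xk in x[1:-1]:
    assert abs(cs(xk - eps, 1) - cs(xk + eps, 1)) < 1e-5
    assert abs(cs(xk - eps, 2) - cs(xk + eps, 2)) < 1e-4
# ...and the two end conditions soak up the last two free parameters.
assert abs(cs(x[0], 2)) < 1e-9 and abs(cs(x[-1], 2)) < 1e-9
print("4N-4 =", 4 * len(x) - 4, "coefficients, all constraints satisfied")
```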
Piecewise linear being the degenerate case.
Is that a reasonable statement?

who are you calling "degenerate"?! :)

probably piecewise-constant is more degenerate. maybe zeros inserted
between points is even more degenerate. other than approximating the
whole thing with zero (or some other constant), i cannot think of a more
degenerate case.

(snip)

maybe direct lookup for the first 128 and some equally-spaced polynomial
splines for the remaining 511 regions?

That might be the more obvious, and easiest to implement, of the
non-uniformly spaced interpolation methods.

Reminds me of some years ago, someone was building a sampling
system for a signal with a long exponential decaying tail.

(And when high-speed RAM was pretty expensive.)

The idea was to sample at full speed for 10 samples, then average
groups of 10 for the next 100 (or maybe 90), then average groups
of 100 for the next. Then exponentially longer and longer
blocks after that. They would do all the averaging in logic,
before storing into the not-too-big RAM.
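[Editor's note: a reconstruction of the block-averaging scheme glen describes, not the original design. The `base=10` tiering and the constant-input check are my assumptions.]

```python
# Store the first 10 samples raw, then averages of 10 for the next
# stretch, then averages of 100, and so on, so a long exponential tail
# fits in a small RAM. Each tier stores `base` words before the block
# length grows by another factor of `base`.
def compress(samples, base=10):
    out = []
    i = 0
    block = 1                       # current averaging block length
    stored_in_tier = 0
    while i + block <= len(samples):
        out.append(sum(samples[i:i + block]) / block)
        i += block
        stored_in_tier += 1
        if stored_in_tier == base:  # tier full: go coarser
            block *= base
            stored_in_tier = 0
    return out

# A constant input survives the averaging unchanged, and
# 11110 samples (10*1 + 10*10 + 10*100 + 10*1000) compress to 40 words.
data = [1.0] * 11110
print(len(compress(data)))  # → 40
```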

I had some ideas how to build one, but they didn't ask.

-- glen
 
Rob Gaddi <rgaddi@technologyhighland.invalid> writes:

On Tue, 21 Jan 2014 10:45:35 +0100
David Brown <david.brown@hesbynett.no> wrote:

On 20/01/14 19:11, Rob Gaddi wrote:

I did a little playing with it again this morning, and just tried curve
fitting just the first small segment (n = 1:128). Even a cubic fit has
immense errors over just that short of a range. exp() really has
pretty vicious behavior.


You can't fit a cubic (or other polynomial) by itself - you need a
spline. And for a curve like this, you want varying steps, with far
more steps at the lower range. You might be able to divide it up using
the first zero as the index (i.e., ranges 1..2, 2..4, 4..8, 8..16, etc.)
The lowest range - say up to 16 - is probably best handled with a
straight lookup table. After that, you'd need 12 sets of coefficients.
If cubic interpolation here is not good enough, try higher odd powers
(anchor your polynomials at the end points and the derivatives at these
points to get a smooth spline).


Not sure I'm following you there, this could be a problem with my
understanding. When I hear people talking about spline fitting, to my
mind that's a series of piecewise polynomial approximations. Piecewise
linear being the degenerate case. Is that a reasonable statement?

The overall range I was trying to span was integer N 1:65535. Trying
to fit a cubic to the range 1:128 was an attempt to see whether even
going to 512 (linearly spaced) pieces was going to give me a decent
approximation. At least in the high curvature section of small N, the
results were ghastly.
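[Editor's note: Rob's observation is easy to reproduce. A numpy sketch, taking exp(n) itself as the target; the thread's actual table function may differ, but the steep behavior is the same.]

```python
# Least-squares cubic fit to exp(n) over n = 1..128. exp spans roughly
# 55 decades on this range, so a cubic has no chance: the fit chases the
# huge values at the top and the relative error at small n is enormous.
import numpy as np

n = np.arange(1, 129, dtype=float)
y = np.exp(n)
coeffs = np.polyfit(n, y, 3)        # least-squares cubic
fit = np.polyval(coeffs, n)

rel_err = np.abs(fit - y) / y
print(f"max relative error: {rel_err.max():.3g}")
```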

Rob, why is such a fancy debouncing algorithm necessary? Just curious.
--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
 
On 1/21/14 12:18 PM, Rob Gaddi wrote:
On Tue, 21 Jan 2014 10:45:35 +0100
David Brown<david.brown@hesbynett.no> wrote:

On 20/01/14 19:11, Rob Gaddi wrote:

I did a little playing with it again this morning, and just tried curve
fitting just the first small segment (n = 1:128). Even a cubic fit has
immense errors over just that short of a range. exp() really has
pretty vicious behavior.


You can't fit a cubic (or other polynomial) by itself - you need a
spline. And for a curve like this, you want varying steps, with far
more steps at the lower range. You might be able to divide it up using
the first zero as the index (i.e., ranges 1..2, 2..4, 4..8, 8..16, etc.)
The lowest range - say up to 16 - is probably best handled with a
straight lookup table. After that, you'd need 12 sets of coefficients.
If cubic interpolation here is not good enough, try higher odd powers
(anchor your polynomials at the end points and the derivatives at these
points to get a smooth spline).


Not sure I'm following you there, this could be a problem with my
understanding. When I hear people talking about spline fitting, to my
mind that's a series of piecewise polynomial approximations.

that's normally what i thought. however, when i look up the math for
"cubic splines" it doesn't seem to be exactly the same as fitting either
3rd-order Lagrange or 3rd-order Hermite polynomials. i would have
expected it to come out as one or the other.

Piecewise
linear being the degenerate case. Is that a reasonable statement?

who are you calling "degenerate"?! :)

probably piecewise-constant is more degenerate. maybe zeros inserted
between points is even more degenerate. other than approximating the
whole thing with zero (or some other constant), i cannot think of a more
degenerate case.

The overall range I was trying to span was integer N 1:65535. Trying
to fit a cubic to the range 1:128 was an attempt to see whether even
going to 512 (linearly spaced) pieces was going to give me a decent
approximation. At least in the high curvature section of small N, the
results were ghastly.

maybe direct lookup for the first 128 and some equally-spaced polynomial
splines for the remaining 511 regions?

--

r b-j rbj@audioimagination.com

"Imagination is more important than knowledge."
 
On Tue, 21 Jan 2014 12:33:53 -0500
Randy Yates <yates@digitalsignallabs.com> wrote:

Rob Gaddi <rgaddi@technologyhighland.invalid> writes:

On Tue, 21 Jan 2014 10:45:35 +0100
David Brown <david.brown@hesbynett.no> wrote:

On 20/01/14 19:11, Rob Gaddi wrote:

I did a little playing with it again this morning, and just tried curve
fitting just the first small segment (n = 1:128). Even a cubic fit has
immense errors over just that short of a range. exp() really has
pretty vicious behavior.


You can't fit a cubic (or other polynomial) by itself - you need a
spline. And for a curve like this, you want varying steps, with far
more steps at the lower range. You might be able to divide it up using
the first zero as the index (i.e., ranges 1..2, 2..4, 4..8, 8..16, etc.)
The lowest range - say up to 16 - is probably best handled with a
straight lookup table. After that, you'd need 12 sets of coefficients.
If cubic interpolation here is not good enough, try higher odd powers
(anchor your polynomials at the end points and the derivatives at these
points to get a smooth spline).


Not sure I'm following you there, this could be a problem with my
understanding. When I hear people talking about spline fitting, to my
mind that's a series of piecewise polynomial approximations. Piecewise
linear being the degenerate case. Is that a reasonable statement?

The overall range I was trying to span was integer N 1:65535. Trying
to fit a cubic to the range 1:128 was an attempt to see whether even
going to 512 (linearly spaced) pieces was going to give me a decent
approximation. At least in the high curvature section of small N, the
results were ghastly.

Rob, why is such a fancy debouncing algorithm necessary? Just curious.
--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com

That's just it, at the end of the day I decided it wasn't. The
asymmetry is really handy because it means the end customer can decide
between setting the rise/fall symmetrically and having basically just a
bandlimited view of the input, or asymmetrically, which provides glitch
detection; you can have 10s of ms to get around to looking for a brief
spike. So that feature's handy.

But the actual implementation, whereby a first-order exponential
averager can be programmed with linear time constants? Cute when it
looked like there was some simple way to get there. As the
complication level got worse and worse, other options that are "a bit
of a nuisance" started looking a whole lot better.
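[Editor's note: for reference, the relation that makes "linear time constants" need an exp() table is presumably the standard one-pole identity. A sketch, not Rob's implementation; the coefficient formula is the textbook one, not quoted from the thread.]

```python
# One-pole exponential averager, y += a*(x - y), with its time constant
# set in samples. For a time constant of N samples, a = 1 - exp(-1/N):
# the step response then reaches 1 - 1/e after N samples. Computing that
# exp() for arbitrary N is the table problem discussed above.
import math

def smooth(xs, N):
    a = 1.0 - math.exp(-1.0 / N)   # the exp() one would tabulate
    y = 0.0
    out = []
    for x in xs:
        y += a * (x - y)
        out.append(y)
    return out

N = 1000
step = smooth([1.0] * N, N)
print(step[-1])                    # → ~0.632 (= 1 - 1/e)
```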

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.
 
On 1/21/14 5:12 PM, glen herrmannsfeldt wrote:
In comp.arch.fpga Rob Gaddi<rgaddi@technologyhighland.invalid> wrote:
On Tue, 21 Jan 2014 10:45:35 +0100
David Brown<david.brown@hesbynett.no> wrote:

(snip)
You can't fit a cubic (or other polynomial) by itself - you need a
spline. And for a curve like this, you want varying steps, with far
more steps at the lower range. You might be able to divide it up using
the first zero as the index (i.e., ranges 1..2, 2..4, 4..8, 8..16, etc.)
The lowest range - say up to 16 - is probably best handled with a
straight lookup table. After that, you'd need 12 sets of coefficients.
If cubic interpolation here is not good enough, try higher odd powers
(anchor your polynomials at the end points and the derivatives at these
points to get a smooth spline).

Not sure I'm following you there, this could be a problem with my
understanding. When I hear people talking about spline fitting, to my
mind that's a series of piecewise polynomial approximations. Piecewise
linear being the degenerate case. Is that a reasonable statement?

Seems to me that the advantage of cubic splines is the continuous
first and second derivative.

*second* derivative? how does a third-order polynomial, with 4
coefficients, satisfy 6 constraints - 3 on each side, left and right?

i seem to remember the term "osculating" from Duane Wise.

i can understand how the Hermite polynomials do it, but i can't see how
this additional derivative works.


--

r b-j rbj@audioimagination.com

"Imagination is more important than knowledge."
 
(I know the OP has found a better way to solve the original problem, but
I think the discussion here is still fun!)

On 21/01/14 23:12, glen herrmannsfeldt wrote:
In comp.arch.fpga Rob Gaddi <rgaddi@technologyhighland.invalid> wrote:
On Tue, 21 Jan 2014 10:45:35 +0100
David Brown <david.brown@hesbynett.no> wrote:

(snip)
You can't fit a cubic (or other polynomial) by itself - you need a
spline. And for a curve like this, you want varying steps, with far
more steps at the lower range. You might be able to divide it up using
the first zero as the index (i.e., ranges 1..2, 2..4, 4..8, 8..16, etc.)

(Note that this should be "first one", not "first zero".)

The lowest range - say up to 16 - is probably best handled with a
straight lookup table. After that, you'd need 12 sets of coefficients.
If cubic interpolation here is not good enough, try higher odd powers
(anchor your polynomials at the end points and the derivatives at these
points to get a smooth spline).

Not sure I'm following you there, this could be a problem with my
understanding. When I hear people talking about spline fitting, to my
mind that's a series of piecewise polynomial approximations. Piecewise
linear being the degenerate case. Is that a reasonable statement?

Ignoring other people's comments about degenerate cases, yes, that's
correct.

Seems to me that the advantage of cubic splines is the continuous
first and second derivative. (And more derivatives for higher
order splines.) Also, that the result goes through the supplied
points.

If you fit an n-1 degree polynomial to n points, it will go through
the points, but likely fluctuate wildly in between. A lower order
polynomial will be less wild, but won't go through the points.

For a cubic spline, the 3rd derivative is piecewise constant - and higher
order derivatives are always 0. This avoids the "wildness" of many higher
order polynomials - in particular, if you try to fit a high order
polynomial directly to a set of points then higher derivatives are often
very large. Cubic splines are often chosen for being more flexible (and
curvy) than linear splines, but with more tractable maths and less
"wildness" than higher orders.

But remember that there are /many/ ways to make a cubic spline for a set
of points or for a given function. A common way - as glen mentioned in
another post - is to make it pass through all N points and make the
first and second derivatives match up smoothly. But there are others,
such as aiming to minimise the RMS error from many points, or making the
values and first derivatives match the target curve at the boundary
points. Some methods involve solving huge matrices - calculating the
entire spline in one go - and others are done piecemeal.


The overall range I was trying to span was integer N 1:65535. Trying
to fit a cubic to the range 1:128 was an attempt to see whether even
going to 512 (linearly spaced) pieces was going to give me a decent
approximation. At least in the high curvature section of small N, the
results were ghastly.

That is why I said you should not be using linearly spaced pieces - you
need closer sections at the higher curvature part of the table.

It isn't required that an N element lookup table use N linearly
spaced points, but it does simplify the logic. Consider how A-law
and u-law coding allows reasonable dynamic range for digital
telephone audio.

Linear spacing makes the table smaller (you don't have to store "x"
values), and lookup faster (you can go directly to the right line). My
thoughts on this function are that indexing by the first one bit could give
you a compact and fast table with non-linear spacing (approximately
logarithmic spacing).
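[Editor's note: David's "index by the first one bit" scheme can be sketched like this. In an FPGA the segment index is just a priority encoder on n; the table contents here are hypothetical linear segments for f(n) = ln(n), chosen only to make the sketch concrete.]

```python
# Non-uniform lookup indexed by the position of the leading one bit:
# segment k covers n in [2^k, 2^(k+1)), i.e. ranges 1..2, 2..4, 4..8, ...
# so spacing is approximately logarithmic with no stored x values.
import math

NBITS = 16                          # n in 1..65535 -> 16 segments

def segment(n):
    return n.bit_length() - 1       # 0 for n=1, 1 for 2..3, 2 for 4..7, ...

# Hypothetical per-segment linear coefficients for f(n) = ln(n),
# anchored at the segment endpoints.
table = []
for k in range(NBITS):
    x0, x1 = float(2 ** k), float(2 ** (k + 1))
    slope = (math.log(x1) - math.log(x0)) / (x1 - x0)
    table.append((math.log(x0), slope, x0))

def lookup(n):
    b, m, x0 = table[segment(n)]
    return b + m * (n - x0)

# Spot check: the interpolation lands exactly on segment endpoints.
print(segment(5), lookup(4), lookup(8))
```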
 
