Lead-Lag versus PID

On a sunny day (Wed, 26 May 2004 13:44:08 -0700) it happened Tim Wescott
<tim@wescottnospamdesign.com> wrote in <10ba0c9cl4v8e04@corp.supernews.com>:

You sound like you're beyond it already, but see my "PID Without a PhD"
and the other articles on my website:
http://www.wescottdesign.com/articles/pidwophd.html,
http://www.wescottdesign.com/articles/zTransform/z-transforms.html.

Embedded Systems Programming magazine owns the copyright on the PID
article, so I'm not supposed to distribute it :(. My website just sends
you over to theirs to read it.

They will, however, sell me reprints -- $3000 for 500 of 'em.

Tim, I have just read your article, in one long read,
and I want to thank you for this nice clearly written well documented piece
of work.
It is the best PID intro I have ever read.
Regards
JP
 
In article <10ba2p5ib99p6c6@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:

In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:

I've been learning about control theory, and I'm a little confused on
lead, lag, and lead-lag compensation. My book (Bucek) described them as
filters, and gave analog circuits consisting of a few resistors and
capacitors, in contrast to the PID circuits with op-amps. But he also
analyzes them alone, in front of a plant, as if they were stand-alone
controllers. The lead-lag is a lot like a PID with an extra pole.

What should I think of them, and how they compare with PIDs?


You can think about all of them as filters, or as controllers. On the
surface this is confusing, but it means that you can choose how to think
about it in the way that makes the problem at hand easiest to solve.

It is almost always (like 99.999% of the time) a good idea to put some
frequency limiting on a differentiator. You do this by putting a pole
into your differentiator at some high frequency. When you do this
you've made a lead-lag compensator with its zero frequency at zero.
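In discrete time that's only a couple of lines. A minimal C sketch of the
idea (the names and the one-pole form are mine, not from anybody's real
code):

    /* Band-limited derivative: a difference followed by a one-pole
     * low-pass, i.e. s/(T*s + 1) -- zero at the origin, pole at the
     * high frequency 1/T. */
    double deriv_update(double e, double e_prev, double dt, double T)
    {
        static double d_state = 0.0;          /* low-pass state */
        double d_raw = (e - e_prev) / dt;     /* raw difference quotient */
        double alpha = dt / (T + dt);         /* pole coefficient (backward Euler) */
        d_state += alpha * (d_raw - d_state); /* one-pole low-pass */
        return d_state;
    }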


I suppose every system must have a pole in there somewhere, because
everything has inertia, or thermal inertia, or inductance, or something.

So as I understand this, a PID transfer function looks like

P + I/s + Ds

A low-pass filter has a transfer function like

1/(1 + Ts)

Bring them together in a somewhat standard form, and

(D/T) (s^2 + (P/D)s + (I/D)) / s(s + 1/T)
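(Checking the algebra: (P + I/s + Ds) * 1/(1 + Ts) = (Ds^2 + Ps + I) /
(s(1 + Ts)), and dividing top and bottom by T gives the form above.)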

So that almost looks like a lead-lag, except the lead-lag can have a pole
somewhere else instead of at s=0.
And it matters whether the low-pass filter is on the controller or on the
system measurement, doesn't it? G/(1+GH), as the story goes, and if the
low-pass is on the lock-in that feeds me my error signal, it's in H and
not G.

A resistor+capacitor low-pass filter is a lot like an integrator, except
it's not an ideal integrator. If it were an ideal integrator it would be
a/s, and not a/(s+b).

I wanted to say a few things about what the pole at the origin means, but
I suppose I have to think of controller and plant, not just controller.


As I pointed out elsewhere, a lead-lag is generally a first-order
pole-zero pair, so it's a bit odd to call a 2nd order transfer function
"lead-lag".
Eh? I thought a lead compensator was first-order, a lag compensator was
first order, and a lead-lag compensator was second order.

For lead or lag,

D(s) = K (s+a)/(s+b)

For lead-lag,

D(s) = K (s+a1)(s+a2) / (s+b1)(s+b2)

In frequency-domain terms having the pole at s = 0 means that the error
term has a zero at s = 0, which means that your system has zero error at
DC. In nice, familiar continuous-time terms having an integrator in
your controller means that when it's presented with a non-zero error
your controller will just push harder and harder and harder until that
damn error goes _away_, fer cryin out loud!

Using a low-pass filter instead of an integrator is like hiring your
lazy brother-in-law*. He'll push on it up to a certain extent, but he
knows that you'll never fire him, so you only get so much effort out of him.

* I'm in an odd mood today...
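To put numbers on it: against a constant disturbance the integrator keeps
ramping until the error is exactly zero, while the low-pass a/(s+b) tops
out at a DC gain of a/b, so the loop settles with a leftover error of
about 1/(1 + loop DC gain) of the disturbance.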
Well, the lazy brother-in-law showed me the value of an s=0 there, and I'm
convinced.
--
"For every problem there is a solution which is simple, clean and wrong."
-- Henry Louis Mencken
 
On Wed, 26 May 2004 13:44:08 -0700, Tim Wescott
<tim@wescottnospamdesign.com> wrote:

Jim Thompson wrote:
On Wed, 26 May 2004 19:14:48 +0000 (UTC),
glhansen@steel.ucs.indiana.edu (Gregory L. Hansen) wrote:


In article <10b9jsvgfc6m92@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:


In article <u7Vsc.10214$XI4.368596@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:


"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...


In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:


-- snip --

You sound like you're beyond it already, but see my "PID Without a PhD"
and the other articles on my website:
http://www.wescottdesign.com/articles/pidwophd.html,
http://www.wescottdesign.com/articles/zTransform/z-transforms.html.

[snip]

Tim,

The second article was easy enough to print as a PDF, but the first is
surrounded by advertising. Any way to get a copy as a clean PDF?

Thanks!

...Jim Thompson

Embedded Systems Programming magazine owns the copyright on the PID
article, so I'm not supposed to distribute it :(. My website just sends
you over to theirs to read it.

They will, however, sell me reprints -- $3000 for 500 of 'em.
Oooooh! How kind ;-)

...Jim Thompson
--
| James E.Thompson, P.E. | mens |
| Analog Innovations, Inc. | et |
| Analog/Mixed-Signal ASIC's and Discrete Systems | manus |
| Phoenix, Arizona Voice:(480)460-2350 | |
| E-mail Address at Website Fax:(480)460-2142 | Brass Rat |
| http://www.analog-innovations.com | 1962 |

I love to cook with wine. Sometimes I even put it in the food.
 
Gregory L. Hansen wrote:
In article <10ba10b6g9onqe5@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:


Choose your nonlinearity to make the system appear as linear as possible
to the controller. Most thermal systems respond fairly linearly going from
power in to temperature out, so I'd stick with taking the square root.
Putting _two_ square terms in the loop is probably not a good idea at all.


I'm not sure I get you.

My error signal is linear in temperature, and my system heat capacities
and conductivities are highly independent of temperature in the range of
temperatures I'm in. So there's no trick to calculating something like

dP = k dT

because the conductivity k is a constant to high precision. But P=V^2/R,
so the steady-state change in temperature caused by a control action is
linear in V^2, not V.

So I wasn't sure whether I should PID my temperature error, or PID my
power error.


Obscurity again? It's not my day...

Since the plant is linear going from power in to temperature out, you
are linearizing the right way (taking the square root).


Spelling it out to avoid an obscurity moment, I should keep using
temperature error in the PID routine, and take the square root, rather
than squaring the temperature error for the PID routine and taking the
square root of the result.

Yes, exactly. The only drawback is that taking a square root is quite
processor intensive -- but if you're already doing it then it must be OK.
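The whole linearization is one line on the way out of the loop. A sketch
of the shape of it in C (pid_update(), dac_write() and R_HEATER are
placeholders, not anyone's real code):

    #include <math.h>   /* sqrt() */

    /* One pass through the loop: the PID acts on temperature error, and
     * its output is treated as a power request, converted to the heater
     * voltage that delivers it. */
    void control_step(double t_setpoint, double t_measured)
    {
        double pid_out = pid_update(t_setpoint - t_measured);
        if (pid_out < 0.0)
            pid_out = 0.0;                   /* a heater can't cool */
        dac_write(sqrt(pid_out * R_HEATER)); /* P = V^2/R  =>  V = sqrt(P*R) */
    }

so the controller sees a plant that's linear from its output (power) to
temperature.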
Sphero's
suggestion for a "black box" constant power controller would work, but
who engineers it?


That sort of seemed like what I've been doing by taking the square root.

Yes and no. Most things change their bulk resistivity with temperature,
so the power dissipation relationship is more complex than just V^2. If
you want to _exactly_ measure the power input you should take this into
account either by monitoring the current and using V x I or with a magic
black box.
Yes. The problem is twofold: First, you're essentially nulling out the
pole so if your zero is off frequency by much then the cancellation
doesn't happen. Second, you're not affecting the behavior of the
subsystem with the pole, so it can still misbehave. The extreme case is
when you have an unstable pole and a non-minimum-phase zero. You end up
with an analytical transfer function that's stable, but the actual
system has a hidden state that will bang up against some limit or
another, and the system will stop working.
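A concrete version of the extreme case: plant G(s) = 1/(s - 1),
compensator C(s) = (s - 1)/(s + 10). On paper C*G = 1/(s + 10), perfectly
stable. But the plant state still obeys xdot = x + u, and any disturbance
or initial condition still excites the e^t mode -- the zero has only made
it invisible in the transfer function, not removed it.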


I just read in Leigh about moving poles. You need a transfer function
that puts zeroes on the old poles, and has a set of new poles. And he
mentioned some of the same warnings you have.

Maybe you'd need a control system to adjust the locations of the zeroes.

Now that I think of it, a lead-lag compensator would do exactly that, if
you choose the constants right.

But I don't think I have any reason to be tempted to try it.


It's generally the nature of a feedback system to move the poles around
-- that's why a feedback system with a PID doesn't usually have a pole
at s = 0 (or z = 1, take your choice [I'm being obscure again --
sorry]). So I'm not sure why he's advocating masking the existing poles
with zeros.


I don't think he was advocating it. He just mentioned it, and listed some
problems associated with it.
That's a relief.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
Gregory L. Hansen wrote:

In article <10ba2p5ib99p6c6@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:


-- snip --


And it matters whether the low-pass filter is on the controller or on the
system measurement, doesn't it? G/(1+GH), as the story goes, and if the
low-pass is on the lock-in that feeds me my error signal, it's in H and
not G.

Exactly. Putting the low pass outside the loop would be a very good idea.

A resistor+capacitor low-pass filter is a lot like an integrator, except
it's not an ideal integrator. If it were an ideal integrator it would be
a/s, and not a/(s+b).

I wanted to say a few things about what the pole at the origin means, but
I suppose I have to think of controller and plant, not just controller.


As I pointed out elsewhere, a lead-lag is generally a first-order
pole-zero pair, so it's a bit odd to call a 2nd order transfer function
"lead-lag".


Eh? I thought a lead compensator was first-order, a lag compensator was
first order, and a lead-lag compensator was second order.

For lead or lag,

D(s) = K (s+a)/(s+b)

For lead-lag,

D(s) = K (s+a1)(s+a2) / (s+b1)(s+b2)

Nope. Your D(s) = K(s+a)/(s+b) is a "lead/lag", with the zero (at s =
-a) providing the "lead" and the pole (at s = -b) providing the "lag".
Below a and above b (assuming that a < b) there isn't any phase shift,
hence no lead or lag.

Now, your book may be different. Every author gets to invent their own
terminology, but the prevalent terminology that I see is for the
1st-order filter.
In frequency-domain terms having the pole at s = 0 means that the error
term has a zero at s = 0, which means that your system has zero error at
DC. In nice, familiar continuous-time terms having an integrator in
your controller means that when it's presented with a non-zero error
your controller will just push harder and harder and harder until that
damn error goes _away_, fer cryin out loud!

Using a low-pass filter instead of an integrator is like hiring your
lazy brother-in-law*. He'll push on it up to a certain extent, but he
knows that you'll never fire him, so you only get so much effort out of him.

* I'm in an odd mood today...


Well, the lazy brother-in-law showed me the value of an s=0 there, and I'm
convinced.
Well I'm glad that lazy SOB was good for something.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c92b3q$iju$1@hood.uits.indiana.edu...
In article <u7Vsc.10214$XI4.368596@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...
In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:

You can think about all of them as filters, or as controllers. On the
surface this is confusing, but it means that you can choose how to think
about it in the way that makes the problem at hand easiest to solve.

When I first saw them, I couldn't help thinking of them as something you'd
put before or after a PID loop to condition the signal as it goes through.
Maybe my main hang-up was that the sample circuit consisted entirely of
passive components. How can you collect two resistors and two capacitors
and call it a controller?

by having the error-amp (setpoint - feedback) elsewhere, and not requiring
a controller gain of more than 1. In practice gains of > 1 are often
required, and an opamp provides the necessary gain as well as subtracting
the feedback from the setpoint.

That makes sense. I suppose normally the control output from the
circuit wouldn't be dumped directly into a heater or something, but
rather control a power supply, or input to a valve moving mechanism...
it's a signal for something else that does the actual work. A voltage
divider is a lot like a gain < 1, a low-pass filter is a lot like an
integrator, a high-pass filter is a lot like a differentiator. But it
sure seems like a lot of theory for one or two resistors and one or two
capacitors.
Tim's comment is on the money - if it (greatly) affects system dynamics,
then don't ignore it. All the rest is just verbiage.


In PLL design they call them loop filters.

Ultimately they are passive/active networks that shape frequency response.
How you choose to view them usually is preference-driven. If you like pole
placement or root-locus design methods, you will probably think about poles
& zeroes. If you want to use Ziegler-Nichols methods, you will think about
Kp, Ki and Kd. Either approach works. Me, I like the method of symmetric
optimum.

I like Ziegler-Nichols. At least I know how to use it. But it doesn't
offer much insight into a system.
OTOH if you have a factory process to control, it may not be necessary to
have much insight, but it *sure* is necessary to stabilise the loop!

I was thinking of turning my digital PID into a digital lead-lag, but the
lock-in, which provides the error signal, has a low-pass filter on the
output, so maybe that job has already been done.
...

I'm still coming to terms with things like what happens when I put a pole
here and a zero there. But I had noticed the similarities in the transfer
functions, and especially in the equivalent difference equations. As long
as I don't try to take squares or sines or anything, all I can do is add
up current and past error signals and outputs, and multiply them by
constants.
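Which I gather is the general linear difference equation; something like
this, in C (the coefficient names are mine):

    /* Generic 2nd-order digital compensator:
     *   u[k] = B0*e[k] + B1*e[k-1] + B2*e[k-2] - A1*u[k-1] - A2*u[k-2]
     * PID, lead-lag, low-pass etc. differ only in the coefficients. */
    #define B0 1.0   /* placeholder coefficients, not a tuned design */
    #define B1 0.0
    #define B2 0.0
    #define A1 0.0
    #define A2 0.0

    double controller_update(double e)
    {
        static double e1, e2, u1, u2;        /* saved past values */
        double u = B0*e + B1*e1 + B2*e2 - A1*u1 - A2*u2;
        e2 = e1; e1 = e;                     /* shift the history */
        u2 = u1; u1 = u;
        return u;
    }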

you can run a model-reference controller too, and "synthesise" future
values. I have had a lot of success with so-called Internal Model Control
for large bidirectional rectifiers - basically run an ideal model of your
plant, to compute the ideal output of your controller based on setpoint
info - ie fancy feedforward. Then have an "error" controller, which
effectively deals with anything your feedforward "model" doesn't predict. I
found that a 250kW regenerative rectifier with PI control and deadtime
compensation was not as good as IMC alone - the IMC did a great job of
compensating for deadtime errors. IMC + deadtime compensation was even
better. It's only a small leap from there to gain-scheduling the IMC model
to take into account thermal & saturation effects on R, L, C etc.

In other words, use as much information as possible; it makes your
controller's life that much easier.
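Stripped to a sketch it's just two calls and an add (plant_model() and
pi_update() here are placeholders for whatever your model and trim
controller actually are):

    /* IMC-flavoured feedforward: an ideal plant model computes the drive
     * the setpoint *should* need; a small PI trims whatever the model
     * missed. */
    double imc_update(double setpoint, double measured)
    {
        double u_ff = plant_model(setpoint);           /* ideal feedforward */
        double u_fb = pi_update(setpoint - measured);  /* "error" controller */
        return u_ff + u_fb;                            /* combined drive */
    }

The better the model, the less work the error controller has to do.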

I'm not sure if I have no idea what you're talking about, but 250 kW seems
like some serious rectifying.
yeah, a little bit.

I'm trying to keep a temperature in a radiometer constant, with periodic
changes in an input power that are known pretty well. A simple offset
pretty much takes care of my servo needs. But it's part of a high
precision measurement, and my sensed power differs from my calibration
power. I went over the code and discovered the controlling variables are
floats, which are a little short on precision. I hope changing them to
doubles will just fix the problem. But I blew up part of the apparatus
recently, so I haven't been able to try it. Maybe next week. In the
mean-time, I'm studying control theory. Bucek is a magical textbook
that makes a difficult subject seem almost comprehensible.

A twist is that my signal isn't the temperature, it's the power required
to keep the temperature constant. And it's harder to measure when it
fluctuates a lot. It doesn't even matter if it fluctuates a lot as long
as it averages to zero and I stay in the linear regime. But overall I
think my control needs must be pretty mild.
When you start slapping filters on feedback signals (quite feasible) is
often when you discover they can affect closed-loop dynamics too.

Likewise, condition your inputs - NO controller can respond to a step, so
there ain't much point in trying. Ramp-limiting, saturating integrators,
anti-windup control, setpoint shaping filters etc. are all ways of doing
this.

My error signal is proportional to the deviation of the temperature from
the set-point, but the change in temperature is proportional to power,
V^2/R. We take the square root of the PID calculation to linearize the
output, sqrt(vout)^2/R. I was thinking of squaring the input before I run
through my PID routine, so the calculation itself is in terms of power
rather than temperature, but I don't know whether that's clever or a
mistake.
I have seen a number of regen rectifier control papers where they do exactly
that - instead of the error amp measuring (Vdc* - Vdc) (*=setpoint) they
measure (Vdc*^2 - Vdc^2) to actually "linearise" the loop - more correctly
they are taking the non-linearity outside of the controller - and it works
because they really want to control power (hey, just like you :)


Maybe doubles will just fix the problem and I won't have to worry about
things like that.
IME floats don't help much - like Tim I do 32-bit fixed point arithmetic.
Floating point suffers from a problem when adding big things to small
things - the mantissa (ie fractional part) determines at what point the
small thing disappears (think about how you re-normalise to add 2 floating
point numbers).
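Easy to demonstrate - a single-precision float only carries a 24-bit
mantissa, about 7 decimal digits:

    float big = 16777216.0f;      /* 2^24, the edge of the mantissa */
    float sum = big + 1.0f;       /* the 1.0f can't be represented... */
    /* ...so sum == big: the small term vanished without a trace */

which is exactly what happens to a tiny correction added onto a large
integral term.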

A very good trick is to normalise your inputs as soon as you digitise
them - say you measure T in Kelvin*10, and (nominal) setpoint is 373K =
3730. I would choose 4096 = 100%, then divide the measured T by 4096 to get
a "per-unitised" number (and multiply by say 8192 so I can have +/- 4PU fit
into one 16-bit number; scale to suit 32 bits). T* = 3730/4096 = 0.9106
(close to 1). All your maths is then (assuming you picked a sensible
normalising factor) dealing with numbers that are approximately 1, so is
numerically well conditioned. When you have calculated your normalised
output, de-normalise it by multiplying it by the 100% output value.
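In C the whole trick is one multiply and divide per sample (using the
numbers above - 4096 counts = 100%, 8192 = 1.0 PU; names are mine):

    #define FULL_SCALE 4096L      /* ADC counts at 100% */
    #define ONE_PU     8192L      /* fixed-point value representing 1.0 PU */

    /* per-unitise a raw ADC reading; 3730 counts -> 7460 = 0.9106 PU */
    short normalise(short adc_counts)
    {
        return (short)((adc_counts * ONE_PU) / FULL_SCALE);
    }

De-normalising on the way out is the same operation in reverse.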

If you build multiple products (say motor controllers from 200W -
1,000,000W) they can use the exact same control loop - all that changes are
the input (and output) normalisations (and of course the sensor scaling
factors etc).



BTW, be careful of pole-zero cancellations. They may look nice
analytically, but such cancellations have a nasty habit of not working in
the real world, whereupon your system dynamics can change significantly. A
good example would be trying to cancel anything near the unit circle, and
having quantisation "noise" preventing it from actually cancelling. Or
perhaps the L's and C's have different values....

Pole-zero cancellations? That's like

(s+a)(s+b) / (s+a)(s+c)

The circle and the cross on the same point?
yes, in theory. In practice they often are not, leading to (as Tim pointed
out) "hidden dynamics" ie ones that bite you in the arse at the least
convenient time :)

Cheers
Terry
 
In article <10ba668rc5bqo9c@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:
In article <10ba10b6g9onqe5@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

So I wasn't sure whether I should PID my temperature error, or PID my
power error.


Obscurity again? It's not my day...

Since the plant is linear going from power in to temperature out, you
are linearizing the right way (taking the square root).


Spelling it out to avoid an obscurity moment, I should keep using
temperature error in the PID routine, and take the square root, rather
than squaring the temperature error for the PID routine and taking the
square root of the result.

Yes, exactly. The only drawback is that taking a square root is quite
processor intensive -- but if you're already doing it then it must be OK.
Well, that makes things a little easier on me, in that there's nothing to
change.


Sphero's
suggestion for a "black box" constant power controller would work, but
who engineers it?


That sort of seemed like what I've been doing by taking the square root.

Yes and no. Most things change their bulk resistivity with temperature,
so the power dissipation relationship is more complex than just V^2. If
you want to _exactly_ measure the power input you should take this into
account either by monitoring the current and using V x I or with a magic
black box.
I think I qualify there. I operate at about 1.8 kelvin, my temperature
varies by about a microkelvin, and material properties change no more than
about a part in 10^5. I control the target with a few hundred nanowatts
into a carbon resistor, whose resistance seems to vary very little from
room temperature down to damn-cold. We do actually measure V*I by
passing the current through a precision resistor at room temperature and
measuring the voltage across that, but V_heater*V_shunt is a lot like
V_heater^2 to a scaling factor. The output to the heater is still a
voltage.

And I tried cooling down today and discovered that I can't hold my
helium. The vacuum looks good, couldn't find any leaks, I don't know
what the problem is, but I boiled off a full load of liquid helium in an
hour. Sigh. Everything has to be hard.
--
"Things should be made as simple as possible -- but no simpler."
-- Albert Einstein
 
In article <c933u5$4t7$1@news.epidc.co.kr>,
Jan Panteltje <pNaonStpealmtje@yahoo.com> wrote:
On a sunny day (Wed, 26 May 2004 13:44:08 -0700) it happened Tim Wescott
<tim@wescottnospamdesign.com> wrote in <10ba0c9cl4v8e04@corp.supernews.com>:

You sound like you're beyond it already, but see my "PID Without a PhD"
and the other articles on my website:
http://www.wescottdesign.com/articles/pidwophd.html,
http://www.wescottdesign.com/articles/zTransform/z-transforms.html.

Embedded Systems Programming magazine owns the copyright on the PID
article, so I'm not supposed to distribute it :(. My website just sends
you over to theirs to read it.

They will, however, sell me reprints -- $3000 for 500 of 'em.

Tim, I have just read your article, in one long read,
and I want to thank you for this nice clearly written well documented piece
of work.
It is the best PID intro I have ever read.
Regards
JP
I'll second that. It's very nice to see the contribution of each term
explained so thoroughly.


--
"Experiments are the only means of knowledge at our disposal. The rest is
poetry, imagination." -- Max Planck
 
In article <Tp9tc.10577$XI4.382936@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c92b3q$iju$1@hood.uits.indiana.edu...
In article <u7Vsc.10214$XI4.368596@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...


I'm trying to keep a temperature in a radiometer constant, with periodic
changes in an input power that are known pretty well. A simple offset
pretty much takes care of my servo needs. But it's part of a high
precision measurement, and my sensed power differs from my calibration
power. I went over the code and discovered the controlling variables are
floats, which are a little short on precision. I hope changing them to
doubles will just fix the problem. But I blew up part of the apparatus
recently, so I haven't been able to try it. Maybe next week. In the
mean-time, I'm studying control theory. Bucek is a magical textbook
that makes a difficult subject seem almost comprehensible.

A twist is that my signal isn't the temperature, it's the power required
to keep the temperature constant. And it's harder to measure when it
fluctuates a lot. It doesn't even matter if it fluctuates a lot as long
as it averages to zero and I stay in the linear regime. But overall I
think my control needs must be pretty mild.

When you start slapping filters on feedback signals (quite feasible) is
often when you discover they can affect closed-loop dynamics too.
Our usual mode has been to Z-N it to get a starting point, and then vary
parameters. But past a certain point it can take a day to get the error
bar down enough to know whether a certain set of parameters was an
improvement. I was hoping some theory could shorten that a little.

Likewise, condition your inputs - NO controller can respond to a step, so
there ain't much point in trying. Ramp-limiting, saturating integrators,
anti-windup control, setpoint shaping filters etc. are all ways of doing
this.

My error signal is proportional to the deviation of the temperature from
the set-point, but the change in temperature is proportional to power,
V^2/R. We take the square root of the PID calculation to linearize the
output, sqrt(vout)^2/R. I was thinking of squaring the input before I run
through my PID routine, so the calculation itself is in terms of power
rather than temperature, but I don't know whether that's clever or a
mistake.


I have seen a number of regen rectifier control papers where they do exactly
that - instead of the error amp measuring (Vdc* - Vdc) (*=setpoint) they
measure (Vdc*^2 - Vdc^2) to actually "linearise" the loop - more correctly
they are taking the non-linearity outside of the controller - and it works
because they really want to control power (hey, just like you :)
Uh, oh. I've gotten some conflicting advice on that point.

I suppose the only way to be sure is try it and find out. If only I could
hold my helium... Hardware problems to overcome before software becomes
an issue again.

Maybe doubles will just fix the problem and I won't have to worry about
things like that.


IME floats don't help much - like Tim I do 32-bit fixed point arithmetic.
Floating point suffers from a problem when adding big things to small
things - the mantissa (ie fractional part) determines at what point the
small thing disappears (think about how you re-normalise to add 2 floating
point numbers)
I've figured that out. And when the hardware works again, it might turn
out that's all I've needed to do for as long as I've been having trouble
with this thing.

I've been doing calibration runs where a known amount of power is put in
and I measure it. And results vary widely, but pretty typical is that the
measured and input power differ by something like 2% +- 0.5%.

But when I look at the smallest error term that can be added to the
integral term that keeps me at my running voltage, the coarse resolution
is good for about that order of error. Doubles will add many decades to
the precision, and I'm hoping that's all it will take to fix it.

A very good trick is to normalise your inputs as soon as you digitise them -
say you measure T in Kelvin*10, and (nominal) setpoint is 373K = 3730. I
would choose 4096 = 100% then divide the measured T by 4096 to get a
"per-unitised" number (and multiply by say 8192 so I can have +/- 4PU fit
into one 16-bit number. scale to suit 32 bits). T* = 3730/4096 = 0.9106
(close to 1). All your maths is then (assuming you picked a sensible
normalising factor) dealing with numbers that are approximately 1, so is
numerically well conditioned. When you have calculated your normalised
output, de-normalise it by multiplying it by the 100% output value.
Hmm... I'm just using straight voltages as they were read from the
lock-in. I'm also hacking on legacy code, and a little change can ripple
across a lot of files.

Maybe next time.

If you build multiple products (say motor controllers from 200W -
1,000,000W) they can use the exact same control loop - all that changes are
the input (and output) normalisations (and of course the sensor scaling
factors etc).




BTW, be careful of pole-zero cancellations. They may look nice
analytically,
but such cancellations have a nasty habit of not working in the real
world,
whereupon your system dynamics can change significantly. A good example
would be trying to cancel anything near the unit circle, and having
quantisation "noise" preventing it from actually cancelling. Or perhaps
the
L's and C's have different values....

Pole-zero cancellations? That's like

(s+a)(s+b) / (s+a)(s+c)

The circle and the cross on the same point?


yes, in theory. In practice they often are not, leading to (as tim pointed
out) "hidden dynamics" ie ones that bite you in the arse at the least
convenient time :)
Eh. Lately I've been bitten in the arse more or less constantly. Some
times are less convenient than other times, I just notice it more when
it's less convenient.

--
"Usenet is like a herd of performing elephants with diarrhea -- massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it. "
-- Gene Spafford, 1992
 
In article <10ba6e8nfvsmo64@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:

In article <10ba2p5ib99p6c6@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:


And it matters whether the low-pass filter is on the controller or on the
system measurement, doesn't it? G/(1+GH), as the story goes, and if the
low-pass is on the lock-in that feeds me my error signal, it's in H and
not G.

Exactly. Putting the low pass outside the loop would be a very good idea.
Well, swell. It would have to be a digital filter, then, because from
lock-in to heater output, it's all digitized.

I did see a suggestion to smooth out the derivative signal by an averaging
over many measurements, like

D * (e[k] + e[k-1] - e[k-2] - e[k-3]) / (2^2 dT)

Or even more terms. And I've also been told that's the oldest mistake in
the book, take the derivative as it comes so the system can respond
quickly, and filter the output.

I'm not sure I understand how one is really faster than the other.

We actually have a high-frequency cut that takes a weighted average of the
current and previous proportional plus derivative terms. But not the
integral term. I don't know why the integral term was left out of that,
and the guy that wrote the software is no longer here.

I was thinking of rolling in a few extra terms to smooth the output a
little better, like

(u[k] + (HF/dt)(u[k-1] + 1/2*u[k-2] + 1/4*u[k-3])) /

(1 + (HF/dt)*(1 + 1/2 + 1/4))

or some other weighting scheme that gives less importance to terms
farther in the past. Currently we just go to the u[k-1] term. Something
to try if doubles don't fix the problem, I suppose.

Series blocks commute, don't they? So it doesn't matter whether I filter
the input or the output; the effect will be the same?

-- snip --

Nope. Your D(s) = K(s+a)/(s+b) is a "lead/lag", with the zero (at s =
-a) providing the "lead" and the pole (at s = -b) providing the "lag".
Below a and above b (assuming that a < b) there isn't any phase shift,
hence no lead or lag.

Now, your book may be different. Every author gets to invent their own
terminology, but the prevalent terminology that I see is for the
1st-order filter.
And when another author uses different terminology, now I'll know. But
the circuits that Bucek presented for the lead and the lag were a low-pass
filter and a high-pass filter (in one order or the other) with an extra
resistor in series with the capacitor in each case, while his
implementation of the lead-lag had two sets of resistor and capacitor in
parallel. Different circuit, more components, so the quadratic form
seemed reasonable to me.
--
"The average person, during a single day, deposits in his or her underwear
an amount of fecal bacteria equal to the weight of a quarter of a peanut."
-- Dr. Robert Buckman, Human Wildlife, p119.
 
Gregory L. Hansen wrote:

In article <10ba6e8nfvsmo64@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:


In article <10ba2p5ib99p6c6@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:


Gregory L. Hansen wrote:


And it matters whether the low-pass filter is on the controller or on the
system measurement, doesn't it? G/(1+GH), as the story goes, and if the
low-pass is on the lock-in that feeds me my error signal, it's in H and
not G.


Exactly. Putting the low pass outside the loop would be a very good idea.


Well, swell. It would have to be a digital filter, then, because from
lock-in to heater output, it's all digitized.
Digital filters are good -- they don't drift all over with every effect
known (or unknown) to man.

I did see a suggestion to smooth out the derivative signal by an averaging
over many measurements, like

D * (e[k] + e[k-1] - e[k-2] - e[k-3]) / (2^2 dT)

Or even more terms. And I've also been told that's the oldest mistake in
the book, take the derivative as it comes so the system can respond
quickly, and filter the output.

I'm not sure I understand how one is really faster than the other.

We actually have a high-frequency cut that takes a weighted average of the
current and previous proportional plus derivative terms. But not the
integral term. I don't know why the integral term was left out of that,
and the guy that wrote the software is no longer here.

Derivatives amplify high frequencies, noise tends to be white and plants
tend to be squirrelly at high frequencies. So it's a good idea to
frequency limit your derivative. FIR filters are generally
disrecommended inside a control loop because you pay a lot of extra
phase lag for the amplitude changes you get. I would use a 1st-order
lowpass.

He left the integral out because (a) if you're filtering the derivative
the rest doesn't matter and (b) the integrator is more or less an
uber-lowpass (unter-lowpass?). Filtering the integrator output would be
pointless as well as strange.

I was thinking of rolling in a few extra terms to smooth the output a
little better, like

(u[k] + (HF/dt)(u[k-1] + 1/2*u[k-2] + 1/4*u[k-3])) /

(1 + (HF/dt)*(1 + 1/2 + 1/4))

or some other weighting scheme that gives less importance to terms
farther in the past. Currently we just go to the u[k-1] term. Something
to try if doubles don't fix the problem, I suppose.

More low-pass may not be a bad idea if you make the rolloff
significantly higher than your closed-loop bandwidth. Really, if you're
getting the right average value to the plant you may just want to
seriously low-pass the drive _outside_ of the control loop. You can do
this simply by taking a nice long successive average without worrying
about the fact that it's FIR.

Series blocks commute, don't they? So it doesn't matter whether I filter
the input or the output; the effect will be the same?
Er, yes and no. As long as your whole system is staying linear
(including any truncation noise from the digital computations) then they
commute. If not, then no.
-- snip --


Nope. Your D(s) = K(s+a)/(s+b) is a "lead/lag", with the zero (at s =
-a) providing the "lead" and the pole (at s = -b) providing the "lag".
Below a and above b (assuming that a < b) there isn't any phase shift,
hence no lead or lag.

Now, your book may be different. Every author gets to invent their own
terminology, but the prevalent terminology that I see is for the
1st-order filter.


And when another author uses different terminology, now I'll know. But
the circuits that Bucek presented for the lead and the lag were a low-pass
filter and a high-pass filter (in one order or the other) with an extra
resistor in series with the capacitor in each case, while his
implementation of the lead-lag had two sets of resistor and capacitor in
parallel. Different circuit, more components, so the quadratic form
seemed reasonable to me.
There's so many damned corner cases when you're writing material that
it's hard to (a) say something in a new and better way (b) stay
consistent with prevailing terminology and (c) not confuse the hell out
of your poor reader.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c93hc8$vrm$4@hood.uits.indiana.edu...
In article <Tp9tc.10577$XI4.382936@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c92b3q$iju$1@hood.uits.indiana.edu...
In article <u7Vsc.10214$XI4.368596@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...


-- snip --

When you start slapping filters on feedback signals (quite feasible) is
often when you discover they can affect closed-loop dynamics too.

Our usual mode has been to Z-N it to get a starting point, and then vary
parameters. But past a certain point it can take a day to get the error
bar down enough to know whether a certain set of parameters was an
improvement. I was hoping some theory could shorten that a little.
If you have a reasonable model for your system (being thermal it should be
fairly easy) then you should be able to analytically "solve" for your
input-to-output transfer function, and then directly calculate Kp, Ki and
Kd.
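For instance, if one thermal time constant dominates, G(s) = K/(tau*s + 1),
then with C(s) = Kp + Ki/s the closed-loop denominator is
tau*s^2 + (1 + K*Kp)*s + K*Ki. Match it against a target
s^2 + 2*zeta*w*s + w^2 (after dividing through by tau) and you can read the
gains straight off: Kp = (2*zeta*w*tau - 1)/K and Ki = w^2*tau/K. (The
first-order model is an assumption, of course - a real cryostat will have
more poles than that.)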


-- snip --

Uh, oh. I've gotten some conflicting advice on that point.

I suppose the only way to be sure is try it and find out. If only I could
hold my helium... Hardware problems to overcome before software becomes
an issue again.
I like to think of a closed loop thusly:

Your loop gain is basically what "headroom" the controller has to regulate
out disturbances. In many cases loop gain is so high that the controller
will happily treat the non-existent sqrt() as a "disturbance" and regulate
it out, ie force the output to be whatever is required to zero the error.

If you don't have much loop gain (or the "disturbance" is really large, or
your regulation requirements are really strict) then such a crude loop may
not suffice. Hence my previous comment about giving your controller as much
information as possible (eg via feedforward, or in your case doing the
sqrt() explicitly) - it "frees up" a whole bunch of loop gain, which can be
used to regulate out "real" disturbances.

When I was making my 250kW regenerative rectifier go, I ran it as an
inverter, and discovered my load regulation was ratshit - I wanted 0.1%, but
was getting more like 5%. And the dynamics sucked. Closer inspection showed
that my sin(theta) routine was crap, and overflowed for 10 degrees (and
elsewhere underflowed for about the same amount). Yet the damned thing still
worked - I just used a LOT of loop gain dealing with my DIY disturbance.
Upon discovering this I then went through and rigorously verified every
maths function I had written - I excited each function with every possible
input, logged the outputs (from the real control board) then loaded them
into Matlab, calculated ideal responses and plotted error vs input. I found
a couple of gross bugs (sin(x) was the worst; I didn't need to plot the
error - sin(x) all by itself was non-sinusoidal :). When I fixed those bugs,
my regulation miraculously improved, as did the dynamics.


-- snip --

Hmm... I'm just using straight voltages as they were read from the
lock-in. I'm also hacking on legacy code, and a little change can ripple
across a lot of files.

Maybe next time.
Once you try it you will be hooked. I have seen a few people get majorly
stuck with dynamic range issues, especially when they try and represent
things as volts, amps, etc. PU is what power systems engineers use too - a
1PU (100%) choke has 1PU (100%) current flowing thru it when 1PU (100%)
volts are applied across it. So a 5% transformer has 5% leakage inductance.
Short circuit current will therefore be (100%/5%) * 100% = 2000%, ie 20x
rated current.

-- snip --

--
"Usenet is like a herd of performing elephants with diarrhea -- massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it. "
-- Gene Spafford, 1992
"managing engineers is like herding cats with egos" - Grant Elliott, 2002

Cheers
Terry
 
In article <10bav0gcmg2vb61@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:

You're being extraordinarily helpful.


In article <10ba6e8nfvsmo64@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:


-- snip --


Well, swell. It would have to be a digital filter, then, because from
lock-in to heater output, it's all digitized.


Digital filters are good -- they don't drift all over with every effect
known (or unknown) to man.
I always worry about sampling time. And about using a two-term filter to
simulate something involving a capacitor whose charge reflects the whole
history of operation.

But digital filtering obviously does a good job. My friend is a trucker,
and when I talk to him on his cell phone I don't even know whether he's
driving, his phone cuts out all the road noises. And that with a receiver
that comes down to his cheek. I assume that's no coincidence, and that
the filtering is digital.

Given the number of yahoos I see pulling out of a parking lot with a phone
to their ear, it seems somehow wrong to put that much effort into
removing the road noises. They could have finished the call before
pulling out, but I think they dial when they start driving. But the
phones do a really good job.

I did see a suggestion to smooth out the derivative signal by an averaging
over many measurements, like

D * (e[k] + e[k-1] - e[k-2] - e[k-3]) / (2^2 dT)

Or even more terms. And I've also been told that's the oldest mistake in
the book, take the derivative as it comes so the system can respond
quickly, and filter the output.

I'm not sure I understand how one is really faster than the other.

We actually have a high-frequency cut that takes a weighted average of the
current and previous proportional plus derivative terms. But not the
integral term. I don't know why the integral term was left out of that,
and the guy that wrote the software is no longer here.

Derivatives amplify high frequencies, noise tends to be white and plants
tend to be squirrelly at high frequencies. So it's a good idea to
frequency limit your derivative. FIR filters are generally
disrecommended inside a control loop because you pay a lot of extra
phase lag for the amplitude changes you get. I would use a 1st-order
lowpass.
What is FIR and 1st-order lowpass? 1st-order low-pass is 1/(1+Ts), or,
uh... (1-exp(-dt/T))/(z-exp(-dt/T))?

He left the integral out because (a) if you're filtering the derivative
the rest doesn't matter and (b) the integrator is more or less an
uber-lowpass (unter-lowpass?). Filtering the integrator output would be
pointless as well as strange.
Well, I knew the integrator is a filter. But I had thought instead of
doing any filtering inside the PID logic, just use the canonical algorithm
to produce an output, and filter the output.

The filtering currently isn't just on the derivative, but the derivative
plus proportional term. An extra variable is used to keep track of that
sub-sum.

I was thinking of rolling in a few extra terms to smooth the output a
little better, like

(u[k] + (HF/dt)(u[k-1] + 1/2*u[k-2] + 1/4*u[k-3])) /

(1 + (HF/dt)*(1 + 1/2 + 1/4))
This is not a first-order low-pass?

or some other weighting scheme that gives less importance to terms
farther in the past. Currently we just go to the u[k-1] term. Something
to try if doubles don't fix the problem, I suppose.

More low-pass may not be a bad idea if you make the rolloff
significantly higher than your closed-loop bandwidth. Really, if you're
getting the right average value to the plant you may just want to
seriously low-pass the drive _outside_ of the control loop. You can do
this simply by taking a nice long successive average without worrying
about the fact that it's FIR.
Hmm... there's that FIR again. But it seems like what I had in mind with
filtering the integral term. Not that the integral term itself would be
filtered, it's just part of the output.

A long successive average like, for output from PID routine u[k] (all the
PID work is done and I'm ready to send a number to the DAC),

v_real_output = (u[k] + u[k-1] + u[k-2] + u[k-3] + u[k-4])/5

with v_real_output not saved, but the next v_real_output calculated from
u[k+1]+u[k]+...?

Series blocks commute, don't they? So it doesn't matter whether I filter
the input or the output; the effect will be the same?


Er, yes and no. As long as your whole system is staying linear
(including any truncation noise from the digital computations) then they
commute. If not, then no.
Truncation noise should be brought under control. Does sampling time
figure in there?

-- snip --


And when another author uses different terminology, now I'll know. But
the circuits that Bucek presented for the lead and the lag were a low-pass
filter and a high-pass filter (in one order or the other) with an extra
resistor in series with the capacitor in each case, while his
implementation of the lead-lag had two sets of resistor and capacitor in
parallel. Different circuit, more components, so the quadratic form
seemed reasonable to me.

There's so many damned corner cases when you're writing material that
it's hard to (a) say something in a new and better way (b) stay
consistent with prevailing terminology and (c) not confuse the hell out
of your poor reader.
I checked Bateson, and he has integration, lead, lag, lead-lag, lag-lead,
which are all first-order. His lag is a low-pass.

--
"Usenet is like a herd of performing elephants with diarrhea -- massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it. "
-- Gene Spafford, 1992
 
Gregory L. Hansen wrote:
In article <10bav0gcmg2vb61@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:



You're being extraordinarily helpful.


Your problems are too damned interesting!

-- snip --
Derivatives amplify high frequencies, noise tends to be white and plants
tend to be squirrelly at high frequencies. So it's a good idea to
frequency limit your derivative. FIR filters are generally
disrecommended inside a control loop because you pay a lot of extra
phase lag for the amplitude changes you get. I would use a 1st-order
lowpass.


What is FIR and 1st-order lowpass? 1st-order low-pass is 1/(1+Ts), or,
uh... (1-exp(-dt/T))/(z-exp(-dt/T))?
FIR is "Finite Impulse Response"; a successive averaging filter is a
very simple FIR filter. 1st-order lowpass is something like (1-d)/(z-d)
(and IIR, or "Infinite Impulse Response").
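In code that's one line per sample; a sketch (D_COEF is my name for your
exp(-dt/T), not anything standard):

    #define D_COEF 0.9   /* = exp(-dt/T); placeholder value */

    /* One-pole IIR low-pass, the discrete twin of 1/(1+Ts).
     * This form is (1-d)z/(z-d) -- same filter as (1-d)/(z-d), just
     * without the extra sample of delay. */
    double lowpass(double x)
    {
        static double y = 0.0;
        y = D_COEF * y + (1.0 - D_COEF) * x;
        return y;
    }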

He left the integral out because (a) if you're filtering the derivative
the rest doesn't matter and (b) the integrator is more or less an
uber-lowpass (unter-lowpass?). Filtering the integrator output would be
pointless as well as strange.


Well, I knew the integrator is a filter. But I had thought instead of
doing any filtering inside the PID logic, just use the canonical algorithm
to produce an output, and filter the output.

The filtering currently isn't just on the derivative, but the derivative
plus proportional term. An extra variable is used to keep track of that
sub-sum.


I was thinking of rolling in a few extra terms to smooth the output a
little better, like

(u[k] + (HF/dt)(u[k-1] + 1/2*u[k-2] + 1/4*u[k-3])) /

(1 + (HF/dt)*(1 + 1/2 + 1/4))


This is not a first-order low-pass?

Uh-uh. This'll give you a transfer function something like

z^3 + (HF/dt)(z^2 + 1/2 z + 1/4)
--------------------------------,
(1 + 7/4 HF/dt)z^3

the z^3 in the denominator makes it 3rd order. The _naked_ z^3 in the
denominator makes it FIR.
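
(If you want to check that numerically, here's a Python sketch that
evaluates H(z) around the unit circle; a = HF/dt = 1 is an arbitrary
choice:)

# H(z) = (z^3 + a*(z^2 + z/2 + 1/4)) / ((1 + 7a/4) * z^3), a = HF/dt
import cmath, math

a = 1.0
den = 1.0 + 7.0 * a / 4.0
for f in (0.0, 0.1, 0.25, 0.5):          # fraction of the sample rate
    z = cmath.exp(2j * math.pi * f)
    num = z**3 + a * (z**2 + z / 2.0 + 0.25)
    print(f, abs(num / (den * z**3)))    # unity at DC, rolls off above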

or some other weighting scheme that gives less importance to terms
farther in the past. Currently we just go to the u[k-1] term. Something
to try if doubles don't fix the problem, I suppose.


More low-pass may not be a bad idea if you make the rolloff
significantly higher than your closed-loop bandwidth. Really, if you're
getting the right average value to the plant you may just want to
seriously low-pass the drive _outside_ of the control loop. You can do
this simply by taking a nice long successive average without worrying
about the fact that it's FIR.


Hmm... there's that FIR again. But it seems like what I had in mind with
filtering the integral term. Not that the integral term itself would be
filtered, it's just part of the output.

A long successive average like, for output from PID routine u[k] (all the
PID work is done and I'm ready to send a number to the DAC),

v_real_output = (u[k] + u[k-1] + u[k-2] + u[k-3] + u[k-4])/5

with v_real_output not saved, but the next v_real_output calculated from
u[k+1]+u[k]+...?
Yes. Or v_real_output = v_real_output + (u[k] - u[k - N])/N -- it's a
good way to keep a running average if you trust your state variables not
to drift. Recomputing the full average each time is more robust.
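
(Both forms in a Python sketch -- N = 5 as above; the brute-force sum is
the robust one, the running update is cheap but its state can drift:)

# N-point moving average of the PID output, two equivalent ways.
from collections import deque

class MovingAverage:
    def __init__(self, N=5):
        self.N = N
        self.hist = deque([0.0] * N, maxlen=N)
        self.running = 0.0
    def update(self, uk):
        oldest = self.hist[0]                    # this is u[k-N]
        self.hist.append(uk)
        self.running += (uk - oldest) / self.N   # recursive form
        return sum(self.hist) / self.N, self.running  # recomputed form

avg = MovingAverage()
for k in range(8):
    print(avg.update(float(k)))   # the two columns agree to roundoff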

Series blocks commute, don't they? So it doesn't matter whether I filter
the input or the output; the effect will be the same?


Er, yes and no. As long as your whole system is staying linear
(including any truncation noise from the digital computations) then they
commute. If not, then no.


Truncation noise should be brought under control. Does sampling time
figure in there?
Only in that the blocks don't really commute across a sampler. Within
domains (continuous or sampled) they do commute.

A resistor+capacitor low-pass filter is a lot like an integrator, except
it's not an ideal integrator. If it were an ideal integrator it would be
a/s, and not a/(s+b).

I wanted to say a few things about what the pole at the origin means, but
I suppose I have to think of controller and plant, not just controller.


As I pointed out elsewhere, a lead-lag is generally a first-order
pole-zero pair, it's a bit odd to call a 2nd order transfer function
"lead-lag".


Eh? I thought a lead compensator was first-order, a lag compensator was
first order, and a lead-lag compensator was second order.

For lead or lag,

D(s) = K (s+a)/(s+b)

For lead-lag,

D(s) = K (s+a1)(s+a2) / (s+b1)(s+b2)


Nope. Your D(s) = K(s+a)/(s+b) is a "lead/lag", with the zero (at s =
-a) providing the "lead" and the pole (at s = -b) providing the "lag".
Below a and above b (assuming that a < b) there isn't any phase shift,
hence no lead or lag.

Now, your book may be different. Every author gets to invent their own
terminology, but the prevalent terminology that I see is for the
1st-order filter.


And when another author uses different terminology, now I'll know. But
the circuits that Bucek presented for the lead and the lag were a low-pass
filter and a high-pass filter (in one order or the other) with an extra
resistor in series with the capacitor in each case, while his
implementation of the lead-lag had two sets of resistor and capacitor in
parallel. Different circuit, more components, so the quadratic form
seemed reasonable to me.

There are so many damned corner cases when you're writing material that
it's hard to (a) say something in a new and better way, (b) stay
consistent with prevailing terminology, and (c) not confuse the hell out
of your poor reader.


I checked Bateson, and he has integration, lead, lag, lead-lag, and
lag-lead, which are all first-order. His lag is a low-pass.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
In article <Ryjtc.10819$XI4.389719@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c93hc8$vrm$4@hood.uits.indiana.edu...
In article <Tp9tc.10577$XI4.382936@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message


Our usual mode has been to Z-N it to get a starting point, and then vary
parameters. But past a certain point it can take a day to get the error
bar down enough to know whether a certain set of parameters was an
improvement. I was hoping some theory could shorten that a little.

If you have a reasonable model for your system (being thermal it should be
fairly easy) then you should be able to analytically "solve" for your
input-to-output transfer function, and then directly calculate Kp, Ki and
Kd.
I'm starting to work on that. One thing I worry about, though, is when I
put power on the heater, the thermometer doesn't just see a straight
1-exp(-kt) function. There's an S-shape, an inflection, at the beginning
of the change that I think must be important. I think that means I have a
2nd order process instead of a 1st order. When I get the darn equipment
working again I can try again to measure it, I understand the software
better, but last time I tried I couldn't see it, so I've been discovering
the wonders of transient heat flow calculations. Maybe it's fast enough
that it doesn't matter.
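
(The S-shape is easy to reproduce on paper: a Python sketch comparing a
one-pole step response with a two-pole cascade; the time constants here
are made up:)

# 1st-order: 1 - exp(-t/T).  2nd-order (two real poles T1, T2):
# 1 - (T1*exp(-t/T1) - T2*exp(-t/T2))/(T1 - T2) -- zero initial slope,
# hence the inflection at the start.
import math

T, T1, T2 = 1.0, 1.0, 0.3
for t in (0.05, 0.1, 0.2, 0.5, 1.0, 2.0):
    first = 1.0 - math.exp(-t / T)
    second = 1.0 - (T1 * math.exp(-t / T1) - T2 * math.exp(-t / T2)) / (T1 - T2)
    print(t, round(first, 4), round(second, 4))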

Likewise, condition your inputs - NO controller can respond to a step, so
there aint much point in trying. Ramp-limiting, saturating integrators,
anti-windup control, setpoint shaping filters etc. are all ways of doing
this.

My error signal is proportional to the deviation of the temperature from
the set-point, but the change in temperature is proportional to power,
V^2/R. We take the square root of the PID calculation to linearize the
output, sqrt(vout)^2/R. I was thinking of squaring the input before I run
through my PID routine, so the calculation itself is in terms of power
rather than temperature, but I don't know whether that's clever or a
mistake.


I have seen a number of regen rectifier control papers where they do
exactly
that - instead of the error amp measuring (Vdc* - Vdc) (*=setpoint) they
measure (Vdc*^2 - Vdc^2) to actually "linearise" the loop - more
correctly
they are taking the non-linearity outside of the controller - and it
works
because they really want to control power (hey, just like you :)

Uh, oh. I've gotten some conflicting advice on that point.

I suppose the only way to be sure is try it and find out. If only I could
hold my helium... Hardware problems to overcome before software becomes
an issue again.

I like to think of a closed loop thusly:

your loop gain is basically what "headroom" the controller has to regulate
out disturbances. In many cases loop gain is so high that the controller
will happily treat the non-existent sqrt() as a "disturbance" and regulate
it out, ie force the output to be whatever is required to zero the error.

If you dont have much loopgain (or the "disturbance" is really large, or
your regulation requirements are really strict) then such a crude loop may
not suffice. Hence my previous comment about giving your controller as much
information as possible (eg via feedforward, or in your case doing the
sqrt() explicitly) - it "frees up" a whole bunch of loop gain, which can be
used to regulate out "real" disturbances.
Well, my thought was my output is a correction to the power to deliver, so
the PID loop should be done in terms of power even if the input is a
temperature.

When I was making my 250kW regenerative rectifier go, I ran it as an
inverter, and discovered my load regulation was ratshit - I wanted 0.1%, but
was getting more like 5%. And the dynamics sucked. Closer inspection showed
that my sin(theta) routine was crap, and overflowed for 10 degrees (and
elsewhere underflowed for about the same amount). Yet the damned thing still
worked - I just used a LOT of loop gain dealing with my DIY disturbance.
Upon discovering this I then went through and rigorously verified every
maths function I had written -- I excited each function with every possible
input, logged the outputs (from the real control board) then loaded them
into matlab, calculated ideal responses and plotted error vs input. I found
a couple of gross bugs (sin(x) was the worst. I didn't need to plot the
error - sin(x) all by itself was non-sinusoidal :). When I fixed those bugs,
my regulation miraculously improved, as did the dynamics.
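
(The same brute-force check is a few lines of Python if the routine takes
a 16-bit input; my_sin16 below is just a stand-in for whatever routine the
control board actually runs:)

# Excite the function with every possible input, log the worst-case error.
import math

def my_sin16(theta16):
    # stand-in: Q15 sine of a 16-bit angle (0..65535 maps to 0..2*pi)
    return int(round(32767 * math.sin(2 * math.pi * theta16 / 65536.0)))

worst = 0
for theta in range(65536):
    ideal = 32767 * math.sin(2 * math.pi * theta / 65536.0)
    worst = max(worst, abs(my_sin16(theta) - ideal))
print("worst-case error, counts:", worst)
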
sin(x)? Then that wasn't a typical linear controller?

A very good trick is to normalise your inputs as soon as you digitise
them -
say you measure T in Kelvin*10, and (nominal) setpoint is 373K = 3730. I
would choose 4096 = 100% then divide the measured T by 4096 to get a
"per-unitised" number (and multiply by say 8192 so I can have +/- 4PU fit
into one 16-bit number. scale to suit 32 bits). T* = 3730/4096 = 0.9106
(close to 1). All your maths is then (assuming you picked a sensible
normalising factor) dealing with numbers that are approximately 1, so is
numerically well conditioned. When you have calculated your normalised
output, de-normalise it by multiplying it by the 100% output value.
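
(The arithmetic as a Python sketch -- the 4096/8192 scale factors are the
ones above; the helper names are made up:)

# Per-unitise a reading: FULL_SCALE raw counts = 1.0 PU, stored with
# PU_SCALE counts per PU so that +/-4 PU fits in a 16-bit integer.
FULL_SCALE = 4096
PU_SCALE = 8192

def to_pu(raw):
    return raw * PU_SCALE // FULL_SCALE

def from_pu(pu, out_full_scale):
    return pu * out_full_scale // PU_SCALE

t_star = to_pu(3730)              # 373.0 K measured as 3730 counts
print(t_star, t_star / PU_SCALE)  # 7460 counts = 0.9106 PU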

Hmm... I'm just using straight voltages as they were read from the
lock-in. I'm also hacking on legacy code, and a little change can ripple
across a lot of files.

Maybe next time.

Once you try it you will be hooked. I have seen a few people get majorly
stuck with dynamic range issues, especially when they try and represent
things as volts, amps, etc. PU is what power systems engineers use too - a
1PU (100%) choke has 1PU (100%) current flowing thru it when 1PU (100%)
volts are applied across it. So a 5% transformer has 5% leakage inductance.
Short circuit current will therefore be (100%/5%) * 100% = 2000%, ie 20x
rated current
Seems more true to a modularity philosophy, too. I don't actually have
anything built into the loop that depends on voltages and things, but my
control parameters are set that way. So I might have a gain of 10^6 that
relates an error signal of nanovolts to an output signal of volts. And
every time some little thing changes, the run parameters have to change.

--
"Usenet is like a herd of performing elephants with diarrhea -- massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it. "
-- Gene Spafford, 1992

"managing engineers is like herding cats with egos" - Grant Elliott, 2002
When the new wing was built on this building, the toilet paper dispensers
had springs pushing on the sides of the toilet paper that made it hard to
unroll. It took about three days for all of them to be broken off or bent
away. But at a slow neutron conference the back half of the auditorium
was taped off, to encourage people to sit near the front. The scientists,
including the same people that fixed the toilet paper dispensers, sat
wherever they wanted to, anyway. But for three days they all
conscientiously stepped over the tape, so it was still intact at the end
of the conference.

I don't know what that means.

--
"Usenet is like a herd of performing elephants with diarrhea -- massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it. "
-- Gene Spafford, 1992
 
In article <10bc22sog9kna0b@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:
In article <10bav0gcmg2vb61@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:



You're being extraordinarily helpful.


Your problems are too damned interesting!
Oh, good. I thought I must be becoming a pest. I've gotta love a guy who
loves his work.

-- snip --
Derivatives amplify high frequencies, noise tends to be white and plants
tend to be squirrelly at high frequencies. So it's a good idea to
frequency limit your derivative. FIR filters are generally
disrecommended inside a control loop because you pay a lot of extra
phase lag for the amplitude changes you get. I would use a 1st-order
lowpass.


What is FIR and 1st-order lowpass? 1st-order low-pass is 1/(1+Ts), or,
uh... (1-exp(-dt/T))/(z-exp(-dt/T))?



FIR is "Finite Impulse Response"; a successive averaging filter is a
very simple FIR filter. 1st-order lowpass is something like (1-d)/(z-d)
(and IIR, or "Infinite Impulse Response").
Finite impulse response must be what happens when, well, you introduce a
finite impulse to the system?

We have frequency response, what the system does when a continuous, pure
frequency is input. Step response, what happens when you change the
setpoint and leave it there. And finite impulse response, which is what
happens when you kick it.

I was thinking of rolling in a few extra terms to smooth the output a
little better, like

(u[k] + (HF/dt)(u[k-1] + 1/2*u[k-2] + 1/4*u[k-3])) /

(1 + (HF/dt)*(1 + 1/2 + 1/4))


This is not a first-order low-pass?

Uh-uh. This'll give you a transfer function something like

z^3 + (HF/dt)(z^2 + 1/2 z + 1/4)
--------------------------------,
(1 + 7/4 HF/dt)z^3

the z^3 in the denominator makes it 3rd order. The _naked_ z^3 in the
denominator makes it FIR.


or some other weighting scheme that gives less importance to terms
farther in the past. Currently we just go to the u[k-1] term. Something
to try if doubles don't fix the problem, I suppose.


More low-pass may not be a bad idea if you make the rolloff
significantly higher than your closed-loop bandwidth. Really, if you're
getting the right average value to the plant you may just want to
seriously low-pass the drive _outside_ of the control loop. You can do
this simply by taking a nice long successive average without worrying
about the fact that it's FIR.


Hmm... there's that FIR again. But it seems like what I had in mind with
filtering the integral term. Not that the integral term itself would be
filtered, it's just part of the output.

A long successive average like, for output from PID routine u[k] (all the
PID work is done and I'm ready to send a number to the DAC),

v_real_output = (u[k] + u[k-1] + u[k-2] + u[k-3] + u[k-4])/5

with v_real_output not saved, but the next v_real_output calculated from
u[k+1]+u[k]+...?


Yes. Or v_real_output = v_real_output + (u[k] - u[k - N])/N -- it's a
good way to keep a running average if you trust your state variables not
to drift. Recomputing the full average each time is more robust.
Now I get it. I had wanted to average the output over some length of
time, not just over two terms. And I wanted the most recent terms to be
the most important, so the system will respond faster to new disturbances.
But that's really what my high frequency cut does right now with the
proportional plus derivative terms. Because, for some constant t,

u[k] = (u[k] + t*u[k-1])/(1+t)

But

u[k-1] = (u[k-1] + t*u[k-2])/(1+t)

and

u[k-2] = (u[k-2] + t*u[k-3])/(1+t)

and so on. So when I expand it all out, the filter produces

u[k] = 1/(1+t) \sum_i t^i u[k-i] / (1+t)^i

which is exactly what I had in mind all along. And the integral isn't a
weighted average, it's just an average, which makes it ueber-filtered,
like you said. If I were doing

u[k] = (u[k] + t*u[k-1] + t/2*u[k-2])/(1+t+t/2)

and expand it out, I quickly become confused since computers don't mind
having u[k] on the right and left side of the equals sign, and I lose
track of which u[k-i]'s still need to be substituted in and which should
be left alone. But when the dust settles, the mess still has to reduce to

u[k] = a0*u[k] + a1*u[k-1] + a2*u[k-2] + ...

for some set of constants a0, a1, a2...

And if I filtered the whole thing, including the integral, I suppose the
integral is double-filtered or something. But I can write out a
difference equation, it's easy to implement one way or the other. I
guess I see no good reason to change the HF cut the way it is right now.
I'd have to find another way if I didn't want a geometric series, but I
have nothing against geometric series.
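
(A quick numerical check in Python that the recursion and the expanded
geometric series agree; t = 0.7 is arbitrary:)

# y[k] = (u[k] + t*y[k-1])/(1+t)  expands to
# y[k] = 1/(1+t) * sum_i (t/(1+t))^i * u[k-i]
import random

t = 0.7
u = [random.random() for _ in range(50)]

y = 0.0
for uk in u:                      # recursive form
    y = (uk + t * y) / (1.0 + t)

r = t / (1.0 + t)                 # expanded geometric form
y_exp = sum(r**i * u[-1 - i] for i in range(len(u))) / (1.0 + t)
print(y, y_exp)                   # agree to roundoff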

Series blocks commute, don't they? So it doesn't matter whether I filter
the input or the output; the effect will be the same?


Er, yes and no. As long as your whole system is staying linear
(including any truncation noise from the digital computations) then they
commute. If not, then no.


Truncation noise should be brought under control. Does sampling time
figure in there?


Only in that the blocks don't really commute across a sampler. Within
domains (continuous or sampled) they do commute.
It still feels better to filter the output rather than the input.
--
"The preferred method of entering a building is to use a tank main gun
round, direct fire artillery round, or TOW, Dragon, or Hellfire missile to
clear the first room." -- THE RANGER HANDBOOK U.S. Army, 1992
 
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c950rf$g4j$2@hood.uits.indiana.edu...
In article <Ryjtc.10819$XI4.389719@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c93hc8$vrm$4@hood.uits.indiana.edu...
In article <Tp9tc.10577$XI4.382936@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message


Our usual mode has been to Z-N it to get a starting point, and then vary
parameters. But past a certain point it can take a day to get the error
bar down enough to know whether a certain set of parameters was an
improvement. I was hoping some theory could shorten that a little.

If you have a reasonable model for your system (being thermal it should be
fairly easy) then you should be able to analytically "solve" for your
input-to-output transfer function, and then directly calculate Kp, Ki and
Kd.

I'm starting to work on that. One thing I worry about, though, is when I
put power on the heater, the thermometer doesn't just see a straight
1-exp(-kt) function. There's an S-shape, an inflection, at the beginning
of the change that I think must be important. I think that means I have a
2nd order process instead of a 1st order. When I get the darn equipment
working again I can try again to measure it, I understand the software
better, but last time I tried I couldn't see it, so I've been discovering
the wonders of transient heat flow calculations. Maybe it's fast enough
that it doesn't matter.
many things can successfully be modelled as 1st order. Most that can't
look like 2nd order. This is especially true with energy flow type
problems - it is usually average energy flow that is of interest.


Likewise, condition your inputs - NO controller can respond to a step, so
there aint much point in trying. Ramp-limiting, saturating integrators,
anti-windup control, setpoint shaping filters etc. are all ways of doing
this.

My error signal is proportional to the deviation of the temperature from
the set-point, but the change in temperature is proportional to power,
V^2/R. We take the square root of the PID calculation to linearize the
output, sqrt(vout)^2/R. I was thinking of squaring the input before I run
through my PID routine, so the calculation itself is in terms of power
rather than temperature, but I don't know whether that's clever or a
mistake.


I have seen a number of regen rectifier control papers where they do
exactly that - instead of the error amp measuring (Vdc* - Vdc)
(*=setpoint) they measure (Vdc*^2 - Vdc^2) to actually "linearise" the
loop - more correctly they are taking the non-linearity outside of the
controller - and it works because they really want to control power (hey,
just like you :)

Uh, oh. I've gotten some conflicting advice on that point.

I suppose the only way to be sure is try it and find out. If only I could
hold my helium... Hardware problems to overcome before software becomes an
issue again.

I like to think of a closed loop thusly:

your loop gain is basically what "headroom" the controller has to regulate
out disturbances. In many cases loop gain is so high that the controller
will happily treat the non-existent sqrt() as a "disturbance" and regulate
it out, ie force the output to be whatever is required to zero the error.

If you dont have much loopgain (or the "disturbance" is really large, or
your regulation requirements are really strict) then such a crude loop may
not suffice. Hence my previous comment about giving your controller as
much information as possible (eg via feedforward, or in your case doing
the sqrt() explicitly) - it "frees up" a whole bunch of loop gain, which
can be used to regulate out "real" disturbances.

Well, my thought was my output is a correction to the power to deliver, so
the PID loop should be done in terms of power even if the input is a
temperature.
I agree. dT = P*Rtheta*1/(1+s/Wo) is a pretty good model for most thermal
processes. The controller PIDs the temperature error to give Psetpoint.
Then (I think this is what you do) convert the post-controller Psetpoint
into a voltage output (knowing R). That's what I would do. But of course
if you have enough loop gain, your controller inputs can be anything
vaguely related to T (or P) and it will still work :)
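
(A sketch of that arrangement in Python -- PID on the temperature error,
output interpreted as power, then sqrt() to get the heater voltage. All
gains and values are placeholders:)

# PID in the power domain; P = V^2/R so V = sqrt(P*R).
import math

Kp, Ki, Kd, dt, R = 5.0, 0.5, 0.1, 0.1, 25.0
integral, prev_err = 0.0, 0.0

def heater_volts(t_setpoint, t_measured):
    global integral, prev_err
    err = t_setpoint - t_measured                # Kelvin
    integral += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    p_cmd = max(Kp * err + Ki * integral + Kd * deriv, 0.0)  # watts; heater can't cool
    return math.sqrt(p_cmd * R)                  # volts to the DAC

print(heater_volts(373.0, 370.0))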

When I was making my 250kW regenerative rectifier go, I ran it as an
inverter, and discovered my load regulation was ratshit - I wanted 0.1%,
but was getting more like 5%. And the dynamics sucked. Closer inspection
showed that my sin(theta) routine was crap, and overflowed for 10 degrees
(and elsewhere underflowed for about the same amount). Yet the damned
thing still worked - I just used a LOT of loop gain dealing with my DIY
disturbance. Upon discovering this I then went through and rigorously
verified every maths function I had written -- I excited each function
with every possible input, logged the outputs (from the real control
board) then loaded them into matlab, calculated ideal responses and
plotted error vs input. I found a couple of gross bugs (sin(x) was the
worst. I didn't need to plot the error - sin(x) all by itself was
non-sinusoidal :). When I fixed those bugs, my regulation miraculously
improved, as did the dynamics.

sin(x)? Then that wasn't a typical linear controller?
a balanced three-phase power system can be converted to an equivalent
2-phase (quadrature) system - because the angles between the three phases
are constant, the 3-phase system is over-determined. This quadrature system
is really just a cartesian representation of a vector rotating at the
angular frequency w = 2pi*50Hz (60Hz), V = Vpeak*e^(jwt). This is a real
pain to control. If I multiply it by e^(-jw1t), and w1 = w then the e^(jwt)
disappears - in other words, if my controller co-ordinate system rotates at
the same speed as the vector, the vector looks stationary. If my controller
co-ordinates are aligned to the vector, then one axis of my rotating
coordinate system (by definition) is zero. So I measure the three-phase ac
supply, convert to two-phase (just a scalar transformation as the angles are
0 and +/- 120 degrees, so numbers like 1/sqrt(3) pop up) then do a vector
rotation e^(-jtheta) (theta = w1t) to get Vd, Vq. If w1 = w then Vd = 0, so
I PI-control Vd with a setpoint of zero. The output of my PI controller is
w1, which I integrate to get theta. Regardless of what w1, theta start out
as, the PI controller rapidly forces w1 = w, and theta = wt. So even though
the system was "non-linear" (ie riddled with sin(x)) my linear controller
works beautifully. And the PI controller, in conjunction with the theta
integrator, did a great job of regulating out my ratshit sin(theta) calc
(used in the e^(-jw1t) vector rotation).

This is in fact a PLL, but its kind of hiding in the maths. If my Vd
setpoint is non-zero, I can dial up any arbitrary phase angle between me and
the national grid - this is how active power factor compensators work
nowadays.

Its kind of a fancy digital stroboscope :)
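
(Roughly, in Python -- a toy version of that PLL. The Clarke/Park algebra
is the standard one; the gains, sample time and 50Hz grid are all
arbitrary:)

# 3-phase -> 2-phase (Clarke) -> rotate by -theta (Park); PI-control the
# axis that should be zero when locked; the PI output w1 integrates to theta.
import math

kp, ki, dt = 100.0, 2000.0, 1e-4
theta, integ = 0.0, 0.0
w = 2 * math.pi * 50.0                    # actual grid frequency

for k in range(20000):
    t = k * dt
    va = math.cos(w * t)                  # balanced 3-phase set
    vb = math.cos(w * t - 2 * math.pi / 3)
    vc = math.cos(w * t + 2 * math.pi / 3)
    valpha = (2 * va - vb - vc) / 3.0     # Clarke (where the 1/sqrt(3)s pop up)
    vbeta = (vb - vc) / math.sqrt(3.0)
    verr = -valpha * math.sin(theta) + vbeta * math.cos(theta)  # = sin(w*t - theta)
    integ += ki * verr * dt
    w1 = kp * verr + integ                # PI output is a frequency
    theta = (theta + w1 * dt) % (2 * math.pi)

print(w1, w)                              # w1 should have locked onto w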

A very good trick is to normalise your inputs as soon as you digitise
them - say you measure T in Kelvin*10, and (nominal) setpoint is 373K =
3730. I would choose 4096 = 100% then divide the measured T by 4096 to get
a "per-unitised" number (and multiply by say 8192 so I can have +/- 4PU
fit into one 16-bit number. scale to suit 32 bits). T* = 3730/4096 =
0.9106 (close to 1). All your maths is then (assuming you picked a
sensible normalising factor) dealing with numbers that are approximately
1, so is numerically well conditioned. When you have calculated your
normalised output, de-normalise it by multiplying it by the 100% output
value.

Hmm... I'm just using straight voltages as they were read from the
lock-in. I'm also hacking on legacy code, and a little change can ripple
across a lot of files.

Maybe next time.

Once you try it you will be hooked. I have seen a few people get majorly
stuck with dynamic range issues, especially when they try and represent
things as volts, amps, etc. PU is what power systems engineers use too - a
1PU (100%) choke has 1PU (100%) current flowing thru it when 1PU (100%)
volts are applied across it. So a 5% transformer has 5% leakage
inductance. Short circuit current will therefore be (100%/5%) * 100% =
2000%, ie 20x rated current

Seems more true to a modularity philosophy, too. I don't actually have
anything built into the loop that depends on voltages and things, but my
control parameters are set that way. So I might have a gain of 10^6 that
relates an error signal of nanovolts to an output signal of volts. And
every time some little thing changes, the run parameters have to change.
*CLONK* - the sound of a nail being hit squarely on the head. non-normalised
systems are generally a pain in the arse for this reason :}


--
"Usenet is like a herd of performing elephants with diarrhea --
massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it. "
-- Gene Spafford, 1992

"managing engineers is like herding cats with egos" - Grant Elliott, 2002

When the new wing was built on this building, the toilet paper dispensers
had springs pushing on the sides of the toilet paper that made it hard to
unroll. It took about three days for all of them to be broken off or bent
away. But at a slow neutron conference the back half of the auditorium
was taped off, to encourage people to sit near the front. The scientists,
including the same people that fixed the toilet paper dispensers, sat
wherever they wanted to, anyway. But for three days they all
conscientiously stepped over the tape, so it was still intact at the end
of the conference.

I don't know what that means.
Pavlovian behaviour, stemming from police tape as seen on TV?

cheers
Terry
 
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c95fdp$ltu$1@hood.uits.indiana.edu...
In article <10bc22sog9kna0b@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:
In article <10bav0gcmg2vb61@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:



You're being extraordinarily helpful.


Your problems are too damned interesting!

Oh, good. I thought I must be becoming a pest. I've gotta love a guy who
loves his work.


-- snip --
Derivatives amplify high frequencies, noise tends to be white and
plants
tend to be squirrelly at high frequencies. So it's a good idea to
frequency limit your derivative. FIR filters are generally
disrecommended inside a control loop because you pay a lot of extra
phase lag for the amplitude changes you get. I would use a 1st-order
lowpass.


What is FIR and 1st-order lowpass? 1st-order low-pass is 1/(1+Ts), or,
uh... (1-exp(-dt/T))/(z-exp(-dt/T))?



FIR is "Finite Impulse Response"; a successive averaging filter is a
very simple FIR filter. 1st-order lowpass is something like (1-d)/(z-d)
(and IIR, or "Infinite Impulse Response").

Finite impulse response must be what happens when, well, you introduce a
finite impulse to the system?

We have frequency response, what the system does when a continuous, pure
frequency is input. Step response, what happens when you change the
setpoint and leave it there. And finite impulse response, which is what
happens when you kick it.

FIR: output = weighted sum of past and present INPUTS

IIR: output = weighted sum of past and present (INPUTS and OUTPUTS)

with an N-step FIR, the response to an impulse disappears after N steps -
it is FINITE

with an IIR, the response to an impulse *never* disappears, as it keeps
getting fed back from the output - it is INFINITE. Actually if the output
weighting terms are < 1, the response does eventually disappear, but only
due to quantisation.
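
(In sketch form, in Python -- a 4-tap averager against the 1st-order IIR
from earlier; d = 0.8 is arbitrary:)

# Impulse response: the FIR dies after 4 samples, the IIR only decays.
d = 0.8
fir_hist = [0.0] * 4
y = 0.0
for k in range(10):
    u = 1.0 if k == 0 else 0.0            # unit impulse
    fir_hist = [u] + fir_hist[:-1]
    fir_out = sum(fir_hist) / 4.0         # FIR: inputs only
    y = d * y + (1.0 - d) * u             # IIR: feeds back the output
    print(k, fir_out, y)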

In practice (almost) all controllers use IIR filters - they will give much
better attenuation than FIR filters (using the same amount of "taps"). FIR
used to be more of a joke than useful, as a decent rolloff with an FIR
filter needed many, many taps. Then Moore's law saved FIR from the dustbin -
transistors are cheap. who cares if we need 512 flip-flops to implement a
filter, my FPGA has 20,000 of the damn things... :)

FIR filters are usually used in audio - an FIR with symmetric taps has a
linear phase characteristic. FIR filters are also "harder" to design than
IIR filters

IIR filters are commonly used to convert analogue filters into the digital
domain (handle cranking) using backward-difference or bilinear transforms.
I posted a JPEG on a.b.s.e (title: software control of flyback converters)
a while back you should look at - it compares BD and BLT IIR first-order
filters with an RC filter, using a network analyser. Very interesting -
the BD IIR phase response was utter shit.
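
(For anyone who wants to reproduce that comparison, a Python sketch of the
two discretisations of H(s) = 1/(1 + sT), evaluated at one test frequency;
T and dt are arbitrary:)

# Backward difference: s -> (1 - 1/z)/dt.  Bilinear: s -> (2/dt)(z-1)/(z+1).
import cmath, math

T, dt = 1e-3, 1e-4
w = 2 * math.pi * 500.0                   # test frequency, rad/s
z = cmath.exp(1j * w * dt)

H_bd = dt / (dt + T - T / z)
a = 2 * T / dt
H_blt = (1 + 1 / z) / ((1 + a) + (1 - a) / z)
H_s = 1 / (1 + 1j * w * T)                # the analogue RC filter

for name, H in (("analogue", H_s), ("bilinear", H_blt), ("backward-diff", H_bd)):
    print(name, round(abs(H), 4), round(math.degrees(cmath.phase(H)), 2))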

Its also pretty straightforward to do all design strictly in the Z-domain
(eh Tim?)


I was thinking of rolling in a few extra terms to smooth the output a
little better, like

(u[k] + (HF/dt)(u[k-1] + 1/2*u[k-2] + 1/4*u[k-3])) /

(1 + (HF/dt)*(1 + 1/2 + 1/4))


This is not a first-order low-pass?

Uh-uh. This'll give you a transfer function something like

z^3 + (HF/dt)(z^2 + 1/2 z + 1/4)
--------------------------------,
(1 + 7/4 HF/dt)z^3

the z^3 in the denominator makes it 3rd order. The _naked_ z^3 in the
denominator makes it FIR.


or some other weighting scheme that gives less importance to terms
farther in the past. Currently we just go to the u[k-1] term.
Something
to try if doubles don't fix the problem, I suppose.


More low-pass may not be a bad idea if you make the rolloff
significantly higher than your closed-loop bandwidth. Really, if
you're
getting the right average value to the plant you may just want to
seriously low-pass the drive _outside_ of the control loop. You can do
this simply by taking a nice long successive average without worrying
about the fact that it's FIR.


Hmm... there's that FIR again. But it seems like what I had in mind
with
filtering the integral term. Not that the integral term itself would
be
filtered, it's just part of the output.

A long successive average like, for output from PID routine u[k] (all
the
PID work is done and I'm ready to send a number to the DAC),

v_real_output = (u[k] + u[k-1] + u[k-2] + u[k-3] + u[k-4])/5

with v_real_output not saved, but the next v_real_output calculated
from
u[k+1]+u[k]+...?


Yes. Or v_real_output = v_real_output + (u[k] - u[k - N])/N -- it's a
good way to keep a running average if you trust your state variables not
to drift. Recomputing the full average each time is more robust.

Now I get it. I had wanted to average the output over some length of
time, not just over two terms. And I wanted the most recent terms to be
the most important, so the system will respond faster to new disturbances.
But that's really what my high frequency cut does right now with the
proportional plus derivative terms. Because, for some constant t,

u[k] = (u[k] + t*u[k-1])/(1+t)

But

u[k-1] = (u[k-1] + t*u[k-2])/(1+t)

and

u[k-2] = (u[k-2] + t*u[k-3])/(1+t)

and so on. So when I expand it all out, the filter produces

u[k] = 1/(1+t) \sum_i t^i u[k-i] / (1+t)^i

which is exactly what I had in mind all along. And the integral isn't a
weighted average, it's just an average, which makes it ueber-filtered,
like you said. If I were doing

u[k] = (u[k] + t*u[k-1] + t/2*u[k-2])/(1+t+t/2)

and expand it out, I quickly become confused since computers don't mind
having u[k] on the right and left side of the equals sign, and I lose
track of which u[k-i]'s still need to be substituted in and which should
be left alone. But when the dust settles, the mess still has to reduce to

u[k] = a0*u[k] + a1*u[k-1] + a2*u[k-2] + ...

for some set of constants a0, a1, a2...

And if I filtered the whole thing, including the integral, I suppose the
integral is double-filtered or something. But I can write out a
difference equation, it's easy to implement one way or the other. I
guess I see no good reason to change the HF cut the way it is right now.
I'd have to find another way if I didn't want a geometric series, but I
have nothing against geometric series.



Series blocks commute, don't they? So it doesn't matter whether I
filter
the input or the output; the effect will be the same?


Er, yes and no. As long as your whole system is staying linear
(including any truncation noise from the digital computations) then
they
commute. If not, then no.


Truncation noise should be brought under control. Does sampling time
figure in there?


Only in that the blocks don't really commute across a sampler. Within
domains (continuous or sampled) they do commute.

It still feels better to filter the output rather than the input.
--
"The preferred method of entering a building is to use a tank main gun
round, direct fire artillery round, or TOW, Dragon, or Hellfire missile to
clear the first room." -- THE RANGER HANDBOOK U.S. Army, 1992
 
On a sunny day (Fri, 28 May 2004 11:35:30 +1200) it happened "Terry Given"
<the_domes@xtra.co.nz> wrote in <dYutc.11015$XI4.397436@news.xtra.co.nz>:

"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c950rf$g4j$2@hood.uits.indiana.edu...
In article <Ryjtc.10819$XI4.389719@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c93hc8$vrm$4@hood.uits.indiana.edu...
In article <Tp9tc.10577$XI4.382936@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message


Our usual mode has been to Z-N it to get a starting point, and then vary
parameters. But past a certain point it can take a day to get the error
bar down enough to know whether a certain set of parameters was an
improvement. I was hoping some theory could shorten that a little.

If you have a reasonable model for your system (being thermal it should be
fairly easy) then you should be able to analytically "solve" for your
input-to-output transfer function, and then directly calculate Kp, Ki and
Kd.

I'm starting to work on that. One thing I worry about, though, is when I
put power on the heater, the thermometer doesn't just see a straight
1-exp(-kt) function. There's an S-shape, an inflection, at the beginning
of the change that I think must be important. I think that means I have a
2nd order process instead of a 1st order. When I get the darn equipment
working again I can try again to measure it, I understand the software
better, but last time I tried I couldn't see it, so I've been discovering
the wonders of transient heat flow calculations. Maybe it's fast enough
that it doesn't matter.

many things can successfully be modelled as 1st order. Most that can't
look like 2nd order. This is especially true with energy flow type
problems - it is usually average energy flow that is of interest.





Likewise, condition your inputs - NO controller can respond to a step, so
there aint much point in trying. Ramp-limiting, saturating integrators,
anti-windup control, setpoint shaping filters etc. are all ways of doing
this.

My error signal is proportional to the deviation of the temperature from
the set-point, but the change in temperature is proportional to power,
V^2/R. We take the square root of the PID calculation to linearize the
output, sqrt(vout)^2/R. I was thinking of squaring the input before I run
through my PID routine, so the calculation itself is in terms of power
rather than temperature, but I don't know whether that's clever or a
mistake.


I have seen a number of regen rectifier control papers where they do
exactly that - instead of the error amp measuring (Vdc* - Vdc)
(*=setpoint) they measure (Vdc*^2 - Vdc^2) to actually "linearise" the
loop - more correctly they are taking the non-linearity outside of the
controller - and it works because they really want to control power (hey,
just like you :)

Uh, oh. I've gotten some conflicting advice on that point.

I suppose the only way to be sure is try it and find out. If only I could
hold my helium... Hardware problems to overcome before software becomes an
issue again.

I like to think of a closed loop thusly:

your loop gain is basically what "headroom" the controller has to regulate
out disturbances. In many cases loop gain is so high that the controller
will happily treat the non-existent sqrt() as a "disturbance" and regulate
it out, ie force the output to be whatever is required to zero the error.

If you dont have much loopgain (or the "disturbance" is really large, or
your regulation requirements are really strict) then such a crude loop may
not suffice. Hence my previous comment about giving your controller as
much information as possible (eg via feedforward, or in your case doing
the sqrt() explicitly) - it "frees up" a whole bunch of loop gain, which
can be used to regulate out "real" disturbances.

Well, my thought was my output is a correction to the power to deliver, so
the PID loop should be done in terms of power even if the input is a
temperature.

I agree. dT = P*Rtheta*1/(1+s/Wo) is a pretty good model for most thermal
processes. The controller PIDs the temperature error to give Psetpoint.
Then (I think this is what you do) convert the post-controller Psetpoint
into a voltage output (knowing R). That's what I would do. But of course
if you have enough loop gain, your controller inputs can be anything
vaguely related to T (or P) and it will still work :)



When I was making my 250kW regenerative rectifier go, I ran it as an
inverter, and discovered my load regulation was ratshit - I wanted 0.1%,
but was getting more like 5%. And the dynamics sucked. Closer inspection
showed that my sin(theta) routine was crap, and overflowed for 10 degrees
(and elsewhere underflowed for about the same amount). Yet the damned
thing still worked - I just used a LOT of loop gain dealing with my DIY
disturbance. Upon discovering this I then went through and rigorously
verified every maths function I had written -- I excited each function
with every possible input, logged the outputs (from the real control
board) then loaded them into matlab, calculated ideal responses and
plotted error vs input. I found a couple of gross bugs (sin(x) was the
worst. I didn't need to plot the error - sin(x) all by itself was
non-sinusoidal :). When I fixed those bugs, my regulation miraculously
improved, as did the dynamics.

sin(x)? Then that wasn't a typical linear controller?

a balanced three-phase power system can be converted to an equivalent
2-phase (quadrature) system - because the angles between the three phases
are constant, the 3-phase system is over-determined. This quadrature system
is really just a cartesian representation of a vector rotating at the
angular frequency w = 2pi*50Hz (60Hz), V = Vpeak*e^(jwt). This is a real
pain to control. If I multiply it by e^(-jw1t), and w1 = w then the e^(jwt)
disappears - in other words, if my controller co-ordinate system rotates at
the same speed as the vector, the vector looks stationary. If my controller
co-ordinates are aligned to the vector, then one axis of my rotating
coordinate system (by definition) is zero. So I measure the three-phase ac
supply, convert to two-phase (just a scalar transformation as the angles are
0 and +/- 120 degrees, so numbers like 1/sqrt(3) pop up) then do a vector
rotation e^(-jtheta) (theta = w1t) to get Vd, Vq. If w1 = w then Vd = 0, so
I PI-control Vd with a setpoint of zero. The output of my PI controller is
w1, which I integrate to get theta. Regardless of what w1, theta start out
as, the PI controller rapidly forces w1 = w, and theta = wt. So even though
the system was "non-linear" (ie riddled with sin(x)) my linear controller
works beautifully. And the PI controller, in conjunction with the theta
integrator, did a great job of regulating out my ratshit sin(theta) calc
(used in the e^(-jw1t) vector rotation).

This is in fact a PLL, but its kind of hiding in the maths. If my Vd
setpoint is non-zero, I can dial up any arbitrary phase angle between me and
the national grid - this is how active power factor compensators work
nowadays.

Its kind of a fancy digital stroboscope :)



A very good trick is to normalise your inputs as soon as you digitise
them - say you measure T in Kelvin*10, and (nominal) setpoint is 373K =
3730. I would choose 4096 = 100% then divide the measured T by 4096 to get
a "per-unitised" number (and multiply by say 8192 so I can have +/- 4PU
fit into one 16-bit number. scale to suit 32 bits). T* = 3730/4096 =
0.9106 (close to 1). All your maths is then (assuming you picked a
sensible normalising factor) dealing with numbers that are approximately
1, so is numerically well conditioned. When you have calculated your
normalised output, de-normalise it by multiplying it by the 100% output
value.

Hmm... I'm just using straight voltages as they were read from the
lock-in. I'm also hacking on legacy code, and a little change can ripple
across a lot of files.

Maybe next time.

Once you try it you will be hooked. I have seen a few people get majorly
stuck with dynamic range issues, especially when they try and represent
things as volts, amps, etc. PU is what power systems engineers use too - a
1PU (100%) choke has 1PU (100%) current flowing thru it when 1PU (100%)
volts are applied across it. So a 5% transformer has 5% leakage
inductance. Short circuit current will therefore be (100%/5%) * 100% =
2000%, ie 20x rated current

Seems more true to a modularity philosophy, too. I don't actually have
anything built into the loop that depends on voltages and things, but my
control parameters are set that way. So I might have a gain of 10^6 that
relates an error signal of nanovolts to an output signal of volts. And
every time some little thing changes, the run parameters have to change.


*CLONK* - the sound of a nail being hit squarely on the head. non-normalised
systems are generally a pain in the arse for this reason :}


--
"Usenet is like a herd of performing elephants with diarrhea --
massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it. "
-- Gene Spafford, 1992

"managing engineers is like herding cats with egos" - Grant Elliott, 2002

When the new wing was built on this building, the toilet paper dispensers
had springs pushing on the sides of the toilet paper that made it hard to
unroll. It took about three days for all of them to be broken off or bent
away. But at a slow neutron conference the back half of the auditorium
was taped off, to encourage people to sit near the front. The scientists,
including the same people that fixed the toilet paper dispensers, sat
wherever they wanted to, anyway. But for three days they all
conscientiously stepped over the tape, so it was still intact at the end
of the conference.

I don't know what that means.


Pavlovian behaviour, stemming from police tape as seen on TV?

cheers
Terry
If it was 'red tape' then yes, they all would know to be careful with
that.
JP
 
In article <dYutc.11015$XI4.397436@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c950rf$g4j$2@hood.uits.indiana.edu...
In article <Ryjtc.10819$XI4.389719@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c93hc8$vrm$4@hood.uits.indiana.edu...


I'm starting to work on that. One thing I worry about, though, is when I
put power on the heater, the thermometer doesn't just see a straight
1-exp(-kt) function. There's an S-shape, an inflection, at the beginning
of the change that I think must be important. I think that means I have a
2nd order process instead of a 1st order. When I get the darn equipment
working again I can try again to measure it, I understand the software
better, but last time I tried I couldn't see it, so I've been discovering
the wonders of transient heat flow calculations. Maybe it's fast enough
that it doesn't matter.

many things can successfully be modelled as 1st order. Most that can't
look like 2nd order. This is especially true with energy flow type
problems - it is usually average energy flow that is of interest.
If the lag from the inflection point is on the order of the sampling time,
I suppose it just doesn't matter much one way or the other.

showed that my sin(theta) routine was crap, and overflowed for 10 degrees
(and elsewhere underflowed for about the same amount). Yet the damned
thing still worked - I just used a LOT of loop gain dealing with my DIY
disturbance. Upon discovering this I then went through and rigorously
verified every maths function I had written -- I excited each function
with every possible input, logged the outputs (from the real control
board) then loaded them into matlab, calculated ideal responses and
plotted error vs input. I found a couple of gross bugs (sin(x) was the
worst. I didn't need to plot the error - sin(x) all by itself was
non-sinusoidal :). When I fixed those bugs, my regulation miraculously
improved, as did the dynamics.

sin(x)? Then that wasn't a typical linear controller?

a balanced three-phase power system can be converted to an equivalent
2-phase (quadrature) system - because the angles between the three phases
are constant, the 3-phase system is over-determined. This quadrature system
is really just a cartesian representation of a vector rotating at the
angular frequency w = 2pi*50Hz (60Hz), V = Vpeak*e^(jwt). This is a real
pain to control. If I multiply it by e^(-jw1t), and w1 = w then the e^(jwt)
disappears - in other words, if my controller co-ordinate system rotates at
the same speed as the vector, the vector looks stationary.
That seems quite clever. Linearizing the system? Seems almost more like
physicist talk than engineering talk.

If my controller
co-ordinates are aligned to the vector, then one axis of my rotating
coordinate system (by definition) is zero. So I measure the three-phase ac
supply, convert to two-phase (just a scalar transformation as the angles are
0 and +/- 120 degrees, so numbers like 1/sqrt(3) pop up) then do a vector
rotation e^(-jtheta) (theta = w1t) to get Vd, Vq. If w1 = w then Vd = 0, so
I PI-control Vd with a setpoint of zero. The output of my PI controller is
w1, which I integrate to get theta. Regardless of what w1, theta start out
as, the PI controller rapidly forces w1 = w, and theta = wt. So even though
the system was "non-linear" (ie riddled with sin(x)) my linear controller
works beautifully. And the PI controller, in conjunction with the theta
integrator, did a great job of regulating out my ratshit sin(theta) calc
(used in the e^(-jw1t) vector rotation).

This is in fact a PLL, but its kind of hiding in the maths. If my Vd
setpoint is non-zero, I can dial up any arbitrary phase angle between me and
the national grid - this is how active power factor compensators work
nowadays.

Its kind of a fancy digital stroboscope :)



A very good trick is to normalise your inputs as soon as you digitise
them - say you measure T in Kelvin*10, and (nominal) setpoint is 373K =
3730. I would choose 4096 = 100% then divide the measured T by 4096 to get
a "per-unitised" number (and multiply by say 8192 so I can have +/- 4PU
fit into one 16-bit number. scale to suit 32 bits). T* = 3730/4096 =
0.9106 (close to 1). All your maths is then (assuming you picked a
sensible normalising factor) dealing with numbers that are approximately
1, so is numerically well conditioned. When you have calculated your
normalised output, de-normalise it by multiplying it by the 100% output
value.

Hmm... I'm just using straight voltages as they were read from the
lock-in. I'm also hacking on legacy code, and a little change can ripple
across a lot of files.

Maybe next time.

Once you try it you will be hooked. I have seen a few people get majorly
stuck with dynamic range issues, especially when they try and represent
things as volts, amps, etc. PU is what power systems engineers use too - a
1PU (100%) choke has 1PU (100%) current flowing thru it when 1PU (100%)
volts are applied across it. So a 5% transformer has 5% leakage
inductance. Short circuit current will therefore be (100%/5%) * 100% =
2000%, ie 20x rated current

Seems more true to a modularity philosophy, too. I don't actually have
anything built into the loop that depends on voltages and things, but my
control parameters are set that way. So I might have a gain of 10^6 that
relates an error signal of nanovolts to an output signal of volts. And
every time some little thing changes, the run parameters have to change.


*CLONK* - the sound of a nail being hit squarely on the head. non-normalised
systems are generally a pain in the arse for this reason :}
Heh! I've learned enough now that I can calculate corrections for some
things. The temperature is measured by an AC bridge with the thermometer
and reference resistor at the cold end, and a ratio transformer at the
other. The run temperature is determined by setting the transformer. But
the ratio^2 figures into dV/dT, so that also changes the gain of my
sensor. But changes it predictably, so I think I can save some work from
that, at least.
--
"Things should be made as simple as possible -- but no simpler."
-- Albert Einstein
 
