Lead-Lag versus PID


Gregory L. Hansen

I've been learning about control theory, and I'm a little confused about
lead, lag, and lead-lag compensation. My book (Bucek) described them as
filters, and gave analog circuits consisting of a few resistors and
capacitors, in contrast to the PID circuits with op-amps. But he also
analyzes them alone, in front of a plant, as if they were stand-alone
controllers. The lead-lag is a lot like a PID with an extra pole.

What should I think of them, and how do they compare with PIDs?

--
"The preferred method of entering a building is to use a tank main gun
round, direct fire artillery round, or TOW, Dragon, or Hellfire missile to
clear the first room." -- THE RANGER HANDBOOK U.S. Army, 1992
 
Gregory L. Hansen wrote:
I've been learning about control theory, and I'm a little confused about
lead, lag, and lead-lag compensation. My book (Bucek) described them as
filters, and gave analog circuits consisting of a few resistors and
capacitors, in contrast to the PID circuits with op-amps. But he also
analyzes them alone, in front of a plant, as if they were stand-alone
controllers. The lead-lag is a lot like a PID with an extra pole.

What should I think of them, and how do they compare with PIDs?
You can think about all of them as filters, or as controllers. On the
surface this is confusing, but it means that you can choose how to think
about it in the way that makes the problem at hand easiest to solve.

It is almost always (like 99.999% of the time) a good idea to put some
frequency limiting on a differentiator. You do this by putting a pole
into your differentiator at some high frequency. When you do this
you've made a lead-lag compensator with its zero frequency at zero.

If you accept that your differentiator must have a high-frequency pole,
then your PD "controller" is nothing but a lead-lag "filter" -- the
transfer function is the same, you're just talking about it using
different terms. In fact, there are a number of ways you can describe a
PD/lead-lag block. Each uses three numbers, and each can be derived
from the others. Usually you'll talk about a PD in terms of
proportional and differential gain and, by the way, the differentiator
rolloff frequency. Usually you'll talk about a lead-lag in terms of
its DC (or AC) gain and its zero and pole frequency -- but you may
want to talk about its DC gain, its AC gain, and its zero (or pole)
frequency.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:
I've been learning about control theory, and I'm a little confused about
lead, lag, and lead-lag compensation. My book (Bucek) described them as
filters, and gave analog circuits consisting of a few resistors and
capacitors, in contrast to the PID circuits with op-amps. But he also
analyzes them alone, in front of a plant, as if they were stand-alone
controllers. The lead-lag is a lot like a PID with an extra pole.

What should I think of them, and how do they compare with PIDs?


You can think about all of them as filters, or as controllers. On the
surface this is confusing, but it means that you can choose how to think
about it in the way that makes the problem at hand easiest to solve.
When I first saw them, I couldn't help thinking of them as something you'd
put before or after a PID loop to condition the signal as it goes through.
Maybe my main hang-up was that the sample circuit consisted entirely of
passive components. How can you collect two resistors and two capacitors
and call it a controller?

It is almost always (like 99.999% of the time) a good idea to put some
frequency limiting on a differentiator. You do this by putting a pole
into your differentiator at some high frequency. When you do this
you've made a lead-lag compensator with its zero frequency at zero.
When I look at the discrete version of PID versus lead-lag, they look
pretty much the same except the lead-lag has an extra term or two.

I was thinking of turning my digital PID into a digital lead-lag, but the
lock-in, which provides the error signal, has a low-pass filter on the
output, so maybe that job has already been done.

If you accept that your differentiator must have a high-frequency pole,
then your PD "controller" is nothing but a lead-lag "filter" -- the
transfer function is the same, you're just talking about it using
different terms. In fact, there are a number of ways you can describe a
PD/lead-lag block. Each uses three numbers, and each can be derived
from the others. Usually you'll talk about a PD in terms of
proportional and differential gain and, by the way, the differentiator
rolloff frequency. Usually you'll talk about a lead-lag in terms of
its DC (or AC) gain and its zero and pole frequency -- but you may
want to talk about its DC gain, its AC gain, and its zero (or pole)
frequency.
I'm still coming to terms with things like what happens when I put a pole
here and a zero there. But I had noticed the similarities in the transfer
functions, and especially in the equivalent difference equations. As long
as I don't try to take squares or sines or anything, all I can do is add
up current and past error signals and outputs, and multiply them by
constants.

--
"Don't try to teach a pig how to sing. You'll waste your time and annoy
the pig."
 
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...
In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:
I've been learning about control theory, and I'm a little confused about
lead, lag, and lead-lag compensation. My book (Bucek) described them
as
filters, and gave analog circuits consisting of a few resistors and
capacitors, in contrast to the PID circuits with op-amps. But he also
analyzes them alone, in front of a plant, as if they were stand-alone
controllers. The lead-lag is a lot like a PID with an extra pole.

What should I think of them, and how do they compare with PIDs?


You can think about all of them as filters, or as controllers. On the
surface this is confusing, but it means that you can choose how to think
about it in the way that makes the problem at hand easiest to solve.

When I first saw them, I couldn't help thinking of them as something you'd
put before or after a PID loop to condition the signal as it goes through.
Maybe my main hang-up was that the sample circuit consisted entirely of
passive components. How can you collect two resistors and two capacitors
and call it a controller?
By having the error-amp (setpoint - feedback) elsewhere, and not requiring a
controller gain of more than 1. In practice gains of > 1 are often required,
and an opamp provides the necessary gain as well as subtracting the feedback
from the setpoint.

In PLL design they call them loop filters.

Ultimately they are passive/active networks that shape frequency response.
How you choose to view them usually is preference-driven. If you like pole
placement or root-locus design methods, you will probably think about poles
& zeroes. If you want to use Ziegler-Nichols methods, you will think about
Kp, Ki and Kd. Either approach works. Me, I like the method of symmetric
optimum.


It is almost always (like 99.999% of the time) a good idea to put some
frequency limiting on a differentiator. You do this by putting a pole
into your differentiator at some high frequency. When you do this
you've made a lead-lag compensator with its zero frequency at zero.

When I look at the discrete version of PID versus lead-lag, they look
pretty much the same except the lead-lag has an extra term or two.

I was thinking of turning my digital PID into a digital lead-lag, but the
lock-in, which provides the error signal, has a low-pass filter on the
output, so maybe that job has already been done.


If you accept that your differentiator must have a high-frequency pole,
then your PD "controller" is nothing but a lead-lag "filter" -- the
transfer function is the same, you're just talking about it using
different terms. In fact, there are a number of ways you can describe a
PD/lead-lag block. Each uses three numbers, and each can be derived
from the others. Usually you'll talk about a PD in terms of
proportional and differential gain and, by the way, the differentiator
rolloff frequency. Usually you'll talk about a lead-lag in terms of
its DC (or AC) gain and its zero and pole frequency -- but you may
want to talk about its DC gain, its AC gain, and its zero (or pole)
frequency.

I'm still coming to terms with things like what happens when I put a pole
here and a zero there. But I had noticed the similarities in the transfer
functions, and especially in the equivalent difference equations. As long
as I don't try to take squares or sines or anything, all I can do is add
up current and past error signals and outputs, and multiply them by
constants.
you can run a model-reference controller too, and "synthesise" future
values. I have had a lot of success with so-called Internal Model Control
for large bidirectional rectifiers - basically run an ideal model of your
plant, to compute the ideal output of your controller based on setpoint
info - ie fancy feedforward. Then have an "error" controller, which
effectively deals with anything your feedforward "model" doesn't predict. I
found that a 250kW regenerative rectifier with PI control and deadtime
compensation was not as good as IMC alone - the IMC did a great job of
compensating for deadtime errors. IMC + deadtime compensation was even
better. It's only a small leap from there to gain-scheduling the IMC model to
take into account thermal & saturation effects on R,L,C etc.

In other words, use as much information as possible; it makes your
controller's life that much easier. Likewise, condition your inputs - NO
controller can respond to a step, so there ain't much point in trying.
Ramp-limiting, saturating integrators, anti-windup control, setpoint-shaping
filters, etc. are all ways of doing this.

BTW, be careful of pole-zero cancellations. They may look nice analytically,
but such cancellations have a nasty habit of not working in the real world,
whereupon your system dynamics can change significantly. A good example
would be trying to cancel anything near the unit circle, and having
quantisation "noise" preventing it from actually cancelling. Or perhaps the
L's and C's have different values....

--
"Don't try to teach a pig how to sing. You'll waste your time and annoy
the pig."
Cheers
Terry
 
In article <u7Vsc.10214$XI4.368596@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:
"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...
In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:

You can think about all of them as filters, or as controllers. On the
surface this is confusing, but it means that you can choose how to think
about it in the way that makes the problem at hand easiest to solve.

When I first saw them, I couldn't help thinking of them as something you'd
put before or after a PID loop to condition the signal as it goes through.
Maybe my main hang-up was that the sample circuit consisted entirely of
passive components. How can you collect two resistors and two capacitors
and call it a controller?

By having the error-amp (setpoint - feedback) elsewhere, and not requiring a
controller gain of more than 1. In practice gains of > 1 are often required,
and an opamp provides the necessary gain as well as subtracting the feedback
from the setpoint.
That makes sense. I suppose normally the control output from the
circuit wouldn't be dumped directly into a heater or something, but
rather control a power supply, or input to a valve moving mechanism...
it's a signal for something else that does the actual work. A voltage
divider is a lot like a gain < 1, a low-pass filter is a lot like an
integrator, a high-pass filter is a lot like a differentiator. But it
sure seems like a lot of theory for one or two resistors and one or two
capacitors.

In PLL design they call them loop filters.

Ultimately they are passive/active networks that shape frequency response.
How you choose to view them usually is preference-driven. If you like pole
placement or root-locus design methods, you will probably think about poles
& zeroes. If you want to use Ziegler-Nichols methods, you will think about
Kp, Ki and Kd. Either approach works. Me, I like the method of symmetric
optimum.
I like Ziegler-Nichols. At least I know how to use it. But it doesn't
offer much insight into a system.

I was thinking of turning my digital PID into a digital lead-lag, but the
lock-in, which provides the error signal, has a low-pass filter on the
output, so maybe that job has already been done.
....

I'm still coming to terms with things like what happens when I put a pole
here and a zero there. But I had noticed the similarities in the transfer
functions, and especially in the equivalent difference equations. As long
as I don't try to take squares or sines or anything, all I can do is add
up current and past error signals and outputs, and multiply them by
constants.

you can run a model-reference controller too, and "synthesise" future
values. I have had a lot of success with so-called Internal Model Control
for large bidirectional rectifiers - basically run an ideal model of your
plant, to compute the ideal output of your controller based on setpoint
info - ie fancy feedforward. Then have an "error" controller, which
effectively deals with anything your feedforward "model" doesn't predict. I
found that a 250kW regenerative rectifier with PI control and deadtime
compensation was not as good as IMC alone - the IMC did a great job of
compensating for deadtime errors. IMC + deadtime compensation was even
better. It's only a small leap from there to gain-scheduling the IMC model to
take into account thermal & saturation effects on R,L,C etc.

In other words, use as much information as possible; it makes your
controller's life that much easier.
I'm not sure I have any idea what you're talking about, but 250 kW seems
like some serious rectifying.

I'm trying to keep a temperature in a radiometer constant, with periodic
changes in an input power that are known pretty well. A simple offset
pretty much takes care of my servo needs. But it's part of a high
precision measurement, and my sensed power differs from my calibration
power. I went over the code and discovered the controlling variables are
floats, which are a little short on precision. I hope changing them to
doubles will just fix the problem. But I blew up part of the apparatus
recently, so I haven't been able to try it. Maybe next week. In the
meantime, I'm studying control theory. Bucek is a magical textbook
that makes a difficult subject seem almost comprehensible.

A twist is that my signal isn't the temperature, it's the power required
to keep the temperature constant. And it's harder to measure when it
fluctuates a lot. It doesn't even matter if it fluctuates a lot as long
as it averages to zero and I stay in the linear regime. But overall I
think my control needs must be pretty mild.

Likewise, condition your inputs - NO
controller can respond to a step, so there ain't much point in trying.
Ramp-limiting, saturating integrators, anti-windup control, setpoint-shaping
filters, etc. are all ways of doing this.
My error signal is proportional to the deviation of the temperature from
the set-point, but the change in temperature is proportional to power,
V^2/R. We take the square root of the PID calculation to linearize the
output, so the delivered power sqrt(vout)^2/R is proportional to the PID
output. I was thinking of squaring the input before I run it
through my PID routine, so the calculation itself is in terms of power
rather than temperature, but I don't know whether that's clever or a
mistake.

Maybe doubles will just fix the problem and I won't have to worry about
things like that.

BTW, be careful of pole-zero cancellations. They may look nice analytically,
but such cancellations have a nasty habit of not working in the real world,
whereupon your system dynamics can change significantly. A good example
would be trying to cancel anything near the unit circle, and having
quantisation "noise" preventing it from actually cancelling. Or perhaps the
L's and C's have different values....
Pole-zero cancellations? That's like

(s+a)(s+b) / (s+a)(s+c)

The circle and the cross on the same point?


--
"When the fool walks through the street, in his lack of understanding he
calls everything foolish." -- Ecclesiastes 10:3, New American Bible
 
Gregory L. Hansen wrote:

In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:

I've been learning about control theory, and I'm a little confused about
lead, lag, and lead-lag compensation. My book (Bucek) described them as
filters, and gave analog circuits consisting of a few resistors and
capacitors, in contrast to the PID circuits with op-amps. But he also
analyzes them alone, in front of a plant, as if they were stand-alone
controllers. The lead-lag is a lot like a PID with an extra pole.

What should I think of them, and how do they compare with PIDs?


You can think about all of them as filters, or as controllers. On the
surface this is confusing, but it means that you can choose how to think
about it in the way that makes the problem at hand easiest to solve.


When I first saw them, I couldn't help thinking of them as something you'd
put before or after a PID loop to condition the signal as it goes through.
Maybe my main hang-up was that the sample circuit consisted entirely of
passive components. How can you collect two resistors and two capacitors
and call it a controller?
Well if you're shameless you can do anything :).

Certainly when it has a significant effect on the system behavior then
you'd better call it _something_ impressive enough to motivate you to
analyze its effects.

It is almost always (like 99.999% of the time) a good idea to put some
frequency limiting on a differentiator. You do this by putting a pole
into your differentiator at some high frequency. When you do this
you've made a lead-lag compensator with its zero frequency at zero.


When I look at the discrete version of PID versus lead-lag, they look
pretty much the same except the lead-lag has an extra term or two.

I was thinking of turning my digital PID into a digital lead-lag, but the
lock-in, which provides the error signal, has a low-pass filter on the
output, so maybe that job has already been done.
You probably don't want to lose the integrator. My general
recommendation is to use the PID first (with appropriate filtering on
the differentiator), and only start using lead-lags and notches and
whatnot if you really need it.

If you accept that your differentiator must have a high-frequency pole,
then your PD "controller" is nothing but a lead-lag "filter" -- the
transfer function is the same, you're just talking about it using
different terms. In fact, there are a number of ways you can describe a
PD/lead-lag block. Each uses three numbers, and each can be derived
from the others. Usually you'll talk about a PD in terms of
proportional and differential gain and, by the way, the differentiator
rolloff frequency. Usually you'll talk about a lead-lag in terms of
its DC (or AC) gain and its zero and pole frequency -- but you may
want to talk about its DC gain, its AC gain, and its zero (or pole)
frequency.


I'm still coming to terms with things like what happens when I put a pole
here and a zero there. But I had noticed the similarities in the transfer
functions, and especially in the equivalent difference equations. As long
as I don't try to take squares or sines or anything, all I can do is add
up current and past error signals and outputs, and multiply them by
constants.
Pretty much, yes. You've just described a linear controller; if you can
pretend that your plant is linear then there's a lot of good tools out
there for designing nice controllers. If you just can't pretend that
your plant is linear then there's nice linearization techniques to use.
If you can't linearize your plant then there's folks like me who are
willing to take your $$ to make it work, or to tell you why you'll never
get there from here.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
Gregory L. Hansen wrote:

In article <u7Vsc.10214$XI4.368596@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:

"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...

In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:


-- snip --

I'm trying to keep a temperature in a radiometer constant, with periodic
changes in an input power that are known pretty well. A simple offset
pretty much takes care of my servo needs. But it's part of a high
precision measurement, and my sensed power differs from my calibration
power. I went over the code and discovered the controlling variables are
floats, which are a little short on precision. I hope changing them to
doubles will just fix the problem. But I blew up part of the apparatus
recently, so I haven't been able to try it. Maybe next week. In the
meantime, I'm studying control theory. Bucek is a magical textbook
that makes a difficult subject seem almost comprehensible.
If you have the processor power to use doubles and keep your sampling
rate then that'll be the least-effort way. It's amazing how much
difference going from 24 bits to 32 bits can make -- I often end up with
32-bit fixed-point math rather than using floating point, single or double.

A twist is that my signal isn't the temperature, it's the power required
to keep the temperature constant. And it's harder to measure when it
fluctuates a lot. It doesn't even matter if it fluctuates a lot as long
as it averages to zero and I stay in the linear regime. But overall I
think my control needs must be pretty mild.
Hopefully you mean (a) if the error averages to zero or (b) if it
averages to a constant...

Likewise, condition your inputs - NO
controller can respond to a step, so there ain't much point in trying.
Ramp-limiting, saturating integrators, anti-windup control, setpoint-shaping
filters, etc. are all ways of doing this.


My error signal is proportional to the deviation of the temperature from
the set-point, but the change in temperature is proportional to power,
V^2/R. We take the square root of the PID calculation to linearize the
output, so the delivered power sqrt(vout)^2/R is proportional to the PID
output. I was thinking of squaring the input before I run it
through my PID routine, so the calculation itself is in terms of power
rather than temperature, but I don't know whether that's clever or a
mistake.

Maybe doubles will just fix the problem and I won't have to worry about
things like that.
Choose your nonlinearity to make the system appear as linear as possible
to the controller. Most thermal systems respond fairly linearly to
temperature rather than power, so I'd stick with taking the square root.
Putting _two_ square terms in the loop is probably not a good idea at all.

If your system tends to settle to the same power input all the time you
may be able to do away with that processor-intensive square root
entirely, and just control it with a straight PID. I get the feeling
that this is a lab prototype, in which case you can spend a lot of money
on the equipment before your equipment cost exceeds the cost of your pay.

BTW, be careful of pole-zero cancellations. They may look nice analytically,
but such cancellations have a nasty habit of not working in the real world,
whereupon your system dynamics can change significantly. A good example
would be trying to cancel anything near the unit circle, and having
quantisation "noise" preventing it from actually cancelling. Or perhaps the
L's and C's have different values....
Hear hear. Using pole-zero cancellations has a _very_ limited place
(it's kinda what you're doing with your internal model control, or what
happens when you notch out the first resonance of a mechanical system),
but use it with very great care.

Pole-zero cancellations? That's like

(s+a)(s+b) / (s+a)(s+c)

The circle and the cross on the same point?
Yes. The problem is twofold: First, you're essentially nulling out the
pole so if your zero is off frequency by much then the cancellation
doesn't happen. Second, you're not affecting the behavior of the
subsystem with the pole, so it can still misbehave. The extreme case is
when you have an unstable pole and a non-minimum-phase zero. You end up
with an analytical transfer function that's stable, but the actual
system has a hidden state that will bang up against some limit or
another, and the system will stop working.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
Gregory L. Hansen wrote:

In article <u7Vsc.10214$XI4.368596@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:

"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...

In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

-- snip --
You sound like you're beyond it already, but see my "PID Without a PhD"
and the other articles on my website:
http://www.wescottdesign.com/articles/pidwophd.html,
http://www.wescottdesign.com/articles/zTransform/z-transforms.html.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
In article <10b9j3q4sj4b3ab@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:

In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:

I've been learning about control theory, and I'm a little confused about
lead, lag, and lead-lag compensation. My book (Bucek) described them as
filters, and gave analog circuits consisting of a few resistors and
capacitors, in contrast to the PID circuits with op-amps. But he also
analyzes them alone, in front of a plant, as if they were stand-alone
controllers. The lead-lag is a lot like a PID with an extra pole.

What should I think of them, and how do they compare with PIDs?


You can think about all of them as filters, or as controllers. On the
surface this is confusing, but it means that you can choose how to think
about it in the way that makes the problem at hand easiest to solve.


When I first saw them, I couldn't help thinking of them as something you'd
put before or after a PID loop to condition the signal as it goes through.
Maybe my main hang-up was that the sample circuit consisted entirely of
passive components. How can you collect two resistors and two capacitors
and call it a controller?


Well if you're shameless you can do anything :).

Certainly when it has a significant effect on the system behavior then
you'd better call it _something_ impressive enough to motivate you to
analyze its effects.
Ha! A passively activated phase-adjusting anti-discombobulator ought to
do it.

It is almost always (like 99.999% of the time) a good idea to put some
frequency limiting on a differentiator. You do this by putting a pole
into your differentiator at some high frequency. When you do this
you've made a lead-lag compensator with its zero frequency at zero.


When I look at the discrete version of PID versus lead-lag, they look
pretty much the same except the lead-lag has an extra term or two.

I was thinking of turning my digital PID into a digital lead-lag, but the
lock-in, which provides the error signal, has a low-pass filter on the
output, so maybe that job has already been done.


You probably don't want to lose the integrator. My general
recommendation is to use the PID first (with appropriate filtering on
the differentiator), and only start using lead-lags and notches and
whatnot if you really need it.
Lose the integrator? I didn't realize that would happen. When I look at
the equivalent difference equations, if I have my math right the PID looks
like

u(k) = u(k-1) + A e(k) + B e(k-1) + C e(k-2)

and a lead-lag looks like

u(k) = A u(k-1) + B u(k-2) + C e(k) + D e(k-1) + E e(k-2)

for constants to be chosen by the method of your preference. It looked
like the lead-lag was a superset of the PID. I was thinking of putting a
digital low-pass filter on my PID output, which would have made the
similarity stronger. The filter would smooth the control output, which is
what contains the physics I'm interested in and seems harder to measure
when it's choppy: I measure the power required to keep the temperature
constant.

If you accept that your differentiator must have a high-frequency pole,
then your PD "controller" is nothing but a lead-lag "filter" -- the
transfer function is the same, you're just talking about it using
different terms. In fact, there are a number of ways you can describe a
PD/lead-lag block. Each uses three numbers, and each can be derived
from the others. Usually you'll talk about a PD in terms of
proportional and differential gain and, by the way, the differentiator
rolloff frequency. Usually you'll talk about a lead-lag in terms of
its DC (or AC) gain and its zero and pole frequency -- but you may
want to talk about its DC gain, its AC gain, and its zero (or pole)
frequency.


I'm still coming to terms with things like what happens when I put a pole
here and a zero there. But I had noticed the similarities in the transfer
functions, and especially in the equivalent difference equations. As long
as I don't try to take squares or sines or anything, all I can do is add
up current and past error signals and outputs, and multiply them by
constants.


Pretty much, yes. You've just described a linear controller; if you can
pretend that your plant is linear then there's a lot of good tools out
there for designing nice controllers. If you just can't pretend that
your plant is linear then there's nice linearization techniques to use.
If you can't linearize your plant then there's folks like me who are
willing to take your $$ to make it work, or to tell you why you'll never
get there from here.
I appreciate the offer,

Tim Wescott
Wescott Design Services
But my system is fortunately highly linear in my operating regime.
Hopefully I can figure it out.

--
Irony: "Small businesses want relief from the flood of spam clogging their
in-boxes, but they fear a proposed national 'Do Not Spam' registry will
make it impossible to use e-mail as a marketing tool."
http://www.bizjournals.com/houston/stories/2003/11/10/newscolumn6.html
 
On Wed, 26 May 2004 19:14:48 +0000 (UTC),
glhansen@steel.ucs.indiana.edu (Gregory L. Hansen) wrote:

In article <10b9jsvgfc6m92@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:

In article <u7Vsc.10214$XI4.368596@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:

"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...

In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

-- snip --


You sound like you're beyond it already, but see my "PID Without a PhD"
and the other articles on my website:
http://www.wescottdesign.com/articles/pidwophd.html,
http://www.wescottdesign.com/articles/zTransform/z-transforms.html.
[snip]

Tim,

The second article was easy enough to print as a PDF, but the first is
surrounded by advertising. Any way to get a copy as a clean PDF?

Thanks!

...Jim Thompson
--
| James E.Thompson, P.E. | mens |
| Analog Innovations, Inc. | et |
| Analog/Mixed-Signal ASIC's and Discrete Systems | manus |
| Phoenix, Arizona Voice:(480)460-2350 | |
| E-mail Address at Website Fax:(480)460-2142 | Brass Rat |
| http://www.analog-innovations.com | 1962 |

I love to cook with wine. Sometimes I even put it in the food.
 
In article <10b9jovf7tu2r30@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:

In article <u7Vsc.10214$XI4.368596@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:

"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...

In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:


-- snip --

I'm trying to keep a temperature in a radiometer constant, with periodic
changes in an input power that are known pretty well. A simple offset
pretty much takes care of my servo needs. But it's part of a high
precision measurement, and my sensed power differs from my calibration
power. I went over the code and discovered the controlling variables are
floats, which are a little short on precision. I hope changing them to
doubles will just fix the problem. But I blew up part of the apparatus
recently, so I haven't been able to try it. Maybe next week. In the
meantime, I'm studying control theory. Bucek is a magical textbook
that makes a difficult subject seem almost comprehensible.


If you have the processor power to use doubles and keep your sampling
rate then that'll be the least-effort way. It's amazing how much
difference going from 24 bits to 32 bits can make -- I often end up with
32-bit fixed-point math rather than using floating point, single or double.
24 bit to 32 bit? I'd have to calculate bits, but according to the C
defined constants, on that compiler and computer my floats are 6 digits
and my doubles 18. And my discrepancy is comparable to the finite
precision of the float. I kind of have an itch to try out the theory I'm
learning, but hopefully just changing to doubles will make the problem go
away.

A twist is that my signal isn't the temperature, it's the power required
to keep the temperature constant. And it's harder to measure when it
fluctuates a lot. It doesn't even matter if it fluctuates a lot as long
as it averages to zero and I stay in the linear regime. But overall I
think my control needs must be pretty mild.


Hopefully you mean (a) if the error averages to zero or (b) if it
averages to a constant...
Yes, if the error averages to zero. I turn a beam on and off and measure
the change in power required to keep the target at a constant temperature.

Likewise, condition your inputs - NO
controller can respond to a step, so there ain't much point in trying.
Ramp-limiting, saturating integrators, anti-windup control, setpoint-shaping
filters etc. are all ways of doing this


My error signal is proportional to the deviation of the temperature from
the set-point, but the change in temperature is proportional to power,
V^2/R. We take the square root of the PID calculation to linearize the
output, sqrt(vout)^2/R. I was thinking of squaring the input before I run
through my PID routine, so the calculation itself is in terms of power
rather than temperature, but I don't know whether that's clever or a
mistake.

Maybe doubles will just fix the problem and I won't have to worry about
things like that.


Choose your nonlinearity to make the system appear as linear as possible
to the controller. Most thermal systems respond fairly linearly to
temperature rather than power, so I'd stick with taking the square root.
Putting _two_ square terms in the loop is probably not a good idea at all.
I'm not sure I get you.

My error signal is linear in temperature, and my system heat capacities
and conductivities are highly independent of temperature in the range of
temperatures I'm in. So there's no trick to calculating something like

dP = k dT

because the conductivity k is a constant to high precision. But P=V^2/R,
so the steady-state change in temperature caused by a control action is
linear in V^2, not V.

So I wasn't sure whether I should PID my temperature error, or PID my
power error.

If your system tends to settle to the same power input all the time you
may be able to do away with that processor-intensive square root
entirely, and just control it with a straight PID. I get the feeling
that this is a lab prototype, in which case you can spend a lot of money
on the equipment before your equipment cost exceeds the cost of your pay.
It is a lab prototype. But at the same time it has software developed
with a particular set of hardware, and nobody really feels comfortable
moving it to new hardware without reason. Actually it's running on an old
Mac IIci with a NuBus controller for the GPIB. So all of that would have
to be redone for a PCI card if it were moved, at the least.

BTW, be careful of pole-zero cancellations. They may look nice analytically,
but such cancellations have a nasty habit of not working in the real world,
whereupon your system dynamics can change significantly. A good example
would be trying to cancel anything near the unit circle, and having
quantisation "noise" preventing it from actually cancelling. Or perhaps the
L's and C's have different values....


Hear hear. Using pole-zero cancellations has a _very_ limited place
(it's kinda what you're doing with your internal model control, or what
happens when you notch out the first resonance of a mechanical system),
but use it with very great care.


Pole-zero cancellations? That's like

(s+a)(s+b) / (s+a)(s+c)

The circle and the cross on the same point?



Yes. The problem is twofold: First, you're essentially nulling out the
pole so if your zero is off frequency by much then the cancellation
doesn't happen. Second, you're not affecting the behavior of the
subsystem with the pole, so it can still misbehave. The extreme case is
when you have an unstable pole and a non-minimum-phase zero. You end up
with an analytical transfer function that's stable, but the actual
system has a hidden state that will bang up against some limit or
another, and the system will stop working.
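Tim's extreme case is easy to demonstrate numerically. Here is a sketch (my illustration, not from the thread; the pole and zero values are made up) of an unstable plant pole nominally cancelled by a compensator zero that is off by a tiny amount. The input-output algebra looks fine, but the hidden plant state grows like p**k:

```python
# Imperfect pole-zero cancellation: unstable plant pole p,
# compensator zero z slightly mismatched (illustrative numbers only).
p, z = 1.02, 1.019      # plant pole (unstable) and mismatched zero

x = 0.0                 # hidden plant state: x[k] = p*x[k-1] + u[k]
prev_e = 0.0
peak = 0.0
for k in range(1000):
    e = 1.0 if k == 0 else 0.0      # impulse into the compensator
    u = e - z * prev_e              # compensator zero: u[k] = e[k] - z*e[k-1]
    prev_e = e
    x = p * x + u                   # plant with the "cancelled" pole
    peak = max(peak, abs(x))

print(f"largest hidden state over 1000 steps: {peak:.1f}")
```

With z exactly equal to p the state settles back to zero after the first step, which is exactly the fragility being described: the cancellation only behaves if the match is perfect.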
I just read in Leigh about moving poles. You need a transfer function
that puts zeroes on the old poles, and has a set of new poles. And he
mentioned some of the same warnings you have.

Maybe you'd need a control system to adjust the locations of the zeroes.

Now that I think of it, a lead-lag compensator would do exactly that, if
you choose the constants right.

But I don't think I have any reason to be tempted to try it.

--
"The average person, during a single day, deposits in his or her underwear
an amount of fecal bacteria equal to the weight of a quarter of a peanut."
-- Dr. Robert Buckman, Human Wildlife, p119.
 
In article <10b9jsvgfc6m92@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:

In article <u7Vsc.10214$XI4.368596@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:

"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...

In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

-- snip --


You sound like you're beyond it already, but see my "PID Without a PhD"
and the other articles on my website:
http://www.wescottdesign.com/articles/pidwophd.html,
http://www.wescottdesign.com/articles/zTransform/z-transforms.html.
Beyond it? We'll see. Thanks.

--
"Let us learn to dream, gentlemen, then perhaps we shall find the
truth... But let us beware of publishing our dreams before they have been
put to the proof by the waking understanding." -- Friedrich August Kekulé
 
In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:
I've been learning about control theory, and I'm a little confused on
lead, lag, and lead-lag compensation. My book (Bucek) described them as
filters, and gave analog circuits consisting of a few resistors and
capacitors, in contrast to the PID circuits with op-amps. But he also
analyzes them alone, in front of a plant, as if they were stand-alone
controllers. The lead-lag is a lot like a PID with an extra pole.

What should I think of them, and how they compare with PIDs?


You can think about all of them as filters, or as controllers. On the
surface this is confusing, but it means that you can choose how to think
about it in the way that makes the problem at hand easiest to solve.

It is almost always (like 99.999% of the time) a good idea to put some
frequency limiting on a differentiator. You do this by putting a pole
into your differentiator at some high frequency. When you do this
you've made a lead-lag compensator with its zero frequency at zero.
I suppose every system must have a pole in there somewhere, because
everything has inertia, or thermal inertia, or inductance, or something.

So as I understand this, a PID transfer function looks like

P + I/s + Ds

A low-pass filter has a transfer function like

1/(1 + Ts)

Bring them together in a somewhat standard form, and

(D/T) (s^2 + (P/D)s + (I/D)) / s(s + 1/T)

So that almost looks like a lead-lag, except the lead-lag can have a pole
somewhere else instead of at s=0.

A resistor+capacitor low-pass filter is a lot like an integrator, except
it's not an ideal integrator. If it were an ideal integrator it would be
a/s, and not a/(s+b).
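The algebra above is easy to spot-check numerically; a sketch with arbitrary placeholder gains (not values from the thread):

```python
# Check that (P + I/s + D*s) * 1/(1 + T*s) equals the combined form
# (D/T)(s^2 + (P/D)s + (I/D)) / (s(s + 1/T)) at a few test points.
P, I, D, T = 2.0, 0.5, 0.1, 0.01   # arbitrary demo gains

def pid_times_lowpass(s):
    return (P + I/s + D*s) / (1 + T*s)

def combined_form(s):
    return (D/T) * (s**2 + (P/D)*s + I/D) / (s * (s + 1/T))

for s in (1j, 10j, 3 + 4j):
    assert abs(pid_times_lowpass(s) - combined_form(s)) < 1e-9
print("both forms agree")
```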

I wanted to say a few things about what the pole at the origin means, but
I suppose I have to think of controller and plant, not just controller.

--
"Things should be made as simple as possible -- but no simpler."
-- Albert Einstein
 
On Wed, 26 May 2004 19:11:48 +0000 (UTC), the renowned
glhansen@steel.ucs.indiana.edu (Gregory L. Hansen) wrote:


<snip>
My error signal is linear in temperature, and my system heat capacities
and conductivities are highly independent of temperature in the range of
temperatures I'm in. So there's no trick to calculating something like

dP = k dT

because the conductivity k is a constant to high precision. But P=V^2/R,
so the steady-state change in temperature caused by a control action is
linear in V^2, not V.

So I wasn't sure whether I should PID my temperature error, or PID my
power error.
<more snip>

I would feed the temperature error into my controller (PID, modified
PID or whatever) and feed the control output into a
linearized-power-output black box.

Best regards,
Spehro Pefhany
--
"it's the network..." "The Journey is the reward"
speff@interlog.com Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog Info for designers: http://www.speff.com
 
Jim Thompson wrote:
On Wed, 26 May 2004 19:14:48 +0000 (UTC),
glhansen@steel.ucs.indiana.edu (Gregory L. Hansen) wrote:


In article <10b9jsvgfc6m92@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:


In article <u7Vsc.10214$XI4.368596@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:


"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...


In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:


-- snip --

You sound like you're beyond it already, but see my "PID Without a PhD"
and the other articles on my website:
http://www.wescottdesign.com/articles/pidwophd.html,
http://www.wescottdesign.com/articles/zTransform/z-transforms.html.

[snip]

Tim,

The second article was easy enough to print as a PDF, but the first is
surrounded by advertising. Any way to get a copy as a clean PDF?

Thanks!

...Jim Thompson
Embedded Systems Programming magazine owns the copyright on the PID
article, so I'm not supposed to distribute it :(. My website just sends
you over to theirs to read it.

They will, however, sell me reprints -- $3000 for 500 of 'em.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
Gregory L. Hansen wrote:

In article <10b9jovf7tu2r30@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:


In article <u7Vsc.10214$XI4.368596@news.xtra.co.nz>,
Terry Given <the_domes@xtra.co.nz> wrote:


"Gregory L. Hansen" <glhansen@steel.ucs.indiana.edu> wrote in message
news:c90p97$voq$1@hood.uits.indiana.edu...


In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:


Gregory L. Hansen wrote:


-- snip --

I'm trying to keep a temperature in a radiometer constant, with periodic
changes in an input power that are known pretty well. A simple offset
pretty much takes care of my servo needs. But it's part of a high
precision measurement, and my sensed power differs from my calibration
power. I went over the code and discovered the controlling variables are
floats, which are a little short on precision. I hope changing them to
doubles will just fix the problem. But I blew up part of the apparatus
recently, so I haven't been able to try it. Maybe next week. In the
meantime, I'm studying control theory. Bucek is a magical textbook
that makes a difficult subject seem almost comprehensible.


If you have the processor power to use doubles and keep your sampling
rate then that'll be the least-effort way. It's amazing how much
difference going from 24 bits to 32 bits can make -- I often end up with
32-bit fixed-point math rather than using floating point, single or double.


24 bit to 32 bit? I'd have to calculate bits, but according to the C
defined constants, on that compiler and computer my floats are 6 digits
and my doubles 18. And my discrepancy is comparable to the finite
precision of the float. I kind of have an itch to try out the theory I'm
learning, but hopefully just changing to doubles will make the problem go
away.

I was being obscure:

I was referring to using 32-bit integers and fixed point math. IEEE
single-precision floating point has a 24-bit mantissa. Double-precision
has a 53-bit mantissa, which should be more than enough for you. I
usually end up using fixed-point arithmetic which gives you 32 bits of
precision in the same space that single-precision floating point uses.
Since signal processing applications usually have very well-defined
ranges, fixed point is a very good way to go.

Fixed point math is generally much faster than double-precision float on
crusty old machines like Mac IIci's, for that matter.
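The fixed-point idea can be sketched like this (my example; the Q16.16 split is an arbitrary choice, and Python's unbounded ints stand in for the int32/int64 a real target would use):

```python
FRAC = 16                                # fractional bits (Q16.16 format)

def to_fix(x):
    return int(round(x * (1 << FRAC)))   # float -> fixed

def from_fix(x):
    return x / (1 << FRAC)               # fixed -> float, for display only

def fmul(a, b):
    # multiply needs a double-width intermediate and one shift;
    # on a real MCU this is a 64-bit product of two int32s
    return (a * b) >> FRAC

a, b = to_fix(3.25), to_fix(-0.5)
print(from_fix(a + b), from_fix(fmul(a, b)))   # -> 2.75 -1.625
```

Addition and subtraction are exact integer operations, which is why well-scaled 32-bit fixed point can beat single-precision float on precision.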
A twist is that my signal isn't the temperature, it's the power required
to keep the temperature constant. And it's harder to measure when it
fluctuates a lot. It doesn't even matter if it fluctuates a lot as long
as it averages to zero and I stay in the linear regime. But overall I
think my control needs must be pretty mild.


Hopefully you mean (a) if the error averages to zero or (b) if it
averages to a constant...


Yes, if the error averages to zero. I turn a beam on and off and measure
the change in power required to keep the target at a constant temperature.


Likewise, condition your inputs - NO
controller can respond to a step, so there ain't much point in trying.
Ramp-limiting, saturating integrators, anti-windup control, setpoint-shaping
filters etc. are all ways of doing this


My error signal is proportional to the deviation of the temperature from
the set-point, but the change in temperature is proportional to power,
V^2/R. We take the square root of the PID calculation to linearize the
output, sqrt(vout)^2/R. I was thinking of squaring the input before I run
through my PID routine, so the calculation itself is in terms of power
rather than temperature, but I don't know whether that's clever or a
mistake.

Maybe doubles will just fix the problem and I won't have to worry about
things like that.


Choose your nonlinearity to make the system appear as linear as possible
to the controller. Most thermal systems respond fairly linearly to
temperature rather than power, so I'd stick with taking the square root.
Putting _two_ square terms in the loop is probably not a good idea at all.


I'm not sure I get you.

My error signal is linear in temperature, and my system heat capacities
and conductivities are highly independent of temperature in the range of
temperatures I'm in. So there's no trick to calculating something like

dP = k dT

because the conductivity k is a constant to high precision. But P=V^2/R,
so the steady-state change in temperature caused by a control action is
linear in V^2, not V.

So I wasn't sure whether I should PID my temperature error, or PID my
power error.

Obscurity again? It's not my day...

Since the plant is linear going from power in to temperature out, you
are linearizing the right way (taking the square root). Spehro's
suggestion for a "black box" constant power controller would work, but
who engineers it?
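The linearization being endorsed here can be sketched as follows (my illustration; R and the 2 W figure are placeholders, not numbers from the thread): run the PID on the temperature error, treat its output as a power demand, and take the square root only when converting to heater voltage, since P = V^2/R.

```python
from math import sqrt

R = 50.0                            # heater resistance (placeholder value)

def power_to_volts(p_demand):
    p_demand = max(p_demand, 0.0)   # a resistive heater can't sink power
    return sqrt(p_demand * R)       # P = V**2/R  =>  V = sqrt(P*R)

# a 2 W demand into 50 ohms needs 10 V:
print(power_to_volts(2.0))          # -> 10.0
```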
If your system tends to settle to the same power input all the time you
may be able to do away with that processor-intensive square root
entirely, and just control it with a straight PID. I get the feeling
that this is a lab prototype, in which case you can spend a lot of money
on the equipment before your equipment cost exceeds the cost of your pay.


It is a lab prototype. But at the same time it has software developed
with a particular set of hardware, and nobody really feels comfortable
moving it to new hardware without reason. Actually it's running on an old
Mac IIci with a NuBus controller for the GPIB. So all of that would have
to be redone for a PCI card if it were moved, at the least.


BTW, be careful of pole-zero cancellations. They may look nice analytically,
but such cancellations have a nasty habit of not working in the real world,
whereupon your system dynamics can change significantly. A good example
would be trying to cancel anything near the unit circle, and having
quantisation "noise" preventing it from actually cancelling. Or perhaps the
L's and C's have different values....

Hear hear. Using pole-zero cancellations has a _very_ limited place
(it's kinda what you're doing with your internal model control, or what
happens when you notch out the first resonance of a mechanical system),
but use it with very great care.


Pole-zero cancellations? That's like

(s+a)(s+b) / (s+a)(s+c)

The circle and the cross on the same point?



Yes. The problem is twofold: First, you're essentially nulling out the
pole so if your zero is off frequency by much then the cancellation
doesn't happen. Second, you're not affecting the behavior of the
subsystem with the pole, so it can still misbehave. The extreme case is
when you have an unstable pole and a non-minimum-phase zero. You end up
with an analytical transfer function that's stable, but the actual
system has a hidden state that will bang up against some limit or
another, and the system will stop working.


I just read in Leigh about moving poles. You need a transfer function
that puts zeroes on the old poles, and has a set of new poles. And he
mentioned some of the same warnings you have.

Maybe you'd need a control system to adjust the locations of the zeroes.

Now that I think of it, a lead-lag compensator would do exactly that, if
you choose the constants right.

But I don't think I have any reason to be tempted to try it.
It's generally the nature of a feedback system to move the poles around
-- that's why a feedback system with a PID doesn't usually have a pole
at s = 0 (or z = 1, take your choice [I'm being obscure again --
sorry]). So I'm not sure why he's advocating masking the existing poles
with zeros.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
Gregory L. Hansen wrote:

In article <10b9j3q4sj4b3ab@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:


In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:


Gregory L. Hansen wrote:


-- snip --
You probably don't want to lose the integrator. My general
recommendation is to use the PID first (with appropriate filtering on
the differentiator), and only start using lead-lags and notches and
whatnot if you really need it.


Lose the integrator? I didn't realize that would happen. When I look at
the equivalent difference equations, if I have my math right the PID looks
like

u(k) = u(k-1) + A e(k) + B e(k-1) + C e(k-2)

and a lead-lag looks like

u(k) = A u(k-1) + B u(k-2) + C e(k) + D e(k-1) + E e(k-2)

for constants to be chosen by the method of your preference. It looked
like the lead-lag was a superset of the PID. I was thinking of putting a
digital low-pass filter on my PID output, which would have made the
similarity stronger -- the filter being there to smooth the control
output, which contains the physics I'm interested in and seems harder to
measure when it's choppy: the power required to keep the temperature
constant.
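The PID difference equation quoted above can be implemented directly; a sketch (my code; the gains are arbitrary, and the Kp/Ki/Kd mapping in the comment is the standard velocity-form bookkeeping, not something from the thread):

```python
# u(k) = u(k-1) + A*e(k) + B*e(k-1) + C*e(k-2); for sample time dt,
# a common mapping is A = Kp + Ki*dt + Kd/dt, B = -Kp - 2*Kd/dt,
# C = Kd/dt.
def make_pid(A, B, C):
    state = {"u": 0.0, "e1": 0.0, "e2": 0.0}
    def step(e):
        u = state["u"] + A*e + B*state["e1"] + C*state["e2"]
        state["e2"], state["e1"], state["u"] = state["e1"], e, u
        return u
    return step

pid = make_pid(A=1.2, B=-1.0, C=0.1)           # arbitrary demo gains
print([round(pid(1.0), 2) for _ in range(3)])  # constant unit error
# -> [1.2, 1.4, 1.7]
```

Because the update is incremental (u(k) builds on u(k-1)), the integral action lives in the stored output, which is why this "velocity form" is popular for anti-windup.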
Actually a basic lead-lag has a first-order transfer function:

         Cz + B
T(z) = ----------
         z - A

What you have is second-order. Moreover, if you want to keep the
integrator you're constrained to having a pole at z = 1; while
technically this is a "lead-lag", nobody calls it that.
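Tim's first-order form corresponds to the difference equation u(k) = A*u(k-1) + C*e(k) + B*e(k-1); a sketch with arbitrary pole and zero placements (my numbers):

```python
def make_leadlag(A, B, C):
    # T(z) = (C*z + B)/(z - A): pole at A, zero at -B/C
    state = {"u": 0.0, "e1": 0.0}
    def step(e):
        u = A*state["u"] + C*e + B*state["e1"]
        state["u"], state["e1"] = u, e
        return u
    return step

ll = make_leadlag(A=0.5, B=-0.8, C=1.0)        # pole 0.5, zero 0.8
print([round(ll(1.0), 4) for _ in range(4)])    # step input
# -> [1.0, 0.7, 0.55, 0.475], settling toward (C+B)/(1-A) = 0.4
```

With the zero above the pole this gives phase lead (a band-limited differentiator); swap them and you get lag. Note it has no pole at z = 1, so no integral action.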

If you accept that your differentiator must have a high-frequency pole,
then your PD "controller" is nothing but a lead-lag "filter" -- the
transfer function is the same, you're just talking about it using
different terms. In fact, there are a number of ways you can describe a
PD/lead-lag block. Each uses three numbers, and each can be derived
from the others. Usually you'll talk about a PD in terms of
proportional and differential gain, plus the differentiator
rolloff frequency. Usually you'll talk about a lead-lag in terms of
its DC (or AC) gain and its zero and pole frequency -- but you may
want to talk about its DC gain, its AC gain and its zero (or pole)
frequency.


I'm still coming to terms with things like what happens when I put a pole
here and a zero there. But I had noticed the similarities in the transfer
functions, and especially in the equivalent difference equations. As long
as I don't try to take squares or sines or anything, all I can do is add
up current and past error signals and outputs, and multiply them by
constants.


Pretty much, yes. You've just described a linear controller; if you can
pretend that your plant is linear then there's a lot of good tools out
there for designing nice controllers. If you just can't pretend that
your plant is linear then there's nice linearization techniques to use.
If you can't linearize your plant then there's folks like me who are
willing to take your $$ to make it work, or to tell you why you'll never
get there from here.


I appreciate the offer,


Tim Wescott
Wescott Design Services


But my system is fortunately highly linear in my operating regime.
Hopefully I can figure it out.
I was just (ahem) using myself as a ready example.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
Gregory L. Hansen wrote:

In article <10b7du54tshgs12@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:

Gregory L. Hansen wrote:

I've been learning about control theory, and I'm a little confused on
lead, lag, and lead-lag compensation. My book (Bucek) described them as
filters, and gave analog circuits consisting of a few resistors and
capacitors, in contrast to the PID circuits with op-amps. But he also
analyzes them alone, in front of a plant, as if they were stand-alone
controllers. The lead-lag is a lot like a PID with an extra pole.

What should I think of them, and how they compare with PIDs?


You can think about all of them as filters, or as controllers. On the
surface this is confusing, but it means that you can choose how to think
about it in the way that makes the problem at hand easiest to solve.

It is almost always (like 99.999% of the time) a good idea to put some
frequency limiting on a differentiator. You do this by putting a pole
into your differentiator at some high frequency. When you do this
you've made a lead-lag compensator with its zero frequency at zero.


I suppose every system must have a pole in there somewhere, because
everything has inertia, or thermal inertia, or inductance, or something.

So as I understand this, a PID transfer function looks like

P + I/s + Ds

A low-pass filter has a transfer function like

1/(1 + Ts)

Bring them together in a somewhat standard form, and

(D/T) (s^2 + (P/D)s + (I/D)) / s(s + 1/T)

So that almost looks like a lead-lag, except the lead-lag can have a pole
somewhere else instead of at s=0.

A resistor+capacitor low-pass filter is a lot like an integrator, except
it's not an ideal integrator. If it were an ideal integrator it would be
a/s, and not a/(s+b).

I wanted to say a few things about what the pole at the origin means, but
I suppose I have to think of controller and plant, not just controller.
As I pointed out elsewhere, a lead-lag is generally a first-order
pole-zero pair; it's a bit odd to call a second-order transfer function
a "lead-lag".

In frequency-domain terms having the pole at s = 0 means that the error
term has a zero at s = 0, which means that your system has zero error at
DC. In nice, familiar continuous-time terms having an integrator in
your controller means that when it's presented with a non-zero error
your controller will just push harder and harder and harder until that
damn error goes _away_, fer cryin out loud!

Using a low-pass filter instead of an integrator is like hiring your
lazy brother-in-law*. He'll push on it up to a certain extent, but he
knows that you'll never fire him, so you only get so much effort out of him.
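The brother-in-law metaphor shows up clearly in a quick closed-loop simulation (my sketch; the unity static plant and all gains are arbitrary): the integrator drives the error to zero, while the low-pass settles at the classic 1/(1+K) offset.

```python
def run(controller_step, n=2000):
    y = 0.0
    for _ in range(n):
        e = 1.0 - y               # unit setpoint
        y = controller_step(e)    # plant: static, unity gain
    return 1.0 - y                # final error

def integrator(ki=0.1):
    s = {"u": 0.0}
    def step(e):
        s["u"] += ki * e          # keeps pushing while any error remains
        return s["u"]
    return step

def lowpass(gain=5.0, a=0.9):
    s = {"u": 0.0}
    def step(e):
        s["u"] = a * s["u"] + (1 - a) * gain * e   # leaky: finite DC gain
        return s["u"]
    return step

print(f"integrator final error: {run(integrator()):.4f}")   # -> 0.0000
print(f"low-pass final error:   {run(lowpass()):.4f}")      # -> 0.1667
```

The low-pass controller has DC gain 5 here, so the loop settles with error 1/(1+5) = 1/6: it pushes, but only so hard.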

* I'm in an odd mood today...

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
In article <10ba10b6g9onqe5@corp.supernews.com>,
Tim Wescott <tim@wescottnospamdesign.com> wrote:
Gregory L. Hansen wrote:


Choose your nonlinearity to make the system appear as linear as possible
to the controller. Most thermal systems respond fairly linearly to
temperature rather than power, so I'd stick with taking the square root.
Putting _two_ square terms in the loop is probably not a good idea at all.


I'm not sure I get you.

My error signal is linear in temperature, and my system heat capacities
and conductivities are highly independent of temperature in the range of
temperatures I'm in. So there's no trick to calculating something like

dP = k dT

because the conductivity k is a constant to high precision. But P=V^2/R,
so the steady-state change in temperature caused by a control action is
linear in V^2, not V.

So I wasn't sure whether I should PID my temperature error, or PID my
power error.

Obscurity again? It's not my day...

Since the plant is linear going from power in to temperature out, you
are linearizing the right way (taking the square root).
Spelling it out to avoid an obscurity moment, I should keep using
temperature error in the PID routine, and take the square root, rather
than squaring the temperature error for the PID routine and taking the
square root of the result.


Spehro's
suggestion for a "black box" constant power controller would work, but
who engineers it?
That sort of seemed like what I've been doing by taking the square root.

Yes. The problem is twofold: First, you're essentially nulling out the
pole so if your zero is off frequency by much then the cancellation
doesn't happen. Second, you're not affecting the behavior of the
subsystem with the pole, so it can still misbehave. The extreme case is
when you have an unstable pole and a non-minimum-phase zero. You end up
with an analytical transfer function that's stable, but the actual
system has a hidden state that will bang up against some limit or
another, and the system will stop working.


I just read in Leigh about moving poles. You need a transfer function
that puts zeroes on the old poles, and has a set of new poles. And he
mentioned some of the same warnings you have.

Maybe you'd need a control system to adjust the locations of the zeroes.

Now that I think of it, a lead-lag compensator would do exactly that, if
you choose the constants right.

But I don't think I have any reason to be tempted to try it.


It's generally the nature of a feedback system to move the poles around
-- that's why a feedback system with a PID doesn't usually have a pole
at s = 0 (or z = 1, take your choice [I'm being obscure again --
sorry]). So I'm not sure why he's advocating masking the existing poles
with zeros.
I don't think he was advocating it. He just mentioned it, and listed some
problems associated with it.

--
"The main, if not the only, function of the word aether has been to
furnish a nominative case to the verb 'to undulate'."
-- the Earl of Salisbury, 1894
 
In article <alv9b0ddchef5t3ksjv8jsrnlji52413kr@4ax.com>,
Spehro Pefhany <speffSNIP@interlogDOTyou.knowwhat> wrote:
On Wed, 26 May 2004 19:11:48 +0000 (UTC), the renowned
glhansen@steel.ucs.indiana.edu (Gregory L. Hansen) wrote:


snip
My error signal is linear in temperature, and my system heat capacities
and conductivities are highly independent of temperature in the range of
temperatures I'm in. So there's no trick to calculating something like

dP = k dT

because the conductivity k is a constant to high precision. But P=V^2/R,
so the steady-state change in temperature caused by a control action is
linear in V^2, not V.

So I wasn't sure whether I should PID my temperature error, or PID my
power error.

more snip

I would feed the temperature error into my controller (PID, modified
PID or whatever) and feed the control output into a
linearized-power-output black box.
What I should really probably do is simulate it and just try it. I think
I'm at the point where I can start simulating.

--
"Let us learn to dream, gentlemen, then perhaps we shall find the
truth... But let us beware of publishing our dreams before they have been
put to the proof by the waking understanding." -- Friedrich August Kekulé
 
