J
Jon Kirwan
Guest
On Tue, 2 Feb 2010 22:30:31 +0530, "pimpom"
<pimpom@invalid.invalid> wrote:
Jon Kirwan wrote:

On Sat, 30 Jan 2010 01:11:28 +0530, "pimpom" wrote:

[snip]
I like the biasing scheme mentioned by Jon and use it for all
my designs except the early ones using germanium transistors,
though I don't know the name either. The biasing transistor
can be mounted on the output transistors' heatsink for
temperature tracking.
[snip]

Okay. I'm giving this a little more thought -- as it applies
to temperature variation. The basic idea is that the two
bases of the two output BJTs (or output BJT structures) must
be separated a little bit in order to ensure both quadrants
are in forward conduction. With a "Vbe multiplier" in place
and with its own BJT tacked onto the same heat sink, the idea
is that the Vbe multiplier's own voltage separation will
shrink as temperature rises, in just the proportion needed to
maintain the designed forward conduction relationship of the
output BJTs.

To be honest, this designed forward conduction mode may not
be critical. It might move a class-AB around a little within
its AB operation, for example, if the voltage tracking with
temperature weren't flawlessly applied. And that may be
harmless. I don't know. On the other hand, if tweaked for
class-A I can imagine that it might move the operation into
class-AB; if tweaked for lower-dissipation class-AB it might
move the operation into class-B; and if class-B were desired
it could move it into class-C with associated distortion.

There are several parts of the basic Shockley equation. One
is the always-in-mind part that includes kT/q and relates
that to Vbe. The other is the Is part, and Eg is the key
there. So one thing that crosses my mind is in selecting the
BJT for the "rubber diode" thingy. Unless its Vbe (at 27C and
the designed constant current) and its Eg are the same, even
though it is a small signal device, doesn't that mean that
the variations over temperature will be two lines that cross
over only at one temperature point? In other words,
basically matching nowhere except at one temperature? It
seems crude.

You lost me for a while with the Eg term. You mean the
emitter transconductance?

No. Eg is the effective energy gap, specified in electron
volts. Eg (and Tnom, which is the nominal temperature at
which the Is used in the Shockley equation is specified) are
used to account for and calibrate the variation of Is over
the BJT's temperature. In other words, Is is a function of
T, namely Is(T), and not a constant at all.
If you solve the Shockley equation for Vbe and then look at
the derivative (partial, since Is is momentarily taken as a
constant) of it with respect to temperature, you will see
that it varies in the _wrong_ direction... the sign is wrong:
Id(T) = Is(T) * ( e^( q*Vd / (k*T) ) - 1 )
which becomes:
Vd(T) = (k*T/q) * ln( 1 + Ic/Is(T) )
The derivative is then trivially:
d Vd(T) = (k/q) * ln( 1 + Ic/Is(T) ) dT
which is a positive trend, very nearly +2mV/K for modest
Ic... but __positive__.
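To make that sign problem concrete, here is a quick numeric sketch of that partial derivative with Is held constant. The Ic and Is values are assumed example numbers, not from any particular device:

```python
import math

k_over_q = 8.61733e-5  # Boltzmann constant over electron charge, V/K
Ic = 10e-6             # assumed collector current, A
Is = 1e-15             # assumed saturation current, A, held constant here

# Naive partial derivative with Is taken as a constant:
#   dVd/dT = (k/q) * ln( 1 + Ic/Is )
dVd_dT = k_over_q * math.log(1.0 + Ic / Is)
print("%+.2f mV/K" % (dVd_dT * 1e3))  # prints +1.98 mV/K: the wrong sign
```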
Does that make sense? It just is wrong. BJTs don't _do_
that. The figure is more like -2mV/K. So why is the sign
wrong?
Because that isn't the whole picture. "Is" also varies with
temp. As in:
Is(T) = Is(Tnom) * (T/Tnom)^3 * e^( -(q*Eg/k) * (1/T - 1/Tnom) )
where Tnom is the nominal calibration temperature.
The new derivative is a bit large. To get it onto a post page
with some chance that it won't sprawl for lines and lines, I
have to set up these intermediate terms.

Assume:

  X = T^3 * Isat * e^( q*Eg/(k*Tnom) )
  Y = Tnom^3 * Ic * e^( q*Eg/(k*T) )

Then the derivative is (best viewed in a fixed-spaced font):

                           X + Y
  k*Tnom*T*( (X+Y)*ln( ---------- ) - 3*Y ) - q*Eg*( X*T + Y*T + Y*Tnom )
                        Isat*T^3
  -----------------------------------------------------------------------
                          q * Tnom * T * (X+Y)
What a mess, even then. Here again, Tnom is the nominal
temperature (in Kelvin, of course) at which the device data
is taken, Isat is the value of Is at Tnom, and Eg is the
effective energy gap in electron volts for the semiconductor
material. Of course, k is the usual Boltzmann constant, q the
usual electron charge, and T is the temperature of interest.
Eg often defaults to around 1.11 eV in SPICE, I think. For
Ic = 10uA and a stock Isat of about 1E-15 A, the figure comes
out to about -2.07mV/K in the vicinity of 20 Celsius ambient,
which is the more usual value.
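A numeric sketch of that result, using a central-difference derivative instead of the closed form. Tnom, Isat, and Ic here are assumed example values, with Isat taken as specified at the 20 C point:

```python
import math

k = 1.380649e-23       # Boltzmann constant, J/K
q = 1.602176634e-19    # electron charge, C
Eg = 1.11              # effective energy gap, eV (common SPICE default for Si)
Tnom = 293.15          # assumed nominal temperature, K (20 C)
Isat = 1e-15           # assumed Is at Tnom, A
Ic = 10e-6             # assumed collector current, A

def Is(T):
    """Is(T) = Is(Tnom) * (T/Tnom)^3 * e^( -(q*Eg/k) * (1/T - 1/Tnom) )"""
    return Isat * (T / Tnom) ** 3 * math.exp(-(q * Eg / k) * (1.0 / T - 1.0 / Tnom))

def Vbe(T):
    """Vbe(T) = (k*T/q) * ln( 1 + Ic/Is(T) )"""
    return (k * T / q) * math.log(1.0 + Ic / Is(T))

# Central-difference derivative near 20 C ambient
T, dT = 293.15, 0.01
slope = (Vbe(T + dT) - Vbe(T - dT)) / (2.0 * dT)
print("dVbe/dT = %.3f mV/K" % (slope * 1e3))  # about -2.06 mV/K here
```

With these assumptions the slope lands close to the -2.07 mV/K figure above; the exact third digit shifts a little with the choice of Tnom and Eg.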
The "Is" term is the y-axis intercept (on a log(Ic) vs. Vbe
plot), which isn't actually measured, by the way, but instead
extrapolated from measured values elsewhere.
All this is the reason I was asking about the voltage bias
mechanism (that rubber diode/Vbe multiplier thing) and
selecting its BJT vs those in the output stage. (If PNP
_and_ NPN are used, they probably do not vary the same as
each other, so there is another problem there as well.) It
fries my brain thinking about selecting
"perfect" parts for this.
Another issue I'm starting to wonder about is sweeping out
charges in the BJTs at higher frequencies and providing
sufficient drive current to do it quickly enough. But one
thing at a time, I guess.
Perhaps a short diversion into my own background may be
appropriate here. Shortage of funds, and scarcity of good
books even for those who could afford them in a
technologically primitive environment, kept me from delving
deeply into semiconductor physics when I started teaching
myself electronics over 40 years ago. I had advanced math in
college, but lack of practice has made me very rusty. You're
probably much better at that.

Your own experiences sound very much like mine, except that
you _did_ something with yourself in this area when I did
not. Something I very much respect in you and disrespect in
me. I grew up poor enough that I had to literally live in
homes without walls and work the fields as a laborer child
(before the laws that now prevent it -- sad in some ways,
good in others) in order to eat and survive. So I understand
"shortage of funds" in my very gut. Perhaps what differed a
little is that I also was living near Portland and there was
a library system I could access, riding my bicycle as a kid.
And I would sometimes even take a bus and use the university
library (particularly the 5th floor where the science
subjects were located.) I scored an 800 on the math section
of my SAT and was rewarded with entry into a university
scholars program at PSU. However, I had to work to pay for
the classes and books and in the end I simply couldn't handle
all of it on my own. Without a dad (he died when I was 7)
and no family to help out, I couldn't manage to do everything
and get by at school, too. So I dropped out well before the
1st year completed. Everything I know is self-taught. It's
a commitment.
I have been honored by being asked by Portland State
University to temporarily teach as an adjunct professor,
though. And I did that for a few years until they could find
their replacement professors. I enjoyed it and I think I did
well. When I visited the department, last year after some
dozen years of absence, I was greeted in the hallways by many
people whom I sometimes barely remembered, with sincere smiles
and talks of those days. So I must have made some kind of
impression there.
Maybe a difference is that I've made the study of mathematics
a centerpiece for me. Besides, it's central to the work I do
so I can't really ignore it. But since I love studying it, I
would do it, anyway. None of this means I'm properly trained
in it. However, even these days I get sit-down time almost
every month or two with an active physicist, for some
additional education in Lie groups or
catastrophe theory or reflection spaces and manifolds, and so
on. I find I really love both finite and infinite group
theory work.
Over the years, I developed my own shortcuts and approximations
using mostly basic algebra, trigonometry and bits of calculus
here and there, blended with empirical formulas.

And here, most likely, is our fundamental difference. I
cannot remember things without understanding their deep
details. I chalk this up to my "autism." (I have two disabled
children on the spectrum, the youngest is almost exactly like
I was at his age.) When I took calculus at college, it was
all a blur trying to remember what was called what and how
they applied. However, if I _understood_ it deeply, could
picture it well, I could re-derive almost anything on the
spot when I needed it for a test. In other words, while most
of the other students appeared to simply take notes and keep
track of details (and shortcuts) many of which they'd
remember, I couldn't work like that. My memory was _zero_
for names of people, and similarly for names of math
formulas. I had to understand them viscerally and "see" them
well, in order to be able to remember the concept. However,
I still couldn't remember the specific "formula." Just the
concept -- the visualization, the image. That wouldn't
provide me with an answer to a problem, merely an approach
that "seemed right." So I would simply use that image to
guide me in re-deriving the formula from scratch, every time.
The upshot was that I took longer than most in completing my
tests, because I spent so much additional time quickly
running through the derivations of the rules I needed; but
the problems I did answer, I got right.
I've never been satisfied, as a result of my own limitations
here, to memorize shortcuts and approximations. It doesn't
give me "sight." They are useless to me because I cannot use
them for other derivations; they are themselves only blunted
tools for specific purposes that cannot be extrapolated
anywhere else, which then forces me to depend upon a memory I
don't have. What I need is to _see_ the physics itself so
that I can then derive those approximations and shortcuts on
the fly, tailored to the specific situation I'm facing at the
time.
In any case, the Shockley equation seems to hold fairly well
in practice for the purpose of bias regulation within the
temperature range normally encountered.

No, it doesn't. Because the SIGN is wrong!! The Vbe doesn't
rise with rising temperature, it falls.
Temperature tracking with simple circuits like diodes in
series or a Vbe multiplier cannot be more than approximate.

That seems to be what I'm getting. One "lucky" circumstance
seems to be that the Vbe multiplier is supposed to produce
about two Vbes with k=2 in k*Vbe, just when there are two
Vbes in the output structure. That way, if the Vbe in the
multiplier moves around with temperature, the multiplier
doubles it in just the right way to handle the actual two
Vbes in the output pair. If it had been needed to set k=3 or
k=1.5 or k= anything else, there'd have been a problem again
because they wouldn't vary together.
But this brings up the other problem I am talking about. If
Eg isn't the same figure, the slopes over temperature for the
Vbe multiplier and the output BJTs won't be the same. That
means they can intersect at some temperature, but never
really be right anywhere else at all.
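To put a rough number on that, here is a sketch comparing a k=2 multiplier against two output Vbes when the two device models differ only in Eg. All values here, including the 0.02 eV Eg difference, are made-up illustration numbers:

```python
import math

k = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19  # electron charge, C
Tnom = 293.15        # assumed shared calibration temperature, K

def vbe(T, Ic, Isat, Eg):
    """Vbe via Shockley with the Is(T) temperature model; Eg in eV."""
    Is = Isat * (T / Tnom) ** 3 * math.exp(-(q * Eg / k) * (1.0 / T - 1.0 / Tnom))
    return (k * T / q) * math.log(1.0 + Ic / Is)

# Multiplier BJT at Eg = 1.11 eV vs output BJTs at Eg = 1.13 eV (hypothetical)
for T in (273.15, 293.15, 313.15, 333.15):
    err = 2.0 * vbe(T, 1e-3, 1e-15, 1.11) - 2.0 * vbe(T, 1e-3, 1e-15, 1.13)
    print("T = %.0f K  tracking error = %+.2f mV" % (T, err * 1e3))
```

The error is exactly zero only where the two curves cross (Tnom, in this contrived case) and grows to several millivolts a few tens of degrees away, which is the one-intersection problem described above.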
Worse than this is the fact that PNPs are used on one side
and NPNs on the other. They _cannot_ possibly vary their
Vbes in matched ways. It's got to be a nasty problem. And
it seems to argue, in my mind, for some modified version of a
quasicomplementary structure on the output. What argues
against it so much is that, again, the driving structure
before the quasi structure is driving two kinds of quadrants
and this means the cross-over area _must_ be nasty looking,
indeed.
Because of that, I searched around and found out that there
is a correction structure to fix the quasi crossover problem.
It appears to use something called a Baxandall diode, though
for now I haven't learned the details of how it does what it
does.
Such a device can sense only the heatsink temperature and,
except under long-term static conditions, that temp will
almost always be different from Tj of the output devices.
That Tj is what needs to be tracked, and when the output
transistors are pumping out audio power, that difference can
be tens of degrees.

I can believe it! But which quadrant do you decide to attach
it closer to?
I've seen this as a modification. In ASCII form:

        A
        |
    ,---+---,
    |       |
    |       \
    |       / R3
    \       \
    / R2    /
    \       |
    /       +--- C
    |       |
    |       |
    |       |/c  Q1
    +-------|
    |       |>e
    \       |
    / R1    |
    \       |
    /       |
    |       |
    '---+---'
        |
        B

We've already decided that R1 might be both a simple resistor
plus a variable pot to allow adjustment. The usual case I
see on the web does NOT include R3, though. However, I've
seen a few examples where R3 (small-valued) exists and one of
the two output BJTs' base is connected at C and not at A.
The above circuit is a somewhat different version of the Vbe
multiplier/rubber diode thing. The difference being R3,
which I'm still grappling with.
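For reference, ignoring base current and R3, the structure above sets its A-to-B voltage by the usual rubber-diode ratio. A minimal sketch with made-up resistor values:

```python
# Ideal Vbe multiplier (base current and R3 ignored):
#   V(A-B) = Vbe * (1 + R2/R1)
Vbe_q1 = 0.65            # assumed Vbe of Q1 at its bias current, volts
R1, R2 = 2200.0, 2200.0  # assumed example values, ohms
V_ab = Vbe_q1 * (1.0 + R2 / R1)
print("bias spread = %.2f V" % V_ab)  # prints 1.30 V: about two Vbes, k = 2
```

Equal R1 and R2 give k = 2, and since Q1's Vbe carries the same temperature coefficient discussed above, the whole spread shrinks as Q1 heats.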
I've seen R3 used in that position too, but never gave it much
thought until you brought it up. Offhand I still can't see a
reason for it either. Maybe for stability against a local
oscillation? Perhaps taking some time to think about it will
bring some revelation. Or someone else can save us the trouble
and enlighten us.
It is often a small value, 10s of Ohms. It might just be
what you are talking about, because I've often seen people
talk about needing a 33 ohm base resistor on emitter follower
BJTs to snub high frequency oscillations. So you might be on
the right track there. Hopefully, someone else knows and
will feel like saying.
But does anyone know, before I go writing equations all over
the place, why R3 is added? Or is R3 just some book author's
wild-ass guess?

A possibility. But I wouldn't go out on a limb and call it
that. :)

Hehe.
This is all pressing me into studying the output structure
more, I guess. It basically looks simple when I wave my
hands over it, but I suspect the intimate details need to be
exposed to view. On to that part, I suppose.
Jon