
"rickman" <gnuarm@gmail.com> wrote in message
news:khqr7p$q4k$1@dont-email.me...
I know they are faster nowadays, any idea how fast a 16 bit converter
is?
How fast can you afford? ;-)

IIRC, 14 or 16 bit goes for ca. $1/MSPS. Beware the INL usually stops at
12 bits, typical of pipelined types (which the fast high-bit ones all
are). DNL is all that matters for SDR, so they aren't useless; INL could
be calibrated, as long as you can generate a 16 bit-linear ramp (now
Larkin will chime in..).
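
The calibration itself is just a code-density exercise: feed a slow full-scale
ramp, histogram the output codes, and build a lookup table from the cumulative
histogram. A rough numpy sketch, assuming the ramp really is more linear than
the converter:

import numpy as np

def inl_correction_lut(ramp_codes, nbits=16):
    # ramp_codes: non-negative integer ADC samples captured while a slow,
    # linear, full-scale ramp is applied.  An ideal converter puts the same
    # number of hits in every code bin, so the cumulative histogram gives the
    # real transition points and hence a raw-code -> linearised-value table.
    nbins = 1 << nbits
    hist = np.bincount(ramp_codes, minlength=nbins).astype(float)
    cum = np.cumsum(hist)
    return (cum - hist / 2.0) / cum[-1] * (nbins - 1)

# usage: corrected = inl_correction_lut(ramp_codes)[raw_samples]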

Tim

--
Deep Friar: a very philosophical monk.
Website: http://seventransistorlabs.com
 
On Mar 14, 12:07 am, "Tim Williams" <tmoran...@charter.net> wrote:
"rickman" <gnu...@gmail.com> wrote in message

news:khqr7p$q4k$1@dont-email.me...

I know they are faster nowadays, any idea how fast a 16 bit converter
is?

How fast can you afford?  ;-)

IIRC, 14 or 16 bit goes for ca. $1/MSPS.  Beware the INL usually stops at
12 bits, typical of pipelined types (which the fast high-bit ones all
are).  DNL is all that matters for SDR, so they aren't useless; INL could
be calibrated, as long as you can generate a 16 bit-linear ramp (now
Larkin will chime in..).

Tim
Analog Devices has a 16-bit, 250 MSPS part; it is "only" ~$150.



-Lasse
 
rickman <gnuarm@gmail.com> wrote:

On 3/12/2013 4:11 PM, Nico Coesel wrote:
rickman<gnuarm@gmail.com> wrote:

I would like to hear from others about why the front end is the hard
part. Exactly how do the attenuators work? Does the amp remain set to
a given gain and the large signals are attenuated down to a fixed low
range?

You need to use capacitive dividers which need adjustment. In my
design I used one varicap (controlled by a DAC) to do all the
necessary adjustments for several ranges. Nowadays you could use a 12
bit ADC so you wouldn't need a variable gain amplifier. Another trick
to get a programmable range is to vary the reference voltage of the
ADC. I think I re-did the design of the front-end about 3 or 4 times.

I don't get how a 12 bit ADC solves the attenuator problem. That is
only 2 bits more than what I would like to see in a front end. Once a

It's a factor of 16 attenuation you don't need to do in hardware. So if
the hardware attenuator does 1:1.5, 1:10 and 1:100 (which is doable
with 2 relays) you save quite some circuitry. 8 bits is probably more
than enough. It will be hard to get the response so flat that more
than 8 bits actually adds accuracy to the readout.
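
In other words, the relays only need to get the signal into the ADC's window;
the rest of the scaling is arithmetic. Something like this, where the ADC
full scale and the screen height are made-up values for illustration:

HW_ATTEN = (1.5, 10.0, 100.0)   # the three settings reachable with 2 relays
ADC_FS_V = 2.0                  # assumed full-scale at the ADC input
SCREEN_DIVS = 8

def choose_range(volts_per_div):
    # pick the smallest attenuation that keeps a full screen inside the ADC
    # range, then make up the remaining scaling digitally
    v_full = volts_per_div * SCREEN_DIVS
    for att in HW_ATTEN:
        if v_full / att <= ADC_FS_V:
            return att, ADC_FS_V / (v_full / att)   # (relay setting, digital gain)
    return HW_ATTEN[-1], 1.0                        # out of range

# choose_range(0.5) -> (10.0, 5.0): 4 V per screen through the 1:10 tap,
# the remaining factor of 5 applied to the 12-bit samples in software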

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
 
On Sat, 9 Mar 2013 10:22:38 -0800 (PST), francescopoderico@googlemail.com wrote:

Hi all,
I've started designing a 200 MSPS oscilloscope with a 25 MHz analog bandwidth and a 3 ksample/channel buffer.
The oscilloscope can be connected to a PC via USB, and eventually via Ethernet and WiFi.
The trigger is (at the moment) rising, falling... auto, normal.

I would appreciate suggestions and/or comments on possible improvements and functionality that could make this oscilloscope interesting.

Thanks,
Francesco
OK, here's a patentable idea that I donate to humanity:

The scope trigger fires a delay and then makes some analog pattern, like a sine
burst or chirp or some pseudo-random mess made from delay lines or something.
Something with a nice sharp autocorrelation function.

Mix that into the scope vertical signal. The PC software looks at that, figures
out its timing, and removes the +-1 clock jitter from the displayed data.

The burst can be delayed and thereby separated in time from the displayable
waveform, or it can be superimposed, no delay, and subtracted out.

Of course, the idea works even better if you have another ADC channel to spare.
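
For what it's worth, the software side is only a cross-correlation plus a
fractional shift. An untested sketch, assuming the burst template and its
nominal position in the record are known:

import numpy as np

def burst_position(captured, template):
    # locate the known reference burst in the captured record and refine the
    # correlation peak with a parabolic fit for sub-sample resolution
    c = np.correlate(captured, template, mode='valid')
    k = int(np.argmax(c))
    if 0 < k < len(c) - 1:
        y0, y1, y2 = c[k - 1], c[k], c[k + 1]
        k += 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return k

def deskew(record, captured, template, nominal_pos):
    # shift the record so the burst lands where the trigger said it should,
    # removing the +-1 sample trigger uncertainty (band-limited shift)
    shift = -(burst_position(captured, template) - nominal_pos)
    f = np.fft.rfftfreq(len(record))
    return np.fft.irfft(np.fft.rfft(record) * np.exp(-2j * np.pi * f * shift),
                        len(record))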


--

John Larkin Highland Technology Inc
www.highlandtechnology.com jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
 
On Thu, 14 Mar 2013 09:29:48 -0700, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

OK, here's a patentable idea that I donate to humanity:

The scope trigger fires a delay and then makes some analog pattern, like a sine
burst or chirp or some pseudo-random mess made from delay lines or something.
Something with a nice sharp autocorrelation function.

Mix that into the scope vertical signal. The PC software looks at that, figures
out its timing, and removes the +-1 clock jitter from the displayed data.

The burst can be delayed and thereby separated in time from the displayable
waveform, or it can be superimposed, no delay, and subtracted out.

Of course, the idea works even better if you have another ADC channel to spare.
We're doing something like that to synchronize phase on several
free-running ADCs in a distributed data acquisition system on a
non-deterministic network. Works, but I'd rather have it done by
hardware.
 
On 3/14/2013 12:03 PM, Nico Coesel wrote:
rickman<gnuarm@gmail.com> wrote:

On 3/12/2013 4:11 PM, Nico Coesel wrote:
rickman<gnuarm@gmail.com> wrote:

I would like to hear from others about why the front end is the hard
part. Exactly how do the attenuators work? Does the amp remain set to
a given gain and the large signals are attenuated down to a fixed low
range?

You need to use capacitive dividers which need adjustment. In my
design I used one varicap (controlled by a DAC) to do all the
necessary adjustments for several ranges. Nowadays you could use a 12
bit ADC so you wouldn't need a variable gain amplifier. Another trick
to get a programmable range is to vary the reference voltage of the
ADC. I think I re-did the design of the front-end about 3 or 4 times.

I don't get how a 12 bit ADC solves the attenuator problem. That is
only 2 bits more than what I would like to see in a front end. Once a

It's a factor of 16 attenuation you don't need to do in hardware. So if
the hardware attenuator does 1:1.5, 1:10 and 1:100 (which is doable
with 2 relays) you save quite some circuitry. 8 bits is probably more
than enough. It will be hard to get the response so flat that more
than 8 bits actually adds accuracy to the readout.
I've worked with 8 bit scopes and the vertical clearly shows steps which
I find interfere with making reasonable measurements. That's why I said
2 spare bits out of 12. There is also a need for zooming in on a
portion of a captured trace. At 8 bits all you see is the steps. With
a full 12 bits you have a little bit of extra resolution so you can
actually get a bit of detail.

--

Rick
 
rickman <gnuarm@gmail.com> writes:

On 3/14/2013 12:03 PM, Nico Coesel wrote:
rickman<gnuarm@gmail.com> wrote:

On 3/12/2013 4:11 PM, Nico Coesel wrote:
rickman<gnuarm@gmail.com> wrote:

I would like to hear from others about why the front end is the hard
part. Exactly how do the attenuators work? Does the amp remain set to
a given gain and the large signals are attenuated down to a fixed low
range?

You need to use capacitive dividers which need adjustment. In my
design I used one varicap (controlled by a DAC) to do all the
necessary adjustments for several ranges. Nowadays you could use a 12
bit ADC so you wouldn't need a variable gain amplifier. Another trick
to get a programmable range is to vary the reference voltage of the
ADC. I think I re-did the design of the front-end about 3 or 4 times.

I don't get how a 12 bit ADC solves the attenuator problem. That is
only 2 bits more than what I would like to see in a front end. Once a

It's a factor of 16 attenuation you don't need to do in hardware. So if
the hardware attenuator does 1:1.5, 1:10 and 1:100 (which is doable
with 2 relays) you save quite some circuitry. 8 bits is probably more
than enough. It will be hard to get the response so flat that more
than 8 bits actually adds accuracy to the readout.

I've worked with 8 bit scopes and the vertical clearly shows steps
which I find interfere with making reasonable measurements. That's
why I said 2 spare bits out of 12. There is also a need for zooming
in on a portion of a captured trace. At 8 bits all you see is the
steps. With a full 12 bits you have a little bit of extra resolution
so you can actually get a bit of detail.
I agree a 12 bit (or 16 bit!) scope would be nice. Lecroy make one I
think but it is very expensive.

The situation with the standard 8 bits is not quite as bad as you
portray in a higher end scope. They can sample at the full maximum
digitizer rate (5 or 20 GSPS say) then do real-time averaging/DSP on it
so that each point plotted at lower sweep rates represents the average
of hundreds of samples potentially. The noise at 20GSPS smears out the
steps then the averaging smooths out the noise. Or something like that.
Anyway the result is much better than you would think from the 8 bit
input.
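
The "high res" decimation itself is trivial; the work is doing it at the full
sample rate in hardware. In outline, assuming the raw record is already in
memory:

import numpy as np

def high_res(samples, n_avg):
    # average n_avg consecutive samples per displayed point.  With enough
    # noise to dither the 8-bit steps, every factor of 4 in n_avg is worth
    # roughly one extra bit of vertical resolution, traded for bandwidth.
    m = (len(samples) // n_avg) * n_avg
    return samples[:m].astype(float).reshape(-1, n_avg).mean(axis=1)

# e.g. 20 GS/s decimated by 256 -> ~78 MS/s record with up to ~4 extra
# effective bits, noise and front-end flatness permitting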


--

John Devereux
 
rickman <gnuarm@gmail.com> wrote:

On 3/14/2013 12:03 PM, Nico Coesel wrote:
rickman<gnuarm@gmail.com> wrote:

On 3/12/2013 4:11 PM, Nico Coesel wrote:
rickman<gnuarm@gmail.com> wrote:

I would like to hear from others about why the front end is the hard
part. Exactly how do the attenuators work? Does the amp remain set to
a given gain and the large signals are attenuated down to a fixed low
range?

You need to use capacitive dividers which need adjustment. In my
design I used one varicap (controlled by a DAC) to do all the
necessary adjustments for several ranges. Nowadays you could use a 12
bit ADC so you wouldn't need a variable gain amplifier. Another trick
to get a programmable range is to vary the reference voltage of the
ADC. I think I re-did the design of the front-end about 3 or 4 times.

I don't get how a 12 bit ADC solves the attenuator problem. That is
only 2 bits more than what I would like to see in a front end. Once a

It's a factor of 16 attenuation you don't need to do in hardware. So if
the hardware attenuator does 1:1.5, 1:10 and 1:100 (which is doable
with 2 relays) you save quite some circuitry. 8 bits is probably more
than enough. It will be hard to get the response so flat that more
than 8 bits actually adds accuracy to the readout.

I've worked with 8 bit scopes and the vertical clearly shows steps which
I find interfere with making reasonable measurements. That's why I said
That has more to do with how the software shows the signal.
Interpolation can solve a lot.

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
 
"John Devereux" <john@devereux.me.uk> wrote in message
news:87620tvz6t.fsf@devereux.me.uk...
The situation with the standard 8 bits is not quite as bad as you
portray in a higher end scope. They can sample at the full maximum
digitizer rate (5 or 20 GSPS say) then do real-time averaging/DSP on it
so that each point plotted at lower sweep rates represents the average
of hundreds of samples potentially. The noise at 20GSPS smears out the
steps then the averaging smooths out the noise. Or something like that.
Anyway the result is much better than you would think from the 8 bit
input.
It also reduces aliasing. Mine has a "high res" mode which does this --
only works below a certain range, of course.

Tim

--
Deep Friar: a very philosophical monk.
Website: http://seventransistorlabs.com
 
On 3/15/2013 4:34 AM, John Devereux wrote:
rickman<gnuarm@gmail.com> writes:

I've worked with 8 bit scopes and the vertical clearly shows steps
which I find interfere with making reasonable measurements. That's
why I said 2 spare bits out of 12. There is also a need for zooming
in on a portion of a captured trace. At 8 bits all you see is the
steps. With a full 12 bits you have a little bit of extra resolution
so you can actually get a bit of detail.

I agree a 12 bit (or 16 bit!) scope would be nice. Lecroy make one I
think but it is very expensive.

The situation with the standard 8 bits is not quite as bad as you
portray in a higher end scope. They can sample at the full maximum
digitizer rate (5 or 20 GSPS say) then do real-time averaging/DSP on it
so that each point plotted at lower sweep rates represents the average
of hundreds of samples potentially. The noise at 20GSPS smears out the
steps then the averaging smooths out the noise. Or something like that.
Anyway the result is much better than you would think from the 8 bit
input.
You say that the 16 bit converters are expensive, then talk about using
a 20 GHz 8 bit ADC. Is that not expensive, not to mention the clocking,
the board for the high speed signals and the power supply to make all
this happen? I can't imagine this is actually a better approach to
designing a scope with a stated goal of 20-25 MHz bandwidth. I would
like to see at least 300 MHz, but the OP says 25 is good enough.

--

Rick
 
On 3/15/2013 6:06 AM, Nico Coesel wrote:
rickman<gnuarm@gmail.com> wrote:

On 3/14/2013 12:03 PM, Nico Coesel wrote:

It's a factor of 16 attenuation you don't need to do in hardware. So if
the hardware attenuator does 1:1.5, 1:10 and 1:100 (which is doable
with 2 relays) you save quite some circuitry. 8 bits is probably more
than enough. It will be hard to get the response so flat that more
than 8 bits actually adds accuracy to the readout.

I've worked with 8 bit scopes and the vertical clearly shows steps which
I find interfere with making reasonable measurements. That's why I said

That has more to do with how the software shows the signal.
Interpolation can solve a lot.
That may be. The OP was talking about debugging the sinc reconstruction
and I've been thinking a little bit about just how useful that is. Is
there a downside to sinc reconstruction, other than the work required?
I was thinking an aliased signal might interfere with this, but now that
I give it some thought, I realize they are two separate issues. If you
have an aliased tone, it will just be a tone in the display whether you
use sinc reconstruction or not.
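
It's easy to convince yourself of that numerically: a tone above Nyquist
produces exactly the samples of its alias, so no reconstruction scheme can
tell them apart. For example, using the OP's 200 MSPS rate:

import numpy as np

fs = 200e6
t = np.arange(64) / fs
tone = np.sin(2 * np.pi * 130e6 * t)    # 130 MHz input, above Nyquist
alias = np.sin(2 * np.pi * 70e6 * t)    # its alias at 200 - 130 = 70 MHz
print(np.max(np.abs(tone + alias)))     # ~0: identical samples (opposite sign),
                                        # so the display shows a 70 MHz tone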

In fact, the *very* low end scope I was using may have had a limitation
in the display itself! 8 bits is 256 steps. That shouldn't be too big
a problem.

--

Rick
 
rickman <gnuarm@gmail.com> writes:

On 3/15/2013 4:34 AM, John Devereux wrote:
rickman<gnuarm@gmail.com> writes:

I've worked with 8 bit scopes and the vertical clearly shows steps
which I find interfere with making reasonable measurements. That's
why I said 2 spare bits out of 12. There is also a need for zooming
in on a portion of a captured trace. At 8 bits all you see is the
steps. With a full 12 bits you have a little bit of extra resolution
so you can actually get a bit of detail.

I agree a 12 bit (or 16 bit!) scope would be nice. Lecroy make one I
think but it is very expensive.

The situation with the standard 8 bits is not quite as bad as you
portray in a higher end scope. They can sample at the full maximum
digitizer rate (5 or 20 GSPS say) then do real-time averaging/DSP on it
so that each point plotted at lower sweep rates represents the average
of hundreds of samples potentially. The noise at 20GSPS smears out the
steps then the averaging smooths out the noise. Or something like that.
Anyway the result is much better than you would think from the 8 bit
input.

You say that the 16 bit converters are expensive, then talk about
using a 20 GHz 8 bit ADC. Is that not expensive, not to mention the
clocking, the board for the high speed signals and the power supply to
make all this happen?
Yes, it is very expensive. I did say "high end scope", by which I mean
"really expensive". They usually need to go to GHz anyway, so already
have the high speed digitizer. At lower bandwidths they can utilise the
excess samples to increase the apparent resolution.

I can't imagine this is actually a better approach to designing a
scope with a stated goal of 20-25 MHz bandwidth. I would like to see
at least 300 MHz, but the OP says 25 is good enough.
Absolutely, to me the only point of a 25MHz scope would be if it was
higher resolution, 16+ bit ideally. Otherwise you may as well just use
one of those cheap USB gadgets. A "dynamic signal analyser" that goes
above 100kHz seems to be missing from the market AFAIK. So it could do
good spectrum analysis, evaluate noise, servo loops, have a tracking
generator and plot filter responses, that sort of thing.
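
The tracking-generator part is not much code once the hardware can source a
sine and capture both channels; a single-bin DFT (lock-in style) per frequency
point gives magnitude and phase. A sketch, assuming simultaneously captured
stimulus and response records:

import numpy as np

def complex_gain(f, fs, stimulus, response):
    # lock-in style single-bin DFT: complex gain of the device under test at
    # frequency f, from simultaneously captured stimulus and response records
    n = np.arange(len(stimulus))
    ref = np.exp(-2j * np.pi * f * n / fs)
    return np.dot(response, ref) / np.dot(stimulus, ref)

# sweep f, drive the DUT from the generator, and plot
# 20*log10(abs(complex_gain(...))) for the magnitude response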


--

John Devereux
 
rickman <gnuarm@gmail.com> wrote:

On 3/15/2013 6:06 AM, Nico Coesel wrote:
rickman<gnuarm@gmail.com> wrote:

On 3/14/2013 12:03 PM, Nico Coesel wrote:

It's a factor of 16 attenuation you don't need to do in hardware. So if
the hardware attenuator does 1:1.5, 1:10 and 1:100 (which is doable
with 2 relays) you save quite some circuitry. 8 bits is probably more
than enough. It will be hard to get the response so flat that more
than 8 bits actually adds accuracy to the readout.

I've worked with 8 bit scopes and the vertical clearly shows steps which
I find interfere with making reasonable measurements. That's why I said

That has more to do with how the software shows the signal.
Interpolation can solve a lot.

That may be. The OP was talking about debugging the sinc reconstruction
and I've been thinking a little bit about just how useful that is. Is
there a downside to sinc reconstruction, other than the work required?
It is usable if you have at least 5 samples per period. So that is
0.2fs. The whole problem though is not the number of samples per
period. According to sampling theory the signal is there but it just
needs to be displayed properly so the operator can see a signal
instead of some 'random' dots. With the proper signal reconstruction
algorithm you can display signals up to the Nyquist limit (0.5fs).
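
Plain Whittaker-Shannon (sinc) interpolation is the textbook way to do that
on the display side; slow but simple for a few thousand points. A sketch:

import numpy as np

def sinc_interp(x, t_out):
    # evaluate the band-limited signal through samples x (taken at t = 0, 1,
    # 2, ... in sample units) at arbitrary times t_out.  O(N*M) brute force.
    n = np.arange(len(x))
    return np.dot(np.sinc(t_out[:, None] - n[None, :]), x)

# e.g. a 10x denser trace for the display:
#   fine = sinc_interp(samples, np.arange(0.0, len(samples) - 1, 0.1))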

I was thinking an aliased signal might interfere with this, but now that
I give it some thought, I realize they are two separate issues. If you
have an aliased tone, it will just be a tone in the display whether you
use sinc reconstruction or not.
In my design I used a fixed samplerate (250MHz) and a standard PC
memory module. 1GB already provides for more than 2 seconds of storage
for 2 channels. That solves the whole interference issue and it allows
the use of a proper anti-aliasing filter. With polynomial approximation I
could reconstruct a signal even when it's close to the Nyquist limit. I
tested it and I could get it to work for frequencies up to 0.45fs.
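
For a captured block you can also get a band-limited reconstruction by
zero-padding the FFT instead of the polynomial fit; it holds essentially up to
Nyquist apart from edge effects. A sketch of that alternative:

import numpy as np

def fft_upsample(x, factor):
    # band-limited upsampling of one captured block by zero-padding its
    # spectrum (an alternative to polynomial reconstruction for display)
    X = np.fft.rfft(x)
    n_out = len(x) * factor
    Xp = np.zeros(n_out // 2 + 1, dtype=complex)
    Xp[:len(X)] = X
    return np.fft.irfft(Xp, n_out) * factor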

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
 
On 3/16/2013 8:56 AM, Nico Coesel wrote:
rickman<gnuarm@gmail.com> wrote:

That may be. The OP was talking about debugging the sinc reconstruction
and I've been thinking a little bit about just how useful that is. Is
there a downside to sinc reconstruction, other than the work required?

It is usable if you have at least 5 samples per period. So that is
0.2fs. The whole problem though is not the number of samples per
period. According to sampling theory the signal is there but it just
needs to be displayed properly so the operator can see a signal
instead of some 'random' dots. With the proper signal reconstruction
algorithm you can display signals up to the Nyquist limit (0.5fs).

I was thinking an aliased signal might interfere with this, but now that
I give it some thought, I realize they are two separate issues. If you
have an aliased tone, it will just be a tone in the display whether you
use sinc reconstruction or not.

In my design I used a fixed samplerate (250MHz) and a standard PC
memory module. 1GB already provides for more than 2 seconds of storage
for 2 channels. That solves the whole interference issue and it allows
the use of a proper anti-aliasing filter. With polynomial approximation I
could reconstruct a signal even when it's close to the Nyquist limit. I
tested it and I could get it to work for frequencies up to 0.45fs.
I'm not following. Are you saying you need a long buffer of data in
order to reconstruct the signal properly?

--

Rick
 
rickman <gnuarm@gmail.com> wrote:

On 3/16/2013 8:56 AM, Nico Coesel wrote:
rickman<gnuarm@gmail.com> wrote:

That may be. The OP was talking about debugging the sinc reconstruction
and I've been thinking a little bit about just how useful that is. Is
there a downside to sinc reconstruction, other than the work required?

It is usable if you have at least 5 samples per period. So that is
0.2fs. The whole problem though is not the number of samples per
period. According to sampling theory the signal is there but it just
needs to be displayed properly so the operator can see a signal
instead of some 'random' dots. With the proper signal reconstruction
algorithm you can display signals up to the Nyquist limit (0.5fs).

I was thinking an aliased signal might interfere with this, but now that
I give it some thought, I realize they are two separate issues. If you
have an aliased tone, it will just be a tone in the display whether you
use sinc reconstruction or not.

In my design I used a fixed samplerate (250MHz) and a standard PC
memory module. 1GB already provides for more than 2 seconds of storage
for 2 channels. That solves the whole interference issue and it allows
the use of a proper anti-aliasing filter. With polynomial approximation I
could reconstruct a signal even when it's close to the Nyquist limit. I
tested it and I could get it to work for frequencies up to 0.45fs.

I'm not following. Are you saying you need a long buffer of data in
order to reconstruct the signal properly?
You need about 10 samples extra at the beginning and end to do a
proper reconstruction. Lots of audio editing software does exactly the
same BTW. Using a fixed samplerate solves a lot of signal processing
problems but also dictates a lot of processing needs to be done in
hardware to keep the speed reasonable.

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
 
On Sun, 17 Mar 2013 10:19:23 GMT, nico@puntnl.niks (Nico Coesel) wrote:

rickman <gnuarm@gmail.com> wrote:

On 3/16/2013 8:56 AM, Nico Coesel wrote:
rickman<gnuarm@gmail.com> wrote:

That may be. The OP was talking about debugging the sinc reconstruction
and I've been thinking a little bit about just how useful that is. Is
there a downside to sinc reconstruction, other than the work required?

It is usable if you have at least 5 samples per period. So that is
0.2fs. The whole problem though is not the number of samples per
period. According to sampling theory the signal is there but it just
needs to be displayed properly so the operator can see a signal
instead of some 'random' dots. With the proper signal reconstruction
algorithm you can display signals up to the Nyquist limit (0.5fs).

I was thinking an aliased signal might interfere with this, but now that
I give it some thought, I realize they are two separate issues. If you
have an aliased tone, it will just be a tone in the display whether you
use sinc reconstruction or not.

In my design I used a fixed samplerate (250MHz) and a standard PC
memory module. 1GB already provides for more than 2 seconds of storage
for 2 channels. That solves the whole interference issue and it allows
the use of a proper anti-aliasing filter. With polynomial approximation I
could reconstruct a signal even when it's close to the Nyquist limit. I
tested it and I could get it to work for frequencies up to 0.45fs.

I'm not following. Are you saying you need a long buffer of data in
order to reconstruct the signal properly?

You need about 10 samples extra at the beginning and end to do a
proper reconstruction. Lots of audio editing software does exactly the
same BTW. Using a fixed samplerate solves a lot of signal processing
problems but also dictates a lot of processing needs to be done in
hardware to keep the speed reasonable.
Interesting stuff can be done with a really long record. You can do signal
averaging of a periodic waveform with no trigger. Our new monster LeCroy scope
can take a long record of a differential PCI Express lane (2.5 gbps NRZ data),
simulate a PLL data recovery loop of various dynamics, and plot an eye diagram,
again without any trigger. Well, if it doesn't crash.
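
The trigger-free averaging part is surprisingly little code once the record is
long enough: estimate the period from the autocorrelation, fold the record
modulo the period, and average. A rough integer-period sketch (a real version
would refine the period estimate and resample each cycle):

import numpy as np

def fold_average(x, min_lag, max_lag):
    # min_lag/max_lag bracket the expected period in samples.  np.correlate
    # is O(N^2); use an FFT-based autocorrelation for really long records.
    xc = x - x.mean()
    ac = np.correlate(xc, xc, mode='full')[len(x) - 1:]   # lags 0..N-1
    period = int(np.argmax(ac[min_lag:max_lag])) + min_lag
    m = (len(x) // period) * period
    return x[:m].reshape(-1, period).mean(axis=0), period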




--

John Larkin Highland Technology Inc
www.highlandtechnology.com jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
 
John Larkin wrote:

On Sun, 17 Mar 2013 10:19:23 GMT, nico@puntnl.niks (Nico Coesel) wrote:


rickman <gnuarm@gmail.com> wrote:


On 3/16/2013 8:56 AM, Nico Coesel wrote:

rickman<gnuarm@gmail.com> wrote:


That may be. The OP was talking about debugging the sinc reconstruction
and I've been thinking a little bit about just how useful that is. Is
there a downside to sinc reconstruction, other than the work required?

It is usable if you have at least 5 samples per period. So that is
0.2fs. The whole problem though is not the number of samples per
period. According to sampling theory the signal is there but it just
needs to be displayed properly so the operator can see a signal
instead of some 'random' dots. With the proper signal reconstruction
algorithm you can display signals up to the Nyquist limit (0.5fs).


I was thinking an aliased signal might interfere with this, but now that
I give it some thought, I realize they are two separate issues. If you
have an aliased tone, it will just be a tone in the display whether you
use sinc reconstruction or not.

In my design I used a fixed samplerate (250MHz) and a standard PC
memory module. 1GB already provides for more than 2 seconds of storage
for 2 channels. That solves the whole interference issue and it allows
the use of a proper anti-aliasing filter. With polynomial approximation I
could reconstruct a signal even when it's close to the Nyquist limit. I
tested it and I could get it to work for frequencies up to 0.45fs.

I'm not following. Are you saying you need a long buffer of data in
order to reconstruct the signal properly?

You need about 10 samples extra at the beginning and end to do a
proper reconstruction. Lots of audio editing software does exactly the
same BTW. Using a fixed samplerate solves a lot of signal processing
problems but also dictates a lot of processing needs to be done in
hardware to keep the speed reasonable.


Interesting stuff can be done with a really long record. You can do signal
averaging of a periodic waveform with no trigger. Our new monster LeCroy scope
can take a long record of a differential PCI Express lane (2.5 gbps NRZ data),
simulate a PLL data recovery loop of various dynamics, and plot an eye diagram,
again without any trigger. Well, if it doesn't crash.

Oh, you have that problem too? Our LeCroy goes belly up now and then
for no apparent reason. It appears to me there is some random
hardware-to-software issue.

Kind of reminds me of the days when Visual Basic was first put on us,
back in Windows 3.xx. A serious app designed to operate fabric-cutting
machines for intricate designs would simply fault a plug-in component
because it would get stuck on some missed signal from the hardware and
then time out or blow the stack. The app was built from VB controls
that simply were not resource friendly or properly controlled.

On top of that, this app cost clients upwards of $10k. I was offered a
job where this app was developed, was allowed to see it in operation,
and saw its random failures; they wanted me to join the debugging team
to resolve it and move forward. I declined the offer.


Jamie
 
On Mar 27, 2:06 pm, bhav....@gmail.com wrote:
The schematic at the link below is to control the enable pin of the LM22680 regulator. Its enable pin is pulled up internally, which means it is always ON. This circuit pulls it low to disable it. The logic is as follows.

The ignition-ON detection threshold is approx 6V at the ignition wire and the low-battery threshold is approx 9V at the battery wire.

If (GPIO from MCU is '1' OR Ignition is ON) AND (Battery Voltage is above threshold), ENABLE is left floating to keep LM22680 ON
else keep it LOW to disable.

Q1 is ON when the battery is above threshold (approx 9V), else OFF. Q2 will be ON when either GPIO or Ignition is ON. When both Q1 and Q2 are ON, Q3 will be OFF, leaving ENABLE floating (regulator is ON).

Thresholds are not very critical; +/-1V is fine. It works as expected in simulation. Is this fine in an automotive environment? Any modifications?
Looks weird to me... how did you 'expect' it to work?
(Where does the power for Q1 and Q2 come from?)

George H.
Both Battery and Ignition are reverse polarity, transient and load dump protected. Transients are clamped to 30V. Max continuous voltage is 16V. No 24V jump start protection needed.

https://picasaweb.google.com/106331244879972692887/December182012?aut...
 
On 3/27/2013 3:43 PM, George Herold wrote:
On Mar 27, 2:06 pm, bhav....@gmail.com wrote:
The schematic at the link below is to control the enable pin of the LM22680 regulator. Its enable pin is pulled up internally, which means it is always ON. This circuit pulls it low to disable it. The logic is as follows.

The ignition-ON detection threshold is approx 6V at the ignition wire and the low-battery threshold is approx 9V at the battery wire.

If (GPIO from MCU is '1' OR Ignition is ON) AND (Battery Voltage is above threshold), ENABLE is left floating to keep LM22680 ON
else keep it LOW to disable.

Q1 is ON when the battery is above threshold (approx 9V), else OFF. Q2 will be ON when either GPIO or Ignition is ON. When both Q1 and Q2 are ON, Q3 will be OFF, leaving ENABLE floating (regulator is ON).

Thresholds are not very critical; +/-1V is fine. It works as expected in simulation. Is this fine in an automotive environment? Any modifications?

Looks weird to me... how did you 'expect' it to work?
(Where does the power for Q1 and Q2 come from?)
Looks ok to me. I think the circuit works ok. The OP is likely asking
about special issues from using it in cars. A car electrical system is
a tough environment to design electronics for.

Power for the inputs of Q1 and Q2 is provided by their respective
sources, the battery in one case and the MCU in the other. The power
for the collector is provided by R22, 100K from the battery.

That should be fine as long as the transistors are rated for the
extreme voltages that may be found. For example, the BE junction of Q1
needs to survive a negative transient through the two 1 kohm resistors.
In that case I think D5 is forward biased and R18 won't impact the
circuit much. The OP says "Both Battery and Ignition are reverse
polarity, transient and load dump protected." Not sure what that
implies in terms of the voltages the inputs will then see.
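
The intended logic itself is simple enough to sanity-check corner cases
against the simulation. A one-line model, using the nominal thresholds from
the post (the real ones will move around with Vbe and divider tolerances,
which is fine given the +/-1V spec):

def regulator_enabled(gpio_high, v_ignition, v_battery,
                      ign_thresh=6.0, batt_thresh=9.0):
    # (GPIO or ignition present) AND battery above threshold -> ENABLE is
    # left floating and the LM22680 runs; otherwise Q3 pulls ENABLE low
    return (gpio_high or v_ignition > ign_thresh) and v_battery > batt_thresh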


Both Battery and Ignition are reverse polarity, transient and load dump protected. Transients are clamped to 30V. Max continuous voltage is 16V. No 24V jump start protection needed.

https://picasaweb.google.com/106331244879972692887/December182012?aut...
--

Rick
 
On Mar 27, 6:20 pm, rickman <gnu...@gmail.com> wrote:
On 3/27/2013 3:43 PM, George Herold wrote:

On Mar 27, 2:06 pm, bhav....@gmail.com wrote:
The schematic at the link below is to control the enable pin of the LM22680 regulator. Its enable pin is pulled up internally, which means it is always ON. This circuit pulls it low to disable it. The logic is as follows.

The ignition-ON detection threshold is approx 6V at the ignition wire and the low-battery threshold is approx 9V at the battery wire.

If (GPIO from MCU is '1' OR Ignition is ON) AND (Battery Voltage is above threshold), ENABLE is left floating to keep LM22680 ON
else keep it LOW to disable.

Q1 is ON when the battery is above threshold (approx 9V), else OFF. Q2 will be ON when either GPIO or Ignition is ON. When both Q1 and Q2 are ON, Q3 will be OFF, leaving ENABLE floating (regulator is ON).

Thresholds are not very critical; +/-1V is fine. It works as expected in simulation. Is this fine in an automotive environment? Any modifications?

Looks weird to me... how did you 'expect' it to work?
(Where does the power for Q1 and Q2 come from?)

Looks ok to me.  I think the circuit works ok.  The OP is likely asking
about special issues from using it in cars.  A car electrical system is
a tough environment to design electronics for.

Power for the inputs of Q1 and Q2 is provided by their respective
sources, the battery in one case and the MCU in the other.  The power
for the collector is provided by R22, 100K from the battery.
Oops, I missed that, thanks.

George H.
That should be fine as long as the transistors are rated for the
extreme voltages that may be found.  For example, the BE junction of Q1
needs to survive a negative transient through the two 1 kohm resistors.
In that case I think D5 is forward biased and R18 won't impact the
circuit much.  The OP says "Both Battery and Ignition are reverse
polarity, transient and load dump protected."  Not sure what that
implies in terms of the voltages the inputs will then see.

Both Battery and Ignition are reverse polarity, transient and load dump protected. Transients are clamped to 30V. Max continuous voltage is 16V. No 24V jump start protection needed.

https://picasaweb.google.com/106331244879972692887/December182012?aut....

--

Rick
 
