ADC with very low DNL for pulse height histogram MCA ?

Steve Parus wrote:
I'm looking for a 100 kHz or faster analog-to-digital converter with
very low DNL (much lower than 0.1 LSB, i.e. 10%) to be used for
pulse-height MCA analysis, by creating a histogram of the digitized
count values. Standard PCI interface cards have a DNL of 0.2 LSB,
which produces noisy histograms when repeatedly digitizing a triangle
wave spanning the entire ADC input range. A PCI card would be best,
but an external device, or just the ADC chip itself, would be usable.
The Wilkinson style is one known for such low DNL.
 
In article <1i7370lq26gos2esek9h03fup1dsudhueb@4ax.com>,
Steve Parus <nospam.sparus@umich.edu> wrote:
I'm looking for a 100 kHz or faster analog-to-digital converter with
very low DNL (much lower than 0.1 LSB or 10%) to be used for pulse [...]
You didn't specify the number of bits.

A very easy way to meet your spec is to use the top 8 bits of a 12-bit
converter.
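To put rough numbers on that (a back-of-envelope sketch in Python; the
INL figure is an assumption of mine, not a quoted datasheet value): the
width of one 8-bit code is the sum of 16 consecutive 12-bit code widths,
so its error is set by the 12-bit INL swing across those codes, divided
by the 16x larger LSB.

inl_12bit = 1.0                  # assumed 12-bit INL, in 12-bit LSBs
codes_per_bin = 2**4             # sixteen 12-bit codes per 8-bit code
worst_dnl_8bit = 2 * inl_12bit / codes_per_bin   # +/- INL swing over one bin
print("worst-case 8-bit DNL ~ %.3f LSB" % worst_dnl_8bit)   # 0.125 LSB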

--
kensmith@rahul.net forging knowledge
 
On Mon, 05 Apr 2004 13:58:51 -0400, Steve Parus
<nospam.sparus@umich.edu> wrote:

I'm looking for a 100 kHz or faster analog-to-digital converter with
very low DNL (much lower than 0.1 LSB or 10%) to be used for
pulse-height MCA analysis [...]
The current trick is to use a DAC to add pseudo-random noise to the
signal, and subtract the corresponding amount digitally from the ADC
data. This spreads the codes all over the place and hugely improves
DNL. People typically use a DAC with an LSB smaller than the ADC's, and
spread the codes over a wide range, like 1/8 of the ADC's full scale. I
don't know if this is patented. People sell boards that do this.
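A minimal Python simulation of the trick (all numbers are assumptions of
mine, and the dither is simplified to whole ADC LSBs rather than a finer
DAC) shows the fixed-pattern DNL washing out of a code-density test once
the known offset is subtracted:

import numpy as np

rng = np.random.default_rng(1)

# Toy 12-bit ADC whose code transition levels carry ~0.2 LSB of DNL.
N = 4096
widths = 1.0 + 0.2 * rng.uniform(-1, 1, N)   # per-code widths, in LSBs
edges = np.cumsum(widths) - widths[0]        # imperfect transition levels

def adc(v):
    """Digitize v (in ideal-LSB units) against the imperfect edges."""
    return np.searchsorted(edges, v)

# Code-density test: a deterministic ramp, digitized plain and with a
# known pseudo-random offset of up to 1/8 full scale subtracted digitally.
ramp = np.linspace(1.0, N - N // 8 - 2.0, 4_000_000)
offs = rng.integers(0, N // 8, ramp.size)    # the 'DAC' dither, known exactly
plain = np.bincount(adc(ramp), minlength=N)
dith = np.bincount(np.clip(adc(ramp + offs) - offs, 0, N - 1), minlength=N)

lo, hi = 32, N - N // 8 - 32                 # stay clear of the range ends
for name, h in (("plain", plain), ("dithered", dith)):
    h = h[lo:hi].astype(float)
    print(name, "rms DNL ~ %.3f LSB" % (h / h.mean()).std())

With these assumed numbers the rms bin-width error drops from roughly
0.12 LSB to the counting-statistics floor near 0.03 LSB.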

If you can stand to give up a bit or two, just adding wideband
Gaussian noise, then lopping off a couple of bits, works fairly well
too. There are already 12- to 16-bit SAR ADCs that typically come close
to the DNL you need.
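The same toy model covers the lop-off-bits variant (again a sketch under
assumed numbers, not a measured device): digitize at 14 bits with a few
fine LSBs of added Gaussian noise, then keep the top 12.

import numpy as np

rng = np.random.default_rng(2)

# Assumed 14-bit SAR ADC with 0.25 LSB DNL, used as a 12-bit converter.
N14 = 2**14
widths = 1.0 + 0.25 * rng.uniform(-1, 1, N14)
edges = np.cumsum(widths) - widths[0]

def convert12(v_fine):
    """Add wideband noise, digitize at 14 bits, keep the top 12 bits."""
    noisy = v_fine + rng.normal(0.0, 3.0, np.shape(v_fine))  # ~3 fine-LSB rms
    return np.searchsorted(edges, noisy) >> 2

# Each input now straddles several 14-bit codes, so the individual
# code-width errors average out within (and across) the final 12-bit bins.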

John
 
John Larkin <jjlarkin@highSNIPlandTHIStechPLEASEnology.com> wrote in message news:<v2k370lq3u015eg2j01iqcsh1do3s0vd0r@4ax.com>...
On Mon, 05 Apr 2004 13:58:51 -0400, Steve Parus
<nospam.sparus@umich.edu> wrote:

I'm looking for a 100 kHz or faster analog-to-digital converter with
very low DNL (much lower than 0.1 LSB or 10%) to be used for
pulse-height MCA analysis [...]

The current trick is to use a DAC to add pseudo-random noise to the
signal, and subtract the corresponding amount digitally from the ADC
data. This spreads the codes all over the place and hugely improves
DNL. People typically use a DAC with an LSB smaller than the ADC's, and
spread the codes over a wide range, like 1/8 of the ADC's full scale. I
don't know if this is patented. People sell boards that do this.
This is one of several techniques for "dithering" a signal.

Read R. M. Gray and T. G. Stockham, "Dithered quantizers," IEEE
Transactions on Information Theory, vol. 39, no. 3, pp. 805-812, May
1993. Tom Stockham was one of the founders of Soundstream.

John Watkinson's "The Art of Digital Audio" discusses a couple of
earlier papers.

Watkinson credits the trick of adding pseudo-random dither to the
analogue signal and subtracting this dither from the digitised signal
to B. Blesser, and refers to his paper in "Digital Audio", edited by
B. A. Blesser, B. Locanthi and T. G. Stockham Jr, published in New York
by the Audio Engineering Society in 1983.

Watkinson and Stockham both emphasise that you have to be a bit
careful with the amplitude distribution of the dithering signal.

------
Bill Sloman, Nijmegen
 
Bill Sloman wrote:

This is one of several techniques for "dithering" a signal. [...]
Close but no cigar - as you like to say, wise ass. The operative term
here is SLIDING SCALE A/D. :)))
 
On Mon, 05 Apr 2004 14:39:19 -0700, John Larkin
<jjlarkin@highSNIPlandTHIStechPLEASEnology.com> wrote:

On Mon, 05 Apr 2004 13:58:51 -0400, Steve Parus
<nospam.sparus@umich.edu> wrote:

I'm looking for a 100 kHz or faster analog-to-digital converter with
very low DNL (much lower than 0.1 LSB or 10%) to be used for pulse [...]

[...] I don't know if this is patented. People sell boards that do this.
I've not been able to locate such boards (other than perhaps Ortec's
TRUMP). Can you suggest other suppliers?

If you can stand to give up a bit or two, just adding wideband
Gaussian noise, then lopping off a couple of bits, works fairly well
too. There are already 12- to 16-bit SAR ADCs that typically come close
to the DNL you need.
I'm aware of 16-bit ADCs with DNL of +/-0.25 LSB (AD7677). Are there
any with DNL approaching 0.01 LSB (1%)?

I only need 12 bits, so use of a 16-bit ADC with added noise is worth
a try.

Would it help at all to perform multiple conversions on each input
voltage value, varying the amplitude of the added noise for each
conversion?
 
In sci.electronics.design, Steve Parus <nospam.sparus@umich.edu>
wrote:

[...]

Would it help at all to perform multiple conversions on each input
voltage value, varying the amplitude of the added noise for each
conversion?
Yes, that's exactly what the noise is for, but no, you don't vary the
amplitude for each conversion; you just take several conversions with a
constant amplitude of added noise. The noise adds or subtracts a small
random voltage on each conversion. When several consecutive samples are
filtered ('averaging' is a crude form of digital low-pass filtering)
after being digitized, they give a value closer to the true one than a
single A/D sample can give.
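To put numbers on it (an idealized converter and an assumed noise level,
just to show the scaling): with about 1 LSB rms of noise present the
quantizer becomes unbiased, and averaging K conversions of one held
voltage resolves it to a fraction of an LSB.

import numpy as np

rng = np.random.default_rng(3)

def convert_once(v_lsb):
    """One idealized conversion with ~1 LSB rms of added noise."""
    return np.floor(v_lsb + rng.normal(0.0, 1.0))

v_held = 1234.37                 # sample/hold output, in LSB units
K = 64                           # conversions averaged per held pulse
est = np.mean([convert_once(v_held) for _ in range(K)]) + 0.5
print("averaged estimate: %.2f LSB (true %.2f)" % (est, v_held))
# The rms error shrinks roughly as 1/sqrt(K): ~0.13 LSB here, versus
# ~1 LSB for a single conversion.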

How short are the pulses? I'm wondering how many samples you will
read of each pulse at 100 ksps. Doing it this way, more is obviously
better. Are you looking at the absolute magnitude of each pulse, or
perhaps the area under the pulse? It may help if you tell us the
application. I googled for MCA analysis but got too many plausible
links to even guess what it means in your context.

Here are the usual links I give that explain dither:

http://www.national.com/an/AN/AN-804.pdf

Here's a long explanation (of dither in word length reduction, but
going from analog to digital is just like going from a long wordlength
to a shorter wordlength): click on articles, then dither:

http://digido.com


-----
http://mindspring.com/~benbradley
 
Fred Bloggs <nospam@nospam.com> wrote in message news:<4072A044.2010009@nospam.com>...
Bill Sloman wrote:

This is one of several techniques for "dithering" a signal. [...]

Close but no cigar - as you like to say, wise ass. The operative term
here is SLIDING SCALE A/D. :)))
Care to expand on that a bit, Fred? The original poster wanted to use
his A/D converter to digitise pulse heights, which typically vary
pretty much randomly from one pulse to the next, though he was testing
his A/D converter by digitising a triangular wave.

A sliding scale A/D converter might be quite useful for looking at a
triangular wave, but would be singularly useless in most applications
where it is used as a front end for a multi-channel analyser.

The nearest I've come to a sliding scale A/D converter was a 15-bit
D/A converter, where we tacked a 10-bit 10 MHz DAC onto the top five
bits of a much slower 18-bit DAC to control the electron-beam position
(X and Y) in the Cambridge Instruments Electron Beam Microfabricator
10.5.

------
Bill Sloman, Nijmegen
 
Steve Parus <nospam.sparus@umich.edu> wrote in message news:<1i7370lq26gos2esek9h03fup1dsudhueb@4ax.com>...
I'm looking for a 100 kHz or faster analog-to-digital converter with
very low DNL (much lower than 0.1 LSB or 10%) to be used for
pulse-height MCA analysis [...]
The Agilent E1437 should do much better than 10% of 2^-12, though I
guess it's spec'd for spectral analysis rather than the traditional
DNL/INL, etc. At 20 Msamples/sec, you could average a number of
readings to lower the noise. It's also likely to be more expensive than
you would be interested in.

I've applied dither to an AD6644, and in a spectral sense, get
linearity which should easily meet your needs.

You might think a bit about noise. Is the noise in the bandwidth
you're obliged to use going to end up being the limiting factor?

Must you use a sampling converter? Delta-sigma converters should
easily be able to provide the required linearity.

Cheers,
Tom
 
On Tue, 06 Apr 2004 17:08:56 -0400, Ben Bradley
<ben_nospam_bradley@mindspring.example.com> wrote:

How short are the pulses? I'm wondering how many samples you will
read of each pulse at 100 ksps. Doing it this way, more is obviously
better. Are you looking at the absolute magnitude of each pulse, or
perhaps the area under the pulse? It may help if you tell us the
application. I googled for MCA analysis but got too many plausible
links to even guess what it means in your context.
The application is time-correlated single photon counting to measure
nanosecond fluorescence lifetimes. A laser pulse generates a start
pulse. The first photon emitted from the sample generates a stop
pulse. The time difference is converted to a voltage pulse (0 to
+10 V), 2 usec in duration (I could extend that with an external
sample/hold), by a time-to-amplitude converter. The overall lifetime
decay of the sample is constructed by repeating this at least tens of
thousands of times. For each laser shot, the voltage pulse is
digitized and its binary code summed into a histogram bin. A plot is
made of the number of occurrences vs ADC binary code value. MCA =
multichannel analysis. At least 12 bits of resolution is desired. The
pulses to be digitized occur randomly in time, at least 10 usec apart
(maybe 100 usec). Absolute magnitude is needed.
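The bookkeeping is as simple as it sounds; a minimal sketch in Python
(the names are mine, not from any MCA package):

import numpy as np

N_CODES = 4096                         # 12-bit digitizer
histogram = np.zeros(N_CODES, dtype=np.int64)

def record_shot(adc_code):
    """One laser shot: bump the bin for the digitized TAC pulse height."""
    histogram[adc_code] += 1

# After tens of thousands of shots, histogram[k] vs k is the decay curve;
# code k maps to the start-stop delay through the TAC's volts-per-ns slope.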

Perhaps I could grab the 2 usec wide pulse with a sample/hold and keep
digitizing it over and over again, as you suggest, until the next pulse
occurred at some random time.

For accurate data, the widths of the histogram bins should all be
equal. I test this by making a histogram from a triangle wave. The
number of occurrences vs raw ADC counts should be flat. Instead, it
exhibits variations of 20% (equivalent to 0.2 LSB of DNL).
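That is the standard code-density test; in Python it reduces to a few
lines (the variable names are mine):

import numpy as np

def dnl_from_flat_histogram(counts, trim=32):
    """Per-code DNL in LSB from a full-range triangle-wave histogram.

    A triangle wave dwells equally in every ideal code bin, so each bin's
    fractional deviation from the mean count is that code's DNL.
    """
    h = np.asarray(counts[trim:-trim], dtype=float)   # skip the range ends
    return h / h.mean() - 1.0                         # +0.2 => bin 20% wide

# e.g. print(np.abs(dnl_from_flat_histogram(histogram)).max())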

A 20% deviation in ADC bin width seems like a high value for something
electronic in nature like this. What is it that makes it so?

Here are the usual links I give that explain dither:

http://www.national.com/an/AN/AN-804.pdf
I'm using their boards.

Steve
 
On Wed, 07 Apr 2004 11:55:28 -0700, John Larkin
<jjlarkin@highSNIPlandTHIStechPLEASEnology.com> wrote:

On Wed, 07 Apr 2004 13:46:34 -0400, Steve Parus
<nospam.sparus@umich.edu> wrote:

The application is time-correlated single photon counting to measure
nanosecond fluorescence lifetimes. A laser pulse generates a start
pulse. The first photon emitted from the sample generates a stop
pulse. The time difference is converted to a voltage pulse (0 to
+10 V), 2 usec in duration (I could extend that with an external
sample/hold), by a time-to-amplitude converter.


Why use a TAC? There's stuff that will measure the time interval
directly [1] with 25-50 ps resolution and essentially perfect
differential linearity.
[1] like ours, maybe:

http://www.highlandtechnology.com/DSS/V680DS.html

http://www.highlandtechnology.com/DSS/V660DS.html
We already have the TAC, and wished for custom software, better
performance and more features for the MCA, hence the interest in using
an inexpensive ADC we could program for. We also have a LeCroy
time-to-digital converter (in a CAMAC crate). A 250 psec instrument
response with our Wilkinson ADC is closer to 1.5 nsec with the LeCroy
time-to-digital converter. Maybe your unit is much better. I came
across it elsewhere a while ago and recall it being $5-10k? We
really only need one channel and a maximum time of, say, 10 nsec.

Steve
 
On Wed, 07 Apr 2004 22:19:55 -0400, Steve P <sparus@no_spam_umich.edu>
wrote:

[...] We really only need one channel and a maximum time of, say,
10 nsec.

Oh, OK, if you only need 10 ns span, a big old TDC isn't worth it. I
seem to recall some fluorescent photon things covering huge dynamic
ranges. Our V680 is about $5K.

So if you freeze the TAC, you could fire the ADC a bunch of times and
average the shots. If the TAC drifts a little, so much the better.

John
 
Bill Sloman wrote:

Care to expand on that a bit, Fred? [...]

A sliding scale A/D converter might be quite useful for looking at a
triangular wave, but would be singularly useless in most applications
where it is used as a front end for a multi-channel analyser.
Not really; the sliding scale is the most linear technique there is,
and multi-channel pulse-height analyzers running at several hundred
ksps using sliding scale exist, and to 12 bits of resolution. As usual,
the manufacturers of these instruments do not reveal all the
proprietary details, but it does not take an Einstein to realize that
there is no requirement to treat an individual A/D as a completely
unknown amplitude cell distribution, especially in those applications
that allow periodic real-time calibration. I am not sure that
"dithering" is all that useful when averaging is not an option.
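For what it's worth, a minimal sliding-scale sketch (my own toy model,
not any vendor's design): a counter-driven DAC adds k LSBs ahead of the
ADC and k is subtracted from the output code, so each histogram bin
accumulates counts from 256 different stretches of the transfer curve
and the code-width errors cancel across the histogram without any
per-event averaging.

import numpy as np

rng = np.random.default_rng(4)

# Toy 12-bit ADC with ~0.2 LSB of fixed-pattern DNL.
N = 4096
widths = 1.0 + 0.2 * rng.uniform(-1, 1, N)
edges = np.cumsum(widths) - widths[0]

slide = 0                                # sliding-scale counter state

def convert(v_lsb):
    """One sliding-scale conversion: offset, digitize, subtract, step."""
    global slide
    code = int(np.searchsorted(edges, v_lsb + slide)) - slide
    slide = (slide + 1) % 256            # walk the offset over 256 codes
    return code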
 
A 2004 review article on time-interval measurement:
http://ej.iop.org/links/q67/PCZiJkCLgc6XuxoUXBFSiQ/met4_1_004.pdf

which mentions a PCI and PXI board time interval instrument:
http://www.vigo.com.pl/index.php?vigo=131

Pros and Cons of several time interval instruments (click on each
manufacturer's model on the left pane):
http://ilrs.gsfc.nasa.gov/engineering_technology/timing/tof_devices/manufacture_spec/

On Wed, 07 Apr 2004 13:46:34 -0400, Steve Parus
<nospam.sparus@umich.edu> wrote:

The application is time-correlated single photon counting to measure
nanosecond fluorescence lifetimes.
 
Steve Parus <nospam.sparus@umich.edu> wrote in message news:<76f870dqvdo4j1undbq7lirhst65epde0h@4ax.com>...
On Tue, 06 Apr 2004 17:08:56 -0400, Ben Bradley
<ben_nospam_bradley@mindspring.example.com> wrote:

[...]

The application is time-correlated single photon counting to measure
nanosecond fluorescence lifetimes. A laser pulse generates a start
pulse. The first photon emitted from the sample generates a stop
pulse. [...] The pulses to be digitized occur randomly in time, at
least 10 usec apart (maybe 100 usec). Absolute magnitude is needed.
A useful trick in this application - where the laser can usually
generate start pulses a lot faster than the MCA can process them - is
to run the "start" pulse from the laser source through a delay line
that is longer than the range of delays you want to look at, and use
it to drive the "Stop" input of the time-to-amplitude converter (TAC).
The "Start" input of your TAC is then driven directly by the output of
your photon detector.

In order to avoid distorting your delay curve by "photon pile-up", you
need to run the system at a light intensity that gives about one
detected photon per ten laser flashes, and by using the photon-detector
output as the "Start" pulse for the TAC you can flash the laser about
ten times more often than if you connected the trigger output from the
laser source to the "Start" input of the TAC.

If you correct for "photon pile-up" you can run the system up to
intensities that give one detected photon for every second flash, but
you have less confidence in the shape of the tail of the corrected
decay curve.
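A quick Monte-Carlo sketch of the pile-up distortion (my own toy
numbers: a single-exponential decay and Poisson photon statistics)
shows why the one-in-ten rule works:

import numpy as np

rng = np.random.default_rng(5)

# Only the FIRST detected photon stops the TAC, so at high count rates
# the late bins are under-represented and the apparent lifetime is short.
tau, t_range, flashes = 2.0, 10.0, 200_000   # ns lifetime, ns range, shots

def measured_decay(mean_photons_per_flash):
    n = rng.poisson(mean_photons_per_flash, flashes)  # photons per flash
    first = np.array([rng.exponential(tau, k).min() for k in n if k > 0])
    return np.histogram(first[first < t_range], bins=100,
                        range=(0, t_range))[0]

# measured_decay(0.5) falls off visibly faster than exp(-t/tau), while
# measured_decay(0.1) is close to the true decay - the 1-in-10 rule above.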

I first heard about this around 1978 - one of my co-authors, then
working for Dave Phillips in Southampton, learned about it while
working at a synchrotron light source in France, and applied it to the
laser-excited time-correlated photon-counting setup used by the
Southampton photochemistry group.

------
Bill Sloman, Nijmegen
 
