Andras Tantos
Guest
Thanks for the reply.
> This sounds vaguely like an A/D system I built circa 1970, and the
> idea was not original then. Each stage consists of effectively
> two op-amps. The input is in the range -V to +V. The first op-amp
> computes the absolute value of the input, range 0..V, and emits a
> digital bit based on the input sign. The second op-amp subtracts
> the reference voltage V/2 (resulting in ±V/2) and multiplies the
> result by 2 (range -V to +V). This gets fed to the next (less
> significant) stage.

This is almost what I've done, only I've used the saturation of the
op-amps to calculate the transfer functions. That would automatically
center the switching point in the middle of the input range.

> The result is effectively a Gray code, and the use of a few
> digital delay lines to match the op-amp propagation delays makes
> the whole thing a pipeline, so that very high-speed samples can
> be taken of the digital values.

How did you match up the digital delays with the propagation of the
analog part? I would think you need sample-and-hold circuits on the
analog side to do that.

> Each module (one per bit, with the two op-amps) can be identical.
> As ever, the precision is dependent on resistor matching. Care
> needs to be taken with the absolute-value (rectifier) circuitry.
> But note that whenever it is near the decision point (0 volts) the
> stage output is at an extreme, and this extreme value remains at
> all further stages. Monotonicity is automatic.

Yes, monotonicity is automatic with the Gray-coded version.

> I sampled the results in Gray code and digitally converted to
> binary for processing. The processor was a Microdata 800, about
> the size and weight of a large PC today, with 512 16-bit words of
> microcode available and 4k bytes of core memory. The microcode
> was installed by mounting individual diodes (or not) end-on and
> soldering. The result could sample into a buffer (burst limited
> by storage and core cycle time, etc.) at 1/2 megasample per second
> of 8 bits. In those days this was bleeding-edge technology.

So you are basically saying that the idea (though it's far from being
new or mine) is good and one can make 'bleeding edge' technology out
of it. I would think if you'd re-implement your design with current
technology, you would get much better results.
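P.S. The per-stage arithmetic described above can be sketched in software.
This is only a hypothetical simulation (names like convert and
gray_to_binary are mine, not from the original hardware), and one detail is
my own inference: with every identical stage emitting the sign bit, the raw
output is a unit-distance code, but the bits after the first have to be
inverted before the textbook Gray-to-binary XOR decode yields a monotone
count.

```python
V = 1.0  # full-scale reference voltage (arbitrary value for the simulation)

def convert(x, n_bits):
    """Run one sample through n_bits cascaded stages; return MSB-first bits."""
    bits = []
    for _ in range(n_bits):
        bits.append(1 if x >= 0 else 0)   # comparator output: sign of stage input
        x = 2.0 * (abs(x) - V / 2.0)      # rectify, subtract V/2, gain of 2
    return bits

def gray_to_binary(bits):
    """Decode the stage outputs to a monotone binary count.

    The raw bits form a unit-distance (Gray) code; inverting every bit
    after the first gives the standard reflected Gray code, which the
    usual cumulative-XOR rule then decodes.
    """
    g = [bits[0]] + [b ^ 1 for b in bits[1:]]
    value, acc = 0, 0
    for bit in g:
        acc ^= bit                        # b[i] = b[i-1] XOR g[i]
        value = (value << 1) | acc
    return value

# Sweep the input range and check both claimed properties.
n = 4
codes = [convert(-V + (2 * k + 1) * V / 2 ** n, n) for k in range(2 ** n)]
for a, b in zip(codes, codes[1:]):
    assert sum(i != j for i, j in zip(a, b)) == 1   # adjacent codes differ in one bit
assert [gray_to_binary(c) for c in codes] == list(range(2 ** n))  # monotonic
```

The sweep samples the middle of each of the 2^n input intervals, so the
assertions confirm the two claims in your post: successive codes differ in
exactly one bit, and the decoded value never steps backwards.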
Regards,
Andras Tantos