bazhob
Guest
Hello there,
Unfortunately I'm not too much into electronics, but for a
Computer Science project I need to measure the output of an
op-amp integrator circuit with 16-bit precision using a
standard 8-bit ADC. The integrator is reset every 500 ms;
within that interval the output ramps from 0 V up to a maximum
of about 5 V, depending on the input signal.
The paper on the original experiment (which I'm trying to replicate)
contains the following note: "A MC68HC11A0 micro-controller operated
this reset signal [...] and performed 8-bit A/D conversion on the
integrator output. A signal accuracy of 16 bits in the integrator
reading was obtained by summing (in software) the result of integration
over 256 sub-intervals."
Can someone please point me to more information about this
technique of increasing the resolution in software by splitting
the measurement interval into 256 sub-intervals and summing the
readings? For a start, what would be the technical term for it?
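To show where I am so far, here is a rough sketch in C of what I
think the summing loop looks like. The functions read_adc8(),
reset_integrator() and wait_subinterval() are just placeholder
names I made up, not actual MC68HC11 calls, and this may not be
exactly what the authors did:

#include <stdint.h>

/* Placeholder hardware hooks -- not real MC68HC11 register code,
   just names invented for this sketch. */
extern uint8_t read_adc8(void);         /* one 8-bit conversion of the integrator output */
extern void    reset_integrator(void);  /* pulse the integrator reset line */
extern void    wait_subinterval(void);  /* wait roughly 500 ms / 256, about 2 ms */

/* Sum 256 8-bit conversions taken across the 500 ms ramp.
   256 * 255 = 65280, so the sum always fits in 16 bits. */
uint16_t read_integrator_16bit(void)
{
    uint16_t sum = 0;
    unsigned i;
    for (i = 0; i < 256; i++) {
        wait_subinterval();
        sum += read_adc8();
    }
    reset_integrator();  /* restart the ramp at 0 V for the next reading */
    return sum;
}

If I understand it right, the sum of 256 values that each fit in
8 bits fits exactly into 16 bits, which I assume is where the
16-bit figure in the paper comes from. Please correct me if I've
misread the note.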
Thanks a lot!
Toby