Chris Carlen
Guest
Hi:
I need to do this:
1. Count a quadrature shaft encoder signal with a maximum frequency (for
a single phase) of about 90 kHz.
2. Output a 16-bit word where each of the 16 bits changes state at some
specific angle of the shaft encoder. There could potentially be an
arbitrary number of state changes per bit per revolution, so there would
be a list of angles at which to go high or low for each bit. But the
list is unlikely to be more than 3-4 entries long.
3. However, the word pattern from one revolution to the next might not
be the same, but would repeat after N revolutions. So there is not only
a list of angles at which to flip bits within each revolution, but a new
list for each revolution.
4. On 1-4 of the 16 output bits, the angular event should trigger a
time-domain pulse-width sequence of at least 4 pulses with adjustable
widths and separations. At least 1 us resolution and jitter, preferably
<0.5 us, with adjustability from about 1 us to about 5 ms.
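To make requirements 2 and 3 concrete, here is a rough sketch in C of the kind of per-bit event table I have in mind. All names and the table layout are illustrative only, not taken from any existing system:

```c
#include <stdint.h>

#define MAX_EVENTS 4   /* lists "unlikely to be more than 3-4 in length" */

/* One angle event: at (revolution, angle), drive the bit to `level`.
   The whole table repeats after the N-revolution cycle. */
typedef struct {
    uint16_t rev;    /* revolution index within the N-revolution cycle */
    uint16_t angle;  /* encoder counts from the index pulse            */
    uint8_t  level;  /* state to drive the bit to at this angle        */
} angle_event_t;

typedef struct {
    angle_event_t ev[MAX_EVENTS];
    uint8_t       n_ev;
} bit_schedule_t;

/* Compute the 16-bit output word at a given (rev, angle) position by
   replaying each bit's event list up to that point. */
static uint16_t output_word(const bit_schedule_t sched[16],
                            uint16_t rev, uint16_t angle)
{
    uint16_t word = 0;
    for (int b = 0; b < 16; b++) {
        uint8_t level = 0;
        for (int i = 0; i < sched[b].n_ev; i++) {
            const angle_event_t *e = &sched[b].ev[i];
            if (e->rev < rev || (e->rev == rev && e->angle <= angle))
                level = e->level;
        }
        if (level)
            word |= (uint16_t)(1u << b);
    }
    return word;
}
```

Each bit's list stays short, and the whole table repeats after N revolutions, as described above.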
Don't tell me about algorithms to do this, I already know that.
The question is, what system architecture best implements this for a
research-lab environment? There will be only a few built, maybe 6-10,
so volume cost reduction doesn't apply. We should optimize for:
1. Minimum time to deploy. (We have working legacy systems, so the
concern is to avoid a long delay replacing one if it fails, rather than
getting 6-10 new systems into place all at once).
2. Moderate capital cost per unit, say up to $5000.
3. Maximum flexibility to respond to changes in the design specifications.
4. Best "future-proof" design, able to last at least 10 years,
preferably 20. It shouldn't employ esoteric hardware, since few people
could be found to replace the original designers if they go away. The
other aspect of future-proofing is that the hardware itself continues
to be available for a long time, or if not, that future hardware will
be similar enough that a minimum of reworking of software code would be
needed. Also, if the hardware were relatively cheap on a capital basis,
then a large stock of "spare parts" could be kept, making it very
future-proof.
I see three most likely approaches:
1. PC platform with real-time operating system (RTOS) and commercial
off-the-shelf (COTS) data-acquisition/digital IO (DAQ/DIO) hardware.
DIO boards would be cabled to a patch panel containing buffering
circuits only.
2. Embedded PC or SBC with COTS DIO hardware. Similar to above, but
SBC and buffering panel could be one box. Windows PC would have a GUI
interface program, and send setup parameters to the box via RS-232 or USB.
3. Custom-built hardware board with an FPGA to do most of the real-time
logic, and a microcontroller to communicate with a Windows GUI
interface program via RS-232 or USB.
I'm a hardware guy who would prefer option #3, and am currently in a
debate with a software guy who would prefer option #1 or #2. I really
don't know that there is a clear case to go either way (custom #3 vs.
COTS #1 or #2). Both approaches satisfy the requirements with slightly
different balance of pros and cons.
I will state what I think are the strengths of my approach:
1. The FPGA is incredibly flexible. A common complaint with COTS
DAQ/DIO hardware is "why did they do it that way?!?!", referring to
some weirdness in the programming model that creates a "gotcha,"
making functionality the manufacturer assured you was possible
not quite straightforward to implement.
Thus, with an FPGA, the software guy could actually get the hardware guy
to tailor the programming model to just what he'd like.
Also, VHDL or Verilog coding of the hardware would be very portable to
future devices. This would make it possible to implement the same
"virtual DAQ/DIO" hardware in a future FPGA, with exactly the same
programming model (the view seen by the software guy), with no changes
in the HDL. So that actually makes the user interface software codebase
very easy to maintain.
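For instance, the "virtual DAQ/DIO" could present a register map like the following to the software side. Every register name and offset here is hypothetical, not from any product; the point is that the FPGA lets us define the map to be exactly what the software guy wants, and then hold it constant across future FPGA families:

```c
#include <stdint.h>

/* Hypothetical "virtual DAQ/DIO" register map, as seen by software.
   The HDL behind it can be reimplemented on a new device while this
   view stays byte-for-byte identical. */
typedef struct {
    volatile uint32_t POSITION;    /* 0x00: current quadrature count  */
    volatile uint32_t REVOLUTION;  /* 0x04: revolution index (mod N)  */
    volatile uint32_t OUT_WORD;    /* 0x08: live 16-bit output word   */
    volatile uint32_t EVENT_ADDR;  /* 0x0C: event-table write address */
    volatile uint32_t EVENT_DATA;  /* 0x10: event-table write data    */
    volatile uint32_t CONTROL;     /* 0x14: run/reset/arm bits        */
} vdaq_regs_t;

#define VDAQ_CTRL_RUN    (1u << 0)
#define VDAQ_CTRL_RESET  (1u << 1)
```

The user-interface codebase then talks only to this map, never to a vendor's driver API.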
2. The hybrid uC/FPGA allows optimization of the partitioning of tasks
that are better suited to hardware vs. real-time software.
There are other potential requirements that I haven't mentioned, such as
using an encoder on another shaft to check the first encoder's
alignment, and possibly also another layer of time-domain counting to
check that the accumulated time duration of the multi-time-pulse
sequences doesn't exceed some limit. Even analog signals may have to be
acquired and compared to a digitally synthesized function of shaft angle
to see if a motion servo-control system has its moving parts within
acceptable tolerances.
In short, the requirements are likely to change and expand in very
unpredictable ways in the future. This is an aspect of future-proofing
that is strongly to my advantage.
An architecture based on an FPGA plus powerful uC/DSP such as a Xilinx
Spartan3 300-500k FPGA and the TI F2812 DSP would create a platform with
a *huge* amount of headroom for future capability expansion.
As new requirements come along, they can be implemented quick-and-dirty
in the DSP, then later moved into a more permanent solution in the FPGA,
thereby freeing up DSP processing power for more quick enhancements in
the future.
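As a concrete example of that path, the quadrature decode of requirement 1 could first run as software on the DSP using the classic 4x4 transition-table method, and later migrate into FPGA fabric with identical behavior. This is a generic sketch, not code for any particular part:

```c
#include <stdint.h>

/* Quadrature decoder via a 4x4 state-transition table.
   Index: (previous AB state << 2) | current AB state.
   +1/-1 for valid steps, 0 for no change or invalid (double-step)
   transitions. */
static const int8_t qdec_table[16] = {
     0, +1, -1,  0,
    -1,  0,  0, +1,
    +1,  0,  0, -1,
     0, -1, +1,  0
};

typedef struct {
    uint8_t prev;   /* previous 2-bit AB sample */
    int32_t count;  /* accumulated quadrature counts */
} qdec_t;

/* ab = (A << 1) | B, sampled each tick; counts all four edges. */
static void qdec_update(qdec_t *q, uint8_t ab)
{
    q->count += qdec_table[(q->prev << 2) | (ab & 3)];
    q->prev = ab & 3;
}
```

At 90 kHz per phase (360 kHz of quadrature edges) this is well within the DSP's reach, and the same table drops straight into HDL later.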
3. Also, since the actual chips are so cheap, a large number could be
stocked to ensure that future failures, if they occur, can be fixed
cheaply. If we tried to keep 1-2 extra copies of all the hardware
needed to implement options #1 or #2, that would be too expensive.
In the COTS case we could afford to only stock 3-4 replacement items
for all of the 6-10 installations, depending on the prices of course.
With a large stock of chips, the system could be duplicated far into the
future by only having to get new copies of PCBs made, if they weren't
already stocked.
Weaknesses of my approach:
There are few weaknesses in terms of its technical capabilities, but
cost and time considerations are more arguable:
The development of a well thought out custom board with an F2812 and
Spartan3 will be a much larger time investment than putting into place
COTS hardware. So while the chips are cheap, the labor cost is greater.
However, where I work, I am "already paid for," so it is hard to say
that the labor cost matters. More important is time, for which my
approach is very weak. However, there is no urgency to replace all our
legacy systems quickly, but rather to be able to replace 1-2 failures quickly.
It turns out I am already close to having the main tasks coded on an
F2812 development board, so I could put a prototype replacement for our
legacy system into place rather quickly if needed. That means we have
some time flexibility to take the care to design a highly flexible,
large-headroom system for the future.
Some people think that the custom hardware is then also more difficult
for outside consultants to fix and maintain, but I disagree with this.
If it is properly documented, then it is no different from a
consultant's perspective than COTS.
Ok, enough for now.
Thanks for input on this post.
Good day!
P.S. Ever notice that you could have gotten a significant portion of a
job done during the time spent debating how to do it?
--
_________________
Christopher R. Carlen
crobc@bogus-remove-me.sbcglobal.net
SuSE 9.1 Linux 2.6.5