Build it custom or buy it?

Chris Carlen
Hi:

I need to do this:

1. Count quadrature shaft encoder signal with max frequency (for a
single phase) of about 90kHz.
2. Output a 16 bit word where each of the 16 bits changes state at some
specific angle of the shaft encoder. There could potentially be an
arbitrary number of state changes per bit per revolution, so there would
be a list of angles at which to go high or low for each bit. But the
list is unlikely to be more than 3-4 in length.
3. However, the word pattern from one revolution to the next might not
be the same, but would repeat after N revolutions. So there is not only
a list of angles to flip bits within each revolution, but a new list for
each revolution.
4. On 1-4 of the 16 output bits, the angular event should trigger a
time-domain pulse-width sequence of at least 4 pulses with adjustable
widths and separations. At least 1us resolution and jitter, preferably
<0.5us, with adjustability from about 1us to about 5ms.

Don't tell me about algorithms to do this, I already know that.

The question is: what system architecture would best implement this for a
research lab environment? There will be only a few built, maybe 6-10.
So volume cost reduction doesn't apply. We should optimize:

1. Minimum time to deploy. (We have working legacy systems, so the
concern is to avoid a long delay replacing one if it fails, rather than
getting 6-10 new systems into place all at once).
2. Moderate capital cost per unit, say up to $5000
3. Maximum flexibility to respond to changes in the design specifications.
4. Best "future-proof" design, able to last at least 10 years,
preferrably 20. Shouldn't employ esoteric hardware that few people
could be found to replace the original designers if they go away. But
the other aspect of future-proofing is that the hardware itself continue
to be available for a long time. Or if not, that future hardware will
be similar enough that a minimum of reworking of software code would be
needed. Also, if the hardware were relatively cheap on a capital basis,
then a large amount of "spare parts" could be stocked, thereby making it
very future-proof.

I see 3 most likely potential approaches:

1. PC platform with real-time operating system (RTOS) and commercial
off-the-shelf (COTS) data-acquisition/digital IO (DAQ/DIO) hardware.
DIO boards would be cabled to a patch panel containing buffering
circuits only.

2. Embedded PC or SBC with COTS DIO hardware. Similar to above, but
SBC and buffering panel could be one box. Windows PC would have a GUI
interface program, and send setup parameters to the box via RS-232 or USB.

3. Custom built hardware board with an FPGA to do most of the real-time
logical stuff, and a microcontroller to communicate to a Windows GUI
interface program via RS-232 or USB.

I'm a hardware guy who would prefer option #3, and am currently in a
debate with a software guy who would prefer option #1 or #2. I really
don't know that there is a clear case to go either way (custom #3 vs.
COTS #1 or #2). Both approaches satisfy the requirements with slightly
different balance of pros and cons.

I will state what I think are the strengths of my approach:

1. The FPGA is incredibly flexible. Often there is the complaint with
COTS DAQ/DIO hardware: "why did they do it that way?!?!" referring to
some weirdness in the programming model that causes a "gotcha," making
the implementation of functionality that the manufacturer assured you
was possible not quite straightforward.

Thus, with an FPGA, the software guy could actually get the hardware guy
to tailor the programming model to just what he'd like.

Also, VHDL or Verilog coding of the hardware would be very portable to
future devices. This would make it possible to implement the same
"virtual DAQ/DIO" hardware in a future FPGA, with exactly the same
programming model (the view seen by the software guy), with no changes
in the HDL. So that actually makes the user interface software codebase
very easy to maintain.
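
For illustration only (the register names and offsets here are invented,
not part of any actual design): the fixed programming model might be
nothing more than a small bank of memory-mapped registers that any
future FPGA implements identically, so the software side compiles
against one header forever.

#include <stdint.h>

/* Hypothetical register map for the "virtual DAQ/DIO" core. */
typedef struct {
    volatile uint32_t POSITION;    /* current quadrature count (read-only)  */
    volatile uint32_t REVOLUTION;  /* revolution index within the program   */
    volatile uint32_t OUT_WORD;    /* present state of the 16 output bits   */
    volatile uint32_t EVT_COUNT;   /* next encoder count to act on          */
    volatile uint32_t EVT_WORD;    /* output word to latch at EVT_COUNT     */
    volatile uint32_t CONTROL;     /* enable / reset / arm bits             */
} vdio_regs_t;

#define VDIO ((vdio_regs_t *)0x00008000u)   /* base address: placeholder */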

2. The hybrid uC/FPGA allows optimization of the partitioning of tasks
that are better suited to hardware vs. real-time software.

There are other potential requirements that I haven't mentioned, such as
using an encoder on another shaft to check the first encoder's
alignment, and possibly also another layer of time-domain counting to
check that the accumulated time duration of the multi-time-pulse
sequences doesn't exceed some limit. Even analog signals may have to be
acquired and compared to a digitally synthesized function of shaft angle
to see if a motion servo-control system has its moving parts within
acceptable tolerances.

In short, the requirements are likely to change and expand in very
unpredictable ways in the future. This is an aspect of future-proofing
that is strongly to my advantage.

An architecture based on an FPGA plus powerful uC/DSP such as a Xilinx
Spartan3 300-500k FPGA and the TI F2812 DSP would create a platform with
a *huge* amount of headroom for future capability expansion.

As new requirements come along, they can be implemented quick-and-dirty
in the DSP, then later moved into a more permanent solution in the FPGA,
thereby freeing up DSP processing power for more quick enhancements in
the future.

3. Also, since the actual chips are so cheap, a large number could be
stocked to ensure that future failures, if they occur, can be fixed
cheaply. If we tried to keep 1-2 extra copies of all the hardware
needed to implement options #1 or #2, that would be too expensive.
In the COTS case we could afford to stock only 3-4 replacement items
for all of the 6-10 installations, depending on the prices of course.

With a large stock of chips, the system could be duplicated far into the
future by only having to get new copies of PCBs made, if they weren't
already stocked.


Weaknesses of my approach:

There are few weaknesses in terms of its technical capabilities. But
cost and time considerations are more arguable:

The development of a well thought out custom board with an F2812 and
Spartan3 will be a much larger time investment than putting into place
COTS hardware. So while the chips are cheap, the labor cost is greater.

However, where I work, I am "already paid for," so it is hard to say
that the labor cost matters. More important is time, for which my approach
is very weak. However, there is no urgency to replace all our legacy
systems quickly, but rather to be able to replace 1-2 failures quickly.

It turns out that I am already close to coding the main tasks on an
F2812 development board. So I could put into place a
prototype replacement unit of our legacy system rather quickly if
needed. That means we have some time flexibility to take the care to
design a highly flexible, large headroom system for the future.

Some people think that the custom hardware is then also more difficult
for outside consultants to fix and maintain, but I disagree with this.
If it is properly documented, then it is no different from a
consultant's perspective than COTS.

Ok, enough for now.

Thanks for input on this post.


Good day!


P.S. Ever notice that you could have gotten a significant portion of a
job done during the time spent debating how to do it?



--
_________________
Christopher R. Carlen
crobc@bogus-remove-me.sbcglobal.net
SuSE 9.1 Linux 2.6.5
 
"Chris Carlen" <crobc@BOGUSFIELD.sbcglobal.net> wrote in message
news:dcvvhh01vm6@news3.newsguy.com...
1. Count quadrature shaft encoder signal with max frequency (for a
single phase) of about 90kHz.
....etc...

One significant item you don't mention: For the 16 bit outputs, can the
outputs be readily calculated "on the fly" (i.e., there's a reasonably simple
formula to get from one output to the next)? Or is it something so esoteric
you'll need a bunch of look-up tables?

So volume cost reduction doesn't apply. We should optimize:

1. Minimum time to deploy. (We have working legacy systems, so the
concern is to avoid a long delay replacing one if it fails, rather than
getting 6-10 new systems into place all at once).
Good.

2. Moderate capital cost per unit, say up to $5000
That's a lot of cash to play with!

3. Maximum flexibility to respond to changes in the design specifications.
This is always pretty nebulous. Do you have anything in particular in mind?
Good design certainly allows for changes in design specifications, but you can
pretty much always kill off whatever remaining budget or time you might
otherwise have by deciding you can obtain just one extra bit of 'flexibility'
by adding a chip or something that will probably never be used.

4. Best "future-proof" design, able to last at least 10 years,
preferably 20.
Wow! That's a looonnnng time these days! All of the approaches you mention
"more or less" meet that requirement... it's highly unlikely you'd be able to
find the same FPGA or data acquisition cards available 10 years from now, much
less 20, but by sticking with an HDL or PC development environment, you will
be able to migrate the design.

My own recommendation, though, would be to shoot for something more like a
5-10 year maximum lifespan (and if you build these things right, they'll last
that long anyway) -- it's very non-obvious how to design a system that, 20
years from now, is still going to be considered easy to use, maintain, interface
to, etc.

I see 3 most likely potential approaches:
All the PC-based approaches sound like massive overkill to me, unless those 16
bit outputs you describe above are so complicated in nature that a
microcontroller/FPGA couldn't hack it.

3. Custom built hardware board with an FPGA to do most of the real-time
logical stuff, and a microcontroller to communicate to a Windows GUI
interface program via RS-232 or USB.
You might be able to get away without an FPGA. With a fast microcontroller
(something operating at, say, 50MHz), with 'tight' programming you should be
able to meet your 500ns jitter spec on the outputs, since the quadrature input
sampling at 90kHz is quite leisurely (i.e., it's not like your micro would be
pre-occupied doing much else that couldn't be easily interrupted).
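
Some back-of-the-envelope numbers to back that up: 90 kHz on one phase
means at most about 4 x 90 kHz = 360 kHz of quadrature edges, i.e. one
edge every ~2.8 us. At 50 MHz that leaves roughly 140 instruction
cycles per edge, and the 0.5 us jitter target corresponds to about 25
clock cycles of slop -- tight, but workable for a hand-written interrupt
routine with little else running.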

On the other hand, these days you can get free or inexpensive CPU cores
(compatible with PICs, AVRs, or at least the manufacturer's own tool sets,
which normally include things like GCC) and just forego the physical CPU
option. This approach gives you a lot of that extra flexibility that you're
after...

In short, the requirements are likely to change and expand in very
unpredictable ways in the future. This is an aspect of future-proofing
that is strongly to my advantage.
Yeah, but again, unless you're going to be sending this thing to Mars, it's
just not worth it to try to plan for more than a 5-10 year life now. Anything
with requirements that are likely to "change and expand in very unpredictable
ways" is almost always best served by a re-design (incorporating re-usable
components wherever possible -- which is something that tends to happen more
often in theory than in practice!) rather than trying to stretch some older
design for just a few more miles.

An architecture based on an FPGA plus powerful uC/DSP such as a Xilinx
Spartan3 300-500k FPGA and the TI F2812 DSP would create a platform with
a *huge* amount of headroom for future capability expansion.
Yes, although if you're going with that big of an FPGA, unless you actually
need a bunch of really fast multiplier blocks or already have some DSP
algorithms developed in C, I'd still stick with the FPGA+soft core approach.

The development of a well thought out custom board with an F2812 and
Spartan3 will be a much larger time investment than putting into place
COTS hardware. So while the chips are cheap, the labor cost is greater.
Yes. A good approach here would be to just find someone's "general purpose"
FPGA development board... they usually have a bunch of headers on them, so
then you stick your custom board on top (for only 6-10 units, you could
probably even get away with just hand soldering those interface boards -- no
custom PCB needed at all).

However, where I work, I am "already paid for" so the labor cost is hard
to say that it matters. More important is time, for which my approach
is very weak.
Do you already know an HDL and/or have experience with FPGAs? For someone who
does, if you go with an off-the-shelf FPGA development board and hand solder
your first interface board or so, from what you've said so far, this doesn't
sound like it would take more than a month to get the first prototype up and
running.

Some people think that the custom hardware is then also more difficult
for outside consultants to fix and maintain, but I disagree with this.
If it is properly documented, then it is no different from a
consultant's perspective than COTS.
Certainly true; the usual problem, of course, is that documentation is
about the lowest priority of anything around the shop (slightly lower than
replacing the toilet paper in the bathroom) so it often never gets done. With
good documentation, I expect that most people would prefer to work on small,
self-contained systems than some big PC-based one.

---Joel
 
Joel Kolstad wrote:
One significant item you don't mention: For the 16 bit outputs, can the
outputs be readily calculated "on the fly" (i.e., there's a reasonably simple
formula to get from one output to the next)? Or is it something so esoteric
you'll need a bunch of look-up tables?
This gadget is to develop firing control signals for an internal
combustion engine. We need to not fire the engine for, say, 9 cycles,
then fire it once, then repeat the pattern. So there are two crank revs
per engine cycle, and with a 1/4-degree encoder that's 2*1440*10=28800 counts
per program. Thus, there could be a maximum of 28800 unique 16-bit
output conditions. In practice, each bit may only have one pulse per
engine cycle. So even if statistically there are say 2 pulses (4 edges)
per cycle per bit, and all at different crank angles from bit to bit,
then there may be about 16*4*10=640 unique output states to index. So
there's a data structure of 640 two-word items: the counts at which
something changes, and the output word when that count is reached.

Pretty simple. I will probably have this going on my F2812 ezDSP
development board within a few weeks, with only 2-3 hours of effort a day.
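
A minimal sketch of that data structure and the table walk, in C,
assuming the encoder count comes from the QEP hardware; read_qep_count()
and write_output_word() are placeholders for the real register accesses,
not actual API names:

#include <stdint.h>

#define MAX_EVENTS 640            /* worst case estimated above */

typedef struct {
    uint16_t count;   /* encoder count at which something changes   */
    uint16_t word;    /* 16-bit output state to drive at that count */
} angle_event_t;

static angle_event_t events[MAX_EVENTS]; /* sorted by count, loaded from the PC */
static uint16_t num_events;
static uint16_t next_event;              /* index of the next pending event */

extern uint16_t read_qep_count(void);        /* placeholder */
extern void write_output_word(uint16_t w);   /* placeholder */

/* Called from a fast periodic interrupt or a tight polling loop.
   Rollover handling of the 28800-count program is simplified here. */
void angle_event_poll(void)
{
    uint16_t pos = read_qep_count();

    while (next_event < num_events && pos >= events[next_event].count) {
        write_output_word(events[next_event].word);
        next_event++;
    }

    if (next_event >= num_events && pos < events[0].count) {
        next_event = 0u;   /* program rolled over; restart the pattern */
    }
}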

2. Moderate capital cost per unit, say up to $5000

That's a lot of cash to play with!
I think a SBC or PC solution could be done for $1500-$3000, so that's an
upper bound.

3. Maximum flexibility to respond to changes in the design specifications.

This is always pretty nebulous. Do you have anything in particular in mind?
Good design certainly allows for changes in design specifications, but you can
pretty much always kill off whatever remaining budget or time you might
otherwise have by deciding you can obtain just one extra bit of 'flexibility'
by adding a chip or something that will probably never be used.
Yes it's nebulous. But you will notice that I mentioned a bunch of
additional likely things the gadget must do later in my OP, based on
another encoder input. That is par for the course. In the middle of
the design, or a year later, they come to me and say, we need to do this
new stuff now. What should I do? Since there is no profit motive, and
the hardware cost is trivial compared to PC DAQ/DIO hardware, I could
easily install an order of magnitude more logic and processing resources
than I need based on today's specs, for effectively negligible
additional cost.

So if that gives me a better than 50% chance that I'll have the
additional capacity to handle future additions of functionality, why not
do it?

This is no different than what most people do who design marketable
products these days, they leave a little room for add-ons that they
didn't think of for version 1.0, but not too much because for them 2x
the horsepower than they really need will cut profits. But for me, the
cost difference between a 300k Spartan3 vs. a 50k is negligible.

4. Best "future-proof" design, able to last at least 10 years,
preferably 20.

Wow! That's a looonnnng time these days!
Yes, but it's based on the lifespan of the legacy system. There is no
guarantee that the whole methodology of what we do won't change at any
time in the coming years. But it is also quite likely that 10 years from now they
will be doing things very similar to how they do it now. They are not
engine developers. They are combustion scientists.

The most likely expansion of the unit will come in the form of needing
more IOs and delay generators. I already plan to make the board capable
of much more than 16 IOs, probably the 56 that the F2812 has, but
patched through the FPGA so the FPGA can map any CPU IO to any physical
IO pin on the board. Then a modularized buffering PCB that can make it
very easy to expand IO. Need another 16? Just punch 16 more holes in a
rack panel, bolt in another pair of 8-channel general purpose IO
buffers, plug the ribbon into an unused IO header on the main PCB, stick
on some labels, and go!
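
Purely as an illustration of that mapping idea (the FPGA-side register
bank here is invented, nothing is designed yet): at startup the DSP could
simply write a small routing table into the FPGA, one entry per logical IO.

#include <stdint.h>

#define NUM_LOGICAL_IO 56   /* the F2812 GPIO count mentioned above */

/* Hypothetical FPGA register bank: entry i selects which physical board
   pin logical CPU IO i is routed to. */
extern volatile uint16_t FPGA_PINMAP[NUM_LOGICAL_IO];

void io_map_init(const uint16_t physical_pin[NUM_LOGICAL_IO])
{
    for (uint16_t i = 0; i < NUM_LOGICAL_IO; i++) {
        FPGA_PINMAP[i] = physical_pin[i];
    }
}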

All of the approaches you mention
"more or less" meet that requirement... it's highly unlikely you'd be able to
find the same FPGA or data acquisition cards available 10 years from now, much
less 20, but by sticking with an HDL or PC development environment, you will
be able to migrate the design.
I wonder about the F2812 in that regard. CPU makers tend to like to
help customers not have to discard old code. So they had the C2x and
C20x series before C2000.

Remember, originally I planned only a small uC to link the FPGA to the
PC for passing setup parameters. But I figured the DSP makes it
possible to do some things algorithmically if desired, which can be
moved over to the FPGA in time.

My own recommendation, though, would be to shoot for something more like a
5-10 year maximum lifespan (and if you build these things right, they'll last
that long anyway) -- it's very non-obvious how to design a system that, 20
years from now, is still going to be considered easy to use, maintain, interface
to, etc.
Unless the thing breaks, nothing will be different 20 years later,
right? If spare parts are in stock, then no problem. Oh, also the USB
or serial interface would be easy to replace if in the future all the
PCs move to the XYZ port. That's because there will be tons of surplus
IO on the board. So the game's not over until they consume all the IO.

You might be able to get away without an FPGA. With a fast microcontroller
(something operating at, say, 50MHz), with 'tight' programming you should be
able to meet your 500ns jitter spec on the outputs, since the quadrature input
sampling at 90kHz is quite leisurely (i.e., it's not like your micro would be
pre-occupied doing much else that couldn't be easily interrupted).
Yes. Like I said, the F2812 can do this in its sleep. The thing is
the delay generators. They tend to always want more of those. We
probably have 2-3 in each of 7-8 labs; at $3500 each, that's about
$65k. The FPGA is mainly intended to allow for creating time-domain
pulse sequences on the IO ports after the DSP has done its thing based
on crank angle (but I may move that to the FPGA as well). They will
probably want at least 4 channels with the ability to have 10 unique
pulse widths and delays on each. F2812 may even be able to do all of
this. But that will be pushing it.

It also has to be able to have the parameters changed on the fly, so it
must have leftover cycles to respond to new parameters from the PC.
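
One common way to get that on-the-fly behavior, sketched here with
invented structure and function names rather than a finished design:
keep two copies of the pulse-sequence parameters, let the serial-link
handler fill the inactive copy, and swap at a safe boundary such as the
start of the next engine cycle.

#include <stdint.h>

#define NUM_DELAY_CHANNELS 4
#define MAX_PULSES 10

typedef struct {
    uint16_t num_pulses;
    uint32_t width_us[MAX_PULSES];  /* pulse widths, ~1 us to ~5 ms */
    uint32_t gap_us[MAX_PULSES];    /* separation before each pulse */
} pulse_seq_t;

/* Generators run from the active set; the PC-link code writes the other. */
static pulse_seq_t params[2][NUM_DELAY_CHANNELS];
static volatile uint16_t active_set = 0;
static volatile uint16_t swap_pending = 0;

/* Called by the RS-232/USB handler after a complete, validated message. */
void params_commit(void)
{
    swap_pending = 1;
}

/* Called by the real-time code once per engine cycle, before it starts
   issuing the next set of pulse sequences. */
const pulse_seq_t *params_for_cycle(void)
{
    if (swap_pending) {
        active_set ^= 1u;
        swap_pending = 0;
    }
    return params[active_set];
}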

On the other hand, these days you can get free or inexpensive CPU cores
(compatible with PICs, AVRs, or at least the manufacturer's own tool sets,
which normally include things like GCC) and just forego the physical CPU
option. This approach gives you a lot of that extra flexibility that you're
after...
Yes, but I'm afraid that takes it into the realm of being too complex. It
is very easy to use a microcontroller chip. I think it would be much
more involved to use a soft-core. That's because it'd be like using a
bare microprocessor, where you have to build up the whole peripheral
structure, glue in memories, etc. I don't want to get that involved.
There's no point. The DSP and FPGA don't cost anything. But together
they provide a very wide set of choices of how to accomplish any of the
required tasks.

In short, the requirements are likely to change and expand in very
unpredictable ways in the future. This is an aspect of future-proofing
that is strongly to my advantage.

Yeah, but again, unless you're going to be sending this thing to Mars, it's
just not worth it to try to plan for more than a 5-10 year life now. Anything
with requirements that are likely to "change and expand in very unpredictable
ways" is almost always best served by a re-design (incorporating re-usable
components wherever possible -- which is something that tends to happen more
often in theory than in practice!) rather than trying to stretch some older
design for just a few more miles.

An architecture based on an FPGA plus powerful uC/DSP such as a Xilinx
Spartan3 300-500k FPGA and the TI F2812 DSP would create a platform with
a *huge* amount of headroom for future capability expansion.

Yes, although if you're going with that big of an FPGA, unless you actually
need a bunch of really fast multiplier blocks or already have some DSP
algorithms developed in C, I'd still stick with the FPGA+soft core approach.
Not this time, but I will learn more about that in time. I don't think
my software guy would be any less opposed to my ideas if I
proposed using a soft-core.

The development of a well thought out custom board with an F2812 and
Spartan3 will be a much larger time investment than putting into place
COTS hardware. So while the chips are cheap, the labor cost is greater.

Yes. A good approach here would be to just find someone's "general purpose"
FPGA development board... they usually have a bunch of headers on them, so
then you stick your custom board on top (for only 6-10 units, you could
probably even get away with just hand soldering those interface boards -- no
custom PCB needed at all).
Yes. I may do it that way. I have considered using the ezDSP as is,
and plugging it into a backplane. Could do the same with an FPGA board.
That is a good idea, and is under consideration.

However, where I work, I am "already paid for" so the labor cost is hard
to say that it matters. More important is time, for which my approach
is very weak.

Do you already know an HDL and/or have experience with FPGAs? For someone who
does, if you go with an off-the-shelf FPGA development board and hand solder
your first interface board or so, from what you've said so far, this doesn't
sound like it would take more than a month to get the first prototype up and
running.
I have an introductory capability in Verilog. I have developed some
camera synchronization logic and some delay generators using Verilog.

Some people think that the custom hardware is then also more difficult
for outside consultants to fix and maintain, but I disagree with this.
If it is properly documented, then it is no different from a
consultant's perspective than COTS.

Certainly true; the usual problem, of course, is that documentation is
about the lowest priority of anything around the shop (slightly lower than
replacing the toilet paper in the bathroom) so it often never gets done. With
good documentation, I expect that most people would prefer to work on small,
self-contained systems than some big PC-based one.
Yes. I tend to be good with documentation.

Thanks for the input.


Good day!




--
_____________________
Christopher R. Carlen
crobc@bogus-remove-me.sbcglobal.net
SuSE 9.1 Linux 2.6.5
 
Hi, have a look at the Hitachi H8/3048F processor. It has a 2-phase
counter linked to a 16-channel timing pattern generator plus all the
usual other peripherals. Most of your software effort would be loading
the pattern table and setting up the peripherals. Might take you all
morning.
 
cbarn24050@aol.com wrote:
Hi, have a look at the Hitachi H8/3048F processor. It has a 2-phase
counter linked to a 16-channel timing pattern generator plus all the
usual other peripherals. Most of your software effort would be loading
the pattern table and setting up the peripherals. Might take you all
morning.
Thanks for the input. I have looked at H8 in the past. Didn't know
they now have a timing pattern generator. I'll look into what it can do.


Good day!

--
_____________________
Christopher R. Carlen
crobc@bogus-remove-me.sbcglobal.net
SuSE 9.1 Linux 2.6.5
 
For a recent project I was in the exact same situation as yours. The
application was to control fuel-injection magnetics with high accuracy
relative to crankshaft angle. The only difference: I had only one output
(not 16) per device, plus additional PWM for pressure regulation. A total
of about 20 pcs were required for pump testing. The angle parameters now
have to move with the pressure error and crankshaft speed because the
magnet delay is constant.

I believe we all agree those things are impossible with a PLC. Since I am
familiar with both HW and SW, the first attempt was a PLD solution using
several Lattice CPLDs. Somewhat later I remembered a recent design, a servo
controller for a DC brushless motor using a Texas Instruments F240 DSP. That
device is about $800, usable from stock without any HW modification, and I
was lucky to have the complete firmware sources.

1) The CAN Bus links to a PC for parameter setup and test result statistics
2) The power stage can drive the magnets directly
3) Interface to both incremental encoder and resolver are available
4) SW updates in field possible to meet new requirements

To mention, in my case a software comparison of the angle position was not
accurate enough. Servo motor controllers usually have several HW
comparators for commutation control, which can be used to switch the outputs
in HW and generate an interrupt so the SW can load the next event distance.
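
Roughly the pattern being described, sketched in C (the register name is
invented; on the F240 the real code would use the event-manager compare
units): the comparator toggles the output in hardware when the position
counter hits the compare value, and the interrupt only queues the next
event, so the edge itself never waits on software.

#include <stdint.h>

/* Event distances (encoder counts between output toggles), 0-terminated,
   loaded over the CAN link. */
static uint16_t event_distance[32];
static uint16_t event_index;

extern volatile uint16_t POSITION_COMPARE;  /* invented compare register */

void compare_match_isr(void)
{
    uint16_t d = event_distance[event_index];

    if (d != 0u) {
        POSITION_COMPARE += d;   /* schedule the next hardware edge */
        event_index++;
    }
    /* acknowledge the interrupt in the real peripheral here */
}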

In the meantime, five years have passed, and it was a good decision not to
use a CPLD solution with less flexibility. The SW had about a dozen changes
and the magnets changed for more speed. Formerly they were simply on-off,
and today they have low inductance with a start pulse and further PWM current
limiting to reduce the heat...
 
Jürgen Veith wrote:
<snip>

Interesting project.

Thanks for the feedback!



--
_____________________
Christopher R. Carlen
crobc@bogus-remove-me.sbcglobal.net
SuSE 9.1 Linux 2.6.5
 
Chris Carlen wrote:
Hi:

snip
I see 3 most likely potential approaches:

1. PC platform with real-time operating system (RTOS) and commercial
off-the-shelf (COTS) data-acquisition/digital IO (DAQ/DIO) hardware. DIO
boards would be cabled to a patch panel containing buffering circuits only.

2. Embedded PC or SBC with COTS DIO hardware. Similar to above, but
SBC and buffering panel could be one box. Windows PC would have a GUI
interface program, and send setup parameters to the box via RS-232 or USB.

3. Custom built hardware board with an FPGA to do most of the real-time
logical stuff, and a microcontroller to communicate to a Windows GUI
interface program via RS-232 or USB.

I'm a hardware guy who would prefer option #3, and am currently in a
debate with a software guy who would prefer option #1 or #2. I really
don't know that there is a clear case to go either way (custom #3 vs.
COTS #1 or #2). Both approaches satisfy the requirements with slightly
different balance of pros and cons.
<snip>

You missed 3.b): use a commercial FPGA and/or uC eval PCB as the
hardware engine - or a number of these.

At the top end, the new STW2200 must have an eval PCB coming, and at
the bottom end, boards like the C8051F064DK have full USB flows, 16-bit
1 Msps acquisition, and 25 MIPS cores, so they can do a lot for ~$25.
ST also have a capable DK for their uPSD3400, which is uC+CPLD.

Also, even if you use #3, you will still need a substantial level of
#1, only now the DAQ is your FPGA/uC remote PCB?

You want the horsepower nearer the coalface, and easily distributed.

-jg
 
Chris Carlen wrote:

Hi:

<snip>

3. Custom built hardware board with an FPGA to do most of the real-time
logical stuff, and a microcontroller to communicate to a Windows GUI
interface program via RS-232 or USB.
#3.
Me too.
Build it once, with a fair number of spare IOs, produce a stack and
forget about them.

Rene
--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
 
Chris Carlen wrote:
Hi:

<snip>
 
Hi Chris:

The following is a link to a magazine article describing a hardware
solution.

http://www.embedded-designer.com/articles/NIOS/Designing_with_NIOS_Part_1.html

I think it also lays a foundation for a software solution.

gm
 
