accumulator (again)

rickman <gnuarm@gmail.com> wrote:

(snip)
I'm not sure what you would want to simulate. Metastability is
probabilistic. For a given length of settling time there is
some probability of it happening. Increasing the settling time
reduces the probability, but it will never be zero, meaning there is no
maximum length of time it takes for the output of a metastable FF to
settle.
A favorite statistical physics problem is calculating the
probability that all the air molecules will move to one half
of a room. There are many other problems with a very small,
but non-zero, probability.

-- glen
 
Dear All,

I would like to thank you all for your contributions. I finally solved the problem. It was not in the code, as I had immediately assumed (since I'm not very experienced in VHDL), but rather in my misinterpretation of the AD9058's datasheet. I feel very stupid!

It was thanks to all your comments that I decided to finally rethink the project as a whole and spotted the problem.

"God save the internet and the good people who live there"

jmariano
 
On Sun, 08 Jul 2012 15:38:45 -0700, rickman wrote:

On Jul 6, 10:00 pm, hal-use...@ip-64-139-1-69.sjc.megapath.net (Hal
Murray) wrote:
In article <nZ-dnch1rrNvHG_SnZ2dnUVZ_qSdn...@web-ster.com>,
Tim Wescott <t...@seemywebsite.please> writes:

Paranoid logic designers will have a string of two or three registers
to avoid metastability, but I've been told that's not necessary. (I'm
not much of a logic designer).

Ahh, but are they paranoid enough?

The key is settling time.

In the old days of TTL chips, a pair of FFs (with no logic in between)
got you a settling time roughly equal to the worst-case logic delay in the
rest of the system. In practice, that was enough.

With FPGAs, routing is important. A pair of FFs close together is
probably good enough. If you put them on opposite sides of a big chip,
the routing delays may match the long path of the logic delays and eat
up all of your slack time.
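
As an illustration of what that pair of FFs looks like in practice, here is a
minimal two-register synchronizer sketch in VHDL (the entity and signal names
are made up for the example; a real design would also keep the two registers
placed close together and constrain the path between them):

  library ieee;
  use ieee.std_logic_1164.all;

  -- Minimal two-FF synchronizer sketch (illustrative names only).
  entity sync2 is
    port (
      clk      : in  std_logic;
      async_in : in  std_logic;   -- signal from another clock domain
      sync_out : out std_logic    -- safe to use in clk's domain
    );
  end entity sync2;

  architecture rtl of sync2 is
    signal meta_ff : std_logic := '0';  -- may go metastable
    signal sync_ff : std_logic := '0';  -- gets a full period to settle
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        meta_ff <= async_in;
        sync_ff <= meta_ff;
      end if;
    end process;
    sync_out <= sync_ff;
  end architecture rtl;

The second register only helps if the meta_ff-to-sync_ff path really does have
plenty of slack, which is exactly the routing point made above.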

Have any FPGA vendors published recent metastability info? (Many thanks
to Peter Alfke for all his good work in this area.)

I'm not a silicon wizard. Is it reasonable to simulate this stuff? I'd
like to know worst case rather than typicals. It should be possible to
do something like verify simulations with lab typicals and then use
simulations to find the numbers for the nasty corners.

I'm not sure what you would want to simulate. Metastability is
probabilistic. For a given length of settling time there is
some probability of it happening. Increasing the settling time reduces
the probability, but it will never be zero, meaning there is no maximum
length of time it takes for the output of a metastable FF to settle.
The drivers of metastability are probabilistic, yes. But given enough
information you could certainly simulate the positive feedback loop that
is a flip-flop.

I suspect that unless the ball that is the flip-flop state is poised
right on the top of the mountain between the Valley of Zero and the
Valley of One, that the problem is mostly deterministic. It's only when
the after-strobe balance is perfect and the gain is so low that the FF
voltage is affected more by noise than by actual circuit forces that the
problem would remain probabilistic _after_ the strobe happened.

"Enough information", in this case, would involve a whole lot of deep
knowledge of the inner workings of the FPGA, and the simulation would be
an analog circuits problem. So I suspect that you couldn't do it for any
specific part unless you worked at the company in question.
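
For what it's worth, the usual textbook small-signal model of that positive
feedback loop is simple (this is the standard model, not something from a
specific vendor's data): near the balance point the latch behaves like a
linear amplifier with a regeneration time constant \tau, so the differential
voltage grows roughly as

  v(t) \approx v(0)\, e^{t/\tau},
  \qquad
  t_{\mathrm{resolve}} \approx \tau \ln\!\left(\frac{V_{\mathrm{swing}}}{v(0)}\right)

and since the initial imbalance v(0) is roughly uniformly distributed over the
capture window, the probability of still being unresolved after time t falls
off as e^{-t/\tau}. That is where the deterministic circuit picture and the
probabilistic MTBF picture meet; the hard part, as noted, is knowing \tau for
a specific part.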

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
 
On Jul 9, 1:38 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
rickman <gnu...@gmail.com> wrote:

(snip)

Paranoid logic designers will have a string of two or three registers to
avoid metastability, but I've been told that's not necessary.  (I'm not
much of a logic designer).

(snip)

Hi Ed.  The way it was explained to me, I believe from Peter Alfke,
is that what really resolves metastability is the slack time in a
register to register path.  Over the years FPGA process has resulted
in FFs which only need a couple of ns to resolve metastability to 1 in
a million operation years or something like that (I don't remember the
metric, but it was good enough for anything I do).  It doesn't matter
that you have logic in that path, you just need those few ns in every
part of the path.  In theory, even if you use multiple registers with
no logic, what really matters is the slack time in the path, and that
is not guaranteed even with no logic.  So the design protocol should
be to assure that the paths from the input register to all subsequent
registers have sufficient slack time.

I suppose that is true, but really it shouldn't be a problem.
It is usual for many systems to clock as fast as you can,
consistent with the critical path delay. As metastability
is exponential, even a slightly shorter delay is usually enough
to make enough difference in the exponent.

That assumes that there is a FF to FF path that is faster than
the FF logic FF path. I believe that is usual for FPGAs, but
if you manage to get a critical path with only one LUT, then
I am not so sure. But that is pretty hard in most real systems.

Do you remember how much time that needs to be?  I want to say 2 ns,
but it might be more like 5 ns, I just can't recall.  Of course it
depends on your clock rates, but I believe Peter picked some more
aggressive speeds like 100 MHz for his example.

I would expect most systems to have at least a 10% margin.
That is, the clock period is at least 10% longer than the
critical path delay. Probably closer to 20%, but maybe 10%.
So, with a 10 ns clock there might be only 1 ns of slack on the
critical path. Assuming some delay, say 1 ns minimum from FF to FF,
a direct FF-to-FF path has nine times the slack, and that is in an
exponent.
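
The exponential dependence being referred to is usually written in something
like Alfke's form (T_0 and \tau are device-specific constants you would have
to take from characterization data):

  \mathrm{MTBF} \approx \frac{e^{\,t_{\mathrm{slack}}/\tau}}{T_0 \, f_{\mathrm{clk}} \, f_{\mathrm{data}}}

so going from 1 ns of slack to 9 ns of slack multiplies the MTBF by
e^{8\,\mathrm{ns}/\tau}, an enormous factor for any plausible \tau.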

-- glen
You keep talking about the critical path delay as if the metastable
input is driving the critical path. There is only one critical path
in a design normally. All other paths are faster. Are you assuming
that all paths have the same amount of delay?

Regardless, all I am saying is that you don't need to use a path that
has no logic to obtain *enough* slack to give enough settling time to
the metastable input. But in all cases you need to verify this. As
mentioned in another post, Peter Alfke's numbers show that you only
need about 2 ns to get 100 million years MTBF. Of course whether this
is good enough depends on just how reliable your systems have to be
and how many there are. It is 100 million years for one unit, but for
10 million units it will only be 10 years MTBF for the group.
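
That scaling is just the failure rates adding across independent units:

  \mathrm{MTBF}_{\mathrm{fleet}} = \frac{\mathrm{MTBF}_{\mathrm{unit}}}{N}
  \quad\Rightarrow\quad
  \frac{10^{8}\ \mathrm{years}}{10^{7}\ \mathrm{units}} = 10\ \mathrm{years}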

Rick
 
On Jul 9, 11:36 am, jmariano <jmarian...@gmail.com> wrote:
Dear All,

I would like to thank you all for your contributions. I finally solved the problem. It was not in the code, as I had immediately assumed (since I'm not very experienced in VHDL), but rather in my misinterpretation of the AD9058's datasheet. I feel very stupid!

It was thanks to all your comments that I decided to finally rethink the project as a whole and spotted the problem.

"God save the internet and the good people who live there"

jmariano
Don't think of it as a stupid mistake, think of it as a "good
catch"!

Rick
 
On Jul 9, 2:35 pm, Tim Wescott <t...@seemywebsite.com> wrote:
On Sun, 08 Jul 2012 15:38:45 -0700, rickman wrote:
On Jul 6, 10:00 pm, hal-use...@ip-64-139-1-69.sjc.megapath.net (Hal
Murray) wrote:
In article <nZ-dnch1rrNvHG_SnZ2dnUVZ_qSdn...@web-ster.com>,
 Tim Wescott <t...@seemywebsite.please> writes:

Paranoid logic designers will have a string of two or three registers
to avoid metastability, but I've been told that's not necessary.  (I'm
not much of a logic designer).

Ahh, but are they paranoid enough?

The key is settling time.

In the old days of TTL chips, a pair of FFs (with no logic in between)
got you a settling time roughly equal to the worst-case logic delay in the
rest of the system.  In practice, that was enough.

With FPGAs, routing is important.  A pair of FFs close together is
probably good enough.  If you put them on opposite sides of a big chip,
the routing delays may match the long path of the logic delays and eat
up all of your slack time.

Have any FPGA vendors published recent metastability info? (Many thanks
to Peter Alfke for all his good work in this area.)

I'm not a silicon wizard.  Is it reasonable to simulate this stuff? I'd
like to know worst case rather than typicals.  It should be possible to
do something like verify simulations with lab typicals and then use
simulations to find the numbers for the nasty corners.

I'm not sure what you would want to simulate.  Metastability is
probabilistic.  For a given length of settling time there is
some probability of it happening.  Increasing the settling time reduces
the probability, but it will never be zero, meaning there is no maximum
length of time it takes for the output of a metastable FF to settle.

The drivers of metastability are probabilistic, yes.  But given enough
information you could certainly simulate the positive feedback loop that
is a flip-flop.

I suspect that unless the ball that is the flip-flop state is poised
right on the top of the mountain between the Valley of Zero and the
Valley of One, that the problem is mostly deterministic.  It's only when
the after-strobe balance is perfect and the gain is so low that the FF
voltage is affected more by noise than by actual circuit forces that the
problem would remain probabilistic _after_ the strobe happened.

"Enough information", in this case, would involve a whole lot of deep
knowledge of the inner workings of the FPGA, and the simulation would be
an analog circuits problem.  So I suspect that you couldn't do it for any
specific part unless you worked at the company in question.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
That's what probability is all about, dealing with the lack of
knowledge. You don't know the exact voltage of the input when the
clock edge changed and you don't know how fast either signal was
changing... etc. But you do know how often you expect all of these
events to line up to produce metastability and you know the
distribution of delay is a logarithmic taper.

I won't try to argue about how many angels can dance on the head of a
pin, but I have no information to show me that the formula that Peter
used is not accurate, even for extreme cases.

Rick
 
rickman <gnuarm@gmail.com> wrote:

(snip, I wrote)
I suppose that is true, but really it shouldn't be a problem.
It is usual for many systems to clock as fast as you can,
consistent with the critical path delay. As metastability
is exponential, even a slightly shorter delay is usually enough
to make enough difference in the exponent.

That assumes that there is a FF to FF path that is faster than
the FF logic FF path. I believe that is usual for FPGAs, but
if you manage to get a critical path with only one LUT, then
I am not so sure. But that is pretty hard in most real systems.
(snip)
You keep talking about the critical path delay as if the metastable
input is driving the critical path. There is only one critical path
in a design normally. All other paths are faster. Are you assuming
that all paths have the same amount of delay?
No, but it might be that many have about the same delay. Well,
my favorite things to design are systolic arrays, where there is
the same logic (though with different routing, different delay)
between a large number of FFs.

For any pipelined processor, the most efficient logic has about
the same delay between successive registers.

Regardless, all I am saying is that you don't need to use a path that
has no logic to obtain *enough* slack to give enough settling time to
the metastable input. But in all cases you need to verify this.
Yes. One would hope that no logic would have the shortest delay,
though in the case of FPGAs, you might not be able to count on that.

As mentioned in another post, Peter Alfke's numbers show that you only
need about 2 ns to get 100 million years MTBF. Of course whether this
is good enough depends on just how reliable your systems have to be
and how many there are. It is 100 million years for one unit, but for
10 million units it will only be 10 years MTBF for the group.
I have done designs with at most two LUTs between registers,
and might even be able to do one.

-- glen
 
On Mon, 09 Jul 2012 17:20:26 -0700, rickman wrote:

On Jul 9, 2:35 pm, Tim Wescott <t...@seemywebsite.com> wrote:
On Sun, 08 Jul 2012 15:38:45 -0700, rickman wrote:
On Jul 6, 10:00 pm, hal-use...@ip-64-139-1-69.sjc.megapath.net (Hal
Murray) wrote:
In article <nZ-dnch1rrNvHG_SnZ2dnUVZ_qSdn...@web-ster.com>,
 Tim Wescott <t...@seemywebsite.please> writes:

Paranoid logic designers will have a string of two or three
registers to avoid metastability, but I've been told that's not
necessary.  (I'm not much of a logic designer).

Ahh, but are they paranoid enough?

The key is settling time.

In the old days of TTL chips, a pair of FFs (with no logic in
between) got you a settling time roughly equal to the worst-case
logic delay in the rest of the system.  In practice, that was enough.

With FPGAs, routing is important.  A pair of FFs close together is
probably good enough.  If you put them on opposite sides of a big
chip, the routing delays may match the long path of the logic delays
and eat up all of your slack time.

Have any FPGA vendors published recent metastability info? (Many
thanks to Peter Alfke for all his good work in this area.)

I'm not a silicon wizard.  Is it reasonable to simulate this stuff?
I'd like to know worst case rather than typicals.  It should be
possible to do something like verify simulations with lab typicals
and then use simulations to find the numbers for the nasty corners.

I'm not sure what you would want to simulate.  Metastability is
probabilistic.  For a given length of settling time there is
some probability of it happening.  Increasing the settling time
reduces the probability, but it will never be zero, meaning there is no
maximum length of time it takes for the output of a metastable FF to
settle.

The drivers of metastability are probabilistic, yes.  But given enough
information you could certainly simulate the positive feedback loop
that is a flip-flop.

I suspect that unless the ball that is the flip-flop state is poised
right on the top of the mountain between the Valley of Zero and the
Valley of One, that the problem is mostly deterministic.  It's only
when the after-strobe balance is perfect and the gain is so low that
the FF voltage is affected more by noise than by actual circuit forces
that the problem would remain probabilistic _after_ the strobe
happened.

"Enough information", in this case, would involve a whole lot of deep
knowledge of the inner workings of the FPGA, and the simulation would
be an analog circuits problem.  So I suspect that you couldn't do it
for any specific part unless you worked at the company in question.

--
My liberal friends think I'm a conservative kook. My conservative
friends think I'm a liberal kook. Why am I not happy that they have
found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com

That's what probability is all about, dealing with the lack of
knowledge. You don't know the exact voltage of the input when the clock
edge changed and you don't know how fast either signal was changing...
etc. But you do know how often you expect all of these events to line
up to produce metastability and you know the distribution of delay is a
logarithmic taper.

I won't try to argue about how many angels can dance on the head of a
pin, but I have no information to show me that the formula that Peter
used is not accurate, even for extreme cases.
Well, first, I wasn't trying to contradict you -- I just picked the wrong
place in the thread to answer Hal's question.

And second, before you can know the necessary inputs to your statistical
calculations, you need to do some simulating to see how long it takes for
the state to come down from various places on the mountaintop.

The difference between a circuit that has a narrow & sharp potential peak
vs. one that has a wide, flat, broad one is significant.

(One that had a true stable spot at 1/2 voltage would be mucho worse, but
that's not too likely in this day and age).

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
 
Tim Wescott <tim@seemywebsite.com> wrote:

(snip, someone wrote)
That's what probability is all about, dealing with the lack of
knowledge. You don't know the exact voltage of the input when the clock
edge changed and you don't know how fast either signal was changing...
etc. But you do know how often you expect all of these events to line
up to produce metastability and you know the distribution of delay is a
logarithmic taper.

(snip)
Well, first, I wasn't trying to contradict you -- I just picked the wrong
place in the thread to answer Hal's question.

And second, before you can know the necessary inputs to your statistical
calculations, you need to do some simulating to see how long it takes for
the state to come down from various places on the mountaintop.

The difference between a circuit that has a narrow & sharp potential peak
vs. one that has a wide, flat, broad one is significant.
Story I heard some years ago, the sharper and narrower the peak,
the harder it is to get into the metastable state, but the
longer it stays when it actually gets there.

-- glen
 
On Tue, 10 Jul 2012 04:12:21 +0000, glen herrmannsfeldt wrote:

Tim Wescott <tim@seemywebsite.com> wrote:

(snip, someone wrote)
That's what probability is all about, dealing with the lack of
knowledge. You don't know the exact voltage of the input when the
clock edge changed and you don't know how fast either signal was
changing... etc. But you do know how often you expect all of these
events to line up to produce metastability and you know the
distribution of delay is a logarithmic taper.


(snip)
Well, first, I wasn't trying to contradict you -- I just picked the
wrong place in the thread to answer Hal's question.

And second, before you can know the necessary inputs to your
statistical calculations, you need to do some simulating to see how
long it takes for the state to come down from various places on the
mountaintop.

The difference between a circuit that has a narrow & sharp potential
peak vs. one that has a wide, flat, broad one is significant.

Story I heard some years ago, the sharper and narrower the peak,
the harder it is to get into the metastable state, but the longer it
stays when it actually gets there.
Wow. That's counter-intuitive. I would think that the sharper the peak
the less likely that the device would be stuck without knowing which way
to fall.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
 
rickman wrote:


Regardless, all I am saying is that you don't need to use a path that
has no logic to obtain *enough* slack to give enough settling time to
the metastable input.
Well, one thing that I learned on this group is that metastability
is not the most likely problem, it is time skew. If an unsynchronized
input is fed to a number of LUTs scattered around the chip, it can have
several ns of skew between them. The clocks have tightly controlled
skew, so the unsynched input can be sensed differently at two locations.
I ran into this on a state machine and it caused the state logic to go
to undefined states. This was finally explained, I think by one of the
guys at Xilinx: it can have a thousand times higher probability than
true metastability of a single FF.
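
A sketch of the fix in VHDL (names invented for the example): sample the
asynchronous input in exactly one flip-flop and let only the registered copy
reach the state-machine logic, so every LUT decodes the same value on a given
clock edge.

  library ieee;
  use ieee.std_logic_1164.all;

  entity fsm_sync is
    port (
      clk       : in  std_logic;
      async_req : in  std_logic;   -- unsynchronized request from outside
      running   : out std_logic
    );
  end entity fsm_sync;

  architecture rtl of fsm_sync is
    type state_t is (IDLE, RUN);
    signal state    : state_t   := IDLE;
    signal req_meta : std_logic := '0';
    signal req_q    : std_logic := '0';
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        req_meta <= async_req;   -- may go metastable; fans out nowhere else
        req_q    <= req_meta;    -- settled; the only copy the FSM ever sees
        case state is
          when IDLE => if req_q = '1' then state <= RUN;  end if;
          when RUN  => if req_q = '0' then state <= IDLE; end if;
        end case;
      end if;
    end process;

    running <= '1' when state = RUN else '0';
  end architecture rtl;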

Jon
 
Tim Wescott <tim@seemywebsite.please> wrote:

(snip, I wrote)
Story I heard some years ago, the sharper and narrower the peak,
the harder it is to get into the metastable state, but the longer it
stays when it actually gets there.

Wow. That's counter-intuitive. I would think that the sharper the peak
the less likely that the device would be stuck without knowing which way
to fall.
First, remember that it is conditional on actually getting it
to the metastable state.

I don't know if it is convincing or not, but consider balancing
a knife on its edge on a table. You have a sharp and dull knife.
Once you get the sharp knife balanced, it will make a deeper
impression into the table and so stay up longer.

For the actual physics, there are some symmetries that require
some correlations in the probability of getting into, and getting
out of, a certain state. If you get it wrong, then energy
conservation fails.

There is an old favorite, of putting a dark and light colored
object in a mirrored room. (Consider an ellipsoidal mirror
with two spheres at the foci.) Now, consider the effect of
black body radiation with a black and a white sphere.
The black sphere absorbs most radiation (mostly IR light)
but the white one doesn't absorb as much. Conservation of
energy requires that the black one emit more black body
radiation (that is where the name comes from). If not,
the black one would get warmer, and you could extract
energy from the temperature difference.

Note that this is why heat sinks are (usually) black.
(To get a connection to DSP.)

Warm objects have more electrons in higher (metastable) states.

-- glen
 
On Jul 9, 9:47 pm, Tim Wescott <t...@seemywebsite.com> wrote:
On Mon, 09 Jul 2012 17:20:26 -0700, rickman wrote:
On Jul 9, 2:35 pm, Tim Wescott <t...@seemywebsite.com> wrote:
On Sun, 08 Jul 2012 15:38:45 -0700, rickman wrote:
On Jul 6, 10:00 pm, hal-use...@ip-64-139-1-69.sjc.megapath.net (Hal
Murray) wrote:
In article <nZ-dnch1rrNvHG_SnZ2dnUVZ_qSdn...@web-ster.com>,
 Tim Wescott <t...@seemywebsite.please> writes:

Paranoid logic designers will have a string of two or three
registers to avoid metastability, but I've been told that's not
necessary.  (I'm not much of a logic designer).

Ahh, but are they paranoid enough?

The key is settling time.

In the old days of TTL chips, a pair of FFs (with no logic in
between) got you a settling time roughly equal to the worst-case
logic delay in the rest of the system.  In practice, that was enough.

With FPGAs, routing is important.  A pair of FFs close together is
probably good enough.  If you put them on opposite sides of a big
chip, the routing delays may match the long path of the logic delays
and eat up all of your slack time.

Have any FPGA vendors published recent metastability info? (Many
thanks to Peter Alfke for all his good work in this area.)

I'm not a silicon wizard.  Is it reasonable to simulate this stuff?
I'd like to know worst case rather than typicals.  It should be
possible to do something like verify simulations with lab typicals
and then use simulations to find the numbers for the nasty corners.

I'm not sure what you would want to simulate.  Metastability is
probabilistic.  For a given length of settling time there is
some probability of it happening.  Increasing the settling time
reduces the probability, but it will never be zero, meaning there is no
maximum length of time it takes for the output of a metastable FF to
settle.

The drivers of metastability are probabilistic, yes.  But given enough
information you could certainly simulate the positive feedback loop
that is a flip-flop.

I suspect that unless the ball that is the flip-flop state is poised
right on the top of the mountain between the Valley of Zero and the
Valley of One, that the problem is mostly deterministic.  It's only
when the after-strobe balance is perfect and the gain is so low that
the FF voltage is affected more by noise than by actual circuit forces
that the problem would remain probabilistic _after_ the strobe
happened.

"Enough information", in this case, would involve a whole lot of deep
knowledge of the inner workings of the FPGA, and the simulation would
be an analog circuits problem.  So I suspect that you couldn't do it
for any specific part unless you worked at the company in question.

--
My liberal friends think I'm a conservative kook. My conservative
friends think I'm a liberal kook. Why am I not happy that they have
found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com

That's what probability is all about, dealing with the lack of
knowledge.  You don't know the exact voltage of the input when the clock
edge changed and you don't know how fast either signal was changing...
etc.  But you do know how often you expect all of these events to line
up to produce metastability and you know the distribution of delay is a
logarithmic taper.

I won't try to argue about how many angels can dance on the head of a
pin, but I have no information to show me that the formula that Peter
used is not accurate, even for extreme cases.

Well, first, I wasn't trying to contradict you -- I just picked the wrong
place in the thread to answer Hal's question.

And second, before you can know the necessary inputs to your statistical
calculations, you need to do some simulating to see how long it takes for
the state to come down from various places on the mountaintop.

The difference between a circuit that has a narrow & sharp potential peak
vs. one that has a wide, flat, broad one is significant.

(One that had a true stable spot at 1/2 voltage would be mucho worse, but
that's not too likely in this day and age).

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
Sorry if my tone sounded like I was offended at all, I'm not. I was
just trying to make the point that you don't know the shape of the
"mountain" the ball is balanced on and I doubt a simulation could
model it very well. But that is outside my expertise so if I am
wrong...

But I still fail to see how that shape would affect anything
significantly. Unless it has flat spots or even indentations that
were local minima, what would the shape change? It would most likely
only change the speed at which the ball falls off the "mountain" which
is part of what is measured when they characterize a device the way
Peter Alfke did.

Still, even if there are some abnormalities in the shape of the
"mountain", is that really important? The goal is to push the
probability so far out you just don't have to think about it. If the
shape changes the probability by a factor of 10 either way, it
shouldn't be a problem. Just add another 200 ps to the slack and
get another order of magnitude in the MTBF. Or was it 100 ps?
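
For reference, one order of magnitude in the exponential MTBF formula costs

  \Delta t = \tau \ln 10 \approx 2.3\,\tau

of extra slack; if \tau for a modern FF is somewhere in the tens of
picoseconds (an assumption for illustration, not a figure from this thread),
that lands in the 100-200 ps range being remembered here.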

Rick
 
On Jul 9, 9:23 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
rickman <gnu...@gmail.com> wrote:

(snip, I wrote)

I suppose that is true, but really it shouldn't be a problem.
It is usual for many systems to clock as fast as you can,
consistent with the critical path delay. As metastability
is exponential, even a slightly shorter delay is usually enough
to make enough difference in the exponent.
That assumes that there is a FF to FF path that is faster than
the FF logic FF path. I believe that is usual for FPGAs, but
if you manage to get a critical path with only one LUT, then
I am not so sure. But that is pretty hard in most real systems.

(snip)

You keep talking about the critical path delay as if the metastable
input is driving the critical path.  There is only one critical path
in a design normally.  All other paths are faster.  Are you assuming
that all paths have the same amount of delay?

No, but it might be that many have about the same delay. Well,
my favorite things to design are systolic arrays, where there is
the same logic (though with different routing, different delay)
between a large number of FFs.

For any pipelined processor, the most efficient logic has about
the same delay between successive registers.
We are still not talking about the same thing. The *max* delay in
each stage will be roughly even, but within a stage there will be all
sorts of delays. If you are balancing all paths to achieve even
delays you are working on a very intense design akin to the original
Cray computers with hand designed ECL chip logic.


Regardless, all I am saying is that you don't need to use a path that
has no logic to obtain *enough* slack to give enough settling time to
the metastable input.  But in all cases you need to verify this.

Yes. One would hope that no logic would have the shortest delay,
though in the case of FPGAs, you might not be able to count on that.
Yes, that is the point, you need to verify the required slack time no
matter what is in the path.


As mentioned in another post, Peter Alfke's numbers show that you only
need about 2 ns to get 100 million years MTBF.  Of course whether this
is good enough depends on just how reliable your systems have to be
and how many there are.  It is 100 million years for one unit, but for
10 million units it will only be 10 years MTBF for the group.

I have done designs with at most two LUTs between registers,
and might even be able to do one.
That would be good, but I don't know if it is very practical. To make
that useful you also have to optimize the placement to minimize
routing delays. I haven't seen that done since some of Ray Andraka's
designs which are actually fairly small by today's standards. I can't
conceive of trying that with many current designs.

Rick
 
