EDK : FSL macros defined by Xilinx are wrong

scott,

input 125 fps, output 60 fps.
2 frames of SRAM.
write one, read the other.
write pointer faster than read pointer by about 2x.
need to drop 65 frames per second.
SRAM is not dual port.
can't write while reading.

got it?
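The rate arithmetic alone can be sanity-checked with a quick sketch (my own model, not scott's design: it only counts which input frames a 60 Hz reader would ever pick up, and ignores the single-port SRAM contention, which is the hard part of the problem):

```python
FPS_IN, FPS_OUT = 125, 60    # frames written vs. frames read per second

displayed = set()
for j in range(1, FPS_OUT + 1):      # one second of output reads
    t_read = j / FPS_OUT             # when read j begins
    # newest input frame fully written before this read starts;
    # frame i finishes writing at time (i + 1) / FPS_IN
    newest = int(t_read * FPS_IN) - 1
    if newest >= 0:
        displayed.add(newest)

dropped = FPS_IN - len(displayed)
print(dropped)    # 65 frames per second are never read out
```

Every read finds at least one frame it has not seen before (about 2.08 new frames arrive per read), so nothing repeats, and the 65 leftovers are exactly the input/output rate difference.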
 
"Ron Huizen" &lt;rhuizen@bittware.com&gt; wrote in message news:vpfh7f3e2gc2a3@corp.supernews.com...
Rick,

I'd certainly be interested in more info on the fiber microscope you
mentioned. Debugging designs with lots of big BGAs is tough enough without
wondering whether it's an assembly issue or not, and traditional X-ray
techniques are good for showing shorts, but not so good for opens ...

-----
Ron Huizen
BittWare
There are also JTAG tools that can read and write arbitrary values on
I/O pins. Roughly $1K for benchtop systems, $10K for a production
tester. If the BGA is hooked to other chips with JTAG, you can make
a rather complete test.

And of course there's traditional bed-of-nails, not used much due to
the cost of implementing it on prototype hardware.

Dave Kinsell
 
Sorry I did not get back to you sooner. The original contact was ASG at
www.asg-jergens.com. They make the IS-1000 which gets under the BGA
from what I can see. So you can see each and every ball. But you
should get a demo since the sales pictures don't clearly indicate if
they are looking at the edge row of balls or an inner row.

With a Google search I found this - http://www.caltexsci.com/
They seem to make a similar product, but the web page is not too clear
on whether they are just looking at it from the outside.


Ron Huizen wrote:
Rick,

I'd certainly be interested in more info on the fiber microscope you
mentioned. Debugging designs with lots of big BGAs is tough enough without
wondering whether it's an assembly issue or not, and traditional X-ray
techniques are good for showing shorts, but not so good for opens ...

-----
Ron Huizen
BittWare

"rickman" &lt;spamgoeshere4@yahoo.com&gt; wrote in message
news:3F93E3DC.6753DCD4@yahoo.com...
Thomas Stanka wrote:

Xpost 2 cae and caf, no Fup.

Hallo,

"Geoffrey Mortimer" <me@privacy.net> wrote:
Anyone have any experience of BGAs (especially fine-pitch types) in
high-vibration environments? Is there a more appropriate newsgroup for
this topic?

Actually that's a very hot topic, as BGAs seem to be becoming common in
the world of FPGAs and ASICs. I know that our mechanical engineers are
already researching this topic, as we are very likely to have some
fine-pitch BGAs in a high-vibration environment in the future.
I would guess that you should ask in some mechanical newsgroups as
well.
A big problem using FBGA is testing whether you connected all the balls
properly [1], as you have no chance of easy visual inspection.

bye Thomas

I recently saw a product that allows visual inspection of the solder
balls on a mounted BGA. It is a fiber optic microscope and has tiny
fiber probes that can run between the balls. I'll look for the info if
anyone is interested.

--

Rick "rickman" Collins

rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY
removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design URL http://www.arius.com
4 King Ave 301-682-7772 Voice
Frederick, MD 21701-3110 301-682-7666 FAX
 
Valentin,

apparently you are trying to resolve the clock difference by cutting the stop bit in order to achieve a higher transmission rate.

That is a very nice idea and it is completely wrong.

In commercial (and all other) UARTs, it is the *receiver* that compensates for the clock difference. The rule is that the transmitter sends 10 bits (Start + 8 Data + Stop), but the receiver only requires 9.5 bits (1 Start + 8 Data + 0.5 Stop). It is this 0.5 bit difference which compensates for the clock difference (and which also gives you the 5% that rick mentioned).

So far, your design seems correct. But then you try to speed up the transmitter as well by sending less than 10 bits (actually 9.5 bits).

The net effect is that you have changed the usual "transmit 10 & receive 9.5" scheme to a "transmit 9.5 & receive 9.5" scheme, which is as bad as a "transmit 10 & receive 10" scheme when the clocks are different. By doing this you will necessarily lose single bits long before your buffer overruns. You stole from the receiver the 0.5 bit that it desperately needs to compensate for the clock differences.

In other words, your attempt to avoid buffer overflows (which cannot occur, since the receiver takes care of the clock frequencies) has actually created the problem you are describing. The solution is simple: don't touch the transmitter.

BTW, you should check whether your 10% refers to clock jitter (movement of the clock edges around a fixed reference point) rather than to a difference in clock frequency.

/// Juergen
 
"GPG" &lt;peg@slingshot.co.nz&gt; wrote in message
news:62069f15.0311250309.f28037c@posting.google.com...
MAXIMUM error is 0.5 bit over one frame. In your case one frame = 10 bits.
0.5/10 = 5%

And if the sender is 2% too slow and the receiver is 2% too fast, you have
4% error
which is just below the 5% error tolerated.

--
Best Regards
Ulf at atmel dot com
These comments are intended to be my own opinion and they
may, or may not be shared by my employer, Atmel Sweden.
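GPG's 0.5-bit rule and Ulf's worst-case sum are easy to check numerically; a minimal sketch, assuming the 10-bit 8N1 frame discussed in this thread:

```python
BITS_PER_FRAME = 10      # 8N1: 1 start + 8 data + 1 stop
MAX_DRIFT_BITS = 0.5     # the last sample must still land inside the stop bit

tolerance = MAX_DRIFT_BITS / BITS_PER_FRAME   # 0.5 / 10 = 5 %

# sender 2 % too slow and receiver 2 % too fast: the errors add up
combined = 0.02 + 0.02                        # 4 %, just inside the budget
print(tolerance, combined < tolerance)
```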
 
Valentin wrote:
...

I have a "fast" solution for my UART echo device: if the transmitter has
transmitted more than half of the stop bit and senses that a next byte has
been received, it stops sending the current stop bit and starts transmitting
the start bit for the next byte. Cutting the transmission short like this is
not a good solution, though, because the transmitter may be connected to a
well-matched or slightly slower-running UART. The design may not be a
forwarder, so the data provider may differ from the 9600 bps receiver. In
that case, starting the transmission of the next byte early, while the
remote peer is still receiving the stop bit, causes a stop bit error.

...
juergen sauermann wrote:
Valentin,
apparently you are trying to resolve the clock difference by cutting
the stop bit in order to achieve a higher transmission rate.
That is a very nice idea and it is completely wrong.
And Philip writes:
Modifying the locally transmitted character to a non-standard length
by changing the length of the stop bit on the fly, as buffer
over-run is about to occur, is not a good idea: you don't
know the details of how the receiver that is listening to it was
designed, and it may not be very happy to see the next start bit
before it is finished with what it expects to be a full-length
stop bit.

The underlying problem is that you are potentially sending very
long streams of data through a protocol that was designed for
asynchronous transmission. That is why there are start bits and
stop bits. In real systems, there is flow control, typically
implemented one of 3 ways:

1) Hardware flow control: CTS/RTS

2) Character based flow control: XON/XOFF (ctrl-q/ctrl-s)

3) Upper layer flow control: packet based transfers with
acknowledge packets used to pace transmissions.

The "real-time-ness" (new word I just invented) of the flow
control depends on the size of the receive buffer. With only
1 byte, you need (1), and even this may not be good enough,
you may need at least 2 bytes of buffer. As the buffer gets
bigger (say 8 to 100 bytes) then (2) is workable, and can even
tolerate some operating system delay. When the buffers get to
be multiple packets in size, then (3) may be appropriate.


juergen sauermann also wrote:
In commercial (and all other) Uarts, it is the receiver that
compensates for the clock difference. The rule is that the
transmitter sends 10 bits (Start + 8 Data + Stop), but the
receiver only requires 9.5 bits (1 Start + 8 Data + 0.5 Stop).
Well up to a point this is correct. The receiver can certainly
declare that the character has arrived after the sample is taken
in the middle of the stop bit (at 9.5 bit times into the received
character).

BUT this is not a solution to the original poster's problem!
The problem still exists because the remaining .5 bit is still
going to arrive, the data is being sent with a slightly faster
clock than the transmitter is able to retransmit the character.
If there is no line idle time between the end of the inbound
stop bit and the next inbound start bit, the system will
eventually have an over-run problem, no matter how big the
input buffer. The closer the two clock rates, and the bigger
the buffer, the longer it takes to happen, but it will happen.

Do the math:

Let the far end transmitter be running at 1% faster clock rate
than the local transmitter that is going to retransmit the
character.

Here are some easy to work with numbers:
Perfect 9600 baud is 104.1666666 microseconds per bit
1 character time (1 Start,8 Data,1 Stop) is 1.041666666 ms

After 1 character has arrived, we start to retransmit it. It
doesn't matter if we start at the 9.5 or 10 bit time; it
will take us 1.010101 times longer to send it than it took to
receive it.

If we have a multibyte buffer, after 100 characters arrive at
a far-end transmit rate that is 1% too fast, we have the
following:

0.99 * 100 * 1.041666666 ms = 103.125 ms

If our local transmitter is right on spec for baud rate,
it will take 104.1666666 ms to send these characters,
regardless of whether it starts at the 9.5 or the 10.0 point,
because the character is going through a buffer and changing
clock domains. The difference in time means that in the
time the local transmitter takes to send 100 characters,
the far end transmitter will send 101 characters. If our
buffer is 10 characters long, then after 1000 characters
arrive we will only have managed to offload 990 characters,
and our 10 character buffer is full. Some time during the
next 100 characters, we will have buffer over-run.
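The walk-through above can be reproduced with a small discrete-event sketch (my own model, not Philip's: a character FIFO in front of the local transmitter, where a character leaves the FIFO the moment its retransmission starts; buffer depth and 1% mismatch as in the text):

```python
from collections import deque

T_CHAR = 10 / 9600       # 10 bit times at a perfect 9600 baud, in seconds
T_RX = 0.99 * T_CHAR     # far-end clock 1 % fast: characters arrive quicker
T_TX = T_CHAR            # local retransmitter runs exactly on spec
BUF_CAP = 10             # FIFO between receiver and retransmitter

queue = deque()          # arrival times of characters waiting to be resent
tx_free_at = 0.0         # when the local TX shift register next goes idle
overrun_at = None

for i in range(3000):
    arrive = (i + 1) * T_RX                  # character i fully received
    # pop every queued character the TX managed to start before this arrival
    while queue and tx_free_at <= arrive:
        start = max(tx_free_at, queue[0])
        queue.popleft()
        tx_free_at = start + T_TX
    if len(queue) >= BUF_CAP:                # nowhere to put the new character
        overrun_at = i
        break
    queue.append(arrive)

print(overrun_at)    # over-run shortly after the 1000th character
```

This lands right where the hand calculation says it should: the backlog grows by about one character per hundred, so a 10-deep FIFO dies just past character 1000.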

juergen sauermann also wrote:
It is this 0.5 bit difference which compensates the clock
difference (and which also gives you the 5% that rick mentioned).
As you can see above, I disagree. This is not a solution.

So far, your design seems correct. But then you try to speed
up the transmitter as well by sending less than 10 bits
(actually 9.5 bits). The net effect is that you have changed
the usual "transmit 10 &amp; receive 9.5" scheme to a
"transmit 9.5 &amp; receive 9.5" scheme, which is as bad as a
"transmit 10 &amp; receive 10" scheme when the clocks are different.
Actually, he hasn't changed the receiver to receive 9.5, because
the far end transmitter is still sending 10 bits. Ignoring the
last 0.5 bit does not solve the problem, as it is cumulative.

By doing this you will neccessarily lose single bits long before
your buffer overruns. You stole the 0.5 bits from the receiver
that the receiver desperately needs to compensate for the clock
differences.
Nope. This does not work.

In other words, your attempt to avoid buffer overflows (which
cannot occur since the receiver takes care of the clock
frequencies) has actually created the problem you are describing.
The solution is simple: don't touch the transmitter.
Nope. This does not work.


The following solutions can be made to work:

A)
Use one of the 3 described flow control systems above, with
a suitable length buffer, or some other flow control system
with similar effect.

B)
Deliberately force some idle time between characters at the
far end transmitter. If your system is designed for a worst
case of 5% difference in clock frequencies, forcing an idle
of 0.6 bit time between the stop bit and the next start bit
will achieve this (with some minor safety margin). You will
still need some buffer though between your receiver and
transmitter.

Another version of this is to just add some idle time every
N characters, such as "every 100 characters, let the transmitter
sleep for 2 character times".


C)
Use a PLL to derive a local clock that is phase locked to
the received data, and use this for transmit.

D)
At the far end transmitter, add some pad characters at regular
times to the data stream, that can be thrown away at the
receiver.

E)
run a clock line from the far end transmitter to your system
and use that for your transmit clock (hardly an async system
any more)

F)
Be sneaky. Most UARTs can be set for 1 , 1.5 , or 2 stop bits.
Set the far end transmitter for 8N2 (1 start, 8 data, 2 stop).
Set your receiver and transmitter for 8N1 (1 start, 8 data, 1 stop).
This works, because stop bits look just like line-idle. This
effectively implements (B), but is localized to the initialization
code for the far end transmitter.
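The slack that option (F) buys is easy to quantify (a sketch; "margin" here means how much slower the retransmitting side's clock may be before back-to-back traffic overruns it):

```python
TX_BITS = 11    # far end sends 8N2: 1 start + 8 data + 2 stop bits
RX_BITS = 10    # the 8N1 side needs only 10 bit times to receive or resend

# the far end now spends 11 bit times per character, the near side
# only 10, so the near side's clock may lag by this fraction:
margin = 1 - RX_BITS / TX_BITS
print(f"{margin:.1%}")    # about 9.1 %, far more than typical crystal error
```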

Philip Freidin
Fliptronics
 
Philip,

after thinking about the problem once more, I hate to admit that, yes, you are right.

I still do not believe, though, that inserting idle time one way or the other (including cutting the transmitter's stop bit) is a solution. Consider the following:

Left side: Slow (9600 Baud)
Right side: Fast (9700 Baud)

Both sides use e.g. 8N2 for Tx and 8N1 for Rx.

At some point in time, Left sees its buffer filling up and hence skips a few stop bits here and there (using 8N1) in order to compensate. Left is now faster than Right, despite the clock rates.

As a consequence, Right sees its buffer filling up and skips stop bits (using 8N1) as well.

This continues until both sides transmit with 8N1 all the time; at this point Left will lose data.

Thus, there must be some kind of understanding between Left and Right as to which of the two is the "clock master" that ultimately controls the transmission speed. Unfortunately this is sometimes not possible, for instance in symmetric configurations.

/// Juergen
 
Philip Freidin wrote:
The following solutions can be made to work:
<A to F snipped>

G) copy the scheme used in the TI MSP430, effectively a DDS.
Using this arrangement gets you a lot more speed, and you
can take the TI documentation as a spec and save a ton of
time...

If you do it, please forward the results to Xilinx for a
writeup as an App Note ;-)
 
juergen Sauermann wrote:

(snip)

I still do not believe, though, that inserting idle time one way or the
other (including cutting the transmitter's stop bit) is a solution.
Consider the following:

Left side: Slow (9600 Baud)
Right side: Fast (9700 Baud)

Both sides use e.g. 8N2 for Tx and 8N1 for Rx.

At some point in time, Left sees its buffer filling up and hence skips
a few stop bits here and there (using 8N1) in order to compensate.
Left is now faster than Right, despite the clock rates.

As a consequence, Right sees its buffer filling up and skips stop bits
(using 8N1) as well.

This continues until both sides transmit with 8N1 all the time; at this
point Left will lose data.
As far as I know, asynchronous transmission was intended to be between
two devices, such as a terminal and a computer, though more likely two
terminals in the early days.

The two stop bits were required by machines that mechanically decoded
the bits. (The Teletype (R) ASR33, for example.) Using stop bits as
flow control seems unusual to me.

Electronic UARTs (no comment on mechanical ones) sample the bit at the
center of each bit time. For a character with no transitions (X'00' or
X'FF'), timing error can accumulate for the duration of the character.
The STOP bit is the receiver's chance to adjust the timing, and start
over with the new START bit.

With a 5% timing error, which is very large for a crystal controlled
clock, the stop bit could start 0.45 bit times early, but the receiver
will still detect it at the right time, and be ready to start the next
character.

The timing for each character is from the leading edge of the START bit.

This allows for difference in the bit clock rate between the transmitter
and receiver. It is unrelated to any buffering or buffer overflow
problems that may occur.

-- glen
 
"Philip Freidin" wrote
snip
And Philip writes:
Modifying the locally transmitted character to be a non-standard length
by changing the length of the stop bit on the fly as buffer
over-run is about to occur is not a good idea, as you don't
know the details of how the receiver that is listening to it was
designed, and it may not be very happy to see the next start bit
before it is finished with what it expects is a full length stop
bit, but it is not.
UARTs look for the START edge, from the _middle_ of the STOP bit.
With x16 clocking, typically that gives 8 possible time slots for earlier
start.

I would agree that a half-bit jump in STOP, as the OP first suggested,
is NOT a good idea, but fractional (1/16 quantized ) STOP changes are
valid and safe.

<snip>
BUT this is not a solution to the original poster's problem!
The problem still exists because the remaining .5 bit is still
going to arrive, the data is being sent with a slightly faster
clock than the transmitter is able to retransmit the character.
If there is no line idle time between the end of the inbound
stop bit and the next inbound start bit, the system will
eventually have an over-run problem, no matter how big the
input buffer. The closer the two clock rates, and the bigger
the buffer, the longer it takes to happen, but it will happen.
Yes, true if the stop bit is 'whole bit' quantized.
CAN be avoided if the TX can move the START edge as
needed, both left and right, in 1/16 steps. Something like
+/-4 sixteenths would leave design margin.

F)
Be sneaky. Most UARTs can be set for 1 , 1.5 , or 2 stop bits.
Set the far end transmitter for 8N2 (1 start, 8 data, 2 stop).
Set your receiver and transmitter for 8N1 (1 start, 8 data, 1 stop).
This works, because stop bits look just like line-idle. This
effectively implements (B), but is localized to the initialization
code for the far end transmitter.
Yes, by far the simplest, and most practical solution.

However, this is comp.arch.fpga, and here we can design any UART we like,
including one that can handle 100% traffic loading, single stop bits, and
clock skews in the 1-2% region! :)

To illustrate this, look at the SC28C94 UART data, this from info
on their 16 possible STOP BIT options:

MR2[3..0] = Stop Bit Length
0 = 0.563
1 = 0.625
2 = 0.688
3 = 0.750
4 = 0.813
5 = 0.875
6 = 0.938
7 = 1.000
8 = 1.563
9 = 1.625
A = 1.688
B = 1.750
C = 1.813
D = 1.875
E = 1.938
F = 2.000
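All 16 of those values are just multiples of 1/16 of a bit time, matching the x16 receiver clocking discussed above. A sketch that regenerates the table from that observation (the encoding rule is my own inference from the numbers, not from the datasheet text):

```python
from math import ceil

# codes 0-7 appear to encode 9/16 .. 16/16 of a bit time; codes 8-F
# the same eight steps plus one full extra bit (25/16 .. 32/16)
for n in range(16):
    stop = (9 + n) / 16 if n < 8 else (17 + n) / 16
    # the excerpt rounds upward to three decimals (0.5625 -> 0.563)
    print(f"{n:X} = {ceil(stop * 1000) / 1000:.3f}")
```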

-jg
 
Here is my simple analysis:
There are two very different situations:

If the transmitter clocks slower than the receiver, there is no problem
on the receive end, as long as the error inside the word does not exceed
half a bit time.

If the transmitter clocks faster than the receiver, the receiver has to
be able to resynchronize after only half a stop bit (which may be touchy).

Peter Alfke
==============================
glen herrmannsfeldt wrote:
(snip)
 
As far as I know, asynchronous transmission was intended to be between
two devices, such as a terminal and a computer, though more likely two
terminals in the early days.

Originally developed by Émile Baudot (google him) for telex. Hence "baud".
Buffer overflows are irrelevant in the modern world, since the
terminals will be handling the data far faster than the transmission
rate. All that is required is that the receiving UART detects the start
bit reliably. Longer stop bits help.
 
On Tue, 25 Nov 2003 07:20:58 -0800, "juergen Sauermann" <juergen.sauermann@t-online.de> wrote:
Philip,
after thinking about the problem once more, I hate to admit that, yes, you are right.
Sorry.

I still do not believe, though, that inserting idle time one way or the other
(including cutting the transmitter's stop bit) is a solution. Consider the following:

Left side: Slow (9600 Baud)
Right side: Fast (9700 Baud)

Both sides use e.g. 8N2 for Tx and 8N1 for Rx.
The extra stop bit gives you about 9% margin; the difference between
9600 and 9700 is about 1%.

At some point in time, Left sees its buffer filling up and hence skips
a few stop bits here and there (using 8N1) in order to compensate.
Left is now faster than Right, despite the clock rates.
I agree. More exactly, Left's RX is faster than Right's TX.

The examples I gave in my prior post help with unidirectional
messages between a faster transmitter and a slower receiver, and
assume the system at the slower receiver can process the received
character in the local clock domain in 10 bit times. But if you
retransmit the character with 2 stop bits in the slower clock
domain, that takes 11 bit times, and the system will fail. So
retransmitting with 11 bits throws away the advantage of the
RX using 1 stop bit and the far end TX using 2 stop bits.

If you knew which system had the slower clock, you could set its
transmitter for 1 stop bit and then the system would work.
Unfortunately this is not normally possible.


As a consequence, Right sees its buffer filling up and skips
stop bits (using 8N1) as well.
This continues until both sides transmit with 8N1 all the time;
at this point Left will lose data.
This is not what I intended. I am assuming that the number of
stop bits is fixed, and is dependent on which end has the faster
clock.

Try this:

Left side: Slow (9600 Baud)
RX: 1 + 8 + 1
TX: 1 + 8 + 1

Right side: Fast (9700 Baud)
RX: 1 + 8 + 1
TX: 1 + 8 + 2

I believe you can send stuff continuously all day this way without
over-run, in both directions. Only problem is that you need to know
which end has the faster clock.
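A quick comparison of the character periods shows why this asymmetric setup has slack in both directions (my own arithmetic, using the rates from the example; "keeping up" here means each receiver finishes a character before the next one arrives):

```python
left_char = 10 / 9600       # Left TX: 1+8+1 at 9600 baud (also its RX time)
right_tx_char = 11 / 9700   # Right TX: 1+8+2 at 9700 baud, the padded side
right_rx_char = 10 / 9700   # time Right needs to clock in one character

# Right -> Left: Left must finish each character before the next arrives
assert left_char <= right_tx_char    # ~1.042 ms <= ~1.134 ms: plenty of room
# Left -> Right: Right's receiver is simply faster than Left's transmitter
assert right_rx_char <= left_char    # ~1.031 ms <= ~1.042 ms: still fine

print("both directions keep up")
```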

You could figure this out by running both TX at 1+8+1 and seeing which
RX gets over-run errors first, then adjusting the other end's TX. Not
pretty, but it could be made to work. Maybe OK for a one-off project, but
not for production.


Thus, there must be some kind of understanding between Left and Right
as to which of the two is the "clock master" that ultimately controls the
transmission speed. Unfortunately this is sometimes not possible, for
instance in symmetric configurations.
I agree. Hence the need for flow control.

/// Juergen
Philip



Philip Freidin
Fliptronics
 
"juergen sauermann" &lt;juergen.sauermann@t-online.de&gt; writes:
Valentin, <p>apparently you are trying to resolve the clock difference by cutting the stop bit in order to achieve a higher transmission rate.

That is a very nice idea and it is completely wrong.
If it's completely wrong, why did the ITU standardize it in the V.14
standard?

Admittedly shaving stop bits should only be used in certain limited
circumstances. It is not intended to deal with sending data to a
receiver that is running slightly slower than the transmitter. Rather,
it is used when converting slightly overspeed data from a synchronous
modem modulation to async (when no error control protocol like V.42 is
in use).

Only a small number of commercially produced UARTs, such as the NEC
uPD7201, fail to work correctly with slightly short stop bits. This
problem was commonly seen on the AT&T 7300 Unix PC in the late 1980s.
 
"Jim Granville" &lt;no.spam@designtools.co.nz&gt; writes:
UARTs look for the START edge, from the _middle_ of the STOP bit.
With x16 clocking, typically that gives 8 possible time slots for earlier
start.
Actually they usually start looking for a start transition 9/16 of the
way into the previous stop bit. Some UARTs with noise detection sample
the RX input at 7/16, 8/16, and 9/16 of the bit time, so those might not
start looking for a start bit until 10/16 of the way into the previous
stop bit.

I would agree that a half-bit jump in STOP, as the OP first suggested,
is NOT a good idea, but fractional (1/16 quantized ) STOP changes are
valid and safe.
Yes, with rare exceptions such as the NEC uPD7201, which actually requires
a full stop bit on receive.
 
Hi Scott,

I'm planning a new project related to yours, though I need both DVI input and output. May I know the source of the FPGA development board that you are using? Where did you buy this board?

Thanks,
Julian
 
15 NANDS
------------ = 3
5 NANDS

--
Greg
readgc.invalid@hotmail.com.invalid
(Remove the '.invalid' twice to send Email)


"kpk" &lt;kkaranasos@in.gr&gt; wrote in message
news:f2753b28.0312310022.7c2153f6@posting.google.com...
Can anyone send me a 4-bit binary divider circuit to this email:
kkaranasos@in.gr? I have to do this homework for my university and I
am late.
I have to build this circuit using only NAND gates.

PLEASE HELP !!!!!!!!!

Thanks a lot
 
Dear colleagues,

I need to find out which verification tool would be better
(overall) for the (co-)verification of SoCs/ASICs. We are thinking
of making a comparison between Seamless and the Quickturn/Cadence
product, i.e. Cobalt.

I know Cobalt has certain benefits, such as faster speed and higher
capacity.

But I still need to know the following:

- Complexity of setting up the environment (both Seamless and Specman)
- User-friendliness
- Nature of the results (% of wrong values in the result)
- Effort required to get the environment working for simulation
- Whether it is valid for synchronous designs only, or more
- Debug capability


I would really appreciate help from the gurus of the
design/verification community.


Best regards,
rajan
My tip is HES.
www.alatek.com

Best Regards,
Griva
 
