EDK : FSL macros defined by Xilinx are wrong

On Oct 25, 7:19 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
Andy wrote:

(snip)

The TRS-80 Color Computer (Moto 6809 based) refreshed during the
vertical retrace. But there was a bit in the system controller that
could be set to turn it and video access off, while doubling the
processor clock.

I thought it was the display memory access that did the refresh.
I probably still have the service manual around somewhere.

As long as your Basic code was running, and not
waiting on a keyboard input or other event, the ROM interpreter's RAM
accesses managed to keep the RAM (at least the part of it being used)
refreshed. But if/when the code hit an error (and thus waited for user
response) you could watch the screen go from random pixels to all
white. Once the coding errors were eliminated, it was a reliable way
to double the processing speed when you did not need video.

If I remember, there were three modes. Normal mode, one that doubled
the clock speed some of the time, and one that doubled it all the time.
I never tried turning the display off, though.

-- glen
The video and processor were synchronized to access memory on opposite
clock cycles. In the mode that doubled the frequency some of the time,
the controller (address decoder too) checked to see whether RAM was
being accessed or not (as opposed to ROM or other resources), and if
not RAM, the clock was doubled for that access (the 6809 was a
completely static design, capable of even stopping the clock). We
called that the "1.5x poke". The DRAM refresh was done separately by
the controller in between video frames.

The always-doubled mode ("2x speed poke") doubled the processor clock
regardless, and the video displayed the pixels for whatever memory the
processor was accessing when those pixels were scanned. There were
usually some binary counters visible on the screen, but most of it was
random bits. In this mode the DRAM was not refreshed by the
controller, so processor accesses had to do it, which, as I
mentioned, kept the RAM alive as long as the BASIC interpreter
was running your code.

Dang, that was a fun machine... I think it is still in my attic
somewhere.

Andy
 
"Peter Alfke" <peter@xilinx.com> wrote in message
news:1193854940.231665.110490@z24g2000prh.googlegroups.com...
Hi, John
I suppose you know about the old Xilinx app note:
http://direct.xilinx.com/bvdocs/appnotes/xapp028.pdf
which would benefit from your diode trick.
Cheers
Peter Alfke, Xilinx Applications

Guys,
Beware of XAPP028...

From
http://groups.google.com/groups/search?q=xapp028+symon+MAXSKEW+group%3Acomp.arch.fpga

Quote:-
A small note of caution when using Peter's XAPP028 in Virtex II. As
well as constraining the logic to the CLBs shown in the app note, make
sure you specify a MAXSKEW attribute on the reference signal and
feedback signal to the circuit. I use 100ps. Without this the circuit
can occasionally malfunction depending on the place and route. (These
are the signals called 'from VCO divided by N' and 'from reference
frequency'.)
There was no problem when this circuit was used on older FPGAs
where the routing to the F and G lookup tables in a single CLB was
guaranteed to have low skew. In Virtex II this is no longer the case
and a single signal that goes to both the F and G inputs of a CLB can
have significant skew if not constrained. This can cause the circuit
of XAPP028 to misbehave.

HTH., Syms.
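For anyone applying Syms' advice, a MAXSKEW constraint is attached to a net in the Xilinx UCF. The net names below are placeholders standing in for the 'from VCO divided by N' and 'from reference frequency' signals in the app note:

```text
# Hypothetical net names - match them to your own design.
NET "vco_div_n" MAXSKEW = 100 ps;
NET "ref_freq"  MAXSKEW = 100 ps;
```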
 
On Oct 31, 3:47 pm, "Symon" <symon_bre...@hotmail.com> wrote:
"Peter Alfke" <pe...@xilinx.com> wrote in message

news:1193854940.231665.110490@z24g2000prh.googlegroups.com...
Hi, John
I suppose you know about the old Xilinx app note:
http://direct.xilinx.com/bvdocs/appnotes/xapp028.pdf
which would benefit from your diode trick.
Cheers
Peter Alfke, Xilinx Applications

Guys,
Beware of XAPP028...

From http://groups.google.com/groups/search?q=xapp028+symon+MAXSKEW+group%...

Quote:-
A small note of caution when using Peter's XAPP028 in Virtex II. As
well as constraining the logic to the CLBs shown in the app note, make
sure you specify a MAXSKEW attribute on the reference signal and
feedback signal to the circuit. I use 100ps. Without this the circuit
can occasionally malfunction depending on the place and route. (These
are the signals called 'from VCO divided by N' and 'from reference
frequency'.)
There was no problem when this circuit was used on older FPGAs
where the routing to the F and G lookup tables in a single CLB was
guaranteed to have low skew. In Virtex II this is no longer the case
and a single signal that goes to both the F and G inputs of a CLB can
have significant skew if not constrained. This can cause the circuit
of XAPP028 to misbehave.

HTH., Syms.

Thanks, Syms, for pointing this out.
I published this in 1990, in the XC3000 era, and I was proud of
packing it so nicely.
Your comment makes me retire the circuit, but it will unfortunately
survive on the internet...
Peter
 
The point is current spreading. Because they aren't intended for handling
large forward currents, their junctions aren't designed to handle the
thermal effects. Like an SCR's dI/dt rating, local heating can cause
failure not expected for that current level.
Do you have a serious reference for this?
 
In article <b66e6525cd1c8c9f136acf9d755@news.ks.uiuc.edu>,
Matthew Hicks <mdhicks2@uiuc.edu> wrote:
In FPGAs, configurations can be stored in Flash in an encrypted format that
only the FPGA to be configured has the key to. During configuration, the
FPGA does the decryption, so even data over the Flash-to-FPGA channel is
secure. How the FPGA keeps its key secure, I don't remember. Maybe there
is an analogue to this in MCU land.
Specifically, Altera Stratix-II FPGAs have AES-128 decryption and OTP (fuse)
non-readable key storage for the configuration bitstream.

So: run Linux on a NIOS soft core in one of these FPGAs. Encrypt the code
in flash. Add decryption units with keys to the memory interfaces (or limit
yourself to the memory built into the FPGA). The decryption unit and keys
are encrypted in the Stratix-II bitstream, so they can't be read.

Even if you were able to read the fuse settings somehow, you would then have
to reverse-engineer the undocumented bit-stream format.

I think this is all bad, except for protecting nuclear weapons. There would
be no hacked iPhones if their firmware were encrypted this well. Vernor
Vinge's _Rainbows End_ tells of a computer engineer who could no longer
tinker with hardware due to her invention of a secure hardware environment.
--
/* jhallen@world.std.com AB1GO */ /* Joseph H. Allen */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}
 
Nothing's ever completely secure, it just gets more difficult to
crack.

A platform depending on an encrypted code memory interface can be
vulnerable in many ways, for example when the code memory interface
encrypts but doesn't also authenticate what it decrypts.
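Marc's point about missing authentication can be illustrated with any unauthenticated stream cipher: an attacker who can guess part of the plaintext can flip ciphertext bits and change the decrypted code without ever knowing the key. A toy sketch (the SHA-256-based keystream and the instruction strings are invented for illustration, not any real bitstream format):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: iterated SHA-256 of key + block counter (illustration only).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"secret device key"
plaintext = b"JMP boot_normal "     # attacker guesses this instruction
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# Attacker XORs in (guessed plaintext ^ desired plaintext) - no key needed:
desired = b"JMP boot_hacked "
forged = xor(ciphertext, xor(plaintext, desired))

# The device decrypts the forgery to the attacker's chosen code:
assert xor(forged, keystream(key, len(forged))) == desired
```

With authentication (e.g. a MAC over the ciphertext), the forged block would be rejected before decryption.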

Regards,
Marc
 
Toni Merwec wrote:
Has anyone ever had a similar problem and knows about an adequate solution?
I don't know if this meets the jitter precision you need, but we're
using the following setup for 66MHz fpga-fpga communication here:

First FPGA gets an external clock, synchronizes its internal clock to
this via DCM.

A second DCM in the first FPGA outputs the clock to a pin; that pin is
fed back into another pin and on into the DCM, so the clock on the pin
is synchronous to the internal clock.

Second FPGA gets clock from first FPGA, synchronizes its internal clock
to this via DCM.

HTH,

- Philip
--
Wisdom is eaten by the spoonful. Some people
use a fork...
 
"Toni Merwec" <mistertorpedo@freenet.de> wrote in message
news:473d6db5$0$13113$9b4e6d93@newsspool2.arcor-online.net...
I'll be using the Xilinx Virtex-4 FX series FPGAs featuring the high-speed
MGTs. Unfortunately that leads to a clock signal that has to be
distributed to at least 6 FPGA clock inputs.

I don't think that a regular low-jitter clock device (and it HAS to be
low-jitter, since it is the reference for the MGTs) can drive 6 inputs over
several centimeters. I already used the ICS843020 clock synthesizer in
several other projects and wanted to use it again. Reason for the ICS is
that it features a programmable output frequency in the range of 35 - 700
MHz.
Maybe a clock buffer or multi-output clock distribution device is the
solution here, but I am afraid every additional device in the clock
network would introduce additional jitter which is the most critical
aspect in this application. Therefore I would prefer a solution without
that kind of device... if possible.

Hi Toni,
A proper clock distribution device will introduce very little jitter. Use
something like this:
http://www.micrel.com/_PDF/HBW/sy89832u.pdf

Filter its supplies properly.

HTH., Syms.
 
MyHDL looks really nice!
Is it possible to have, in an FSM, one procedure or function for each
state? It would make the program code easy to read. It would also be
better to have an array of functions and call them according to the FSM
state number. Do you have any idea how to implement this in MyHDL?
Here is a vivid, wishful pseudo-Python example of what I mean:

def read(args):
    ...  # do something
def write(args):
    ...  # do something
...
function = {1: read, 2: write, 3: sleep, 4: wake}
next_state = {1: 3, 2: 3, 3: 3, 4: 1}
def FSM_states_switch():
    state = next_state[state]
def FSM_output_function():
    function[state](args)
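In plain Python (setting aside MyHDL's generator semantics, which I won't speak for), a per-state function table is just a dict of callables. The state numbers and handler bodies below are made up for illustration:

```python
# Hypothetical per-state handlers; replace the bodies with real work.
def read(args):  return f"read {args}"
def write(args): return f"write {args}"
def sleep(args): return "sleeping"
def wake(args):  return "waking"

function   = {1: read, 2: write, 3: sleep, 4: wake}  # output per state
next_state = {1: 3, 2: 3, 3: 3, 4: 1}                # transition table

state = 1

def fsm_step(args):
    global state
    out = function[state](args)   # run the output function for this state
    state = next_state[state]     # then switch to the next state
    return out

print(fsm_step("addr=0x10"))      # state 1: read, then go to state 3
print(fsm_step(None))             # state 3: sleep, stay in state 3
```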
Try putting Python and MyHDL on your CV/resume instead of VHDL or
(System)Verilog and see how many job offers you get...
;-)

As and when (potential) employers adopt more modern development
methodologies, then so do/will I.
 
Refuting can be useful.
<snip />

Well refuted sir!

A great advantage of VHDL's strong (compile-time) type checking is that it
much reduces the need for a linting-type tool. Reading some styles of code,
however, indicates that it does not eliminate it, however!
;-)
 
What tools do you prefer? Why ?
I use the tools (editor, simulator, synthesis) that my employer's IT
department give me to use. They are good enough. Next job the tools may be
different, but they will still be good enough, because I am good enough to
use them sufficiently well.

HTH
;-)
 
On Nov 29, 8:42 am, "Denkedran Joe" <denkedran...@googlemail.com>
wrote:
Hi all,

I'm working on a hardware implementation (FPGA) of a lossless compression
algorithm for a real-time application. The data will be fed in to the
system, will then be compressed on-the-fly and then transmitted further.

The average compression ratio is 3:1, so I'm gonna use some FIFOs of a
certain size and start reading data out of the FIFO after a fixed
startup-time. The readout rate will be 1/3 of the input data rate. The size
of the FIFOs is determined by the experimental variance of the mean
compression ratio. Nonetheless there are possible circumstances in which no
compression can be achieved. Since the overall system does not support
variable bitrates a faster transmission is no solution here.

So my idea was to put the question to all of you what to do in case of
uncompressibility? Any ideas?

Denkedran Joe
If the compression must be lossless, and you can not increase the bit
rate, you need to allow for the buffering of the input data to grow in
size to accommodate the worst case. If you can not build a big enough
FIFO inside the FPGA, add some external memory and use it as a FIFO.


Is the hardware already designed? What are your data rates, and do you
know what the worst case compression is?

Regards,

John McCaskill
www.fastertechnology.com
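John's buffering point can be explored with a toy occupancy model: push one input word per cycle (becoming 1/ratio compressed words), drain at the fixed 1/3-rate readout, and watch the FIFO depth when the ratio drops from 3:1 to 1:1. The rates and run length are invented for illustration:

```python
def fifo_depth(ratios, drain_per_cycle=1/3):
    """Peak FIFO occupancy in compressed words.
    ratios[i] is the compression ratio achieved on cycle i's input word."""
    depth, peak = 0.0, 0.0
    for r in ratios:
        depth += 1.0 / r                           # compressed words pushed
        depth = max(0.0, depth - drain_per_cycle)  # fixed-rate readout
        peak = max(peak, depth)
    return peak

# At the average 3:1 ratio, input and output balance: FIFO stays empty.
print(fifo_depth([3.0] * 1000))
# Incompressible data (1:1) grows the FIFO by ~2/3 word every cycle,
# without bound - which is exactly the worst-case sizing problem.
print(fifo_depth([1.0] * 1000))
```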
 
On Nov 29, 6:42 am, "Denkedran Joe" <denkedran...@googlemail.com>
wrote:
Hi all,

I'm working on a hardware implementation (FPGA) of a lossless compression
algorithm for a real-time application. The data will be fed in to the
system, will then be compressed on-the-fly and then transmitted further.

The average compression ratio is 3:1, so I'm gonna use some FIFOs of a
certain size and start reading data out of the FIFO after a fixed
startup-time. The readout rate will be 1/3 of the input data rate. The size
of the FIFOs is determined by the experimental variance of the mean
compression ratio. Nonetheless there are possible circumstances in which no
compression can be achieved. Since the overall system does not support
variable bitrates a faster transmission is no solution here.

So my idea was to put the question to all of you what to do in case of
uncompressibility? Any ideas?

Denkedran Joe
You cannot solve your problem losslessly. You must ensure your
image is in a state that guarantees compressibility, or your stream
will occasionally require more bandwidth than is available; you'd need
infinite FIFOs to cover worst-case situations.

You MUST have a lossy fallback OR supply enough bandwidth to
accommodate uncompressed data as a fallback. Variable bit-rate multi-
channel systems can borrow bandwidth from each other since -
statistically - all channels do not experience poor compression at
once unless they're all transmitting similarly uncompressible images.

If your video is dynamic in movement and in color for a length of
time, your stream will exceed your channel bandwidth. If you have
very large (disk based?) FIFOs, you can drop video for a short time
and pick back up when the compressed stream is better behaved and you
can receive continuous video again. You will not be able to recover
the delay that you introduced from the compression on the receive side
unless you skip some received video (which is lossy) or speed up the
playback. Can you deal with a fixed delay of seconds or minutes once
you've experienced a period of poor compression?

No lossless compression scheme can compress everything. You can only
have better compression schemes that will fail less often or present a
fallback: lossy compression or variable bit rates.

There are no finite alternatives; it's one of the basic principles of
compression. There are no smarter algos, just better compression
schemes that fail less often.

- John_H
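John_H's counting argument is easy to observe with any real lossless codec: feed it data with no redundancy and the output gets bigger, not smaller. A quick check with Python's zlib (DEFLATE):

```python
import os
import zlib

redundant = b"AB" * 50_000        # highly repetitive: compresses very well
random_ish = os.urandom(100_000)  # no exploitable structure

print(len(zlib.compress(redundant)))   # a few hundred bytes
print(len(zlib.compress(random_ish)))  # slightly LARGER than the input
```

The expansion on random input is small (framing overhead), but it is exactly the "fail less often" behavior described above: no scheme shrinks every input.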
 
It isn't right to apply lossless algorithms to fixed-bandwidth
systems. There is always a dark corner case with a fully uncorrelated
data set, where the compression ratio will be 1:1 (or even worse
for prediction-based algorithms whose distribution model does not
match the data).

Lossless algorithms are perfect in storage systems for saving space,
but a transport channel should be wide enough
for the worst case.

In my experience, the robust solution in your case is to redesign
the system with a wider channel at the current stage.
Don't play with thin air; force yourself now and eliminate great
trouble in the future.

Digitally yours,
Michael Tsvetkov (JPEG-lossless IP Core developer)

http://www.jpegls.com
 
On Thu, 29 Nov 2007 15:42:45 +0100, Denkedran Joe
<denkedranjoe@googlemail.com> wrote:
I'm working on a hardware implementation (FPGA) of a lossless compression
algorithm for a real-time application. The data will be fed in to the
system, will then be compressed on-the-fly and then transmitted further.

The average compression ratio is 3:1, so I'm gonna use some FIFOs of a
certain size and start reading data out of the FIFO after a fixed
startup-time. The readout rate will be 1/3 of the input data rate. The size
of the FIFOs is determined by the experimental variance of the mean
compression ratio. Nonetheless there are possible circumstances in which
no compression can be achieved.
Given that uncompressible data often resembles noise, you have to ask
yourself: what would be lost?

Since the overall system does not support
variable bitrates a faster transmission is no solution here.

So my idea was to put the question to all of you what to do in case of
uncompressibility? Any ideas?
If you can identify the estimated compression beforehand and then split
the stream into a 'hard' part and an 'easy' part, then you have a way to
retain the average.



--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
 
"Boudewijn Dijkstra" <boudewijn@indes.com> writes:
On Thu, 29 Nov 2007 15:42:45 +0100, Denkedran Joe
<denkedranjoe@googlemail.com> wrote:
I'm working on a hardware implementation (FPGA) of a lossless compression
algorithm for a real-time application. The data will be fed in to the
system, will then be compressed on-the-fly and then transmitted further.

The average compression ratio is 3:1, so I'm gonna use some FIFOs of a
certain size and start reading data out of the FIFO after a fixed
startup-time. The readout rate will be 1/3 of the input data rate. The size
of the FIFOs is determined by the experimental variance of the mean
compression ratio. Nonetheless there are possible circumstances in
which no compression can be achieved.

Given that uncompressible data often resembles noise, you have to ask
yourself: what would be lost?

*Much* more information than if the signal was highly redundant.


Since the overall system does not support
variable bitrates a faster transmission is no solution here.

So my idea was to put the question to all of you what to do in case of
uncompressibility? Any ideas?

If you can identify the estimated compression beforehand and then
split the stream into a 'hard' part and an 'easy' part, then you have
a way to retain the average.

Yeah, right. And if you juggle the bowling balls when crossing
the rope-bridge, you'll not break the bridge.


Phil
--
Dear aunt, let's set so double the killer delete select all.
-- Microsoft voice recognition live demonstration
 
Problems of this type are generally known by the technical term
"insoluble".

There must be an easing of one or more of the constraints. At least the OP
has thought of one of the corner cases...

BTW, doing this "in software" won't work either, as that still would
require an unconstrained amount of memory to *never* fail.
 
On Dec 3, 4:14 am, "Boudewijn Dijkstra" <boudew...@indes.com> wrote:
On Thu, 29 Nov 2007 15:42:45 +0100, Denkedran Joe
<denkedran...@googlemail.com> wrote:

I'm working on a hardware implementation (FPGA) of a lossless compression
algorithm for a real-time application. The data will be fed in to the
system, will then be compressed on-the-fly and then transmitted further.

The average compression ratio is 3:1, so I'm gonna use some FIFOs of a
certain size and start reading data out of the FIFO after a fixed
startup-time. The readout rate will be 1/3 of the input data rate. The size
of the FIFOs is determined by the experimental variance of the mean
compression ratio. Nonetheless there are possible circumstances in which
no compression can be achieved.

Given that uncompressible data often resembles noise, you have to ask
yourself: what would be lost?
The message! Just because the message "resembles" noise does not mean
it has no information. In fact, just the opposite. Once you have a
message with no redundancy, you have a message with optimum
information content and it will appear exactly like noise.

Compression takes advantage of the portion of a message that is
predictable based on what you have seen previously in the message.
This is the content that does not look like noise. Once you take
advantage of this and recode to eliminate it, the message looks like
pure noise and is no longer compressible. But it is still a unique
message with information content that you need to convey.


Since the overall system does not support
variable bitrates a faster transmission is no solution here.

So my idea was to put the question to all of you what to do in case of
uncompressibility? Any ideas?

If you can identify the estimated compression beforehand and then split
the stream into a 'hard' part and an 'easy' part, then you have a way to
retain the average.
Doesn't that require sending additional information that is part of
the message? On the average, this will add as much, if not more to
the message than you are removing...

If you are trying to compress data without loss, you can only compress
the redundant information. If the message has no redundancy, then it
is not compressible and, with *any* coding scheme, will require more
bandwidth than if it were not coded at all.

Think of your message as a binary number of n bits. If you want to
compress it to m bits, you can identify the 2**m most often
transmitted numbers and represent them with m bits. But the remaining
numbers can not be transmitted in m bits at all. If you want to send
those you have to have a flag that says, "do not decode this number".
Now you have to transmit all n or m bits, plus the flag bit. Since
there are 2**n-2**m messages with n+1 bits and 2**m messages with m+1
bits, I think you will find the total number of bits is no less than
just sending all messages with n bits. But if the messages in the m
bit group are much more frequent, then you can reduce your *average*
number of bits sent. If you can say you will *never* send the numbers
that aren't in the m bit group, then you can compress the message
losslessly in m bits.
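rickman's flag-bit accounting can be checked numerically. Take n = 8 and m = 4 and, as a simplifying assumption, let all 2**n messages be equally likely (real sources are skewed, which is where compression wins): the 2**m "short" messages cost m+1 bits, the rest cost n+1 bits, and the average is worse than plain n-bit transmission.

```python
n, m = 8, 4
short = 2**m            # messages sent in m+1 bits (flag + m bits)
long_ = 2**n - 2**m     # messages sent in n+1 bits (flag + n bits)

total_bits = short * (m + 1) + long_ * (n + 1)
average = total_bits / 2**n
print(average)          # 8.75 bits/message, worse than the plain 8
```

Only when the short group is transmitted much more often than 16/256 of the time does the average drop below n, which is rickman's point about redundancy.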
 
fazulu deen wrote:

Is there any formula to calculate processor clock cycles per
instruction, given parameters such as the FPGA-implemented processor
clock frequency and the instruction size in bytes?
This is a design tradeoff between
Fmax, latency and device utilization.
Edit code, run a sim, check Fmax, repeat.

-- Mike Treseler
 
