EDK : FSL macros defined by Xilinx are wrong

On 16/07/13 18:07, rickman wrote:
On 7/16/2013 12:36 PM, Tom Gardner wrote:
On 16/07/13 17:00, rickman wrote:
So should I assume you are actually agnostic about the brand of FPGA?

Correct.


My primary requirement (almost the only requirement!) is high
speed serial interface. I've been looking at the Ztex 1.15b with the
highest speed variant of a spartan-6-75 for 239euro.
http://www.ztex.de/usb-fpga-1/usb-fpga-1.15.e.html
The company has a range of simple useful shields, and has
been around long enough to have an "obsolete boards" section.

I'm not familiar with your use of the terms "high speed serial
interface" and "shields". By shields do you mean daughter cards? I
think shields is the term used for Arduino daughter cards, I don't
normally see it used anywhere else.

Yes, I was using it in the Arduino sense, e.g. Ztex's PSU
module.


As to "high speed serial interface" that is a bit broad. Are you
talking about an async RS-232 type interface or something more
specific like USB high speed or Ethernet?

All I want to do is sample the output of a MAX9979 as fast as possible,
and then store and process it in fairly simple (albeit high speed) ways.
1GS/s is a good round number. 900MS/s wouldn't be bad, but
is less sexy :) A stretch goal is to sample
at 2GS/s using separate interleaved channels. Looks like the
various SERDES i/o structures will be useful, provided I can
avoid having any PHY-level encoding. Raw bitstreams only, please :)

Getting hold of a MAX9979 without paying 200euro is a
separate issue :(

So the MAX9979 is some sort of buffer chip and you want to sample the output using a SERDES or two? Why didn't you say that?
Because it wasn't important to the original question
about the toolchain.


Not all FPGAs have SERDES. I've not worked with them before so I can't
tell you which have encoding or not. I do know that Lattice was the
first company to add SERDES to their low cost line of FPGAs.
Once they did it, X and A had to as well.
I can use datasheets to assess FPGAs for my application
perfectly well, thank you. I don't, however, have experience
in using the current toolchains with all their foibles.


Certainly if you are willing to spend well over $200 there are lots of
boards around. Do you have any other requirements?

Not really, although 3+ medium-speed DACs (~1MS/s) might
be handy!

That can be done in the FPGA with a few spare pins. How much resolution do you need?
Sure can. That's why I didn't bother mentioning it!
 
[snipped]

ModelSim looks in either 'modelsim.ini' or (if you are using it)
'your_project.mpf' quite often, including for library mappings. Open these
files with a text editor (when ModelSim not running) and study them.

In your script which performs compilation you may want to include code to
delete-and-reinitialise libraries to ensure that you are using only the
latest code, for instance something like:

if { [file exists test] } {
vdel -lib test -all
}
vlib -long test
vmap test test

Different code may be required if using ModelSim 10.2 or later.

For post-synthesis simulations and post-layout simulations you will need to
reference the libraries with the technology-specific primitives.

I recommend that you Read The Fine Manuals, both the User Guide and the
Reference Manual.


---------------------------------------
Posted through http://www.FPGARelated.com
 
Hi Robert,

On 08/08/2013 15:48, RCIngham wrote:
ModelSim looks in either 'modelsim.ini' or (if you are using it)
'your_project.mpf' quite often, including for library mappings. Open these
files with a text editor (when ModelSim not running) and study them.
apparently mpf is not used. My modelsim.ini looks like this (my comments
included):

[Library]
here begins the list of libraries...
others = $MODEL_TECH/../modelsim.ini
this one is a library entry, but it looks like another ini file. Does it work
as an include?

igloo = /opt/microsemi/Libero_v10.1/Libero/lib/modelsim/precompiled/vhdl/igloo
this is mapped with vmap igloo /bla/bla/bla

syncad_vhdl_lib = /opt/microsemi/Libero_v10.1/Libero/lib/actel/syncad_vhdl_lib
this one here apparently is not found and I believe is not used anywhere.

presynth = presynth
postsynth = postsynth
postlayout = ../designer/impl1/simulation/postlayout
As with igloo above, these libraries are mapped to the paths they live in.

now begins a section related to the compiler

VHDL93 = 93

NoDebug = 0
Explicit = 0
CheckSynthesis = 0
NoVitalCheck = 0
Optimize_1164 = 1
NoVital = 0
Quiet = 0
Show_source = 0
DisableOpt = 0
ZeroIn = 0
CoverageNoSub = 0
NoCoverage = 0
Coverage = sbcef
CoverCells = 1
CoverExcludeDefault = 0
CoverFEC = 1
CoverShortCircuit = 1
CoverOpt = 3
Show_Warning1 = 1
Show_Warning2 = 1
Show_Warning3 = 1
Show_Warning4 = 1
Show_Warning5 = 1
These details I ignore for the time being; I believe they can be easily
understood with the User Manual.

while here's a section related to the simulator

IterationLimit = 5000
VoptFlow = 1

[vlog]
[]
I do not use verilog, therefore I can safely skip this part as of now...


In your script which performs compilation you may want to include code to
delete-and-reinitialise libraries to ensure that you are using only the
latest code, for instance something like:

if { [file exists test] } {
vdel -lib test -all
}
vlib -long test
vmap test test
Uhm, I thought that bringing the project up-to-date was done by make,
simply checking the files' timestamps. I understand that make cannot
guarantee that vcom has written the 'object' into the library properly,
so your suggestion might be on the safe side. The only issue with this
approach is that I would need to recompile the whole library every time.

In my run.do there is instead somewhat the opposite:

if { [file exists presynth] } {
echo "INFO: Simulation library presynth already exists"
} else {
vlib presynth
}
no need to recreate the whole project. If I remove the library I will
then need to recompile based on the missing 'object'. vmk creates
Makefile rules which are relative to empty files with the name of the
file being compiled. Every time a source is modified it is also
recompiled and added to the library. This is fairly simple because it
allows me to write rules without needing to know what type of objects
vcom generates.
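For what it's worth, the marker-file idea can be sketched directly in a do
script too; this is only an illustrative sketch (not how vmk actually works,
and it ignores inter-file dependencies, which is exactly what make adds),
with made-up file and directory names:

# recompile a source only if it is newer than its marker file
proc compile_if_newer {lib src} {
    set marker .compiled/[file tail $src].done
    file mkdir .compiled
    if {![file exists $marker] || [file mtime $src] > [file mtime $marker]} {
        vcom -93 -work $lib $src
        close [open $marker w]   ;# 'touch' the marker after a successful compile
    } else {
        echo "INFO: $src is up to date"
    }
}

compile_if_newer presynth my_pkg.vhd
compile_if_newer presynth my_top.vhd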

OTOH, I lately realized that if you organize the sources to have entities
and architectures (or package definitions and bodies) in separate files
then you can drastically reduce the amount of dependencies and therefore
recompilation.
 
On Thursday, 8 August 2013 15:19:27 UTC+2, alb wrote:
Hi everyone,

I'm trying to understand the details of each individual step from my
source code to running a postlayout simulation with ModelSim. I've
read several articles on what the steps are, and I've also checked the
default run.do script which ModelSim uses to do everything, but on top
of *what* I need to do I'd appreciate understanding *why* I need to do
so and *which* control files I need to know about.

The workflow I'm interested in is the following:

1. editing vhdl code
2. compilation
3. presynthesis simulation
4. synthesis
5. postsynthesis simulation
6. place&route
7. postlayout simulation

Step 1. is fairly easy, a text editor and I'm done (better be emacs!).
In order to perform 2. I need to use 'vcom' from ModelSim, but before
doing that I need to specify my library. In order to do that,
in the default run.do the following commands are executed:

$ vlib presynth
$ vmap presynth presynth
$ vmap igloo /path/to/igloo/library/from/actel

Now. While I understand the need to 'create' a library (BTW, what
happened to 'work'!) I was puzzled by the need to map the library, but
then I figured that when running vmap a modelsim.ini file is created for
ModelSim to look at at startup in order to know where to look for
libraries. To be more precise it would be 'vcom' that needs that piece
of information, correct?
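To make that concrete: after those vmap calls the generated modelsim.ini
should contain a [Library] section along these lines (matching the file
quoted elsewhere in the thread):

[Library]
others = $MODEL_TECH/../modelsim.ini
presynth = presynth
igloo = /path/to/igloo/library/from/actel

and both vcom and vsim read these mappings when they start.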



If this is the case, when I need to run 5. I will need to grab the vhd
generated by the synthesis (why do I need a vhd and not the edf/edn
file?), create a postsynth library, map it and then compile the file in
there together with all the necessary files for the testbench. Is that
correct?

If this is correct, when moving on to 7. it seems that I need to get the
backannotated vhdl from the p&r tool, create a postlayout library, map
it and compile the vhdl in it with the associated testbench.

If I'm on track with this, then I'd like to continue with simulating in
batch mode (I'm mainly interested in regression tests automatically
started). Here is one hit I've found on running vsim in batch mode:

https://ece.uwaterloo.ca/~ece327/protected/modelsim/htmldocs/mgchelp.htm#context=modelsim_se_user&id=16&tab=0&topic=NoNespecified

What is strange is the use of a test.do script which may severely affect
everything since I can run whatever command in a do script... I'm not
sure how much I want to depend on that. The default run.do instead has
the following part:

[skip compilation section]
vsim -L igloo -L presynth -t 1ps presynth.testbench
add wave /testbench/*
run 1000ns
log testbench/*
exit

where I presume testbench is my top-level testbench entity (what about
the architecture!?!). And I also presume that this run.do is called from
the ModelSim terminal, therefore I need to understand a little bit how
I can pass it through the command line (-c option??).
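As far as I can tell, -c is indeed the relevant option: it starts vsim in
command-line mode, and -do then feeds it the script, e.g.

vsim -c -do run.do

or, to make sure the session always terminates even if the script forgets
to exit,

vsim -c -do "do run.do; quit -f"

where run.do is the script shown above.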



It seems to me that these instructions could also be passed to vsim in
batch mode and have it logging waveforms to external files
(http://research.cs.berkeley.edu/class/cs/61c/modelsim/). For regression
testing maybe waveforms are not so interesting and a report is more
useful, maybe with a coverage report as well, but before hitting that
phase I believe I'll be looking a lot at waves and I had better be
prepared to have them somewhere.
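A sketch of what such a regression-oriented do file might contain, with
invented file names (the coverage line only applies if the sources were
compiled with vcom's coverage options):

vsim -wlf tc1.wlf -L igloo -L presynth -t 1ps presynth.testbench
log -r /testbench/*
run 1000ns
coverage report -file tc1_cov.txt
quit -f

The -wlf option sends the logged waveforms to a named file that can be
reopened later with vsim -view tc1.wlf.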



After all this rant I think I bored you already to death, but believe me
that while writing this article I checked and verified all my stupid
thoughts, ending up with knowing much more than what I did yesterday
night when I started writing...

I guess I will continue to post my reasoning as I proceed with this
quest, hoping not to annoy anyone :). And of course if anybody notices
I'm falling off track please give me a shout!

Al

--
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Hi Al,
you seem to make things much more complicated than they are.

Some simple hints:
To start modelsim from the command line:
vsim -do myscript.do

The -L option in vsim is for Verilog simulations only, isn't it?
You might leave it out, unless you do mixed-mode simulations.

If you don't explicitly mention an architecture in a vsim command, the first one is taken from the library.
Testbenches mostly have just one architecture anyway.
Design models often have multiple architectures for different implementation styles or other purposes (behavioral, structural_post_par etc.).

When you create a local library, no vmap is needed.
vlib my_library
is sufficient.
Also this can be done just once, because as you already have seen it is stored in the ini file.
If you are using libraries more often, like the vendor libraries, you should put them in a higher level ini file, so they are present for all your projects.
(Ini files are read in order, beginning with the modelsim install path, then the user's home dir and finally the project dir. The first one is global but can be partly overridden by the later ones.)

The work library is normally local, so at the creation of a project you need to do a
vlib work
just once.
Then you have a default saving you from always mentioning some self created local library name.

By the way,
behavioral simulation (presyn) and timing simulation (post-par) make sense.
What's the benefit of a post-syn simulation?
Is it just to distinguish synthesis tool bugs from design bugs?

Have a nice simulation
Eilert
 
Hi Robert,

On 08/08/2013 15:48, RCIngham wrote:
[snipped]

ModelSim looks in either 'modelsim.ini' or (if you are using it)
'your_project.mpf' quite often, including for library mappings. Open these
files with a text editor (when ModelSim not running) and study them.

apparently mpf is not used. My modelsim.ini looks like this (my comments
included):

[Library]
so here it begins a list of libraries...
others = $MODEL_TECH/../modelsim.ini

this one is a library, but looks like another ini file. Does it work as
include?
Yes, as described in their fine documentation.

[snip]

In your script which performs compilation you may want to include code to
delete-and-reinitialise libraries to ensure that you are using only the
latest code, for instance something like:
latest code, for instance something like:

if { [file exists test] } {
vdel -lib test -all
}
vlib -long test
vmap test test

Uhm, I thought that bringing the project up-to-date was done by make,
simply checking the files' timestamps. I understand that make cannot
guarantee that vcom has written the 'object' into the library properly,
so your suggestion might be on the safe side. The only issue with this
approach is that I would need to recompile the whole library every time.
If you want to use 'make' (or 'vmake') I cannot assist further. We don't
use it here. Formal synthesis builds are ALWAYS done from scratch, and I
suspect that formal verification is similar.

[snip]


---------------------------------------
Posted through http://www.FPGARelated.com
 
On 09/08/2013 09:04, goouse99@gmail.com wrote:
[]
What is strange is the use of a test.do script which may severely
affect everything since I can run whatever command in a do
script... I'm not sure how much I want to depend on that.
[]
you seem to make things much more complicated than they are.
if you refer to the quoted text above then I may have lots of reasons
not to depend on a script that may interfere with my previously compiled
code. Placing compiler options in two separate scripts can
be confusing and IMO prone to errors. There should always be 'one source
of truth'! :)

Some simple hints: To start modelsim from the command line: vsim -do
myscript.do
I'm interested in starting vsim in batch mode. Since I'm controlling my
flow with a makefile I tend to refrain from scattering my options around.

The -L option in vsim is for Verilog simulations only, isn't it? You
might leave it out, unless you do mixed-mode simulations.
from the reference manual:
-L <library_name> …
(optional) Specifies the library to search for design units instantiated
from Verilog and for VHDL default component binding. Refer to “Library
Usage” for more information. If multiple libraries are specified, each
must be preceded by the -L option. Libraries are searched in the order
in which they appear on the command line.

If you don't explicitly mention an architecture in a vsim command,
the first one is taken from the library. Testbenches mostly have just
one architecture anyway.
I may have several architectures for several testcases (even though this
is not my preferred way). And any way, it seems I can pass either the
configuration or the entity/architecture pair as object arguments.
BTW, quoting the reference manual:

"The entity may have an architecture optionally specified; if omitted
the last architecture compiled." (not the first).

When you create a local library, no vmap is needed. vlib my_library
is sufficient.
interesting. Does it mean that vcom always looks in the current
directory for libraries?

Also this can be done just once, because as you already have seen it
is stored in the ini file.
if no vmap is used then the library is not stored in the ini file. I did
a quick check: remove modelsim.ini from the current directory, run vlib
mylib, and no ini file is created.
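A quick way to check where a library name actually resolves, if I remember
the syntax correctly, is to ask the tool itself:

vmap mylib

which reports the directory 'mylib' currently maps to (and, in my
recollection, which modelsim.ini the mapping was read from).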

If you are using libraries more often, like the vendor libraries, you
should put them in a higher level ini file, so they are present for
all your projects. (Ini files are read in beginning from the modelsim
install path, then the users home dir and finally the project dir.
The first one is global but can be partly overwritten by the later
ones)
that's an interesting point. I got the impression that when using vmap
for the first time, a modelsim.ini was copied into the local directory from
the modelsim install path and then modified accordingly.

[]
By the way, behavioral simulation (presyn) and timing simulation
(post-par) make sense. What's the benefit of a post-syn simulation?
Is it just to distinguish synthesis tool bugs from design bugs?
Actually, if I'm happy with my postsynth simulation I may skip the
post-par, provided that static timing analysis doesn't report any
violation. What else do you expect from your post-par simulation?
I still might be missing the point, but I do not see what is the benefit
of a post-par simulation (even though I included it in my workflow in
the OP).
 
Hi Robert,

On 09/08/2013 11:38, RCIngham wrote:
[]
In your script which performs compilation you may want to include code
to
delete-and-reinitialise libraries to ensure that you are using only the
latest code, for instance something like:

if { [file exists test] } {
vdel -lib test -all
}
vlib -long test
vmap test test

Uhm, I thought that bringing the project up-to-date was done by make,
simply checking the files' timestamps. I understand that make cannot
guarantee that vcom has written the 'object' into the library properly,
so your suggestion might be on the safe side. The only issue with this
approach is that I would need to recompile the whole library every time.

If you want to use 'make' (or 'vmake') I cannot assist further. We don't
use it here. Formal synthesis builds are ALWAYS done from scratch, and I
suspect that formal verification is similar.
I still do not grasp the reason behind compiling everything from
scratch. VHDL has clear dependencies and if you take care (as with a
makefile) about bringing all the dependencies up to date, why should you
synthesize everything again?

If we agree that synthesis is a repeatable process, i.e. for a given
source of code+constraints it will always result in the same logic, then
I do not understand the need for building from scratch. If my assumption
is not correct, then again I wouldn't recompile something if I know it
works (if it ain't broken don't fix it!).
 
On Saturday, 10 August 2013 00:30:07 UTC+1, alb wrote:

Actually, if I'm happy with my postsynth simulation I may skip the
post-par, provided that static timing analysis doesn't report any
violation. What else do you expect from your post-par simulation?
I still might be missing the point, but I do not see what is the benefit
of a post-par simulation (even though I included it in my workflow in
the OP).
Last time I looked, Altera and Xilinx took different approaches to timing simulation. Altera's docs say FPGA designs are getting too large and timing simulations take too long, so Altera has dropped support for timing simulations for its newer devices; their docs state that you should use STA instead. Xilinx's docs, IIRC, state that timing simulation should be done because you can't rely solely on STA for their FPGAs.

I don't know what Actel recommends.
 
On Saturday, 10 August 2013 00:53:40 UTC+1, alb wrote:
On 09/08/2013 11:38, RCIngham wrote:

[]

In your script which performs compilation you may want to include code
to delete-and-reinitialise libraries to ensure that you are using only
the latest code, for instance something like:

if { [file exists test] } {
vdel -lib test -all
}
vlib -long test
vmap test test

Uhm, I thought that bringing the project up-to-date was done by make,
simply checking the files' timestamps. I understand that make cannot
guarantee that vcom has written the 'object' into the library properly,
so your suggestion might be on the safe side. The only issue with this
approach is that I would need to recompile the whole library every time.

If you want to use 'make' (or 'vmake') I cannot assist further. We don't
use it here. Formal synthesis builds are ALWAYS done from scratch, and I
suspect that formal verification is similar.

I still do not grasp the reason behind compiling everything from
scratch. VHDL has clear dependencies and if you take care (as with a
makefile) about bringing all the dependencies up to date, why should you
synthesize everything again?

If we agree that synthesis is a repeatable process, i.e. for a given
source of code+constraints it will always result in the same logic, then
I do not understand the need for building from scratch. If my assumption
is not correct, then again I wouldn't recompile something if I know it
works (if it ain't broken don't fix it!).
Generally, for consistency in formal verification you build from scratch, run the simulation, and analyse the results. Buggy build scripts are not _that_ uncommon. Also, if you're only checking timestamps, what happens when you change build options? Do all your files get recompiled?

That's not to say you should take this approach with your projects. It depends on the nature of your project.
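A from-scratch rebuild is short enough in a do script that the question
largely disappears; a minimal sketch, with an invented file list:

if { [file exists presynth] } {
    vdel -lib presynth -all
}
vlib presynth
vmap presynth presynth
foreach f { my_pkg.vhd my_design.vhd my_tb.vhd } {
    vcom -93 -work presynth $f
}

Change a build option (say, add coverage switches to that vcom line) and the
next run recompiles everything with it, which is the consistency point above.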
 
Hi Robert,

On 09/08/2013 11:38, RCIngham wrote:
[]
In your script which performs compilation you may want to include code
to delete-and-reinitialise libraries to ensure that you are using only
the latest code, for instance something like:

if { [file exists test] } {
vdel -lib test -all
}
vlib -long test
vmap test test

Uhm, I thought that bringing the project up-to-date was done by make,
simply checking the files' timestamps. I understand that make cannot
guarantee that vcom has written the 'object' into the library properly,
so your suggestion might be on the safe side. The only issue with this
approach is that I would need to recompile the whole library every time.

If you want to use 'make' (or 'vmake') I cannot assist further. We don't
use it here. Formal synthesis builds are ALWAYS done from scratch, and I
suspect that formal verification is similar.

I still do not grasp the reason behind compiling everything from
scratch. VHDL has clear dependencies and if you take care (as with a
makefile) about bringing all the dependencies up to date, why should you
synthesize everything again?

If we agree that synthesis is a repeatable process, i.e. for a given
source of code+constraints it will always result in the same logic, then
I do not understand the need for building from scratch. If my assumption
is not correct, then again I wouldn't recompile something if I know it
works (if it ain't broken don't fix it!).

A key word here is 'formal', meaning that the result will be used for
something other than an informal engineering experiment. And 'from scratch'
includes getting the files from the code repository, to ensure that you
know which version of the files is being used. We do safety-critical stuff
here, so this is important. YMMV.

However, these days it takes but a few seconds to recompile all the files
for a moderate-to-large-sized simulation, so what's not to like?


---------------------------------------
Posted through http://www.FPGARelated.com
 
Hi Al

On Saturday, 10 August 2013 01:30:07 UTC+2, alb wrote:
On 09/08/2013 09:04, goo...@mail... wrote:

[]

What is strange is the use of a test.do script which may severely
affect everything since I can run whatever command in a do
script... I'm not sure how much I want to depend on that.

As I understand it, test.do is created automatically by some tool.
So why do you use it at all if you already have your own scripts and a makefile (as you mention later)?
Just copy/paste the few things you think might be useful into your own script and ignore the other one.

you seem to make things much more complicated than they are.

if you refer to the quoted text above then I may have lots of reasons
not to depend on a script that may interfere with my previously compiled
code. Placing compiler options in two separate scripts can
be confusing and IMO prone to errors. There should always be 'one source
of truth'! :)

I agree. But you are in control of the files, so why are you bothered by the second script? Just leave it out.

Some simple hints: To start modelsim from the commandline: vsim -do
myscript.do

I'm interested in starting vsim in batch mode. Since I'm controlling my
flow with a makefile I tend to refrain from scattering my options around.

Ok, I got that wrong. Thought you just wanted to start vsim from batch, but missed that you also want to run it without the GUI appearing.

The documentation about this kind of batch mode states that you simply use i/o redirection for this purpose.

vsim tb_name <infile(.do) >outfile


The -L option in vsim is for Verilog simulations only, isn't it? You
might leave it out, unless you do mixed-mode simulations.

from the reference manual:
-L <library_name> …
(optional) Specifies the library to search for design units instantiated
from Verilog and for VHDL default component binding. Refer to “Library
Usage” for more information. If multiple libraries are specified, each
must be preceded by the -L option. Libraries are searched in the order
in which they appear on the command line.

I see. Seems like I never needed the default binding stuff, so I never came across using -L with my VHDL Testbenches. Maybe Actel handles things differently.

If you don't explicitly mention an architecture in a vsim command,
the first one is taken from the library. Testbenches mostly have just
one architecture anyway.


I may have several architectures for several testcases (even though this
is not my preferred way). And any way, it seems I can pass either the
configuration or the entity/architecture pair as object arguments.
BTW, quoting the reference manual:
As you say yourself, it's not your preferred way. I just wanted to point out that the automatically generated script works on such an assumption. Maybe the programmer was just lazy.
Of course it is always better to tell the tool specifically which entity and architecture to use, either directly or via the configuration.

"The entity may have an architecture optionally specified; if omitted
the last architecture compiled." (not the first).
Thanks for looking this up.
In any way, if one just names the entity and has several architectures, the tool might use the wrong one. So the assumption (only one arc. per tb) made by the Actel tool programmers is kind of careless and might only work within the world of their own flow.

When you create a local library, no vmap is needed. vlib my_library
is sufficient.


interesting. Does it mean that vcom always looks in the current
directory for libraries?

vcom will look up the lib paths from the ini file and (as we will see below) from the current directory too.

Actually vlib does not just take the name of a new library, but also the path.
So
vlib my_lib
really means
vlib ./my_lib
Therefore you can also create libraries elsewhere, rather than just in the local directory.
Like
vlib ~/somewhere/below/your/homedir/my_other_lib

vmap is mainly needed to switch between several versions of the same library. E.g. if some tool changes or updates require this. But it's a rare occasion.
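E.g. something like this (the second path is invented, just to show the idea):

vmap igloo /opt/microsemi/Libero_v10.1/Libero/lib/modelsim/precompiled/vhdl/igloo
vmap igloo /opt/microsemi/Libero_v11.0/Libero/lib/modelsim/precompiled/vhdl/igloo

where the second call simply overwrites the first mapping after a tool update.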

Also this can be done just once, because as you already have seen it
is stored in the ini file.

if no vmap is used then the library is not stored in the ini file. I did
a quick check: remove modelsim.ini from the current directory, run vlib
mylib, and no ini file is created.

I did that test too, and found out that even vmap does not create a local ini file. It tries to access the ones in the modelsim install path. So, apart from the information stored in the ini file, modelsim seems to be able to find libraries in the local directory all by itself.
While
vlib nixus
will be recognized in vsim,
vlib ../nullus
won't appear in the library list, since it is placed in the parent directory.
Here a vmap or a manual entry in the ini file (e.g. in case of write protection by the admin) is needed.
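The manual entry is just one more line under [Library], e.g.:

[Library]
nullus = ../nullus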

I never had problems accessing local libraries once they were created, and you really don't want your work library path to always get updated in your main ini file. Just imagine what would happen on a multi-user system with a central installation (where that file will be write-protected anyway).


If you are using libraries more often, like the vendor libraries, you
should put them in a higher level ini file, so they are present for
all your projects. (Ini files are read in beginning from the modelsim
install path, then the users home dir and finally the project dir.
The first one is global but can be partly overwritten by the later
ones)


that's an interesting point. I got the impression that when using vmap
for the first time, a modelsim.ini was copied into the local directory from
the modelsim install path and then modified accordingly.
That too, to give you some kind of template.
Still, even if you have no local ini file, modelsim knows all the library paths when you start it. Guess from where, and don't forget that there is a ~/modelsim.ini too.
(Besides, using vmap does not create a local ini file when I use it. But this may be a version- or installation-specific difference if you happen to see that behavior.)

[]

By the way, behavioral simulation (presyn) and timing simulation
(post-par) make sense. What's the benefit of a post-syn simulation?
Is it just to distinguish synthesis tool bugs from design bugs?

Actually, if I'm happy with my postsynth simulation I may skip the
post-par, provided that static timing analysis doesn't report any
violation. What else do you expect from your post-par simulation?
I still might be missing the point, but I do not see what is the benefit
of a post-par simulation (even though I included it in my workflow in
the OP).
That didn't answer the question about the benefit of the postsynth simulation.

From some followup posts and other threads I know that the use of timing simulation is under discussion, mainly because of the long time it may take for a large design.
STA gives you some valuable information almost instantly, but it depends on the assumption that your timing constraints are correct and complete.
It analyses the DUT's properties against their specs/constraints.
Still, it does not tell you how the DUT interacts with the external world.
An accurate timing simulation also requires the testbench to do some kind of board-level simulation. No easy task, indeed. But it is the only way to check for interface problems between the DUT and external devices before actually touching real hardware.
It may be good to at least do some tests of the external interfaces rather than simulating the whole design with post-par simulations.


Have a nice simulation

Eilert
 
On Thursday, August 29, 2013 2:44:05 PM UTC-7, John Larkin wrote:
We're experimenting with heat sinking an Altera Cyclone 3 FPGA. To
measure actual die temperature, we built a 19-stage ring oscillator,
followed by a divide-by-16 ripple counter, and brought that out.

The heat source is the FPGA itself: we just clocked every available
flop on the chip at 250 MHz. We stuck a thinfilm thermocouple on the
top of the BGA package, and here's what we got:

https://dl.dropboxusercontent.com/u/53724080/Thermal/R2_Temp_Cal.jpg

We can now use that curve (line, actually!) to evaluate various heat
sinking options, for both this chip and the entire board.

The equivalent prop delay per CLB seems to be about 350 ps. The prop
delay slope is about 0.1% per degree C.

--

John Larkin Highland Technology, Inc

jlarkin at highlandtechnology dot com
http://www.highlandtechnology.com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom laser drivers and controllers
Photonics and fiberoptic TTL data links
VME thermocouple, LVDT, synchro acquisition and simulation

The plot has increasing temperature with decreasing frequency which doesn't make sense. Increasing the frequency should be increasing the current consumption which results in increasing the temperature.

Are you doing this to confirm Altera's die/pkg thermal coefficient data or do they not publish that information?

Ed McGettigan
--
Xilinx Inc.
 
On 9/11/2013 7:19 PM, Ed McGettigan wrote:
On Thursday, August 29, 2013 2:44:05 PM UTC-7, John Larkin wrote:
We're experimenting with heat sinking an Altera Cyclone 3 FPGA. To
measure actual die temperature, we built a 19-stage ring oscillator,
followed by a divide-by-16 ripple counter, and brought that out.

The heat source is the FPGA itself: we just clocked every available
flop on the chip at 250 MHz. We stuck a thinfilm thermocouple on the
top of the BGA package, and here's what we got:

https://dl.dropboxusercontent.com/u/53724080/Thermal/R2_Temp_Cal.jpg

We can now use that curve (line, actually!) to evaluate various heat
sinking options, for both this chip and the entire board.

The equivalent prop delay per CLB seems to be about 350 ps. The prop
delay slope is about 0.1% per degree C.

--

John Larkin


The plot has increasing temperature with decreasing frequency which doesn't make sense. Increasing the frequency should be increasing the current consumption which results in increasing the temperature.

Are you doing this to confirm Altera's die/pkg thermal coefficient data or do they not publish that information?

Ed McGettigan
--
Xilinx Inc.

I don't think you understand what he is doing. The concept is to
construct a ring oscillator that consists of elements within the silicon
of the FPGA. This circuit will have a natural oscillation frequency
related to the delay of the circuit. The delay in the transistors will
be related to the temperature which will make the oscillation frequency
inversely related to temperature. Measure the frequency and you can
calculate the temperature. The heat dissipation of the ring oscillator
has negligible effect on the temperature.

You are confusing the cause and the effect. The temperature is the
cause and the ring oscillator frequency is the effect.

He is doing this to measure the temperature of the chip with different
heat sink designs rather than to actually design a cooling solution by
calculating results based on the thermal parameters of the various
components. Thermal design can be rather complicated if there is more
than one heat source involved. Experimental methods can yield quick
results, especially if some aspects of the design are done and fixed and
you don't really have a specific goal.
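To put rough numbers on it (my arithmetic, not from the original post): 19
stages at roughly 350 ps per stage gives a ring period of 2 x 19 x 350 ps,
about 13.3 ns, so roughly 75 MHz, or about 4.7 MHz at the pin after the
divide-by-16. With the quoted -0.1% per degree C slope, one calibration
point is enough to turn a measured frequency back into a die temperature,
e.g. in Tcl:

# rough die-temperature estimate from the measured ring frequency
set t_cal  25.0      ;# degC at the calibration point (assumed)
set f_cal  4.70e6    ;# Hz measured at t_cal (assumed)
set slope -0.001     ;# fractional frequency change per degC, from the post

proc die_temp {f_meas} {
    global t_cal f_cal slope
    expr { $t_cal + ($f_meas / $f_cal - 1.0) / $slope }
}

puts [die_temp 4.55e6]   ;# a ~3.2% frequency drop reads as roughly 57 degC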

--

Rick
 
On Thursday, September 12, 2013 1:29:47 AM UTC-7, rickman wrote:
On 9/11/2013 7:19 PM, Ed McGettigan wrote:

On Thursday, August 29, 2013 2:44:05 PM UTC-7, John Larkin wrote:
We're experimenting with heat sinking an Altera Cyclone 3 FPGA. To
measure actual die temperature, we built a 19-stage ring oscillator,
followed by a divide-by-16 ripple counter, and brought that out.

The heat source is the FPGA itself: we just clocked every available
flop on the chip at 250 MHz. We stuck a thinfilm thermocouple on the
top of the BGA package, and here's what we got:

https://dl.dropboxusercontent.com/u/53724080/Thermal/R2_Temp_Cal.jpg

We can now use that curve (line, actually!) to evaluate various heat
sinking options, for both this chip and the entire board.

The equivalent prop delay per CLB seems to be about 350 ps. The prop
delay slope is about 0.1% per degree C.

--
John Larkin

The plot has increasing temperature with decreasing frequency which
doesn't make sense. Increasing the frequency should be increasing
the current consumption which results in increasing the temperature.

Are you doing this to confirm Altera's die/pkg thermal coefficient
data or do they not publish that information?


Ed McGettigan
--
Xilinx Inc.

I don't think you understand what he is doing. The concept is to
construct a ring oscillator that consists of elements within the silicon
of the FPGA. This circuit will have a natural oscillation frequency
related to the delay of the circuit. The delay in the transistors will
be related to the temperature which will make the oscillation frequency
inversely related to temperature. Measure the frequency and you can
calculate the temperature. The heat dissipation of the ring oscillator
has negligible effect on the temperature.

You are confusing the cause and the effect. The temperature is the
cause and the ring oscillator frequency is the effect.

He is doing this to measure the temperature of the chip with different
heat sink designs rather than to actually design a cooling solution by
calculating results based on the thermal parameters of the various
components. Thermal design can be rather complicated if there is more
than one heat source involved. Experimental methods can yield quick
results, especially if some aspects of the design are done and fixed and
you don't really have a specific goal.

--
Rick

You're right I misunderstood the use of the ring oscillator
with respect to the temperature measurement. I should have
read the original post more thoroughly.

Ed McGettigan
--
Xilinx Inc.
 
Hi Andy,

On 20/09/2013 23:51, jonesandy@comcast.net wrote:
If you only have one clock frequency, here is a decent starting
point:

I'm assuming that a system clock and a divided-by-two clock with an
internal PLL are still synchronous under all conditions.

Synplify has an option to set a default clock frequency and another
to apply the clock period to all unconstrained IO.

If you have any IO that need tighter constraints than one clock
period, you will need to set those up in the constraints manager.

So far I'm using SDC files to set up constraints and let Synplify check
the constraint file. But I'll explore the constraints manager...
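For reference, the starting point Andy describes is only a few SDC lines; a
minimal sketch (clock name, period, register pin name and the I/O delay
numbers are all invented):

create_clock -name sys_clk -period 10.0 [get_ports clk]

# the internally divided-by-two clock, declared as related to sys_clk
create_generated_clock -name clk_div2 -source [get_ports clk] \
    -divide_by 2 [get_pins clk_div_reg/Q]

# blanket I/O constraints relative to the system clock
# (the clock port itself would normally be excluded from all_inputs)
set_input_delay  -clock sys_clk 3.0 [all_inputs]
set_output_delay -clock sys_clk 3.0 [all_outputs]

Individual pins that need tighter numbers can then override these with
their own set_input_delay/set_output_delay lines.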

Microsemi also has a good online tutorial for setting up constraints
for source-synchronous interfaces using virtual clocks.

I'm actually looking up the documentation for the 'SCOPE' tool, even
though I think I will convert it to SDC once everything is ruled out. I
prefer to have physical constraints and timing constraints separate.

I prefer to set timing constraints in synthesis so that they are
available for both Synplify and Designer. I do not use the Libero
front end, only Synplify and Designer.

I use ModelSim, Synplify and Designer separately. I find the Libero
front-end gets in my way most of the time... I believe that setting the
timing constraints during synthesis allows you to have more control over
the overall margin that can be taken away during P&R.

If you can avoid them, do not use multi-cycle or false path
constraints. They are very difficult to verify without much more
expensive tools. It is way too easy to relax the timing on unintended
paths with these constraints, and unless you hit just the right
conditions in a post-P&R simulation, you'll never know it (until
weird stuff starts happening in the lab or in the field).

I do not particularly like false paths either, especially because it
would mean extra effort to exclude them from verification as well...
 
That's very useful information since I'm seriously considering using a
MicroZed (or a Zybo - both have advantages and disadvantages for me).

Please continue to update us on your progress, and maybe write a
blog pointing us to how you overcome "issues" similar to the ones
you mention below.


On 22/11/13 16:57, John Larkin wrote:
We're into this signal processing project, using a microZed/ZYNQ thing as the
compute engine.

After a week or so of work by an FPGA guy and a programmer, we can now actually
read and write an FPGA register from a C program, and wiggle a bit on a
connector pin. Amazingly, the uZed eval kit does not include a demo of this, and
the default boot image does not configure the FPGA!

We're using their build tools to embed the FPGA config into the boot image. We'd
really like to be able to have a C program read a bitstream file and reconfigure
the FPGA, but we haven't been able to figure that out.

Have you asked on any of the ZedBoard/MicroZed forums?


If we run a C program that wiggles a pin as fast as it can, we can do a write to
the FPGA register about every 170 ns.

That's a useful figure to have in mind.


Without any attempts at optimization (like
dedicating the second ARM core to the loop) we see stutters (OS stealing our
CPU) that last tens or hundreds of microseconds, occasionally a full
millisecond. That might get worse if we run TCP/IP sessions or host web pages or
something, so dedicating the second ARM to realtime stuff would be good.

Personally I'm surprised that it is only a millisecond,
but then I'm a pessimist :)

I'm sure I'm teaching you to suck eggs, but you may
like to consider these points:
- contention at the hardware level, particularly w.r.t.
DRAM shared between two cores and any FPGA logic

- cache effects. Even a 486 with its minimal cache
showed interrupt latencies that were sometimes
ten times the typical latency, all due to pessimal
caching. Larger caches would probably exhibit
poorer worst-case performance

- hard realtime systems are often best designed by
determining the worst-case software main-loop time
then once per main-loop configuring the hardware,
and then letting the hardware deal with all
actions for the next main-loop

For serious HRT work, personally I'd consider the XMOS
processors -- no caches nor interrupts, so the dev system
can specify the worst-case performance. I don't know
about the Propeller chips.
 
On 22/11/13 17:35, Jan Panteltje wrote:
On a sunny day (Fri, 22 Nov 2013 08:57:12 -0800) it happened John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in
<jj2v8997iq6amci40mr1t2g5l2e040ojgu@4ax.com>:

We're into this signal processing project, using a microZed/ZYNQ thing as the
compute engine.

After a week or so of work by an FPGA guy and a programmer, we can now actually
read and write an FPGA register from a C program, and wiggle a bit on a
connector pin. Amazingly, the uZed eval kit does not include a demo of this, and
the default boot image does not configure the FPGA!

We're using their build tools to embed the FPGA config into the boot image. We'd
really like to be able to have a C program read a bitstream file and reconfigure
the FPGA, but we haven't been able to figure that out.

If we run a C program that wiggles a pin as fast as it can, we can do a write to
the FPGA register about every 170 ns. Without any attempts at optimization (like
dedicating the second ARM core to the loop) we see stutters (OS stealing our
CPU) that last tens or hundreds of microseconds, occasionally a full
millisecond. That might get worse if we run TCP/IP sessions or host web pages or
something, so dedicating the second ARM to realtime stuff would be good.

In my view an FPGA should be used for - or used as - a hardware solution.

I don't think there's any argument about that.

But it isn't the only consideration and doesn't invalidate the
concepts behind the Zynq chips.


Putting a processor in an FPGA will work, and a multitasker will constantly see I/O interrupted.
Use the processor for what the processor is good for, and do the rest in hardware.
If you want I/O speed... or any speed.

I/O speed is only one aspect. In most cases:
- predictable worst-case latency is a more significant parameter
- precision relative timing is a more significant parameter
- any processor with a cache /will/ cause problems w.r.t.
worst-case software guarantees


Else you are just building an other _slow_ mobo.
and then may as well use this:
http://www.bugblat.com/products/pif/

If you know of any boards that can be added to an RPi or something
similar and contains an FPGA that can capture three digital inputs
at >=1 GSa/s each, please let me know.

Without any defined spec for the application, who knows?
As Jim says: You are about as vague as it gets on that,

Sure, but the poster didn't want advice about his application!
 
On 22/11/13 17:35, Jan Panteltje wrote:
Else you are just building an other _slow_ mobo.
and then may as well use this:
http://www.bugblat.com/products/pif/

What's the peak rate at which the RPi could
read data from the FPGA and copy it to DRAM?

I haven't had time to understand the RPi's i/o, yet.
My concern is that while there are several i/o bits
available on the GPIO connector, they can't all be
read simultaneously. If true then GPIO i/o would be
reduced to the level of a bit-banged interface!
 
On Fri, 22 Nov 2013 18:36:59 +0000, Tom Gardner wrote:

That's very useful information since I'm seriously considering using a
MicroZed (or a Zybo - both have advantages and disadvantages for me).

Please continue to update us on your progress, and maybe write a blog
pointing us to how you overcome "issues" similar to the ones you mention
below.


On 22/11/13 16:57, John Larkin wrote:
We're into this signal processing project, using a microZed/ZYNQ thing
as the compute engine.

After a week or so of work by an FPGA guy and a programmer, we can now
actually read and write an FPGA register from a C program, and wiggle a
bit on a connector pin. Amazingly, the uZed eval kit does not include a
demo of this, and the default boot image does not configure the FPGA!

We're using their build tools to embed the FPGA config into the boot
image. We'd really like to be able to have a C program read a bitstream
file and reconfigure the FPGA, but we haven't been able to figure that
out.

Have you asked on any of the ZedBoard/MicroZed forums?


If we run a C program that wiggles a pin as fast as it can, we can do a
write to the FPGA register about every 170 ns.

That's a useful figure to have in mind.


Without any attempts at optimization (like dedicating the second ARM
core to the loop) we see stutters (OS stealing our CPU) that last tens
or hundreds of microseconds, occasionally a full millisecond. That
might get worse if we run TCP/IP sessions or host web pages or
something, so dedicating the second ARM to realtime stuff would be
good.

Personally I'm surprised that it is only a millisecond,
but then I'm a pessimist :)

I'm sure I'm teaching you to suck eggs, but you may like to consider
these points:
- contention at the hardware level, particularly w.r.t.
DRAM shared between two cores and any FPGA logic

- cache effects. Even a 486 with its minimal cache
showed interrupt latencies that were sometimes ten times the
typical latency, all due to pessimal caching. Larger caches would
probably exhibit poorer worst-case performance

- hard realtime systems are often best designed by
determining the worst-case software main-loop time then once per
main-loop configuring the hardware, and then letting the hardware
deal with all actions for the next main-loop

For serious HRT work, personally I'd consider the XMOS processors -- no
caches nor interrupts, so the dev system can specify the worst-case
performance. I don't know about the Propeller chips.

Probably more egg sucking instruction, but if the instruction cache is
big enough and allows it, you can lock down the lines that contain the
critical ISRs and OS bits. You can only take this so far: at some point
you need to either throw your hands up in despair, or find a way to fork
that job into the FPGA.

At the price of making your software inscrutable, and slowing down
everything else, etc., etc., etc.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
On 22/11/13 19:10, Tim Wescott wrote:
On Fri, 22 Nov 2013 18:36:59 +0000, Tom Gardner wrote:
For serious HRT work, personally I'd consider the XMOS processors -- no
caches nor interrupts, so the dev system can specify the worst-case
performance. I don't know about the Propeller chips.

Probably more egg sucking instruction, but if the instruction cache is
big enough and allows it, you can lock down the lines that contain the
critical ISRs and OS bits.

I know the i960 allowed you to do that. Which more modern processors do?
(No, I'm not after an exhaustive list!)
 
