EDK : FSL macros defined by Xilinx are wrong

On Tue, 11 Mar 2014 16:14:31 -0700 (PDT)
langwadt@fonz.dk wrote:

Den tirsdag den 11. marts 2014 23.18.47 UTC+1 skrev glen herrmannsfeldt:
Tom Gardner <spamjunk@blueyonder.co.uk> wrote:

(snip)

I suspect the financial community as well. They will pay extraordinary
money to shave milliseconds off transaction times. Yes, they do encode
financial algorithms into FPGA hardware.

One well known example of their ability to spend money is that one
company spent $300m laying a transatlantic cable to reduce the
RTT of 65ms by 6ms.

Must not have read "Wait: the art and science of delay."

but for some reason they say that them making billions manipulating
prices by moving numbers around milliseconds faster than everyone
else is an essential service to society

-Lasse

Financial markets are much like the colon; at some point there stops
being an upside to increasing liquidity.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.
 
On 3/11/2014 2:23 PM, Jon Elson wrote:
GaborSzakacs wrote:



A quick DigiKey search showed a range of $2,583.75 (XC7VX330T-1FFG1157C)
to $39,452.40 (XC7V2000T-G2FLG1925E). These won't end up in any of my
designs any time soon.

REALLY! 1900 balls, and all of them have to solder perfectly or the chip
has to come off and be re-balled! Arghhhh! I'd LOVE to know who is
actually USING chips that expensive. Must be the military in those
$500 Million airplanes.

Jon

.... can't meet SEU numbers in an airplane with /one/ of those. You'd
need a couple!

Rob.
 
On 11/03/14 23:14, langwadt@fonz.dk wrote:
Den tirsdag den 11. marts 2014 23.18.47 UTC+1 skrev glen herrmannsfeldt:
Tom Gardner <spamjunk@blueyonder.co.uk> wrote:

(snip)

I suspect the financial community as well. They will pay extraordinary
money to shave milliseconds off transaction times. Yes, they do encode
financial algorithms into FPGA hardware.

One well known example of their ability to spend money is that one
company spent $300m laying a transatlantic cable to reduce the
RTT of 65ms by 6ms.

Must not have read "Wait: the art and science of delay."

but for some reason they say that them making billions manipulating
prices by moving numbers around milliseconds faster than everyone
else is an essential service to society

As I understand it, they say they are going to move
billions - and then withdraw a few ms later. This
allows them to gauge the way the market is going.
Or something.
 
Jon Elson wrote:
GaborSzakacs wrote:


A quick DigiKey search showed a range of $2,583.75 (XC7VX330T-1FFG1157C)
to $39,452.40 (XC7V2000T-G2FLG1925E). These won't end up in any of my
designs any time soon.

REALLY! 1900 balls, and all of them have to solder perfectly or the chip
has to come off and be re-balled! Arghhhh! I'd LOVE to know who is
actually USING chips that expensive. Must be the military in those
$500 Million airplanes.

Jon

Xilinx's traditional market for high-end parts has been ASIC hardware
co-simulation / prototyping. Maybe as a part of that $1M NRE it's not
such a big hit to buy one or two of these.

As for the number of balls, I haven't seen any indication that soldering
failure rates go up in relation to the number of balls in a BGA, at
least with the contract manufacturers that we use. And the re-balling
expense for these would still be a lot less than buying a new part...

--
Gabor
 
On 27/03/14 13:44, alb wrote:
In the past I've always followed two simple rules:
1. never break the trunk
2. commit often

So, what do you put on the trunk and what's on the branches?

One strategy which works for some types of system and
some ways of team working is:
- to have the trunk for development
- whenever you reach a milestone of some sort,
create a branch containing the milestone
- whenever the regression tests have passed,
check in to the trunk (hopefully frequently)
- if it is necessary to save when regression
tests have not been passed, checkin to a branch

Thus
- the head of the trunk always contains the latest
working system.
- significant historical waypoints can be found on
a branch
 
It still gives an error if you have a clock for other signals in other
modules:

"Pack:1107 - Unable to combine the following symbols into a single IOB"

I tried setting the "clock buffers" option to 1 so that it contains the
first clock in the project, and it worked for me!
 
On 30/03/14 23:07, alb wrote:
> Hi Tom, (I'm answering reposting to vhdl as well to keep the thread
Thanks; I don't subscribe to that and my reader won't let me
cross-post to a group to which I'm not subscribed.

cross-posted)

Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
[]
In the past I've always followed two simple rules:
1. never break the trunk
2. commit often

So, what do you put on the trunk and what's on the branches?

The trunk has only merges from the branches [1].

The key is continuous incremental integration checked by
solid comprehensive automated tests.

My problem with your statement is that I don't agree with [1].
Merges of any kind require regression testing before re-merging.
The issue then becomes whether your merge conflicts with another merge.

That is trapped by having a *continual* background automated
build and regression test on the head of the trunk. When a merge
causes a failure, it is picked up quickly. Traditionally there is a
screen visible to everybody that shows the build status as red
or green. Collective groans are emitted when it becomes red,
and sponge balls may be thrown at the perpetrator - who keeps
the balls ready to throw at the next malefactor :)

Since failure is picked up quickly, it is usually easy to
determine what caused the failure, and to correct it.

That's a damn sight easier to deal with than the major
inconsistencies that creep in if merging/re-integration
only occur occasionally.

The major danger is that seeing a green status light can
lull the unwary into thinking that there are no problems.
Naturally the "quality" of the greenness is completely
dependent on the quality of the tests.


When you start a
project from scratch you certainly need to start from the trunk, but
that situation lasts a very small amount of time (a few days if not less)
and as soon as possible you branch from it.

Branches are - only - for two reasons [2]:

In my experience [2] (i.e. refactoring) is the tail that wags this dog.
But all development can be regarded as refactoring, e.g. refactoring
an unimplemented feature into a functioning feature.


1. adding features
2. removing bugs/problems

both of them start from the trunk and both of them should merge back in
the trunk.

Agreed, but I don't see the benefit of a branch in that process.
Trunk->workspace->trunk.


One strategy which works for some types of system and
some ways of team working is:
- to have the trunk for development

Is the trunk always functioning? If this is the case it cannot really be
for development, since every commit should be done only when the whole
changeset is working (tested).

Yes, and yes.

The head of the trunk is frequently re-built and re-tested in its
entirety - continuous incremental integration and regression testing.


- whenever you reach a milestone of some sort,
create a branch containing the milestone

this is what I call 'tagging'. Yes it is a branch, but its purpose is to
freeze the status of your development at some known state. These tags
are essential when multiple repositories are pointing at each other for
the purpose of reuse. You can point to version 1.2.3 and stick to it or
follow its evolution to whatever version.

Agreed.


- whenever the regression tests have passed,
check in to the trunk (hopefully frequently)

so the regression tests are run from your local copy? or from the
branches? I think I missed this part.

On your local copy whenever convenient for you,
and continually on the head of the trunk.


- if it is necessary to save when regression
tests have not been passed, checkin to a branch

uhm... at this point you branched to fix something while the trunk is
being developed. When do you decide that it is time to merge back into
the trunk?

Whenever convenient for you. Preferably merge to the trunk
several times a day.

Since the deltas are small there are unlikely to be
major problems; when they occur there are not many
places where the culprit could be.


Thus
- the head of the trunk always contains the latest
working system.

This contradicts your earlier statement, or I might have misunderstood
what you mean by keeping the trunk for development. If no pair of
consecutive commits breaks the trunk then we are on the same page, but
while I'm suggesting to branch and therefore commit even if you break
something, you are suggesting that no one should commit to the trunk
unless her/his piece is working (system wise).

Correct. Why save and publish something that is broken?


- significant historical waypoints can be found on
a branch

Yes, including the ones that broke your regression tests...

Those should only be in your private workspace.


The only exception is for modifications which are straightforward
and do not require more than a few minutes of work.

Ah, the infamous "this change is so small it can't
possibly break anything"!


refactoring is more delicate since it requires a solid regression
suite in place to be sure that functionality is not affected

All development is refactoring.
 
On Sunday, March 30, 2014 11:07:39 PM UTC+1, alb wrote:
I fully agree with you here, that would be my next item on my personal
agenda... but revolutionary change requires time and patience ;-).

In my own experience, I've found it's far easier to lead by example than battle the internal corporate structure - I soon got tired of arguing!

If the company is wedded to out-dated version control software I'll still use git locally. There are often wrappers[1] that make interfacing easy. I'll run GitLab to provide myself a nice HTTP code/diff browser etc. If there's no bug-tracker(!!) I'll use GitLab issues to track things locally. If the company has no regression, I'll run a Jenkins server on my box. If tests aren't scripted, I'll spend some time writing some Makefiles. If the tests aren't self-checking, I'll gradually add some pass/fail criteria so the tests become useful. I'll then start plotting graphs for things like simulation coverage, FPGA resource utilisation etc. using Jenkins.

Unless you're working in an extremely restrictive environment with no control over your development box, none of this requires sign-off from the powers that be. You'll find other developers and then management are suddenly curious to know how you can spot only a few minutes after they've checked something in that the resource utilisation for their block has doubled... or how you can say with such confidence that a certain feature has never been tested in simulation. Once they see the nice web interface of Jenkins and the pretty graphs, understand the ease with which you can see what's happening in the repository, they'll soon be asking for you to centralise your development set-up so they can all benefit :)

Chris

[1] https://www.kernel.org/pub/software/scm/git/docs/git-svn.html

PS apologies for breaking the cross-post again... curse GG
 
Den tirsdag den 8. april 2014 22.09.49 UTC+2 skrev Phil Hobbs:
On 04/08/2014 02:16 PM, John Larkin wrote:

On Tue, 08 Apr 2014 13:39:31 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 04/08/2014 01:25 PM, John Larkin wrote:

I got a spreadsheet from Altera that lists the on-chip power supply
bypass caps on an Arria II GX95 FPGA. I was kind of shocked to see 32
listed capacitors, most around 1 nF, but a Vcc_core (0.9 volt) cap of
501 nF. I was told that these caps are on-chip, not in-package.

Is that possible? 501 nF on an FPGA chip?

Maybe attached to the top of the die with micro-C4s.

Cheers

Phil Hobbs

Yeah, there could be discrete caps under the lid. But 32 of them?

Well, that would give you the least inductance, for sure.

It doesn't look like there are caps on/in the BGA carrier, which
appears to be a 12-layer PCB.

https://dl.dropboxusercontent.com/u/53724080/Parts/BGAs/3.jpg
https://dl.dropboxusercontent.com/u/53724080/Parts/BGAs/6.jpg
https://dl.dropboxusercontent.com/u/53724080/Parts/BGAs/GX95_caps.pdf

Cool, however they do it. Every chip should have internal bypasses.

They used to sell DIP sockets with built-in bypass caps... which
unfortunately had about an inch of lead length.

That was kinda hard to get around since someone for some reason had
decided to put gnd and vcc at opposite ends of the chips

-Lasse
 
On 4/8/2014 5:59 PM, langwadt@fonz.dk wrote:
Den tirsdag den 8. april 2014 22.09.49 UTC+2 skrev Phil Hobbs:
On 04/08/2014 02:16 PM, John Larkin wrote:

On Tue, 08 Apr 2014 13:39:31 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 04/08/2014 01:25 PM, John Larkin wrote:

I got a spreadsheet from Altera that lists the on-chip power supply
bypass caps on an Arria II GX95 FPGA. I was kind of shocked to see 32
listed capacitors, most around 1 nF, but a Vcc_core (0.9 volt) cap of
501 nF. I was told that these caps are on-chip, not in-package.

Is that possible? 501 nF on an FPGA chip?

Maybe attached to the top of the die with micro-C4s.

Cheers

Phil Hobbs

Yeah, there could be discrete caps under the lid. But 32 of them?

Well, that would give you the least inductance, for sure.

It doesn't look like there are caps on/in the BGA carrier, which
appears to be a 12-layer PCB.

https://dl.dropboxusercontent.com/u/53724080/Parts/BGAs/3.jpg
https://dl.dropboxusercontent.com/u/53724080/Parts/BGAs/6.jpg
https://dl.dropboxusercontent.com/u/53724080/Parts/BGAs/GX95_caps.pdf

Cool, however they do it. Every chip should have internal bypasses.

They used to sell DIP sockets with built-in bypass caps... which
unfortunately had about an inch of lead length.

That was kinda hard to get around since someone for some reason had
decided to put gnd and vcc at opposite ends of the chips

-Lasse

That just created a whole new market for the Rogers Q-cap (TM).
Those caps were big flat things the size of the socket with
leads on opposite corners to match the TTL pinouts. You could also
use them under an IC without a socket by sharing the same component
holes.

As for on-BGA caps, your X-Ray inspection will show these quite
nicely.

--
Gabor
 
Hi, I need Viterbi decoder code for my project, where a convolutional
encoder is used for encoding. If possible, can you please point me to
some websites regarding this?
Thank you
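For what it's worth, here is a minimal hard-decision Viterbi decoder sketch in Python. The specifics are my assumptions, not anything from the original request: I picked the classic rate-1/2, constraint-length-3 code with (7, 5) octal generators, and zero-tail termination so the decoder can start and end in state 0.

```python
# Hypothetical example: rate-1/2, K=3 convolutional code with the
# common (7, 5) octal generator polynomials; hard-decision Viterbi.

K = 3                    # constraint length (assumed)
NSTATES = 1 << (K - 1)   # 4 trellis states = the last K-1 input bits

def _branch(state, bit):
    """One encoder step: (out0, out1, next_state) for input 'bit' in 'state'."""
    out0 = bit ^ (state >> 1) ^ (state & 1)   # generator 111 (octal 7)
    out1 = bit ^ (state & 1)                  # generator 101 (octal 5)
    return out0, out1, (bit << 1) | (state >> 1)

def conv_encode(bits):
    """Encode a bit list, appending K-1 zero tail bits to terminate in state 0."""
    state, out = 0, []
    for b in list(bits) + [0] * (K - 1):
        o0, o1, state = _branch(state, b)
        out += [o0, o1]
    return out

def viterbi_decode(rx):
    """Hard-decision Viterbi over a zero-terminated stream; returns info bits."""
    INF = float("inf")
    metric = [0] + [INF] * (NSTATES - 1)      # encoder starts in state 0
    paths = [[] for _ in range(NSTATES)]
    for t in range(len(rx) // 2):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        new_metric = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:              # state not yet reachable
                continue
            for b in (0, 1):
                o0, o1, ns = _branch(s, b)
                m = metric[s] + (o0 != r0) + (o1 != r1)  # Hamming metric
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[0][:-(K - 1)]                # survivor ending in state 0, tail dropped
```

Because the (7, 5) code has free distance 5, this recovers the message even when an isolated received bit is flipped. A hardware implementation would of course use a traceback memory instead of per-state path lists, but the trellis is the same.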
 
sagarmemane4@gmail.com wrote:
On Friday, March 28, 2014 11:03:46 AM UTC+5:30, ahmad...@gmail.com wrote:
It still gives an error if you have a clock for other signals in other
modules:

"Pack:1107 - Unable to combine the following symbols into a single IOB"

I tried setting the "clock buffers" option to 1 so that it contains the
first clock in the project, and it worked for me!

ERROR:MapLib:93 - Illegal LOC on IPAD symbol "autman" or BUFGP symbol
"autman_BUFGP" (output signal=autman_BUFGP), IPAD-IBUFG should only be LOCed
to GCLKIOB site.

same error
how to solve????

where is this tab?

My first guess is that you have unwittingly created latches in your
design, because "autman" doesn't sound like the typical signal name
for a clock. Check your synthesis warnings for latches.

On the other hand, if "autman" is actually a clock signal, then the
error message is pretty explicit in saying it should be assigned to
a global clock-capable pin. If you already have a board layout and
want to try to reduce the error to a warning, you could add this to
your .ucf file:

NET "autman" CLOCK_DEDICATED_ROUTE = FALSE;

On the other hand, whether that works or not depends on which FPGA
family you're working with. Some of the newer devices have no way
to use general routing resources to hook up clocks.

--
Gabor
 
sagarmemane4@gmail.com wrote:

ERROR:MapLib:93 - Illegal LOC on IPAD symbol "autman" or BUFGP symbol
"autman_BUFGP" (output signal=autman_BUFGP), IPAD-IBUFG should only be
LOCed to GCLKIOB site.


same error

how to solve this problem

I am using version 9.1.

Global clocks can only be input at specific pins of an FPGA. If you
absolutely MUST use a non-global clock pin, then you need to route
it through the fabric, and the timing will be less well controlled.

Jon
 
On Saturday, February 18, 1995 9:26:05 AM UTC+5:30, u801...@cc.nctu.edu.tw wrote:
Hello,

I would like to know the differences among them; I was always confused
by them all.

In my previous impression, they are:

PAL: programmable AND, fixed OR
PLD: programmable AND, programmable OR
PLA: ???????????? AND, ???????????? OR
GAL=PLD ??

Please correct the above, Thanks in advance!

Jason
 
azimalimoll@gmail.com wrote:
On Saturday, February 18, 1995 9:26:05 AM UTC+5:30, u801...@cc.nctu.edu.tw wrote:
Hello,

I would like to know the differences among them; I was always confused
by them all.

In my previous impression, they are:

PAL: programmable AND, fixed OR
PLD: programmable AND, programmable OR
PLA: ???????????? AND, ???????????? OR
GAL=PLD ??

Please correct the above, Thanks in advance!

Jason

PAL was programmable AND, fixed OR. And it was a bipolar process
with fusible links for programming (one-time programmable).

GAL, also called PALCE, was the exact same thing as a PAL from
the architecture standpoint, but was CMOS and electrically erasable
and reprogrammable.

PLD is just a generic term for all programmable logic devices, and
does not imply any particular architecture.

PLA was used mostly for programmable AND / programmable OR parts
as you noted, but there may have been exceptions.

CPLD or "complex" PLD usually implies a PAL or PLA architecture
with multiple PAL or PLA-like blocks interconnected by a global
matrix. However in recent years, the term is also used for small
FPGA devices that have built-in non-volatile configuration memory.
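The taxonomy above can be illustrated with a toy model: a two-level sum-of-products array is an AND plane feeding an OR plane, and the PAL/PLA distinction is simply which planes are programmable. The Python representation below is my own sketch, not any vendor's fuse-map format:

```python
from itertools import product

def sop(inputs, and_plane, or_plane):
    """Evaluate a two-level AND/OR (sum-of-products) array.

    and_plane: one dict per product term, mapping input index -> required
               value (a programmed fuse picking the true or complemented
               input); inputs absent from the dict are don't-cares.
    or_plane:  one list per output, naming the product terms ORed into it.
               In a PAL this OR wiring is fixed at manufacture; in a PLA
               it is programmable too.
    """
    terms = [all(inputs[i] == v for i, v in t.items()) for t in and_plane]
    return [int(any(terms[j] for j in out)) for out in or_plane]

# XOR as two product terms: a.b' + a'.b
and_plane = [{0: 1, 1: 0}, {0: 0, 1: 1}]
or_plane = [[0, 1]]   # a single output ORing both product terms

for a, b in product((0, 1), repeat=2):
    print((a, b), "->", sop([a, b], and_plane, or_plane))
```

Note that the fixed OR plane is exactly why a PAL cannot share a product term between outputs the way a PLA can: each output only sees its own hard-wired group of terms.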

--
Gabor
 
On 6/16/2014 4:05 PM, GaborSzakacs wrote:
azimalimoll@gmail.com wrote:
On Saturday, February 18, 1995 9:26:05 AM UTC+5:30,
u801...@cc.nctu.edu.tw wrote:
Hello,

I would like to know the differences among them; I was always confused
by them all.

In my previous impression, they are:

PAL: programmable AND, fixed OR
PLD: programmable AND, programmable OR
PLA: ???????????? AND, ???????????? OR
GAL=PLD ??

Please correct the above, Thanks in advance!

Jason


PAL was programmable AND, fixed OR. And it was a bipolar process
with fusible links for programming (one-time programmable).

GAL, also called PALCE, was the exact same thing as a PAL from
the architecture standpoint, but was CMOS and electrically erasable
and reprogrammable.

PLD is just a generic term for all programmable logic devices, and
does not imply any particular architecture.

PLA was used mostly for programmable AND / programmable OR parts
as you noted, but there may have been exceptions.

CPLD or "complex" PLD usually implies a PAL or PLA architecture
with multiple PAL or PLA-like blocks interconnected by a global
matrix. However in recent years, the term is also used for small
FPGA devices that have built-in non-volatile configuration memory.

Not much can be added to Gabor's explanation. But for the most part
these terms are not of much use today and are mostly either historical
or marketing. PLD in theory covers it all including FPGAs, but is often
used to denote a smaller device at the low price end of the scale. Even
that is becoming blurred as some FPGAs start to show up at the low end
of pricing.

So if you want to design a logic device into your system, don't worry
with terms and abbreviations. Just decide what features you need and
select a part that works for you.

--

Rick
 
On 18/06/14 00:34, rickman wrote:
> So if you want to design a logic device into your system, don't worry with terms and abbreviations. Just decide what features you need and select a part that works for you.

That's sensible.

A rule of thumb when /starting/ to select a device is:
- CPLD: smaller, fewer flip flops, lower maximum clock
speed, but more predictable timing that doesn't
change much when you vary the implemented function.
Toolsets might be significantly simpler.
- FPGA: larger, more flip flops and other functions,
higher maximum internal clock speed (but similar
external clock speed), but timing that can vary
significantly with apparently trivial changes to the
VHDL/Verilog.

but /completing/ selection of a device requires a detailed
understanding of its capabilities w.r.t. your application.

For beginners, I'd suggest starting with a CPLD unless
they require an FPGA's capabilities.
 
On Wednesday, June 18, 2014 8:55:58 AM UTC+1, Tom Gardner wrote:
On 18/06/14 00:34, rickman wrote:

So if you want to design a logic device into your system, don't worry
with terms and abbreviations. Just decide what features you need and
select a part that works for you.

That's sensible.

A rule of thumb when /starting/ to select a device is:
- CPLD: smaller, fewer flip flops, lower maximum clock
speed, but more predictable timing that doesn't
change much when you vary the implemented function.
Toolsets might be significantly simpler.
- FPGA: larger, more flip flops and other functions,
higher maximum internal clock speed (but similar
external clock speed), but timing that can vary
significantly with apparently trivial changes to the
VHDL/Verilog.

but /completing/ selection of a device requires a detailed
understanding of its capabilities w.r.t. your application.

For beginners, I'd suggest starting with a CPLD unless
they require an FPGA's capabilities.
Also note that a CPLD tends to need one supply whilst an FPGA tends to need at least three.
 
rickman <gnuarm@gmail.com> wrote:
Not much can be added to Gabor's explanation. But for the most part
these terms are not of much use today and are mostly either historical
or marketing.

Unsurprising since the original post was 19 years ago...

Theo
 
"colin" <colin_toogood@yahoo.com> wrote in message
news:00ed3019-24c7-43f6-83fa-f3c3d1669bc3@googlegroups.com...
On Wednesday, June 18, 2014 8:55:58 AM UTC+1, Tom Gardner wrote:
On 18/06/14 00:34, rickman wrote:

So if you want to design a logic device into your system, don't worry
with terms and abbreviations. Just decide what features you need and
select a part that works for you.

That's sensible.

A rule of thumb when /starting/ to select a device is:
- CPLD: smaller, fewer flip flops, lower maximum clock
speed, but more predictable timing that doesn't
change much when you vary the implemented function.
Toolsets might be significantly simpler.
- FPGA: larger, more flip flops and other functions,
higher maximum internal clock speed (but similar
external clock speed), but timing that can vary
significantly with apparently trivial changes to the
VHDL/Verilog.

but /completing/ selection of a device requires a detailed
understanding of its capabilities w.r.t. your application.

For beginners, I'd suggest starting with a CPLD unless
they require an FPGA's capabilities.

Also note that a CPLD tends to need one supply whilst an FPGA tends to
need at least three.

A CPLD will generally not have any RAM available; any storage you need
has to be made up from the normal registers available throughout the
fabric. It will also not have any integrated MACs or other DSP
components. It will contain shadow flash configuration memory within
the device, so it will tend to boot instantly.

An FPGA will have much more RAM and many MAC/DSP components, and
higher-end newer devices will also have processors integrated into the
fabric. They tend to be volatile devices and require external
configuration memory, which slows the boot time to several hundred
milliseconds.
 
