EDK : FSL macros defined by Xilinx are wrong

On Sat, 18 Oct 2014 15:53:31 +0800
Bruce Varley wrote:

I have two clocks, clk which is the FPGA clock rate, and sclk which I
create using a simple divide by n counter. Typically, sclk is 1024
times slower than clk.

An event occurs that sets a reg, DR, for one clk cycle.

There is a register, calcreg [7:0] which is to be incremented slowly,
but reset to zero on DR.

Your problem is a hair underdefined here. Is it imperative that
calcreg be reset to zero at the exact time of DR, or is it good enough
that the DR ensure that calcreg go to zero on the next sclk edge?

Is using the two clocks simply bad practice, ie. should everything be
done in a single always block at clk rate?

"Bad practice" is a bit of a blanket statement, but in this instance
yes. The two clock domains are causing you unnecessary grief and
probably can't be justified. mnentwig already provided example code
for how to do it synchronously; the /1024 counter becomes a "count up"
enable input, and DR becomes a synchronous reset.

In an FPGA, any time you find yourself with multiple clock domains you
should always ask "What is the technical reason that _requires_ it be
this way?" Sometimes that question will have a good answer, but when
it doesn't, keep it all on the same clock.
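A minimal sketch of that single-clock structure (Verilog; the divider width, the module name, and the reset behavior are assumptions based on the numbers in the question, not code from the thread):

```verilog
// Sketch only: single-clock version of the sclk/DR problem above.
// The /1024 divider becomes a one-cycle enable ("tick"), and DR
// becomes a synchronous reset. Widths and names are assumptions.
module slow_counter (
    input  wire       clk,       // the one and only clock
    input  wire       DR,        // one-clk-wide event pulse
    output reg  [7:0] calcreg = 8'd0
);
    reg  [9:0] div  = 10'd0;               // free-running /1024 counter
    wire       tick = (div == 10'd1023);   // asserted one clk in 1024

    always @(posedge clk) begin
        div <= div + 10'd1;                // wraps naturally at 1024
        if (DR)
            calcreg <= 8'd0;               // synchronous reset wins
        else if (tick)
            calcreg <= calcreg + 8'd1;     // increments at clk/1024
    end
endmodule
```

Everything is now timed by clk, so ordinary static timing analysis covers the whole path.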

Is there a standard way to latch the DR signal when it occurs on the
fast clock, so that it will be there on the next transition of sclk,
which must then clear the DR latch? I've tried this, and come up with
the same build error with the latch.

Various forms of asynchronous horribleness. Gabor gave you one, Jan's
use of the async clear is another, and I'll present you a third by
saying to Google "flancter" (a clever little arrangement with two
cross-coupled flops and an XOR gate).
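For reference, the flancter is roughly this (a hedged Verilog sketch of the two-flop-plus-XOR idea; the module and signal names are illustrative, and the flag output still needs a synchronizer in whichever domain reads it):

```verilog
// Flancter sketch: set the flag in one clock domain, clear it in
// another, with each flop written only in its own domain.
// Names are illustrative, not from the thread.
module flancter (
    input  wire clk,   // fast domain: sets the flag
    input  wire sclk,  // slow domain: clears the flag
    input  wire set,   // one-clk-wide set pulse (e.g. DR)
    input  wire clr,   // one-sclk-wide clear pulse
    output wire flag
);
    reg ff_set = 1'b0; // lives in the clk domain
    reg ff_clr = 1'b0; // lives in the sclk domain

    always @(posedge clk)
        if (set) ff_set <= ~ff_clr;  // make the XOR go to 1

    always @(posedge sclk)
        if (clr) ff_clr <= ff_set;   // make the XOR go back to 0

    assign flag = ff_set ^ ff_clr;   // 1 between set and clear
endmodule
```

As the post goes on to say, the cross-domain paths (ff_clr feeding the clk-domain flop and vice versa) still need explicit timing constraints.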

All these will work. All of them will require you to get creative
with your design's timing constraints if you want the tools to really
analyze the path and make _sure_ that it works over
process/temp/voltage variations. Timing analysis and async logic go
hand in hand, and both have the exciting feature that, unlike
synchronous logic, you'll never have a simulation that can tell you
that you've got it right. You just design it very hard, then sit there
digging through the report outputs of your timing analysis to make sure
that it actually got constrained, and that the constraint actually does
what you want, and then you hope that on subsequent recompiles key
signals don't get renamed and break it. Then you too can know the joy
of wasting an hour or two trying to get the tools to work properly on a
chunk of code that's only 20 lines long.

Or you can do it synchronously.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.
 
On Wednesday, October 22, 2014 9:10:41 AM UTC+1, alb wrote:
Hi everyone,

I've recently had to argue why it is not 'sane' to budget 500 hours of
development against 200 of verification.

If you ask the FPGA developer he'd say a factor of 2/3 has to be
considered for verification w.r.t. design (which I tend to agree with).

I'd like to give some grounds to those estimates, so I asked the FPGA
group leader to compare this ratio across several completed projects.
We usually collect lots of data on the amount and type of work we do
every day, and this data can be used to verify the verification effort
w.r.t. the design effort.

His counter-argument is that it is difficult to compare projects due to
their peculiarities, implying that there's very little we can learn
from the past (which I obviously do not buy!).

To the best of your knowledge, is there any source of trusted data that
I can point at? Is there really a ratio that can be 'generally' applied?

Any comment/opinion/pointer is appreciated.

Al

There's a regular industry survey carried out by Mentor that might help (it's a blind study across all industries/users/countries)

http://blogs.mentor.com/verificationhorizons/blog/2013/07/15/part-5-the-2012-wilson-research-group-functional-verification-study/

That found that the % of FPGA project time spent on verification has grown from a mean of 49% in 2007 to 56% in 2012, which indicates that more time is spent doing verification (on average) than design!
Obviously various factors need to be taken into account, such as design size & complexity, amount of reuse, etc., but a figure of 20-30% is low by industry standards and is bordering on wishful thinking (the survey also found that 67% of projects were late!)

Hope that's useful
regards

- Nigel
 
On Wed, 22 Oct 2014 08:10:37 +0000, alb wrote:

Hi everyone,

I've recently had to argue why it is not 'sane' to budget 500 hours of
development against 200 of verification.

If you ask the FPGA developer he'd say a factor of 2/3 has to be
considered for verification w.r.t. design (which I tend to agree with).

I'd like to give some grounds to those estimates, so I asked the FPGA
group leader to compare this ratio across several completed projects.
We usually collect lots of data on the amount and type of work we do
every day, and this data can be used to verify the verification effort
w.r.t. the design effort.

His counter-argument is that it is difficult to compare projects due to
their peculiarities, implying that there's very little we can learn
from the past (which I obviously do not buy!).

To the best of your knowledge, is there any source of trusted data that
I can point at? Is there really a ratio that can be 'generally' applied?

Any comment/opinion/pointer is appreciated.

Al

p.s.: this thread is intentionally crossposted to comp.lang.vhdl and
comp.arch.fpga. Please use the followup-to field in order to avoid
breaking the thread.

The only opinion I can offer is a cynical one: if the PM is crazy enough
not to plan on adequate verification, then he's too crazy to listen to
reason.

Make your point, but don't expect to be listened to this time around --
sometimes when you make these arguments the person who ends up listening
and taking action is a bystander today, but a PM two years from now.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
> Yes, it's fairly easy using xvhdl, xvlog, xelab and xsim as described in UG900.

The problem with command line tools is that once you are in the Vivado GUI, it is no longer possible to reload the files; one has to kill xsim. The overhead of an in-memory project is extremely low, and Vivado knows how to recompile the simulation.
 
Hi everyone,

I've recently had to argue why it is not 'sane' to budget 500 hours of
development against 200 of verification.

I have always heard that one hour of development meant four hours of
verification. That gives you 800 hours of verification against 200 of
development.


Does your boss like respins? Because that is how you get respins.

John Eaton

---------------------------------------
Posted through http://www.FPGARelated.com
 
Hi Nigel,

In article <a57633ed-42b3-42d1-b0a0-68bc2864b181@googlegroups.com> you wrote:
[]
That found that the % of FPGA project time spent on verification has
grown from a mean of 49% in 2007 to 56% in 2012, which indicates that
more time is spent doing verification (on average) than design!

thanks a lot for the pointer. Indeed the data are quite interesting and
it seems the 70/30 ratio is (on average) far from reality. It'd be more
like 55/45, which is the same ratio our client has for software
development (testing vs coding).

Obviously various factors need to be taken into account, such as design
size & complexity, amount of reuse, etc., but a figure of 20-30% is low
by industry standards and is bordering on wishful thinking (the survey
also found that 67% of projects were late!)

It would be nice to understand what made the rest of the projects finish
on time. Interestingly enough, ~40% of the verification effort is
debugging (according to the same study, see part 6:
http://blogs.mentor.com/verificationhorizons/blog/2013/07/22/part-6-the-2012-wilson-research-group-functional-verification-study/)

Interestingly enough, the amount of time spent coding the testbench is
~20%, and a similar amount of time is needed for running the tests. I
guess these metrics could be interesting if applied to our completed
projects in order to point out strengths and weaknesses of our flow.

Additionally, I believe the number of lines of code may be a valuable
metric for the complexity of a project, so we can classify similar
projects together and see where we stand.

Al
 
Hi Tim,

Tim Wescott <seemywebsite@myfooter.really> wrote:
[]
The only opinion I can offer is a cynical one: if the PM is crazy enough
not to plan on adequate verification, then he's too crazy to listen to
reason.

Unfortunately it's not only about planning. Budgets are completely out of
target (systematically!) and the main reason (at least from what I hear)
is that if we budget more we won't get the project. Here in Switzerland
salaries are expensive, as is manufacturing. This burden is hidden in
the offer phase, but then jumps up in the development phase, where you
realize it takes 50% (or more) more time or money to do what you
promised to do.

Make your point, but don't expect to be listened to this time around --
sometimes when you make these arguments the person who ends up listening
and taking action is a bystander today, but a PM two years from now.

I agree, sooner or later somebody will make a difference, if (s)he
doesn't quit too soon. But even in that case (s)he will make the right
decision and my small contribution will finally be rewarded ;-).

Al
 
Hi John,

jt_eaton <84408@embeddedrelated> wrote:
[]
I have always heard that one hour of development meant four hours of
verification. That gives you 800 hours of verification against 200 of
development.

a 20/80 ratio in favour of verification seems a bit exaggerated, but I
can understand that the variance might be important. This is one of the
reasons why the analysis shown in the link posted is lacking a piece of
information.

When it comes to large spreads in the data, the mean is not
representative anymore, or cannot be used to draw too many conclusions.
I'd say that an additional effort could be made to select different
'populations', according to different types of metrics (lines of code,
requirements changes, turnover, ...), and see if there's really any
correlation.

The only effort to differentiate the projects is in the amount of gates,
which does not necessarily equal complexity.

Al
 
On Thu, 23 Oct 2014 07:22:36 +0000, alb wrote:

Hi Tim,

Tim Wescott <seemywebsite@myfooter.really> wrote:
[]
The only opinion I can offer is a cynical one: if the PM is crazy
enough not to plan on adequate verification, then he's too crazy to
listen to reason.

Unfortunately it's not only about planning. Budgets are completely out of
target (systematically!) and the main reason (at least from what I hear)
is that if we budget more we won't get the project. Here in Switzerland
salaries are expensive, as is manufacturing. This burden is hidden in
the offer phase, but then jumps up in the development phase, where you
realize it takes 50% (or more) more time or money to do what you
promised to do.

I used to work at a company like that. Worse, top management practically
insisted that it happen -- if a project manager came to them with a
realistic schedule, they'd say "trim it down to XXX!", but when it came
back trimmed, they'd start up the project while complaining that
engineering always lied about schedules!

So it's not just a Swiss thing.

Make your point, but don't expect to be listened to this time around --
sometimes when you make these arguments the person who ends up
listening and taking action is a bystander today, but a PM two years
from now.

I agree, sooner or later somebody will make a difference, if (s)he
doesn't quit too soon. But even in that case (s)he will make the right
decision and my small contribution will finally be rewarded ;-).

And things will get better. Then some new guy will tell the board a pack
of lies, get hired, and it'll all be in the dumpster again.

Which is why I'm now an independent consultant.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
On Wednesday, October 8, 2014 2:30:54 PM UTC-7, Mike Perkins wrote:
I have started using the TI TUSB1210 which is a USB PHY with a ULPI
interface.

However, I can virtually guarantee that during enumeration, the device
will lock up with DIR permanently DIR high in High Speed mode and
seemingly with the terminating resistor enabled such that it keeps both
D+ and D- low. I can make it happen quite reliably.

I have sent a few messages on the relevant TI forum and despite promises
the TI guys there haven't got back to me even when chased.

Unless people here suggest I persist with this device, can anyone
recommend an alternative USB PHY with a ULPI interface that has less
unintended features?


--
Mike Perkins
Video Solutions Ltd
www.videosolutions.ltd.uk

Have you looked at the Cypress device?
 
In article <caro6kFgjaqU1@mid.individual.net>, alb <al.basili@gmail.com> wrote:
snip

It would be nice to understand what made the rest of the projects finish
on time. Interestingly enough, ~40% of the verification effort is
debugging (according to the same study, see part 6:
http://blogs.mentor.com/verificationhorizons/blog/2013/07/22/part-6-the-2012-wilson-research-group-functional-verification-study/)

Interestingly enough, the amount of time spent coding the testbench is
~20%, and a similar amount of time is needed for running the tests. I
guess these metrics could be interesting if applied to our completed
projects in order to point out strengths and weaknesses of our flow.

Additionally, I believe the number of lines of code may be a valuable
metric for the complexity of a project, so we can classify similar
projects together and see where we stand.

At some sort of user conference - over 10 years ago now, I'm sure - an
engineer presented a "project management" style presentation. Its main
thesis was tracking a project's life through check-ins to the revision
control system. As I recall, he measured two metrics: the number of
differences checked in, and the number of new files checked in.

The paper was purely looking backward at a completed project. But
graphing these two metrics with respect to time surely showed a pretty
good indicator of where a project was: an asymptotic line approaching
(but never reaching) zero.

He highlighted some events on the graph - spikes when a big bug was
found and fixed, and, more interesting to me, the times of the
management 'rah-rah' speeches. You know, "We need to get this stuff
done, put in the hours, it's crunch time..."

Those dates showed no noticeable change in the graphs' progressions...

Thought it was funny, and interesting.

I don't have a reference, and the details are murky. But it was a fun
presentation for those (engineers) in the audience, on a normally
(IMHO) dull subject for engineers. Lots of head nodding from the
audience.

Regards,

Mark
 
On 24/10/2014 00:56, jim.tavacoli@gmail.com wrote:
On Wednesday, October 8, 2014 2:30:54 PM UTC-7, Mike Perkins wrote:
I have started using the TI TUSB1210 which is a USB PHY with a ULPI
interface.

However, I can virtually guarantee that during enumeration, the device
will lock up with DIR permanently DIR high in High Speed mode and
seemingly with the terminating resistor enabled such that it keeps both
D+ and D- low. I can make it happen quite reliably.

I have sent a few messages on the relevant TI forum and despite promises
the TI guys there haven't got back to me even when chased.

Unless people here suggest I persist with this device, can anyone
recommend an alternative USB PHY with a ULPI interface that has less
unintended features?


Have you looked at the Cypress device?

I sort of discounted them as they weren't showing in stock with DigiKey.

Have you used them? Are they reliable?

--
Mike Perkins
Video Solutions Ltd
www.videosolutions.ltd.uk
 
Hi DJ,

DJ Delorie <dj@delorie.com> wrote:
glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
Just wonder, as I haven't noticed it yet in the discussion,
and also IANAL, but how well does GPL work covering hardware?

The GPL doesn't work well on actual hardware (resistors, circuit boards,
fabricated mechanical parts, etc), because the concept of "copying" is
hugely different - software is just data, it can be copied for
effectively free.

I believe you are confusing 'free speech' with 'free beer'. There's no
such concept as 'free beer', and whoever twists the meaning of free
software toward believing it is 'free of cost' not only portrays a
false image (learning to use free software and maintaining it has an
economic cost), but also accepts giving up his/her rights.

Hardware has a real cost per item. IIRC this has
come up in the past and the FSF just isn't interested in trying to make
the GPL apply to hardware, although other groups have made attempts at
"open source hardware" but that's more of a promise than a license.

There are 'open source hardware' projects at a mature stage, ready to
use (see the CERN OHL), and actually already used in production. I wish
one of those guys could chime in here, but I'm not sure they are
frequent users of this group.

Al
 
On Thursday, November 6, 2014 4:57:46 AM UTC-5, rickman wrote:
If the license says you
distribute the source code in the same manner as the compiled code, you
should be able to include it in the internal Flash.

The license doesn't actually say that; earlier posts in this thread were slightly misleading.

The license gives you some options on how to do it, but the gist is that the source has to be made available and transferred to others downstream in a conventional manner. 8-track tape is probably a stretch in this day and age; source buried in flash blocks only accessible via JTAG/BDM is probably out of the question.
 
Hi DJ,

DJ Delorie <dj@delorie.com> wrote:
http://www.gnu.org/licenses/gpl-faq.html#GPLRequireSourcePostedPublic

Don't read the FAQ, read the license itself:

"Conveying under any other circumstances is permitted solely under the
conditions stated below."

I.e. the license imposes conditions, but does not change the license of
the other parts. There may be *other* conditions which you must *also*
meet, based on the *other* licenses, but those conditions are not
changed by also using GPL'd parts.

Quoting the Preamble:

"To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others."

and again:

"...if you distribute copies of such a program, whether gratis or for a
fee, you must pass on to the recipients the same freedoms that you
received. You must make sure that they, too, receive or can get the
source code."

So if you combine a GPL'd work with a proprietary work, the result is
not GPL'd - the result is that you just can't distribute it, since the
licenses have conflicts which you cannot resolve.

Meaning that using a GPL'ed work with a proprietary work is *viable*
only if the proprietary work is licensed under a GPL-compatible license.

But if you release the modified version to the public in some way, the
GPL requires you to make the modified source code available to the
program's users, under the GPL. [...]"

Even this doesn't say that the license of the other parts changes, only
that the distribution must be under the terms required by the GPL, as it
applies to the GPL'd portion.

I believe you are distorting my statements. The terms required by the
GPL do not apply to the GPL'ed portion only, they apply to the entire
work:

"You must license the entire work, as a whole, under this License to
anyone who comes into possession of a copy. This License will therefore
apply, along with any applicable section 7 additional terms, to the
whole of the work, and all its parts, regardless of how they are
packaged"

This is because Xilinx licenses are not 'viral'.

No license is 'viral'. The terms either apply or you don't use it.
If you use multiple licenses, all terms apply.

Licenses like the GPL are called *viral* or *copyleft*, meaning that
they call for distribution under the *very same terms* for any
derivative work.

In the OP's use case, if (s)he uses a piece of GPL'ed code, then in the
event of redistributing the final work, (s)he has to release the final
work under the GPL license. This implies that any IP core license which
is not GPL-compatible cannot be used.

The wording makes the causality vague. I would say "The only way to
legally release a work that includes GPL'd portions, is under the terms
of the GPL." I would not say "if you... then you have to..." because
that implies that you're being forced to do something that you aren't
forced to do.

Quoting RMS (http://www.gnu.org/licenses/rms-why-gplv3.html):

“If you include code under this license in a larger program, the larger
program must be under this license too.”

The license *enforces* the obligation to license a derived work under
the same terms. And that is the reason why GPLv2 and GPLv3 are not
compatible: each would require the larger program to be released under
its own terms, which is not possible.

Al
 
Hi Rick,

rickman <gnuarm@gmail.com> wrote:
[]
If no one can view the flash blocks, then they won't know the IP is in
there either.

from Wikipedia: "In ordinary language, the term crime denotes an
unlawful act punishable by a state".

The simple fact that it is punishable qualifies it as a crime. And
sooner or later someone may have access to those 'blocks' and
legitimately sue you for license infringement.

Al
 
On 11/6/2014 4:28 AM, alb wrote:
Hi Rick,

rickman <gnuarm@gmail.com> wrote:
[]
If no one can view the flash blocks, then they won't know the IP is in
there either.

from Wikipedia: "In ordinary language, the term crime denotes an
unlawful act punishable by a state".

The simple fact that it is punishable qualifies it as a crime. And
sooner or later someone may have access to those 'blocks' and
legitimately sue you for license infringement.

Actually there is no law broken by violating the terms of the license.
So no crime is committed in any event.

This is a licensing issue, a civil matter. If the license says you
distribute the source code in the same manner as the compiled code, you
should be able to include it in the internal Flash. Very easy on a
device that is very possibly running Linux anyway.

--

Rick
 
Hi Rick,

In article <m3fgme$jv2$1@dont-email.me> you wrote:
[]
The simple fact that it is punishable qualifies it as a crime. And
sooner or later someone may have access to those 'blocks' and
legitimately sue you for license infringement.

Actually there is no law broken by violating the terms of the license.
So no crime is committed in any event.
This is a licensing issue, a civil matter.

I'm not sure if license infringement can be qualified as copyright
infringement, but the latter may have criminal provisions. So it can be
a crime. And in 2007 violations of the GPLv2 were claimed by the SFLC,
which filed copyright infringement lawsuits.

If the license says you
distribute the source code in the same manner as the compiled code, you
should be able to include it in the internal Flash. Very easy on a
device that is very possibly running Linux anyway.

No matter how you turn it around, you should allow people to *see* the
source and be able to modify it, no matter which distribution means you
use. If your flash has an image of a GNU/Linux system, it has to have
the sources as well (not very practical for an embedded system with
size constraints).

Al
 
On 06/11/14 11:21, alb wrote:
Hi Rick,

In article <m3fgme$jv2$1@dont-email.me> you wrote:
[]
The simple fact that it is punishable qualifies it as a crime. And
sooner or later someone may have access to those 'blocks' and
legitimately sue you for license infringement.

Actually there is no law broken by violating the terms of the license.
So no crime is committed in any event.
This is a licensing issue, a civil matter.

I'm not sure if license infringement can be qualified as copyright
infringement, but the latter may have criminal provisions. So it can be
a crime. And in 2007 violations of the GPLv2 were claimed by the SFLC,
which filed copyright infringement lawsuits.

The GPL builds on copyright laws, rather than licensing laws. There are
various reasons for this (IANAL) - I think part of it is that a licence
involves an agreement between two parties, while copyright is decided
entirely by the author/owner of the work.

Copyright laws are mostly civil laws - and therefore breaking them is
not a crime, and can lead to fines, compensation suits, and
cease-and-desist orders, but not jail sentences. Copyright infringement
/can/ be a crime if there is significant financial gain in breaking the
terms of the copyright. (So if you copy a film and give it away, you
can be sued for compensation by the copyright owner - but if you sell
lots of copies, you can be jailed.) Breaking "technical restrictions to
enforce copyright" can also be a crime in some countries (like the USA
with the DMCA) - but that does not apply when the source code is
easily available.


Thus GPL abuses will normally be civil law violations, but might be
criminal if the abuser made money while depriving the rightful owner of
the market.

If the license says you
distribute the source code in the same manner as the compiled code, you
should be able to include it in the internal Flash. Very easy on a
device that is very possibly running Linux anyway.

No matter how you turn it around, you should allow people to *see* the
source and be able to modify it, no matter which distribution means you
use. If your flash has an image of a GNU/Linux system, it has to have
the sources as well (not very practical for an embedded system with
size constraints).

Al
 
DJ Delorie <dj@delorie.com> wrote:

(snip)

Yes, but again, it's the designs for the hardware that are shared. You
can't share a resistor across two projects, but you can share the
schematics that include that resistor. Still, despite how easy it is to
share a schematic, and despite a license allowing you to do so, turning
that into hardware is nontrivial.

I was not so long ago wondering about PC board design from verilog.

That is, I could design a multiple FPGA, plus some other external
logic all in verilog, then generate the PC boards to connect them
all together.

Could a verilog PC board description be GPL'ed?

I presume a PC board design itself could be copyrighted,
as an expression of an idea in art, but then again someone else
could generate a different expression of the idea that does
the same thing electrically.

That might make the distinction between hardware and software
a little more obvious than the FPGA case.

-- glen
 
