I'd rather switch than fight!

Andy wrote:
IMHO, they missed the point. Any design that can be completed in a
couple of hours will necessarily favor the language with the least
overhead. Unfortunately, two-hour-solvable designs are not
representative of real life designs, and neither was the contest's
declared winner.
Well, we pretty much know that the number of errors people make in a
programming language depends largely on how much code they have to
write - a language with less overhead and more terseness gets written
faster and ends up with fewer bugs. And the effect is non-linear, i.e.
a program with 10k lines of code will have fewer bugs per 1000 lines
than a program with 100k lines of code. So the larger the project, the
greater the advantage of the terser language.

--
Bernd Paysan
"If you want it done right, you have to do it yourself!"
http://www.jwdt.com/~paysan/
 
On Apr 16, 5:30 am, Symon <symon_bre...@hotmail.com> wrote:

Pat,
If your email client was less agile and performed better 'typing
checking' you wouldn't have sent this blank post.
HTH, Syms. ;-)
Absolutely true!

But it keeps me young trying to keep up with it.

Pat
 
On Apr 14, 10:07 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu>
wrote:
In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
(snip)

People say that strong typing catches bugs, but I've never seen any
real proof of that.  There are all sorts of anecdotal evidence, but
nothing concrete.  Sure, wearing a seat belt helps to save lives, but
at what point do we draw the line?  Should we have four-point
harnesses, helmets, fireproof suits...?

Seatbelts may save lives, but statistically many other safety
improvements don't.  When people know that their car has air bags,
they compensate and drive less safely.  (Corner a little faster, etc.)
Enough to mostly remove the life saving effect of the air bags.
Are you making this up? I have never heard that any of the other
added safety features don't save lives overall. I have heard that
driving a sportier car does allow you to drive more aggressively, but
this is likely not actually the result of any real analysis, but just
an urban myth. Where did you hear that air bags don't save lives
once everything is considered?


It does seem likely that people will let down their guard and
code more sloppily knowing that the compiler will catch errors.
If you can show me something that shows this, fine, but otherwise this
is just speculation.


One of my least favorites is the Java check on variable initialization.
If the compiler can't be sure that it is initialized then it is
a fatal compilation error.  There are just too many cases that
the compiler can't get right.
I saw a warning the other day that my VHDL signal initialization "is
not synthesizable". I checked, and the signal was appropriately
initialized on async reset; the tool was just complaining that I also
used an initializer in the declaration to keep the simulator from
giving me warnings in library functions. You just can't please
everyone!

Then again I had to make a second trip to the customer yesterday
because of an output that got disconnected in a change and I didn't
see the warning in the ocean of warnings and notes that the tools
generate. Then I spent half an hour going through all of it in detail
and found a second disconnected signal. Reminds me of the Apollo moon
landing, where a warning about a loss of sync kept recurring so often
that it overloaded the guidance computer and they had to land
manually. TMI!

Rick
 
On Apr 15, 10:37 am, Patrick Maupin <pmau...@gmail.com> wrote:
On Apr 15, 12:20 am, Matthew Hicks <mdhic...@uiuc.edu> wrote:



In comp.arch.fpga rickman <gnu...@gmail.com> wrote: (snip)

People say that strong typing catches bugs, but I've never seen any
real proof of that.  There are all sorts of anecdotal evidence, but
nothing concrete.  Sure, wearing a seat belt helps to save lives, but
at what point do we draw the line?  Should we have four-point
harnesses, helmets, fireproof suits...?

Seatbelts may save lives, but statistically many other safety
improvements don't.  When people know that their car has air bags,
they compensate and drive less safely.  (Corner a little faster, etc.)
Enough to mostly remove the life saving effect of the air bags.

It does seem likely that people will let down their guard and code
more sloppily knowing that the compiler will catch errors.

One of my least favorites is the Java check on variable initialization.
If the compiler can't be sure that it is initialized then it is
a fatal compilation error.  There are just too many cases that
the compiler can't get right.

Sorry, but I have to call BS on this whole line of "logic".  Unless you can
point to some studies that prove this, my experiences are contrary to your
assertions.  I don't change the way I code when I code in Verilog vs. VHDL
or C vs. Java, the compiler just does a better job of catching my stupid
mistakes, allowing me to get things done faster.

You can "call BS" all you want, but the fact that you don't change the
way you code in Verilog vs. VHDL or C vs. Java indicates that your
experiences are antithetical to mine, so I have to discard your
datapoint.

Regards,
Pat
That is certainly a great way to prove a theory. Toss out every data
point that disagrees with your theory!

Rick
 
On Apr 15, 3:23 pm, Andy <jonesa...@comcast.net> wrote:
The benefits of a "strongly typed" language, with bounds checks, etc.
are somewhat different between the first time you write/use the code,
and the Nth time you reuse and revise it. Strong typing and bounds
checking let you know quickly the possibly hidden side effects of
making changes in the code, especially when it may have been a few
days/weeks/months since the last time you worked with it.

A long time ago there was a famous contest for designing a simple
circuit in Verilog vs. VHDL to see which language was better. The
requirements were provided on paper, and the contestants were given an
hour or two (I don't remember how long, but it was certainly not even a
day); whoever produced the fastest and the smallest correct synthesized
circuits (two winners) won for their chosen language. Verilog won
both, and I don't think VHDL even finished.
Maybe this was repeated, but the first time they tried this *NO ONE*
finished in time, which is likely much more realistic for assignments
in the real world. If you think it will take a couple of
hours, allocate a couple of days!

Rick
 
On Apr 15, 5:48 pm, Patrick Maupin <pmau...@gmail.com> wrote:
On Apr 15, 4:31 pm, Muzaffer Kal <k...@dspia.com> wrote:



On Thu, 15 Apr 2010 14:21:37 -0700 (PDT), Patrick Maupin

<pmau...@gmail.com> wrote:
On Apr 15, 3:12 pm, David Brown <da...@westcontrol.removethisbit.com>
wrote:

Another famous contest involved a C and Ada comparison.  It took the Ada
team more than twice as long as the C team to write their code, but it took
the C team more than ten times as long to debug their code.

Well, this isn't at all the same then.  The Verilog teams got working
designs, and the VHDL teams didn't.

There are two issues to consider. One is the relative times of writing
the code vs. debugging it, i.e. if writing took 5 hours and debugging 10
minutes (unlikely) then C still wins. Which brings up the second issue:
it is very likely that the programming contest involved a "larger"
design to be finished. If I am remembering correctly, the RTL was an
async-reset, synchronously loadable up/down counter, which is a
"smallish" project. If the programming contest involved something more
"involved", it still points to the benefit of strong typing and other
features of Ada/VHDL etc.

But it's mostly academic and FPGA people who think that VHDL might
have any future at all.  See, for example:

http://www.eetimes.com/news/design/columns/industry_gadfly/showArticl...

Regards,
Pat
Hmmm... The date on that article is 04/07/2003 11:28 AM EDT. Seven
years later I still don't see any sign that VHDL is going away... or
did I miss something?

Rick
 
In comp.arch.fpga rickman <gnuarm@gmail.com> wrote:
(snip, I wrote)
Seatbelts may save lives, but statistically many other safety
improvements don't.  When people know that their car has air bags,
they compensate and drive less safely.  (Corner a little faster, etc.)
Enough to mostly remove the life saving effect of the air bags.

Are you making this up? I have never heard that any of the other
added safety features don't save lives overall. I have heard that
driving a sportier car does allow you to drive more aggressively, but
this is likely not actually the result of any real analysis, but just
an urban myth. Where did you hear that air bags don't save lives
once everything is considered?
I believe that they still do save lives, but by a smaller
factor than one might expect. I believe the one that I saw
was not quoting air bags, but anti-lock brakes.

The case for air bags was mentioned by someone else -- that some
believe that they don't need seat belts if they have air bags.
Without seat belts, though, you can be too close to the air bag
when it deploys, and get hurt by the air bag itself. For that
reason, they now use slower air bags than they used to.

The action of anti-lock brakes has a more immediate feel, and it
seems likely that many drivers will take that into account. I believe
that there is still a net gain, but much smaller than would be
expected.

-- glen
 
On Apr 16, 5:38 am, David Brown <da...@westcontrol.removethisbit.com>
wrote:
Secondly, a testbench does not check everything.  It is only as good as
the work put into it, and can be flawed in the same way as the code
itself.  
I was listening to a lecture by a colleague once who indicated that
you don't need to use static timing analysis since you can use a
timing-based simulation! I queried him on this a bit and he seemed to
think that you just needed to have a "good enough" test bench. I was
incredulous about this for a long time. Now I realize he was just a
moron^H^H^H^H^H^H^H ill informed!

Rick
 
In comp.arch.fpga rickman <gnuarm@gmail.com> wrote:
(snip)

I was listening to a lecture by a colleague once who indicated that
you don't need to use static timing analysis since you can use a
timing-based simulation! I queried him on this a bit and he seemed to
think that you just needed to have a "good enough" test bench. I was
incredulous about this for a long time. Now I realize he was just a
moron^H^H^H^H^H^H^H ill informed!
I suppose so, but consider it the other way around.

If your test bench is good enough then it will catch all static
timing failures (eventually). With static timing analysis, there
are many things that you don't need to check with the test bench.

Also, you can't do static timing analysis on the implemented logic.
(That is, given an actual built circuit and a logic analyzer.)

Now, setup and hold violations are easy to test with static
analysis, but much harder to check in actual logic. Among others,
you would want to check all possible clock skew failures, which is
normally not possible. With the right test bench and logic
implementation (including programmable delays on each FF clock)
it might be possible, though.
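
For concreteness, here is the mechanism a timing-based simulation leans
on for setup/hold: a minimal Verilog sketch (module name and the numbers
are made up, not from any real library) of a flop model carrying
specify-block timing checks. A simulator only reports a violation if the
test bench happens to drive the data pin inside the window around the
clock edge - which is why the coverage of the test bench, not the check
itself, is the hard part:

// Sketch of a flop model with timing checks; names and numbers
// are hypothetical.
module dff_t (input clk, input d, output reg q);
    always @(posedge clk)
        q <= d;

    specify
        // Flagged only when the test bench actually exercises
        // marginal timing on this path.
        $setup(d, posedge clk, 1.2);  // 'd' stable 1.2 units before clk edge
        $hold(posedge clk, d, 0.8);   // ...and 0.8 units after it
    endspecify
endmodule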

-- glen
 
glen herrmannsfeldt wrote:
If your test bench is good enough then it will catch all static
timing failures (eventually). With static timing analysis, there
are many things that you don't need to check with the test bench.
And then there are some corner cases where neither static timing analysis
nor digital simulation helps - like signals crossing asynchronous clock
boundaries (there *will* be a setup or hold violation, but a robust clock
boundary crossing circuit will work in practice).

Example: We had a counter running on a different clock (actually a VCO,
where the voltage was an analog input), and to sample it robustly in the
normal digital clock domain, I Gray-encoded it. At a setup or hold
violation there will be one bit that can resolve either way, but it is
only that one bit, and it's either in the state before the
increment or after.
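
To illustrate the technique, here is a minimal Verilog sketch (names
made up, not Bernd's actual code): keep a binary counter in the source
domain, register its Gray encoding so at most one bit changes per
increment, pass that through an ordinary two-flop synchronizer, and
decode back to binary in the sampling domain:

// Sketch only: sample a free-running counter across an asynchronous
// clock boundary by Gray-encoding it first.
module gray_cdc #(parameter W = 8) (
    input          clk_src,    // counter clock (e.g. from the VCO)
    input          clk_dst,    // sampling clock domain
    output [W-1:0] count_dst   // sampled count, binary, clk_dst domain
);
    reg  [W-1:0] bin_src  = 0;
    reg  [W-1:0] gray_src = 0;
    wire [W-1:0] bin_next = bin_src + 1'b1;

    // Source domain: binary counter, with its Gray encoding registered
    // so exactly one registered bit changes per increment.
    always @(posedge clk_src) begin
        bin_src  <= bin_next;
        gray_src <= bin_next ^ (bin_next >> 1);  // binary -> Gray
    end

    // Destination domain: two-flop synchronizer. At worst one bit is
    // caught mid-transition, so the decoded count is off by one step.
    reg [W-1:0] sync1 = 0, sync2 = 0;
    always @(posedge clk_dst) begin
        sync1 <= gray_src;
        sync2 <= sync1;
    end

    // Gray -> binary: bin[i] is the XOR of gray[W-1:i].
    function [W-1:0] gray2bin(input [W-1:0] g);
        integer i;
        begin
            gray2bin[W-1] = g[W-1];
            for (i = W-2; i >= 0; i = i - 1)
                gray2bin[i] = gray2bin[i+1] ^ g[i];
        end
    endfunction

    assign count_dst = gray2bin(sync2);
endmodule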

--
Bernd Paysan
"If you want it done right, you have to do it yourself!"
http://www.jwdt.com/~paysan/
 
On Apr 16, 12:58 pm, rickman <gnu...@gmail.com> wrote:

That is certainly a great way to prove a theory.  Toss out every data
point that disagrees with your theory!
Well, I don't really need others to agree with my theory, so if that's
how it's viewed, so be it. Nonetheless, I view it as tossing out data
that was taken under different conditions than the ones I live under.
Although the basics don't change (C, C++, Java, Verilog, VHDL are all
Turing-complete, as are my workstation and the embedded systems I
sometimes program on), the details can make things quantitatively
different enough that they actually appear to be qualitatively
different. It's like quantum mechanics vs. Newtonian physics.

For example, on my desktop, if I'm solving an engineering problem, I
might throw Python and numpy, or matlab, and gobs of RAM and CPU at
it. On a 20 MIPS, fixed point, low-precision embedded system with a
total of 128K memory, I don't handle the problem the same way.

I find the same with language differences. I assumed your complaint
when you started this thread was that a particular language was
*forcing* you into a paradigm you felt might be sub-optimal. My
opinion is that, even when languages don't *force* you into a
particular paradigm, there is an impedance match between coding style
and language that you ignore at the peril of your own frustration.

So when somebody says "I don't change the way I code when I code in
Verilog vs. VHDL or C vs. Java, the compiler just does a better job of
catching my stupid mistakes, allowing me to get things done faster," I
just can't even *relate* to that viewpoint. It is that of an alien
from a different universe, so it has no bearing on my day-to-day
existence.

Regards,
Pat
 
On Apr 16, 1:25 pm, rickman <gnu...@gmail.com> wrote:

Hmmm...  The date on that article is 04/07/2003 11:28 AM EDT.  Seven
years later I still don't see any sign that VHDL is going away... or
did I miss something?
True, but you also have to remember that in the early 90s all the
industry pundits thought Verilog was dead...
 
On Apr 16, 2:45 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
(snip)

I was listening to a lecture by a colleague once who indicated that
you don't need to use static timing analysis since you can use a
timing-based simulation!  I queried him on this a bit and he seemed to
think that you just needed to have a "good enough" test bench.  I was
incredulous about this for a long time.  Now I realize he was just a
moron^H^H^H^H^H^H^H ill informed!

I suppose so, but consider it the other way around.

If your test bench is good enough then it will catch all static
timing failures (eventually).  With static timing analysis, there
are many things that you don't need to check with the test bench.
I don't follow what you are saying. This first sentence seems to be
saying that a timing simulation *is* a good place to find timing
problems, or are you talking about real world test benches? The point
is that static timing is enough to catch all timing failures given
that your timing constraints cover the design properly... and I agree
that is a big given. Your second sentence seems to be agreeing with
my previous statement.


Also, you can't do static timing analysis on the implemented logic.
(That is, given an actual built circuit and a logic analyzer.)
So?


Now, setup and hold violations are easy to test with static
analysis, but much harder to check in actual logic.  Among others,
you would want to check all possible clock skew failures, which is
normally not possible.  With the right test bench and logic
implementation (including programmable delays on each FF clock)
it might be possible, though.
In twenty years of designing with FPGAs I have never found a clock
skew problem. I always write my code to allow the clock trees to
deliver the clocks and I believe the tools guarantee that there will
not be a skew problem. Static timing actually does cover clock skew,
at least the tools I use.

BTW, how do you design a "right test bench"? Static timing analysis
will at least give you the coverage level although one of my
complaints is that they don't provide any tools for analyzing if your
constraints are correct. But I have no idea how to verify that my
test bench is testing the timing adequately.

Rick
 
In comp.arch.fpga rickman <gnuarm@gmail.com> wrote:
(snip on test benches)

I suppose so, but consider it the other way around.

If your test bench is good enough then it will catch all static
timing failures (eventually).  With static timing analysis, there
are many things that you don't need to check with the test bench.

I don't follow what you are saying. This first sentence seems to be
saying that a timing simulation *is* a good place to find timing
problems, or are you talking about real world test benches? The point
is that static timing is enough to catch all timing failures given
that your timing constraints cover the design properly... and I agree
that is a big given. Your second sentence seems to be agreeing with
my previous statement.
Yes, I was describing real world (hardware) test benches.

Depending on how close you are to a setup/hold violation,
it may take a long time for a failure to actually occur.

Also, you can't do static timing analysis on the implemented logic.
(That is, given an actual built circuit and a logic analyzer.)

So?

Now, setup and hold violations are easy to test with static
analysis, but much harder to check in actual logic.  Among others,
you would want to check all possible clock skew failures, which is
normally not possible.  With the right test bench and logic
implementation (including programmable delays on each FF clock)
it might be possible, though.

In twenty years of designing with FPGAs I have never found a clock
skew problem. I always write my code to allow the clock trees to
deliver the clocks and I believe the tools guarantee that there will
not be a skew problem. Static timing actually does cover clock skew,
at least the tools I use.
Yes, I was trying to cover the case of not using static timing
analysis but only testing actual hardware. For ASICs, it is
usually necessary to test the actual chips, though they should
have already passed static timing.

BTW, how do you design a "right test bench"? Static timing analysis
will at least give you the coverage level although one of my
complaints is that they don't provide any tools for analyzing if your
constraints are correct. But I have no idea how to verify that my
test bench is testing the timing adequately.
If you only have one clock, it isn't so hard. As you add more,
with different frequencies and/or phases, it gets much harder,
I agree. It would be nice to get as much help as possible
from the tools.

-- glen
 
Andrew FPGA wrote:

Interesting that in the discussion on MyHdl/testbenches, no-one raised
SystemVerilog. SystemVerilog raises the level of abstraction (like
MyHdl), but more importantly it introduces constrained random
verification. For writing testbenches, SV is a better choice than
MyHdl/VHDL/Verilog, assuming tool availability.
And assuming availability of financial means to buy licenses.
And assuming availability of time to rewrite existing verification
components, all of course written in VHDL.
And assuming availability of time to learn SV.
And assuming availability of time to learn OVM.
And ...

Oh man, is it because of the seasonal change that I'm feeling so tired, or
is it something else? ;-)

It would seem that SV does not bring much to the table in terms of RTL
design - it's just a catch-up to get Verilog up to the capabilities that
VHDL already has.
Indeed.

I agree that SV seems to give the most room for growth on the
verification side. VHDL is becoming too restrictive when you want to
create really reusable verification parts (reuse verification code from
block level at chip level). More often than not, the language is
working against you in that case. Most of the time because it is
strongly typed. In general I prefer strongly typed over weakly typed.
But sometimes it just gets in your way.

For design I too do not see much advantage of SV over VHDL, especially
when you are already using VHDL. So a mix would be preferable: VHDL for
design, SV/OVM for verification.

--
Paul Uiterlinden
www.aimvalley.nl
e-mail address: remove the not.
 
On Apr 17, 7:17 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
In comp.arch.fpga rickman <gnu...@gmail.com> wrote:
(snip on test benches)

I suppose so, but consider it the other way around.
If your test bench is good enough then it will catch all static
timing failures (eventually).  With static timing analysis, there
are many things that you don't need to check with the test bench.
I don't follow what you are saying.  This first sentence seems to be
saying that a timing simulation *is* a good place to find timing
problems, or are you talking about real world test benches?  The point
is that static timing is enough to catch all timing failures given
that your timing constraints cover the design properly... and I agree
that is a big given.  Your second sentence seems to be agreeing with
my previous statement.

Yes, I was describing real world (hardware) test benches.

Depending on how close you are to a setup/hold violation,
it may take a long time for a failure to actually occur.
That is the point. Finding timing violations in a simulation is hard;
finding them in physical hardware is not possible to do with any
certainty. A timing violation depends on the actual delays on a chip
and that will vary with temperature, power supply voltage and process
variations between chips. I had to work on a problem design once
because the timing analyzer did not work or the constraints did not
cover (I firmly believe it was the tools, not the constraints since it
failed on a number of different designs). We tried finding the chip
that failed at the lowest temperature and then used that at an
elevated temperature for our "final" timing verification. Even with
that, I had little confidence that the design would never have a
problem from timing. Of course on top of that the chip was being used
at 90% capacity. This design is the reason I don't work for that
company anymore. The section head knew about all of these problems
before he assigned the task and then expected us to work 70 hour work
weeks. At least we got them to buy us $100 worth of dinner each
evening!

The point is that if you don't do static timing analysis (or have an
analyzer that is broken) timing verification is nearly impossible.


Also, you can't do static timing analysis on the implemented logic.
(That is, given an actual built circuit and a logic analyzer.)
So?
Now, setup and hold violations are easy to test with static
analysis, but much harder to check in actual logic.  Among others,
you would want to check all possible clock skew failures, which is
normally not possible.  With the right test bench and logic
implementation (including programmable delays on each FF clock)
it might be possible, though.
In twenty years of designing with FPGAs I have never found a clock
skew problem.  I always write my code to allow the clock trees to
deliver the clocks and I believe the tools guarantee that there will
not be a skew problem.  Static timing actually does cover clock skew,
at least the tools I use.

Yes, I was trying to cover the case of not using static timing
analysis but only testing actual hardware.  For ASICs, it is
usually necessary to test the actual chips, though they should
have already passed static timing.  
If you find a timing bug in the ASIC chip, isn't that a little too
late? Do you test at elevated temperature? Do you generate special
test vectors? How is this different from just testing the logic?


BTW, how do you design a "right test bench"?  Static timing analysis
will at least give you the coverage level although one of my
complaints is that they don't provide any tools for analyzing if your
constraints are correct.  But I have no idea how to verify that my
test bench is testing the timing adequately.

If you only have one clock, it isn't so hard.  As you add more,
with different frequencies and/or phases, it gets much harder,
I agree.  It would be nice to get as much help as possible
from the tools.
The number of clocks is irrelevant. I don't consider timing issues of
crossing clock domains to be "timing" problems. There you can only
solve the problem with proper logic design, so it is a logic
problem.

Rick
 
On Apr 16, 4:38 am, David Brown <da...@westcontrol.removethisbit.com>
wrote:

The old joke about Ada is that when you get your code to compile, it's
ready to ship.  I certainly wouldn't go that far, but testing is
something you do in cooperation with static checking, not as an alternative.
GOOD static checking tools are great (and IMHO part of a testbench).
I certainly hope you're not trying to imply that the typechecking
built into VHDL is a substitute for a good model checker!

Regards,
Pat
 
On Apr 10, 8:21 pm, Jan Decaluwe <jandecal...@gmail.com> wrote:
On Apr 9, 6:53 pm, Patrick Maupin <pmau...@gmail.com> wrote:



On Apr 9, 9:07 am, rickman <gnu...@gmail.com> wrote:

I think I have about had it with VHDL.  I've been using the
numeric_std library and eventually learned how to get around the
issues created by strong typing although it can be very arcane at
times.  I have read about a few suggestions people are making to help
with some aspects of the language, like a selection operator like
Verilog has.  But it just seems like I am always fighting some aspect
of the VHDL language.

I guess part of my frustration is that I have yet to see where strong
typing has made a real difference in my work... at least an
improvement.  My customer uses Verilog and has mentioned several times
how he had tried using VHDL and found it too arcane to bother with.
He works on a much more practical level than I often do and it seems
to work well for him.

One of my goals over the summer is to teach myself Verilog so that I
can use it as well as I currently use VHDL.  Then I can make a fully
informed decision about which I will continue to use.  I'd appreciate
pointers on good references, web or printed.

Without starting a major argument, anyone care to share their feelings
on the differences in the two languages?

Rick

The best online references are the Sutherland Verilog references.
There is an online HTML reference for Verilog 95 (excellent), and a
PDF for Verilog 2001 (good):

http://www.sutherland-hdl.com/online_verilog_ref_guide/vlog_ref_top.h.......

Cliff Cummings has a lot of good papers on Verilog at his site:

http://sunburst-design.com/papers/

In particular, if you read and carefully grok his paper about non-
blocking vs. blocking assignments, you will be well on your way to
being a Verilog wizard:

http://sunburst-design.com/papers/CummingsSNUG2000SJ_NBA.pdf

The infamous Guideline #5 bans variable semantics from always blocks
with sequential logic. It must be the Worst Guideline ever for RTL
designers.
The result is not wizardry but ignorance.

How are we supposed to "raise the abstraction level" if Verilog RTL
designers can't even use variables?
I didn't notice this post until today. I think you are completely
misreading the guidelines if you think they mean "Verilog RTL
designers can't even use variables"

Regards,
Pat
 
In comp.arch.fpga rickman <gnuarm@gmail.com> wrote:
On Apr 17, 7:17 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
(snip on test benches)

Yes, I was describing real world (hardware) test benches.

Depending on how close you are to a setup/hold violation,
it may take a long time for a failure to actually occur.

That is the point. Finding timing violations in a simulation is hard;
finding them in physical hardware is not possible to do with any
certainty. A timing violation depends on the actual delays on a chip
and that will vary with temperature, power supply voltage and process
variations between chips.
But such tests have to be done for ASICs, and for all other chips, as
part of the fabrication process. For FPGAs you mostly don't
have to do this, relying on the specifications and on the chips
having been tested appropriately in the factory.

I had to work on a problem design once
because the timing analyzer did not work or the constraints did not
cover (I firmly believe it was the tools, not the constraints since it
failed on a number of different designs). We tried finding the chip
that failed at the lowest temperature and then used that at an
elevated temperature for our "final" timing verification. Even with
that, I had little confidence that the design would never have a
problem from timing. Of course on top of that the chip was being used
at 90% capacity. This design is the reason I don't work for that
company anymore. The section head knew about all of these problems
before he assigned the task and then expected us to work 70 hour work
weeks. At least we got them to buy us $100 worth of dinner each
evening!
One that I worked with, though not at all at that level, was
a programmable ASIC (for a systolic array processor). For some
reason that I never learned, the timing was just a little bit off
with regard to writes to the internal RAM. The solution was to use
two successive writes, which seemed to work. In the usual operation
mode, the RAM was initialized once, so the extra cycle wasn't much
of a problem. There were also some modes where the RAM had to
be written while processing data, such that the extra cycle meant
that the processor ran that much slower.

The point is that if you don't do static timing analysis (or have an
analyzer that is broken) timing verification is nearly impossible.
And even if you do, the device might still have timing problems.

(snip)
Yes, I was trying to cover the case of not using static timing
analysis but only testing actual hardware.  For ASICs, it is
usually necessary to test the actual chips, though they should
have already passed static timing.

If you find a timing bug in the ASIC chip, isn't that a little too
late? Do you test at elevated temperature? Do you generate special
test vectors? How is this different from just testing the logic?
It might be that it works at a lower clock rate, or other workarounds
can be used. Yes, it is part of testing the logic.

(snip)

If you only have one clock, it isn't so hard.  As you add more,
with different frequencies and/or phases, it gets much harder,
I agree.  It would be nice to get as much help as possible
from the tools.

The number of clocks is irrelevant. I don't consider timing issues of
crossing clock domains to be "timing" problems. There you can only
solve the problem with proper logic design, so it is a logic
problem.
Yes, there is nothing to do about asynchronous clocks. It just has
to work in all cases. But in the case of supposedly related
clocks, you have to verify it. There are designs that have one
clock a multiple of the other clock frequency, or multiple phases
with specified timing relationship. Or even single clocks with
specified duty cycle. (I still remember the 8086 with its 33% duty
cycle clock.)

With one clock you can run combinations of voltage, temperature,
and clock rate, not so hard but still a lot of combinations.
With related clocks, you have to verify that the timing between
the clocks works.

-- glen
 
On Apr 20, 11:46 pm, Patrick Maupin <pmau...@gmail.com> wrote:
On Apr 10, 8:21 pm, Jan Decaluwe <jandecal...@gmail.com> wrote:



On Apr 9, 6:53 pm, Patrick Maupin <pmau...@gmail.com> wrote:

On Apr 9, 9:07 am, rickman <gnu...@gmail.com> wrote:

I think I have about had it with VHDL.  I've been using the
numeric_std library and eventually learned how to get around the
issues created by strong typing although it can be very arcane at
times.  I have read about a few suggestions people are making to help
with some aspects of the language, like a selection operator like
Verilog has.  But it just seems like I am always fighting some aspect
of the VHDL language.

I guess part of my frustration is that I have yet to see where strong
typing has made a real difference in my work... at least an
improvement.  My customer uses Verilog and has mentioned several times
how he had tried using VHDL and found it too arcane to bother with.
He works on a much more practical level than I often do and it seems
to work well for him.

One of my goals over the summer is to teach myself Verilog so that I
can use it as well as I currently use VHDL.  Then I can make a fully
informed decision about which I will continue to use.  I'd appreciate
pointers on good references, web or printed.

Without starting a major argument, anyone care to share their feelings
on the differences in the two languages?

Rick

The best online references are the Sutherland Verilog references.
There is an online HTML reference for Verilog 95 (excellent), and a
PDF for Verilog 2001 (good):

http://www.sutherland-hdl.com/online_verilog_ref_guide/vlog_ref_top.h.......

Cliff Cummings has a lot of good papers on Verilog at his site:

http://sunburst-design.com/papers/

In particular, if you read and carefully grok his paper about non-
blocking vs. blocking assignments, you will be well on your way to
being a Verilog wizard:

http://sunburst-design.com/papers/CummingsSNUG2000SJ_NBA.pdf

The infamous Guideline #5 bans variable semantics from always blocks
with sequential logic. It must be the Worst Guideline ever for RTL
designers.
The result is not wizardry but ignorance.

How are we supposed to "raise the abstraction level" if Verilog RTL
designers can't even use variables?

I didn't notice this post until today.  I think you are completely
misreading the guidelines if you think they mean "Verilog RTL
designers can't even use variables"
I use that line as a shorthand for "Guideline #5 combined with
Guideline #1, if taken seriously, forbids the use of traditional
variable semantics provided by blocking assignments, in the
context of a clocked always block".

No matter how absurd I hope this sounds to you, that's really
what it says.
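
To make the objection concrete, here is a sketch (hypothetical module,
not from the thread) of the style that combination of guidelines rules
out: an intermediate value computed with a blocking assignment - plain
variable semantics - feeding a registered output via a nonblocking
assignment, all in one clocked always block. It is legal, race-free
(the variable is read only after being written in the same block), and
synthesizable, yet Guideline #5 bans the mix:

// Sketch only: accumulator that clips at a threshold (reset omitted).
module clipping_acc #(parameter W = 8, LIMIT = 200) (
    input              clk,
    input      [W-1:0] din,
    output reg [W-1:0] acc
);
    reg [W:0] sum;  // one bit wider; a local intermediate "variable"

    always @(posedge clk) begin
        sum = acc + din;          // blocking: value used immediately below
        if (sum > LIMIT)
            acc <= LIMIT;         // nonblocking for the registered output
        else
            acc <= sum[W-1:0];
    end
endmodule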

Jan
 
