I'd rather switch than fight!

On Apr 20, 5:14 pm, Jan Decaluwe <j...@jandecaluwe.com> wrote:
On Apr 20, 11:46 pm, Patrick Maupin <pmau...@gmail.com> wrote:
http://sunburst-design.com/papers/CummingsSNUG2000SJ_NBA.pdf

The infamous Guideline #5 bans variable semantics from always blocks
with sequential logic. It must be the Worst Guideline ever for RTL
designers.
The result is not wizardry but ignorance.

How are we supposed to "raise the abstraction level" if Verilog RTL
designers can't even use variables?

I didn't notice this post until today.  I think you are completely
misreading the guidelines if you think they mean "Verilog RTL
designers can't even use variables"

I use that line as a shorthand for "Guideline #5 combined with
Guideline #1, if taken seriously, forbids the use of traditional
variable semantics provided by blocking assignments, in the
context of a clocked always block".

No matter how absurd I hope this sounds to you, that's really
what it says.
Well, the guidelines absolutely _do_ "forbid the use of traditional
variable semantics provided by blocking assignments, in the context of
a clocked always block", but I don't think that translates into
"Verilog RTL designers can't even use variables."

What it *does* translate into is "The cleanest way to write Verilog is
somewhat verbose, in that it requires you to separate your
combinatorial logic from your sequential logic."

Basically, this method of thinking/coding requires two variables for
each sequential variable. It's really handy to have a nice naming
convention, like next_x is the variable that will be placed into x on
the next clock.

So you have your definitions:

reg       y,  next_y;
reg [5:0] x,  next_x;

and your combinatorial block:

always @* begin
    next_x = x + 1;
    next_y = x == 0;
end

and your sequential block:

always @(posedge clk or negedge rstn)
    if (!rstn) begin
        x <= 0;
        y <= 0;
    end else begin
        x <= next_x;
        y <= next_y;
    end

The declaration of registers and the sequential block both become,
pretty much, boilerplate code with this method of coding -- all the
action happens in the combinatorial block. I am actually, slowly, in
my spare time, working on a project that will create a lot of the
boilerplate for coding in this method.

How does this help? It's more about a human's ability to mentally
manage complexity than anything else. In the sequential block, every
line must be 'xxx' <= next_'xxx'. Very easy to verify. In the
combinatorial block, 'next_' must appear on the lhs, and only on the
lhs, of any assignment involving a sequential variable. Nothing keeps
you from using if, case statements, etc. in the combinatorial block,
and it's very easy to think about, because you see two variables
simultaneously -- what is 'x' right now, and what will 'x' be after
the next clock.
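
For instance, a slightly bigger combinatorial block in this style might
look something like this (a minimal sketch with made-up signals --
'start', 'done' and the state encoding are just for illustration, not
from any particular design):

reg [1:0] state, next_state;
reg       busy,  next_busy;

always @* begin
    // defaults: hold the current values
    next_state = state;
    next_busy  = busy;
    case (state)
        2'd0: if (start) begin
                  next_state = 2'd1;
                  next_busy  = 1'b1;
              end
        2'd1: if (done) begin
                  next_state = 2'd0;
                  next_busy  = 1'b0;
              end
        default: next_state = 2'd0;
    endcase
end

always @(posedge clk or negedge rstn)
    if (!rstn) begin
        state <= 2'd0;
        busy  <= 1'b0;
    end else begin
        state <= next_state;
        busy  <= next_busy;
    end

All the decisions live in the case statement; the clocked block stays
pure boilerplate.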

I have worked at several companies and with several individuals where
this coding style is used. I did not invent it; as far as I know it
was probably invented at multiple places independently. It works
quite well, but is, as I mentioned, a bit verbose. There is no reason
to use it for extremely simple modules, but OTOH, the breakover point
where it is better to use it comes much sooner than you might think.

Regards,
Pat
 
The point is that if you don't do static timing analysis (or have an
analyzer that is broken) timing verification is nearly impossible.

And even if you do, the device might still have timing problems.

Can you expand on this Glen?

As I have always understood it one of the bedrocks of FPGA design is that
when it's passed a properly constrained static timing analysis an FPGA design
will always work (from a timing point of view).



Nial.
 
In comp.arch.fpga Nial Stewart <nial*REMOVE_THIS*@nialstewartdevelopments.co.uk> wrote:
The point is that if you don't do static timing analysis (or have an
analyzer that is broken) timing verification is nearly impossible.

And even if you do, the device might still have timing problems.

Can you expand on this Glen?

As I have always understood it one of the bedrocks of FPGA design
is that when it's passed a properly constrained static timing
analysis an FPGA design will always work (from a timing point of view).
Well, some of the comments were regarding ASIC design, where
things aren't so sure. For FPGA designs, there is, as you say,
"properly constrained" which isn't true for all design and tool
combinations. One that I have heard of, though haven't actually
tried, is having a logic block where the delay is greater than
one clock cycle, but less than two. Maybe some tools can do that,
but I don't believe that all can.

-- glen
 
On Apr 21, 12:38 am, Patrick Maupin <pmau...@gmail.com> wrote:
On Apr 20, 5:14 pm, Jan Decaluwe <j...@jandecaluwe.com> wrote:



On Apr 20, 11:46 pm, Patrick Maupin <pmau...@gmail.com> wrote:
http://sunburst-design.com/papers/CummingsSNUG2000SJ_NBA.pdf

The infamous Guideline #5 bans variable semantics from always blocks
with sequential logic. It must be the Worst Guideline ever for RTL
designers.
The result is not wizardry but ignorance.

How are we supposed to "raise the abstraction level" if Verilog RTL
designers can't even use variables?

I didn't notice this post until today.  I think you are completely
misreading the guidelines if you think they mean "Verilog RTL
designers can't even use variables"

I use that line as a shorthand for "Guideline #5 combined with
Guideline #1, if taken seriously, forbids the use of traditional
variable semantics provided by blocking assignments, in the
context of a clocked always block".

No matter how absurd I hope this sounds to you, that's really
what it says.

Well, the guidelines absolutely _do_ "forbid the use of traditional
variable semantics provided by blocking assignments, in the context of
a clocked always block", but I don't think that translates into
"Verilog RTL designers can't even use variables."

What it *does* translate into is "The cleanest way to write Verilog is
somewhat verbose, in that it requires you to separate your
combinatorial logic from your sequential logic."

Basically, this method of thinking/coding requires two variables for
each sequential variable.  It's really handy to have a nice naming
convention, like next_x is the variable that will be placed into x on
the next clock.

So you have your definitions:

reg       y,  next_y;
reg [5:0] x,  next_x;

and your combinatorial block:

always @* begin
    next_x = x + 1;
    next_y = x == 0;
end

and your sequential block:

always @(posedge clk or negedge rstn)
    if (!rstn) begin
        x <= 0;
        y <= 0;
    end else begin
        x <= next_x;
        y <= next_y;
    end

The declaration of registers and the sequential block both become,
pretty much, boilerplate code with this method of coding -- all the
action happens in the combinatorial block.  I am actually, slowly, in
my spare time, working on a project that will create a lot of the
boilerplate for coding in this method.

How does this help?  It's more about a human's ability to mentally
manage complexity than anything else.  In the sequential block, every
line must be 'xxx' <= next_'xxx'.  Very easy to verify.  In the
combinatorial block, 'next_' must appear on the lhs, and only on the
lhs, of any assignment involving a sequential variable.  Nothing keeps
you from using if, case statements, etc. in the combinatorial block,
and it's very easy to think about, because you see two variables
simultaneously -- what is 'x' right now, and what will 'x' be after
the next clock.

I have worked at several companies and with several individuals where
this coding style is used.  I did not invent it; as far as I know it
was probably invented at multiple places independently.  It works
quite well, but is, as I mentioned, a bit verbose.  There is no reason
to use it for extremely simple modules, but OTOH, the breakover point
where it is better to use it comes much sooner than you might think.
Thanks for explaining your coding style details, that's much
more enlightening than philosophical discussions.

I stand by my quote. The context was clearly "using variables to
raise the abstraction level". That's not what this does.

Your coding style provides a very verbose workaround for temporary
variables. I just can't imagine this is how you do test benches, that
are presumably much more complex than your RTL code. Presumably
there you use temporary variables directly where you need them without
great difficulty. Why would it have to be so different for
synthesizable RTL?

You refer to a "mental model" to manage complexity. To your credit,
you provide an argument, something the original author of
guideline #5 never did. However, I find it dubious. To manage
complexity, I don't need to see the hardware registers, complete with
Q and D, so explicitly in the code. I think I understand pretty well
what kind of coding styles are efficiently supported by synthesis
tools. Given this, I try to write the code itself as clearly as
possible.

Most importantly: your coding style doesn't support non-temporary
variables. In other words, register inferencing from variables is not
supported and therefore ignored as a technique. In this sense, this is
actually a good illustration of the point I'm trying to make.

I happen to think that register inferencing from variables is an
essential tool. It raises the abstraction level just one notch. The
registers are not glancing at you from the code (although
unambiguously defined) but in return your coding style can be
much more expressive.
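
A minimal sketch of what I mean (my own made-up example with a
hypothetical 'count', 'tick' and 'rst', not code from the article):

reg [3:0] count;
reg       tick;

always @(posedge clk) begin
    if (rst) begin
        count = 4'd0;
        tick <= 1'b0;
    end else if (count == 4'd9) begin  // reads the registered value
        count = 4'd0;                  // blocking: variable semantics
        tick <= 1'b1;
    end else begin
        count = count + 4'd1;
        tick <= 1'b0;
    end
end

Synthesis infers a flip-flop for count because its old value is read
before it is written; there is no separate next_count signal anywhere.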

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 
On Apr 21, 8:19 am, Jan Decaluwe <j...@jandecaluwe.com> wrote:

Thanks for explaining your coding style details, that's much
more enlightening than philosophical discussions.

I stand by my quote. The context was clearly "using variables to
raise the abstraction level". That's not what this does.
I didn't show any variables raising the abstraction level, no, but I
showed a context they could be provided in.

Your coding style provides a very verbose workaround for temporary
variables. I just can't imagine this is how you do test benches, that
are presumably much more complex than your RTL code. Presumably
there you use temporary variables directly where you need them without
great difficulty. Why would it have to be so different for
synthesizable RTL?
You're right. Testbenches do not suffer from this limitation. But,
in point of fact, I can use any sort of logic in my testbench. I use
constructs all the time that aren't realistically synthesizable, so
comparing how I code synthesizable RTL vs how I code testbenches would
turn up a lot more differences than just this.

You refer to a "mental model" to manage complexity. To your credit,
you provide an argument, something the original author of
guideline #5 never did. However, I find it dubious. To manage
complexity, I don't need to see the hardware registers, complete with
Q and D, so explicitly in the code. I think I understand pretty well
what kind of coding styles are efficiently supported by synthesis
tools. Given this, I try to write the code itself as clearly as
possible.
Yes, but when you use if/else, or case statements, or other complex
structures, it is easy to get lost. Humans can only hold a very few
things in their minds at a time, and this is a powerful tool. As I
said, I certainly did not invent this style, but I personally know
dozens of people who use it, and I have personally introduced it to at
least 3 people, and they all find it extremely useful.

Most importantly: your coding style doesn't support non-temporary
variables. In other words, register inferencing from variables is not
supported and therefore ignored as a technique. In this sense, this is
actually a good illustration of the point I'm trying to make.
Well, it may be a good illustration to you, but now you're waxing
philosophical again. Care to show an example (preferably in verilog)
of how not using this coding style supports your preferred technique?

I happen to think that register inferencing from variables is an
essential tool. It raises the abstraction level just one notch. The
registers are not glancing at you from the code (although
unambiguously defined) but in return your coding style can be
much more expressive.
I am actually doing something similar, I think, in my verilog
automagic boilerplate code, which can determine size and type of
registers in most cases, and automatically declares them.

Regards,
Pat
 
glen herrmannsfeldt wrote:
In comp.arch.fpga rickman <gnuarm@gmail.com> wrote:
On Apr 17, 7:17 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
(snip on test benches)

Yes, I was describing real world (hardware) test benches.

Depending on how close you are to a setup/hold violation,
it may take a long time for a failure to actually occur.

That is the point. Finding timing violations in a simulation is hard,
finding them in physical hardware is not possible to do with any
certainty. A timing violation depends on the actual delays on a chip
and that will vary with temperature, power supply voltage and process
variations between chips.

But they have to be done for ASICs, and all other chips as
part of the fabrication process. For FPGAs you mostly don't
have to do such, relying on the specifications and that the chips
were tested appropriately in the factory.
I don't follow your reasoning. Why is finding timing violations in
ASICs any different from FPGA? If the makers of ASICs can't
characterize their devices well enough for static timing analysis to
find the timing problems then ASIC designers are screwed.


I had to work on a problem design once
because the timing analyzer did not work or the constraints did not
cover (I firmly believe it was the tools, not the constraints since it
failed on a number of different designs). We tried finding the chip
that failed at the lowest temperature and then used that at an
elevated temperature for our "final" timing verification. Even with
that, I had little confidence that the design would never have a
problem from timing. Of course on top of that the chip was being used
at 90% capacity. This design is the reason I don't work for that
company anymore. The section head knew about all of these problems
before he assigned the task and then expected us to work 70 hour work
weeks. At least we got them to buy us $100 worth of dinner each
evening!

One that I worked with, though not at all at that level, was
a programmable ASIC (for a systolic array processor). For some
reason that I never knew, the timing was just a little bit off
on writes to the internal RAM. The solution was to use
two successive writes, which seemed to work. In the usual operation
mode, the RAM was initialized once, so the extra cycle wasn't much
of a problem. There were also some modes where the RAM had to
be written while processing data, such that the extra cycle meant
that the processor ran that much slower.

The point is that if you don't do static timing analysis (or have an
analyzer that is broken) timing verification is nearly impossible.

And even if you do, the device might still have timing problems.
You keep saying that, but you don't explain.

Yes, I was trying to cover the case of not using static timing
analysis but only testing actual hardware. For ASICs, it is
usually necessary to test the actual chips, though they should
have already passed static timing.

If you find a timing bug in the ASIC chip, isn't that a little too
late? Do you test at elevated temperature? Do you generate special
test vectors? How is this different from just testing the logic?

It might be that it works at a lower clock rate, or other workarounds
can be used. Yes, it is part of testing the logic.

(snip)

If you only have one clock, it isn't so hard. As you add more,
with different frequencies and/or phases, it gets much harder,
I agree. It would be nice to get as much help as possible
from the tools.

The number of clocks is irrelevant. I don't consider timing issues of
crossing clock domains to be "timing" problems. There you can only
solve the problem with proper logic design, so it is a logic
problem.

Yes, there is nothing to do about asynchronous clocks. It just has
to work in all cases. But in the case of supposedly related
clocks, you have to verify it. There are designs that have one
clock a multiple of the other clock frequency, or multiple phases
with specified timing relationship. Or even single clocks with
specified duty cycle. (I still remember the 8086 with its 33% duty
cycle clock.)

With one clock you can run combinations of voltage, temperature,
and clock rate, not so hard but still a lot of combinations.
With related clocks, you have to verify that the timing between
the clocks works.
But you can't verify timing by testing. You can never have any level
of certainty that you have tested all the ways the timing can fail.
If the clocks are related, what exactly are you testing, that they
*are* related? Timing is something that has to be correct by
design.

Rick
 
glen herrmannsfeldt wrote:
combinations. One that I have heard of, though haven't actually
tried, is having a logic block where the delay is greater than
one clock cycle, but less than two. Maybe some tools can do that,
but I don't believe that all can.
Just a normal multicycle path; that has been a normal thing in tools for
a long time. At least Altera, Xilinx, Synplify, Primetime and
Precision support it.

--Kim
 
rickman wrote:
But you can't verify timing by testing. You can never have any level
of certainty that you have tested all the ways the timing can fail.
Especially with ASICs you can't verify the design by testing. There are
so many signoff corners and modes in the timing analysis. The old
worst/best case analysis in normal and test modes is long gone. Even
6-corner analysis in 2+ modes is only for low-end processes with big
extra margins. With multiple adjustable internal voltage areas,
power-down areas etc., the analysis is hard even with STA.

--Kim
 
I also use separate sequential and combinatorial always blocks. At
first I felt that I should be able to have just a single sequential
block, but I quickly became accustomed to two blocks and it now feels
natural; I don't think it limits my ability to express my intent at
all. Most of the experienced designers I work with use this style, but
not all of them.
 
Well, some of the comments were regarding ASIC design, where
things aren't so sure. For FPGA designs, there is, as you say,
"properly constrained" which isn't true for all design and tool
combinations. One that I have heard of, though haven't actually
tried, is having a logic block where the delay is greater than
one clock cycle, but less than two. Maybe some tools can do that,
but I don't believe that all can.

As Kim says, multi-cycle paths have been 'constrainable' in any FPGA
tool I have used for as long as I can remember.


Nial.
 
On Apr 21, 4:34 pm, Patrick Maupin <pmau...@gmail.com> wrote:
On Apr 21, 8:19 am, Jan Decaluwe <j...@jandecaluwe.com> wrote:

Your coding style provides a very verbose workaround for temporary
variables. I just can't imagine this is how you do test benches, that
are presumably much more complex than your RTL code. Presumably
there you use temporary variables directly where you need them without
great difficulty. Why would it have to be so different for
synthesizable RTL?

You're right.  Testbenches do not suffer from this limitation.  But,
in point of fact, I can use any sort of logic in my testbench.  I use
constructs all the time that aren't realistically synthesizable, so
comparing how I code synthesizable RTL vs how I code testbenches would
turn up a lot more differences than just this.
As you say, synthesizable RTL has a lot of inherent restrictions.
I just don't see the logic in adding artificial restrictions on top of
those.

Most importantly: your coding style doesn't support non-temporary
variables. In other words, register inferencing from variables is not
supported and therefore ignored as a technique. In this sense, this is
actually a good illustration of the point I'm trying to make.

Well, it may be a good illustration to you, but now you're waxing
philosophical again.  Care to show an example (preferably in verilog)
of how not using this coding style supports your preferred technique?
In my experience, we are talking about a paradigm shift here.
Easy once you "get it", but apparently confusing to many
engineers in the meantime.

Therefore, I now think that a meaningful discussion must be
more elaborate than a typical newsgroup post can bear :)
What I can offer you is a rather lengthy discussion of two
design variants that highlight the issues through their (subtle)
differences. The case is based on a real ambiguity that I once
detected in the Xilinx ISE examples.

Unfortunately, the source code is in Python :) (MyHDL).
However, there is equivalent converted Verilog available
in the article.

http://www.myhdl.org/doku.php/cookbook:jc2

Jan
 
On Apr 22, 1:15 am, Kim Enkovaara <kim.enkova...@iki.fi> wrote:

Especially with ASICs you can't verify the design by testing. There are
so many signoff corners and modes in the timing analysis. The old
worst/best case analysis in normal and test modes is long gone. Even
6-corner analysis in 2+ modes is only for low-end processes with big
extra margins. With multiple adjustable internal voltage areas,
power-down areas etc., the analysis is hard even with STA.
For the record, I agree that lots of static analysis is necessary
(static timing, model checking, etc.) The thesis when I started this
sub-thread is that what the *language* gives you (VHDL vs. verilog) is
such a small subset of possible checking as to be of little use.  I will
now add that it comes at a huge cost (in coding things just right).

Regards,
Pat
 
On Apr 22, 3:44 am, Jan Decaluwe <j...@jandecaluwe.com> wrote:
Unfortunately, the source code is in Python :) (MyHDL).
However, there is equivalent converted Verilog available
in the article.

   http://www.myhdl.org/doku.php/cookbook:jc2
Well, maybe it's so subtle I still don't get it. But it looks like
'run' and 'dir' are what I would consider combinatorial variables, so
I would just stuff them in the combinatorial 'always @*' block. The
only register which would have a corresponding 'next_' is 'q'. In
fact, your whole sequential block could be converted to the
combinatorial block (with the exception of changing '<=' to '=', and
putting 'next_' in front of q on lhs), and the sequential block would
basically be 'q <= next_q'.

Or is there something else you're trying to convey that I'm missing?

Regards,
Pat
 
On Apr 22, 9:03 am, Patrick Maupin <pmau...@gmail.com> wrote:
On Apr 22, 3:44 am, Jan Decaluwe <j...@jandecaluwe.com> wrote:
Unfortunately, the source code is in Python :) (MyHDL).

However, there is equivalent converted Verilog available
in the article.

   http://www.myhdl.org/doku.php/cookbook:jc2

Well, maybe it's so subtle I still don't get it.  But it looks like
'run' and 'dir' are what I would consider combinatorial variables, so
I would just stuff them in the combinatorial 'always @*'  block.  The
only register which would have a corresponding 'next_' is 'q'.  In
fact, your whole sequential block could be converted to the
combinatorial block (with the exception of changing '<=' to '=', and
putting 'next_' in front of q on lhs), and the sequential block would
basically be 'q <= next_q'.

Or is there something else you're trying to convey that I'm missing?

Regards,
Pat
BTW, your myhdl does something similar, in that you use 'q.next'. So,
I really don't understand why you would think it's a terrible thing to
use 'next_q' to mean the same thing.

Regards,
Pat
 
On Apr 22, 4:29 pm, Patrick Maupin <pmau...@gmail.com> wrote:
On Apr 22, 9:03 am, Patrick Maupin <pmau...@gmail.com> wrote:



On Apr 22, 3:44 am, Jan Decaluwe <j...@jandecaluwe.com> wrote:
Unfortunately, the source code is in Python :) (MyHDL).

However, there is equivalent converted Verilog available
in the article.

   http://www.myhdl.org/doku.php/cookbook:jc2

Well, maybe it's so subtle I still don't get it.  But it looks like
'run' and 'dir' are what I would consider combinatorial variables, so
I would just stuff them in the combinatorial 'always @*'  block.  The
only register which would have a corresponding 'next_' is 'q'.  In
fact, your whole sequential block could be converted to the
combinatorial block (with the exception of changing '<=' to '=', and
putting 'next_' in front of q on lhs), and the sequential block would
basically be 'q <= next_q'.

Or is there something else you're trying to convey that I'm missing?

Regards,
Pat

BTW, your myhdl does something similar, in that you use 'q.next'.  So,
I really don't understand why you would think it's a terrible thing to
use 'next_q' to mean the same thing.
The two are unrelated. q.next is MyHDL's way to do
signal (non-blocking) assignment ("<=").

My critique is that you use 2 signals/regs per state register
instead of 1. You could also do that in MyHDL, but I'm not doing
that in the examples I showed you. BTW, it would look like:
next_q.next = q

Jan
 
On Apr 22, 4:03 pm, Patrick Maupin <pmau...@gmail.com> wrote:
On Apr 22, 3:44 am, Jan Decaluwe <j...@jandecaluwe.com> wrote:
Unfortunately, the source code is in Python :) (MyHDL).

However, there is equivalent converted Verilog available
in the article.

   http://www.myhdl.org/doku.php/cookbook:jc2

Well, maybe it's so subtle I still don't get it.  But it looks like
'run' and 'dir' are what I would consider combinatorial variables, so
I would just stuff them in the combinatorial 'always @*'  block.  The
only register which would have a corresponding 'next_' is 'q'.  In
fact, your whole sequential block could be converted to the
combinatorial block (with the exception of changing '<=' to '=', and
putting 'next_' in front of q on lhs), and the sequential block would
basically be 'q <= next_q'.

Or is there something else you're trying to convey that I'm missing?
Yes, the fact that 'run' and 'dir' are state variables. Therefore,
your proposed approach wouldn't work. You have the test vectors, you
can try it yourself.

Quoting from the article:
"""
This example is more subtle and complex than it may seem at first
sight. As said before, variables dir and run are state variables and
will therefore require a flip-flop in an implementation. However, they
are also used “combinatorially”: when they change, they may influence
the counter operation “in the same clock cycle”, that is, before the
flip-flop output changes. This is perfectly fine behavior and no
problem for synthesis tools, but it tends to confuse a lot of
designers.
"""

Jan
 
On Thu, 22 Apr 2010 08:08:59 -0700 (PDT), Jan Decaluwe
<jan@jandecaluwe.com> wrote:

On Apr 22, 4:03 pm, Patrick Maupin <pmau...@gmail.com> wrote:
On Apr 22, 3:44 am, Jan Decaluwe <j...@jandecaluwe.com> wrote:
Unfortunately, the source code is in Python :) (MyHDL).

However, there is equivalent converted Verilog available
in the article.

   http://www.myhdl.org/doku.php/cookbook:jc2

Well, maybe it's so subtle I still don't get it.  But it looks like
'run' and 'dir' are what I would consider combinatorial variables, so
I would just stuff them in the combinatorial 'always @*'  block.  The
only register which would have a corresponding 'next_' is 'q'.  In
fact, your whole sequential block could be converted to the
combinatorial block (with the exception of changing '<=' to '=', and
putting 'next_' in front of q on lhs), and the sequential block would
basically be 'q <= next_q'.

Or is there something else you're trying to convey that I'm missing?

Yes, the fact that 'run' and 'dir' are state variables. Therefore,
your proposed approach wouldn't work. You have the test vectors, you
can try it yourself.

Quoting from the article:
"""
This example is more subtle and complex than it may seem at first
sight. As said before, variables dir and run are state variables and
will therefore require a flip-flop in an implementation. However, they
are also used “combinatorially”: when they change, they may influence
the counter operation “in the same clock cycle”, that is, before the
flip-flop output changes. This is perfectly fine behavior and no
problem for synthesis tools, but it tends to confuse a lot of
designers.
"""
I am not sure who is really confused here. What is suggested in the
above paragraph is not really feasible; assuming by 'dir' one refers
to the output of a flop. It's not possible to use the output of a flop
at the same clock when its input changes (without generating an
intentional hold violation by playing with clock skews). What one can
do is to have a combinational signal dir_d which gets computed from
dir_q and other signals. This dir_d can be used in the same clock
cycle, but dir_q will not be available till the next cycle:

if (left || dir_q) dir_d = 1;
if (right) dir_d = 0;

if (dir_d) do_left;
dir_q <= dir_d;

The problem with the last Verilog block shown is that dir and run are
not flops anymore but combinational signals decoded from goleft and
goright, so the last direction will not be remembered. If the last
direction needs to be remembered, one needs to decode the
'instruction', use the decoded value and remember the decoded value
as above.
--
Muzaffer Kal

DSPIA INC.
ASIC/FPGA Design Services

http://www.dspia.com
 
On Apr 22, 10:08 am, Jan Decaluwe <j...@jandecaluwe.com> wrote:
Or is there something else you're trying to convey that I'm missing?

Yes, the fact that 'run' and 'dir' are state variables. Therefore,
your proposed approach wouldn't work. You have the test vectors, you
can try it yourself.

Quoting from the article:
"""
This example is more subtle and complex than it may seem at first
sight. As said before, variables dir and run are state variables and
will therefore require a flip-flop in an implementation. However, they
are also used “combinatorially”: when they change, they may influence
the counter operation “in the same clock cycle”, that is, before the
flip-flop output changes. This is perfectly fine behavior and no
problem for synthesis tools, but it tends to confuse a lot of
designers.
"""
OK, I admit I didn't read that carefully enough; in fact, I just
glanced at the actual code before my coffee this morning. But you
know what? Where I come from, "subtle" that "tends to confuse a lot
of designers" is just a synonym for "screwed up."

The whole point of the coding style I was describing is to NOT write
stuff that would confuse other designers. After all, in C I can write
perfectly valid code like "0[x] = 3". But just because I can, doesn't
mean it's a good idea.

Regards,
Pat
 
On Apr 22, 9:11 pm, Patrick Maupin <pmau...@gmail.com> wrote:
On Apr 22, 10:08 am, Jan Decaluwe <j...@jandecaluwe.com> wrote:



Or is there something else you're trying to convey that I'm missing?

Yes, the fact that 'run' and 'dir' are state variables. Therefore,
your proposed approach wouldn't work. You have the test vectors, you
can try it yourself.

Quoting from the article:
"""
This example is more subtle and complex than it may seem at first
sight. As said before, variables dir and run are state variables and
will therefore require a flip-flop in an implementation. However, they
are also used “combinatorially”: when they change, they may influence
the counter operation “in the same clock cycle”, that is, before the
flip-flop output changes. This is perfectly fine behavior and no
problem for synthesis tools, but it tends to confuse a lot of
designers.
"""

OK, I admit I didn't read that carefully enough; in fact, I just
glanced at the actual code before my coffee this morning.  But you
know what?  Where I come from, "subtle" that "tends to confuse a lot
of designers" is just a synonym for "screwed up."
Let me put it more correctly. Initially, I think, everyone is confused
about this. I know I was (20 years ago already!). The confusion
is resolved by a new insight: how RTL synthesis *really* works.
Some get it quickly, others need more time.
I think that's a common pattern with new paradigms.

The benefit: a coding technique to solve real problems
significantly more elegantly.

The whole point of the coding style I was describing is to NOT write
stuff that would confuse other designers.  After all, in C I can write
perfectly valid code like "0[x] = 3".  But just because I can, doesn't
mean it's a good idea.
I suggest you try your coding style with my examples. You have the
spec and test vectors. If you find your code much clearer, I don't
have a case (with you) to argue further. Otherwise, you'll remember
me when you start applying this technique in your designs :)

Jan
 
On Apr 22, 3:04 pm, Muzaffer Kal <k...@dspia.com> wrote:
On Thu, 22 Apr 2010 08:08:59 -0700 (PDT), Jan Decaluwe
This example is more subtle and complex than it may seem at first
sight. As said before, variables dir and run are state variables and
will therefore require a flip-flop in an implementation. However, they
are also used “combinatorially”: when they change, they may influence
the counter operation “in the same clock cycle”, that is, before the
flip-flop output changes. This is perfectly fine behavior and no
problem for synthesis tools, but it tends to confuse a lot of
designers.

I am not sure who is really confused here. What is suggested in the
above paragraph is not really feasible; assuming by 'dir' one refers
to the output of a flop. It's not possible to use the output of a flop
at the same clock when its input changes (without generating an
intentional hold violation by playing with clock skews). What one can
do is to have a combinational signal dir_d which gets computed from
dir_q and other signals. This dir_d can be used in the same clock
cycle, but dir_q will not be available till the next cycle:
I don't think Jan is confused. While I haven't actually simulated it,
I have no reason to disbelieve that he has built a flop which feeds
into a combinatorial network, and the only thing here with a name is
the output of the combinatorial network, which also happens to feed
back into the flop.

Verilog will certainly let you do such things. And now (I think) I
understand a bit better what Jan means by "register inferencing" -- in
this case, this capability is manifested by the ability to not give a
name to a register itself, but only to the combinatorial net feeding
the register.

When I first started using Verilog, I would do such things, and
confuse myself and others mightily. So I stand by my opinion that
having two distinct yet related names for the input and output of a
hardware register makes it easier for someone to pick up some code and
conceptualize what is going on and not get it wrong at first glance.
FWIW, in a case like this, rather than having 'next_dir' and 'dir', I
might use 'dir' and 'prev_dir'. So the start of the combinatorial
block could have "dir = prev_dir", followed by conditionally setting
dir as required. Then in the synchronous block, you would have
"prev_dir = dir".

As I originally mentioned, this style is a bit more verbose, but since
code is read much more often than it is written, it is a net win.

Regards,
Pat
 
