Paradigms in implementation of counters


Eli Bendersky

Hi all,

My usage of counters has evolved with my gaining experience in
VHDL, and I am wondering what other people use for implementing counters.

All my design's boundaries (i.e. pins) are, naturally std_logic and
std_logic_vector. So are my registers for CPU access. But I like to
implement counters with 'natural' signals, because it's simpler.

So I define:

constant MAX_COUNT: natural := 42;
signal my_counter: natural range 0 to MAX_COUNT;

Then I can simply increment the counter with my_counter <= my_counter +
1, and compare it to numeric constants like 0, 1 and MAX_COUNT.
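A minimal sketch of what such a counter process might look like (clock and synchronous reset names are assumptions, not from the post; counter names are from the declarations above):

```vhdl
-- Sketch only: clk/rst and the wrap-at-MAX behavior are illustrative.
process(clk)
begin
  if rising_edge(clk) then
    if rst = '1' then
      my_counter <= 0;
    elsif my_counter = MAX_COUNT then
      my_counter <= 0;                  -- wrap at the defined maximum
    else
      my_counter <= my_counter + 1;
    end if;
  end if;
end process;
```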
The interesting thing comes when there's need to convert back from
natural to std_logic_vector. I prefer not to intermix mathematical
libraries and use solely ieee.numeric_std, so my conversion is:

my_slv <= std_logic_vector(to_unsigned(my_counter, 6));

(Now that I think of it, I could use my_slv'size or 'length or
something, would that be synthesizable ?)

What are other people's approaches when implementing counters ?

Thanks in advance
Eli
 
Eli Bendersky schrieb:

So I define:

constant MAX_COUNT: natural := 42;
signal my_counter: natural range 0 to MAX_COUNT;

What are other people's approaches when implementing counters ?
I prefer integer, without any special reason :=).
I would never use a counter signal with range 0 to 42, because your
synthesis result will differ between synthesis tools. I would
set MAX_COUNT to 63 (the next 2^n-1) and use a second constant
MAX_VALUE = 42 to count from 0 to 42.
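As a sketch, the two-constant scheme described above might look like this (the exact split of roles is an assumption of the poster's intent):

```vhdl
-- Sketch: power-of-two sizing constant vs. actual terminal value.
constant MAX_VALUE : natural := 42;   -- the value actually counted to
constant MAX_COUNT : natural := 63;   -- next 2**n - 1; sizes the signal
signal my_counter : natural range 0 to MAX_COUNT;
-- ...then count 0 to MAX_VALUE, wrapping there:
--   if my_counter = MAX_VALUE then my_counter <= 0;
--   else my_counter <= my_counter + 1; end if;
```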

bye Thomas
 
"Eli Bendersky" <eliben@gmail.com> wrote in message
news:1148446758.224219.207090@j73g2000cwa.googlegroups.com...
Hi all,

My usage of counters has evolved with my gaining experience in
VHDL, and I am wondering what other people use for implementing counters.

All my design's boundaries (i.e. pins) are, naturally std_logic and
std_logic_vector. So are my registers for CPU access. But I like to
implement counters with 'natural' signals, because it's simpler.

So I define:

constant MAX_COUNT: natural := 42;
signal my_counter: natural range 0 to MAX_COUNT;
I prefer the "unsigned" type:

constant MAX_COUNT : natural := 42;
signal my_counter : unsigned(ceil_log2(MAX_COUNT)-1 downto 0);
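Note that ceil_log2 is not a standard numeric_std function; a possible implementation (a sketch, assuming a strictly positive argument, returning the number of bits needed to hold values 0..n) is:

```vhdl
-- Hypothetical helper, not from the post: bits needed to represent 0..n.
function ceil_log2(n : positive) return natural is
  variable bits : natural := 0;
  variable v    : natural := n;
begin
  while v > 0 loop
    bits := bits + 1;
    v    := v / 2;
  end loop;
  return bits;   -- e.g. 6 bits for n = 42 (holds 0..63)
end function;
```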

Then I can simply increment the counter with my_counter <= my_counter +
1, and compare it to numeric constants like 0, 1 and MAX_COUNT.
Thanks to the magic of numeric_std, so can I. :)

The interesting thing comes when there's need to convert back from
natural to std_logic_vector. I prefer not to intermix mathematical
libraries and use solely ieee.numeric_std, so my conversion is:

my_slv <= std_logic_vector(to_unsigned(my_counter, 6));
With my definition, I get

my_slv <= std_logic_vector(my_counter);

(Now that I think of it, I could use my_slv'size or 'length or
something, would that be synthesizable ?)
Yes, it would.

What are other people's approaches when implementing counters ?
I dislike having signals that are of non-std_logic-related types (e.g.
integer, boolean) unless they are enumerated types for state machines (etc.),
for the following reasons:

* std_logic supports unknown-value generation/propagation
* easier to see how many bits there are in a given signal
* easier to "pull out" bits or bit-slices when needed (e.g. clock
dividers)
* often fewer conversions are needed
* superstition :)

Your approach is perfectly good (particularly the use of ieee.numeric_std,
yay!)

Cheers,

-Ben-
 
Thomas Stanka wrote:
Eli Bendersky schrieb:

So I define:

constant MAX_COUNT: natural := 42;
signal my_counter: natural range 0 to MAX_COUNT;

What are other people's approaches when implementing counters ?

I prefer integer without any special reason :=).
My preference for 'natural' over 'integer' stems from signedness. With
'natural' I'm always sure that I have unsigned, which is what I (and
most others) need 99.9% of the time. It's not that I'm unsure with integer
when I define the range, but 'natural' just feels more, ehm, natural
:)

I would never use a counter signal with range 0 to 42, because your
synthesis result will differ between synthesis tools. I would
set MAX_COUNT to 63 (the next 2^n-1) and use a second constant
MAX_VALUE = 42 to count from 0 to 42.
IMHO it is superfluous, and I seriously doubt that any synthesizer will
give me wrong results. When I convert the natural back to slv, I
explicitly specify the bit width in 'to_unsigned' - so there is
absolutely no place for mistakes.
Or am I missing something :) ?

Eli
 
my_slv <= std_logic_vector(to_unsigned(my_counter, 6));

With my definition, I get

my_slv <= std_logic_vector(my_counter);

(Now that I think of it, I could use my_slv'size or 'length or
something, would that be synthesizable ?)

Yes, it would.

What are other people's approaches when implementing counters ?

I dislike having signals that are of non-std_logic-related types (e.g.
integer, boolean) unless they are enumerated types for state machines (etc.),
for the following reasons:

* std_logic supports unknown-value generation/propagation
* easier to see how many bits there are in a given signal
* easier to "pull out" bits or bit-slices when needed (e.g. clock
dividers)
* often fewer conversions are needed
* superstition :)
How large a part of this is superstition? After all, enumerated types
for state machines are also not quite std_logic-related. A bounded
natural can be viewed as an enumerated type of 0, 1, 2, ..., MAX

Eli
 
So I define:

constant MAX_COUNT: natural := 42;
signal my_counter: natural range 0 to MAX_COUNT;

Then I can simply increment the counter with my_counter <= my_counter +
1, and compare it to numeric constants like 0, 1 and MAX_COUNT.
The interesting thing comes when there's need to convert back from
natural to std_logic_vector. I prefer not to intermix mathematical
libraries and use solely ieee.numeric_std,
All good things

so my conversion is:

my_slv <= std_logic_vector(to_unsigned(my_counter, 6));

(Now that I think of it, I could use my_slv'size or 'length or
something, would that be synthesizable ?)
This also works just fine and has no trouble synthesizing:
my_slv <= std_logic_vector(to_unsigned(my_counter, my_slv'length));

In fact, wherever you have to 'hard code' the length, the MSB, the LSB, etc.
of any vector, you should probably pause for a second and consider using the
appropriate signal attribute instead, as shown above for 'length (e.g.
'length, 'left, 'right, etc.).
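For instance (a sketch; the msb/lsb signals are illustrative, not from the post):

```vhdl
-- Attributes instead of hard-coded bounds: this keeps working if the
-- declared width of my_slv later changes.
my_slv <= std_logic_vector(to_unsigned(my_counter, my_slv'length));
msb    <= my_slv(my_slv'left);    -- MSB regardless of declared range
lsb    <= my_slv(my_slv'right);   -- LSB likewise
```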

What are other people's approaches when implementing counters ?
Same as what you're talking about. I tend to use naturals, since most
counters count from 0 up to something; if I need
something that counts negative I'll use integer. In other words, you can
safely use the appropriate data type without fear of retribution from
most synthesis tools.

From a synthesis perspective, make sure you always define the range,
since

signal Counter: integer;

will synthesize to a 32-bit counter when no range is specified. If Counter
only counts from 0 to 7, then you'll end up with 29 bits getting
synthesized that always result in 0. The synthesis tool uses the range to
figure out how many bits are needed to implement the counter; I haven't
found any that will optimize away those upper 29 bits in this example.

One kind of clumsy thing, though, is when the range needs to be somewhat generic
and get its value from the generic map AND you need to be able to convert
to/from std_logic, since now the width of the std_logic_vector and the range
of the counter will both vary as a function of the generic. In that case,
I'll bring in the width of the std_logic_vector version of the counter as
the generic and define the range of the counter in terms of that generic.

Ex. If 'N_Bits' is the name of the generic input to the entity, then
signal Counter_Slv: std_ulogic_vector(N_Bits - 1 downto 0);
signal Counter: natural range 0 to 2**N_Bits - 1;

The 'problem' is simply one of usage and documentation. When you document
how this generic is used, you'll end up saying something to the effect that
you need to set 'N_Bits' to the base 2 log of .... In fact, calling it
N_Bits as I've done is probably NOT a good name to use.

Not that you should write your own FIFO, but if you did, one of the
parameters you would probably want to bring out is a generic that specifies
the depth of the FIFO. So to someone trying to USE your nice new FIFO
design, they would probably immediately grasp that a generic called
'FIFO_Depth' represents the depth of the FIFO. But if you follow the above
approach, what you would bring out as the generic would actually be
log2(FIFO_Depth). You could get into calling the generic 'FIFO_Depth_Bits'
or something and say that this is the number of bits needed to represent
FIFO_Depth or maybe 'FIFO_Depth_Log2' and say that the actual depth of the
FIFO is 2**FIFO_Depth_Log2. So to make a 256 entry FIFO one would need to
set this generic to 8. My preference would be to name it something like
Log2_Fifo_Depth and document it as being log2() of the desired depth of the
FIFO. When you come up with names, remember the perspective of someone
trying to use your code who is not as intimately familiar with it as you are.

It's not at all difficult to grasp when you're both writing the
entity/architecture of the new component AND writing the code that
instantiates that component since you're on both sides of the fence and
obviously know what is needed. If you have no visibility into the
entity/architecture though and are now trying to use that code then having
to specify the log2 of the real thing that you would like to specify is not
terribly intuitive. Calling that generic something that has to do with the
number of bits of something is even less intuitive. Since the code you
write is potentially code that someone else will pick up and just want to
use, without having to dig in and completely understand it themselves (i.e. code
reuse), you need to be careful about how you name those generics and try
to make it painfully clear how the user should set them.
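As a sketch, the naming advice above might look like this in an entity declaration (entity, generic, and port names are illustrative, not from the post):

```vhdl
-- Hypothetical FIFO entity illustrating the generic-naming advice.
entity fifo is
  generic (
    -- log2() of the desired FIFO depth: set to 8 for a 256-entry FIFO.
    Log2_Fifo_Depth : positive := 8;
    Data_Width      : positive := 32
  );
  port (
    Write_Data : in  std_ulogic_vector(Data_Width - 1 downto 0);
    Read_Data  : out std_ulogic_vector(Data_Width - 1 downto 0)
    -- ... clocks, enables, and flags omitted in this sketch
  );
end entity fifo;
```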

KJ
 
"Eli Bendersky" <eliben@gmail.com> wrote in message
news:1148464693.758039.42920@i40g2000cwc.googlegroups.com...
I dislike having signals that are of non-std_logic-related types (e.g.
integer, boolean) unless they are enumerated types for state machines
(etc.),
for the following reasons:

* std_logic supports unknown-value generation/propagation
* easier to see how many bits there are in a given signal
* easier to "pull out" bits or bit-slices when needed (e.g. clock
dividers)
* often fewer conversions are needed
* superstition :)

How large a part of this is superstition ?
The part where all my binary counters skip from 0001100 to 0001110 ;-)

After all, enumerated types
for state machines are also not quite std_logic-related. A bounded
natural can be viewed as an enumerated type of 0, 1, 2, ..., MAX
That's true. However, when I use an enumerated type it's usually as an
abstraction (e.g. because I don't care about the mapping of a state vector
to hardware). In the case of a counter, I don't want an abstraction, I want
a particular circuit.

Another good thing about the signed/unsigned types in numeric_std is that
they have a (theoretically) unlimited range. You can't write a 48-bit
counter using an integer-typed signal.

By superstition, I guess I really mean "habit". This was Standard Practice
in the company where I did most of my early VHDL work, and it still seems
like a good system, so I continue to use it.

Cheers,

-Ben-
 
Another good thing about the signed/unsigned types in numeric_std is that
they have a (theoretically) unlimited range. You can't write a 48-bit
counter using an integer-typed signal.
Good point. Most counters aren't nearly that large, so it doesn't
matter whether you use unsigned or natural, but just a week or two ago
somebody posted a question about how to make a 102-bit adder. The
answer is trivial when using unsigned, more complicated when you run
into 32-bit boundaries and such.

KJ
 
KJ wrote:
The 'problem' is simply one of usage and documentation. When you document
how this generic is used, you'll end up saying something to the effect that
you need to set 'N_Bits' to the base 2 log of .... In fact, calling it
N_Bits as I've done is probably NOT a good name to use.

Not that you should write your own FIFO, but if you did, one of the
parameters you would probably want to bring out is a generic that specifies
the depth of the FIFO. So to someone trying to USE your nice new FIFO
design, they would probably immediately grasp that a generic called
'FIFO_Depth' represents the depth of the FIFO. But if you follow the above
approach, what you would bring out as the generic would actually be
log2(FIFO_Depth). You could get into calling the generic 'FIFO_Depth_Bits'
or something and say that this is the number of bits needed to represent
FIFO_Depth or maybe 'FIFO_Depth_Log2' and say that the actual depth of the
FIFO is 2**FIFO_Depth_Log2. So to make a 256 entry FIFO one would need to
set this generic to 8. My preference would be to name it something like
Log2_Fifo_Depth and document it as being log2() of the desired depth of the
FIFO. Remember the perspective of someone trying to use your code but is
not intimately familiar with it as you are as you come up with names.

It's not at all difficult to grasp when you're both writing the
entity/architecture of the new component AND writing the code that
instantiates that component since you're on both sides of the fence and
obviously know what is needed. If you have no visibility into the
entity/architecture though and are now trying to use that code then having
to specify the log2 of the real thing that you would like to specify is not
terribly intuitive. Calling that generic something that has to do with the
number of bits of something is even less intuitive. Since the code that you
write is potentially code that someone else will pick up and just want to
use, not have to dig into and completely understand themselves (i.e. code
reuse) you just need to be careful about how to name those generics and try
to make it painfully clear about how the user should use that generic.
I have also noticed this slight problem in terminology, having to
specify generics for bit widths all the time. I realize it's a
confusing point, so I just adopted a convention of attaching _nbits to
anything that actually specifies a log2 of a value; heavily
documenting all the generics/signals with comments also helps.

Eli
 
I would never use a counter signal with range 0 to 42, because your
synthesis result will differ between synthesis tools.
I have yet to run across a synthesis tool that uses the range for
anything other than determining how many bits are needed to implement
the counter. No range checking or limiting is implemented for
synthesis, so a natural that is defined range 0 to 42 gets implemented
exactly the same as one with a range of 0 to 63. (Assuming here that
you change nothing else in the code, just the upper end of the range.)

For simulation, though, the tools do perform this range checking, and I've
found this to be useful during debug, since an out-of-range
calculation generally is a design error that needs to be fixed. If I
know that I should never get a value above 42, then the simulator will
catch it when I try to do such a thing, and like I said, it is usually a
design error on my part that I need to fix.

Using an artificially larger range, or using unsigned types, you get no
automagic checking that the counter is working as intended. You can of
course add asserts to catch it yourself (i.e. assert Count <=
42 ...); it just means that you need to write the assert statement
yourself (more code on your part, but essentially equivalent).
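Such an assert might look like this (a sketch; the message text is illustrative):

```vhdl
-- Simulation-time range check for a counter held in a wider or
-- unsigned signal; synthesis tools generally ignore asserts.
assert Count <= 42
  report "Counter exceeded its intended maximum of 42"
  severity error;
```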

KJ
 
KJ wrote:
I would never use a counter signal with range 0 to 42, because your
synthesis result will differ between synthesis tools.

I have yet to run across a synthesis tool that uses the range for
anything other than determining how many bits are needed to implement
the counter. No range checking or limiting is implemented for
synthesis so a natural that is defined range 0 to 42 gets implemented
exactly the same as one with a range of 0 to 63. (Assuming here that
you change nothing else in the code, just the upper end of the range)

For simulation though it does do this range checking and I've found
this to be useful during debug of the code since an out of range
calculation generally is a design error that needs to be fixed. If I
know that I should never get a value above 42 then the simulator will
catch it when I try to do such a thing and like I said, it is usually a
design error on my part that I need to fix.
I recall at least 3 times in my VHDL coding experience when the
simulator signaled an error for exceeding the range. In all cases I was
very happy: with slv counters it wouldn't pop up, and the bug would be
much more difficult to find.

Eli
 
KJ schrieb:

I would never use a counter signal with range 0 to 42, because your
synthesis result will differ between synthesis tools.

I have yet to run across a synthesis tool that uses the range for
anything other than determining how many bits are needed to implement
the counter. No range checking or limiting is implemented for
synthesis so a natural that is defined range 0 to 42 gets implemented
exactly the same as one with a range of 0 to 63. (Assuming here that
you change nothing else in the code, just the upper end of the range)
I have seen that Dc_shell and Synplify generate different behavior for
values >42 when using range 0 to 42. In fact, both tools have full
freedom to optimize your code for counter values >42 if you constrain
the range from 0 to 42.

For simulation though it does do this range checking and I've found
this to be useful during debug of the code since an out of range
calculation generally is a design error that needs to be fixed.
Yes, but if your simulation never raises your counter above the max
value, but your design does, you may end up with a dysfunctional
netlist, because the netlist won't complain about counter values >42.

bye Thomas
 
Eli Bendersky schrieb:

Thomas Stanka wrote:
I would never use a counter signal with range 0 to 42, because your
synthesis result will differ between synthesis tools. I would
set MAX_COUNT to 63 (the next 2^n-1) and use a second constant
MAX_VALUE = 42 to count from 0 to 42.


IMHO it is superfluous, and I seriously doubt that any synthesizer will
give me wrong results. When I convert the natural back to slv, I
explicitly specify the bit width in 'to_unsigned' - so there is
absolutely no place for mistakes.
Or am I missing something :) ?
It is not superfluous if you consider your counter getting values >
MAX_COUNT for some reason (e.g. SEU, reset problem, clk spike, ...). If
you use range 0 to 42, you will never be able to write and simulate what
should happen if your counter value exceeds 42 due to any error.
In fact, your synthesis result will be a register which could physically
contain the values 43 to 63, and there will be no way to control the
behavior of your counter for values >42.

This is no problem, as long as your counter never reaches values
>42. But you need to be sure your counter never ever exceeds 42 if you use your kind of code.
bye Thomas
 
Thomas Stanka wrote:

I have seen that Dc_shell and Synplify generate different behavior for
values >42 when using range 0 to 42. In fact, both tools have full
freedom to optimize your code for counter values >42 if you constrain
the range from 0 to 42.
Thomas:
If you still have access to Dc_shell, I would
appreciate it if you could synthesize this design
with default constraints for any device, and
let me know how it does. It works fine with Synplify.

Thanks.

http://home.comcast.net/~mike_treseler/uart.vhd


-- Mike Treseler
 
Hi,

Mike Treseler schrieb:

Thomas Stanka wrote:

I have seen that Dc_shell and Synplify generate different behavior for
values >42 when using range 0 to 42. In fact, both tools have full
freedom to optimize your code for counter values >42 if you constrain
the range from 0 to 42.

Thomas:
If you still have access to Dc_shell, I would
appreciate it if you could synthesize this design
with default constraints for any device, and
let me know how it does. It works fine with Synplify.
It seems that 'event (i.e. rising_edge(Clk)) is not allowed inside procedures
by dc_shell 8-(.

bye Thomas
 
The following code behaves drastically differently between natural and
unsigned data types for count:

if count - 1 < 0 then
  count <= max_count;
else
  count <= count - 1;
end if;

Since all integer operations are 32-bit signed, the initial comparison
is valid even for natural (non-negative) subtypes of count. However,
since the results of unsigned operations are also unsigned, the comparison
always returns false with unsigned types for count.

Range (and corresponding width) restrictions are imposed on integer
subtypes only when a value is assigned to a variable or signal.
Synthesis optimizes out unused bits in the operation.

In the example above, "count - 1" is recognized and shared between the
comparison and the decrement. The comparison implements an extra bit
(the borrow/carry bit), but no storage is consumed. I find this much
easier to code and understand than adding phantom bits to counts so
that they can be checked for rollover using the carry logic.
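For contrast, the "phantom bit" idiom being referred to might look like this (a sketch; widths and names are illustrative):

```vhdl
-- Phantom-bit idiom: keep one extra MSB on a 6-bit down-counter and
-- watch it to detect underflow (the borrow pops out as the top bit).
signal count : unsigned(6 downto 0);   -- bit 6 is the phantom/borrow bit
-- ...
if count(count'high) = '1' then        -- borrow set: counter rolled under
  count <= to_unsigned(MAX_COUNT, count'length);
else
  count <= count - 1;
end if;
```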

I vastly prefer a value system where N+1 > N is always true, which is
not the case with signed/unsigned data types. I prefer to tell the
tool exactly what I want done if I exceed the range representable by
the bits, rather than have the tool assume I wanted it to roll over
automatically.

I have also seen cases where natural range 0 to 5 optimized certain
comparisons. For example, comparing equal to 5 takes only 2 bits, not
3, whereas unsigned(2 downto 0) takes all 3 bits to compare equal to 5.

Finally, integers are _much_ faster to simulate than vector based
arithmetic, since the simulator can take advantage of native
instructions for the former.

Andy
 
Thomas Stanka wrote:

It seems that 'events (rising_edge(Clk)) are not allowed for procedures
by dc_shell 8-(.
Thanks for running the test.

What happens if I do this?

-- elsif rising_edge(clock) then
elsif clock'event and clock = '1' then


-- Mike Treseler
 
In fact, both tools have full
freedom to optimize your code for counter values >42 if you constrain
the range from 0 to 42.

Synthesis doesn't have as much freedom as you give it credit for. If the signal
is defined to be in the range 0 to 42 and you have a simple counter
that is designed to stay between 0 and 42...

signal Count: natural range 0 to 42;
.....
process(Clock)
begin
  if rising_edge(Clock) then
    if (Reset = '1') then
      Count <= 0;
    elsif (Count_Enable = '1') then
      if (Count = 42) then
        Count <= 0;
      else
        Count <= Count + 1;
      end if;
    end if;
  end if;
end process;

The synthesis tool is not allowed to do whatever it feels like if Count
somehow gets to 43. In fact, what it must implement is the above specified
logic which says that if we're not reset and the count enable signal is true
then Count must be incremented (thereby making it go to 44).

This situation is no different than when you have an enumerated type which
is used as the state variable in a state machine. Your state machine will
have a 'when others => ' clause which will pick up all of the 'unexpected'
states. For example, if you have an enumerated type with four values and a
state machine that has cases for the four defined states Synthesis could
choose to encode this into 2 physical bits in which case the cases '00',
'01', '10' and '11' are all that will ever be encountered in the 'real
world' and will correspond directly to the four defined enumerations. The
synthesis tool is also free to implement a one hot encoding in which case
there will be a total of 16 possible states that the synthesis tool needs to
handle....but since you have the 'when others' clause defining what the
signal is supposed to be set to in that situation as well, the synthesis
tools must again implement what has been defined in the code.

The 'when others' code would never be executed if synthesis chose the
2-bit encoding, but would need to be adhered to if any other encoding is used
that results in more than 4 possible states.
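A skeleton of such a state machine might look like this (type, state, and signal names are illustrative, not from the post):

```vhdl
-- Four defined states; 'when others' defines recovery behavior for any
-- encodings (e.g. unused one-hot patterns) outside the enumeration.
type state_t is (IDLE, LOAD, RUN, DONE);
signal state : state_t;
-- ...
case state is
  when IDLE   => if start = '1' then state <= LOAD; end if;
  when LOAD   => state <= RUN;
  when RUN    => if finished = '1' then state <= DONE; end if;
  when DONE   => state <= IDLE;
  when others => state <= IDLE;   -- recovery path for illegal states
end case;
```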

For simulation though it does do this range checking and I've found
this to be useful during debug of the code since an out of range
calculation generally is a design error that needs to be fixed.

Yes, but if your simulation never raises your counter above the max
value, but your design does, you may end up with a dysfunctional
netlist, because the netlist won't complain about counter values >42.

bye Thomas

I'm presuming that by 'dysfunctional netlist' you're talking about a situation
where simulation does not match reality, but what you've described is not
that situation at all. Instead, what you've described is a design error
(which allowed the signal to go out of range) and an inadequate test bench
(which didn't exercise the functionality as it would occur in a real system
that allows the signal to go out of range). But OK, let's say you have
something out in the real world where Count makes it up to 43 and it wasn't
caught in simulation. What this implies is that there are two problems that
need to be addressed:
- There is a design error that is allowing the signal to get outside of the
intended range.
- The testbench is not adequate to catch this situation.

Even if you now can immediately spot the design error that caused the signal
to get out of range, what you 'should' do then is the following...in this
order
1. Without changing the design being tested, improve the testbench to cause
it to do what is happening in the real world that is causing the signal to
go out of range. The simulator will stop immediately now when the offending
signal goes out of range.
2. Change the design to fix the design error that caused the signal to go
out of range.
3. Re-run the 'improved' testbench from step #1. If you've properly fixed
the design, then the signal will stay in range and the test will run to
completion with no errors.

The only 'problem' with the above approach is if the signal
went out of range due to a power-up condition, where the signal powers up
in an unknown state, whereas in simulation you see it come up as 0 and start
to rely on this instead of explicitly putting it into a known state (i.e.
forgetting the 'if Reset = '1'' in the first example). Things like
this, though, tend to be relatively easy to catch by simple code inspection.

KJ
 
