Inferred latches questions

Guest
Hi everybody

I have a simple question about inferred latches in combinational
processes. Are latches good or bad or what in such cases? As far as I
get it, latches are inferred whenever a signal is not assigned a
default value in the process and not all possible paths of execution
assign a value to the signal in consideration. Following is a simple
example:

....
signal a : std_logic_vector(7 downto 0);
signal b : std_logic_vector(7 downto 0);
signal result : std_logic_vector(7 downto 0);
signal op : std_logic;
signal overflow : std_logic;
....

simple_alu: process(a, b, op)
    -- variable to hold the sum a + b when determining overflow
    variable tmp : std_logic_vector(8 downto 0);
begin

    if op = '1' then
        -- compute the sum
        tmp := ("0" & a) + ("0" & b);
        -- detect overflow
        overflow <= a(7) xor b(7) xor tmp(7) xor tmp(8);
    else
        -- compute the exclusive or of a and b
        tmp := ("0" & a) xor ("0" & b);
    end if;

    result <= tmp(7 downto 0);

end process simple_alu;
....

In this process the signal 'overflow' is assigned only in the first
branch of the 'if' statement. As I understand it, a latch should be
inferred here for this signal: it outputs the computed value for
'overflow' whenever signal 'op' equals '1', and keeps the most recent
valid output value when 'op' is different from '1'. Am I correct?
Also, here let us assume that the simple ALU is used by some external
logic that is clever enough to look at the 'overflow' signal only when
performing additions, effectively discarding its value when performing
exclusive-ors. In this case it really does not matter what the ALU
outputs for 'overflow' when performing xors, so it is safe for the
latch to be removed, right?
In all, I simply do not get the purpose of latches in such cases.
Also, is the following not possible? Suppose that (as in real life)
some synchronous logic changes the inputs to the ALU, but the inputs do
not change simultaneously: some new values are propagated on 'a' and
'b' (perhaps not even all of the bits in these vectors - say a(3) and
b(7) are slower, for some reason) while the operation is still
'addition'. The ALU computes (literally) 'something', then the type of
operation changes to 'xor', and shortly afterwards all signals settle
to their stable states until they are changed at the next heartbeat of
the synchronous logic controlling the ALU. If this is possible (I think
it is), then the value latched for 'overflow' is obviously totally
wrong and therefore useless. Any logic that relies on using the latest
assigned value will therefore behave incorrectly. I suppose this all
means that in real life such latches are very thin ice to tread on,
and, if the desired behavior is indeed to keep the last valid value
assigned to a signal, this must be handled by means of explicit
'enable' signals and/or synchronous logic (I can't think of an example
right now), instead of relying on the input signals to indirectly drive
the inferred latch enable. Is this correct? I will really appreciate
any comments on these topics.

Thanks and best regards,
Stoyan Shopov
 
I have always treated inferred latches as being an indication that I
did not specify something correctly in my design. I believe you are
correct in your understanding that they are generated when the output
state is not defined for all possible conditions, hence a latch is used
to hold the previous value until a new condition sets it.
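
For example, in the original ALU process the latch on 'overflow'
disappears if the signal gets a default assignment before the 'if', so
that every execution path assigns it (a quick sketch, reusing the
declarations from the original post):

simple_alu: process(a, b, op)
    variable tmp : std_logic_vector(8 downto 0);
begin
    -- default assignment: every path now drives 'overflow', so the
    -- process describes purely combinational logic and no latch is
    -- inferred
    overflow <= '0';

    if op = '1' then
        tmp := ("0" & a) + ("0" & b);
        overflow <= a(7) xor b(7) xor tmp(7) xor tmp(8);
    else
        tmp := ("0" & a) xor ("0" & b);
    end if;

    result <= tmp(7 downto 0);
end process simple_alu;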

There are two problems that I see with inferred latches. First, they
represent ambiguity in a design, and this is potentially an indication
that the logic isn't as intended. The second problem with them is the
introduction of propagation delays that could potentially interfere
with circuit operation.

As an interesting point, I have attempted to simulate logic that has an
inferred latch in it. The latch wasn't there by choice, but by
accident. Sometimes the simulation of the logic would work and
sometimes it wouldn't. Consequently, I learned to treat them as an
error.
 
Short answer: Latches are bad.

Long answer: Latches are to be used only when there is absolutely no
other way.

Yes, there are places where they are appropriate. I can't think of one
right now, and the last couple times I tried to use them it was easier
to design them out than to get them to pass timing checks.

As someone else pointed out, their presence can indicate (and usually
indicates) an error; most team designs require one to document all
intended latches.
 
stoyan.shopov@gmail.com wrote:


Are latches good or bad or what in such cases?
* There are targets that do not support latches (many FPGAs).
* Latches may be forbidden by your company's design rules.
* You have to take care that
  1) latches are transparent - glitches will propagate while the latch
  is open;
  2) the data input of a latch must not change shortly before the latch
  is disabled (the muxed latch problem) - this would lead to a wrong
  value being stored in the latch.
* Latches are only half as big as flipflops.
* Latches do not consume energy like flipflops, which do so on every
  clock edge (even if the value is unaltered). Latches are good for
  low-power designs.


As far as I get it, latches are inferred whenever a signal is not
assigned a default value in the process and not all possible paths of
execution assign a value to the signal in consideration.
Right. The template for a latch is:

process(data_in, enable)
begin
    -- transparent latch: the output follows data_in while enable is
    -- '1', otherwise the last value is kept
    if (enable = '1') then
        latch_val <= data_in;
    end if;
end process;

Note that adding a reset _may_ lead to the muxed latch problem.

process(data_in, enable, reset)
begin
    -- latch with asynchronous reset; the reset overrides the enable
    if (reset = '1') then
        latch_val <= '0';
    elsif (enable = '1') then
        latch_val <= data_in;
    end if;
end process;

This strongly depends on the synthesis tool _and_ on the library. (You
need resettable latches, and the synthesis tool has to be smart enough
to use them.)


simple_alu: process(a, b, op)
    -- variable to hold the sum a + b when determining overflow
    variable tmp : std_logic_vector(8 downto 0);
begin

    if op = '1' then
        -- compute the sum
        tmp := ("0" & a) + ("0" & b);
        -- detect overflow
        overflow <= a(7) xor b(7) xor tmp(7) xor tmp(8);
    else
        -- compute the exclusive or of a and b
        tmp := ("0" & a) xor ("0" & b);
    end if;

    result <= tmp(7 downto 0);

end process simple_alu;
....

In this process the signal 'overflow' is assigned only in the first
branch of the 'if' statement. As I understand it, a latch should be
inferred here for this signal: it outputs the computed value for
'overflow' whenever signal 'op' equals '1', and keeps the most recent
valid output value when 'op' is different from '1'. Am I correct?
Right - just test it with your synthesis tool.


Also, here let us assume that the simple ALU is used by some external
logic that is clever enough to look at the 'overflow' signal only when
performing additions, effectively discarding its value when performing
exclusive-ors. In this case it really does not matter what the ALU
outputs for 'overflow' when performing xors, so it is safe for the
latch to be removed, right?
This is something that has nothing to do with latches; it is logic
reduction. (You could model overflow as a flipflop, too.)

It is always a good idea to do logic reduction, and in your case to
make the other logic discard an unneeded overflow signal.
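
For illustration, here is a sketch of the flipflop variant: the flag is
updated only on additions and keeps its value otherwise. (This assumes
a clock signal 'clk', which is not in your original code.)

overflow_reg: process(clk)
    variable tmp : std_logic_vector(8 downto 0);
begin
    if rising_edge(clk) then
        -- update the flag only when an addition is performed;
        -- otherwise the flipflop simply keeps its stored value
        if op = '1' then
            tmp := ("0" & a) + ("0" & b);
            overflow <= a(7) xor b(7) xor tmp(7) xor tmp(8);
        end if;
    end if;
end process overflow_reg;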


In all, I simply do not get the purpose of latches in such cases.
Latches are storage elements (registers). Flipflops are registers, too.

You need registers to store information. If you can eliminate the need
to store that information, you eliminate the register.

In an ALU it may be a good idea to store flags, like the overflow flag.
But this strongly depends on _your_ CPU. Let me give an example:
Try to add a value from one RAM position to a value from a 2nd RAM
position. Then you need 2 addresses. Let's assume that the CPU performs
the following steps:
* fetch the first address
* fetch the 1st operand and store it in a temp register
* fetch the 2nd address
* fetch the 2nd operand, add it to the value from the temp register
  and store the result in the temp register (-> overflow flag!)
* store the value of the temp register at the 2nd address
Let's also assume that every move of operands during all this is done
by the ALU. (get operand(s) from somewhere -> ALU -> put operand
somewhere) As you can see, the very last step means _moving_ the
result. While moving the result the flags must not be altered, because
they have to be valid after the complete instruction is finished.
=> Sometimes the ALU is allowed to modify flags, sometimes not. -> You
need storage elements. The CPU controls whether the flags are altered.
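
A sketch of such a flag register with a write enable driven by the CPU
control logic (the names 'clk', 'flags_we', 'alu_flags' and 'flags' are
made up for this example):

flag_reg: process(clk)
begin
    if rising_edge(clk) then
        -- the CPU state machine decides when the ALU result is allowed
        -- to update the stored flags
        if flags_we = '1' then
            flags <= alu_flags;   -- e.g. alu_flags(0) = overflow
        end if;
    end if;
end process flag_reg;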


Also, is the following not possible? Suppose that (as in real life)
some synchronous logic changes the inputs to the ALU, but the inputs do
not change simultaneously: some new values are propagated on 'a' and
'b' (perhaps not even all of the bits in these vectors - say a(3) and
b(7) are slower, for some reason) while the operation is still
'addition'. The ALU computes (literally) 'something', then the type of
operation changes to 'xor', and shortly afterwards all signals settle
to their stable states until they are changed at the next heartbeat of
the synchronous logic controlling the ALU. If this is possible (I think
it is), then the value latched for 'overflow' is obviously totally
wrong and therefore useless.
Remember that an instruction needs several steps to be computed, and
this often means several ALU steps. So the state machine of the CPU has
to take care that the operands and the instruction for the ALU are
valid. (And the setup and hold times have to be met.)


Ralf
 
