Mixed clocked/combinatorial coding styles

On Mon, 25 Aug 2008 16:24:35 -0700, rickman wrote:

I find there are any number of important techniques in software design
that are much less important when coding in an HDL. The idea of scope
is one of those rules.
Locality is good because:

(1) Imagine if in C you couldn't use local variables in functions, and
had to scroll up to the top of the file to declare *every* variable, and
then scroll back down to continue writing your functions. It would
interrupt your train of thought - it would be a PITA.

(2) Using locality effectively allows easier review of code (not only for
someone else, but for you too) and debug. If you can *easily* determine a
variable is local to a chunk of code, then you don't have to concern
yourself with its use anywhere else in that file - it lightens the load on
the brain and makes the job easier.

In various studies in bug detection rates/costs in software (I suspect
there is *some* correlation to HDL coding), it has been found that bugs
found by code review cost a lot less per bug compared to unit testing. I
suspect that's why verification often consists of test benches + code
review.

But I guess not *everyone* will find those things useful.

Paul.
 
On Tue, 26 Aug 2008 12:33:13 -0700, rickman wrote:

So how does using signals preclude code review? This is just a non-
sequitur.
Of course using signals doesn't preclude code review - my point was that
localization aids code review.

Paul.
 
On Aug 26, 9:50 am, KJ <kkjenni...@sbcglobal.net> wrote:
On Aug 26, 9:32 am, rickman <gnu...@gmail.com> wrote:

On Aug 26, 2:18 am, Kim Enkovaara <kim.enkova...@iki.fi> wrote:

There is a very simple test. Does the GSR control the FF? The answer is,
Yes, the GSR *will* control the state of a FF which has an initial
value.

But only for those devices that have this extraneous 'GSR' that the
user apparently is forced to worry about...simply to get the device
into a specified state at power up. Glad I don't have those worries.
I guess you don't use FPGAs then.

This conversation seems to have taken a wrong turn somewhere. I don't
know why this is getting contentious.

I have been talking about FPGAs from the beginning and I have stated
that clearly. ASICs are a whole different animal and I have not
addressed that at all. If you are using a programmable device that
does not have a global power-on reset signal, then you are either
using a very small device (CPLD) or you are using one I have never
heard of.

The "worry" is always there. You have to get the device into a known
state following configuration and any time another reset is applied.
Of course you can use configuration as a reset if you want. But the
issue is still there that you have to have a way to specify the
initial condition for the sequential logic. Most people consider the
GSR to be a *useful* feature toward this goal.

Rick
 
On Aug 26, 1:46 pm, Paul Taylor <pt@false_email.co.uk> wrote:
On Mon, 25 Aug 2008 16:24:35 -0700, rickman wrote:
I find there are any number of important techniques in software design
that are much less important when coding in an HDL. The idea of scope
is one of those rules.

Locality is good because:

(1) Imagine if in C you couldn't use local variables in functions, and
had to scroll up to the top of the file to declare *every* variable, and
then scroll back down to continue writing your functions. It would
interrupt your train of thought - it would be a PITA.
No one is suggesting that locality isn't useful. But micro-managing of
locality is not so useful. If you have a variable for a counter and
later you find that you *need* the value of that counter, then you
have to change the code to use a signal. On the other hand, the fact
that a signal is declared does not require that it is used anywhere
other than in the process where it is assigned.
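
For illustration, a minimal VHDL sketch (all names hypothetical) of the
trade-off described above: the variable-based counter is sealed inside
its process, while the signal-based counter can be tapped later without
touching the counting process.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity counter_demo is
  port (
    clk    : in  std_logic;
    tick_v : out std_logic;   -- from the variable-based counter
    tick_s : out std_logic);  -- from the signal-based counter
end entity counter_demo;

architecture rtl of counter_demo is
  -- Signal style: count_s is visible anywhere in this architecture,
  -- whether or not anything outside its process actually reads it.
  signal count_s : unsigned(3 downto 0) := (others => '0');
begin
  -- Variable style: count_v exists only inside this process.
  var_counter : process (clk)
    variable count_v : unsigned(3 downto 0) := (others => '0');
  begin
    if rising_edge(clk) then
      count_v := count_v + 1;            -- takes effect immediately
      if count_v = 0 then
        tick_v <= '1';
      else
        tick_v <= '0';
      end if;
    end if;
  end process;

  sig_counter : process (clk)
  begin
    if rising_edge(clk) then
      count_s <= count_s + 1;            -- takes effect after the process suspends
    end if;
  end process;

  -- The signal's value is available here without editing sig_counter;
  -- getting at count_v the same way would mean rewriting var_counter.
  tick_s <= '1' when count_s = 0 else '0';
end architecture rtl;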

Comparing the use of signals to C code with only global variables is
not really useful and is just a straw man argument.


(2) Using locality effectively allows easier review of code (not only for
someone else, but for you too) and debug. If you can *easily* determine a
variable is local to a chunk of code, then you don't have to concern
yourself with its use anywhere else in that file - it lightens the load on
the brain and makes the job easier.
If you spend more time reviewing code than writing, I guess this can
be important. I prefer to write code that is inherently easy to
read. Of course this is not always possible for complex problems, but
that is my goal. Using variables will do little to make the difficult
code more readable.


In various studies in bug detection rates/costs in software (I suspect
there is *some* correlation to HDL coding), it has been found that bugs
found by code review cost a lot less per bug compared to unit testing. I
suspect that's why verification often consists of test benches + code
review.
So how does using signals preclude code review? This is just a non-
sequitur.


But I guess not *everyone* will find those things useful.
I find *useful* things useful. I don't use forced techniques when
they are not needed. I use appropriate techniques. I expect that NASA
uses all of these techniques and many more. But that doesn't stop
problems from happening, like mixing feet and meters at an
interface.

Rick
 
On Thu, 21 Aug 2008 10:06:22 -0700 (PDT), rickman <gnuarm@gmail.com>
wrote:

On Aug 21, 9:07 am, Brian Drummond <brian_drumm...@btconnect.com>
wrote:
On Mon, 18 Aug 2008 11:39:42 -0700 (PDT), rickman <gnu...@gmail.com>
wrote:

I can't say I follow you on this. A reset input by definition defines
the state of all registers, state and output. Why would you want to
assign the outputs to be dependent on the previous state when the
reset is asserted??? I have never done this since it was not the
desired behavior.

One reason (not the case in this example): a very long delay line,
containing delayed state (to be used when a slow operation started by
the main state machine has completed). You can either reset this along
with the main state, or simply let its inputs ripple through. The latter
can be implemented at 16 bits per LUT in SRL16s for Xilinx, saving
several hundred FFs. The former can't, because the SRL16s don't have the
necessary reset connections.
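
A minimal sketch (hypothetical names; SRL inference is tool-dependent)
of the kind of reset-free delay line Brian describes, coded so the
stages can be packed into LUT-based shift registers:

library ieee;
use ieee.std_logic_1164.all;

entity delay_line is
  generic (DEPTH : positive := 64);   -- assumes DEPTH >= 2
  port (
    clk  : in  std_logic;
    din  : in  std_logic;
    dout : out std_logic);
end entity delay_line;

architecture rtl of delay_line is
  signal taps : std_logic_vector(DEPTH - 1 downto 0);
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- no reset term: stale contents simply ripple out after startup,
      -- which is what allows SRL16-style packing
      taps <= taps(DEPTH - 2 downto 0) & din;
    end if;
  end process;
  dout <= taps(DEPTH - 1);
end architecture rtl;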

Ok, so how does that relate to the issue of using variables?
I think it's an orthogonal issue, applying to both variables and
signals. It only relates to your point about always wanting reset to
define the entire state.

- Brian
 
On Aug 26, 3:09 pm, rickman <gnu...@gmail.com> wrote:
On Aug 26, 9:50 am, KJ <kkjenni...@sbcglobal.net> wrote:
There is a very simple test.  Does the GSR control the FF?  The answer is,
Yes, the GSR *will* control the state of a FF which has an initial
value.

But only for those devices that have this extraneous 'GSR' that the
user apparently is forced to worry about...simply to get the device
into a specified state at power up.  Glad I don't have those worries.

I guess you don't use FPGAs then.
I certainly do.

This conversation seems to have taken a wrong turn somewhere.  I don't
know why this is getting contentious.
Going in circles somewhat, so I guess we can agree to end it here.

I have been talking about FPGAs from the beginning and I have stated
that clearly.  ASICs are a whole different animal and I have not
addressed that at all.  
Same here, no ASIC conversations from me.

If you are using a programmable device that
does not have a global power-on reset signal, then you are either
using a very small device (CPLD) or you are using one I have never
heard of.
I'm sure you've heard of Altera, Stratix, Cyclone, etc. Lattice
sometimes, Xilinx less so.

The point is that a global power on reset for the device is not needed
in order to get the flops and memory into a known state at the point
where configuration ends. I know this has been said several times and
you seem to dispute that for some reason but it is the truth.

The "worry" is always there.  You have to get the device into a known
state following configuration and any time another reset is applied.
Agreed...but also note that you've mentioned two distinct times for
reset: "following configuration" and "any time another reset is
applied". The initial default values that one can specify in the
design source code only apply to the "following configuration"
instant.
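
A minimal VHDL sketch (hypothetical names) of that distinction: the :=
default takes effect once, at the end of configuration, while the reset
clause describes what happens on every later reset, and the two values
need not agree.

library ieee;
use ieee.std_logic_1164.all;

entity init_demo is
  port (
    clk, rst   : in  std_logic;
    next_state : in  std_logic_vector(1 downto 0);
    state      : out std_logic_vector(1 downto 0));
end entity init_demo;

architecture rtl of init_demo is
  signal state_i : std_logic_vector(1 downto 0) := "01";  -- applies only at end of configuration
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        state_i <= "00";        -- applies on every assertion of rst
      else
        state_i <= next_state;
      end if;
    end if;
  end process;
  state <= state_i;
end architecture rtl;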

Of course you can use configuration as a reset if you want.  
Only to generate the synchronous reset signals that reset the rest of
the device...then using a synchronous process from darn near every
other place in the design. That way the design will come up properly
even if the external reset signal is busted inactive for whatever
reason.

But the
issue is still there that you have to have a way to specify the
initial condition for the sequential logic.  
Look at what you just wrote though. The specification of the 'initial
condition of sequential logic' does not require any signal for it to
be implemented. It is simply the 'initial condition' which exists at
t=0 which for FPGAs is at the point where the part is switching to
some form of 'user mode' where the device is from then on implementing
the logic defined in your source code. For a CPLD or anything that
retains its programming, t=0 is usually defined relative to the power
supply rails reaching some magic voltage for some specified period of
time.

Most people consider the
GSR to be a *useful* feature toward this goal.
No, most people use some form of reset signal to *try* to get their
logic into the same state that existed at t=0. I'm not trying to play
semantic games or anything, but the application of reset is not the
same thing as going back to t=0. In order to get back to t=0 you
would have to reconfigure the device. If you don't think so, then
explain how a reset alone would recover an FPGA from an SEU that upset
some important bit in the device.

I'm not suggesting such an event is likely, just using that to drive
home the point that application of an external reset signal is not the
same as downloading a configuration and as I've said many times
before, the initial value specification in the code gets implemented
from downloading the configuration code, not from any GSR signal.

KJ
 
On Aug 26, 4:52 pm, KJ <kkjenni...@sbcglobal.net> wrote:
On Aug 26, 3:09 pm, rickman <gnu...@gmail.com> wrote:

If you are using a programmable device that
does not have a global power-on reset signal, then you are either
using a very small device (CPLD) or you are using one I have never
heard of.

I'm sure you've heard of Altera, Stratix, Cyclone, etc. Lattice
sometimes, Xilinx less so.

The point is that a global power on reset for the device is not needed
in order to get the flops and memory into a known state at the point
where configuration ends. I know this has been said several times and
you seem to dispute that for some reason but it is the truth.
This seems to be the crux of the discussion. I posted why I think the
GSR is required to initialize the FFs in FPGAs. Why do you think the
GSR is *not* needed to initialize the FFs in FPGAs?

If you want me to say it again... The configuration memory is loaded
from the config stream. This uses a significant number of transistors
and directly controls static signals in the logic elements and routing
matrix. Once configured, these elements do not change. The only
exception to this that I am aware of are the config elements in the
LUTs of a Xilinx device when used as a shift register. Otherwise they
are static and their construction is very different from the FFs in
the logic elements.

The FFs are initialized by the GSR which is routed to the set/reset
under control of the config memory. This logic *has* to exist in
order for the GSR to function even if you don't use it. Then to add
additional logic that will set the initial state of the FF on
configuration would be superfluous.

Have you seen anything that contradicts this? Can you cite a
reference? Just saying it is "the truth" is not really useful.

Rick
 
On Aug 26, 4:41 pm, Paul Taylor <pt@false_email.co.uk> wrote:
On Tue, 26 Aug 2008 12:33:13 -0700, rickman wrote:
So how does using signals preclude code review? This is just a non-
sequitur.

Of course using signals doesn't preclude code review - my point was that
localization aids code review.
This is a good summary of the entire discussion, "localization aids
code review". That is a statement without value. The issue is
whether the use of variables has any real advantage, sufficient to
require learning of additional complexity. So far, I have only seen
hand waving, statements that are either not supported, or that have no
clear meaning. I don't accept that localization of variables actually
provides any advantage in code reviews. I also don't accept that
"aids code review" is a sufficiently strong reason to use variables.

The question is whether the benefit is worth the cost. It has been
said that the cost of using variables is not high and is in fact near
zero once you get over the learning curve. I say this learning curve
is significant, even if it is just the fact that a large portion of
HDL designers don't use variables. So there is a significant portion
of the community that will have to work to understand your code. I
think this is a significant barrier to communication and without some
clear advantage is not worth the cost.

It has also been said that the use of variable localization "aids code
review". Maybe it does in some small way, but I don't see how this is
any significant advantage. A code review is not done in a meeting
room using paper. The code review should take place before the
meeting where the reviewers have access to all of the typical
programming tools, such as an editor with search capabilities. I can
determine the localization of both variables and signals using a
search. This is in no way hard to do, is 100% effective and works for
everything in the code, not just one class of objects.

So it seems clear to me that the cost of using variables is non-
trivial and the advantage of using variables is minimal.

Rick
 
Rick,

You use a search function to review locality of use, we use the
compiler.

But localization is not just about code review! It is also, and even
more importantly IMHO, about code maintenance and the code's ability
to be more easily changed to implement new functionality, or fix bugs
in old functionality, while limiting the effects of those changes to
the intended purpose.

One can design using global signals for everything, and still have a
good design from a locality of use standpoint (just because you
declared a global signal does not mean you used it willy-nilly all
over the place). The problem is that someone reviewing and/or
maintaining such code would have to go to significantly more effort to
verify that you did use locality if you used signals. The locality
advantage of using variables is that you, the compiler, and everyone
else that reads and/or maintains the code knows that locality was not
only intended, it was enforced.
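
A small sketch (hypothetical names) of what "we use the compiler" means
in practice: a process variable simply cannot be referenced outside its
process, so locality is checked on every compile rather than by search.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity locality_demo is
  port (clk : in std_logic; wrap : out std_logic);
end entity locality_demo;

architecture rtl of locality_demo is
begin
  p1 : process (clk)
    variable cnt : unsigned(3 downto 0) := (others => '0');
  begin
    if rising_edge(clk) then
      cnt := cnt + 1;
      if cnt = 0 then wrap <= '1'; else wrap <= '0'; end if;
    end if;
  end process;

  -- wrap <= cnt(0);  -- would not compile: cnt is not visible outside p1
end architecture rtl;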

What is clear to each of us is obviously different...

Andy
 
On Aug 26, 8:32 am, rickman <gnu...@gmail.com> wrote:
Why anyone would imagine that there is extra logic and configuration
memory to control the initial state of CLB FFs is beyond me.  There is
no reason to do it this way and the extra logic required is just
wasted silicon.

Rick
As I said previously, the extra configuration control is to determine
whether the initial/reset condition is '1' or '0' when GSR or local
reset is applied. Given there is only one GSR input and one local
reset input (and they are ORed together to reset the register), how
else do you suppose they might control whether the register is '1' or
'0' upon configuration or reset?

Andy
 
On Aug 27, 8:16 am, rickman <gnu...@gmail.com> wrote:
On Aug 26, 4:52 pm, KJ <kkjenni...@sbcglobal.net> wrote:

On Aug 26, 3:09 pm, rickman <gnu...@gmail.com> wrote:

If you are using a programmable device that
does not have a global power-on reset signal, then you are either
using a very small device (CPLD) or you are using one I have never
heard of.

I'm sure you've heard of Altera, Stratix, Cyclone, etc.  Lattice
sometimes, Xilinx less so.

The point is that a global power on reset for the device is not needed
in order to get the flops and memory into a known state at the point
where configuration ends.  I know this has been said several times and
you seem to dispute that for some reason but it is the truth.

This seems to be the crux of the discussion.  I posted why I think the
GSR is required to initialize the FFs in FPGAs.  Why do you think the
GSR is *not* needed to initialize the FFs in FPGAs?
Forgetting for a moment the set of device I/O pins that are required
to load any bitstream into the device and are basically dedicated I/O
for the bitstream load function, all I'm saying is
1. No *other* external I/O pin is *required* to initialize the FPGA
in order to implement an initial condition on a signal that happens to
be the output of a flip flop or memory (i.e. signal xyz: std_ulogic := '1';). If the device (and tool set) supports initial values then flip
flop #234 which contains signal 'xyz' will be set to a '1' at the
moment when the device switches over to user mode and starts
functioning as I've described in the HDL. (Later I'll get into if the
device/tools do not meet the condition).
2. I wouldn't use a device input that performs a device wide reset
because in order to do so the trailing edge of such an input signal
would have to be synchronized to each of the clocks internal to the
device. From a practical standpoint about the only way to get this
would be to disable the clock input to the device until some time
after that device wide reset signal goes inactive. What you're doing
here though is shifting the burden off chip to the PCBA just so you
can take advantage of what it does for you inside the FPGA.
Apparently you see this as something 'free' to be taken advantage of
and in some instances it might be. In general though there will be
some cost to be incurred to implement your requirements. As an
example, any other part on the PCBA that intends to use that same osc
as a clock input had better be able to cope with the clock shutting
off during the period of time when the FPGA is getting reset. What if
it can't cope with it? You'll need multiple clocks (one gated to the
FPGA, one not) and there will now be clock skew between those
two...does the clock skew matter? Yeah, sometimes it does when the
two devices are expected to be receiving the 'same' clock.

You can argue that putting a reset signal in all of the code now chews
up some logic resources (it does) and can hurt clock cycle performance
(it could) but I've only found this to be any sort of issue on designs
where *everything* gets reset (i.e. control and data path) for no
reason instead of just resetting the much more limited set of control
signals and state machines that need it.

So the bottom line here is that there is a cost to be paid for
using the device wide reset pin and that cost could be evaluated on a
design by design basis. While there can be a logic/performance cost
to not using it, I've found that simply resetting only what needs to
be brings that cost down to 0 in many cases (i.e. when the input to
the LUT would not have been used). What I've found so often though is
that using the device wide reset pin costs more than it's worth so I
don't use it.

I do always have some reset input pin to a design (in case there was
some doubt that I only rely on end of config to get me off to the
correct starting point). But that reset input goes into the 'D' input
of one flip flop which is the first flop of a shift register and it
goes to the async reset of every flop of that shift register. There
is one of these shift registers for each internal clock in the
device. The outputs of the shift registers become the reset signals
that get distributed to the design. That reset signal though has no
requirement to go active at the end of configuration, it need only go
active when something on the board decides that the FPGA needs to be
reset.
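
A minimal VHDL sketch (assumed names, active-high reset, depth of four)
of the reset bridge just described: the board-level reset asserts the
internal reset asynchronously but releases it synchronously to clk, and
the initial value keeps reset asserted coming out of configuration on
devices that support it.

library ieee;
use ieee.std_logic_1164.all;

entity reset_bridge is
  port (
    clk     : in  std_logic;
    ext_rst : in  std_logic;   -- board-level reset, async to clk
    rst     : out std_logic);  -- distributed to this clock domain
end entity reset_bridge;

architecture rtl of reset_bridge is
  signal sr : std_logic_vector(3 downto 0) := (others => '1');
begin
  process (clk, ext_rst)
  begin
    if ext_rst = '1' then
      sr <= (others => '1');              -- async assertion of every stage
    elsif rising_edge(clk) then
      sr <= sr(2 downto 0) & ext_rst;     -- '0' ripples in once ext_rst drops
    end if;
  end process;
  rst <= sr(3);   -- deasserts synchronously; one bridge per clock domain
end architecture rtl;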

Given that, even if the device/tools doesn't happen to support initial
values I wouldn't need to change anything in the RTL code as a
result. What that would mean is that there would be a requirement now
on this reset input to be active for some period of time after the
device is configured. Using the device wide reset you wouldn't change
your code either so there is no (dis)advantage either way from the
perspective of writing code.

If you want me to say it again...  The configuration memory is loaded
from the config stream.  This uses a significant number of transistors
and directly controls static signals in the logic elements and routing
matrix.  Once configured, these elements do not change.  The only
exception to this that I am aware of are the config elements in the
LUTs of a Xilinx device when used as a shift register.  Otherwise they
are static and their construction is very different from the FFs in
the logic elements.
I'm not quite sure what you're getting at here, but it seems that
you're missing the fact that the configuration bitstream also contains
initial value contents of flops and memory (How do you think a read
only memory would get implemented? Discrete logic would work, but for
performance you might want to take advantage of the internal RAM
blocks that the FPGA has).

The FFs are initialized by the GSR which is routed to the set/reset
under control of the config memory.  This logic *has* to exist in
order for the GSR to function even if you don't use it.  Then to add
additional logic that will set the initial state of the FF on
configuration would be superfluous.
There is no additional logic to set the initial state of the FF on
configuration. It simply changes some zeros to ones in the
configuration bitstream.

Have you seen anything that contradicts this?  Can you cite a
reference?  Just saying it is "the truth" is not really useful.
I believe I've said way more than just 'the truth' prior to this
post. Refer to the configuration guide for the device you're
interested in, or the Altera reference I provided earlier and/or
Altera's Quartus manual for what it has to say about initial values.

Kevin Jennings
 
On Aug 27, 9:57 am, Andy <jonesa...@comcast.net> wrote:
Rick,

You use a search function to review locality of use, we use the
compiler.

But localization is not just about code review! It is also, and even
more importantly IMHO, about code maintenance and the code's ability
to be more easily changed to implement new functionality, or fix bugs
in old functionality, while limiting the effects of those changes to
the intended purpose.

One can design using global signals for everything, and still have a
good design from a locality of use standpoint (just because you
declared a global signal does not mean you used it willy-nilly all
over the place). The problem is that someone reviewing and/or
maintaining such code would have to go to significantly more effort to
verify that you did use locality if you used signals. The locality
advantage of using variables is that you, the compiler, and everyone
else that reads and/or maintains the code knows that locality was not
only intended, it was enforced.

What is clear to each of us is obviously different...
This is the stuff that you keep saying with no real basis. For one
you keep referring to "global" signals. Signals are not global. They
are local to an entity. The only way another entity has access to
them is through a defined interface. You are saying that if you don't
use signals inside of processes (which tend to be rather small pieces
of code) then the signals are "global" with all the problems that
creates.

My point is that you are exaggerating the issues of using signals. I
think I said before that in a real software program, true global
variables can be very easily misused. But that is totally different
from signals in entities.

Sure using local variables in VHDL sounds like a good idea, but
compared to using signals, it provides very little advantage in a real
situation. I can very easily view a couple hundred lines of code
using an editor. There is *no* difficulty. Using variables to
make the use of an object strictly local to a process instead of the
entity buys you very little.

You want to make it sound like managing signals is a tough thing. It
is not.

I have worked in situations where there was very little enforced
order. Yes, that was pretty much chaos. In that environment it was
100% up to the programmer to produce good code and for it to be
maintainable. I have worked in environments where there were very
strict methods, like the use of local variables, that were enforced.
I found very little difference in productivity between the two. Why?
Because there are a million ways to write crappy code and techniques
like using variables as a method of coding regulation is in the
noise. It just isn't worth much.

I guess we can argue this all day and never come to any agreement.
But no one has yet given any real supporting information to show that
there is a significant advantage to using variables as a means of
coding control. Everything said here is just arm waving. It is not a
matter of what is "obviously different". I base decisions like this
on what I can measure. I have never been able to determine that there
was a difference for this sort of coding guidelines. Bad programmers
are still bad and good programmers are still good no matter what
guidelines you use.

I guess in a hierarchy, you have to do something to keep the coders in
line. Otherwise managers couldn't justify their paychecks.

Rick
 
On Aug 27, 11:45 am, KJ <kkjenni...@sbcglobal.net> wrote:
On Aug 27, 8:16 am, rickman <gnu...@gmail.com> wrote:



On Aug 26, 4:52 pm, KJ <kkjenni...@sbcglobal.net> wrote:

On Aug 26, 3:09 pm, rickman <gnu...@gmail.com> wrote:

If you are using a programmable device that
does not have a global power-on reset signal, then you are either
using a very small device (CPLD) or you are using one I have never
heard of.

I'm sure you've heard of Altera, Stratix, Cyclone, etc. Lattice
sometimes, Xilinx less so.

The point is that a global power on reset for the device is not needed
in order to get the flops and memory into a known state at the point
where configuration ends. I know this has been said several times and
you seem to dispute that for some reason but it is the truth.

This seems to be the crux of the discussion. I posted why I think the
GSR is required to initialize the FFs in FPGAs. Why do you think the
GSR is *not* needed to initialize the FFs in FPGAs?

Forgetting for a moment the set of device I/O pins that are required
to load any bitstream into the device and are basically dedicated I/O
for the bitstream load function, all I'm saying is
1. No *other* external I/O pin is *required* to initialize the FPGA
in order to implement an initial condition on a signal that happens to
be the output of a flip flop or memory (i.e. signal xyz: std_ulogic :=
'1';). If the device (and tool set) supports initial values then flip
flop #234 which contains signal 'xyz' will be set to a '1' at the
moment when the device switches over to user mode and starts
functioning as I've described in the HDL. (Later I'll get into if the
device/tools do not meet the condition).
2. I wouldn't use a device input that performs a device wide reset
because in order to do so the trailing edge of such an input signal
would have to be synchronized to each of the clocks internal to the
device. From a practical standpoint about the only way to get this
would be to disable the clock input to the device until some time
after that device wide reset signal goes inactive. What you're doing
here though is shifting the burden off chip to the PCBA just so you
can take advantage of what it does for you inside the FPGA.
Apparently you see this as something 'free' to be taken advantage of
and in some instances it might be. In general though there will be
some cost to be incurred to implement your requirements. As an
example, any other part on the PCBA that intends to use that same osc
as a clock input had better be able to cope with the clock shutting
off during the period of time when the FPGA is getting reset. What if
it can't cope with it? You'll need multiple clocks (one gated to the
FPGA, one not) and there will now be clock skew between those
two...does the clock skew matter? Yeah, sometimes it does when the
two devices are expected to be receiving the 'same' clock.
First, you have made assumptions about the design to be used in the
FPGA that may or may not be correct for any given design.

The GSR signal does not *have* to be sync'd to the clock in order to
be useful. If you have a FSM which is waiting for an input and you
know for a fact that the input is not asserted on powerup, then you
don't have to worry about the FFs in the FSM all being released on the
same clock cycle. That is one example of when you don't need to sync
the GSR to the clock, there are others.

So as a designer, you have choices.

Regardless, I have already explained how you can use the GSR so that
it *is* sync'd to the system clock. Then the clock does not have to
be held and the GSR utility is "free" consuming no additional
resources. But I still don't get your point? Regardless of how the
FFs in an FPGA are initialized, you still have the problem of how to
synchronize their startup. I think you are glossing over the details
of how this happens. It happens when GSR is released. Check the data
sheets for the parts you are using. If that doesn't work for you,
then you need to make other arrangements. But for a large percentage
of designs, the GSR can be used to both set the initial state and to
synchronize the startup of the chip. If this is not done by the GSR
signal, what does control the startup timing of the chip?


You can argue that putting a reset signal in all of the code now chews
up some logic resources (it does) and can hurt clock cycle performance
(it could) but I've only found this to be any sort of issue on designs
where *everything* gets reset (i.e. control and data path) for no
reason instead of just resetting the much more limited set of control
signals and state machines that need it.
You keep saying stuff like this but it is not true. The global reset
is "global" and dedicated. It is *allways* used. You have two
choices, control it, or let it default to initializing all of the FFs
to zeros.


So the bottom line here is that there is a cost to be paid for
using the device wide reset pin and that cost could be evaluated on a
design by design basis. While there can be a logic/performance cost
to not using it, I've found that simply resetting only what needs to
be brings that cost down to 0 in many cases (i.e. when the input to
the LUT would not have been used). What I've found so often though is
that using the device wide reset pin costs more than it's worth so I
don't use it.
I don't get what you mean by all of your cost stuff. What you
explained above for cost is the startup of the chip, not the initial
conditions. If you don't control GSR, how do you control the startup
of the chip? When you reset what you want reset, how does that
happen? Doesn't that have a cost?


I do always have some reset input pin to a design (in case there was
some doubt that I only rely on end of config to get me off to the
correct starting point).
Ahhh, how do you control this??? How do you get your freshly
configured design all started on the same clock cycle?

But that reset input goes into the 'D' input
of one flip flop which is the first flop of a shift register and it
goes to the async reset of every flop of that shift register. There
is one of these shift registers for each internal clock in the
device. The outputs of the shift registers become the reset signals
that get distributed to the design. That reset signal though has no
requirement to go active at the end of configuration, it need only go
active when something on the board decides that the FPGA needs to be
reset.
This is using a *LOT* of resources that may be unnecessary. I'm not
clear on why you use multiple outputs from the shift register. I use
one output as the async reset to the entire design. This is then
mapped to the GSR signal by the software and my entire design is
released from reset synchronous to the clock and on the same clock
cycle. Since it uses the GSR signal, the chip ORs its own end of
configuration reset in with the one from the shift register and all is
right with the world.

Yes, if you have multiple clocks in the design, then this is not
viable. But I seldom have multiple clocks that require this sort of
reset release. Instead I try to use a single clock throughout the
chip and sync other clock domains at the I/O point. This eliminates
the need for multiple clock domains in most cases.
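
A minimal sketch (hypothetical names) of syncing an input at the I/O
point with a two-stage synchronizer so the core stays in one clock
domain:

library ieee;
use ieee.std_logic_1164.all;

entity input_sync is
  port (
    clk      : in  std_logic;
    async_in : in  std_logic;   -- from the foreign clock domain
    sync_out : out std_logic);  -- safe to use in the clk domain
end entity input_sync;

architecture rtl of input_sync is
  signal meta, stable : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      meta   <= async_in;   -- first stage may go metastable
      stable <= meta;       -- second stage gives it a cycle to settle
    end if;
  end process;
  sync_out <= stable;
end architecture rtl;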


Given that, even if the device/tools doesn't happen to support initial
values I wouldn't need to change anything in the RTL code as a
result. What that would mean is that there would be a requirement now
on this reset input to be active for some period of time after the
device is configured. Using the device wide reset you wouldn't change
your code either so there is no (dis)advantage either way from the
perspective of writing code.
Except that your solution is using a lot more of the chip and
especially the routing resources.


If you want me to say it again... The configuration memory is loaded
from the config stream. This uses a significant number of transistors
and directly controls static signals in the logic elements and routing
matrix. Once configured, these elements do not change. The only
exception to this that I am aware of are the config elements in the
LUTs of a Xilinx device when used as a shift register. Otherwise they
are static and their construction is very different from the FFs in
the logic elements.

I'm not quite sure what you're getting at here, but it seems that
you're missing the fact that the configuration bitstream also contains
initial value contents of flops and memory (How do you think a read
only memory would get implemented? Discrete logic would work, but for
performance you might want to take advantage of the internal RAM
blocks that the FPGA has).
The configuration bitstream *doesn't* contain initial values for FFs.
You keep referring to memory. Memory is not the same as the logic
element FFs. Memory can be initialized by the configuration stream.
The LUTs can be initialized by the configuration stream. But the FFs
in the logic elements can only be initialized by the GSR. As I have
tried to explain, the GSR is ***always*** a part of release from
configuration. So there is no need for the extra transistors inside
the chip to make the FF state part of the configuration stream. If
you don't believe me, ask Peter Alfke. I bet he can clear this up.

I am happy to admit that I could be wrong about this. But I am very
confident I am not. I would be happy to hear Peter set the record
straight.


The FFs are initialized by the GSR which is routed to the set/reset
under control of the config memory. This logic *has* to exist in
order for the GSR to function even if you don't use it. Then to add
additional logic that will set the initial state of the FF on
configuration would be superfluous.

There is no additional logic to set the initial state of the FF on
configuration. It simply changes some zeros to ones in the
configuration bitstream.
So how do the zeros and ones in the bitstream get into the FFs???
That would require that the FF have a mux on the D input and that the
clock have a mux to clock it from the bitstream clock. Just think of
how much extra logic that would require on every FF in the design.


Have you seen anything that contradicts this? Can you cite a
reference? Just saying it is "the truth" is not really useful.

I believe I've said way more than just 'the truth' prior to this
post. Refer to the configuration guide for the device you're
interested in, or the Altera reference I provided earlier and/or
Altera's Quartus manual for what it has to say about initial values.
Does any of these docs actually say that the FFs are directly
controlled by the configuration bitstream or does it say that the FFs
can be initialized using the GSR signal with the set/reset *selected*
by the configuration stream???

I think you have been reading between the lines. But then I am more
familiar with the Xilinx parts than the innards of the Altera parts.
Although, I will say that on the ACEX 1K parts, there was no
configuration loading of the FFs. The FFs were ***always*** reset by
the GSR and the tools would invert the use of the signal if you wanted
the initial state to be a 1.

Can you quote something that clearly says what you are saying?

Rick
 
On Aug 27, 11:12 am, Andy <jonesa...@comcast.net> wrote:
On Aug 26, 8:32 am, rickman <gnu...@gmail.com> wrote:

Why anyone would imagine that there is extra logic and configuration
memory to control the initial state of CLB FFs is beyond me. There is
no reason to do it this way and the extra logic required is just
wasted silicon.

Rick

As I said previously, the extra configuration control is to determine
whether the initial/reset condition is '1' or '0' when GSR or local
reset is applied. Given there is only one GSR input and one local
reset input (and they are ORed together to reset the register), how
else do you suppose they might control whether the register is '1' or
'0' upon configuration or reset?

Andy
I like these shorter messages better!

You seem like you understand the internal structure of these devices.
But I can't say I understand your question.

As you said, the GSR signal is or'd with a local async reset input.
This control is ***configured*** to either set or reset the FF. That
is what I am saying. But others are saying that the FF is directly
loaded by the configuration bit stream in the same way that the LUT or
block rams are loaded. This is not the case. The LUTs and memory are
unaffected by the GSR signal but the FFs are controlled.

If you could suppress the GSR signal on configuration, the
configuration bitstream would have no effect on the state of the FFs
in the logic elements. Loading the FFs directly from the
configuration bitstream would require muxes on the D input and the
clock input. This would be a lot of extra logic in the chip for no
purpose since the GSR signal will set or reset every FF in the design
during configuration. You can then choose to use the GSR signal after
configuration for your own purposes or not.

Rick
 
On Aug 27, 8:16 am, rickman <gnu...@gmail.com> wrote:
On Aug 26, 4:52 pm, KJ <kkjenni...@sbcglobal.net> wrote:
Maybe it's too late in the thread to be more succinct, but....

This seems to be the crux of the discussion.  I posted why I think the
GSR is required to initialize the FFs in FPGAs.  Why do you think the
GSR is *not* needed to initialize the FFs in FPGAs?
In 100 words or more...I think a discrete device wide reset pin is not
needed because...

- The documentation for the devices that I commonly use does not in
any way indicate that a device wide reset input signal from some pin
is in any way needed to get the device configured.

- The documentation for the devices that I commonly use states that
the function of the device wide reset pin performs a function that I
do not happen to require in order to complete my design and meet all
function, performance, etc.

- In actual implementations, once configured the part implements the
logic that I've described. When it hasn't for some reason I open a
case with the supplier. The root cause for that failure has never
turned out to be due to the lack of use of the device wide reset.

- Actual implementations also support the first two
statements...again, for the devices that I commonly use.

The FFs are initialized by the GSR which is routed to the set/reset
under control of the config memory.
- For which devices can you reference documentation to support your
statement?
- Have you considered that there may very well be devices existing
today that do not require an external GSR pin to function properly?

 This logic *has* to exist in
order for the GSR to function even if you don't use it.  Then to add
additional logic that will set the initial state of the FF on
configuration would be superfluous.
- If it means avoiding a timing problem on the trailing edge of the
device wide reset relative to a clock, I wouldn't consider that to be
superfluous.

KJ
 
On Aug 27, 11:22 am, rickman <gnu...@gmail.com> wrote:
On Aug 27, 9:57 am, Andy <jonesa...@comcast.net> wrote:



Rick,

You use a search function to review locality of use, we use the
compiler.

But localization is not just about code review! It is also, and even
more importantly IMHO, about code maintenance and the code's ability
to be more easily changed to implement new functionality, or fix bugs
in old functionality, while limiting the effects of those changes to
the intended purpose.

One can design using global signals for everything, and still have a
good design from a locality of use standpoint (just because you
declared a global signal does not mean you used it willy-nilly all
over the place). The problem is that someone reviewing and/or
maintaining such code would have to go to significantly more effort to
verify that you did use locality if you used signals. The locality
advantage of using variables is that you, the compiler, and everyone
else that reads and/or maintains the code knows that locality was not
only intended, it was enforced.

What is clear to each of us is obviously different...

This is the stuff that you keep saying with no real basis.  For one
you keep referring to "global" signals.  Signals are not global.  They
are local to an entity.  The only way another entity has access to
them is through a defined interface.  You are saying that if you don't
use signals inside of processes (which tend to be rather small pieces
of code) then the signals are "global" with all the problems that
creates.

My point is that you are exaggerating the issues of using signals.  I
think I said before that in a real software program, true global
variables can be very easily misused.  But that is totally different
from signals in entities.

Sure using local variables in VHDL sounds like a good idea, but
compared to using signals, it provides very little advantage in a real
situation.  I can very easily view a couple hundred lines of code
using an editor.  There is *no* difficulty.  Using variables to
make the use of an object strictly local to a process instead of the
entity buys you very little.

You want to make it sound like managing signals is a tough thing.  It
is not.

I have worked in situations where there was very little enforced
order.  Yes, that was pretty much chaos.  In that environment it was
100% up to the programmer to produce good code and for it to be
maintainable.  I have worked in environments where there were very
strict methods, like the use of local variables, that were enforced.
I found very little difference in productivity between the two.  Why?
Because there are a million ways to write crappy code and techniques
like using variables as a method of coding regulation is in the
noise.  It just isn't worth much.

I guess we can argue this all day and never come to any agreement.
But no one has yet given any real supporting information to show that
there is a significant advantage to using variables as a means of
coding control.  Everything said here is just arm waving.  It is not a
matter of what is "obviously different".  I base decisions like this
on what I can measure.  I have never been able to determine that there
was a difference for this sort of coding guidelines.  Bad programmers
are still bad and good programmers are still good no matter what
guidelines you use.

I guess in a hierarchy, you have to do something to keep the coders in
line.  Otherwise managers couldn't justify their paychecks.

Rick
You're probably right; variables are just too hard for some people to
use. Maybe those people shouldn't use them.

Agreeing to disagree...

Andy
 
On Aug 27, 12:55 pm, rickman <gnu...@gmail.com> wrote:
On Aug 27, 11:45 am, KJ <kkjenni...@sbcglobal.net> wrote:

On Aug 27, 8:16 am, rickman <gnu...@gmail.com> wrote:

On Aug 26, 4:52 pm, KJ <kkjenni...@sbcglobal.net> wrote:

On Aug 26, 3:09 pm, rickman <gnu...@gmail.com> wrote:

First, you have made assumptions about the design to be used in the
FPGA that may or may not be correct for any given design.

The GSR signal does not *have* to be sync'd to the clock in order to
be useful.  
Probably true, but I view every storage element as having setup and
hold time requirements that must be met on all inputs, and I don't waste
time and effort on the special cases where I might be able to get away
with violating those requirements (with the exception of course of my
reset shift register previously mentioned). If you choose to do
otherwise, in other situations that's your concern.

<snip special case example>

Then the clock does not have to
be held and the GSR utility is "free" consuming no additional
resources.  But I still don't get your point?  Regardless of how the
FFs in an FPGA are initialized,
I thought "how the FFs in an FPGA are initialized" was the point of
this part of the thread...oh well.

you still have the problem of how to
synchronize their startup.  I think you are glossing over the details
of how this happens.  
No, when the device comes out of configuration and is starting up (and
is doing so independently of any input clock to the device), my reset
shift register is outputting a reset signal that is asserted for
several clock cycles due to the initial value specification. Since
the rest of the design is using that signal as their reset input,
everything starts up properly.

It happens when GSR is released.  Check the data
sheets for the parts you are using.  
I'll refer you to...
http://www.altera.com/literature/hb/cyc2/cyc2_cii5v1.pdf (Pages
363-365,375, 384-385, all interesting reading)
http://www.altera.com/literature/hb/cyc2/cyc2_cii51013.pdf (Pages 8-11
as an example)
http://www.altera.com/literature/hb/qts/qts_qii51007.pdf (Pages 40-42
are interesting).

I do always have some reset input pin to a design (in case there was
some doubt that I only rely on end of config to get me off to the
correct starting point).

Ahhh, how do you control this???  How do you get your freshly
configured design all started on the same clock cycle?
I've explained that several times already...including the following
paragraph from the previous post.

But that reset input goes into the 'D' input
of one flip flop which is the first flop of a shift register and it
goes to the async reset of every flop of that shift register.  There
is one of these shift registers for each internal clock in the
device.  The outputs of the shift registers become the reset signals
that get distributed to the design.  That reset signal though has no
requirement to go active at the end of configuration, it need only go
active when something on the board decides that the FPGA needs to be
reset.

This is using a *LOT* of resources that may be unnecessary.  I'm not
clear on why you use multiple outputs from the shift register.  
I only use one output (the last one) from the shift register. Many
designs have multiple clocks so I have one shift register per clock
domain so that each clock domain gets a reset signal that is
synchronized to their clock.

I use
one output as the async reset to the entire design.  
I use it as a synchronous reset, but OK. On benchmarks I've seen some
designs better with sync, some better with async. I haven't been
convinced that one way is always (or usually) better than the
other...that might be device family specific though I admit.

Yes, if you have multiple clocks in the design, then this is not
viable.  
But my method is viable.

But I seldom have multiple clocks that require this sort of
reset release.  
Well at least you do have designs with more than one clock...do you
just punt on those designs? Or do you in fact synchronize to each
clock domain?

Instead I try to use a single clock throughout the
chip and sync other clock domains at the I/O point.  This eliminates
the need for multiple clock domains in most cases.
Most of your cases perhaps...but even there it is 'most' not 'all'.

Whether you need a separate clock domain or not generally depends on
the relative difference there is between two domains and what the
timing requirements of the external devices actually are. The greater
the difference in clock frequency, the more likely that the fast clock
can be used to meet the timing requirements of the slower clock
domain.

Given that, even if the device/tools doesn't happen to support initial
values I wouldn't need to change anything in the RTL code as a
result.  What that would mean is that there would be a requirement now
on this reset input to be active for some period of time after the
device is configured.  Using the device wide reset you wouldn't change
your code either so there is no (dis)advantage either way from the
perspective of writing code.

Except that your solution is using a lot more of the chip and
especially the routing resources.
One shift register per clock domain is a handful of LUTs which is not
a lot of anything. I haven't had routing resource issues since the
early 90s and brand X 3K parts.

The configuration bitstream *doesn't* contain initial values for FFs.
You keep referring to memory.  Memory is not the same as the logic
element FFs.  Memory can be initialized by the configuration stream.
The LUTs can be initialized by the configuration stream.  But the FFs
in the logic elements can only be initialized by the GSR.  
Initialized to a value that is determined by data in the configuration
stream if it is going to implement initial values. If not, then it
would appear that your devices do not support initial value
specification which is the reason for the ongoing talk.

As I have
tried to explain, the GSR is ***always*** a part of release from
configuration.  So there is no need for the extra transistors inside
the chip to make the FF state part of the configuration stream.  
If you don't believe me, ask Peter Alfke.  I bet he can clear this up.
Besides the fact that Peter isn't posting here anymore due to a recent
re-org at X, I doubt that he would care to have any comment on the
workings of competitors.

There is no additional logic to set the initial state of the FF on
configuration.  It simply changes some zeros to ones in the
configuration bitstream.

So how do the zeros and ones in the bitstream get into the FFs???
That would require that the FF have a mux on the D input and that the
clock have a mux to clock it from the bitstream clock.  Just think of
how much extra logic that would require on every FF in the design.
Or perhaps just read the previously mentioned Altera docs for some
insights to their approach. Maybe brand X is better than A, maybe
not...but you're making your own assumptions on how it would have to
be done.

Does any of these docs actually say that the FFs are directly
controlled by the configuration bitstream or does it say that the FFs
can be initialized using the GSR signal with the set/reset *selected*
by the configuration stream???
In either case the configuration bit stream is used to get the flip
flop into the specified state at the end of configuration mode as the
device is entering user mode. Whether it is 'directly controlled' by
the bitstream or 'selected' by the bitstream is irrelevant.

I think you have been reading between the lines.  But then I am more
familiar with the Xilinx parts than the innards of the Altera parts.
Although, I will say that on the ACEX 1K parts, there was no
configuration loading of the FFs.  The FFs were ***always*** reset by
the GSR and the tools would invert the use of the signal if you wanted
the initial state to be a 1.
And they call that technique 'not gate push back', do you find it
offensive or something? Since inverters are free in an FPGA,
complaining about push back is like complaining that synthesis
implemented the DeMorgan equivalent of your logic.

Can you quote something that clearly says what you are saying?

Hopefully I have.

KJ
 
rickman wrote:

So it seems clear to me that the cost of using variables is non-
trivial
true, as is the cost of using:

functions,
procedures
types and subtypes,
type attributes
array attributes
numeric_std.all
I could get by without any of these.

They were all hard to learn,
but once apprehended,
they are harder to leave alone.

and the advantage of using variables is minimal.
maybe true, in an alternate universe where I never happened to
use a debugger,
printf,
trace code,
compare algorithms,
attach 32 little clips to an address bus and wait for trigger


-- Mike Treseler
 
rickman wrote:

in the logic elements. Loading the FFs directly from the
configuration bitstream would require muxes on the D input and the
clock input. This would be a lot of extra logic in the chip for no
purpose since the GSR signal will set or reset every FF in the design
during configuration. You can then choose to use the GSR signal after
configuration for your own purposes or not.
But for example in V5 you can set a different init value for the FF
compared to the set/reset value that can be connected to the GSR signal
in functional mode. The real implementation might be that first the
init0/init1 attributes are set from the configuration to the SR
settings, after that internally GSR is asserted and after that the real
srhigh/srlow attributes are set up.

Once again, this is from the V5 manual:
"The initial state after configuration or global initial state is
defined by separate INIT0 and INIT1 attributes. By default, setting the
SRLOW attribute sets INIT0, and setting the SRHIGH attribute sets INIT1.
Virtex-5 devices can set INIT0 and INIT1 independent of SRHIGH and
SRLOW."

But without the design spec for V5 we can just guess. My understanding
is that the high level logical schematic of a logical element (LUT+FF
etc) is quite different compared to the real transistor level
implementation. Also the FPGAs contain hidden undocumented features all
over the place, for certain customers those features can even be enabled
with special tool patches. So there are many things inside the chip
that are just not documented.
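
For illustration, a sketch (assuming the Xilinx unisim FDRE primitive;
check the library for your device) of how the INIT-versus-SR split
surfaces in code: INIT gives the value the flop holds after
configuration (and on GSR), while the local R pin always clears it
synchronously to '0', so the two values can differ.

library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity init_vs_sr is
  port (
    clk, rst, d : in  std_ulogic;
    q           : out std_ulogic);
end entity init_vs_sr;

architecture rtl of init_vs_sr is
begin
  u_ff : FDRE
    generic map (INIT => '1')   -- '1' out of configuration / on GSR
    port map (Q => q, C => clk, CE => '1', R => rst, D => d);  -- rst drives it to '0'
end architecture rtl;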


--Kim
 
On Aug 28, 1:52 am, Kim Enkovaara <kim.enkova...@iki.fi> wrote:
rickman wrote:
in the logic elements. Loading the FFs directly from the
configuration bitstream would require muxes on the D input and the
clock input. This would be a lot of extra logic in the chip for no
purpose since the GSR signal will set or reset every FF in the design
during configuration. You can then choose to use the GSR signal after
configuration for your own purposes or not.

But for example in V5 you can set a different init value for the FF
compared to the set/reset value that can be connected to the GSR signal
in functional mode. The real implementation might be that first the
init0/init1 attributes are set from the configuration to the SR
settings, after that internally GSR is asserted and after that the real
srhigh/srlow attributes are set up.

Once again, this is from the V5 manual:
"The initial state after configuration or global initial state is
defined by separate INIT0 and INIT1 attributes. By default, setting the
SRLOW attribute sets INIT0, and setting the SRHIGH attribute sets INIT1.
Virtex-5 devices can set INIT0 and INIT1 independent of SRHIGH and
SRLOW."

But without the design spec for V5 we can just guess. My understanding
is that the high level logical schematic of a logical element (LUT+FF
etc) is quite different compared to the real transistor level
implementation. Also the FPGAs contain hidden undocumented features all
over the place, for certain customers those features can even be enabled
with special tool patches. So there are many things inside the chip
that are just not documented.
From the use of SRLOW and SRHIGH I suspect this is not even addressing
the FF state. SR likely refers to the shift register that you get
when using the LUT memory as logic. The LUT memory *can* be set by
the configuration bit stream, but in none of the logic families I have
seen can it be controlled by the GSR signal. Did you find any mention
of what SRLOW, SRHIGH, INIT0 and INIT1 control?

Rick
 
