Global Reset using Global Buffer

  • Thread starter rgamer1981@gmail.com
Hello Group

I've read a lot about resets and I've decided that for my designs, an
asynchronous solution with a synchronous source is a better solution.
No discussions here, this is a personal (almost religious) choice.
Religious choice is an apt description. How else can you justify something
that has zero scientific evidence to support it?

What you are looking for is called a synchronous reset distribution tree.
Google that and you'll find some papers on it on Cliff Cummings' web site. It
is the only way to distribute an asynchronous assert/synchronous deassert
reset without skew.
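In code, that scheme typically boils down to a small synchronizer per clock
domain. A minimal Verilog sketch, assuming an active-low reset and
illustrative names (not from any vendor library):

```verilog
// Minimal sketch of an async-assert / sync-deassert reset bridge,
// instantiated once per clock domain.
module reset_sync (
    input  wire clk,        // destination clock domain
    input  wire rst_n_in,   // raw asynchronous reset, active low
    output wire rst_n_out   // asserts asynchronously, deasserts on clk
);
    reg [1:0] sync_ff;
    always @(posedge clk or negedge rst_n_in)
        if (!rst_n_in)
            sync_ff <= 2'b00;              // assert immediately, no clock needed
        else
            sync_ff <= {sync_ff[0], 1'b1}; // deassert after two clean clk edges
    assign rst_n_out = sync_ff[1];
endmodule
```

Balanced buffering of rst_n_out (the "tree" part) is what removes the
deassertion skew between flops.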

Global resources are used when you have to deal with fast signals. Power-on
reset is very, very slow. Don't waste resources on it.

Remember that the reset system only has two requirements.

1) Force the chip into a known good state while the reset button is
pressed.

2) Do nothing while the reset button is not pressed.

Most designers obsess over making sure the first condition is met while
barely considering the second one. This is odd because if you screw up
either one then your product will fail. Which one should keep you awake at
night?


Your product will spend 99.999999% of its power-on time running and
susceptible to ESD-induced resets, but it is only possible to have a
power-on reset failure during the 0.000001% of the time that it is powering up.

Modern digital tool flows can easily catch, in the RTL phase, design errors
that would prevent a chip from resetting. Simulating ESD events is a lot
harder, and most of that is done by QA testing.


These failures are asymmetric. If an ESD event can get into your chip and
change the state of a flop, then your product is crap. If your power-on reset
fails to reach a flop, it will simply take on the next state value provided
by the mission-mode logic. What will that be? It will almost always be the
same as the reset state.

The power-on reset system has a good deal of redundancy. Everybody first
puts in a power-on reset system and then adds mission-mode logic that
backs up the power-on reset with mission-mode soft reset systems. You
can forget to connect a flop into the power-on system and it will likely
still work.

It is almost impossible to screw up the reset system so that your product
fails to power up AND to do so in a way that your verification suite won't
catch it.

But it is very easy to have an ESD entry path and not catch it till you get
the customer returns. You should first worry about preventing phantom
resets, and after you have solved that, then worry about getting power-on
reset into your chip.


If you do that then you will never run an asynchronous reset signal down to
the core flops. Read Xilinx WP-231.


John Eaton

---------------------------------------
Posted through http://www.FPGARelated.com
 
jt_eaton <1590@embeddedrelated> wrote:
Hello Group

I've read a lot about resets and I've decided that for my designs, an
asynchronous solution with a synchronous source is a better solution.
No discussions here, this is a personal (almost religious) choice.

Religious choice is an apt description. How else can you
justify something that has zero scientific evidence to support it?

What you are looking for is called a synchronous reset distribution tree.
Google that and you'll find some papers on it on Cliff Cummings' web site. It
is the only way to distribute an asynchronous assert/synchronous deassert
reset without skew.
For FPGAs, power up is always asynchronous, and you have to have
some way to get from configuration reset to operation.

Global resources are used when you have to deal with fast signals.
PowerOn reset is very very slow. Don't waste resources on it.
Hopefully the FPGA designers did use (not waste) resources on it.

Remember that the reset system only has two requirements.

1) Force the chip into a known good state while the reset button is
pressed.

2) Do nothing while the reset button is not pressed.

Most designers obsess over making sure the first condition is met while
barely considering the second one. This is odd because if you screw up
either one then your product will fail. Which one should keep you awake at
night?
If you can stop the clock until the system is out of reset and ready,
then there should be no problem. Otherwise, yes, you do have to be
very careful about the transition.

-- glen
 
On Tue, 18 Sep 2012 07:31:18 -0700 (PDT)
Carl <carwer0@gmail.com> wrote:

An article on the local/global and synchronous/asynchronous aspects
of resets:

http://www.fpga-dev.com/resets-make-them-synchronous-and-local/
Something that always seems to be missing when I read articles like
this is any mention whatsoever of the FPGA’s built-in reset
capabilities. For Xilinx Spartan FPGAs anyway, this is called
“GSR” (global set/reset). It’s certainly global, and I believe it’s
asynchronous. I don’t know what kind of skew it typically has, but it
has one wonderful benefit when it’s usable: it’s absolutely 100% free.
The GSR network is built into the chip whether you use it or not, so
using it to reset all your FFs costs absolutely nothing in terms of
LUTs and routing, something no other solution can claim. When dealing
with a nearly-full FPGA, that’s a very attractive property.

As far as issues of different FFs leaving reset on different clock
cycles are concerned, could one not solve these issues by asserting GSR
for long enough to reset all FFs, deassert it, then activate the clocks
afterwards? Using BUFGCEs (in Xilinx parlance) one could do this with a
very small chunk of logic, far less than the overhead of building
synchronous resets by routing the reset signal around the chip and then
feeding it into all the LUTs that make up feedback loops (increasing
LUT count in, say, one sixth of such cases where the LUT was already
populated by six inputs). Driving the ENABLE input of a canned
oscillator off a shift register of sufficient length clocked by the
internal configuration clock would probably achieve something similar.
Am I missing something about this proposed solution?
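For what it's worth, the gating half of that proposal might look something
like this. This is a sketch only: the BUFGCE port names follow Xilinx library
conventions, but cfg_clk (a free-running configuration-derived clock) and the
counter width are assumptions, not from the article:

```verilog
// Hypothetical sketch: hold the fabric clock off until GSR has had time
// to settle, then release it through a BUFGCE.
reg  [3:0] settle_cnt = 4'd0;
wire       clocks_on  = &settle_cnt;     // all ones => settle time elapsed

always @(posedge cfg_clk)
    if (!clocks_on)
        settle_cnt <= settle_cnt + 4'd1; // counts up once after configuration

BUFGCE u_gate (
    .I (clk_raw),    // raw clock from pin or clock manager
    .CE(clocks_on),  // clock enable, released only after the delay
    .O (clk_fabric)  // clock seen by the core flops
);
```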

I really wish someone would write one of these articles about system
reset that acknowledges the existence of GSR and compares it to other
reset mechanisms and points out the advantages and disadvantages of
each. Unfortunately “someone” won’t be me, because I’m nowhere near an
FPGA expert and couldn’t hope to give it a proper treatment. Maybe
someone has written such an article, but I haven’t seen it.

Chris
 
On Wed, 19 Sep 2012 00:15:52 -0700, Christopher Head wrote:

On Tue, 18 Sep 2012 07:31:18 -0700 (PDT)
Carl <carwer0@gmail.com> wrote:

An article on the local/global and synchronous/asynchronous aspects of
resets:

http://www.fpga-dev.com/resets-make-them-synchronous-and-local/

Something that always seems to be missing when I read articles like this
is any mention whatsoever of the FPGA’s built-in reset capabilities. For
Xilinx Spartan FPGAs anyway, this is called “GSR” (global set/reset).

As far as issues of different FFs leaving reset on different clock
cycles are concerned, could one not solve these issues by asserting GSR
for long enough to reset all FFs, deassert it, then activate the clocks
afterwards?
Yes. Perhaps better, activate clock enable(s) afterwards.

Either way, you may need a hierarchy of clock activation; after reset,
you don't want your main clock generator to wait for several cycles of a
(stopped) clock...

- Brian
 
On Wed, 19 Sep 2012 00:15:52 -0700, Christopher Head wrote:

On Tue, 18 Sep 2012 07:31:18 -0700 (PDT)
Carl <carwer0@gmail.com> wrote:

An article on the local/global and synchronous/asynchronous aspects of
resets:

http://www.fpga-dev.com/resets-make-them-synchronous-and-local/

Something that always seems to be missing when I read articles like
this is any mention whatsoever of the FPGA’s built-in reset capabilities.
For Xilinx Spartan FPGAs anyway, this is called “GSR” (global
set/reset).

As far as issues of different FFs leaving reset on different clock
cycles are concerned, could one not solve these issues by asserting GSR
for long enough to reset all FFs, deassert it, then activate the clocks
afterwards?

Yes. Perhaps better, activate clock enable(s) afterwards.

Either way, you may need a hierarchy of clock activation; after reset,
you don't want your main clock generator to wait for several cycles of a
(stopped) clock...

- Brian
Guys,

You don't need to stop the clock at all. The reset deassert to clock edge
spec only applies when you are trying to change state. So if you reset
the flop to 0 and have a 1 sitting on the D input then you must meet
timing or it will go metastable. If you have a 0 on the D input then
it doesn't matter if you meet timing. The flop will stay at 0.


Most designs already do this. When an ethernet interface comes out of
reset it doesn't suddenly start ethernetting. It waits for the CPU to write
setup and other data before it does anything. That means that once all of its
flops are in the reset state, they all have that reset state applied to their D
inputs until the first CPU write.

You can deassert an asynchronous reset at any time as long as your
asynchronous reset system is backed up with a synchronous one provided
by the mission-mode logic. You do have to be careful with the CPU or any
other block that self-starts, but that's easy to deal with.
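A sketch of the kind of flop being described (signal names are illustrative):
it resets to 0, and because its enable stays low until the first CPU write,
its D input effectively holds the reset value, so the reset-removal timing to
the clock edge cannot change its state:

```verilog
// Illustrative control flop: reset-removal timing is a don't-care here,
// because the flop cannot change state until cpu_wr_en asserts.
reg ctrl;
always @(posedge clk or negedge rst_n)
    if (!rst_n)
        ctrl <= 1'b0;        // asynchronous reset state
    else if (cpu_wr_en)
        ctrl <= cpu_wdata;   // first CPU write; only now can state change
    // otherwise ctrl reloads 0, the same as its reset value
```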



John Eaton
 
On Tuesday, September 18, 2012 9:02:29 PM UTC-5, jt_eaton wrote:
If you do that then you will never run an asynchronous reset signal down to the core flops. Read Xilinx WP-231.
I've read WP-231, and it is best taken with a big dose of salt, like most generalizations. Please note the date, and the applicable series of FPGAs.

If we are trying to avoid religion, "never" is a very long time and is almost certainly untrue.

There are many valid reasons to use an asynchronous reset, and many valid reasons to use a synchronous reset. Let's just leave it at that, without all the pontifications invoking "never" (or "always").

Besides, the OP's Q has nothing to do with async vs sync reset; both usually need to meet timing anyway.

Andy
 
I've read WP-231, and it is best taken with a big dose of salt, like most
generalizations. Please note the date, and the applicable series of FPGAs.

Andy
It's dated 2006. While that is considered old in some parts of this
industry, it is very modern when it comes to reset systems. The global async
assert/sync deassert reset that is prevalent throughout the IC world has
been around since the 1980s and was derived from the board-level reset
systems used back in the days of disco.

The important thing about WP-231 is that it was written by the engineers
that are the experts on the silicon and tools, and it provides very specific
examples of why decades-old design practices are not optimal for today's
deep submicron processes.

A lot of advice that you hear from component designers is out of date and
should be ignored, but when the people who wear bunny suits at work give
you advice then you really need to listen.


John Eaton
 
On Wed, 19 Sep 2012 09:50:35 -0500, jt_eaton wrote:

On Wed, 19 Sep 2012 00:15:52 -0700, Christopher Head wrote:

As far as issues of different FFs leaving reset on different clock
cycles are concerned, could one not solve these issues by asserting
GSR for long enough to reset all FFs, deassert it, then activate the
clocks afterwards?

Yes. Perhaps better, activate clock enable(s) afterwards.

Either way, you may need a hierarchy of clock activation; after reset,
you don't want your main clock generator to wait for several cycles of a
(stopped) clock...

- Brian


Guys,

You don't need to stop the clock at all. The reset deassert to clock
edge spec only applies when you are trying to change state.
I know. But I was being a little facetious, after one occasion when I
shot myself in the foot with a synchronous reset for a DLL...

- Brian
 
jt_eaton <1590@embeddedrelated> wrote:

(snip)

You don't need to stop the clock at all. The reset deassert to clock edge
spec only applies when you are trying to change state. So if you reset
the flop to 0 and have a 1 sitting on the D input then you must meet
timing or it will go metastable. If you have a 0 on the D input then
it doesn't matter if you meet timing. The flop will stay at 0.
More specifically, if your FFs have a clock enable input, and you
can be sure that they are not enabled as they come out of reset,
then you don't have to worry about the clock timing.

Most designs already do this. When an ethernet interface comes out of
reset it doesn't suddenly start ethernetting. It waits for the
CPU to write setup and other data before it does anything.
That means that once all of its flops are in reset state then
they all have that reset state applied to their D inputs until
the first cpu write.
There has to be at least one FF with the enable determined through
outside logic, but that should be usual in the case of a processor.

You can deassert an asynchronous reset at any time as long as your
asynchronous reset system is backed up with a synchronous one provided
by the mission mode logic. You do have to be careful with the cpu or any
other block that self starts but thats easy to deal with.
-- glen
 
On 19/09/2012 15:38, jonesandy@comcast.net wrote:
On Tuesday, September 18, 2012 9:02:29 PM UTC-5, jt_eaton wrote:
If you do that then you will never run an asynchronous reset signal down to the core flops. Read Xilinx WP-231.

I've read WP-231, and it best be taken with a big dose of salt, like most generalizations. Please note the date, and the applicable series of FPGAs.

If we are trying to avoid religion, "never" is a very long time and is almost certainly untrue.

There are many valid reasons to use an asynchronous reset, and many valid reasons to use a synchronous reset.
Right!

http://microelectronics.esa.int/asic/fpga_001_01-0-2.pdf

Skip to section 3.1.

In a nutshell, the NASA WIRE satellite was lost due to the use of a
synchronous reset.

Hans
www.ht-lab.com
 
In article <47cbefa7-3c40-4131-974e-0a1a09b61d3f@googlegroups.com>,
Carl <carwer0@gmail.com> wrote:
An article on the local/global and synchronous/asynchronous aspects of resets:

http://www.fpga-dev.com/resets-make-them-synchronous-and-local/
Okay, long post - this should probably go somewhere else, but here's
my response - it's some ideas that I've often thought of when seeing
advice like this, but never got "pen to paper" as it were.

Thinking about resets is good, and having a good strategy is
very important, but I disagree with a lot of this advice.
And this type of advice has been coming out of the FPGA companies for
a while.

But here's my 4 cents. (Too long for just 2 cents.)

I come from an ASIC background - where logic's cheap. (I know this
is an FPGA newsgroup - bear with me.)

There, for all of our designs we used a global reset strategy, which
ASYNCHRONOUSLY reset FFs.

i.e. in verilog:

Example 1.

reg foo;
always @( posedge clk or negedge reset_n )
  if( ~reset_n )
    foo <= 0;
  else
    foo <= bar;

The reset_n input to the module was generated near the
top level of the chip. At the top level, the main system
reset would have its de-assertion edge synchronized to each
clock domain - effectively creating a per-clock reset
signal. (I'm leaving out a LOT of qualifiers for this,
including control of reset during test, etc. But the
key: the resets are generated at the top level, globally.)

Furthermore, we reset EVERYTHING that could be reset.

The requirements for the reset to work:
1. The input reset needs to be appropriately de-glitched
to avoid false assertions or de-assertions.

2. Its pulse width must be large enough to be captured by
the async input (on the order of nanoseconds).

That's it. You meet the above requirements, you'll safely enter the
reset state. You'll successfully exit the reset state - once the reset
de-asserts AND THE CLOCK is running. If the clock's not running, you'll
stay reset.

The advantage of this approach:

Point 1. - You're always right. You can't hurt anything by resetting
a FF.

Point 2. - Simulating bringup is faster - perhaps MUCH faster. I've
sometimes spent weeks chasing the "sea-of-red" bringups in
simulation - the results of X-pessimism. I've seen all sorts of
kludgy "demeta" tricks in simulation to work around these issues - and
they're just that - kludge testbench work that usually offers nothing
toward the reliability of the design.

Point 3. - Reducing X's in simulation (by resetting everything) also
reduces the chances of X-optimism. A much less frequently occurring
problem, but one with a larger penalty if missed (RTL sim vs. gates
mismatch).

The disadvantages - it's not optimal. You may waste resources
distributing the resets, and meeting the timing of the
reset recovery path. Note that all modern STA tools have
no trouble checking this timing.

Ask any business manager if this is the correct priority:

1. Correct, and Reliable.
2. Faster debug and verification.
3. Optimal

You won't get much argument. And that 3rd objective will probably be
MUCH lower than the first two.

The amount of logic "wasted" on this strategy, is tiny in this deep
sub-micron age. We even used the same clock tree methodology for
distributing the resets. Wasteful - sure. But who cares?

After a while of that I came over to FPGAs. FPGAs haven't been
designed to waste a global low-skew route on this async reset.
So, I switched to using synchronous resets:

Example 2.

reg foo;
always @( posedge clk )
  if( ~reset_n )
    foo <= 0;
  else
    foo <= bar;

Again the reset is generated globally, with its de-assertion edge
synchronized to the respective clock.

The requirements for reset to work : same as above PLUS:

3. Your clock must be running to enter reset.

A normally simple-to-meet requirement, but can lead to perplexing debug
in areas around non-free running clocks (i.e. around PLLs, and other
clock-management logic).

Again, I reset EVERYTHING that can be reset.

This strategy is not too different than the first. Some extra
head-scratching to be sure we're okay with the extra reset requirement.
But in general it works.

The advantages and disadvantages of this approach are similar to the
first case. The back end tools must meet timing on all reset
paths. All FPGA tools will check this - the reset's basically just
like any other signal, albeit usually with a larger fanout.

To alleviate the large fanout - if it's even a problem - I can just
pipeline it (at the top level - globally!) a few times, then allow
the tool to do any necessary register duplication and perhaps register
balancing. The back-end tools have solved this problem for me.
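A sketch of that pipelining trick, at the top level (register names here are
illustrative):

```verilog
// Pipeline the already-synchronized reset a few stages; the synthesis
// tool can then duplicate and retime these registers to absorb the fanout.
reg [2:0] rst_pipe = 3'b111;
always @(posedge clk)
    rst_pipe <= {rst_pipe[1:0], rst_top};  // rst_top: globally generated reset
wire rst_local = rst_pipe[2];              // fan this out to the logic
```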

This leads to my next evolution in resets: "selective" resetting.

As many designers are apt to point out, EVERYTHING does not need
to be reset. This is true. Datapath logic is probably safe to leave
un-initialized.

Example 3.

reg [ 7 : 0 ] foo_data;
reg foo_control;
always @( posedge clk )
  if( ~reset_n )
    begin
      foo_control <= 0;
      //foo_data <= 0;
    end
  else
    begin
      foo_control <= bar;
      foo_data <= some_data;
    end

Here the control signal "foo_control" is reset. The datapath element
"foo_data" is not. And I usually code it just as above, with the
reset clause explicitly commented out.

The advantage of this approach - we can pick up a lot of optimizations
over resetting everything, especially in targeting delay shift
registers (i.e. SRL16s in Xilinx).

But the disadvantages - slower coding, and the risk of missing a needed
reset. When I'm coding a low-level module - possibly one that's going
to be shared across many FPGAs - I need to stop and think a bit more on
what must be reset, and what's safe to leave uninitialized. Do I know
all the use cases of this module? Will it be safe in all cases to not
reset this signal? This is often a hard requirement to decide. You
often don't have all the information at the time of the coding to
make this decision.

So often I punt and leave the reset in - especially when
I bring up first testbenches and see that dreaded "sea-of-red".
If I suspect it's got ANYTHING to do with that uninitialized signal -
that reset's going back in. I can always go back in and re-comment
out the reset assignment, to be more optimal.

But once the design is right, how often will a designer be given the
time to go back in and make it more "optimal"? Probably only when
pushed into a corner.

The last style, often touted by FPGA companies, basically boils down
to - don't reset at all. The design comes up out of FPGA configuration
in a known, default state. Often the synthesis tools can be configured
to adjust this default state with init values:

Example 4.
reg [ 7 : 0 ] foo_data = 0;
reg foo_control = 0;
always @( posedge clk )
  begin
    foo_control <= bar;
    foo_data <= some_data;
  end

I.e. the synthesis tool uses the verilog init value as the
configuration-time value. Here, resets are the exception, not the rule.
Only reset those very few elements that explicitly require it, and must
see that edge.

Advantages of this: an optimal design.

The biggest drawback... FPGA configuration is usually NOT equal to
RESET. Most of my boards have at least some kind of push-button reset,
probably another controlled by some sort of processor. This reset is
a distinct operation - quite separate from FPGA "CONFIG".

Plus, well, I'm very hesitant with this approach. Reset and
initialization problems can lie hidden for a long time. During bringup,
often a lot of things are in flux, and a random init problem is often
written off as some random event, and ignored. "Hey I just ran it
again it worked this time."

So the problem lies hidden much longer. Months down the line, when
my manufacturing line (or worse - customer) comes back to me and
says, "Hey, I sometimes have to reset this thing twice to get it to
work", I'd be MUCH more comfortable in that situation knowing that
I had a solid reset in my FPGA design.

"Correct and Reliable" has priority over "Optimal" for me.

Regards,

Mark
 
Mark,

Nice write-up! In the end, if you count up all the time (money) spent determining AND VERIFYING what does not NEED to be reset, you'd have been better off resetting everything to start with, and only pulling out the reset where it kills your utilization (by kill, I don't mean raises util from 85% to 86%, I mean it doesn't fit!).

The pipe-lining/retiming trick for handling fanout on the synchronous reset also works great for synchronously deasserted asynchronous resets!

When coding a process that has some registers reset and others not, if you use the familiar "reset ... elsif clock ..." structure, your fanout on reset may not be reduced, because every register that was spared a reset added a clock enable (or another input to existing clock-enable logic).

When I have both reset and non-reset signals/variables in the same process (e.g. ram arrays, etc.), I use the following structure to avoid clock disable on non-reset signals/variables:

process (clk, rst) is
begin
  clocked: if rising_edge(clk) then
    -- logic assignments here
  end if clocked;
  reset: if rst then
    -- reset assignments here
  end if reset;
end process;

Although this style works fine for the general case (when everything is reset), I do not use this style unless I actually need it. The reason is, in the conventional reset-elsif-clock structure, you will get synthesis warnings (from Synplify at least) about feedback multiplexers on non-reset registers. You won't get those warnings if you use the method above. This also makes the above approach rarer, so it stands out for the reviewer, where it gets extra attention to make sure that everything that should (can?) be reset is reset.

The same clock enable problem also happens when you use an "if reset else logic" structure for a synchronous reset. Nothing after that "else" executes when reset is true, which results in extra clock enable logic on non-reset registers. Use a similar approach to the above, by moving the (synchronous) reset assignments to their own if-statement just before the end of the clocked if-statement:

process (clk) is
begin
  if rising_edge(clk) then
    -- logic assignments here
    if rst then
      -- reset assignments here
      -- to avoid clock enables
      -- on non-reset registers
    end if;
  end if;
end process;

Strange how I didn't see any of this in that white paper written by the "experts"...

Andy
 
Awesome, thanks! Seems to me this makes it a lot easier to reason
about: you just give every variable or signal an initial value which is
loaded during bitstream load, and let the configuration-clock-based GSR
handling deal with the rest. Make sure PLLs and DCMs start up after GSR
is deasserted and that everything uses a clock downstream of a PLL or
DCM (and perhaps introduce some BUFGCEs), and everything should be OK - at
least, for my application it's easy enough to do things that way. Clock
domain crossings might come up in either order, but that's pretty easy
to deal with.

Chris

 
