Explicitly setting a variable to undefined

Guest
Hello,
my question is probably best explained with a piece of code (the snippet is Verilog, but the question should be mostly language-agnostic):

reg memory_access;
reg [1:0] memory_access_size;

always @ (posedge clk) begin
    if (clk_en && should_decode_input) begin
        memory_access_size <= 2'bxx; // <---

        case (some_input_data)
            ACTION_1: begin
                memory_access <= 1;
                memory_access_size <= 2'b10;
            end
            ACTION_2: begin
                memory_access <= 1;
                memory_access_size <= 2'b01;
            end
            ACTION_3: begin
                memory_access <= 0;
            end
        endcase
    end
end

(the actual scenario is a Thumb instruction decoder)

My question is about the line marked with "// <---". If I understand the semantics correctly, including this line should make the compiler's job easier, by basically saying "unless I assign a new value to memory_access_size, do whatever with it". Thus, in the ACTION_3 case, it doesn't have to care about preserving its previous value (which is no longer relevant), presumably reducing the logic complexity.

I'm wondering whether this really is the case, in particular:
- Will this actually lead to more efficient logic realization with generally available (Altera, Xilinx) tools?
- Does this introduce any caveats to be aware of?
- Would you generally consider this a good coding practice?

Thanks in advance
-M
 
In article <f2fbb4be-84a9-48cc-8210-1c9ef0830ea9@googlegroups.com>,
<minexew@gmail.com> wrote:
[snip]

I'm wondering whether this really is the case, in particular:
- Will this actually lead to more efficient logic realization with generally available (Altera, Xilinx) tools?
- Does this introduce any caveats to be aware of?
- Would you generally consider this a good coding practice?

I *AGGRESSIVELY* avoid introducing Xs, and work hard to eliminate ANY sources of
X in my design. Google the search terms "X-optimism" and "X-pessimism".
Any logic optimization's going to be TINY in the grand scheme of things.
The hazards waiting to bite you are not worth it.

I'd "eye" optimize it, by just assigning it to one of the other values
you've already assigned in the other qualifications. It'll likely
come out darn near equivalent.
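
Concretely, Mark's suggestion amounts to something like this sketch against the original snippet (2'b01 is an arbitrary pick among the values the decoder already produces):

```verilog
// Sketch: reuse an already-assigned value instead of X in the "don't care" arm
ACTION_3: begin
    memory_access <= 0;
    memory_access_size <= 2'b01; // value is irrelevant here, but never X
end
```

The mux feeding memory_access_size then selects among constants only, which synthesis will likely optimize about as well as a true don't-care.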

Regards,

Mark
 
On Wednesday, May 25, 2016 at 20:16:39 UTC+2, Mark Curry wrote:
I *AGGRESSIVELY* avoid introducing Xs, and work hard to eliminate ANY sources of
X in my design. [snip]

Isn't that exactly the point, though? At that point the variable really is undefined - and if any other code assumes it has a defined value, that is a bug. If the X ends up propagating where it shouldn't, it means there is something wrong with the logic.

I'll look into the terms you mentioned. They seem to be what I was looking for, but couldn't find.

Thank you,
M.
 
In article <10bd1028-a05c-45aa-a9db-ed43392e0b14@googlegroups.com>,
<minexew@gmail.com> wrote:
[snip]

Isn't that exactly the point, though? The variable at that point really becomes undefined
- and if any other code assumes it to be defined, it is a bug. If the X ends up
propagating where it shouldn't, it means there is something wrong with the logic.

There's finding bugs, and there's creating the most optimal design.
The second goal is way behind the first, IMHO. I'll not introduce
X's to get a more optimal design. Ever.

As a matter of fact I go extensively out of my way to avoid hidden bugs
at sometimes significant costs to Quality of Results.

Xilinx likes to preach "Don't reset everything. Reset should be the
exception, not the rule." My design style is exactly the opposite:
reset everything (to avoid initialization Xs), with some exceptions.
I'm of the opinion that Xilinx is hopelessly wrong in this regard,
and is advocating reckless guidance.
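
As a sketch of that "reset everything" style applied to the original snippet (rst is a hypothetical added reset input, not in the original code):

```verilog
always @ (posedge clk) begin
    if (rst) begin
        // every register gets a defined value: no initialization Xs
        memory_access      <= 1'b0;
        memory_access_size <= 2'b00;
    end else if (clk_en && should_decode_input) begin
        case (some_input_data)
            ACTION_1: begin memory_access <= 1'b1; memory_access_size <= 2'b10; end
            ACTION_2: begin memory_access <= 1'b1; memory_access_size <= 2'b01; end
            ACTION_3: begin memory_access <= 1'b0; end
        endcase
    end
end
```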

As to finding bugs, your mileage may vary. I avoid introducing X's. I don't
think they buy me anything actually, and may hinder. Read the papers -
there's a lot out there. There's no easy answer.

Regards,

Mark
 
On Wednesday, May 25, 2016 at 20:51:33 UTC+2, rickman wrote:
[snip]

I'm not sure what you intend. I think you are saying the
"some_input_data" can have values other than the defined cases in normal
operation. But they should only occur at times the following logic
won't care. I would normally say check your input data, but it seems
you are allowing undefined states.

memory_access_size only takes the values 'b01 or 'b10. I would assign a
value of say 'b11 and have the downstream logic check for that. If the
downstream logic is using that input when it is in the wrong value it
can explicitly throw a flag. Can you define those times easily?

--

Rick C

Maybe it isn't as obvious as I hoped: in the case of ACTION_3, no memory access will take place, and no other code should attempt to make decisions based on this memory access's size (because there isn't one!).
Also, from the synthesizer's point of view, the logic for setting memory_access_size should become simpler.

I believe, however, that I'm starting to understand one of the deeper problems with X's. I didn't realize that

if (1'b1 == 1'bx)

will evaluate to X, which the if-statement then treats as false, instead of immediately aborting the simulation with an error, which would be the right thing to do IMO.
Of course, it's not as simple as it may seem, because an expression like (1'b0 && 1'bx) is perfectly well-defined. I'm not even sure whether determining the validity of such expressions would be trivial.
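
For reference, this 4-state behavior can be seen in a tiny simulation sketch like the following (=== is Verilog's case equality, which compares X bits literally):

```verilog
module tb;
    initial begin
        if (1'b1 == 1'bx)  $display("== taken");   // 1 == x yields x; branch not taken
        if (1'b1 === 1'bx) $display("=== taken");  // case equality yields 0; not taken
        if (1'bx === 1'bx) $display("x matches");  // case equality matches X literally
        $display("%b", 1'b0 && 1'bx);  // 0: a 0 operand decides the AND regardless of x
        $display("%b", 1'b1 && 1'bx);  // x: here the unknown does propagate
    end
endmodule
```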

Now I see how X-values can mask actual errors in the design and I'll probably start to avoid them too.

-M.
 
On 5/25/2016 2:30 PM, minexew@gmail.com wrote:
[snip]

Isn't that exactly the point, though? The variable at that point really becomes undefined - and if any other code assumes it to be defined, it is a bug. If the X ends up propagating where it shouldn't, it means there is something wrong with the logic.

I'll look into the terms you mentioned. They seem to be what I was looking for, but couldn't find.

I'm not sure what you intend. I think you are saying the
"some_input_data" can have values other than the defined cases in normal
operation. But they should only occur at times the following logic
won't care. I would normally say check your input data, but it seems
you are allowing undefined states.

memory_access_size only takes the values 'b01 or 'b10. I would assign a
value of say 'b11 and have the downstream logic check for that. If the
downstream logic is using that input when it is in the wrong value it
can explicitly throw a flag. Can you define those times easily?
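
A sketch of that sentinel idea (SIZE_NONE and size_error are made-up names, not from the original code):

```verilog
localparam SIZE_NONE = 2'b11;  // never a legal access size

// in the decoder, ACTION_3 would assign the sentinel:
//     memory_access      <= 0;
//     memory_access_size <= SIZE_NONE;

// downstream, consuming the sentinel raises a flag you can watch in simulation:
always @ (posedge clk)
    if (memory_access && memory_access_size == SIZE_NONE)
        size_error <= 1'b1;
```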

--

Rick C
 
On Wednesday, May 25, 2016 at 1:58:21 PM UTC-4, min...@gmail.com wrote:
<snip>Thus, in the ACTION_3 case, it doesn't have to care about preserving
its previous value (which is no longer relevant), presumably reducing the
logic complexity.

That's an assumption. When you go to validate that assumption, I think you'll find that there is no benefit; you'll likely get the exact same output binary file.

I'm wondering whether this really is the case, in particular:
- Will this actually lead to more efficient logic realization with generally available (Altera, Xilinx) tools?

No (or at least it didn't use to with Altera). What Quartus would do is replace the don't-care (and any other metavalues) with a 0. This gets reported in the transcript window as a note.

- Does this introduce any caveats to be aware of?

Not really. If it simulates properly (and it should), then it will work the same in real hardware.

- Would you generally consider this a good coding practice?
Here are some yeas and nays:
- No, because it does not produce any actual benefit.
- Yes, if you try it out and find that the latest releases of the tools from whichever vendors' parts you tend to target do actually benefit.
- Maybe, if you are producing code that is intended for others to use and you have no idea whether the tools they may use would benefit. In that case though, as long as there is no harm (like the tool erroring out), it's OK, since code that you are providing is typically not meant to be monkeyed around with by the user.

Kevin Jennings
 
On 5/25/2016 3:07 PM, minexew@gmail.com wrote:
[snip]

Maybe it isn't as obvious as I hoped - in the case of ACTION_3, no memory access will take place and no other code should attempt to make decisions based on this memory access' size (because there isn't any!)
Also, from the synthesizer's point of view, the logic for setting memory_access_size should become simpler.

I believe, however, that I'm starting to understand one of the deeper problem with X's. I didn't realize that

if (1'b1 == 1'bx)

will evaluate to false, instead of immediately aborting the simulation with an error, which would be the right thing to do IMO.
Of course, it's not as simple as it may seem, because an expression like (1'b0 && 1'bx) is perfectly valid. I'm not even sure if determining the validity of these expressions would be trivial.

Now I see how X-values can mask actual errors in the design and I'll probably start to avoid them too.

I'm a bit unclear. In your synthesizable code, you don't have a
comparison like this, do you?

I'm not as familiar with Verilog as I am with VHDL. In VHDL there is an
assert statement that can throw an error flag. But I think the issue
is that there will be logic that uses the memory access size, and when
the size is not valid, that logic should not be used. I'm not sure how
you would distinguish those two states. That's the issue: is that
logic in use when there is no memory access? Why can't you define this
in terms of logic and detect it either in your test bench or in the
synthesized code?

--

Rick C
 
On 05/25/2016 10:58 AM, minexew@gmail.com wrote:
[snip]

Other than the creeping red cancer eating your simulation viewer, I
don't think you will get much notice of the X values in a simulation.

Stylistically, I would put a default in the case statement. I have seen
enough state machines go wrong in unpleasant and difficult to identify
ways due to not completely specifying results.

I think that a static value would result in the optimization results you
want without the risks.
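
Put together, BobH's suggestion could look like this sketch (2'b00 is an arbitrary, fully-specified fallback, not a value from the original post):

```verilog
case (some_input_data)
    ACTION_1: begin memory_access <= 1; memory_access_size <= 2'b10; end
    ACTION_2: begin memory_access <= 1; memory_access_size <= 2'b01; end
    ACTION_3: begin memory_access <= 0; memory_access_size <= 2'b00; end
    default:  begin memory_access <= 0; memory_access_size <= 2'b00; end
endcase
```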

Good Luck,
BobH
 
On Wednesday, May 25, 2016 at 8:58:21 PM UTC+3, min...@gmail.com wrote:
[snip]

It could be dangerous for simulation, because
memory_access_size(0) /= '0' will evaluate to True if memory_access_size(0) = 'U'.

But I use this in my testbenches, assigning DataIn <= x"UUUUUUUU" when DataInValid = '0'. Then I check that DataOut never has 'U' or 'X' when DataOutValid = '1'. That way I make sure I never use data which are not valid.
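
The same testbench trick, sketched in Verilog 4-state terms (signal names follow the post; stimulus_word is an invented name; the reduction XOR of a vector is X if any bit is X or Z):

```verilog
always @ (posedge clk) begin
    DataIn <= DataInValid ? stimulus_word : 32'hxxxx_xxxx; // X when not valid

    // reduction XOR of DataOut is X if any bit of DataOut is unknown
    if (DataOutValid && ((^DataOut) === 1'bx))
        $display("ERROR: DataOut has unknown bits while DataOutValid is high");
end
```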
 
On Wednesday, May 25, 2016 at 9:46:51 PM UTC+3, Mark Curry wrote:
[snip]

Xilinx likes to preach "Don't reset everything. Reset should be the
exception not the rule." My design style is exactly the opposite.
Reset everything (to avoid initialization Xs), with some exceptions.
I'm of the opinion that Xilinx is hopelessly wrong in this regard,
and is advocating reckless guidance.

[snip]

They advise it for a reason. In big and complex designs, a big reset network with high fanout dramatically decreases the maximum achievable frequency. I have followed their advice for a long time and I've never had any problems, as long as you properly reset the few signals which really need it. Usually that's some sort of Valid signal and the state of some FSMs.
 
On Saturday, May 28, 2016 at 19:31:19 UTC+2, Ilya Kalistru wrote:
[snip]

They advise it for a reason. In big and complex designs big reset network with high fanout dramatically decrease maximum achievable frequency. I follow that for a long time and I've never had any problems as long as you reset properly that few signals which really need it. Usually it's some sort of Valid signal and the state of some FSMs.

Can't this be solved by a pipelined reset?
With each step you could increase the fan-out significantly.
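
A hypothetical sketch of such a pipelined reset (all names invented; synthesis tools may need a KEEP-style attribute to stop the duplicated registers from being merged back together):

```verilog
// Each stage re-registers the reset, so every copy drives only a
// limited local fanout, at the cost of a cycle of reset latency.
reg       rst_stage1;
reg [3:0] rst_stage2;  // one copy per region of the design

integer i;
always @ (posedge clk) begin
    rst_stage1 <= rst_in;
    for (i = 0; i < 4; i = i + 1)
        rst_stage2[i] <= rst_stage1;  // rst_stage2[i] resets only its region
end
```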
 
On 5/28/2016 3:49 PM, minexew@gmail.com wrote:
[snip]

Can't this be solved by a pipelined reset?
With each step you could increase the fan-out significantly.

The issue with a large reset is not on the leading edge, it is on the
trailing edge or exit from reset. That can be mitigated by careful
design so the logic does not care about the exact timing of the reset
release. In other words, design your circuits as if the reset signal
were asynchronous.

Often separate circuits do not need to come out of reset synchronously
with each other. Within a circuit, however, the FFs do need to leave
reset together. This can be done by including one FF to provide a
synchronized reset to that circuit. In other cases it can be done by
making sure the exit from the reset state only affects one FF in your
circuit. As long as the circuits do not require a synchronous release
from reset, the problem is much simpler to handle.
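
As a sketch of that "assert asynchronously, release synchronously" pattern (rst_n_in, rst_pipe, valid, data_ready are illustrative names):

```verilog
reg [1:0] rst_pipe;
wire      rst_sync = rst_pipe[1];

always @ (posedge clk or negedge rst_n_in)
    if (!rst_n_in) rst_pipe <= 2'b11;               // assertion can be async
    else           rst_pipe <= {rst_pipe[0], 1'b0}; // release aligned to clk

always @ (posedge clk)
    if (rst_sync) valid <= 1'b0;      // this circuit leaves reset on a clean edge
    else          valid <= data_ready; // normal operation
```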

--

Rick C
 
Can't this be solved by a pipelined reset?
With each step you could increase the fan-out significantly.

We could come up with a bunch of ideas for how to solve this problem in different situations, but it's much wiser just not to create the problem in the first place if you can.
 
Ilya Kalistru wrote:
They advise it for a reason. In big and complex designs big reset
network with high fanout dramatically decrease maximum achievable
frequency.

That's only part of the reason. The other part is that every FF, every
BRAM, every component of the FPGA is guaranteed by design to come up as
'0' at power-up (after configuration is complete). So their claim is
that a reset (at least a global power-up reset) is simply unnecessary,
and only maybe needed for things you do not wish to start up at '0'
(like, maybe, an FSM state variable that dictates the initial state of an
FSM). And even in these cases it's not really needed, since the Xilinx
tools honor signal initialization values (in VHDL), and BRAMs can be
pre-loaded as well. So you can be absolutely sure how every component in
the FPGA comes up after power-up, without having to use a reset signal.

You can forget about the resources the global reset signal needs,
pipelining or how to code it properly because it plain and simple is
useless and unnecessary in most cases.*
If you need to set FFs or so to specific values after power-up, then
that's a set, not a reset. Different port on the FF, different scenario,
and certainly needed in a lot less occasions/signals, hence a signal
with much smaller fanout.

* = That's their claim, not necessarily my personal view...
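For illustration, a minimal sketch of that initialization-value style (names made up; Verilog register initial values are honored by the Xilinx synthesis tools just like VHDL ones):

```verilog
// Power-up values without a reset signal: the bitstream initializes
// the registers, so the FSM wakes up in a defined state.
reg [1:0] state = 2'b01;   // initial value honored by synthesis
reg       led   = 1'b0;

always @(posedge clk) begin
    case (state)
        2'b01:   state <= 2'b10;
        2'b10:   begin state <= 2'b01; led <= ~led; end
        default: state <= 2'b01;   // recover from illegal states
    endcase
end
```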
 
On 5/31/2016 8:09 AM, Sean Durkin wrote:
Ilya Kalistru wrote:
They advise it for a reason. In big and complex designs big reset
network with high fanout dramatically decrease maximum achievable
frequency.

That's only part of the reason. The other part is that every FF, every
BRAM, every component of the FPGA is guaranteed by design to come up as
'0' at power up (after configuration is complete). So their claim is
that a reset (at least a global power-up reset) is simply unnecessary,
and maybe only needed for things you do not wish to start up at '0'
(like, maybe an FSM state variable that dictates the initial state of an
FSM). And even in these cases it's not really needed, since the Xilinx
tools honor signal initialization values (in VHDL), and BRAMs can be
pre-loaded as well. So you can be absolutely sure how every component in
the FPGA comes up after power-up, without having to use a reset signal.

You can forget about the resources the global reset signal needs,
pipelining, or how to code it properly, because it is, plain and simple,
useless and unnecessary in most cases.*
If you need to set FFs or the like to specific values after power-up,
then that's a set, not a reset. Different port on the FF, different
scenario, and certainly needed for far fewer signals, hence a signal
with much smaller fanout.

* = That's their claim, not necessarily my personal view...

I don't believe Xilinx or any other FPGA vendor makes that claim.
First, the reset from configuration is done via the global set/reset
signal (GSR) which covers the entire chip like the clock signals, but
without the drive tree. The problem with this is the relatively weak
drive which results in a slow propagation time. So there is no
guarantee that it will meet setup/hold times on any given FF when coming
*out* of reset. The result is you must treat this signal as
asynchronous to the clocks in the chip and each section of clocked logic
should be designed accordingly. The good part is that it does not use
any of the conventional routing resources and so is otherwise "free".

It doesn't matter if a device is being set or reset by the GSR. The
programmable inverter is in the FF logic and so is also "free".

--

Rick C
 
In article <280c1ce8-b623-4e61-ab4e-969be974d29e@googlegroups.com>,
Ilya Kalistru <stebanoid@gmail.com> wrote:
Can't this be solved by a pipelined reset?
With each step you could increase the fan-out significantly.

We could come up with a bunch of ideas for how to solve this problem in
different situations, but it's much wiser just not to create the problem
if you can.

Which is *precisely* why I reset almost everything.

The argument that "resets" are expensive may be true. Reset trees
are expensive, and may be overkill. But first-pass success, and not
having latent (and hard-to-find) bugs, trumps this for me. Reset
and initialization problems can be the devil to find and debug.

I'd rather have a correct design first, rather than an optimal one.
(Maybe my industry can tolerate this more).

I just think Xilinx emphasizes the "don't reset everything" message way
too much, and actually doesn't spend much effort on the other side,
trying to create better/more efficient reset mechanisms in their
technology and software. They think it's a training problem, not a
technology one.

Regards,

Mark
 
Den tirsdag den 31. maj 2016 kl. 15.45.47 UTC+2 skrev rickman:
On 5/31/2016 9:31 AM, Allan Herriman wrote:
On Tue, 31 May 2016 08:53:27 -0400, rickman wrote:

On 5/31/2016 8:09 AM, Sean Durkin wrote:
Ilya Kalistru wrote:
They advise it for a reason. In big and complex designs big reset
network with high fanout dramatically decrease maximum achievable
frequency.

That's only part of the reason. The other part is that every FF, every
BRAM, every component of the FPGA is guaranteed by design to come up as
'0' at power up (after configuration is complete). So their claim is
that a reset (at least a global power-up reset) is simply unnecessary,
and maybe only needed for things you do not wish to start up at '0'
(like, maybe an FSM state variable that dictates the initial state of an
FSM). And even in these cases it's not really needed, since the Xilinx
tools honor signal initialization values (in VHDL), and BRAMs can be
pre-loaded as well. So you can be absolutely sure how every component in
the FPGA comes up after power-up, without having to use a reset signal.

You can forget about the resources the global reset signal needs,
pipelining, or how to code it properly, because it is, plain and simple,
useless and unnecessary in most cases.*
If you need to set FFs or the like to specific values after power-up,
then that's a set, not a reset. Different port on the FF, different
scenario, and certainly needed for far fewer signals, hence a signal
with much smaller fanout.

* = That's their claim, not necessarily my personal view...

I don't believe Xilinx or any other FPGA vendor makes that claim.


It seems they do (at least Ken Chapman does) make that claim.

Xilinx WP272:
"applying a global reset to your FPGA designs is not a very good
idea and should be avoided"

We are miscommunicating. I thought Sean was saying Xilinx was claiming
a proper reset was not needed. If so, I'd love to read the details on
how they justify that claim. Sean was saying the configuration reset is
adequate, which is not correct for most designs (which use the GSR).
Yes, every FF is guaranteed to be set to a known state, but since the
max delay is typically greater than the clock cycle used, this signal
must be considered to be async with the clock, which means you have to
code with this in mind.

should be easy to handle by using a BUFGCE and an SRL16
to keep the clock stopped until 16 cycles after reset
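Roughly like this sketch (SRL16E and BUFGCE are Xilinx library primitives; the wiring and reset polarity here are assumed):

```verilog
// Hold the logic clock disabled until an SRL16E has shifted a '1'
// through all 16 stages after the reset deasserts.
wire clk_gated, srl_out;

SRL16E #(.INIT(16'h0000)) startup_delay (
    .CLK (clk_free),
    .CE  (1'b1),
    .D   (~rst),           // shift in '1' once reset deasserts
    .A0  (1'b1), .A1 (1'b1), .A2 (1'b1), .A3 (1'b1), // tap stage 16
    .Q   (srl_out)
);

BUFGCE clock_gate (
    .I  (clk_free),
    .CE (srl_out),         // clock starts 16 cycles after reset
    .O  (clk_gated)
);
```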

-Lasse
 
On Tue, 31 May 2016 08:53:27 -0400, rickman wrote:

On 5/31/2016 8:09 AM, Sean Durkin wrote:
Ilya Kalistru wrote:
They advise it for a reason. In big and complex designs big reset
network with high fanout dramatically decrease maximum achievable
frequency.

That's only part of the reason. The other part is that every FF, every
BRAM, every component of the FPGA is guaranteed by design to come up as
'0' at power up (after configuration is complete). So their claim is
that a reset (at least a global power-up reset) is simply unneccessary
and only maybe needed for things you do not wish to start up at '0'
(like, maybe a FSM state variable that dictates the initial state of an
FSM). And even in these cases it's not really needed, since the Xilinx
tools honor signal initialization values (in VHDL), and BRAMs can be
pre-loaded also. So you can be absolutely sure how every component in
the FPGA comes up after power-up, without having to use a reset signal.

You can forget about the resources the global reset signal needs,
pipelining or how to code it properly because it plain and simple is
useless and unnecessary in most cases.*
If you need to set FFs or so to specific values after power-up, then
that's a set, not a reset. Different port on the FF, different
scenario,
and certainly needed in a lot less occasions/signals, hence a signal
with much smaller fanout.

* = That's their claim, not necessarily my personal view...

I don't believe Xilinx or any other FPGA vendor makes that claim.

It seems they do (at least Ken Chapman does) make that claim.

Xilinx WP272:
"applying a global reset to your FPGA designs is not a very good
idea and should be avoided"


Allan
 
