Engineered data path versus inferred data path

Guest
I seem to get better results when using inferred data paths.

E.g. letting the synthesis tools insert the multiplexers where they see fit gives better Fmax than laying out the datapath in complete detail.
I also don't need to remember and code all the control signals for the muxes.
I still code intermediate adders and such to keep the number of inferred carry chains down.
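
Roughly what I mean, as an illustrative Verilog sketch (module and signal names are made up): the case statement just describes the register-transfer behaviour, and synthesis inserts the result mux and its select decoding for me.

// Illustrative only: an "inferred" datapath slice. No explicit mux
// instances and no hand-coded mux control signals; the tools build
// the result multiplexer from the case statement.
module alu_slice #(
    parameter W = 16
) (
    input  wire         clk,
    input  wire [1:0]   op,
    input  wire [W-1:0] a,
    input  wire [W-1:0] b,
    output reg  [W-1:0] result
);
    always @(posedge clk) begin
        case (op)
            2'b00: result <= a + b;   // adder falls onto the carry chain
            2'b01: result <= a - b;
            2'b10: result <= a & b;
            2'b11: result <= a ^ b;
        endcase
    end
endmodule

The fully engineered version of the same slice would declare the mux and its select signals explicitly and wire each source in by hand.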

Comments?

Jim Brakefield
 
Mike Field

It depends on whether you structure your design so that it can be mapped to the features of the underlying hardware.

If you do this well you can get optimal performance. If you do this poorly (e.g. putting a reset on an internal register that the target hardware can't reset), you will be left scratching your head trying to work out why performance is poor.
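
For instance (an illustrative sketch, assuming a Xilinx-style device whose SRL shift-register primitives have no reset):

// Illustrative sketch: a 4-deep pipeline delay, deliberately coded
// without a reset so the whole chain can collapse into one SRL-type
// shift-register primitive per bit. Adding "if (rst) ..." to this
// process would force four discrete flip-flops plus reset routing.
module delay4 #(
    parameter W = 8
) (
    input  wire         clk,
    input  wire [W-1:0] d,
    output wire [W-1:0] q
);
    reg [W-1:0] taps [0:3];
    integer i;

    always @(posedge clk) begin
        taps[0] <= d;
        for (i = 1; i < 4; i = i + 1)
            taps[i] <= taps[i-1];
    end

    assign q = taps[3];
endmodule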
 
I'm thinking there are two issues:
Mapping to FPGA resources, which is always an issue.
(and particular FPGA families differ on things such as resets)

And best Fmax, where the packer and router have the interconnect delay info.
My thinking is that the location and ordering of the mux inputs make a significant difference. And, unless you are a Jan Gray type, the tools (ISE, Quartus, etc.) can do a better job?
(for the uninitiated, take a look at some real world routing on an FPGA)

Jim Brakefield
 
I'll ask Jan what his magic is and let you know :)

I feel it is usually better to work at the highest level of abstraction you can, while always being sympathetic to the lower levels.

That way you have to dive into the lower levels less frequently to resolve what in retrospect were trivial issues.

Of course, you have to dive into the low levels, or read and understand lots and lots of documentation, to get an appreciation of what is sympathetic to the lower levels. Experience has to be earned!

Sometimes it is the smallest changes that can matter - using active high vs active low signalling, or registering a 'clear accumulator' flag so the register can be absorbed into a DSP block. If you are aware of them you can save a lot of time. You see what patterns do or don't work, and only use the ones that do.
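
To make the accumulator case concrete (a rough, made-up Verilog sketch, assuming a DSP-block-style multiply-accumulate):

// Illustrative sketch: multiply-accumulate with a registered clear.
// Registering clr lines the control up with the multiplier pipeline
// stage, so the multiplier register, the accumulator and the clear
// behaviour can usually all be absorbed into a single DSP block
// instead of being built in fabric.
module macc #(
    parameter AW = 18,
    parameter BW = 18,
    parameter PW = 48
) (
    input  wire                 clk,
    input  wire                 clr,   // request a new accumulation
    input  wire signed [AW-1:0] a,
    input  wire signed [BW-1:0] b,
    output reg  signed [PW-1:0] acc
);
    reg                    clr_r;
    reg signed [AW+BW-1:0] m_r;        // multiplier pipeline register

    always @(posedge clk) begin
        clr_r <= clr;
        m_r   <= a * b;
        if (clr_r)
            acc <= m_r;                // synchronous clear: restart the sum
        else
            acc <= acc + m_r;
    end
endmodule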

But of course, at the sharp end, where absolute performance is needed in complex designs, carefully engineered datapaths may be required. The tools are good, but they are only tools.

Mike.
 
I asked:

"For a new moderately complex design, should I go for an engineered or inferred data path? (a.k.a. Should I initially trust the tools or not?)"

Jan's reply was:

"IMO you should have the technology mapped solution in mind before you start coding -- then do a min effort bottom up implementation to emit that, whether inferred or structurally instantiated. http://www.fpgacpu.org/log/aug02.html#art … :)"

Pragmatic as always!
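
To make that concrete (my own illustrative sketch, not Jan's code): if you know the target has 6-input LUTs, then a 4:1 mux is one LUT per bit (four data inputs plus two selects), so the minimum-effort behavioural description below already is the technology-mapped solution. Only if the tools failed to produce that would you bother instantiating the LUT primitive structurally.

// Illustrative sketch of "know the mapping, then write the least code
// that emits it": on a 6-input-LUT architecture this 4:1 mux should
// come out as exactly one LUT per output bit.
module mux4 #(
    parameter W = 8
) (
    input  wire [1:0]   sel,
    input  wire [W-1:0] d0, d1, d2, d3,
    output reg  [W-1:0] y
);
    always @(*) begin
        case (sel)
            2'b00:   y = d0;
            2'b01:   y = d1;
            2'b10:   y = d2;
            default: y = d3;
        endcase
    end
endmodule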
 
Better info than expected, although it's from almost 16 years ago.

It would be good to hear from the IP vendors who offer customizable CPUs:
RPMs (relationally placed macros) will presumably work for the CPU core.
What about, say, Nios or MicroBlaze with their many configuration options?

Jim Brakefield
 
