Phrasing!

On 23/11/16 16:33, Richard Damon wrote:
On 11/21/16 5:07 AM, Tom Gardner wrote:
On 20/11/16 22:43, Tim Wescott wrote:
On Sat, 19 Nov 2016 14:15:18 -0800, Kevin Neilson wrote:

Here's an interesting synthesis result. I synthesized this with Vivado
for Virtex-7:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= x!=0; // version 1

Then I rephrased the logic:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= |x; // version 2

These should be the same, right?

Version 1 uses 23 3-input LUTs on the first level followed by a 23-long
carry chain (6 CARRY4 blocks). This is twice as big as it should be.

Version 2 is 3 levels of LUTs, 12 6-input LUTs on the first level, 15
total.

Neither is optimal. What I really want is a combination, 12 6-input
LUTs followed by 3 CARRY4s.

This is supposed to be the era of high-level synthesis...

I'm not enough of an FPGA guy to make really deep comments, but this
looks like the state of C compilers about 20 or so years ago. When I
started coding in C one had to write the code with an eye to the assembly
that the thing was spitting out. Now, if you've got a good optimizer
(and the gnu C optimizer is better than I am on all but a very few of the
processors I've worked with recently), you just express your intent and
the compiler makes it happen most efficiently.

Clearly, that's not yet the case, at least for that particular synthesis
tool. It's a pity.

Of course sometimes you don't want optimisation.
Consider, for example, bridging terms in an asynchronous
circuit.
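
(A minimal made-up example of what a bridging term is: in the 2:1 mux below, the
redundant b & c product is the consensus term that covers the static hazard when
a switches while b = c = 1; an aggressive optimiser would happily delete it.)

module bridged_mux (
  input  wire a, b, c,
  output wire y
);
  // The two-term version, y = (a & b) | (~a & c), can glitch when 'a' changes
  // while b = c = 1.  The redundant consensus ("bridging") term b & c covers
  // that transition so the output stays steady at 1.
  assign y = (a & b) | (~a & c) | (b & c);
endmodule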


If you are thinking in terms of an AND-OR tree for the typical LUT-based FPGA,
you aren't going to get it right. Most FPGAs now use the LUT, which, at least
for a single LUT, is normally guaranteed to be glitch free for single-line
transitions (so no need for the bridging terms). If you need more inputs than a
single LUT provides, and you need the glitch-free behaviour, then trying
to force a massive AND-OR tree is normally going to be very inefficient, and I
find it worth building the exact structure I need with the low-level
vendor-provided fundamental LUT/carry primitives.
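
For illustration, a rough, untested sketch of what that kind of hand-built
structure could look like for the 69-bit x != 0 from the original post: 12
LUT6s feeding 3 CARRY4s, then a register. The module name, zero-padding and
INIT choice below are my own assumptions, not tool output.

// Untested sketch: 69-bit OR-reduce built from 12 LUT6s + 3 CARRY4s
module wide_or_reduce (
  input  wire        clk,
  input  wire [68:0] x,
  output reg         x_neq_0
);
  wire [71:0] xp = {3'b000, x};           // pad to 12 groups of 6 bits
  wire [11:0] grp_zero;                   // 1 when a 6-bit group is all zeros
  wire [11:0] co;                         // carry-chain taps
  wire [2:0]  ci = {co[7], co[3], 1'b0};  // carry-in to each CARRY4

  genvar i;
  generate
    for (i = 0; i < 12; i = i + 1) begin : g_lut
      // INIT ...0001: output is 1 only when all six inputs are 0 (6-input NOR)
      LUT6 #(.INIT(64'h0000_0000_0000_0001)) u_nor (
        .O (grp_zero[i]),
        .I0(xp[6*i+0]), .I1(xp[6*i+1]), .I2(xp[6*i+2]),
        .I3(xp[6*i+3]), .I4(xp[6*i+4]), .I5(xp[6*i+5])
      );
    end
    // Each MUXCY gives CO = S ? carry-in : DI.  With DI tied high and
    // S = grp_zero, the carry becomes 1 at the first non-zero group and then
    // propagates, so the final carry out is |x.
    for (i = 0; i < 3; i = i + 1) begin : g_cy
      CARRY4 u_cy (
        .CI(ci[i]), .CYINIT(1'b0), .DI(4'b1111),
        .S (grp_zero[4*i +: 4]),
        .O (), .CO(co[4*i +: 4])
      );
    end
  endgenerate

  always @(posedge clk) x_neq_0 <= co[11];
endmodule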

Agreed.
 
On Sat, 19 Nov 2016 14:15:18 -0800, Kevin Neilson wrote:

Here's an interesting synthesis result. I synthesized this with Vivado
for Virtex-7:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= x!=0; // version 1

Then I rephrased the logic:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= |x; // version 2

These should be the same, right?

Version 1 uses 23 3-input LUTs on the first level followed by a 23-long
carry chain (6 CARRY4 blocks). This is twice as big as it should be.

Version 2 is 3 levels of LUTs, 12 6-input LUTs on the first level, 15
total.

Neither is optimal. What I really want is a combination, 12 6-input
LUTs followed by 3 CARRY4s.

This is supposed to be the era of high-level synthesis...

Reading this whole thread, I'm reminded of a gripe I have about the FPGA
manufacturers, or at least Xilinx and Altera.

If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive university
research on FPGA optimization that you might desire, and possibly even
see an open-source tools ecology grow around its parts.

It wouldn't immediately lead to nirvana, but it may at least lead to more
and better optimization, and lots of people experimenting with different
optimization approaches.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
 
On Fri, 25 Nov 2016 23:57:31 -0500, rickman wrote:

On 11/25/2016 4:26 PM, Tim Wescott wrote:

Reading this whole thread, I'm reminded of a gripe I have about the
FPGA manufacturers, or at least Xilinx and Altera.

If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive
university research on FPGA optimization that you might desire, and
possibly even see an open-source tools ecology grow around its parts.

It wouldn't immediately lead to nirvana, but it may at least lead to
more and better optimization, and lots of people experimenting with
different optimization approaches.

Let's say I am Xilinx... I have a bazillion dollars of investment into
my products and the support software. I sell to large companies who
want reliable, consistent products. I open up my chip design and a
bunch of university idealists start creating tools for my devices. The
tools work to varying degrees and are used for a number of different
designs by a wide variety of groups.

So what happens when some of these groups report problems "with the
chips"? Are these problems really with the chips or with the tools? If
any of these groups ask us to deal with these problems, how do we begin?

In other words, how do we keep these tools from causing problems with
our reputation?

"You have reached the Xilinx automated help line. To ask about problems
using our chips with unapproved tools, please hang up now..."

But yes, I see your point.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
 
On 11/25/2016 4:26 PM, Tim Wescott wrote:
Reading this whole thread, I'm reminded of a gripe I have about the FPGA
manufacturers, or at least Xilinx and Altera.

If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive university
research on FPGA optimization that you might desire, and possibly even
see an open-source tools ecology grow around its parts.

It wouldn't immediately lead to nirvana, but it may at least lead to more
and better optimization, and lots of people experimenting with different
optimization approaches.

Let's say I am Xilinx... I have a bazillion dollars of investment into
my products and the support software. I sell to large companies who
want reliable, consistent products. I open up my chip design and a
bunch of university idealists start creating tools for my devices. The
tools work to varying degrees and are used for a number of different
designs by a wide variety of groups.

So what happens when some of these groups report problems "with the
chips"? Are these problems really with the chips or with the tools? If
any of these groups ask us to deal with these problems, how do we begin?

In other words, how do we keep these tools from causing problems with
our reputation?

--

Rick C
 
On 25/11/16 21:26, Tim Wescott wrote:
On Sat, 19 Nov 2016 14:15:18 -0800, Kevin Neilson wrote:

Here's an interesting synthesis result. I synthesized this with Vivado
for Virtex-7:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= x!=0; // version 1

Then I rephrased the logic:

reg [68:0] x;
reg x_neq_0;
always@(posedge clk) x_neq_0 <= |x; // version 2

These should be the same, right?

Version 1 uses 23 3-input LUTs on the first level followed by a 23-long
carry chain (6 CARRY4 blocks). This is twice as big as it should be.

Version 2 is 3 levels of LUTs, 12 6-input LUTs on the first level, 15
total.

Neither is optimal. What I really want is a combination, 12 6-input
LUTs followed by 3 CARRY4s.

This is supposed to be the era of high-level synthesis...

Reading this whole thread, I'm reminded of a gripe I have about the FPGA
manufacturers, or at least Xilinx and Altera.

If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive university
research on FPGA optimization that you might desire, and possibly even
see an open-source tools ecology grow around its parts.

It wouldn't immediately lead to nirvana, but it may at least lead to more
and better optimization, and lots of people experimenting with different
optimization approaches.

Many people have suggested that advantage.

I presume the information will never be released because
the information is
- not part of an "API" that is guaranteed over time
- highly proprietary
- highly device specific
- liable to vary and be corrected over time (Xilinx
is good at suite updates)
- only available in a form that is directly relevant
to their design tools (i.e. tightly coupled)
and hence is
- difficult for someone else to interpret and process
correctly
- liable to open all sorts of cans of worms if a third party
gets it wrong

Analogy: Intel guarantee the "machine code API" of
their processors, but the detailed internal structure
is closely guarded and varies significantly across processor
generations.
 
So what happens when some of these groups report problems "with the
chips"? Are these problems really with the chips or with the tools? If
any of these groups ask us to deal with these problems, how do we begin?

In other words, how do we keep these tools from causing problems with
our reputation?

I can totally understand why Xilinx wouldn't want to mess with this. It's a support nightmare and customers would definitely associate poor open-source software with the chips. I'm not even convinced that open-source tools would be any better.

At least, in the worst case, I can instantiate primitives, but I wish the tools gave me a little more ability to override bad decisions without doing that. There are a lot of cases in which I can do a better job, but if I put in KEEPs, the tools ignore them, leaving me little choice but to instantiate primitives. Another thing they could do is provide more synthesis directives. There are lots of good structures in the hardware, such as the F7-F9 muxes, that the synthesizer almost refuses to use and that I can only get at by instantiating primitives. Perhaps a directive would let me infer a mux but force it onto the dedicated mux, something like (* USE_F7 *).
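
As a rough sketch of that workaround (everything here is my own example; only
LUT6 and MUXF7 themselves are real 7-series primitives), using the F7 mux today
means instantiating it directly, e.g. to register a 13-input function built from
two LUT6s:

module mux_f7_example (
  input  wire        clk,
  input  wire [11:0] a,    // two 6-input groups
  input  wire        sel,  // selects between the two LUT outputs
  output reg         y
);
  wire lo, hi, f7;

  // Each LUT6 OR-reduces its six inputs (INIT: 0 only when all inputs are 0).
  LUT6 #(.INIT(64'hFFFF_FFFF_FFFF_FFFE)) u_lut_lo
    (.O(lo), .I0(a[0]), .I1(a[1]), .I2(a[2]), .I3(a[3]), .I4(a[4]), .I5(a[5]));
  LUT6 #(.INIT(64'hFFFF_FFFF_FFFF_FFFE)) u_lut_hi
    (.O(hi), .I0(a[6]), .I1(a[7]), .I2(a[8]), .I3(a[9]), .I4(a[10]), .I5(a[11]));

  // The dedicated F7 mux combines the two LUT outputs without another LUT.
  MUXF7 u_f7 (.O(f7), .I0(lo), .I1(hi), .S(sel));

  always @(posedge clk) y <= f7;
endmodule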

And the built-in FIFOs: why can't I infer them? Preferably using the push/pop keywords from SystemVerilog. That is the kind of "high-level synthesis" I am looking for: to not have to write structural HDL. These kinds of incremental changes to their tools would be far more useful than their HLS or AccelDSP or whatever.
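
For concreteness, something like the minimal behavioral FIFO below (all names
and parameters invented) is what one would like the tools to recognize and drop
onto the hard FIFO blocks; today it typically infers block RAM plus fabric
counters instead:

module inferred_fifo #(
  parameter WIDTH = 32,
  parameter DEPTH_LOG2 = 9
)(
  input  wire             clk,
  input  wire             rst,
  input  wire             push,
  input  wire [WIDTH-1:0] din,
  input  wire             pop,
  output reg  [WIDTH-1:0] dout,
  output wire             full,
  output wire             empty
);
  reg [WIDTH-1:0] mem [0:(1<<DEPTH_LOG2)-1];
  reg [DEPTH_LOG2:0] wr_ptr, rd_ptr;   // extra MSB distinguishes full from empty

  assign empty = (wr_ptr == rd_ptr);
  assign full  = (wr_ptr[DEPTH_LOG2-1:0] == rd_ptr[DEPTH_LOG2-1:0]) &&
                 (wr_ptr[DEPTH_LOG2]     != rd_ptr[DEPTH_LOG2]);

  always @(posedge clk) begin
    if (rst) begin
      wr_ptr <= 0;
      rd_ptr <= 0;
    end else begin
      if (push && !full) begin
        mem[wr_ptr[DEPTH_LOG2-1:0]] <= din;    // write on push
        wr_ptr <= wr_ptr + 1;
      end
      if (pop && !empty) begin
        dout   <= mem[rd_ptr[DEPTH_LOG2-1:0]]; // registered read on pop
        rd_ptr <= rd_ptr + 1;
      end
    end
  end
endmodule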

I wish that, after the industry has been working on this since 1984, synthesis were more solid. There is supposed to be an intermediate layer between language parsing and synthesis to primitives. When I write the same logic in two slightly different ways and get totally different primitives, I know that something is kludged. When I have to DeMorganize by hand to get better synthesis, something is wrong. I am being paid to work out complex problems in Galois arithmetic, not to do freshman-level Boolean logic.
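
A trivial illustration of the kind of hand-DeMorganizing meant here (my own
example); the two assignments are logically identical, so rewriting one as the
other should never change what the synthesizer builds:

module demorgan_example (
  input  wire a, b, c,
  output wire y1, y2
);
  assign y1 = ~(a | b | c);   // written as a NOR
  assign y2 = ~a & ~b & ~c;   // the same function, DeMorganized by hand
endmodule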
 
Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
Analogy: Intel guarantee the "machine code API" of
their processors, but the detailed internal structure
is closely guarded and varies significantly across processor
generations.

There is indeed work going on into FPGA 'virtualisation' - creating
vendor-neutral intermediate structures that open tools can compile down to,
that either vendor tools can pick up and compile or will map to some
pre-synthesised FPGA-on-FPGA.

I'm not sure if there's anything near mainstream, but I can see it's going
to become increasingly relevant - if Microsoft have a datacentre containing
a mix of Virtex 6, Virtex 7, Ultrascale, Stratix V, Stratix 10, ... FPGAs,
based on whatever models were cheap when they bought that batch of servers,
the number of images that needs to be supported will start multiplying and
so an 'ISA' for FPGAs would help the heterogeneity problem.

Theo
 
On 11/25/16 4:26 PM, Tim Wescott wrote:
Reading this whole thread, I'm reminded of a gripe I have about the FPGA
manufacturers, or at least Xilinx and Altera.

If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive university
research on FPGA optimization that you might desire, and possibly even
see an open-source tools ecology grow around its parts.

It wouldn't immediately lead to nirvana, but it may at least lead to more
and better optimization, and lots of people experimenting with different
optimization approaches.

The big issue is that much of the information that would be included in
such a publication is information the companies classify as highly
competition-sensitive. While companies document quite well the basic
structure of the fundamental logic elements and I/O blocks (and other
special computational blocks), what is normally not well described,
except in very general terms, is the routing. In many ways the routing
is the secret sauce that will make or break a product line. If the
routing is too weak, users will find they can't use a lot of the logic
in the device (what they think they are paying for); with too much
routing, the chips get slower and more expensive (since building the
routing IS a significant part of the cost of a device).

To 'open source' the bitfiles, you by necessity need to explain and
document how you configured your sparse routing matrix, which may well
help your competitors' next generation of products, at the cost of your
future products.
 
Kevin Neilson <kevin.neilson@xilinx.com> wrote:
I'm not enough of an FPGA guy to make really deep comments, but this
looks like the state of C compilers about 20 or so years ago. When I
started coding in C one had to write the code with an eye to the assembly
that the thing was spitting out. Now, if you've got a good optimizer
(and the gnu C optimizer is better than I am on all but a very few of the
processors I've worked with recently), you just express your intent and
the compiler makes it happen most efficiently.

I know! I often feel like I'm a software guy, but stuck in the 80s,
poring over every line generated by the assembler to make sure it's optimized.

There is an IEEE standard for synthesizable VHDL.
https://standards.ieee.org/findstds/standard/1076.6-2004.html

But it *is* like writing C per-ANSI, when every compiler had its own
variant.

--
mac the naïf
 
In article <1973855991.502148597.323655.acolvin-efunct.com@news.eternal-september.org>,
mac <acolvin@efunct.com> wrote:
Kevin Neilson <kevin.neilson@xilinx.com> wrote:
I'm not enough of an FPGA guy to make really deep comments, but this
looks like the state of C compilers about 20 or so years ago. When I
started coding in C one had to write the code with an eye to the assembly
that the thing was spitting out. Now, if you've got a good optimizer
(and the gnu C optimizer is better than I am on all but a very few of the
processors I've worked with recently), you just express your intent and
the compiler makes it happen most efficiently.

I know! I often feel like I'm a software guy, but stuck in the 80s,
poring over every line generated by the assembler to make sure it's optimized.


There is an IEEE standard for synthesizable VHDL.
https://standards.ieee.org/findstds/standard/1076.6-2004.html

But it *is* like writing C per-ANSI, when every compiler had its own
variant.

There's an IEEE standard for the synthesizable subset of Verilog-2001 too
(IEEE 1364.1-2002). I know it well, as I contributed to it. It's a shame they
never did one for SystemVerilog. It was suggested, but some internal politicking
on the working group struck it down.

It's left us with a hit-and-miss process of finding the common ground between
toolsets. We're actively struggling with this now.

But this doesn't change Kevin's observations much. Defining what the tool should
accept still gives the tool a LOT of leeway on HOW to build it - as Kevin's
shown with this example. After all, all the implementations shown in this example
are "correct". Some are just closer to optimal than others (and, as always, the
definition of "optimal" isn't concrete...)

Regards,

Mark
 
> But it *is* like writing C per-ANSI, when every compiler had its own variant.

pre-ANSI

The churn in language revisions isn't helping either
 
On Friday, December 2, 2016 at 3:10:19 AM UTC-5, o pere o wrote:
This is a point, although a weak one. The same should also happen to the
microcontroller industry... but does not. They are quite happy selling
chips that are being programmed by the open-sourced gcc toolchain.

Also consider that on the gcc side of things they have a lot of people who know how to write software writing and developing gcc itself. There aren't nearly as many people who know how to write software working on software for hardware like FPGAs. Witness that we don't have much of anything other than GHDL as a basic simulator. Development on that is up to the whims of one person who chooses to work on it (or not, in which case everything stops... as it has of late).

It wouldn't surprise me if the FPGA companies consider that the open-source community would take forever to develop anything.

Kevin
 
On 26/11/16 05:57, rickman wrote:
On 11/25/2016 4:26 PM, Tim Wescott wrote:

Reading this whole thread, I'm reminded of a gripe I have about the FPGA
manufacturers, or at least Xilinx and Altera.

If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive university
research on FPGA optimization that you might desire, and possibly even
see an open-source tools ecology grow around its parts.

It wouldn't immediately lead to nirvana, but it may at least lead to more
and better optimization, and lots of people experimenting with different
optimization approaches.

Let's say I am Xilinx... I have a bazillion dollars of investment into
my products and the support software. I sell to large companies who
want reliable, consistent products. I open up my chip design and a
bunch of university idealists start creating tools for my devices. The
tools work to varying degrees and are used for a number of different
designs by a wide variety of groups.

So what happens when some of these groups report problems "with the
chips"? Are these problems really with the chips or with the tools? If
any of these groups ask us to deal with these problems, how do we begin?

In other words, how do we keep these tools from causing problems with
our reputation?

This is a point, although a weak one. The same should also happen to the
microcontroller industry... but does not. They are quite happy selling
chips that are being programmed by the open-sourced gcc toolchain.

Pere
 
o pere o <me@somewhere.net> wrote:
This is a point, although a weak one. The same should also happen to the
microcontroller industry... but does not. They are quite happy selling
chips that are being programmed by the open-sourced gcc toolchain.

An ISA is an API. A lot of work goes on implementing the microarchitecture
which implements the API, either in hardware or software (microcode).
Another lot of work goes on proving that the implementation matches the API
and is bug-free (and 'halt and catch fire' is definitely a bug).

FPGA silicon doesn't have an API, you just have the raw transistors to
control. The main safeguards to prevent them generating flame-inducing
configurations (or more prosaically customer returns) are in the tools.

You can put an API on top of an FPGA (eg as the 'FPGA virtualisation' folks
do) but the performance and flexibility impact is substantial. As far as
I'm aware of the current state of FPGA virtualisation (admittedly not that
much), there's nothing there that would be usable in a product anytime soon.

Theo
 
On 12/2/2016 3:10 AM, o pere o wrote:
On 26/11/16 05:57, rickman wrote:
On 11/25/2016 4:26 PM, Tim Wescott wrote:

Reading this whole thread, I'm reminded of a gripe I have about the FPGA
manufacturers, or at least Xilinx and Altera.

If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive
university
research on FPGA optimization that you might desire, and possibly even
see an open-source tools ecology grow around its parts.

It wouldn't immediately lead to nirvana, but it may at least lead to
more
and better optimization, and lots of people experimenting with different
optimization approaches.

Let's say I am Xilinx... I have a bazillion dollars of investment into
my products and the support software. I sell to large companies who
want reliable, consistent products. I open up my chip design and a
bunch of university idealists start creating tools for my devices. The
tools work to varying degrees and are used for a number of different
designs by a wide variety of groups.

So what happens when some of these groups report problems "with the
chips"? Are these problems really with the chips or with the tools? If
any of these groups ask us to deal with these problems, how do we begin?

In other words, how do we keep these tools from causing problems with
our reputation?


This is a point, although a weak one. The same should also happen to the
microcontroller industry... but does not. They are quite happy selling
chips that are being programmed by the open-sourced gcc toolchain.

I think this is apples and oranges. It is easy to look at the output of
a compiler and see what it is doing. The details of the configuration
file for an FPGA are not so easy to analyze. Also, the configuration bit
stream isn't just a matter of telling the chip what to do; it can set up
internal conflicts that can damage the chip.

Even if the FPGA vendors release their technical info on the
configuration bit stream, it is a very complex thing to deal with.

--

Rick C
 
On Friday, November 25, 2016 at 4:26:11 PM UTC-5, Tim Wescott wrote:
If they -- or just any one of them -- would PUBLISH the details of how
their bitfiles map to the workings of their FPGAs, then all of a sudden
that company's stuff would be the subject of all the intensive university
research on FPGA optimization that you might desire, and possibly even
see an open-source tools ecology grow around its parts.

I kind of doubt that would happen. I don't think there are enough hardware knowledgeable folks who can write good software. If there were, we would all be simulating with some open source VHDL+Verilog simulator that puts Modelsim and Aldec to shame [1]. But instead most people I think are using Modelsim/Aldec or some lame simulator that comes bundled with the tools.

Since there apparently aren't many folks diving in to develop something that would benefit many with a decent simulator, why would you think there would be enough to dive in and help develop something that benefits a subset of that market?

Open source developers tend to use their tools in their day-to-day work as well. Those developers 'only' need to be good at developing software in order to develop open source stuff. When you talk about open source tools to be used for hardware development, you would want someone good at developing both hardware and software, so that they would then use the tools they develop day to day. If nothing else, there are fewer folks who have both skills and, based on the lack of developers for open source simulators, I think there just isn't the critical mass.

Manufacturers keeping their details secret could be seen as nothing more than their recognizing that there aren't enough people out there who could even develop what is needed in an open-source scenario, so they pay people to develop what they need.

It wouldn't immediately lead to nirvana, but it may at least lead to more
and better optimization, and lots of people experimenting with different
optimization approaches.

I know you're not holding your breath waiting for that nirvana...but I'd wager that you will have taken your last breath long before such nirvana was reached.

Kevin Jennings

[1] Yes, I know about GHDL, and it is, and appears to have always been, a one-man band. The original band member left and years later someone else picked it up. The last two releases are 0.31 released in Oct-2015 and 0.33 in Dec-2015; nothing new in about 12 months and counting. It is still not up to release 1.0, and there will be another revision to the VHDL standard in 2017. One person can only do so much... in addition to doing something that puts food on the table.
 
On Friday, December 2, 2016 at 3:41:04 PM UTC-5, KJ wrote:
[1] Yes I know about GHDL <snip
Last two releases are 0.31 released in Oct-2015 and 0.33 in Dec-2015,
nothing new in about 12 months and counting.
Tristan moved the development to GitHub:
https://github.com/tgingold/ghdl/releases

-Brian
 
On Friday, December 2, 2016 at 7:28:42 PM UTC-5, Brian Davis wrote:
On Friday, December 2, 2016 at 3:41:04 PM UTC-5, KJ wrote:

[1] Yes I know about GHDL <snip
Last two releases are 0.31 released in Oct-2015 and 0.33 in Dec-2015,
nothing new in about 12 months and counting.

Tristan moved the development to GitHub:
https://github.com/tgingold/ghdl/releases

That's good to hear; now you need to let Google in on the secret. I do see there is mention of the move to GitHub on the SourceForge summary page, but maybe you should consider pulling all the old stuff down from SourceForge and putting up some more redirection links to the new home.

Kevin Jennings
 
> I kind of doubt that would happen. I don't think there are enough hardware knowledgeable folks who can write good software. If there were, we would all be simulating with some open source VHDL+Verilog simulator that puts Modelsim and Aldec to shame [1]. But instead most people I think are using Modelsim/Aldec or some lame simulator that comes bundled with the tools.

I agree. It doesn't seem likely that there would be a lot of people versed in both low-level silicon design and high-level software who would also be willing to put in a lot of time for free.

I don't suppose there is anything stopping somebody from making a synthesizer--you can just convert RTL into structural HDL containing primitives--but I don't see a lot of free synthesizers around.
 
On Fri, 2 Dec 2016 03:32:16 -0800 (PST)
KJ <kkjennings@sbcglobal.net> wrote:

Also consider that on the gcc side of things that they have a
lot of people who know how to write software that are writing
and developing the gcc software. There aren't nearly as many
people who know how to write software working on software
stuff for hardware like FPGAs. Witness that we don't have
much of anything other than GHDL as a basic simulator.
Development on that is up to the whims of one person who
chooses to work on it (or not in which case everything
stops...as it has of late).

The MyHDL community is vibrant [1], and this fast[2] simulator
exports Verilog and VHDL for synthesis.


It wouldn't surprise me if FPGA companies probably consider
that the open source community would take forever to develop
anything.

I'm using open source tools [3]. As for community, there are many
users, not sure about developers, maybe clues here [4].

Clifford did/does warn about frying chips, but the value of the
project far exceeds the cost of a few sacrificial ICs.

Jan Coombs
--
[1] http://myhdl.org/
[2] http://myhdl.org/docs/performance.html
[3] http://www.clifford.at/icestorm/
[4] https://www.reddit.com/r/yosys/comments/4ocilz/icestorm_adding_support_for_new_devices_part_1/
 
