Why is this group so quiet?

On 9/10/2015 5:13 AM, Jan-54 wrote:
On Wed, 9 Sep 2015 23:17:25 -0400
rickman <gnuarm@gmail.com> wrote:

I found their site, and only the small part comes in the 100-pin
QFP. The larger part only comes in 256-ball and larger BGAs. So no
magic for me. The smaller part has no distributed RAM, which seems
weird, and no multipliers, which is not uncommon.

Go on, bite the bullet! Try the QFN32 or WLCSP25; you can also
get your cheap proto PCBs from China:

http://dirtypcbs.com/

I have a product which uses around 80% of a 3 kLUT part in a 100-pin
QFP. The Lattice part in use is obsolete and I would like to have a
replacement, so a 32-pin QFN is of no use in this app.


There are some oddities about the parts; are there really only 3
flops per four LUTs? Anyone seen a patent?

That is not unusual; you will find it in some of the Lattice parts.
Also note that more recent parts have grown the LUTs to 6 inputs,
which amounts to having four 4-input LUTs per FF.


Device       | GW1N-1K | GW1N-9K
-------------+---------+---------
LUT          |   1,152 |   8,640
FF           |     864 |   6,480
CLU Array    |   11x20 |
Dist. RAM    |       0 |  17,280
Block SRAM   |    72Kb |   198Kb
NVM Bits [1] |     96K |  1,792K
Mult. 18x18  |       0 |      20
Max User IO  |     120 |     272
PLLs+DLLs    |       0 |     2+3
WLCSP25      |      15 |
QFN32        |      21 |
LQFP100      |      79 |
LQFP144      |     116 |
MBGA160      |     120 |
UBGA204      |     120 |
PBGA204      |     120 |
PBGA256      |         |     180
PBGA484      |         |     272

[1] Random Access

Abbreviated from:
http://gowinsemi.com.cn/productsShow.aspx?n_id=353

They seem to be following in the footsteps of the large FPGA makers,
in particular Lattice. Any idea what the dimensions of the MBGA160
are? My problem with BGAs is when the ball pitch gets so small that I
have to drop from 6/6 design rules (maybe 5/5 these days).
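
A quick back-of-the-envelope check of that, sketched in Python below;
the pad diameter and the pitches are illustrative assumptions on my
part, not figures from the Gowin datasheet:

    # Can a single trace pass between adjacent BGA pads under a given
    # trace/space rule?  Pad diameter is an assumed 0.4 mm.
    MIL_PER_MM = 39.37

    def fits_one_trace(pitch_mm, pad_dia_mm, trace_mil, space_mil):
        """True if space + trace + space fits in the pad-to-pad gap."""
        gap_mil = (pitch_mm - pad_dia_mm) * MIL_PER_MM
        return gap_mil >= 2 * space_mil + trace_mil

    for pitch_mm in (1.0, 0.8, 0.5):          # common ball pitches
        for rule_mil in (6, 5):               # 6/6 and 5/5 rules
            ok = fits_one_trace(pitch_mm, 0.4, rule_mil, rule_mil)
            print(f"{pitch_mm} mm pitch at {rule_mil}/{rule_mil}:",
                  "fits" if ok else "no fit")

With those assumptions a 1.0 mm pitch routes fine at 6/6, 0.8 mm only
just clears 5/5, and 0.5 mm fails both, which is the squeeze on design
rules I mean.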

--

Rick
 
On 9/11/2015 2:17 PM, glen herrmannsfeldt wrote:
rickman <gnuarm@gmail.com> wrote:
On 9/10/2015 5:13 AM, Jan-54 wrote:

(snip)
There are some oddities about the parts; are there really only 3
flops per four LUTs? Anyone seen a patent?

That is not unusual; you will find it in some of the Lattice parts.
Also note that more recent parts have grown the LUTs to 6 inputs,
which amounts to having four 4-input LUTs per FF.

Someone should know the scaling laws for logic, which should show
how the number of FF scales with LUTs, and how 4LUTs scale relative
to 6LUTs.

There is no such thing as a scaling "law". Different designs are
different. That's why different chips from the same maker have
different ratios... different target markets.


Note that a 6LUT isn't quite as useful as four 4LUTs.
(Especially if you need a FF at an appropriate point.)

Not sure what your point is. This completely ignores the main issue,
which is designing the most useful FPGA for the cost. As devices have
grown in size, the makers found that routing was hogging too much real
estate, so they needed to push up from the bottom to eliminate the
lowest level of interconnect. LUT6-based blocks get rid of a lot of
low-end routing, in essence.

Remember, they sell you the routing and give you the logic for free.


Seems to me that FPGA designers are getting closer to what is actually
used in the usual case, and optimizing more for that.

I think this is a poor way of understanding what is needed in FPGAs. As
devices grow, logic blocks grow. Consider the memory block, the
multiplier/DSP block, and now full-fledged CPUs, all "part" of the FPGA.
This trend will continue with more and more built in functions being
included as hard IP rather than soft IP using the FPGA fabric.

If we could get them to make FPGAs at the low end with better I/O
capability (analog functions, clock oscillators, etc) such as is found
in MCUs, they might just take out many of the general MCUs. How long
can they continue to grow at the top end?

--

Rick
 
rickman <gnuarm@gmail.com> wrote:

(snip, I wrote)

Someone should know the scaling laws for logic, which should show
how the number of FF scales with LUTs, and how 4LUTs scale relative
to 6LUTs.

There is no such thing as a scaling "law". Different designs are
different. That's why different chips from the same maker have
different ratios... different target markets.

Different designs are different, but usually there is a trend.

There are, for example, scaling laws for MOS transistors, even
though there are differences in the way individual ones are made.

If you make a log-log graph with the number of gates on one axis
and number of FF on the other, you can see if there is a trend.
Most often, there is.
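
A minimal sketch of that in Python; the only data points here are the
two Gowin parts from the table earlier in the thread, so take the
fitted exponent as a placeholder until many more families are added:

    import numpy as np

    # (LUT, FF) pairs from the Gowin table above: GW1N-1K, GW1N-9K.
    luts = np.array([1152.0, 8640.0])
    ffs = np.array([864.0, 6480.0])

    # Fit a straight line in log-log space: FF ~ a * LUT^b.
    b, log_a = np.polyfit(np.log(luts), np.log(ffs), 1)
    print(f"FF ~ {np.exp(log_a):.2f} * LUT^{b:.2f}")  # 0.75 * LUT^1.00

For those two parts the ratio is a flat 0.75 FF per LUT (exponent 1);
whether a broader trend exists only shows up once you plot several
vendors and families.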

Note that a 6LUT isn't quite as useful as four 4LUTs.
(Especially if you need a FF at an appropriate point.)

Not sure what your point is. This completely ignores the main issue,
which is designing the most useful FPGA for the cost. As devices have
grown in size, the makers found that routing was hogging too much real
estate, so they needed to push up from the bottom to eliminate the
lowest level of interconnect. LUT6-based blocks get rid of a lot of
low-end routing, in essence.

Yes.

So, as you note, routing increases faster than logic as logic grows.
There will be an exponent, likely between 1 and 2, that shows how
it grows.
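
One way to put a rough number on that exponent is Rent's rule (my
gloss, not something anyone has measured here): a block of G cells
presents about T = t * G^p external terminals, and Donath's estimate
then has total wiring grow roughly as G^(p + 1/2) for p above 0.5, so
a typical p of 0.6 to 0.75 lands the exponent around 1.1 to 1.25. A
purely illustrative Python sketch:

    # Rough wiring-vs-logic growth under Rent's rule; the exponent p
    # is an assumed typical value, not measured from any FPGA family.
    def routing_vs_logic(g1, g2, p=0.7):
        """Return (logic growth, estimated wiring growth) from g1 to g2 cells."""
        logic_ratio = g2 / g1
        wiring_ratio = logic_ratio ** (p + 0.5)   # total wiring ~ G^(p + 1/2)
        return logic_ratio, wiring_ratio

    logic, wiring = routing_vs_logic(1_000, 100_000)
    print(f"logic grew {logic:.0f}x, wiring demand roughly {wiring:.0f}x")

Growing the logic 100x under those assumptions pushes wiring demand up
by roughly 250x, which is why the routing fraction of the die keeps
climbing.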

Remember, they sell you the routing and give you the logic for free.

Seems to me that FPGA designers are getting closer to what is actually
used in the usual case, and optimizing more for that.

I think this is a poor way of understanding what is needed in FPGAs. As
devices grow, logic blocks grow. Consider the memory block, the
multiplier/DSP block, and now full-fledged CPUs, all "part" of the FPGA.
This trend will continue with more and more built in functions being
included as hard IP rather than soft IP using the FPGA fabric.

I suppose, but since different designs are different, it might need
a few different types of chips. Some designs have no use for block
RAM or multipliers, no matter how useful they are on average.

If we could get them to make FPGAs at the low end with better I/O
capability (analog functions, clock oscillators, etc) such as is found
in MCUs, they might just take out many of the general MCUs. How long
can they continue to grow at the top end?

Could be interesting.

-- glen
 
On 9/11/2015 11:34 PM, glen herrmannsfeldt wrote:
rickman <gnuarm@gmail.com> wrote:

(snip, I wrote)

Someone should know the scaling laws for logic, which should show
how the number of FF scales with LUTs, and how 4LUTs scale relative
to 6LUTs.

There is no such thing as a scaling "law". Different designs are
different. That's why different chips from the same maker have
different ratios... different target markets.

Different designs are different, but usually there is a trend.

There are, for example, scaling laws for MOS transistors, even
though there are differences in the way individual ones are made.

If you make a log-log graph with the number of gates on one axis
and number of FF on the other, you can see if there is a trend.
Most often, there is.

I get what a trend is. I'm saying that the people from the FPGA
companies have said there are wide variations based on the type of
design being done. They shoot for the best compromise, but even that
depends on a variety of factors. As I said before, even within one
maker's lines, the ratio varies. So clearly there are different
markets, not a one-size-fits-all part.


Note that a 6LUT isn't quite as useful as four 4LUTs.
(Especially if you need a FF at an appropriate point.)

Not sure what your point is. This completely ignores the main issue,
which is designing the most useful FPGA for the cost. As devices have
grown in size, the makers found that routing was hogging too much real
estate, so they needed to push up from the bottom to eliminate the
lowest level of interconnect. LUT6-based blocks get rid of a lot of
low-end routing, in essence.

Yes.

So, as you note, routing increases faster than logic as logic grows.
There will be an exponent, likely between 1 and 2, that shows how
it grows.

Remember, they sell you the routing and give you the logic for free.

Seems to me that FPGA designers are getting closer to what is actually
used in the usual case, and optimizing more for that.

I think this is a poor way of understanding what is needed in FPGAs. As
devices grow, logic blocks grow. Consider the memory block, the
multiplier/DSP block, and now full-fledged CPUs, all "part" of the FPGA.
This trend will continue with more and more built in functions being
included as hard IP rather than soft IP using the FPGA fabric.

I suppose, but since different designs are different, it might need
a few different types of chips. Some designs have no use for block
RAM or multipliers, no matter how useful they are on average.

And that is the sticky wicket! FPGA makers have resisted adding lots of
hard IP (especially CPUs) because of the proliferation of combinations
that ensue. With block memory and multipliers they generally picked a
ratio to logic and worked with that. But even there, they have come out
with lines with more or less of these relatively generic blocks
depending on the applications. CPUs are a horse of another color. If
you can work with off chip memory, that at least is out of the equation.
Eventually there will be more and more hard IP used and FPGAs will
proliferate the same way MCUs have. Or will it be that FPGA fabric will
be added to CPUs and FPGAs will go the way of the PDA?


If we could get them to make FPGAs at the low end with better I/O
capability (analog functions, clock oscillators, etc) such as is found
in MCUs, they might just take out many of the general MCUs. How long
can they continue to grow at the top end?

Could be interesting.

It needs to be economically "interesting" to the FPGA companies.
Atmel had such a part at one time, but their technology was behind the
power curve and they could never bring the cost down. Now Microsemi
has a line of them with the same cost problem.

--

Rick
 
On 08.09.2015 20:52, Tim Wescott wrote:

(snip)

After lurking for about a year you can answer
most questions by just regurgitating answers to previous questions.)

To employ the language of a poet, that's metaphorically like an LDPC
code. The group has a sparse parity-check matrix (meaning a lot of
circularly shifted answers that may sound alike), but with that it
achieves the goal of rapid error-correction and error-detection
capability in the low SNR region (meaning producing good and fruitful
answers to all the poorly formulated questions and/or mistaken
propositions)!

Evgeny.
 
On Wednesday, 23 September 2015 10:20:12 UTC+12, Evgeny Filatov wrote:

To employ the language of a poet, that's metaphorically like an LDPC
code. The group has a sparse parity-check matrix (meaning a lot of
circularly shifted answers that may sound alike), but with that it
achieves the goal of rapid error-correction and error-detection
capability in the low SNR region (meaning producing good and fruitful
answers to all the poorly formulated questions and/or mistaken
propositions)!

Evgeny.

Inspired answer! +1
 
