Question about partial multiplication result in transposed F

On Tue, 06 Oct 2015 23:02:27 -0400, rickman wrote:

> [snip]
> Ok, I'm not at all familiar with GFs. I see now a bit of what you are
> saying. But to be honest, I don't know that the tools would have any
> trouble with the example you give. The tools are pretty durn good at
> optimizing... *but*... there are two things to optimize for: size and
> performance. They are sometimes mutually exclusive, sometimes not. If
> you ask the tool to give you the optimum size, I don't think you will
> do better by coding it differently while describing *exactly* the same
> behavior.

You seem to have misinterpreted my earlier post in this thread, in which
I had two implementations of *exactly* the same function that were giving
very different speed and area results after synthesis.

I think what you say (about the coding not mattering and the tools doing
a good job) is true for "small" functions.
Once you get past a certain level of complexity, the tools (at least the
Xilinx ones) seem to do a poor job if the function has been coded the
canonical way. Recoding as a tree of XORs can give better results.

This is a fault in the synthesiser and is independent of the speed vs
area optimisation setting.

Regards,
Allan
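Allan's recoding suggestion can be illustrated outside of HDL. Below is a minimal Python sketch (the bit-list representation is purely for illustration, not anyone's actual code): both functions compute the same XOR reduction, but the second is structured as a balanced tree, which is the shape that corresponds to fewer logic levels in hardware.

```python
def xor_chain(bits):
    """Linear chain: b0 ^ b1 ^ b2 ^ ... -- depth grows linearly with n."""
    acc = 0
    for b in bits:
        acc ^= b
    return acc

def xor_tree(bits):
    """Balanced tree: pairwise XOR halves -- depth grows as log2(n)."""
    if len(bits) == 1:
        return bits[0]
    mid = len(bits) // 2
    return xor_tree(bits[:mid]) ^ xor_tree(bits[mid:])

# Same function, different structure: results always agree.
bits = [1, 0, 1, 1, 0, 1, 0, 0]
assert xor_chain(bits) == xor_tree(bits)
```

The two are mathematically identical (XOR is associative), which is why Allan can fairly call it a synthesiser fault when the tool produces different results for the two codings.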
 
David,
I like your explanation of mapping the field elements to something abstract, like 0, 1, a, b, c, d, e, f. I've only seen one textbook that mentioned something like this (it used Greek letters) and I found it very helpful to realize that the field elements are only arbitrarily *mapped* to integers and are not equivalent to them.

I don't use ROMs for field multiplication. A GF(2048) multiplier takes about 64 6-input LUTs, as I recall, with about 3 levels of logic. A fixed multiplier (multiplying by a constant) is more like 20 LUTs. I use ROMs for division (find the reciprocal), cubing, etc. If you use ROMs for multiplication it induces a lot of latency, because you have 2-3 cycles for each block RAM, and you have to do a log and antilog lookup, so you can end up with 5+ cycles of latency.
Kevin
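For readers who haven't seen it, the shift-and-XOR structure behind a multiplier like the one Kevin describes can be sketched in software. This is a minimal Python sketch, not Kevin's implementation; the choice of GF(2^11) (i.e., GF(2048)) with primitive polynomial x^11 + x^2 + 1 is an assumption for illustration — any degree-11 primitive polynomial would do. The LUT counts Kevin quotes come from flattening exactly this kind of XOR network.

```python
POLY = 0x805   # x^11 + x^2 + 1 (assumed primitive polynomial)
WIDTH = 11

def gf_mul(a, b):
    """Multiply a and b in GF(2^11): carryless multiply, then reduce mod POLY."""
    result = 0
    for i in range(WIDTH):
        if (b >> i) & 1:
            result ^= a << i          # XOR of shifted partial products, no carries
    # Reduce the up-to-21-bit product back into 11 bits, high terms first:
    # each set bit at position i >= 11 is replaced via x^11 = x^2 + 1.
    for i in range(2 * WIDTH - 2, WIDTH - 1, -1):
        if (result >> i) & 1:
            result ^= POLY << (i - WIDTH)
    return result
```

In hardware the constant-multiplier case Kevin mentions is cheaper because `b` is fixed, so the partial-product selection disappears and only a fixed XOR network per output bit remains.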
 
On 23/10/15 20:55, Kevin Neilson wrote:
> David, I like your explanation of mapping the field elements to
> something abstract, like 0, 1, a, b, c, d, e, f. I've only seen one
> textbook that mentioned something like this (it used Greek letters)
> and I found it very helpful to realize that the field elements are
> only arbitrarily *mapped* to integers and are not equivalent to
> them.

For non-algebraists, the use of numbers and "addition" and
"multiplication" in field theory can be very confusing - "normal" people
are too used to identities such as "two times x equals x plus x" that
have no equivalent in fields such as GF(2^n). Using letters or other
symbols helps make things clear.

> I don't use ROMs for field multiplication. A GF(2048) multiplier
> takes about 64 6-input LUTs, as I recall, with about 3 levels of
> logic. A fixed multiplier (multiplying by a constant) is more like
> 20 LUTs. I use ROMs for division (find the reciprocal), cubing, etc.
> If you use ROMs for multiplication it induces a lot of latency,
> because you have 2-3 cycles for each block RAM, and you have to do a
> log and antilog lookup, so you can end up with 5+ cycles of latency.
> Kevin

There's always a balance between throughput, latency and resource usage,
depending on your requirements. My experience with these is in software
rather than FPGA, where the balances are a little different.
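As a software illustration of the log/antilog approach Kevin mentions — a sketch under the same assumed polynomial, x^11 + x^2 + 1, not anyone's production code — two table lookups plus an integer addition mod 2047 replace the whole XOR network. This is cheap in software, where the tables sit in cache, but in an FPGA each table is a block RAM access, which is exactly where the 5+ cycles of latency come from.

```python
POLY, WIDTH = 0x805, 11           # assumed primitive polynomial x^11 + x^2 + 1
ORDER = (1 << WIDTH) - 1          # 2047 nonzero field elements

# Build the discrete-log tables by stepping through powers of alpha (= x).
antilog = [0] * ORDER             # antilog[i] = alpha^i
log = [0] * (1 << WIDTH)          # log[alpha^i] = i
elem = 1
for i in range(ORDER):
    antilog[i] = elem
    log[elem] = i
    elem <<= 1
    if elem >> WIDTH:             # degree reached 11: reduce via x^11 = x^2 + 1
        elem ^= POLY

def gf_mul_table(a, b):
    """Multiply via log lookup, add mod 2047, antilog lookup."""
    if a == 0 or b == 0:          # zero has no logarithm; handle it separately
        return 0
    return antilog[(log[a] + log[b]) % ORDER]
```

Division works the same way with a subtraction instead of an addition, which is one reason Kevin keeps ROMs for reciprocals even while avoiding them for multiplication.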
 
> For non-algebraists, the use of numbers and "addition" and
> "multiplication" in field theory can be very confusing - "normal" people
> are too used to identities such as "two times x equals x plus x" that
> have no equivalent in fields such as GF(2^n). Using letters or other
> symbols helps make things clear.

It *is* confusing. If x is an element of a field, then (x+1)^2 = x^2+2x+1, but the '2' in '2x' isn't the '2' from the field (i.e., alpha^1); it's the regular integer 2, counting repeated additions. This was confusing to me initially. It just makes more sense to write 'x+x' instead of '2x' (which is 0, of course, if the field is GF(2^n)).
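Kevin's point is easy to check numerically. A small Python sketch in GF(2^8) — the polynomial x^8 + x^4 + x^3 + x + 1 (0x11b, the AES polynomial) is assumed purely for illustration: addition is XOR, so x + x = 0 for every element, and the cross term in (x+1)^2 vanishes, leaving x^2 + 1.

```python
POLY = 0x11b                      # assumed reduction polynomial for GF(2^8)

def gf_mul(a, b):
    """Shift-and-XOR multiply in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:             # degree reached 8: reduce
            a ^= POLY
        b >>= 1
    return r

for x in range(256):
    assert x ^ x == 0                          # "2x" means x + x, which is 0
    lhs = gf_mul(x ^ 1, x ^ 1)                 # (x+1)^2
    rhs = gf_mul(x, x) ^ 1                     # x^2 + 1: the 2x term has vanished
    assert lhs == rhs
```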
 
