EDK : FSL macros defined by Xilinx are wrong

On 17 Apr, 04:40, "evilkid...@googlemail.com"
<evilkid...@googlemail.com> wrote:
For example, with MyHDL you will also have to learn about latch
inference and how to avoid "unwanted latches". However, just like in
VHDL/Verilog there is a much better solution for this than using a
limited HDL: use a clocked process template by default.

I don't agree with this.  Why provide such a general framework when
all you really want is the "clocked process" anyway?  VHDL, Verilog
and MyHDL all let you make the same mistake over and over again.
AFAIK, to avoid latch inference you need a non-sequential language,
and most don't want that.
 
On Apr 17, 5:40 am, "evilkid...@googlemail.com"
<evilkid...@googlemail.com> wrote:
For example, with MyHDL you will also have to learn about latch
inference and how to avoid "unwanted latches". However, just like in
VHDL/Verilog there is a much better solution for this than using a
limited HDL: use a clocked process template by default.

I don't agree with this.  Why provide such a general framework when
all you really want is the "clocked process" anyway.  VHDL, Verilog
and MyHDL all let you make the same mistake over and over again.
The context of the "clocked process" paradigm is synthesizable RTL
code.
For many engineers working on complex projects, writing such code is
only a fraction of their work. For high-level modeling and
verification, you can use the power of the language in its full
generality.

For powerful HDLs such as VHDL/Verilog/MyHDL, the synthesis coding
constraints are imposed by synthesis technology, not by the language.
It is easy to design a "fully synthesizable" HDL that incorporates
such constraints in the language definition itself. It just seems
that the market doesn't want those. I certainly don't. You are of
course free to ignore that observation.

Jan

P.S. not all latches are "unwanted" :)
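The "clocked process template" idea can be sketched in plain Python (this is not actual MyHDL syntax; the class and method names below are invented for illustration): state changes only on the clock edge, so a branch that assigns nothing simply holds the previous value in a flip-flop rather than inferring a latch.

```python
# Plain-Python sketch of a clocked-process template. NOT MyHDL syntax;
# ClockedReg and rising_edge are made-up names for illustration only.
class ClockedReg:
    def __init__(self, init=0):
        self.q = init

    def rising_edge(self, d, enable):
        # State may only change here, on the clock edge. Leaving a
        # branch "unassigned" just keeps the old value in the flip-flop;
        # by construction there is nothing for synthesis to turn into
        # a level-sensitive latch.
        if enable:
            self.q = d
        return self.q
```

A MyHDL generator triggered on the clock edge follows the same discipline: every signal it drives is registered, so an incomplete assignment means "hold", not "latch".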
 
On Apr 17, 2:21 am, Paul <pault...@googlemail.com> wrote:
On 17 Apr, 04:40, "evilkid...@googlemail.com"

<evilkid...@googlemail.com> wrote:
For example, with MyHDL you will also have to learn about latch
inference and how to avoid "unwanted latches". However, just like in
VHDL/Verilog there is a much better solution for this than using a
limited HDL: use a clocked process template by default.

I don't agree with this.  Why provide such a general framework when
all you really want is the "clocked process" anyway.  VHDL, Verilog
and MyHDL all let you make the same mistake over and over again.

AFAIK, to avoid latch inference you need a non-sequential language,
and most don't want that.
I'm not clear on what either of you is saying. I don't seem to have
a problem with latch inference, mainly because I know what causes
inferred latches. It has nothing to do with sequential or
non-sequential languages. VHDL has non-sequential capabilities and I
can infer a latch using that.

a <= b when (c = '1'); -- use "c" as a latch enable

What am I missing?

Rick
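Rick's point can be modelled in plain Python (the function names here are hypothetical, purely for illustration): an incomplete combinational assignment makes the output depend on its own previous value, which is exactly the storage element a synthesizer implements as a latch.

```python
def transparent_latch(a_prev, b, c):
    # Models the concurrent statement:  a <= b when (c = '1');
    # When c is low, no new value is assigned, so the output must
    # remember its previous value: combinational logic with state,
    # i.e. a transparent latch with c as the enable.
    return b if c else a_prev

def complete_mux(b, d, c):
    # Assigning a value in every branch removes the memory: a pure mux.
    return b if c else d
```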
 
On Apr 14, 3:23 pm, Jan Decaluwe <j...@jandecaluwe.com> wrote:
Seriously, that's why conversion to VHDL/Verilog gets so much
attention. It allows you to view MyHDL simply as a more effective
or fun way to create your trusted VHDL/Verilog design.

Therefore, no need to ask nor tell anyone. If you're intrigued,
just do it, and do it as a good engineer: start with a simple
but relevant module, not with a whole design. After conversion,
few will be able to tell (you may even get praise for the
code quality :)).

And do what? Be forced into a design/coding paradigm that is the
least common denominator of Verilog and VHDL?

No thanks, I don't need or want another code generator.

Code conversion is only applicable if you never have to read it or
maintain it in its converted form. I can't rely on myhdl in order to
maintain the source.

Andy
 
On Apr 19, 7:18 pm, Andy <jonesa...@comcast.net> wrote:
On Apr 14, 3:23 pm, Jan Decaluwe <j...@jandecaluwe.com> wrote:

Seriously, that's why conversion to VHDL/Verilog gets so much
attention. It allows you to view MyHDL simply as a more effective
or fun way to create your trusted VHDL/Verilog design.

Therefore, no need to ask nor tell anyone. If you're intrigued,
just do it, and do it as a good engineer: start with a simple
but relevant module, not with a whole design. After conversion,
few will be able to tell (you may even get praise for the
code quality :)).

And do what? Be forced into a design/coding paradigm that is the least
common denominator of verilog and vhdl?
Not necessarily, because conversion happens after elaboration
by the Python interpreter, and because MyHDL's type system
for RTL is at a more abstract level.

No thanks, I don't need or want another code generator.
Sure, don't bother if it doesn't solve a real problem
for you. Just let it be an informed decision.

Please: don't call it code generation. It's essentially a
powerful HDL with strong conversion capabilities.

Also, last time I forgot to mention that there actually is
commercial support (though it may not be expensive enough
to impress you :))

http://www.myhdl.org/doku.php/support

Code conversion is only applicable if you never have to read it or
maintain it in its converted form. I can't rely on myhdl in order to
maintain the source.
I wouldn't know why not. You can even maintain equivalent VHDL and
Verilog simultaneously. What other technology can do that?

Jan
 
All the abstraction is gone when you convert to VHDL/Verilog, rather
than trying to keep the abstraction intact as much as possible in the
converted code (significant in VHDL, not so much in Verilog).

Without a proven, supported tool chain I cannot depend on maintaining
code in the MyHDL domain. Therefore, I have to use it only as a code
generator, and be able to maintain the generated VHDL/Verilog code in
case said tool goes away (with all the limitations inherent in the
converted code). If I started out in VHDL, the VHDL would be much more
maintainable. I'll look into the support link you provided, but until
a major synthesis tool supports it directly, I can't say that it would
make any difference.

Your definition of maintaining equivalent VHDL and Verilog is only
through the as-yet-unsupported language. That's not maintenance in my
book. It may work for commercial products that are here and gone in a
year or two, but in my business, support is measured in decades.

All this said, I am attracted to MyHDL as an academic exercise (even
though I hate some of the syntactic baggage, especially ".next"), not
as a usable tool in my professional environment. At least not yet...

Andy
 
I'm trying to dump eight hex values per line
into a file, and can't work out how to do it.

for index in 0 to 127 loop
  for sample_sel in 0 to 7 loop
    sample_val := integer(scale * sin(phase(sample_sel)));
    write ( sample_line, sample_val, RIGHT, 10);
    phase(sample_sel) := phase(sample_sel) + phase_inc(sample_sel);
  end loop;
  writeline ( ip_dat, sample_line );
end loop;

does what I want, but with decimal values.

If I change to:
hwrite ( sample_line, sample_val, RIGHT, 10);
or:
write ( sample_line, to_hstring(sample_val), RIGHT, 10);
it doesn't compile.

Any thoughts?

Thanks

Pete
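For what it's worth, the intended file layout (eight fixed-width hex fields per line) is easy to prototype in Python before wrestling with the VHDL textio overloads. The scale, phase increments, and field width below are placeholders, not values from Pete's actual code.

```python
import math

def sine_samples_hex(num_lines=128, per_line=8, scale=32767, width=10):
    # Eight right-justified hex fields per line, mirroring the VHDL
    # nested loops. The phase_inc values are arbitrary placeholders.
    phase = [0.0] * per_line
    phase_inc = [0.01 * (i + 1) for i in range(per_line)]
    lines = []
    for _ in range(num_lines):
        fields = []
        for s in range(per_line):
            val = int(scale * math.sin(phase[s]))
            # Mask to 16 bits so negative samples print as two's complement.
            fields.append(format(val & 0xFFFF, 'X').rjust(width))
            phase[s] += phase_inc[s]
        lines.append(''.join(fields))
    return lines
```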




Here is an example of hex write:
hwrite(my_line1, '0' & Ptr.Address);

The entire code is posted at:
http://bknpk.no-ip.biz/my_web/IP_STACK/sync_wr_vhdl_memory.html

---------------------------------------
Posted through http://www.FPGARelated.com
 
niyander <mightycatniyander@gmail.com> wrote:

I am trying to implement (simulation + synthesis) a 32-bit
floating-point division unit.
To perform the division, basically the 23+1-bit (1 hidden bit)
mantissa is divided by the other mantissa, then the 8-bit exponents
are subtracted, and finally normalization is applied.
So for the mantissa division part I am following the binary division
by shift-and-subtract method
(http://courses.cs.vt.edu/~cs1104/Division/ShiftSubtract/Shift.Subtract.html).

I can use this algorithm if both mantissas are such that no
remainder is left (i.e. remainder = 0), but if the mantissas are such
that a remainder is left, how can I proceed with the division? If I
proceed, the quotient would be inaccurate.
You either truncate or round. Unless you are implementing an
existing architecture, it is your choice. IBM hex floating point
truncates, most of the others, including IEEE, round.

I have already searched Google for the SRT division algorithm but I
am not able to find a simple example. If someone could give me an SRT
division example/algorithm for the value 22/7, I would really
appreciate it.
That will help you do it faster, but it won't change the question
about what to do with a remainder. If shift and subtract, or
more likely a non-restoring algorithm, is fast enough then you
might just as well use it.

-- glen
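A sketch of the shift-and-subtract (restoring) scheme glen describes, in Python for readability: develop extra quotient bits, and use the leftover remainder to decide the rounding. Round-half-up is used here for simplicity; IEEE actually requires round-to-nearest-even, which also needs the sticky information in the remainder.

```python
def shift_subtract_divide(n, d, qbits):
    # Classic restoring long division: floor((n << qbits) / d),
    # developing one quotient bit per step. Returns (quotient, remainder).
    num = n << qbits
    q, r = 0, 0
    for i in range(num.bit_length() - 1, -1, -1):
        r = (r << 1) | ((num >> i) & 1)  # bring down next numerator bit
        q <<= 1
        if r >= d:                        # trial subtraction succeeds
            r -= d
            q |= 1
    return q, r

def divide_rounded(n, d, frac_bits):
    # The remainder is never "in the way": compute one extra quotient
    # bit and use it to round (round-half-up; round-to-nearest-even
    # would also inspect the remaining remainder as a sticky bit).
    q, _ = shift_subtract_divide(n, d, frac_bits + 1)
    return (q + 1) >> 1
```

For the 22/7 example from the post: `shift_subtract_divide(22, 7, 8)` gives quotient 804 and remainder 4, i.e. 22/7 ≈ 804/256 ≈ 3.141, with the remainder telling you the result was truncated.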
 
On May 8, 9:56 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
niyander <mightycatniyan...@gmail.com> wrote:
I am trying to implement (simulation + synthesis) a 32-bit
floating-point division unit.
To perform the division, basically the 23+1-bit (1 hidden bit)
mantissa is divided by the other mantissa, then the 8-bit exponents
are subtracted, and finally normalization is applied.
So for the mantissa division part I am following the binary division
by shift-and-subtract method
(http://courses.cs.vt.edu/~cs1104/Division/ShiftSubtract/Shift.Subtract.html).
I can use this algorithm if both mantissas are such that no
remainder is left (i.e. remainder = 0), but if the mantissas are such
that a remainder is left, how can I proceed with the division? If I
proceed, the quotient would be inaccurate.

You either truncate or round.  Unless you are implementing an
existing architecture, it is your choice. IBM hex floating point
truncates, most of the others, including IEEE, round.  

I have already searched Google for the SRT division algorithm but I
am not able to find a simple example. If someone could give me an SRT
division example/algorithm for the value 22/7, I would really
appreciate it.

That will help you do it faster, but it won't change the question
about what to do with a remainder.  If shift and subtract, or
more likely a non-restoring algorithm, is fast enough then you
might just as well use it.  

-- glen
thanks
 
I would be amazed if you can't get XST to keep all 16 flip-flops. I
know you can do it in Synplify.

Jon

---------------------------------------
Posted through http://www.FPGARelated.com
 
I am mainly referring to: http://www.techfocusmedia.net/embeddedtechnologyjournal/feature_articles/20101123-stellarton

I think there will be some market for such a device, especially in
the medical and industrial control markets, maybe also networking. In
fact, we already designed a board with an FPGA connected to a Qseven
Atom module by PCIe. Also, I think that $72 is a reasonable price for
that kind of FPGA (so the Atom is almost for free, if it is not even
"negatively" priced...)

But I think there are some "buts":
- There are 3 main (new) families of FPGAs from Altera (low-, mid-
and high-end), all with some "sub-families" and a lot of different
family members varying dramatically in size (and price). The same
goes for Xilinx, Lattice and Actel (ehm... Microsemi). And then there
are some newcomers (e.g. Achronix, which is fabbed by, hmmm, Intel).
From Intel, I can choose from one FPGA. At least the Atom offers
different speed grades... So, if I just want to add 20 UARTs to my
design, the FPGA will be way too large/expensive. For some high-end
number-crunching support, or integrating a lot of south-bridge
functionality, it might still be too small.
- To use the Atom, in the end you have to design a PC. I doubt there
are many designers out there with experience in this (dealing with
BIOS, etc.), so it will be quite some design effort. Projects of that
size are typically cost-critical and will try to find cheaper
solutions. The other option is to use this Atom+FPGA as a module
(like the Qseven Atom modules), which takes away a lot of design
effort from the product developer. (There is already one available:
http://de.kontron.com/products/boards+and+mezzanines/pc104+sbc+and+peripherals/microspace+pc104+cpus/msmst.html.
Not sure how to connect e.g. a DDR2 SDRAM to the FPGA. The headers do
not look very promising...) (But a module manufacturer could just as
easily integrate any FPGA on an Atom module, no need to use the Intel
combo.)
- It is possible to integrate a soft-core CPU that runs uClinux in a
$10 FPGA. FPGA products with a Cortex-A9 are on the roadmap of Altera
and I think also Xilinx (no idea about pricing yet, may be even more
expensive...). Then there is also the option of using a Cortex-A8 CPU
with many peripherals (or any other) + an FPGA. These are the
solutions that Intel has to compete with, both on pricing and on
power consumption.
- In the past, hard-core-CPU + FPGA combinations from Altera and
Xilinx were no success.
- If there are doubts whether this product from Intel has a future,
whether they are really serious about it in the long term, customers
may keep away from using it.

I am curious how this develops. I think the module solution, where
you get a quite big FPGA for an attractive price, will be the most
interesting thing. For these applications, pricing is not that
critical, and development should be easy/quick. But I am not sure if
this market is large enough to satisfy Intel in the long term...

Thomas

www.entner-electronics.com

P.S.: Sorry for cross-posting, but I think this is interesting for
both newsgroups.
 
Thomas Entner wrote:

Not sure how to connect e.g. a DDR2-SDRAM to the FPGA.
There is no problem. You can connect it even to an el cheapo Spartan
FPGA. The only issue might be the supply voltage: you must reserve an
entire bank to match your memory chip's ratings, which can be a
problem in low-pin-count devices.

FPGA-Products with Cortex A9 are on the roadmap of Altera
and I think also Xilinx
Which IMHO would be even better than x86.

- In the past, Hard-Core-CPU + FPGA-combinations from Altera and
Xilinx were no success.
I don't think the Virtex family is a failure.

I am curios how this develops. I think the module-solution, where you
get a quite big FPGA for an attractive price, will be the most
interesting thing.
Yeah... the project may end up as a big, cheap FPGA with a built-in
x86-based bootloader. :D

Best regards
Piotr Wyderski
 
On Nov 24, 12:03 am, Thomas Entner <thomas.ent...@entner-
electronics.com> wrote:
I am mainly referring to: http://www.techfocusmedia.net/embeddedtechnologyjournal/feature_artic...

I think there will be some market for such a device, especially in the
medical and industrial control market, maybe also networking. In fact
we did already design a board with an FPGA connected to a Qseven Atom
module by PCIe. Also I think, that $72 is a reasonable price for that
kind of FPGA (so the Atom is really almost for free, if it is not even
"negative" priced...)

But I think there are some "buts":
- There are 3 main (new) families of FPGAs from Altera (low-, mid- and
high-end), all with some "sub-families" and a lot of different family-
members varying in size (and price) dramatically. The same is with
Xilinx, Lattice and Actel (ehm... Microsemi). And then there are some
newcomers (e.g. Achronix, which is fabbed by, hmmm, Intel). From
Intel, I can choose from one FPGA. At least the Atom offers different
speed-grades... So, if I just want to add just 20 UARTs to my design,
the FPGA will be way too large/expensive. For some high end number-
crunching-support, or integrating a lot of south-bridge-functionality,
it might still be too small.
- To use the Atom, in the end you have to design a PC. I doubt there
are many designers out there having experience with this (dealing with
BIOS, etc.), it will be quite some design-effort. Projects with that
size are typically cost-critical and will try to find cheaper
solutions. The other option is to use this Atom+FPGA as a module (like
the Qseven Atom-modules) which takes away a lot of design effort from
the product developer. (There is already one available:http://de.kontron.com/products/boards+and+mezzanines/pc104+sbc+and+pe....
Not sure how to connect e.g. a DDR2-SDRAM to the FPGA. The headers do
not look very promising...) (But as easily a module-manufacturer could
integrate any FPGA on an Atom-module, no need to use the Intel-combo)
- It is possible to integrate a soft-core-CPU that runs uC-Linux in a
$10 FPGA. FPGA-Products with Cortex A9 are on the roadmap of Altera
and I think also Xilinx (no idea about pricing yet, may be even more
expensive...) Then there is also the option of using a Cortex-A8-CPU
with many peripherals (or any other) + a FPGA. This will be the
solutions that Intel has to compete with, both with pricing and also
power-consumption.
- In the past, Hard-Core-CPU + FPGA-combinations from Altera and
Xilinx were no success.
- Doubts if this product from Intel has a future, if they are really
serious with it in the long term, may customers keep away from using
it.

I am curious how this develops. I think the module-solution, where you
get a quite big FPGA for an attractive price, will be the most
interesting thing. For this applications, pricing is not that
critical, development should be easy/quick. But I am not sure if this
market is large enough to satisfy Intel in the long term...

Thomas

www.entner-electronics.com

P.S.: Sorry for cross-posting, but I think this is interesting for
both newsgroups.
So Stellarton is basically a product that is the optimal choice for
almost no application: almost always too small, or too expensive.

So why did Intel have this one built? Who ordered it? Which is the One
Single App where an Atom E and 60,000 FPGA gates are the optimal
combination?

- Jan Tångring, journalist
 
On 11/24/2010 07:01 AM, gnirre wrote:

So Stellarton is basically a product that is the optimal choice for
almost no application, almost always too small, or too expensive.

So why did Intel have this one built? Who ordered it? Which is the One
Single App where an Atom E and 60 000 fpga gates is the optimal
combination?
Well, I sort of have one. I make some boards that interface to x86
systems through the parallel port and generate signals to run stepper
or servo drives. I currently use a Xilinx Spartan-IIE 50K-gate part,
and have to add several voltage regulators and a bunch of voltage
level translators. I definitely don't want to get into CPU board
design; the high data rates to memory, etc., make any design somewhat
speculative, and I really want to avoid multiple revisions. Also, I'm
not set up to do anything finer than 0.4 mm lead pitch on leaded
parts, and have no BGA capability. If there were a board that brought
out the necessary number of pins, I'd still need to supply the level
translator chips.

So, really, the only benefit is a better interface between the CPU
and the FPGA, as opposed to the parallel port in EPP mode. So, while
it would be possible to use this platform, I'm not sure it buys me
anything. I can get an Atom mini-ITX motherboard with memory, power
supply and SSD for a little over $100, and it still has an EPP
parallel port.

Oh, the application is a CNC motion control program called EMC2, it runs
under Linux with RTAI.

Jon
 
gnirre <gnirre@gmail.com> writes:

So why did Intel have this one built? Who ordered it? Which is the One
Single App where an Atom E and 60 000 fpga gates is the optimal
combination?
I don't know which is the one single application for this device.
But it would be nice to use the FPGA to connect to video/audio
interfaces and do image processing, then let the Atom run the GUI
and the network stack.

Petter
--
..sig removed by request.
 
On 11/24/2010 3:06 AM, Piotr Wyderski wrote:
Thomas Entner wrote:

- In the past, Hard-Core-CPU + FPGA-combinations from Altera and
Xilinx were no success.

I don't think the Virtex family is a failure.
I have heard people at Xilinx say that the embedded PowerPC CPUs in
the Virtex line were a failure, $-wise.
 
In article <nYKdnXW5a-Bi7HDRnZ2dnUVZ_q6dnZ2d@giganews.com>,
Jon Elson <jmelson@wustl.edu> wrote:
So why did Intel have this one built? Who ordered it? Which is the One
Single App where an Atom E and 60 000 fpga gates is the optimal
combination?
Well, I sort of have one. I make some boards that interface to X86
systems through the parallel port, and generate signals to run stepper
or servo drives. I currently use a Xilinx Spartan IIE 50K gate part,
and have to add several voltage regulators and a bunch of voltage level
translators. I definitely don't want to get into CPU board design, the
high data rates to memory, etc. make any design somewhat speculative,
and I really want to avoid multiple revisions. Also, I'm not set up to
do anything finer than 0.4 mm lead pitch on leaded parts, no BGA
capability. If there were a board that brought out the necessary number
of pins, then I'd still need to supply the level translator chips.

So, really, the only benefit is a better interface between CPU and the
FPGA, as opposed to the parallel port in EPP mode.
I think two lanes of PCI-E are a qualitatively better interface than
the parallel port in EPP mode (4 gigabits per second each way); on
the other hand you obviously don't need gigabit interface rates to go
to servo motors, and the Atom+Altera chip will be using low-voltage
IO and come on aggressive lead-free BGAs, so you still have the
soldering issues and need the level translators.
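Tom's bandwidth figure checks out as a back-of-envelope number, assuming gen-1 PCIe lanes at 2.5 GT/s with 8b/10b line coding:

```python
# PCIe gen-1 back-of-envelope: 2.5 GT/s per lane, 8b/10b coding means
# only 8 of every 10 transferred bits are payload.
lanes = 2
raw_gt_per_s = 2.5
usable_gbit_per_s = lanes * raw_gt_per_s * 8 / 10
print(usable_gbit_per_s)  # 4.0 Gb/s each direction
```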

Tom
 
Andy "Krazy" Glew wrote:

I have heard people at Xilinx say that the embedded PowerPC CPUs in the
Virtex line were a failure. $ wise.
OK -- if *you* say that, I take it for granted.

Best regards
Piotr Wyderski
 
On Nov 24, 3:01 pm, gnirre <gni...@gmail.com> wrote:
On Nov 24, 12:03 am, Thomas Entner <thomas.ent...@entner-



So Stellarton is basically a product that is the optimal choice for
almost no application, almost always too small, or too expensive.

So why did Intel have this one built?
Being cynical: because they have MCM technology.
Unlike the current generation, Intel's next-generation desktop/laptop
processors are single-die, so the MCM packaging guys @Intel will have
to go unless there is something new to keep them busy.

Who ordered it? Which is the One
Single App
The whole point behind the combo is: Not One App But Many.

where an Atom E and 60 000 fpga gates is the optimal
combination?

- Jan Tångring, journalist
~60,000 equivalent logic elements. That's something like 1.5M to 2M
gates in fabric alone, without counting embedded memory, multipliers,
the "hard" PCIe IP block, transceivers, etc.
Intel's site doesn't tell us which Altera chip they are using, but
from the description it looks like an EP2AGX65.
 
