EDK : FSL macros defined by Xilinx are wrong

On Nov 24, 8:01 am, gnirre <gni...@gmail.com> wrote:
On Nov 24, 12:03 am, Thomas Entner <thomas.ent...@entner-electronics.com> wrote:
I am mainly referring to: http://www.techfocusmedia.net/embeddedtechnologyjournal/feature_artic...

I think there will be some market for such a device, especially in the
medical and industrial control market, maybe also networking. In fact
we did already design a board with an FPGA connected to a Qseven Atom
module by PCIe. Also I think that $72 is a reasonable price for that
kind of FPGA (so the Atom is really almost for free, if it is not even
"negative" priced...)

But I think there are some "buts":
- There are 3 main (new) families of FPGAs from Altera (low-, mid- and
high-end), all with some "sub-families" and a lot of different family-
members varying in size (and price) dramatically. The same is true of
Xilinx, Lattice and Actel (ehm... Microsemi). And then there are some
newcomers (e.g. Achronix, which is fabbed by, hmmm, Intel). From
Intel, I can choose from one FPGA. At least the Atom offers different
speed-grades... So, if I just want to add 20 UARTs to my design,
the FPGA will be way too large/expensive. For some high end number-
crunching-support, or integrating a lot of south-bridge-functionality,
it might still be too small.
- To use the Atom, in the end you have to design a PC. I doubt there
are many designers out there with experience in this (dealing with
BIOS, etc.), so it will be quite some design effort. Projects of that
size are typically cost-critical and will try to find cheaper
solutions. The other option is to use this Atom+FPGA as a module (like
the Qseven Atom-modules) which takes away a lot of design effort from
the product developer. (There is already one available:http://de.kontron.com/products/boards+and+mezzanines/pc104+sbc+and+pe....
Not sure how to connect e.g. a DDR2-SDRAM to the FPGA. The headers do
not look very promising...) (But a module-manufacturer could just as
easily integrate any FPGA on an Atom-module, no need to use the Intel combo)
- It is possible to integrate a soft-core-CPU that runs uC-Linux in a
$10 FPGA. FPGA products with a Cortex-A9 are on the roadmap of Altera
and I think also of Xilinx (no idea about pricing yet, it may be even
more expensive...) Then there is also the option of using a Cortex-A8
CPU with many peripherals (or any other) + an FPGA. These are the
solutions that Intel has to compete with, both on pricing and on
power consumption.
- In the past, hard-core-CPU + FPGA combinations from Altera and
Xilinx were not a success.
- If there are doubts whether this product from Intel has a future,
i.e. whether they are really serious about it in the long term,
customers may keep away from using it.

I am curious how this develops. I think the module solution, where you
get a quite big FPGA for an attractive price, will be the most
interesting thing. For these applications, pricing is not that
critical, and development should be easy/quick. But I am not sure if this
market is large enough to satisfy Intel in the long term...

Thomas

www.entner-electronics.com

P.S.: Sorry for cross-posting, but I think this is interesting for
both newsgroups.

So Stellarton is basically a product that is the optimal choice for
almost no application, almost always too small, or too expensive.

So why did Intel have this one built? Who ordered it? Which is the One
Single App where an Atom E and 60 000 fpga gates is the optimal
combination?

Being as big as it is and having the resources it has, including a
temporarily unassailable lead in a very profitable business, Intel can
afford to throw things randomly at the wall to see if they stick.
They go around buying and selling companies, announcing and abandoning
technology initiatives, and just generally behaving like a drunken
sailor with too much money to spend.

Intel can afford to do that. In fact, it almost needs to do that, as
Intel's track record at barging into new businesses would have put any
normal company out of business, but Intel absolutely must find new
business areas and/or materially expand the territory implied by its
x86 franchise--or go the way of Continental Can.

Robert.
 
On 11/25/2010 8:49 AM, Michael S wrote:
On Nov 24, 3:01 pm, gnirre<gni...@gmail.com> wrote:
On Nov 24, 12:03 am, Thomas Entner <thomas.ent...@entner-electronics.com> wrote:
So Stellarton is basically a product that is the optimal choice for
almost no application, almost always too small, or too expensive.

So why did Intel have this one built?

Being cynical, because they have MCM technology.
Unlike the current generation, Intel's next-generation desktop/laptop
processors are single die, so the MCM packaging guys @Intel will have
to go unless there is something new to keep them busy.
MCM is a great way to build real world prototype products. Place two or
more chips in a package, sell it at a reasonable price - and if a lot of
people buy it, then and only then should you invest in a design team to
put it into a single chip and reduce costs.

Avoids those pointless hypothetical arguments about what people would
buy. This way, you get to find out.

Of course, in this case it is a real world prototype of a chip good for
prototyping.

Plus - with MCM you can imagine doing stuff that we can't currently put
on the same die. RF?
 
Intel site doesn't tell us which Altera chip they are using, but from
description it looks like EP2AGX65.
It is an Altera Arria II GX, so you might well be right.
 
On 11/24/2010 05:58 PM, Thomas Womack wrote:

I think two lanes of PCI-E is a qualitatively better interface than
the parallel port in EPP mode (4 gigabit per second each way); on the
other hand you obviously don't need gigabit interface rates to go to
servo motors, and the Atom+Altera chip will be using low-voltage IO
and on aggressive lead-free BGAs so you still have the soldering
issues and need the level translators.
Even one PCI-E lane would be way higher performance than the EPP
parallel port. But, that requires putting a board inside the PC, rather
than in the CNC machine, and also involves some IP from somebody.
I don't have tools to debug the PCI-E, so if it doesn't work, I'm pretty
well stuck. If the EPP parallel port has a problem, I hook up my logic
analyzer and see what is wrong. Also, some of these boards I make have
rows of solid state relays and wire terminal blocks, so they won't fit
in a PC. Now, there is a LOT to be said for making a PCI-E adaptor, and
maybe even using something like the EPP scheme, just using a much better
cable allowing the speed to be turned up. A twisted-pair ribbon cable
could probably handle handshaked byte transfers in 200 ns easily.
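The 200 ns figure above works out to roughly 40 Mb/s, which makes the gap to even a single PCI-E lane easy to quantify. A back-of-envelope sketch (the PCIe numbers assume a gen-1 lane at 2.5 Gb/s raw with 8b/10b coding; nothing here is measured):

```python
# Back-of-envelope throughput comparison. Assumed figures: EPP-style
# handshaked transfers at one byte per 200 ns (per the post above), and
# a single PCIe 1.x lane at 2.5 Gb/s raw with 8b/10b coding overhead.

epp_bytes_per_sec = 1 / 200e-9           # one byte every 200 ns
epp_mbit = epp_bytes_per_sec * 8 / 1e6   # megabits per second

pcie_lane_mbit = 2.5e9 * 8 / 10 / 1e6    # 8b/10b: 8 data bits per 10 line bits

print(f"EPP-style link : {epp_mbit:.0f} Mb/s")        # 40 Mb/s
print(f"PCIe x1 gen1   : {pcie_lane_mbit:.0f} Mb/s")  # 2000 Mb/s
print(f"ratio          : {pcie_lane_mbit / epp_mbit:.0f}x")  # 50x
```

Even so, as the post argues, raw bandwidth is not the constraint for servo control; debuggability with a logic analyzer is.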

So, I want to avoid BGAs (I do down to 0.4mm leaded chips), have to
pretty much stay with open-source IP as I am a VERY small volume
manufacturer, and probably want to keep the main board outside the PC
cabinet. If I can do everything I want with the EPP parallel port, that
seems to be the best solution, it keeps the system down to one board.

But, the Atom + FPGA does have some possibilities, and I am certainly
keeping an eye on the technology. I also have worked with the Beagle
Board, and am waiting for RTAI to get ported to it to move forward on
the CNC project. I have built a little TCP server based on it that
operates some signal switches in an inaccessible location.

Jon
 
On 11/25/2010 12:19 PM, Robert Myers wrote:

Intel can afford to do that. In fact, it almost needs to do that, as
Intel's track record at barging into new businesses would have put any
normal company out of business, but Intel absolutely must find new
business areas and/or materially expand the territory implied by its
x86 franchise--or go the way of Continental Can.
Yeah, in the old days, when porting apps and OSes to different
architectures was a multi-year/multi-man project, a lock on a particular
architecture was both a competitive edge AND a huge curse.
Now that software can sometimes be recompiled for a different
architecture literally in hours, being locked to the hoary, decades-old
X86 architecture is in itself a curse, and why the ARM Cortex processors
can have an entire system run on under 3 W when the Atom needs 20+. If
Intel can't move to something a lot more modern and efficient, it may be
NECESSARY for them to die.

Jon
 
In article <MfqdnSqFPqJ2KWvRnZ2dnUVZ_jmdnZ2d@giganews.com>,
Jon Elson <jmelson@wustl.edu> wrote:
On 11/24/2010 05:58 PM, Thomas Womack wrote:

I think two lanes of PCI-E is a qualitatively better interface than
the parallel port in EPP mode (4 gigabit per second each way); on the
other hand you obviously don't need gigabit interface rates to go to
servo motors, and the Atom+Altera chip will be using low-voltage IO
and on aggressive lead-free BGAs so you still have the soldering
issues and need the level translators.
Even one PCI-E lane would be way higher performance than the EPP
parallel port. But, that requires putting a board inside the PC, rather
than in the CNC machine, and also involves some IP from somebody.
I don't have tools to debug the PCI-E, so if it doesn't work, I'm pretty
well stuck. If the EPP parallel port has a problem, I hook up my logic
analyzer and see what is wrong. Also, some of these boards I make have
rows of solid state relays and wire terminal blocks, so they won't fit
in a PC. Now, there is a LOT to be said for making a PCI-E adaptor, and
maybe even using something like the EPP scheme, just using a much better
cable allowing the speed to be turned up. A twisted-pair ribbon cable
could probably handle handshaked byte transfers in 200 ns easily.
The relevance of PCI-E is simply that the connection between the Atom
and the Altera FPGA in the composite chip is by two lanes of PCI-E
within the chip package (and, it appears, you have to use transceiver
resources in the FPGA fabric to provide the PCI-E interface; not clear
whether Intel provides you with that IP and what the licensing terms
are); I presume that a lot of the I/O pins of the Altera chip are
brought out to balls on the bottom of the package, and thinking that
you'd connect those via level translators to the relays and thence the
servos/stepper motors ... they're about four orders of magnitude
faster than you need for stepper motors.

But, yes, it seems as if your application realm is much more suited to
something like a beagleboard, unless you actually want to run
Solidworks on the motor-controller computer inside the machine tool.

Tom
 
On Dec 6, 1:00 pm, Andy <jonesa...@comcast.net> wrote:
I think I would use a function for the intermediate calculation, and
then call the function in both concurrent assignment statements per
the original implementation.

Integers give you the benefits of bounds checking in simulation (even
below the 2^n granularity if desired), and a big improvement in
simulation performance, especially if integers are widely used in the
design (instead of vectors).

Andy

I know everyone says that integers run faster, but is this a
significant effect? Has it been measured or at least verified on
current simulators?

Rick
They certainly use less memory in simulation than wide vectors. An integer
(32 bit) is 4 bytes. A std_logic_vector (9 states) is 3 bits per bit. If
your data is >= 11 bits in width, integers are more efficient.

Also, no need for resolution function calls, either.


---------------------------------------
Posted through http://www.FPGARelated.com
 
On Dec 6, 1:00 pm, Andy <jonesa...@comcast.net> wrote:
I think I would use a function for the intermediate calculation, and
then call the function in both concurrent assignment statements per
the original implementation.

Integers give you the benefits of bounds checking in simulation (even
below the 2^n granularity if desired), and a big improvement in
simulation performance, especially if integers are widely used in the
design (instead of vectors).

Andy

I know everyone says that integers run faster, but is this a
significant effect? Has it been measured or at least verified on
current simulators?

Rick

[Correction]

They certainly use less memory in simulation than wide vectors. An integer
(32 bit) is 4 bytes. A std_logic_vector (9 states) is 4 bits per bit. If
your data is > 8 bits in width, integers are more efficient.

Also, no need for resolution function calls, either.
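The break-even arithmetic in the corrected figures above can be checked mechanically. This only covers the minimum footprint (4 bits per 9-state element), not what any particular simulator actually allocates:

```python
# Minimum simulation storage: a 32-bit integer is 4 bytes; each 9-state
# std_logic element needs ceil(log2(9)) = 4 bits. So a std_logic_vector
# wider than 8 bits already exceeds the integer's 4 bytes.
import math

BITS_PER_STD_LOGIC = math.ceil(math.log2(9))   # 9 states -> 4 bits

def slv_bytes(width):
    """Minimum bytes to store a std_logic_vector of the given width."""
    return math.ceil(width * BITS_PER_STD_LOGIC / 8)

INTEGER_BYTES = 4
for width in (8, 9, 16, 32):
    print(f"{width:2d}-bit vector: {slv_bytes(width)} bytes vs integer: {INTEGER_BYTES}")
```

An 8-bit vector ties the integer at 4 bytes; anything wider costs more, matching the "> 8 bits" crossover stated above.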


---------------------------------------
Posted through http://www.FPGARelated.com
 
On Dec 8, 12:07 pm, "RCIngham" wrote:

[integers]
certainly use less memory in simulation than wide vectors. An integer
(32 bit) is 4 bytes. A std_logic_vector (9 states) is 4 bits per bit. If
your data is > 8 bits in width, integers are more efficient.
That's the minimum memory footprint. In practice, simulators probably
use a representation that gives them less work to do in packing and
unpacking the bits. For example, enumerations are probably stored in
8 bits rather than 4.

Having said that, all tool vendors have accelerated implementations
of numeric_std and they may have other, proprietary representations
to support the acceleration. I don't know if this really happens, but
if I were doing it I'd be tempted to use a 32-bit integer to store the
value if it has only 0/1 bits in it, which is true most of the time
for most uses of signed/unsigned vectors. And then I'd keep
the full vector in an array of bytes, and a flag to say which
representation was currently in use. That would allow for
excellent arithmetic performance in the majority of cases,
while allowing full std_logic behaviour if needed.

Also, no need for resolution function calls, either.
If a simulator can statically determine that a
signal has only one driving process, it can
skip the resolution function altogether.
Once again I don't know whether real commercial
simulators do this, but it seems like an obvious
and easy optimization for the IEEE standard types,
all of which have well-behaved resolution functions
that are identity functions for a single driver.
--
Jonathan Bromley
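The dual-representation idea sketched above can be made concrete in a few lines. This is a toy model only, speculation about what a simulator might do internally (as the post itself stresses), with made-up names throughout:

```python
# Toy model of the dual-representation idea: keep a plain integer while
# a vector holds only '0'/'1' bits, and fall back to a per-bit array as
# soon as any metavalue ('X', 'U', 'Z', ...) appears.

class FastVector:
    def __init__(self, width):
        self.width = width
        self.packed = True      # flag: integer representation in use?
        self.value = 0          # valid only while packed
        self.bits = None        # per-bit list (LSB first), valid when unpacked

    def add(self, n):
        if self.packed:         # fast path: plain machine arithmetic
            self.value = (self.value + n) % (1 << self.width)
        else:
            raise ValueError("metavalues present; need full std_logic math")

    def set_bit(self, index, char):
        if char in "01" and self.packed:
            mask = 1 << index
            self.value = (self.value | mask) if char == "1" else (self.value & ~mask)
        else:                   # unpack once a metavalue shows up
            if self.packed:
                self.bits = ["1" if (self.value >> i) & 1 else "0"
                             for i in range(self.width)]
                self.packed = False
            self.bits[index] = char

v = FastVector(8)
v.add(200); v.add(100)          # wraps modulo 2**8
print(v.packed, v.value)        # True 44
v.set_bit(3, "X")
print(v.packed, v.bits[3])      # False X
```

The majority of arithmetic stays on the fast integer path; the per-bit path exists only for the (rare) metavalue cases, which is exactly the trade-off described above.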
 
On 7 Dez., 17:21, "RCIngham"
<robert.ingham@n_o_s_p_a_m.n_o_s_p_a_m.gmail.com> wrote:
On Dec 6, 1:00 pm, Andy <jonesa...@comcast.net> wrote:
Also, no need for resolution function calls, either.
Yes. Everybody should be using std_ulogic instead of std_logic. It is
a lot faster.
It even catches some bugs earlier because now multiple drivers are
illegal at compile time.

BUT: The tool vendors make it very hard to do that.
For some reason only known to Xilinx (other vendors are similar), all
entities in timing simulation
models are changed to std_logic. Also, all ports of their IP are
std_logic.

Today's devices do not even support internal tristate signals, so why
use resolution functions?

Kolja
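For readers unfamiliar with why std_logic costs anything at all: every assignment to a resolved signal goes through a resolution function, whereas std_ulogic simply forbids a second driver at analysis time. A much-simplified model of the IEEE resolution behaviour, restricted to a '0'/'1'/'Z'/'X' subset for illustration:

```python
# Simplified two-driver resolution, '0'/'1'/'Z'/'X' subset only: a 'Z'
# driver gives way; agreeing drivers win; conflicting drivers yield 'X'.
# (The real IEEE 1164 table covers all nine states.)

def resolve(a, b):
    if a == "Z":
        return b
    if b == "Z":
        return a
    return a if a == b else "X"

print(resolve("1", "Z"))   # the 'Z' driver gives way
print(resolve("0", "1"))   # bus contention
```

The point is that this function runs on every update of a resolved signal, which unresolved types avoid entirely, and which is pure overhead on devices with no internal tristates.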
 
On 12/8/2010 12:04 PM, Kolja Sulimma wrote:

Yes. Everybody should be using std_ulogic instead of std_logic. It is
a lot faster.
It even catches some bugs earlier because now multiple drivers are
illegal at compile time.

BUT: The tool vendors make it very hard to do that.
For some reason only known to Xilinx (other vendors are similar), all
entities in timing simulation
models are changed to std_logic. Also, all ports of their IP are
std_logic.
It is std_ulogic_vector that is the problem.
std_ulogic is compatible for bits.

-- Mike Treseler
 
On Dec 8, 5:10 pm, Mike Treseler <mtrese...@gmail.com> wrote:
On 12/8/2010 12:04 PM, Kolja Sulimma wrote:

Yes. Everybody should be using std_ulogic instead of std_logic. It is
a lot faster.
It even catches some bugs earlier because now multiple drivers are
illegal at compile time.

BUT: The tool vendors make it very hard to do that.
For some reason only known to Xilinx (other vendors are similar), all
entities in timing simulation
models are changed to std_logic. Also, all ports of their IP are
std_logic.

It is std_ulogic_vector that is the problem.
std_ulogic is compatible for bits.
Yes, but it is mostly an easily overcome problem. If your signals are
all std_u and you're caught having to interface to a widget with std_l
then the vector conversions can be made right in the port map.

The_std_logic_widget :entity std_logic_widget port map
(
gazinta_std_logic_vector =>
std_logic_vector(some_std_ulogic_vector),
std_ulogic_vector(gazouta_std_logic_vector) =>
some_other_std_ulogic_vector
);

It does make for a bit more space on the lines in the port map in
order to line up the => into a tidy column...but that can be improved
with two short name aliases if you want. That way you can always use
std_ulogic/vector everywhere you write your own code to get the
benefit of the compiler catching multiple drivers without having to
debug to find that problem.

Kevin Jennings
 
On Dec 8, 10:42 pm, KJ <kkjenni...@sbcglobal.net> wrote:
On Dec 8, 5:10 pm, Mike Treseler <mtrese...@gmail.com> wrote:



On 12/8/2010 12:04 PM, Kolja Sulimma wrote:

Yes. Everybody should be using std_ulogic instead of std_logic. It is
a lot faster.
It even catches some bugs earlier because now multiple drivers are
illegal at compile time.

BUT: The tool vendors make it very hard to do that.
For some reason only known to Xilinx (other vendors are similar), all
entities in timing simulation
models are changed to std_logic. Also, all ports of their IP are
std_logic.

It is std_ulogic_vector that is the problem.
std_ulogic is compatible for bits.

Yes, but it is mostly an easily overcome problem.  If your signals are
all std_u and you're caught having to interface to a widget with std_l
then the vector conversions can be made right in the port map.

The_std_logic_widget :entity std_logic_widget port map
(
   gazinta_std_logic_vector =>
std_logic_vector(some_std_ulogic_vector),
   std_ulogic_vector(gazouta_std_logic_vector) =>
some_other_std_ulogic_vector
);

It does make for a bit more space on the lines in the port map in
order to line up the => into a tidy column...but that can be improved
with two short name aliases if you want.  That way you can always use
std_ulogic/vector everywhere you write your own code to get the
benefit of the compiler catching multiple drivers without having to
debug to find that problem.

Kevin Jennings
I don't have a problem with multiple drivers very often, but it could
help once in a while. But the downside of std_ulogic is that it
doesn't have the math operators that unsigned and signed have, does it?
Maybe this has been discussed here before and I have forgotten, but if
I primarily use numeric_std types and seldom use std_logic_vector
(heck, forget the math, just saving on the typing is enough for me to
use numeric_std) isn't the whole ulogic/logic thing moot?

I am using integers more as I get used to the ways of getting them to
do the things I want. Integers are good for math related signals, but
not so good for logic. There was recently a thread here or in
comp.lang.vhdl about (or that got hijacked into) making integers capable
of logic and bit operations. I think the suggestion was to treat all
integers as being implemented as 2's complement by default. If you
could do logic operations on integers I might not use anything
else...

But how would the various std_logic states be handled if integers are
used? If the signal is uninitialized, std_logic shows this with a 'U'
I believe. You can't have multiple drivers for an integer, so I don't
know that the other values of std_logic would be missed when using
integer. Do I need 'X', 'H', 'L' if there is only one driver? I can
see where 'U' and possibly '-' would be useful which you lose with
integers though.

Rick
 
On Friday, December 10, 2010 3:57:43 PM UTC+1, Thomas Stanka wrote:
Open cores tend to have a lack in documentation and verification,
I'd even say a lack in everything. Very often, architecture and implementation choices aren't brilliant either. GRLIB is well above most of what you can find at opencores.org, and AMBA is a lesser evil.

I would also take what it said in the Opencores.org newsletter with a grain of salt. Those people have lots of delusions.

which is a no-go for developing space electronics.
Any serious electronics at all.
S.
 
On 12/18/2010 1:35 AM, Sebastien Bourdeauducq wrote:
On Friday, December 10, 2010 3:57:43 PM UTC+1, Thomas Stanka wrote:
Open cores tend to have a lack in documentation and verification,

I'd even say a lack in everything. Very often, architecture and
implementation choices aren't brilliant either. GRLIB is well above
most of what you can find at opencores.org, and AMBA is a lesser
evil.
Appreciate your feedback, even though I believe that a review of
opencores is a bit OT. I may agree with you that a different location
could be chosen for the project (can you propose any valid
alternative to opencores?), but the main idea was to sample an overall
feeling about the project itself.

Thanks anyway!

Al
 
On Monday, December 20, 2010 5:28:12 PM UTC+1, al wrote:
can you propose any valid alternative to opencores?
You can simply use Github. It's not specific to logic design, but it is superior to many other project hosting websites in many ways. Since I switched to it (and to Git), I wonder how I have lived so long with Subversion and Trac...

S.
 
I dont think you should have much problem learning Verilog as I found it
easier than VHDL. I would read some of the articles at Sunburst design as
they explain some of the problems you might encounter like blocking vs
non-blocking. I wouldnt bother buying any books as you can get most info
off the internet.

Regards

Jon

---------------------------------------
Posted through http://www.FPGARelated.com
 
"maxascent" <maxascent@n_o_s_p_a_m.n_o_s_p_a_m.yahoo.co.uk> wrote
in message news:TqednQam6MfNN63QnZ2dnUVZ_g-dnZ2d@giganews.com...
I dont think you should have much problem learning Verilog as
I found it easier than VHDL. I would read some of the articles
at Sunburst design as they explain some of the problems you might
encounter like blocking vs non-blocking. I wouldnt bother buying
any books as you can get most info off the internet. Regards
Jon
---------------------------------------
Posted through http://www.FPGARelated.com
I agree with Jon if you have prior knowledge in the art;
but, if you are like me, learning from scratch for a particular
device, I bought this:

"FPGA PROTOTYPING BY VERILOG EXAMPLES"
'XILINX SPARTAN™-3 VERSION'
PONG P. CHU

It is very informative, and helps you build things properly.

I got mine through my sister, who works at Baker and Taylor.

Bill
 
"John Larkin" <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in message
news:6ndnj6taqep5e1fno3cj0gkcehv892m018@4ax.com...
On Sat, 22 Jan 2011 23:03:16 -0600, "krw@att.bizzzzzzzzzzzz"
<krw@att.bizzzzzzzzzzzz> wrote:

On Sat, 22 Jan 2011 18:20:55 -0800, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

http://www.eetimes.com/electronics-news/4212400/Xilinx-to-shutter-French-R-D-operation

Yikes, this explains some stuff. I wonder how long it will take to
undo the damage.

Damage? The damage caused by closing a software development lab?

I meant the damage likely *done* by that lab. We'd been speculating
how Xilinx managed to snarl up their software so thoroughly, and

I can't imagine why they'd outsource something this important to France.
Me neither, don't they know that all brilliant programmers are in the US!

Hans
www.ht-lab.com
 
On Sat, 22 Jan 2011 23:27:40 -0600, "krw@att.bizzzzzzzzzzzz"
<krw@att.bizzzzzzzzzzzz> wrote:

|I don't know why anyone would do business in France (or most of Europe), given
|their work rules.


Possible local content requirements?

james
 
