EDK : FSL macros defined by Xilinx are wrong

John_H wrote:
Did you declare their datasheets a piece of crap because
they don't provide what you believe to be "proper" power data?
Actually, I had two large proposals out, with one of them pressing me
like hell about the power design; the FAE could not get me an
answer, and the potential customer gave up. I got drilled about several
other items the data sheet could not answer either. That was a useful
exercise in learning that the info just isn't available.

Did you call their business methods a scam?
And when was that?

Geeze, man - listen to yourself. Ask the FAEs that have been so helpful to
look at your threads and see if they'll side with you or try to help educate
you in the other aspects of Xilinx that are there to benefit the customer
base.
If Austin and Peter want to be confrontational .... it's their choice.
I've tried to cool that. I'm also not going to back down and let them
ridicule me over everything they disagree with, without a cost to them.
I'm just a few years younger, have spent my life in other engineering
areas, and I do understand my business areas. There are some very clear
differences between our perspectives, which are not based on absolute
rights and wrongs, and would normally be matters to agree to disagree
on. If their intent is to destroy my credibility because I'm not an
insider and I have a different viewpoint they don't like ... then I
will reluctantly play that game as well until they get tired of being
burned too - or Xilinx management steps in and pulls the plug. If they
can act responsibly, so will I. But I will continue to push for things
that are needed for a pure reconfigurable computing marketplace, a
niche market that is growing, and one they clearly have mixed interest in.

It was fully amusing to watch last week go from Austin and the other
poster saying that RC and PR were money sinks and were being dropped,
to, right after I pressed to open source it, another Xilinx guy stepping
in and saying wait a year and they will finally get it fixed in the next
major release. That is still a tiled PR solution using PAR, which is
just too slow and requires far too much floor planning for my market.
With Austin asserting JBits is dead, that kills the alternate strategies
of using the backend tools from JHDL or JHDLBits into JBits. Xilinx has
very mixed internal positions on that whole tool set. I had been told
that I would never be able to use JHDLBits; then Austin pops in and
tries to change that. Then the next week he is declaring JBits a failure
and dead. Then another source tells me Austin doesn't speak for the
JBits team, and that JBits isn't dead.

The written word is often a poor way to communicate for those who don't have
a solid understanding of professional interaction. Often all it takes is a
good conversation - face to face - to help the understanding come through.
If you're a customer in need and you ask for help from a respectable
company, you get courteous assistance. On this newsgroup you haven't been a
customer in need, you've been an agitator - specifically in this tired
thread you've been an underinformed "devil's advocate."
That's a two way street. Yes, I've pressed hard, but the personal
attacks in response were never justified. Frankly, with a business
that stands behind Austin and Peter, I've considered not ever doing
business with Xilinx again if that level of utter arrogance is to be
expected. I have about 4K Xilinx parts in my inventory that I can dump,
and never deal with the company again. Or I can do as I've planned for
two years, and that is build a new company around reconfigurable
computing boards. I've pressed the point that the current ISE software
model, with very poor place and route for compile, load, and go
operations, just doesn't fit that market. It was designed to do an
excellent job at all costs, not a very good job quickly. Its ability
to handle dynamic reconfiguration has been marginal and error prone.
After talking with several people that had gone down that path, the
suggestion was to roll my own based on the JBits and JHDL code. The
legal issues with that are less than clear. Nor do the high ISE
per-seat license costs work when trying to sell FPGAs as a very fast
computer accelerator.

That Xilinx is a bit thin-skinned about criticism, even constructive
criticism, is a bit of an understatement from my perspective. I do
know that when my FAE cannot provide worst case power numbers, and I'm
being pressed hard for them, there are problems. The customer had
already had the same discussion, and lack of results, on a prior
proposal and was WAY ahead of me. There are also problems when
customer interfaces are not trained to listen to the customer's needs,
and instead jump in and argue why the customer is wrong. There is a
lot of truth in the idea that the customer understands their business,
and it's the vendor's job to understand that the customer probably
isn't just wrong about their business needs. In tech land, the concept
that the customer is always right needs some serious refinement. Sure,
customers get it wrong, and need guidance, but they are generally very
clueful about what they need for their business.

In talking with others I've gotten similar mixed feelings about Altera,
but no first hand experience yet.

You help no one.
I've actually interacted with a fair number of people with radically
different perspectives. The problem, in a nutshell, is that RC isn't
taken seriously by Xilinx, as it's been a 15 year pipe dream. Their
tools and business model are for a different marketplace -- high
volume embedded. And their staff are used to telling customers how to
use Xilinx product, and have some serious problems when you step
outside the high volume embedded application areas. First of all, the
biggest sales get the support. And as we have clearly seen, niche
markets get little and are quickly subject to being dropped in order
to chase another large customer. Small customers either need a way to
fit in and pick up the crumbs, or go to the seven dwarfs, as Austin
puts it. I.e., send the small customers to the small players.

Given this has been the status quo for about a decade .... clearly
things are not likely to change without a shove, from my perspective.
I'm more than willing to step up and push for change, rather than watch
the opportunities slip by. I don't think watching the chances slip by
for another decade is the right choice. When it comes to Xilinx and RC,
either they need to embrace it, and clearly get behind it, or step
aside. Their indecision is seriously hurting the marketplace. Other
than a few ruffled feathers, the last few weeks have been very useful
in airing differences in market requirements. The side emails I've
gotten have been supportive in general.

So I leave you with this challenge ... lay out a road map that will
either effect the required changes, or get a clear decision from Xilinx
management that they do not want to be a major player in the RC market
- a firm decision inside 3-6 months.

I'm advocating being vocal, direct, and a bit of a squeaky wheel, as
the passive approach has created 15 years of indecision, which we have
seen even in the last few weeks with radically different views from
several different Xilinx spokespersons. I'm willing to actively and
intensely engage Austin, Peter, and other Xilinx staff on all the
related issues to fully air the differences in opinion about the
divergent needs of the various markets. So far, the intense and
informative debate here has actually been very useful in provoking
discussion that would normally just be ignored.

Austin and I differ on the impact that patent expirations will have,
but history clearly shows that the expiration of base patents in
other technology areas was followed by a rapid changing of the guard,
as offshore companies stepped in and took over the market globally,
leaving the US market founders as dinosaurs. In the next four years all
the major patents that control XC2000, XC3000, and much of XC4000
technology expire ... which means offshore companies will be free to
market bigger and faster versions of those product technologies. They
will not be Virtex-II Pros or XC4Vs, but they will be big, fast, and
cheap FPGAs. And five years after that, about a decade from now, the
landscape may well be very, very different in terms of who the market
leaders are.

Fairly major revenue choices, like the Zero Defect is Quality
perspective that prevents Xilinx from wringing maximum revenue from
every wafer, are very strong indicators that Xilinx may not be nimble
enough to adapt to the price pressures of a commodity FPGA marketplace,
which will force severe cuts in the margins they have held for years.
The layoffs, the market restructuring, and sweeping changes in
management teams could easily send Xilinx to its grave in as little as
a few years - or leave it a minority low volume player facing a long
lingering death, or a takeover/buyout target for the IP.

I can be vocal, and raise the issues. Or I can shut my trap and watch
:)

Engage the debate .... make up your mind .... and if the changes come
true as I suspect, at least everyone will have had their day to plan
ahead and not cry over the changes. Austin and Peter are likely to
retire before long, so it will not be on their watch if the market loss
happens ..... but it will be their direction and attitudes that set the
stage for it.
 
Thread name changed to explore an interesting point:

fpga_toys@yahoo.com wrote:
Peter Alfke wrote:

Without EasyPath any production device with any known defect (that is
not covered by an errata note) goes into the garbage can.
That has been and will remain our policy, and I assume the policy of
any reputable IC manufacturer.

snip
I remember the 1980's, when the disk drive industry was going through
that too. Just try and buy a half million zero defect drives today.

Maybe some stockholders ought to be asking your board why you are so
eager to crush into the can near-perfect dies and packaged parts over a
single flaw (or small number of flaws), when those parts might easily
bring another 10-50% in sales from users perfectly willing to purchase
parts with classified flaws they can design around.
Sounds OK, at first glance.

But disk drives have inherent storage for defect maps, and LCD
screens rather 'self document' any faulty pixels.

So, how to actually do that in a RAM based FPGA ?

You don't REALLY want to do what the Russians used to, and ship an
errata sheet per device ?!

I think Altera have a method for re-mapping defective areas, so they
can make real yields higher.
Not sure about Xilinx, or others ?
Xilinx did have a patent swap, after they both finally tired of
feeding the lawyers, but it takes years for that to work into silicon.


I'm perfectly willing to design reconfigurable computers with a few
routing and LUT failures that we can map around with a defect list
keepaway table. I'd even be happy to get them as bare, but mountable,
dice.
So, that means the Tools have to be defect-literate, and be able to
read a device ID and then find that device's defect map ?

I suppose that is do-able, but it does not sound cheap, and the SW
teams are struggling with quality now; do we really want them
distracted with defect mapping ?

How long can you tolerate running a Place/Route, for just one device ?

Anybody else here that would be willing to purchase high end FPGAs for
reconfigurable computing with a few minor flaws? Since they are trash
to you, can I get several thousand XC4VLX200s at, say, a 90-95%
discount? Hmm ... maybe that's too low an offer, and there is a bidding
war ready to erupt.
Another minus to this idea is that of counterfeit devices.
How can Xilinx prevent the defect devices from entering a grey market,
sold as fully functional devices ?
Sounds like a Device ID again...

Problem is, device ID is not in any present Xilinx silicon ?

Others are looking at this (IIRC Actel use something like this, to
'lock' their ARM cores, to Silicon that includes the license fees? )

There might be long-term potential for some FPGA vendor to
make their Tools and Silicon defect-map-smart, but the P&R would have
to be way faster than at present - and anyway, why not just fix it in
silicon, with some redundancy and fuse re-mapping ?

Seems only a tiny portion of users could tolerate the custom P&R's ?

-jg
 
Retire?

Wow.

That is a very strange thought.

Both Peter and I are "retire-averse."

We are having far too much fun watching and helping Xilinx grow.

And I think I am more than young enough to be Peter's child, not his peer.

Amusing post,

Austin
 
Jim Granville wrote:
So, that means the Tools have to be defect-literate, and be able to
read a device ID and then find that device's defect map ?
Having a unique serial number for identification might be nice, but it
is certainly not necessary in order to apply defect mapping to a
particular well known FPGA device. Two likely environments exist. In
the first, the FPGA device or devices are mounted on a PCI card and
installed in a traditional system. The installation process for that
card would run extensive screening diagnostics and develop an error map
for it. The driver for that device, interfaced to the tool chain, would
make the map available as a well known service. In addition, the
device/card would be sold with either media or internet access to the
more accurate testing done prior to sale by the manufacturer.
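
As a minimal sketch of what such a well known service might look like
(the file layout, names, and coordinate scheme below are my own
illustrative assumptions, not any existing Xilinx or board vendor API):

    # Hypothetical sketch of a per-board defect map exposed as a lookup
    # service. File format, field names and coordinates are assumptions.
    import json
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Defect:
        kind: str   # e.g. "LUT" or "ROUTE"
        x: int      # column of the faulty site
        y: int      # row of the faulty site

    class DefectMap:
        """Defect list for one physical device, keyed by board serial."""
        def __init__(self, defects):
            self._bad = {(d.x, d.y) for d in defects}

        @classmethod
        def load(cls, path):
            # The installer's screening diagnostics would write this file.
            with open(path) as f:
                records = json.load(f)
            return cls([Defect(r["kind"], r["x"], r["y"]) for r in records])

        def is_usable(self, x, y):
            return (x, y) not in self._bad

    # Usage: the tool chain asks before committing a site, e.g.
    #   dm = DefectMap.load("/var/lib/rc/SN12345/defects.json")
    #   dm.is_usable(10, 42)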

The other likely RC environment is FPGA-centric processor clusters,
built around a mix of pure logic FPGAs (like XC4VLX200s) coupled
with CPU-core FPGAs (II-Pro and SX parts), possibly coupled to 32/64-bit
traditional RISC processors. These have been my research for the last 5
years. These supercomputers would be targeting extreme performance
for mostly high end simulation and processing applications,
traditionally found doing nuke simulations, heat/stress sims, weather
sims, genetic sims and searches, and other specialty applications.
Machines doing this to various degrees exist today in both research and
production environments. The software for controlling these machines is
a ground-up, vendor-specific design .... and defect management is a
trivial task for that software.

I suppose that is do-able, but it does not sound cheap, and the SW
teams are struggling with quality now; do we really want them
distracted with defect mapping ?
Defect mapping is an integral part of every operating system, and you
will find it covering for faults on floppy media, optical media, and
even hard drives .... it's part of most filesystems.
Providing defect-map-generated keep-out zones on the FPGA for place
and route is rather trivial, and a very small price to pay to have
access to large numbers of relatively inexpensive FPGAs. Anything that
allows effectively higher yields will lower the prices for RC
computing based on defect management, AND lower the price of zero
defect parts where the design and deployment infrastructure is unable
to handle defect management due to fixed bitstreams.
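
A hedged illustration of the point: a keep-out zone is just a site
marked busy before placement begins, so a toy placer needs no special
defect logic at all. This is a simplified sketch, not real PAR
internals, reusing the is_usable() lookup from the earlier sketch:

    # Hypothetical sketch: defect sites become keep-out zones by marking
    # them occupied up front, exactly like any already-used resource.
    def build_site_grid(width, height, defect_map):
        """True means the site may be used; defects start out False."""
        return [[defect_map.is_usable(x, y) for x in range(width)]
                for y in range(height)]

    def place_first_fit(cells, grid):
        """Toy first-fit placer: give each cell the next free good site."""
        placement = {}
        todo = list(cells)
        for y, row in enumerate(grid):
            for x, free in enumerate(row):
                if free and todo:
                    placement[todo.pop(0)] = (x, y)
                    grid[y][x] = False      # site is now occupied
        if todo:
            raise RuntimeError("not enough usable sites for this design")
        return placement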

How long can you tolerate running a Place/Route, for just one device ?
For RC ... not long at all. Which is why different strategies, based
on fast, acceptable placement and routing with dynamic clock
fitting, are better for RC, while the extensive optimization for fixed
bitstreams used in embedded applications needs the tools used today. RC
has very, very different goals .... bitstreams whose life may be
measured in seconds, or hours, maybe even a few days. Embedded is
trying to optimize many other variables, and for the goal of using
bitstreams with lifetimes in years.

There might be long-term potential for some FPGA vendor to
make their Tools and Silicon defect-map-smart, but the P&R would have
to be way faster than at present - and anyway, why not just fix it in
silicon, with some redundancy and fuse re-mapping ?
Much easier said than done, and loaded with the same problems that
dynamic sparing has in disk drives. Accessing a spared sector requires
a seek, plus rotational latency, TWICE for each error .... a huge
performance penalty. Ditto for FPGAs when you have to transparently
alter routing to another LUT in an already densely routed design.

Defect tolerance is a completely different strategy, where place and
route happens defect-aware. It's actually not that difficult to edit a
design on the fly .... using structures similar to today's cores, which
are linked as a block into an existing netlist. That can happen
quickly, distorting the prelinked/routed object during the load process
to effect the remapping around the failed resources.
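
A rough sketch of that load-time edit, under the assumption that a
precompiled block is relocatable and carries its footprint as relative
site offsets (purely illustrative, not the JBits or JHDL API):

    # Hypothetical sketch: slide a prerouted block to an origin where its
    # whole footprint lands on defect-free sites, else fall back to a
    # real re-route of just that block.
    def relocate(footprint, grid):
        """footprint: relative (dx, dy) offsets the block occupies.
        Return an (x, y) origin whose translated footprint is all usable."""
        height, width = len(grid), len(grid[0])
        for y in range(height):
            for x in range(width):
                if all(0 <= x + dx < width and 0 <= y + dy < height
                       and grid[y + dy][x + dx]
                       for dx, dy in footprint):
                    return (x, y)
        return None   # no clean origin: this block needs a real re-route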

Anyway ... zero defect designs need zero defect parts; systems designed
around defect-tolerant strategies are built from the ground up to
edit/alter/link the design around defects in order to avoid them.
This could be done using a soft core, or a $1 micro on board with the
FPGA, for embedded designs that do not want to suffer the zero defect
price premium.

Seems only a tiny portion of users could tolerate the custom P&R's ?
With today's ISE tools ... that is certainly true. With custom
JBits-style loaders, such as those found in JHDL and JHDLBits, it's
really a piece of cake, using mature tools that have been around for
many years on the educational side, with some small tweaks for defect
mapping. All the same tools the FpgaC project needs for compile, load,
and go to an FPGA coprocessor board.

Tiny relative to the size of the FPGA universe today ... sure. Tiny in
terms of dollars and importance, certainly not. Completely disjoint
from embedded FPGA design today ... different customers, different
designs, different cost structures, different applications.
 
fpga_toys@yahoo.com wrote:
snip
A few points:
1) The routing structure is many times larger than the LUT structures.
A defect in the FPGA is far more likely to show up in the routing
structure, and it may not be a hard failure.

2) The testing only identifies bad devices. It does not isolate or map
the exact fault, to do so would add considerably to the tester time for
a part that can't be sold at full price anyway.

3) Defect map dependent PAR is necessarily unique to each device with a
defect, so you wind up not being able to use the same bitstream for each
copy of a product. Fine for onesy-twosy, but a nightmare for anything
that is going into production. The administration cost would far exceed
the savings even if you get the parts for free.

4) Each part would need to come with a defect map stored electronically
somewhere. Since the current parts have no non-volatile storage, that
means a separate electronic record has to be kept for each part. This
is expensive to administer for everyone involved from the manufacturer,
the distributors, and the end user. Again, the administration costs
would overshadow any savings for parts with a reasonable yield.

5) Timing closure has to be considered when re-spinning an FPGA
bitstream to avoid defects. In dense high performance designs, it may
be difficult to meet timing in a good part, much less one that has to
allow for any route to be moved to a less direct routing.
 
Good points Ray.

Ray Andraka wrote:
A few points:
1) The routing structure is many times larger than the LUT structures.
A defect in the FPGA is far more likely to show up in the routing
structure, and it may not be a hard failure.
Intermittent failures on all media have been a difficult testing
problem, but they are something that can reach closure if the system
design includes regular testing. This would have to be part of the idle
activity for a reliable RC system design.

2) The testing only identifies bad devices. It does not isolate or map
the exact fault, to do so would add considerably to the tester time for
a part that can't be sold at full price anyway.
I suspect that this would not be a tester project, but more like
specialized board fixturing that would facilitate loadable self tests
under various voltage and temperature corner cases. That is
significantly cheaper to implement for the RC board vendor.

3) Defect map dependent PAR is necessarily unique to each device with a
defect, so you wind up not being able to use the same bitstream for each
copy of a product. Fine for onesy-twosy, but a nightmare for anything
that is going into production. The administration cost would far exceed
the savings even if you get the parts for free.
That was addressed initially. For RC using incremental place and route
for fast compile, load, and go operation, a keep-out zone is really no
different from an existing utilized resource that cannot be used.

For more mainstream production use, I suggested that the go/no-go
testing of the part look for errors in 16 sub-quadrants, and bin parts
according to which one fails. That would allow purchasing a run of
parts which all had different failures in the same sub-quadrant, with
the rest of the die known good and usable. That is much more
manageable, without creating too many SKUs.
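
To make the binning idea concrete, here is a small sketch of the sort
of thing I have in mind; the 4x4 grid, die dimensions, and failure
report format are all assumptions:

    # Hypothetical sketch of go/no-go binning into 16 sub-quadrants (4x4).
    def subquadrant(x, y, die_w, die_h):
        """Map a failing site to one of 16 bins, numbered 0..15 row-major."""
        col = min(4 * x // die_w, 3)
        row = min(4 * y // die_h, 3)
        return 4 * row + col

    def bin_part(failing_sites, die_w, die_h):
        """A part is sellable into bin N if all its failures fall in N."""
        bins = {subquadrant(x, y, die_w, die_h) for (x, y) in failing_sites}
        if not bins:
            return "zero-defect"
        if len(bins) == 1:
            return "bin-%d" % bins.pop()
        return "scrap"   # failures span sub-quadrants; too hard to manage

    # e.g. bin_part([(10, 12), (15, 20)], die_w=128, die_h=128) -> "bin-0"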

4) Each part would need to come with a defect map stored electronically
somewhere. Since the current parts have no non-volatile storage, that
means a separate electronic record has to be kept for each part. This
is expensive to administer for everyone involved from the manufacturer,
the distributors, and the end user. Again, the administration costs
would overshadow any savings for parts with a reasonable yield.
For RC systems that would have to be addressed on a system-by-system
basis, as part of the host development software ... not a big deal.

Mapping individual resource faults at a detailed level for embedded
applications is quite unrealistic, which is why I suggested
sub-quadrant level sorting of the parts.

5) Timing closure has to be considered when re-spinning an FPGA
bitstream to avoid defects. In dense high performance designs, it may
be difficult to meet timing in a good part, much less one that has to
allow for any route to be moved to a less direct routing.
Certainly. I've suggested several times that RC applications may well
need to actually assign clock nets at link time, based on the nets'
linked delays, choosing from a list of clocks that satisfy timing
closure. I have this on my list of things for FpgaC this spring, along
with writing a spec for RC boards suggesting that derived, rising-edge
aligned clocks covering a certain range of periods be implemented on
the RC board. That would allow the runtime linker (dynamic incremental
place and route) to merge the netlist onto the device and assign
reasonable clocks for each sub-block in the design. This is necessary
to be able to reuse libraries of netlist-compiled subroutines for a
particular architecture across a number of host boards and clock
resources.
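
As a sketch of the clock selection step only (the board clock set and
the margin are invented numbers; a real spec would define them):

    # Hypothetical sketch: pick a clock at link time from a fixed set of
    # rising-edge-aligned board clocks. Periods and margin are invented.
    BOARD_CLOCK_PERIODS_NS = [2.5, 5.0, 10.0, 20.0, 40.0]

    def assign_clock(worst_path_delay_ns, setup_margin_ns=0.5):
        """Fastest clock whose period still closes timing for a sub-block."""
        need = worst_path_delay_ns + setup_margin_ns
        for period in sorted(BOARD_CLOCK_PERIODS_NS):
            if period >= need:
                return period
        raise ValueError("no board clock slow enough; re-place the block")

    # e.g. assign_clock(7.8) -> 10.0, so that sub-block runs at 100 MHz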

A very different model of timing closure than embedded designs today.
 
Pszemol@PolBox.com posted on Thu, 23 Feb 2006 14:36:34 -0600:

""Robert F. Jarnot" <jarnot@mls.jpl.nasa.gov> wrote in message news:dtl22v$73q$1@nntp1.jpl.nasa.gov...
quickcores has become SiliconLaude -- www.siliconlaude.com -- and
interesting 8051 cores with real-time JTAG debug are available.
I have just visited their website and could not find IP cores available.
Instead they offer radiation-hardened silicon...
I am looking for an IP core to be put into a generic FPGA device."


After Mr. Jarnot's post I contacted Silicon Laude as a radiation-hardened
8051 would be very useful for me, and in under a week was given a very
reasonable offer.

Silicon Laude does not make silicon; it sells its intellectual property
as an FPGA chip programming file for a specific radiation-hardened
FPGA. As any non-radiation-hardened device would be suitable for you,
you could ask Silicon Laude to produce a netlist for a common FPGA.
 
Colin Paul Gloster posted on 18 Mar 2006 17:04:18 GMT:

"[..]

Silicon Laude does not make silicon; it sells its intellectual property
as an FPGA chip programming file for a specific radiation-hardened
FPGA. As any non-radiation-hardened device would be suitable for you,
you could ask Silicon Laude to produce a netlist for a common FPGA."


Indeed, Silicon Laude already mentions that it does this on its
webpage WWW.SiliconLaude.com/products.html :

"[..]
SL80C051-AX001
Functionally equivalent to the SL80RT051-AX001 except that it is
implemented in a commercial grade Actel Axcelerator FPGA and
plastic 208-pin PQFP package.
SL80C051-AF001
Functionally the equivalent to the SL80RH051-AF001 except that
it is implemented in a commercial grade QuickLogic Eclipse
FPGA and plastic 208-pin PQFP package.

[..]"
 
John,

The long post in response to mine is honestly the first rather
level-headed discourse I've seen from you. Sincere thanks for taking
the time to put together a constructive post.

Just to clear up the one point you had a question about: when I
suggested you called their business methods a scam, I was referring to
your utter disbelief that the EasyPath model made money - that any 80%
discount meant that they were dumping parts. Dumping is illegal, and
any smoke and mirrors that provides dumped parts to customers is a
sham. It's my
own belief that they have a solid business model with significant ROI
without obliterating the margins; the result is more customers using
Xilinx in high production with significantly lower per-device
infrastructure and support costs. If you already have the silicon and
IP, get paid to customize tests, and reduce the cost associated with
getting a part out the door, you have great ROI - you've invested very
little to support this business model that wasn't already invested.
Big, incremental business is tremendous to have.

I hope you have the opportunity to get Reconfigurable Computing up to
the level of performance and supportability that you envision. It may
be a tough road because the market is small. If the market demands the
higher premium devices that support the efforts of you and other RC
advocates, that may get you some attention from the strategic marketing
folks who help shape the business decisions on development.
Unfortunately, Peter and Austin are not Strategic Marketing employees;
instead they are involved with support of existing and evolving devices
that are about to hit the market. This forum has the wrong audience for
actively changing where Xilinx is going.

I don't fault any one car company for not having the features that I
feel would make my driving experience so much better. Even if I felt
strongly about my position, I wouldn't get much activity at the
dealership or on the technician's bulletin board, where the
nitty-gritty details are known to so many. Direct exposure to the
appropriate Xilinx people is about the only way to truly effect change.
This is only my own opinion, of course - I don't pretend to know
everything that goes on in the industry, but I do have my own
perspective. In the corporation I work for, our two dozen or so
hardware engineers have had the opportunity to meet with some of the
VPs at Xilinx as they give us the direction they see their next
products going. We've even had Xilinx CEO Wim Roelandts visit us here
in Oregon. If you can get your Xilinx sales engineer and/or FAE to
understand your needs, and the potential market you feel is there not
only for you but for others that could leverage tools and silicon
tailored for better RC, you might have a chance to shape the vision of
those who shape the direction of Xilinx.

As helpful as they are, and as respected within their own corporation
as they may be, the folks who participate in this forum are not the
ones who shape the vision - they may have influence, but it's not the
influence you need to push for better RC support, tools, or
"permission" to do what you feel needs to be done to blaze the trail.


I've seen Peter and Austin have trouble when dealing with stubborn
people within the limits of the newsgroup. I have trouble with people
myself when there's obstinacy, dimwittedness, or just plain insulting
behavior. I've never had a problem with Peter. When you annoy one of
the most level-headed, market-experienced technical people I've had the
chance to meet, it's time to reevaluate your own stance.

If all your communications were as civil and well considered as the one
I'm now responding to, you might have gotten a lot further with the
limited influence available through this forum.

I wish you luck in your endeavors and hope you have a chance to realize
your visions.

- John Handwork

fpga_toys@yahoo.com wrote:
snip
 
fpga_toys@yahoo.com wrote:
Good points Ray.

Ray Andraka wrote:
snip

For more mainstream production use, I suggested that the go/no-go
testing of the part look for errors in 16 sub-quadrants, and bin parts
according to which one fails. That would allow purchasing a run of
parts which all had different failures in the same sub-quadrant, with
the rest of the die known good and usable. That is much more
manageable, without creating too many SKUs.
Yes, that's getting more manageable, and the new Xilinx strip-FPGAs
could lend themselves to this - but you still need some audit trail to
link the defect to the part - so this really needs FPGAs with
fuses. (Not many, and they can be OTP, but fuses nonetheless.)

5) Timing closure has to be considered when re-spinning an FPGA
bitstream to avoid defects. In dense high performance designs, it may
be difficult to meet timing in a good part, much less one that has to
allow for any route to be moved to a less direct routing.


Certainly. I've suggested several times that RC applications may well
need to actually assign clock nets at link time, based on the nets'
linked delays, choosing from a list of clocks that satisfy timing
closure. I have this on my list of things for FpgaC this spring, along
with writing a spec for RC boards suggesting that derived, rising-edge
aligned clocks covering a certain range of periods be implemented on
the RC board. That would allow the runtime linker (dynamic incremental
place and route) to merge the netlist onto the device and assign
reasonable clocks for each sub-block in the design. This is necessary
to be able to reuse libraries of netlist-compiled subroutines for a
particular architecture across a number of host boards and clock
resources.

A very different model of timing closure than embedded designs today.
Another path would be to do runtime checking of results, and have
a 'bad answer' system that remaps the problem to known good ALUs.

This would require good initial tester code, which could, as suggested,
also run in the downtimes.

That way you can use lower yield devices, but not have to know
explicitly (at P&R time) where the defects are.

Of course, a method to tell the P&R to avoid known 'FPGA sectors' would
also improve the RC yields, so a two-pronged development would seem
a good idea.

Perhaps there are features in the new Virtex 5 that would help this ?
[Should be a good supply of low yield parts, as they ramp these ! :) ]



-jg
 
Hal Murray wrote:
Xilinx isn't stupid. They will retest or recycle, whichever is
less expensive (more profitable) overall.
Stupid isn't the right word. Complacent with their margins is probably
a better description. It's why they are only a $1.3B company instead of
a $10-40B company like Sun Microsystems or Microsoft, which are of
similar ages. The founders had some great ideas 21 years ago, and other
than incremental refinement, the real innovation in both the business
plan and technology has been lacking a bit. The high margins and high
costs hinder the growth of their market.

http://www.shareholder.com/visitors/dynamicdoc/document.cfm?CompanyID=SUNW&documentID=1014&PIN=&resizeThree=no&Scale=100&Keyword=type%20keyword%20here&Page=25

http://72.14.203.104/search?q=cache:YsLX3NJSemkJ:www.microsoft.com/msft/download/10K%25202005.doc+microsoft+form+10k+2005&hl=en&gl=us&ct=clnk&cd=1

My idea of making Xilinx successful would be to once again aggressively
push the state of the art and grow the company into several related
markets. That would bring their revenues into the $20B range inside
this decade.

Reconfigurable computing as a market for Xilinx could have been grown
to something in the $50B range by today, but they got stuck in their
view of their business plan. I believe that with some new management
and a restructured technology development program, one could turn
Xilinx around this year, and get it back on track as a $50B company
over the next decade ... or better.

Or any of the A-team FPGA companies could do the same, and buy Xilinx
at a discount, for pennies on the dollar, in 5 years.
 
fpga_toys@yahoo.com wrote:
Hal Murray wrote:

Xilinx isn't stupid. They will retest or recycle, whichever is
less expensive (more profitable) overall.


Stupid isn't the right word. Complacent with their margins is probably
a better description. It's why they are only a $1.3B company instead of
a $10-40B company like Sun Microsystems or Microsoft, which are of
similar ages.
But in quite different fields, so impossible to compare.

The founders had some great ideas 21 years ago, and other
than incremental refinement, the real innovation in both the business
plan and technology has been lacking a bit.
Maybe Virtex 5 will turn all that around ?

The high margins and high
costs hinder the growth of their market.
This makes interesting reading
http://i.cmpnet.com/siliconstrategies/2006/03/isupplitables.gif

and quite a contrast to Austin's original arm waving :)

It seems that yes, Xilinx is the largest programmable logic company
(which is not trivial, so applaud them for that), but no, their growth
is BEHIND the fabless group's average of 10.4%, at a modest 3.7%,
adding $59M in revenue. [Still, it IS positive :) ]

Also, the fabless numbers seem to exclude the larger companies' ASIC
flows, so the true ASIC market is rather larger again.
(E.g. IBM Microelectronics has a large chunk of ASIC flow in that
revenue.... )

So, design starts in ASIC do seem to be falling, but the revenues seem
to be growing faster than the programmable logic business ?

Not an easy pill for the spin merchants at Xilinx to digest ? :)

http://www.shareholder.com/visitors/dynamicdoc/document.cfm?CompanyID=SUNW&documentID=1014&PIN=&resizeThree=no&Scale=100&Keyword=type%20keyword%20here&Page=25

http://72.14.203.104/search?q=cache:YsLX3NJSemkJ:www.microsoft.com/msft/download/10K%25202005.doc+microsoft+form+10k+2005&hl=en&gl=us&ct=clnk&cd=1

My idea of making Xilinx successful would be to once again aggressively
push the state of the art and grow the company into several related
markets. That would bring their revenues into the $20B range inside
this decade.

Reconfigurable computing as a market for Xilinx could have been grown
to something in the $50B range by today, but they got stuck in their
view of their business plan. I believe that with some new management
and a restructured technology development program, one could turn
Xilinx around this year, and get it back on track as a $50B company
over the next decade ... or better.
Why not take them a sound business plan? I'm sure they would listen.

They could seed this with some EasyPath FPGAs, and see how quickly
you really can grow the RC sector.....

Programmable logic has some fundamental limits that will relegate it
to a niche business. To hit $50B, you are talking about another Intel,
or another Samsung, and that would need truly radical changes.

-jg
 
Jim Granville wrote:

Stupid isn't the right word. Complacent with their margins is probably
a better description. It's why they are only a $1.3B company instead of
a $10-40B company like Sun Microsystems or Microsoft, which are of
similar ages.

But in quite different fields, so impossible to compare.
Nothing could be farther from the truth.

Sun's strength was its SPARC processor line, which allowed it to grow
as a high end systems company without being an MS/Intel clone.

Had Xilinx embraced RC as a systems company, it would have leveraged
its strengths into a high dollar market. I believe that is still
possible with Xilinx before its core patents expire. Or, in spite of
Xilinx, using an A-Team competitor with an aggressive technology plan.

I have several different roadmaps that I've been developing over the
last several years. Today is the right time to start a new tech
industry, as we are just on the back side of a very deep tech slump
that should progress into a boom cycle. All the core technologies --
operating systems, processors, FPGAs/CPLDs -- are mature products that
have been incrementally refined for two decades.

The time is ripe to innovate hard, as we did between 1978 and 1987,
and in the process use strong vision to take the industry to the next
level.
 
fpga_toys@yahoo.com wrote:
Jim Granville wrote:

Stupid isn't the right word. Complacent with their margins is probably
a better description. It's why they are only a $1.3B company instead of
a $10-40B company like Sun Microsystems or Microsoft, which are of
similar ages.

But in quite different fields, so impossible to compare.

Nothing could be farther from the truth.
So your first thought was "right", laced with sarcasm. Let's examine
this a bit. Xilinx produces about as many board level products as Apple
Computer did in 1984, including an outstanding entry into the system
level market with the ATX form factor ML310 motherboard. Its "only"
problem is its cost of ownership, coupled with marginal software
support to actually use it as a reconfigurable computer -- i.e. a
nearly complete lack of software support for the applications developer
community, focusing instead on embedded markets. Considering the Xilinx
ROI for those boards, and Apple's ROI for its board development, there
are questions to consider.

Consider that Apple produced 1 million systems with the Macintosh 128,
512, and Plus designs, with a price tag of over $2K each, over three
years from introduction. That nearly doubled the company's sales in
just over a year, to just under $2B for 1985 and 1986, while redefining
the personal computer industry. Apple did this with one of the most
aggressive cost of ownership designs in the computer industry at the
time, while aggressively creating a developer network that produced
some 800 application programs that "sold" the Macintosh for Apple.

http://apple.computerhistory.org/stories/storyReader$24

Everything on the ML310 board, except the VP30, can be found on a
commodity $50 retail ATX motherboard. The value that Xilinx failed to
capitalize on was taking reconfigurable computing mainstream by
aggressively pricing this product, and creating a large reconfigurable
computing developer network to produce applications for the platform.
Having Xilinx FPGA literate developers, and lots of them, would easily
push Xilinx's chip sales and market share into explosive growth.

Missing at Xilinx were system-level product architects and a management
team visionary enough to build and capitalize on a systems level growth
market. To do so would require Xilinx doing an about face on its
software product licensing and embracing open software in a very
different way.

I believe that Xilinx, leading the reconfigurable computing market,
could easily take 5-10% of the global computing market share, just as
Apple has for the last two decades. Apple, for fiscal 2005, generated
revenue of $13.93 billion -- ten times that of a Xilinx concentrating
on chip sales.

With its patent protection for the cash cows expiring, and a very
likely boom in offshore competition in the commodity FPGA market, I
believe that Xilinx needs to seriously pick up some vision about its
future. While it has some major volume design-ins with the US auto
industry, it has also created a huge introductory market for offshore
fabs to produce FPGAs for foreign auto producers that will follow the
US lead.

So, while the Xilinx staff here are critical of offered "advice"
because they are so successful, that "success" does have other
measures. There is another view: that Xilinx doesn't need advice, it
needs a completely different "vision" in its management team to create
new and larger markets as FPGAs go commodity and offshore competitors
chip away at the cash cows. The $155M structured ASIC market is
peanuts compared to the possibilities as a systems level company.

Xilinx, with an aggressive cost of ownership strategy, could push
ML310-like motherboards into the market at pretty substantial volumes,
along with fully packaged retail systems later. Deep seeding of the
educational and open source developer communities would result in a
rapid expansion of RC literate programmers, and applications, creating
an RC market that has a very likely chance of securing 5-10% of the
computer market inside a few years. RC established on Xilinx product
would set a de facto binary standard (and resulting market share) that
would be hard to erase for a long time - the Intel effect. Or one of
the A-Team companies, or new offshore entrants, can establish that
standard first.
 
Ray Andraka wrote:
It appears to me that perhaps you are assuming the yield is high, more
than 50% anyway. What happens to your assumption if the yield is more
like, say, 10-20%? It seems to me that the lower the yield, the more
attractive EasyPath becomes, especially if, as Austin indicated, most
yield fallout is only one defect.
Actually, the concept of managed defects becomes even stronger
economically with lower yields, as the ratio of valuable recovered
defect product to discards gets higher. Instead of discarding 80-90% of
the product, you then add that yield to your revenue. Only untestable
dies (those with power rail shorts, failed JTAG interfaces, failed
configuration paths, etc.) are discarded. And even some of those can be
recovered with additional design-for-failure strategies.

Maybe if you have enough spare cycles in your RC system, you can do that
in the background and hope you don't hit a defect in operational builds
before the defect map for the system is completed.
Every system I've worked with has spare cycles ... doing
testing/scrubbing in the idle loop is always a possibility.
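
A tiny sketch of what that idle loop scrubber could look like; every
hook here (the regions, the self test, the idle check) is a stand-in
for board-specific code, not any real driver interface:

    # Hypothetical sketch: round-robin self-test of idle regions in the
    # background, retiring any region that starts to fail.
    import time

    def idle_scrubber(regions, run_self_test, is_idle, bad_regions):
        """run_self_test(region) -> True on pass; is_idle() -> bool.
        bad_regions is a set shared with the keep-out zone builder."""
        while True:
            for region in regions:
                if not is_idle():
                    time.sleep(0.1)       # yield to real work
                    continue
                if region not in bad_regions and not run_self_test(region):
                    bad_regions.add(region)   # rebuild keep-outs from this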

The current tools make this even harder, since the user has little
control over what routing resources are used (there's directed
route, but it is tedious to use and is a largely manual effort), and
even less control over what routing resources can't be used. Granted,
this is a tools issue more than anything else, but the fact remains
that with the current state of the tools, I don't see this as feasible
right now. Yeah, I know, this supports your contention that the tools
should be open.
Yep.

Look at it from Xilinx's point of view though. What is in it for them?
As I've posted elsewhere in this thread, increasing their revenue by
10-20x in a few years, and their long term market share substantially.
Or, just staying in business.

More software that would need development and testing, more user
support, devices with defects out on the market that could wind up in
the hands of people thinking they have zero defect devices,
or, as in the disk drive market, just assume every device has defects
and design for it.
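The disk analogy also suggests the bookkeeping: a per-device defect
table, just like a drive's grown-defect list. A rough sketch (the
record layout and the "PROHIBIT" constraint syntax are invented for
illustration):

from dataclasses import dataclass, field

@dataclass
class Defect:
    resource: str     # e.g. "LUT", "PIP", "BRAM" - illustrative only
    tile_x: int
    tile_y: int

@dataclass
class DefectMap:
    device_id: str                              # e.g. die serial number
    entries: list = field(default_factory=list)

    def add(self, d: Defect):
        if d not in self.entries:
            self.entries.append(d)

    def par_constraints(self):
        # Emit "do not use" lines for a defect-aware PAR run; such a
        # hook is assumed - today's tools largely lack it.
        return ["PROHIBIT %s X%dY%d" % (d.resource, d.tile_x, d.tile_y)
                for d in self.entries]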

not to
mention their increased testing and administration cost to even
partially map the defects, or even determine to what degree the part
fails.
Designing for defect detection and management changes the entire view
of their process and ATE ... as I posted just a few minutes ago, why
isn't this integrated on-wafer, instead of continuing to do it with ATE
as has been done for several decades?

I can see where the cost of doing it could exceed the potential
benefit. If it were profitable for them to do it, I'm sure they would
be pursuing it. In any event, it is a business decision on their part;
one they have every right to make.
I can see where the cost of ATE could make the difference between being
in business or not. Designing for test, in a very different wafer-oriented
way, is something I see as critical.

Anyway, it still seems to me that the amount of extra work to manage
parts with defects would cost more than the cost savings in the part
prices, not just for Xilinx but also for the end user.
For some applications, sure ... for system-level RC applications it's
completely trivial (and necessary) in the grand scheme of things. A very,
very minor amount of software.

Consider that a new system could be brought up using triple redundancy
and run live in that reduced-capacity configuration for its first few
hundred/thousand hours, then back off to single redundancy with
checking, and after the part is well qualified run only with background
idle-loop testing and scrubbing.
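Something like this qualification schedule, as a sketch (the hour
breakpoints are placeholder assumptions, not qualified figures):

from enum import Enum, auto

class QualStage(Enum):
    TRIPLE_REDUNDANT = auto()   # all work run TMR with cross-checking
    SINGLE_CHECKED   = auto()   # duplicate-and-compare spot checks
    BACKGROUND_SCRUB = auto()   # idle-loop test/scrub only

def stage_for(hours_in_service, faults_seen):
    # Fall back to full redundancy on any observed fault; otherwise
    # relax the checking as service hours accumulate.
    if faults_seen or hours_in_service < 500:
        return QualStage.TRIPLE_REDUNDANT
    if hours_in_service < 2000:
        return QualStage.SINGLE_CHECKED
    return QualStage.BACKGROUND_SCRUB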

Using the racked-wafer strategy, it is quite viable to hold wafers in
test for 72 hours or more before they are cut and packaged. Then the
same strategy is extended to the package after it's brought up in a
system.

Designing for test - good test - coupled with defect management should
only increase yields, lower costs, and benefit both the manufacturer and
the customer long term.
 
Ray Andraka wrote:
A few points:
1) The routing structure is many times larger than the LUT structures.
A defect in the FPGA is far more likely to show up in the routing
structure, and it may not be a hard failure.
Very true. Having been responsible for both factory-level burn-in
testing and field testing, intermittents are by far the toughest nut to
crack, as they seldom show up at the ATE station.

2) The testing only identifies bad devices. It does not isolate or map
the exact fault; to do so would add considerably to the tester time for
a part that can't be sold at full price anyway.
Good software-based diagnostics generally attempt isolation to a
component set, which in the FPGA sense would include searching for the
specific resource set that fails. I generally see design-for-test as
using ATE only to screen for dangerous hard failures (power faults) and
completely dead devices.

3) Defect map dependent PAR is necessarily unique to each device with a
defect, so you wind up not being able to use the same bitstream for each
copy of a product. Fine for onesy-twosy, but a nightmare for anything
that is going into production. The administration cost would far exceed
the savings even if you get the parts for free.
Production in an RC world ... no problem. Production for an embedded
design that is not defect-aware ... a complete nightmare. Designing for
test and designing for defect management, I believe, are not optional ...
even for embedded.

4) Each part would need to come with a defect map stored electronically
somewhere. Since the current parts have no non-volatile storage, that
means a separate electronic record has to be kept for each part. This
is expensive to administer for everyone involved, from the manufacturer
through the distributors to the end user. Again, the administration costs
would overshadow any savings for parts with a reasonable yield.
Seems that it can be completely transparent with very, very modest
effort. The parts all have non-volatile storage for configuration. If
the defect list is stored with the bitstream, then the installation
process to that storage just needs to read the defect list out before
erasing it, and merge the defect list into the new bitstream as the part
is linked (placed and routed) for that system.
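As a sketch of that install-time merge (the flash image layout and the
defect-aware PAR entry point are both assumptions - no current vendor
flow exposes such a hook):

from dataclasses import dataclass

@dataclass
class FlashImage:
    bitstream: bytes
    defect_list: list           # e.g. ["PIP X12Y34", ...]

def defect_aware_par(netlist, avoid):
    # Placeholder for a PAR run with "do not use" constraints;
    # an assumed capability, not an existing tool.
    return ("<%s routed around %d defects>" % (netlist, len(avoid))).encode()

def reinstall(old_image, netlist):
    defects = old_image.defect_list          # read out before erasing
    new_bits = defect_aware_par(netlist, avoid=defects)
    return FlashImage(bitstream=new_bits, defect_list=defects)

On every reinstall the defect list is preserved across the erase, so
the map travels with the part for its whole service life.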
With a system-level design based on design for test and design for
defect management, the costs are ALWAYS in favor of defect management,
as it increases yields at the manufacturer, and extends life in the field
by making the system tolerant of intermittents that escape ATE and of
life-induced failures like electromigration.

5) Timing closure has to be considered when re-spinning an FPGA
bitstream to avoid defects. In dense high performance designs, it may
be difficult to meet timing in a good part, much less one that has to
allow for any route to be moved to a less direct routing.
In RC that is not a problem ... it's handled by design. For embedded
designs, that is a different problem.
 
Peter Alfke wrote:
Are we continuing this thread until John (aka Mr Toy) has made a
complete fool of himself?
His ranting is neither coherent nor entertaining nor amusing anymore.
Peter Alfke
Your turn.
 
fpga_toys@yahoo.com wrote:

Seems that it can be completely transparent with very, very modest
effort. The parts all have non-volatile storage for configuration. If
the defect list is stored with the bitstream, then the installation
process to that storage just needs to read the defect list out before
erasing it, and merge the defect list into the new bitstream as the part
is linked (placed and routed) for that system.
With a system-level design based on design for test and design for
defect management, the costs are ALWAYS in favor of defect management,
as it increases yields at the manufacturer, and extends life in the field
by making the system tolerant of intermittents that escape ATE and of
life-induced failures like electromigration.
Which reconfigurable FPGAs would those be with the non-volatile
bitstreams? I'm not aware of any. Posts like these really make me
wonder whether you've done any actual FPGA design. They instead
indicate to me that perhaps it has all been back-of-the-envelope concept
stage stuff with little if any carry-through to a completed design
(which is fine, but it has to be at least tempered somewhat with actual
experience garnered from those who have been there). In particular,
your concerns about power dissipation being stated on the data sheet,
your claims of high performance using HLLs without getting into hardware
description, your complaints about tool licensing while not seeming to
understand the existing tool flow very well, the handwaving in the
current discussion to convince us that defect mapping is
economically viable for FPGAs, and now this assertion that all the parts
have non-volatile storage, sure make it sound like you don't have the
hands-on experience with FPGAs you'd like us to believe you have.


5) Timing closure has to be considered when re-spinning an FPGA
bitstream to avoid defects. In dense high performance designs, it may
be difficult to meet timing in a good part, much less one that has to
allow for any route to be moved to a less direct routing.


In RC that is not a problem ... it's handled by design. For embedded
designs, that is a different problem.
What are you doing differently in the RC design then? From my
perspective, the only ways to tolerate changes in
the PAR solution and still make timing are either to leave a
considerable amount of excess performance margin (i.e., not running the
parts at the high performance/high density corner), or to spend an
inordinate amount of time looking for a suitable PAR solution for each
defect map, regardless of how coarse the map might be.

From your previous posts regarding open tools and use of HLLs, I
suspect it is more on the leaving-lots-of-performance-on-the-table side
of things. In my own experience, the advantage offered by FPGAs is
rapidly eroded when you don't take advantage of the available
performance. However, you also had a thread a while back where you were
overly concerned about thermal management of FPGAs, claiming that your
RC designs could potentially trigger a mini China-syndrome event in your
box. If you are leaving enough margin in the design so that it is
tolerant of fortuitous routing changes to work around unique defects,
then I sincerely doubt you are going to run into the runaway thermal
problems you were concerned with. I've got a number of very full
designs in modern parts (V2P, V4) clocked at 250-400 MHz that function
well within the thermal spec with at most a passive heatsink and modest
airflow. Virtually none of those designs would tolerate a quick reroute
to avoid a defect on a critical route path without going through an
extensive reroute of signals in that region, and that is assuming there
were the necessary hooks in the tools to mark routes as 'do not use' (I
am not aware of any hooks like that for routing, only for placement).

Still, I'd like to hear what you have to say. If nothing else, it has
sparked an interesting conversation. Having done some work in the RC
area, and having done a large number of FPGA designs over the last
decade (my 12-year-old business is exclusively FPGA design, with a heavy
emphasis on high performance DSP applications), most of which are
pushing the performance envelope of the FPGAs, I am understandably very
skeptical about your chances of achieving all your stated goals, even if
you did get everything you've complained about not having so far.

Show me that my intuition is wrong.


 
Ray Andraka wrote:
fpga_toys@yahoo.com wrote:

Seems that it can be completely transparent with very, very modest
effort. The parts all have non-volatile storage for configuration. If
the defect list is stored with the bitstream, then the installation
process to that storage just needs to read the defect list out before
erasing it, and merge the defect list into the new bitstream as the part
is linked (placed and routed) for that system.
With a system-level design based on design for test and design for
defect management, the costs are ALWAYS in favor of defect management,
as it increases yields at the manufacturer, and extends life in the field
by making the system tolerant of intermittents that escape ATE and of
life-induced failures like electromigration.


Which reconfigurable FPGAs would those be with the non-volatile
bitstreams?
I think John meant storing the info in the ConfigFlashMemory;
thus the read-erase-replace steps.
... but, you STILL have to get this info into the FIRST design somehow....

-jg
 
