pcb&bitstream

Thomas Womack <twomack@chiark.greenend.org.uk> wrote:

(snip, someone wrote)
I don't follow. Why would it take 2000 FPGAs to do what you can do
with 100 PCs?

10^18 per day = 10^13 per second = 10^9.7 per FPGA-second, according
to the figures he's using. Which might be a 100MHz FPGA clock and 80
units on the FPGA, or 25MHz and 300 units.

The PCs are 2.5GHz quad-cores, so there's the factor 100; SSE gets you
sixteen units rather than eighty, but the much faster clocks make up
for it.
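
For anyone checking the arithmetic, the figures work out as follows (a quick
Python sketch; the 2000-FPGA and 100-PC counts, clock rates and unit counts
are simply the numbers quoted above):

    # Back-of-envelope check of the throughput figures quoted in this thread.
    ops_per_day = 1e18
    ops_per_second = ops_per_day / 86400              # ~1.2e13

    # FPGA side: 2000 FPGAs sharing the load.
    per_fpga_second = ops_per_second / 2000           # ~5.8e9, about 10^9.76
    print(f"per FPGA-second: {per_fpga_second:.2e}")
    print(f"100 MHz x 80 units  = {100e6 * 80:.2e}")  # 8.0e9, same ballpark
    print(f"25 MHz  x 300 units = {25e6 * 300:.2e}")  # 7.5e9

    # PC side: 100 machines, 2.5 GHz quad-cores, 16 SSE units per core per cycle.
    per_pc_second = ops_per_second / 100              # ~1.2e11
    print(f"per PC-second: {per_pc_second:.2e}")
    print(f"2.5 GHz x 4 cores x 16 units = {2.5e9 * 4 * 16:.2e}")  # 1.6e11
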
It has been done on SSE; Google for 699 Rognes Seeberg and it comes
up at the top of the list.

(this is the problem I run into whenever considering how to do
number-theory really fast on FPGAs: a Spartan 3 has a hundred 17x17
multipliers running at 200MHz, a cheap AMD CPU has four 64x64
multipliers running at 2500MHz and an expensive one has twelve)
Conveniently my problem has no multiplies in it.

-- glen
 
The recent rise in SPAM postings that are not filtered out by Google
Groups almost made me retreat completely to the Xilinx Forums, but I'm
still here for now.
Ed,

Please stick about.

I'm predominantly an Altera user, but your input here is important for the
'community', such as it is. And I know you'll be here for a project or two I
do with Xilinx devices :)

Do you have to use Google groups?

I pay for news.individual.net access (10 Euros/year) which has really good
spam filters but I think there are a few free usenet servers with good
filtering.


Please keep it up.

Nial.
 
"Nial Stewart" <nial*REMOVE_THIS*@nialstewartdevelopments.co.uk>
writes:

I pay for news.individual.net access (10 Euros/year) which has really good
spam filters but I think there are a few free usenet servers with good
filtering.
I'm using news.eternal-september.org, which is free, but you have to
register. I almost never see any spam.

See http://eternal-september.org/

//Petter
--
..sig removed by request.
 
Hi !

geobsd wrote:
hi all
rick :
I wanted to use chunks of bitstream, assembled in conformance with my model,
to do the processing in place of the CPU.
Uhhh... you can do that. It's called partial reconfiguration.
I knew about partial reconfiguration;
it's OK for some "work" ;)
please take the following aspects into account :

- what rick described is a system where the configuration chunks
are precomputed. your own system is meant to recompute configurations on the fly,
which is another, different story : rick meant "static" reconfiguration
(the circuit has predefined functions, they are just swapped during use),
you want to have "dynamic" reconfiguration where you don't know
in advance what the circuits will do.

- there is a difference between "i've seen it done once" and "i do it usually",
between "it's possible" and "it's common practice". Xilinx does allow partial
reconfiguration on the more expensive parts because the cheap parts go to
standard applications where reconfiguration is not considered practical:
too much of a problem for little gain, etc.

- so in practice, it seems to me that dynamic partial reconfiguration is a
nice, but unused and marginally useful feature, and only one of the many
benefits of FPGA. I can do without, and many others do. Sure, we are engineers
and solve real-life, industrial problems, we are not AI researchers :)

OTOH several FPGA families have "full" dynamic reconfiguration;
I seem to remember that the S3 can select one out of several bitstreams
from its local Flash storage, depending on some external pin configurations.
it's less sexy than partial reconfiguration but useful in more places :)

hope this helps,
yg
--
http://ygdes.com / http://yasep.org
 
hi all
rick :
I wanted to use chunks of bitstream, assembled in conformance with my model,
to do the processing in place of the CPU.

Uhhh... you can do that.  It's called partial reconfiguration.  
I knew about partial reconfiguration;
it's OK for some "work" ;)
Some other things need a complete one, so it's not the full joy.
I was going to use this once, a long time ago, with the Spartan
family.  But I don't think this ever materialized for the Spartans.
But you can do this with the Virtex parts. The partial bitstreams are
stored in a file or ROM and a controlling CPU sends them to the FPGA.
I think you can even build the CPU into the static part of the FPGA
design and it can reload the partial bitstreams itself!
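
To make that flow concrete, here is a rough sketch of what the
controlling-CPU side could look like, assuming an embedded Linux host where
the configuration port (ICAP or SelectMAP) is exposed through some device
node. The /dev/icap0 path and the partial-bitstream file names are invented
for illustration; they are not a documented driver interface.

    # Hypothetical sketch: a controlling CPU streaming precomputed partial
    # bitstreams into an FPGA's configuration port. Device path and file
    # names are illustrative assumptions, not a real interface.

    PARTIAL_BITSTREAMS = {
        "uart_if":  "partials/daughter_uart.bit",
        "adc_if":   "partials/daughter_adc.bit",
        "dsp_link": "partials/daughter_dsp.bit",
    }

    CHUNK = 4096  # stream the file through in small pieces

    def load_partial(module_name, icap_dev="/dev/icap0"):
        """Push the partial bitstream for one daughter-card interface."""
        path = PARTIAL_BITSTREAMS[module_name]
        with open(path, "rb") as src, open(icap_dev, "wb") as icap:
            while True:
                piece = src.read(CHUNK)
                if not piece:
                    break
                icap.write(piece)

    if __name__ == "__main__":
        # e.g. one site was fitted with an ADC daughter card at power-up
        load_partial("adc_if")
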
I saw some board sellers stating their Spartan 3E boards do it, so I
bought Spartan 3E ;)
thanks Rick & all
 
On 16 Mrz., 16:34, rickman <gnu...@gmail.com> wrote:
On Mar 14, 8:46 pm, Kolja Sulimma <ksuli...@googlemail.com> wrote:

On 13 Mrz., 01:46, rickman <gnu...@gmail.com> wrote:

since a bad bitstream has the potential of frying an FPGA.

This argument is invalid.

You can fry an FPGA with VHDL and vendor synthesis software.
This has been demonstrated at the FPL conference a decade ago.

 It doesn't matter if there are other ways
to fry a part.  The point is that the vendors exert control over the
design software so that they have control over this sort of problem.
It doesn't matter if they prevent you 100% from doing damage to the
chips.  They take responsibility if you are using their tools.
But how would documenting the bitstream format make this issue worse?
I could still expect the vendor tools to be correct, couldn't I?

Kolja
 
don't take offence, whygee
please take the following aspects into account :

  - what rick described is a system where the configuration chunks
    are precomputed. your own system is meant to recompute configurations on the fly,
    which is another, different story : rick meant "static" reconfiguration
    (the circuit has predefined functions, they are just swapped during use),
    you want to have "dynamic" reconfiguration where you don't know
    in advance what the circuits will do.
I know what is called partial reconfiguration;
it is documented;
no complaint about it from me.
  - there is a difference between "i've seen it done once" and "i do it usually",
    between "it's possible" and "it's common practice".
Google turns up many people saying it doesn't work properly.

  - so in practice, it seems to me that dynamic partial reconfiguration is a
    nice, but unused and marginally useful feature, and only one of the many
    benefits of FPGA. I can do without, and many others do. Sure, we are engineers
    and solve real-life, industrial problems, we are not AI researchers :)
OK, you are driving the fly; I wrote to you that I also wanted to study a
new kind of AI (for free).
The greatest need for truly dynamic bitstreams is for this.
It's not because I'm not a "pro" electronics engineer or researcher that you
can decide FPGAs are not for me.
OTOH several FPGA families have "full" dynamic reconfiguration;
I seem to remember that the S3 can select one out of several bitstreams
from its local Flash storage, depending on some external pin configurations.
it's less sexy than partial reconfiguration but useful in more places :)
I know about it too;
more places for your kind of job.
hope this helps,
I hope the flies will not eat you
 
On Mar 17, 2:30 pm, Kolja Sulimma <ksuli...@googlemail.com> wrote:
But how would documenting the bitstream format make this issue worse?
I could still expect the vendor tools to be correct, couldn't I?
Kolja, it seems honesty is the problem, not only from buyers but also
from makers, who can sell non-conforming products without the fear of an
external certification tool!
 
On Mar 17, 9:30 am, Kolja Sulimma <ksuli...@googlemail.com> wrote:
On 16 Mrz., 16:34, rickman <gnu...@gmail.com> wrote:



On Mar 14, 8:46 pm, Kolja Sulimma <ksuli...@googlemail.com> wrote:

On 13 Mrz., 01:46, rickman <gnu...@gmail.com> wrote:

since a bad bitstream has the potential of frying an FPGA.

This argument is invalid.

You can fry an FPGA with VHDL and vendor synthesis software.
This has been demonstrated at the FPL conference a decade ago.

 It doesn't matter if there are other ways
to fry a part.  The point is that the vendors exert control over the
design software so that they have control over this sort of problem.
It doesn't matter if they prevent you 100% from doing damage to the
chips.  They take responsibility if you are using their tools.

But how would documenting the bitstream format make this issue worse?
I could still expect the vendor tools to be correct, couldn't I?

Kolja
If the vendor supports users messing about in the bit files in ways
that the vendor has no control over, then they open themselves up to
problems that can not only cost them in returned parts but can also lead to
problems with their reputation. Sure, they can say all day long that
the problem was a user creating a bogus bit file, but reputations can
be ruined by less substantial events.

There is no good reason for users to want the bit stream details.
Xilinx pours tons of money into the tools. I doubt that an outside
source would ever be able to come close to what they do. Clearly
there is little market incentive for them to risk shooting themselves
in the foot.

Rick
 
On Mar 17, 1:11 pm, geobsd <geobsd...@gmail.com> wrote:
On Mar 17, 2:30 pm, Kolja Sulimma <ksuli...@googlemail.com> wrote:
But how would documenting the bitstream format make this issue worse?
I could still expect the vendor tools to be correct, couldn't I?

Kolja, it seems honesty is the problem, not only from buyers but also
from makers, who can sell non-conforming products without the fear of an
external certification tool!
If you are talking about the FPGA makers being afraid of users
"certifying" their products, that is pretty absurd. If their product
doesn't meet the spec, they can always "update" the spec to match.
They do that all the time when their chips first ship. It's called
"qualification".

Rick
 
On Mar 17, 8:27 am, whygee <y...@yg.yg> wrote:
Hi !

- there is a difference between "i've seen it done once" and "i do it usually",
between "it's possible" and "it's common practice". Xilinx does allow partial
reconfiguration on the more expensive parts because the cheap parts go to
standard applications where reconfiguration is not considered practical:
too much of a problem for little gain, etc.
Actually, there is very little market justification for partial
reconfiguration in general. That is why it has taken Xilinx so long
to get it working. But from a user's perspective it is the low end
parts where it would be most useful. The only real need for partial
reconfiguration is to be able to use a smaller part. It would be the
low end parts that target the most cost sensitive applications that
would never be built if they can't meet their price target. The high
end parts are used in apps with higher profit margins and so
typically aren't so concerned with optimal costs. In particular, the
Virtex parts, which are the ones that support the full-up partial
reconfiguration, are not nearly as cost-effective as the Spartans.

I had an app for partial reconfiguration a long time back, about 10
years. Xilinx said it would be supported on the Spartan devices, but
just not before my product became obsolete! It would have enabled
much more flexible configuration. The original product had daughter
cards with a small FPGA to interface each one to the DSP. The second
generation product had a single FPGA and would have used partial
configuration (didn't even need the "re") to allow each module to have
a separate mini-bitstream to configure its interface within the FPGA.
With four sites and potentially a dozen different daughter card types,
the number of combinations was far too large to construct all of
them. In the end the lack of partial configuration required the FPGA
bitstream to become a custom part of the board for each customer. Not
nearly as good a business model.


- so in practice, it seems to me that dynamic partial reconfiguration is a
nice, but unused and marginally useful feature, and only one of the many
benefits of FPGA. I can do without, and many others do. Sure, we are engineers
and solve real-life, industrial problems, we are not AI researchers :)
How is partial reconfiguration at all like AI? PR has great potential
applications. Cypress has been selling it for years on their PSOCs
which can cost as little as $1. Their programmable analog and digital
sections allow reconfiguration on the fly so that the same circuitry
can be a data logger for 23 hours and 55 minutes and then become a
modem for 5 minutes to upload the data collected. In fact, today I
saw a viewgraph showing that their PSOC1 devices have moved from
somewhere way down the list into the top 10 or maybe the top 5 of CPUs
sold. Not bad! Their new PSOC 3 and PSOC 5 devices have the same on
the fly reconfigurability, but have much more powerful circuits.


Rick
 
rickman <gnuarm@gmail.com> wrote:
(snip)

If the vendor supports users messing about in the bit files in ways
that the vendor has no control over, then they open themselves up to
problems that can not only cost them in returned parts but can also lead to
problems with their reputation. Sure, they can say all day long that
the problem was a user creating a bogus bit file, but reputations can
be ruined by less substantial events.
Yes, but someone could load random bits in and have the same problem.
If they then return it, the result is the same.

Besides, the vendor could provide a bit verifier that would
verify that the file was legal, however it was generated.
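
As a very rough illustration of what even a minimal checker could look like,
here is a sketch that only inspects the outer wrapper of a Xilinx-style .bit
file (the length-prefixed header fields and the 0xAA995566 sync word, as
informally documented by third parties). The real "legality" checks being
discussed would need the frame-level rules that only the vendor has, so
treat this as a sanity check on the container, not the contents, and treat
the header layout as an assumption.

    # Minimal sanity check of a Xilinx-style .bit container, based on the
    # informally documented header layout (fields 'a'..'e', sync word
    # 0xAA995566). This is NOT a frame-level legality check; it only
    # verifies that the wrapper looks sane before anything is loaded.
    import struct
    import sys

    SYNC_WORD = b"\xAA\x99\x55\x66"

    def check_bit_file(path):
        with open(path, "rb") as f:
            data = f.read()
        pos = 0
        # Skip the fixed preamble: 2-byte length, that many bytes, then 2 more.
        (n,) = struct.unpack_from(">H", data, pos)
        pos += 2 + n + 2
        fields = {}
        while pos < len(data):
            key = chr(data[pos])
            pos += 1
            if key == "e":                    # 'e' introduces the raw bitstream
                (length,) = struct.unpack_from(">I", data, pos)
                pos += 4
                raw = data[pos:pos + length]
                break
            (n,) = struct.unpack_from(">H", data, pos)
            pos += 2
            fields[key] = data[pos:pos + n - 1].decode("ascii", "replace")
            pos += n
        else:
            raise ValueError("no 'e' field / raw bitstream found")

        if len(raw) != length:
            raise ValueError("truncated configuration payload")
        if SYNC_WORD not in raw[:256]:
            raise ValueError("sync word 0xAA995566 not found near payload start")
        return fields, length

    if __name__ == "__main__":
        info, nbytes = check_bit_file(sys.argv[1])
        print(info, f"{nbytes} bytes of configuration data")
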

There is no good reason for users to want the bit stream details.
Xilinx pours tons of money into the tools. I doubt that an outside
source would ever be able to come close to what they do. Clearly
there is little market incentive for them to risk shooting themselves
in the foot.
As I said before, there is some argument for the LUT (ROM) bits.
I suppose also for BRAM initialization, on devices that allow
for that. (Again, ROMs programmed after P&R.)

In the XC4000 days, I was considering designs that would also
need to change bits on the carry logic. I wanted a preprogrammed,
but after P&R, constant adder.

Otherwise, it would be nice to allow for open source tools,
but not really necessary. One can always do netlist generation
for input to the vendor tools, and maybe more than that.

-- glen
 
On 18 Mrz., 06:32, rickman <gnu...@gmail.com> wrote:
They take responsibility if you are using their tools.

But how would documenting the bitstream format make this issue worse?
I could still expect the vendor tools to be correct, don't I?

If the vendor supports users messing about in the bit files in ways
that the vendor has no control over, then they open themselves up to
problems that can not only cost them in returned parts but can also lead to
problems with their reputation.  Sure, they can say all day long that
the problem was a user creating a bogus bit file, but reputations can
be ruined by less substantial events.

There is no good reason for users to want the bit stream details.
Xilinx pours tons of money into the tools.  I doubt that an outside
source would ever be able to come close to what they do.  Clearly
there is little market incentive for them to risk shooting themselves
in the foot.
I agree that there is little market incentive. Probably not even enough
to cover the cost of doing the documentation work.

But I do not buy the other arguments.

CPU manufacturers get along well with documenting features that can
damage chips. As do manufacturers of many products.

Also, we have already established that the tool control only covers some
of the possible ways to damage the chip. Those that can be covered could
easily be checked by a design rule check provided by the manufacturer.
Those that can't will be present in both scenarios.

Also, it is still possible to document only the parts that cannot cause
damage. I do not expect the FPGA to have many problematic bits in the
bitstream. The need for many repeater buffers in the interconnect has
probably all but eliminated any interconnects with multiple drivers.

Kolja
 
hi all

Rick, the point of external certification is to be able to know whether the
devices are fully "working" and really are what was bought.
For their reputation, the makers only have to make certified products.
The makers' software can work around bogus/non-conforming devices, and who
would know?

As for the full spec not being needed: it becomes more needed as more
people use it.

Once again, what is really strange is that CPUs are open-spec while fixed
in use, and FPGAs are the inverse.
I hope FPGAs will be open-spec and more widely used than CPUs in the near future.
 
On Mar 18, 2:47 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
rickman <gnu...@gmail.com> wrote:

(snip)

If the vendor supports users messing about in the bit files in ways
that the vendor has no control over, then they open themselves up to
problems that can not only cost them in returned parts but can also lead to
problems with their reputation.  Sure, they can say all day long that
the problem was a user creating a bogus bit file, but reputations can
be ruined by less substantial events.

Yes, but someone could load random bits in and have the same problem.  
If they then return it, the result is the same.

Besides, the vendor could provide a bit verifier that would
verify that the file was legal, however it was generated.
You don't seem to be able to put on your FPGA vendor hat. They aren't
worried about one chip. They are worried about situations where a
*significant* customer is programming units in production and has
problems. This has two problems. One is that it will create a
problem for them when the chips are returned, not so much the cost,
but the bother. The other is when they have to investigate and spend
time figuring out what is wrong with the customer's operation. If
unapproved third party software is in the loop, it makes things much
harder for them, and they are concerned such cases will be much more
frequent.

Yes, some software to verify that the bit file is good would be nice,
but they don't see a need to go to the trouble. The issue is not that
it is a lot of trouble, but that it requires more effort than they
would get benefit from. In releasing documentation of the bit stream,
what is the upside for an FPGA maker?


There is no good reason for users to want the bit stream details.
Xilinx pours tons of money into the tools.  I doubt that an outside
source would ever be able to come close to what they do.  Clearly
there is little market incentive for them to risk shooting themselves
in the foot.

As I said before, there is some argument for the LUT (ROM) bits.
I suppose also for BRAM initialization, on devices that allow
for that.  (Again, ROMs programmed after P&R.)
That is all supported by vendor supplied tools. No need for third
party tools. This is one of the areas where reverse engineering is so
easy. So if there is a use for an open source tool to do this, why
hasn't one been done at least to this extent?


In the XC4000 days, I was considering designs that would also
need to change bits on the carry logic.  I wanted a preprogrammed,
but after P&R, constant adder.  

Otherwise, it would be nice to allow for open source tools,
but not really necessary.  One can always do netlist generation
for input to the vendor tools, and maybe more than that.
Bingo! Not necessary! FPGA vendors are about profit, just like all
of us. They are already over 50% a software company rather than a
hardware company (I was told this in terms of number of employees
working on new designs). They don't want to add more burden to their
software teams.

I totally get that myself.

Rick
 
On Mar 18, 8:44 am, geobsd <geobsd...@gmail.com> wrote:
hi all

Rick, the point of external certification is to be able to know whether the
devices are fully "working" and really are what was bought.
For their reputation, the makers only have to make certified products.
The makers' software can work around bogus/non-conforming devices, and who
would know?
Since when are chips "certified" by anyone other than the maker? I've
been doing this for over 30 years and I've never heard of anyone
bothering to "certify" a device until after they have reason to
believe there is a problem. When a vendor finds a problem they
provide that info as "errata" and users are warned how to avoid or
minimize the result. If a vendor gave me info on how to "certify"
their parts, I think I would avoid them like the plague. I want THEM
to certify their parts.


for the full spec not needed : it can be more needed as more users use
it

once again what is really strange is cpu are open spec while fixed in
use and fpgas the inverse
i hope fpgas will be open spec and more used than cpu in a near futur
I have no idea why you find this odd. CPUs are simple devices
compared to FPGAs. A CPU executes instructions sequentially with a
handful in the pipeline controlling a few thousand points at the max
with only a much smaller number operating at the same time. In an
FPGA there are literally millions of control points that are all
operating at the same time. This is very much more complex and there
are very, very few individuals who wish to deal with that level of
complexity. Most of us just want to get our work done.

I wish they would open it up too. But they don't and until someone
gives them a reason to do so, they won't.

BTW, you haven't given any clue as to how you would generate a
bitstream if you had the spec. I think even needing to reverse
engineer the bitstream, this would be the hard part. For starters,
you might consider generating HDL or EDIF and running that through
their tools to test your concepts. I am pretty confident you are
years away from needing real time hardware to test on anyway. Instead
of impulsively buying parts you can't use, think about the overall
project and consider ways to reach your goals. If you eventually show
promise, you may get an FPGA vendor to work with you.
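
The "generate HDL and run it through the vendor tools" route can start very
small: a script that emits the function you would otherwise have wanted to
encode as a bitstream chunk, and let the vendor flow do the mapping. A toy
sketch (module and file names are invented for illustration):

    # Toy sketch of the "generate HDL, let the vendor tools map it" route:
    # emit a small Verilog module from a truth table instead of patching
    # LUT bits in a bitstream directly.

    def emit_lut_module(name, n_inputs, init_bits):
        """init_bits: truth-table outputs for inputs 0 .. 2**n_inputs - 1."""
        assert len(init_bits) == 2 ** n_inputs
        init_val = sum(bit << i for i, bit in enumerate(init_bits))
        width = 2 ** n_inputs
        return "\n".join([
            f"module {name} (input [{n_inputs - 1}:0] sel, output o);",
            f"  localparam [{width - 1}:0] INIT = {width}'h{init_val:x};",
            f"  assign o = INIT[sel];",
            f"endmodule",
        ])

    if __name__ == "__main__":
        # 4-input XOR as an example "chunk"
        table = [bin(i).count("1") & 1 for i in range(16)]
        with open("chunk_xor4.v", "w") as f:
            f.write(emit_lut_module("chunk_xor4", 4, table))
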

Rick
 
On Mar 18, 3:52 pm, rickman <gnu...@gmail.com> wrote:
Since when are chips "certified" by anyone other than the maker?  I've
been doing this for over 30 years and I've never heard of anyone
bothering to "certify" a device until after they have reason to
believe there is a problem.  When a vendor finds a problem they
provide that info as "errata" and users are warned how to avoid or
minimize the result.  If a vendor gave me info on how to "certify"
their parts, I think I would avoid them like the plague.  I want THEM
to certify their parts.
A certification that only the seller can verify is not trustworthy.
I have no idea why you find this odd.  CPUs are simple devices
compared to FPGAs.  A CPU executes instructions sequentially with a
handful in the pipeline controlling a few thousand points at the max
with only a much smaller number operating at the same time.  In an
FPGA there are literally millions of control points that are all
operating at the same time.  This is very much more complex and there
are very, very few individuals who wish to deal with that level of
complexity.  Most of us just want to get our work done.
The complexity has nothing to do with this.
Lots of people use PCs without knowing how they work.
With FPGAs becoming better and better, CPUs will become useless.
When that happens, more and more people will complain as I do.
Free programming is the future.
BTW, you haven't given any clue as to how you would generate a
bitstream if you had the spec.  I think even needing to reverse
engineer the bitstream, this would be the hard part.
If I had the spec:
chunks to replace NEON or other instructions would be assembled by my
scheduler (bye-bye context switch);
for the AI project, where a dynamic bitstream is a strong need, they would
be generated by the CPU or FPGA(s), depending on the AI,
with only the limits of the bitstream for my model as a constraint.

you might consider generating HDL or EDIF and running that through
their tools to test your concepts.  I am pretty confident you are
years away from needing real time hardware to test on anyway.  Instead
of impulsively buying parts you can't use, think about the overall
project and consider ways to reach your goals.  If you eventually show
promise, you may get an FPGA vendor to work with you.
whygee told me almost the same ;)
but too late, I bought 5 Spartan 3Es
anyway, for the chunks, HDL plus a little work is OK
for the AI, HDL is impossible; I will not live 1000K millennia!
FPGAs are the basis of it
I'll find a way ;)
thanks Rick
 
rickman wrote:
BTW, you haven't given any clue as to how you would generate a
bitstream if you had the spec.
This opens an interesting line of thought.

Since FPGAs are flexible, why not emulate an FPGA-inside-an-FPGA?

Much like vmware emulates a CPU on a CPU, one could model an (ideal
and minimalistic) FPGA architecture and use the vendor tools once (!)
to implement it for a particular chip. Then all (virtual) bitstream
details are known! Even better, if any problems or shortcomings
become visible during the development of the custom toolchain, the
bitstream format and features can be tweaked to provide a better fit.

With a cleverly chosen virtual architecture, a design could run with
relatively little overhead compared to a native bitstream. I'd say
that anything "less than a magnitude slower" qualifies as good
enough. It would make a modern part an equal or better bang for buck
compared to older chips with officially documented bitstreams (like
stone age Xilinx or Atmel AT40K for example).
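
To give a feel for what "all (virtual) bitstream details are known" means,
here is a tiny software model of such an overlay: a one-dimensional row of
2-input LUT cells, each configured by four truth-table bits of a virtual
bitstream that the custom toolchain would own end to end. The cell structure
and fixed routing are invented purely for illustration; a real overlay would
be written in HDL and mapped once by the vendor tools.

    # Tiny model of a virtual FPGA overlay: a row of 2-input LUT cells.
    # Each cell takes its inputs from the two most recent net values and is
    # configured by 4 truth-table bits of a "virtual bitstream". The
    # architecture is invented for illustration only.

    CELLS = 4
    BITS_PER_CELL = 4  # 2-input LUT truth table

    def run_overlay(vbitstream, primary_inputs):
        """vbitstream: CELLS*BITS_PER_CELL bits; primary_inputs: two bits."""
        assert len(vbitstream) == CELLS * BITS_PER_CELL
        signals = list(primary_inputs)          # growing list of net values
        for c in range(CELLS):
            lut = vbitstream[c * BITS_PER_CELL:(c + 1) * BITS_PER_CELL]
            a, b = signals[-2], signals[-1]     # fixed routing: last two nets
            signals.append(lut[(b << 1) | a])   # LUT lookup
        return signals[-1]

    if __name__ == "__main__":
        # Virtual bitstream for a chain of XOR cells (truth table 0,1,1,0).
        xor_chain = [0, 1, 1, 0] * CELLS
        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "->", run_overlay(xor_chain, [a, b]))
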

It is not only a working solution for the GP's problem. It would also
be a great (and necessary!) research vehicle for the HUGE undertaking
of developing a custom toolchain. After all, any particular
architecture will turn obsolete anyway before the project can finish
successfully, and being able to switch architectures is valuable.

Once the toolchain works, one can still add a backend for native
bitstreams. I'm sure either Xilinx themselves or bitstream hackers
will deliver the necessary details once the project is a tangible
reality (rather than trollish comments).

Best regards,
Marc
 
On Mar 18, 6:18 pm, Marc Jet <jetm...@hotmail.com> wrote:
rickman wrote:
BTW, you haven't given any clue as to how you would generate a
bitstream if you had the spec.

This opens an interesting line of thought.

Since FPGAs are flexible, why not emulate an FPGA-inside-an-FPGA?
Why not, but then not everything in the real FPGA can be used.

Much like vmware emulates a CPU on a CPU, one could model an (ideal
and minimalistic) FPGA architecture and use the vendor tools once (!)
I doubt the vendor tools would work for an unknown FPGA model.

to implement it for a particular chip.  Then all (virtual) bitstream
details are known!  Even better, if any problems or shortcomings
become visible during the development of the custom toolchain, the
bitstream format and features can be tweaked to provide a better fit.
It will take longer than just finding the bitstream spec for a real
model.
It is not only a working solution for the GP's problem.  It would also
be a great (and necessary!) research vehicle for the HUGE undertaking
of developing a custom toolchain.  After all, any particular
architecture will turn obsolete anyway before the project can finish
successfully, and being able to switch architectures is valuable.
Why switch, when you can only fully change the bitstream spec,
or else it doesn't stay an FPGA (arch != model)?
Once the toolchain works, one can still add a backend for native
bitstreams.  I'm sure either Xilinx themselves or bitstream hackers
will deliver the necessary details once the project is a tangible
reality (rather than trollish comments).

Well, I have been complaining since I opened this thread about not having
the bitstream spec, so welcome to the club!
I didn't see any really trollish comments here!?
Best regards,
thanks Marc
 
On 15 Mrz., 02:28, Ed McGettigan <ed.mcgetti...@xilinx.com> wrote:

You can fry an FPGA with VHDL and vendor synthesis software.
This has been demonstrated at the FPL conference a decade ago.

I am quite surprised about this. Can you provide any additional
material on how this was achieved?

There aren't any scenarios, other than internal tri-state contention,
that I can come up with to make this happen with a proven tool chain.
It was at FPL 1999 that someone presented this as a side note in a
presentation about some other topic. They said they were able to damage
Altera FPGAs by instantiating ring counters.
This resulted in spontaneous applause from the Xilinx crowd in the
audience, but the presenter made clear that this attack also applies to
Xilinx FPGAs and that it is computationally infeasible to detect such
attacks.
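
Assuming the structure in question boils down to a combinational loop (the
classic ring-oscillator-style chain), that particular pattern is easy to
spot in a netlist modelled as a graph, as in the sketch below; the netlist
format is invented for illustration, and the presenter's broader point
stands, namely that catching every damaging configuration, especially at
the bitstream level, is a much harder problem.

    # Illustration only: finding a combinational cycle (a ring-oscillator
    # style loop) in a netlist modelled as a directed graph of gates.
    # The netlist format is invented; real damage scenarios are broader.

    def find_combinational_cycle(netlist):
        """netlist: dict gate -> list of gates it drives (combinational edges)."""
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {g: WHITE for g in netlist}
        stack = []

        def dfs(g):
            colour[g] = GREY
            stack.append(g)
            for succ in netlist.get(g, ()):
                if colour.get(succ, WHITE) == GREY:       # back edge: cycle
                    return stack[stack.index(succ):] + [succ]
                if colour.get(succ, WHITE) == WHITE:
                    cycle = dfs(succ)
                    if cycle:
                        return cycle
            colour[g] = BLACK
            stack.pop()
            return None

        for gate in netlist:
            if colour[gate] == WHITE:
                cycle = dfs(gate)
                if cycle:
                    return cycle
        return None

    if __name__ == "__main__":
        # Three inverters driving each other in a ring, plus an innocent gate.
        netlist = {"inv1": ["inv2"], "inv2": ["inv3"], "inv3": ["inv1"],
                   "and1": ["inv1"]}
        print(find_combinational_cycle(netlist))  # ['inv1', 'inv2', 'inv3', 'inv1']
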

I just browsed through the list of papers of that conference
http://www.informatik.uni-trier.de/~ley/db/conf/fpl/fpl1999.html
but can't remember which paper it was.

There were some prominent people from Xilinx present (Peter Alfke,
Steve Trimberger, Steve Guccione and some other Steve). Maybe one of
them remembers.

Kolja Sulimma
cronologic
 
