Guest
Ray Andraka wrote:
> Which reconfigurable FPGAs would those be with the non-volatile
> bitstreams? I'm not aware of any.

What are XC18V04's? Magic ROMs? What are the Platform Flash parts?
Magic ROMs? They are CERTAINLY non-volatile every time I've checked.
In fact, non-volatile includes disks, optical, and just about any other
medium that doesn't go poof when you turn the power off.
> ... and now this assertion that all the parts have non-volatile
> storage sure makes it sound like you don't have the hands-on
> experience with FPGAs you'd like us to believe you have.

Ok Wizard God of FPGAs ... just how do you configure your FPGAs
without having some form of non-volatile storage handy? Whatever the
configuration bit stream source is, if it is reprogrammable ... IE
ignore 17xx PROMs ... you can store the defect list there.
UNDERSTAND?

Now, the insults are NOT -- I REPEAT NOT -- being civil.
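The storage point above can be sketched in a few lines. This is purely
illustrative -- the file layout, names, and JSON encoding are my own
assumptions, not any vendor format. The idea is only that any rewritable
bitstream source (flash, disk, whatever) has room to carry a per-device
defect list alongside the configuration image, for the placer to read back:

```python
# Illustrative only: keep a per-device defect list next to the
# bitstream in the same rewritable configuration store, and read it
# back before place and route.
import json

def save_config(path: str, bitstream: bytes,
                defects: list[tuple[int, int]]) -> None:
    """Write the bitstream plus its defect list (LUT x, y coords)."""
    with open(path + ".bit", "wb") as f:
        f.write(bitstream)
    with open(path + ".defects.json", "w") as f:
        json.dump(defects, f)

def load_defects(path: str) -> set[tuple[int, int]]:
    """Defect coordinates the runtime placer must route around."""
    with open(path + ".defects.json") as f:
        return {tuple(d) for d in json.load(f)}
```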
> What are you doing different in the RC design then?

With RC there is an operating system, complete with disk-based
filesystem. The intent is to do fast (VERY FAST) place and route on
the fly.
> From my perspective, the only ways to be able to tolerate changes in
> the PAR solution and still make timing are to either be leaving a
> considerable amount of excess performance margin (ie, not running the
> parts at the high performance/high density corner), or spending an
> inordinate amount of time looking for a suitable PAR solution for
> each defect map, regardless of how coarse the map might be.

You are finally getting warm. Several times in this forum I discussed
what I call "clock binning", where the FPGA accel board has several
fixed clocks arranged as integer powers. The dynamic runtime linker
(very fast place and route) places, routes, and assigns the next
slowest clock that matches the code block just linked. The concept is
to use the fastest available clock at which the code block just linked
meets timing -- NOT to change the clocks to fix the code.
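The clock-binning selection itself is trivial, which is the point -- it
costs the runtime linker almost nothing. A minimal sketch, where the
function name and the specific clock values (integer-power steps) are
my own illustrative assumptions:

```python
# Sketch of "clock binning": assign a freshly placed-and-routed code
# block the fastest fixed board clock it still meets timing at.
# The clock values and names here are illustrative assumptions.

BOARD_CLOCKS_MHZ = [400, 200, 100, 50]  # fixed, integer powers apart

def bin_clock(block_fmax_mhz: float) -> float:
    """Return the fastest fixed clock <= the block's achievable Fmax.

    The clocks are never retuned to fit the code; the code is simply
    dropped into the next slowest bin where it makes timing.
    """
    for clk in BOARD_CLOCKS_MHZ:          # fastest first
        if clk <= block_fmax_mhz:
            return clk
    raise ValueError("block fails timing even at the slowest clock")

# A block whose critical path allows 237 MHz lands in the 200 MHz bin:
# bin_clock(237.0) -> 200
```

A block that could have run at 237 MHz gives up 37 MHz of best-case
performance -- the margin that makes the fast, defect-tolerant PAR
workable at all.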
> From your previous posts regarding open tools and use of HLLs, I
> suspect it is more on the leaving lots of performance on the table
> side of things.

Certainly ... it may not be hardware optimized to the picosecond. Some
will be, but that is a different problem. Shall we discuss every
project you have done in 12 years as though it was the SAME problem
with identical requirements? I think not. So why do you for me?

> In my own experience, the advantage offered by FPGAs is rapidly
> eroded when you don't take advantage of the available performance.

The performance gains are measured against single-threaded CPUs with
serial memory systems. The performance gains come from high degrees of
parallelism with the FPGA. Giving up a little of the best case
performance is NOT a problem. AND if it was, for a large dedicated
application, then by all means, use traditional PAR and fit the best
case clock to the code body.
> If you are leaving enough margin in the design so that it is tolerant
> to fortuitous routing changes to work around unique defects, then I
> sincerely doubt you are going to run into the runaway thermal
> problems you were concerned with.

This is a completely different problem set than that particular
question was addressing. That problem case was about hand-packed
serial-parallel MACs doing Red-Black ordered simulations with kernel
sizes between 80-200 LUTs, tiled in tight, running at best case clock
rate. 97% active logic. VERY high transition rates. About the only
thing worse would be purposefully toggling everything.
A COMPLETELY DIFFERENT PROBLEM is compiling arbitrary C code and
executing it with a compile, link, and go strategy. An example is a
student iteratively testing a piece of code in an edit, compile, and
run sequence. In that case, getting the netlist bound to a reasonable
set of LUTs quickly and running the test is much more important than
extracting the last bit of performance from it.

Like it or not .... that is what we mean by using the FPGA to EXECUTE
netlists. We are not designing highly optimized hardware. The FPGA is
simply a CPU -- a very parallel CPU.
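The "bind quickly, don't optimize" trade-off can be sketched as a toy
first-fit binder. Everything here -- the linear site numbering, the
function name, treating placement as a 1-D allocation -- is a deliberate
simplification of mine, not real PAR; the point is only that a greedy
pass that also skips defective sites is fast enough for an
edit-compile-run loop:

```python
# Toy sketch of fast netlist binding: first-fit assignment of netlist
# blocks to free LUT sites, sliding past any defective sites. No
# attempt at wire length or timing optimization -- binding speed is
# the whole point. Site numbering and names are illustrative.

def first_fit_bind(block_sizes, total_sites, defective):
    """Greedily map each block to the next run of good LUT sites.

    Returns a list of (start_site, size), one per block, in order.
    """
    placements, site = [], 0
    for size in block_sizes:
        while True:
            if site + size > total_sites:
                raise RuntimeError("out of LUT sites")
            span = range(site, site + size)
            if any(s in defective for s in span):
                site += 1              # slide past the defect
                continue
            placements.append((site, size))
            site += size
            break
    return placements
```

For example, with site 1 marked defective, a 3-LUT block slides to
start at site 2 rather than triggering any re-optimization.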
> Show me that my intuition is wrong.

First, you have taken and merged several different concepts, as though
they were somehow the same problem .... from various posting topics
over the last several months.
Surely we can distort anything you might want to present by taking your
posts out of context and arguing them in the worst possible combination
against you.
Let's try - ONE topic, one discussion.
Seems that you have made up your mind. As you have been openly
insulting and mocking ... have a good day. When you are really
interested, maybe we can have a respectful discussion. You are pretty
clueless today.