Fast Counter

On Sep 17, 12:14 pm, valtih1978 <d...@not.email.me> wrote:
Let's suppose you compile RTL directly into the target FPGA technology
right away and thus achieve the best FPGA implementation ever possible.
How do you explain to Jessica why you are still 10x behind an ASIC?
I already explained the reason in my first post on this topic over a
week ago; perhaps you should read the first half of the post.
http://groups.google.com/group/comp.lang.vhdl/browse_frm/thread/73c00c74f6775d11/173b156f9e0a5825?hl=en#173b156f9e0a5825

To reiterate a bit, ASICs are single-function parts, while FPGAs are
run-time programmable. At the lowest level, both are built on the same
basic technology and will have the same speed at that level (i.e., the
transistor level).

In order to provide run-time programmable parts, FPGAs are designed
such that the end user does not have direct control all the way down
to the transistor level. The primitive elements available to a user
for implementing logic in an FPGA are mostly look-up tables and
flip-flops. There are no 'gates' that the user has control over.

FPGAs exist at all because:
- There is market demand for a component that implements arbitrary
logic, where the ability to implement any design change is not limited
by the FPGA, nor does it require payment to the FPGA supplier to
implement the change. In other words, the cost and implementation time
for a design change are completely under the control of the designer
who *uses* the FPGA, not the supplier of the FPGA.
- Other technologies such as ASICs and CPLDs have not been able to
crush FPGAs out of the market. In fact, the opposite has been
happening for a long time: ASIC and CPLD design starts are being
squeezed out by FPGA designs.

The 'design cost' that a user will pay for choosing an FPGA over an
ASIC is speed and power. The market currently supports many niches
for implementing logic designs. FPGAs, CPLDs and ASICs fill different
niches; each is optimal for certain designs and sub-optimal for
others...that's the way it is, get on with it.

Kevin Jennings
 
On Sep 17, 2:55 pm, valtih1978 <d...@not.email.me> wrote:
Actually, it was a rhetorical question, intended to show that whether
the mapping to the FPGA is immediate or goes through a virtual gate
representation is not important for FPGA vs. ASIC performance.
It appears that you don't even read your own postings. Your stated
question was "How do you explain to Jessica why you are still 10x
behind an ASIC?" That's not a very good example of a 'rhetorical question'...


Regarding your marketing manifesto, adding that "FPGAs are designed such
that the end user does not have direct control all the way down to the
transistor level" does not add very much to it.
Actually it has everything to do with 'it', but you do not seem to be
understanding 'it'. In this case, 'it' is the difference in
system-level performance of an ASIC versus an FPGA. The reason for that
difference has to do with the fact that FPGA manufacturers saw a
market need for a device that can implement arbitrary logic (like an
ASIC can) but is user programmable. In order to implement the 'user
programmable' part of their product, some of the potential performance
of the raw silicon technology was used, leaving less performance for
the end user. FPGA manufacturers were not the first to see that need
and market such a part; they are one of many.

How do you run your
design on an FPGA if you have no control over its "gates"?
Here you're wrong on at least a couple of fronts:
- FPGAs implement logic with lookup table memory, not in logic gates
(a minimal sketch follows this list).
- Since one can implement logic with lookup table memory and no gates,
the lack of 'control over its gates' is not relevant...there are no
'gates' to control, and yet functional designs can be implemented just
fine.
- 'Gates' are not the real primitive device, they are themselves an
abstraction. Transistors are the primitive. Control of voltage,
current and charge is the game.
- I never said anything about controlling 'gates' in the first place.
What I said was "...does not have direct control all the way down to
the transistor level". 'Transistors' are not 'gates'. Transistors
can be used to implement a 'gate', but the reverse is not true.
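
To illustrate (a minimal, purely hypothetical sketch; the entity name
and function below are my own, not from this thread): an arbitrary
Boolean function written behaviorally is typically reduced by the
synthesis tool to the bits of a single look-up table, and the source
never instantiates a 'gate'.

library ieee;
use ieee.std_logic_1164.all;

entity lut_demo is
  port (
    a, b, c, d : in  std_logic;
    y          : out std_logic
  );
end entity lut_demo;

architecture rtl of lut_demo is
begin
  -- The tool reduces this expression to 16 truth-table bits that are
  -- loaded into one 4-input LUT at configuration time; no gate
  -- primitives appear anywhere in the source.
  y <= (a and b) xor (c nor d);
end architecture rtl;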

Actually, it
says that "we do not allow you to turn our general-purpose computer into
an app-specific one by design".
That's your interpretation...I disagree with it completely, but you
can have that. Computers have a definition (perhaps you should look
up generally accepted definitions), but those generally accepted
definitions do not include 'FPGA' or 'ASIC'. An FPGA or ASIC or
discrete logic gates or even discrete transistors can be used to
implement a computer. However, none of those devices is in any way a
'general-purpose computer' or any other type of computer.

I'm sure that the problem is not the design.
You cannot do that in principle. The FPGA remains a fixed, hardwired,
general-purpose piece of computer hardware.
Not true at all...see previous paragraph...and you should probably
research the definition of computer as well.

It executes the user app at a
higher level.
As does an ASIC design...unless you really think that ASIC designers
design everything down to the transistor level. Gates are an
abstraction.

A high level design language like VHDL can be used to describe an
intended function. That description can be used to implement a design
in many technologies. The technology chosen does not change the 'user
app'; therefore, that 'user app' cannot be at any different level than
if a different technology choice had been used.
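
As a small, hypothetical illustration (the names below are mine, not
from this thread): the description says nothing about LUTs, standard
cells or transistors, and the same source can be handed unchanged to an
FPGA flow or an ASIC flow; the 'user app' it describes is identical
either way.

library ieee;
use ieee.std_logic_1164.all;

entity edge_detect is
  port (
    clk   : in  std_logic;
    d     : in  std_logic;
    pulse : out std_logic  -- one-clock pulse on each rising edge of d
  );
end entity edge_detect;

architecture rtl of edge_detect is
  signal d_q : std_logic := '0';
begin
  -- One flip-flop plus a little combinational logic; an FPGA tool maps
  -- this to a LUT and a register, an ASIC tool to standard cells.
  process (clk) is
  begin
    if rising_edge(clk) then
      d_q <= d;
    end if;
  end process;

  pulse <= d and (not d_q);
end architecture rtl;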

In other words, it emulates the user circuit rather than implementing
it natively.
Not true. From a black box perspective, an FPGA and an ASIC can be
designed to implement exactly the same function. They simply have
different primitive elements that can be manipulated by the designer.
The choice of technology used to implement a design does not imply
that one is an emulation of the other.

Like any emulation, it is 10x slower.
Not true either. A discrete logic gate implementation or a discrete
transistor implementation would be much slower than an FPGA...but they
would not be an emulation as defined by most reasonable sources. But
you appear to suggest with this statement that an implementation that
is 10x slower is an emulation. If so, I've provided the counter-
example to your statement, thereby disproving it.

Perhaps if you peruse the following links and do some more research,
you will discover what the word emulation is generally accepted to
mean:
- http://en.wikipedia.org/wiki/Emulation
- http://www.merriam-webster.com/dictionary/emulation?show=0&t=1316306655

So, you cannot bypass this picture.
No idea what picture you think is being bypassed. You can choose to
use the words 'implementation' and 'emulation' how you want; that's
your choice. However, since those words already have accepted
definitions that are different from what you have chosen, don't expect
to get much acceptance of your usage.

This is the last I have to say on this thread.

Kevin Jennings
 
Actually it has everything to do with 'it', but you do not seem to be
understanding 'it'.

Thanks. Next time, I will know that redundancy adds very much because
"it also has to do with it".

Not true at all...see previous paragraph...and you should probably
research the definition of computer as well.

I know the right definition! Computers are the people who do
computations! FPGAs fall into an absolutely different category!

Not true either. A discrete logic gate implementation or a discrete
transistor implementation would be much slower than an FPGA

Good job. To be more honest, you should have compared the latest
nanoscale integrated desktop processor against large mechanical relay
logic from the '30s. The first computers used that technology. This
way, you would have proven a much stronger thesis: our flexible SW
completely outdoes any HW implementation!

Following this line of reasoning, we can recall that the first
microprocessors ran at 1 MHz. Today FPGAs can emulate them 100 times
faster. Now, people must stop thinking that FPGAs are slower than ASIC
implementations. I just cannot understand why today's 4 GHz processors
can run at 400 MHz at most when implemented in an FPGA.

As does an ASIC design...unless you really think that ASIC designers
design everything down to the transistor level. Gates are an
abstraction.

Transistors are an abstraction. Copper and electrons are an abstraction.
Everything is an abstraction. We like abstractions because they help
us to understand. Xilinx, Synopsys and I use the gate-netlist
abstraction to understand the implementation.


You can choose to use the words 'implementation' and 'emulation' how
you want. However, since those words already have accepted
definitions that are different from what you have chosen, don't
expect to get much acceptance of your usage.

How can the picture of user gates emulated by an FPGA not correspond
to this definition?


This is the last I have to say on this thread.
Thank you for the warning. It would be very nice. We can be prepared.
 
The very name FPGA, which means "gate array", says that an FPGA
provides programmable gates. They are virtual abstractions, as you like
to say, implemented by hard silicon gates at the bottom level. Don't be
afraid to spread this view.
 
On Sep 18, 5:37 am, valtih1978 <d...@not.email.me> wrote:
I just cannot understand why today's 4 GHz processors can run at
400 MHz at most when implemented in an FPGA.
That 4 GHz processor won't run at 4 GHz if it is implemented in ASIC
gates either.

Does that mean that ASICs only emulate circuits too? (rhetorical!)

Andy
 
How can a 4 GHz ASIC, which is capable of running at 4 GHz, not run at
4 GHz? Sounds like a contradiction.
 
On Sep 19, 4:57 am, valtih1978 <d...@not.email.me> wrote:
How can a 4 GHz ASIC, which is capable of running at 4 GHz, not run at
4 GHz? Sounds like a contradiction.
No, just function-specific limitations on clock rate. Depends on what
you are trying to do on the chip. Just because a technology is rated
for a given maximum clock rate does not mean you can calculate pi to
the millionth decimal place in one clock cycle on it.
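
To make 'function-specific' concrete (a purely illustrative sketch; the
widths and names are my own assumptions): the single-cycle
multiply-accumulate below has a much deeper register-to-register path
than a plain counter, so on the same device it will close timing at a
much lower clock rate, whatever the silicon is rated for.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity single_cycle_mac is
  port (
    clk  : in  std_logic;
    a, b : in  unsigned(15 downto 0);
    acc  : out unsigned(31 downto 0)
  );
end entity single_cycle_mac;

architecture rtl of single_cycle_mac is
  signal acc_r : unsigned(31 downto 0) := (others => '0');
begin
  process (clk) is
  begin
    if rising_edge(clk) then
      -- One clock period must absorb the full 16x16 multiplier delay
      -- plus the 32-bit adder delay; that path, not the device's
      -- headline clock rating, sets the achievable frequency.
      acc_r <= acc_r + (a * b);
    end if;
  end process;

  acc <= acc_r;
end architecture rtl;

Pipelining the multiply (or using a hard multiplier block where one
exists) raises the achievable clock at the cost of latency, which is
exactly the kind of function-specific trade-off meant here.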

Andy
 
Get a CPU with an 800 MHz FSB and put NOPs on the bus :)

On 8/09/2011 5:07 AM, Jessica Shaw wrote:
Hi,

I need a 700 MHz to 800 MHz synchronous 16-bit counter. The counter
will also have Start, Reset and Stop pins.

Reset will initialize the counter to zero. Start will let the counter
run on each rising edge of the 700 or 800 MHz clock. And Stop will
stop the counter, and the user will be able to read the value.

I do not know

1. What FPGA or CPLD will be able to do this task at the
above-mentioned high frequency?
2. Do I need a PLL inside the FPGA or CPLD to produce such a clock?
3. How can I generate this kind of clock?

Any advice will be appreciated.

jess
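
For reference, a minimal behavioural sketch of the requested counter
(the entity, port names and the Start/Stop priority are my own
assumptions, and questions 1-3 about device choice and clock generation
are not addressed by it). Whether such a counter closes timing at
700-800 MHz depends entirely on the device and speed grade, and the
frozen value would normally be read out in a slower clock domain.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity fast_counter is
  port (
    clk   : in  std_logic;                      -- 700-800 MHz clock
    reset : in  std_logic;                      -- synchronous, clears the count
    start : in  std_logic;                      -- begins counting
    stop  : in  std_logic;                      -- freezes the count for read-out
    count : out std_logic_vector(15 downto 0)
  );
end entity fast_counter;

architecture rtl of fast_counter is
  signal cnt     : unsigned(15 downto 0) := (others => '0');
  signal running : std_logic := '0';
begin
  process (clk) is
  begin
    if rising_edge(clk) then
      if reset = '1' then
        cnt     <= (others => '0');
        running <= '0';
      else
        -- Start takes priority over Stop if both are high (an assumption).
        if start = '1' then
          running <= '1';
        elsif stop = '1' then
          running <= '0';
        end if;
        if running = '1' then
          cnt <= cnt + 1;
        end if;
      end if;
    end if;
  end process;

  count <= std_logic_vector(cnt);
end architecture rtl;

In practice, getting anywhere near 700-800 MHz usually means leaving
little more than the counter's own carry chain between flip-flops and
moving the captured value into a slower clock domain before reading it.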
 
