Are HDLs Misguided?

rickman
Sometimes I wonder if HDLs are really the right way to go. I mainly
use VHDL which we all know is a pig in many ways with its verbosity
and arcane type conversion gyrations. But what bothers me most of all
is that I have to learn how to tell the tools in their "language" how
to construct the efficient logic I can picture in my mind. By
"language" I don't mean the HDL language, but actually the specifics
of a given inference tool.

What I mean is, if I want a down counter that uses the carry out to
give me an "end of count" flag, why can't I get that in a simple and
clear manner? It seems like every time I want to design a circuit I
have to experiment with the exact style to get the logic I want and it
often is a real PITA to make that happen.

For example, I wanted a down counter that would end at 1 instead of 0
for the convenience of the user. To allow a full 2^N range, I thought
it could start at zero and run for the entire range by wrapping around
to 2^N-1. I had coded the circuit using a natural range 0 to
(2^N)-1. I did the subtraction as a simple assignment

foo <= foo - 1;

I fully expected that even if it were flagged as an error in
simulation to load a 0 and let it count "down" to (2^N)-1, it would
work in the real world since I stop the down counter when it gets to
1, not zero. Loading a zero in an N bit counter would work just fine
wrapping around.

But to make the simulation match the real hardware I expected to get,
I thought adding some simple code to handle the wrap-around might be
good, so the assignment was done modulo 2^N. But the synthesized
size blew up to nearly double what it was without the "mod" function,
mostly additional adders! I didn't have time to explore what
caused this, so I just left out the modulo operation and will live with
what I get for the case of loading a zero starting value.
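For concreteness, here is a minimal sketch of the kind of counter I mean. The entity name, ports, and the generic N are illustrative guesses, not my actual code:

```vhdl
-- Hedged reconstruction of the down counter described above.
-- Signal/port names and the generic N are hypothetical.
library ieee;
use ieee.std_logic_1164.all;

entity down_counter is
  generic (N : positive := 8);
  port (
    clk  : in  std_logic;
    load : in  std_logic;
    init : in  natural range 0 to 2**N - 1;
    done : out std_logic
  );
end entity down_counter;

architecture rtl of down_counter is
  signal foo : natural range 0 to 2**N - 1;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if load = '1' then
        foo <= init;
      elsif foo /= 1 then
        -- the "mod" makes the 0 -> 2**N - 1 wrap explicit in simulation;
        -- without it, decrementing from 0 violates the natural range
        foo <= (foo - 1) mod 2**N;
      end if;
    end if;
  end process;

  done <= '1' when foo = 1 else '0';  -- end-of-count flag at 1, not 0
end architecture rtl;
```

In principle the "mod 2**N" should be free in hardware, since it only discards the borrow; that's what makes the doubled synthesis size so puzzling.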

I guess what I am trying to say is I would like to be able to specify
detailed logic rather than generically coding the function and letting
a tool try to figure out how to implement it. This should be possible
without the issues of instantiating logic (vendor specific, clumsy,
hard to read...). In an ideal design world, shouldn't it be pretty
easy to infer logic and to actually know what logic to expect?

Rick
 
rickman wrote:
Sometimes I wonder if HDLs are really the right way to go. I mainly
use VHDL which we all know is a pig in many ways with its verbosity
and arcane type conversion gyrations. But what bothers me most of all
is that I have to learn how to tell the tools in their "language" how
to construct the efficient logic I can picture in my mind. By
"language" I don't mean the HDL language, but actually the specifics
of a given inference tool.

What I mean is, if I want a down counter that uses the carry out to
give me an "end of count" flag, why can't I get that in a simple and
clear manner? It seems like every time I want to design a circuit I
have to experiment with the exact style to get the logic I want and it
often is a real PITA to make that happen.

For example, I wanted a down counter that would end at 1 instead of 0
for the convenience of the user. To allow a full 2^N range, I thought
it could start at zero and run for the entire range by wrapping around
to 2^N-1. I had coded the circuit using a natural range 0 to
(2^N)-1. I did the subtraction as a simple assignment

foo <= foo - 1;

I fully expected that even if it were flagged as an error in
simulation to load a 0 and let it count "down" to (2^N)-1, it would
work in the real world since I stop the down counter when it gets to
1, not zero. Loading a zero in an N bit counter would work just fine
wrapping around.

But to make the simulation match the real hardware I expected to get,
I thought adding some simple code to handle the wrap-around might be
good, so the assignment was done modulo 2^N. But the synthesized
size blew up to nearly double what it was without the "mod" function,
mostly additional adders! I didn't have time to explore what
caused this, so I just left out the modulo operation and will live with
what I get for the case of loading a zero starting value.
Ok. So you didn't have time to explore the issue, but you have all the
time in the world to write a lengthy post spreading FUD and jumping to all
kinds of Big Conclusions?

There is, as is commonly known, no reason why modulo a power of 2 (hint)
would generate additional hardware, and there is overwhelming evidence
that decent synthesis tools handle this just right.

Therefore, if you think you see this, the proper reaction is to be
very intrigued and switch to fanatic bug-hunting mode. Do that please
(or trick others into doing it for you). Chances are that we will not
hear about the issue again.

All the rest is a waste of everybody's time.

I guess what I am trying to say is I would like to be able to specify
detailed logic rather than generically coding the function and letting
a tool try to figure out how to implement it. This should be possible
without the issues of instantiating logic (vendor specific, clumsy,
hard to read...). In an ideal design world, shouldn't it be pretty
easy to infer logic and to actually know what logic to expect?
--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 
On Fri, 10 Dec 2010 20:25:58 -0800 (PST), rickman <gnuarm@gmail.com> wrote:

Sometimes I wonder if HDLs are really the right way to go. I mainly
use VHDL which we all know is a pig in many ways with its verbosity
and arcane type conversion gyrations.
I usually find that the gyrations are a hint to step back and see what aspect of
the design I have missed... I end up needing a few, but YMMV.

But what bothers me most of all
is that I have to learn how to tell the tools in their "language" how
to construct the efficient logic I can picture in my mind. By
"language" I don't mean the HDL language, but actually the specifics
of a given inference tool.

What I mean is, if I want a down counter that uses the carry out to
give me an "end of count" flag, why can't I get that in a simple and
clear manner?
Here the issue appears to be how to get at the carry out of a counter...
the syntax of integer arithmetic doesn't provide an easy way to do that by
default, in any language I know (other than assembler for pre-RISC CPUs, with
their flag registers).

It seems to me that you have two choices ...
(1) implement an n-bit counter, and augment it in some way to recreate the carry
out (unfortunately you are fighting the synthesis tool in the process)

(2) implement an n+1 bit counter, with the excess bit assigned to the carry, and
trust the synthesis tool to eliminate the excess flip-flop at the optimisation
stage...

I am willing to guess the second approach would be simpler. Even if the obvious
optimisation doesn't happen (and I bet it does) it's worth asking if your design
is sensitive to the cost of that FF.
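A sketch of option (2), with hypothetical names (not anyone's actual code) - the extra top bit is the borrow:

```vhdl
-- Hedged sketch of an (N+1)-bit down counter whose top bit serves as
-- the borrow / "end of count" flag. Names are illustrative.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity carry_counter is
  generic (N : positive := 8);
  port (
    clk       : in  std_logic;
    load      : in  std_logic;
    init      : in  unsigned(N - 1 downto 0);
    carry_out : out std_logic
  );
end entity carry_counter;

architecture rtl of carry_counter is
  signal cnt : unsigned(N downto 0);  -- one extra bit for the borrow
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if load = '1' then
        cnt <= '0' & init;  -- borrow bit cleared on load
      else
        cnt <= cnt - 1;
      end if;
    end if;
  end process;

  -- bit N goes high when the N-bit value underflows past zero
  carry_out <= cnt(N);
end architecture rtl;
```

Whether the tool folds that extra bit into the carry chain rather than keeping a separate flip-flop is, of course, exactly the tool-dependence under discussion.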

Can you boil down what you are trying to do (and doesn't work) into a test case?

In an ideal design world, shouldn't it be pretty
easy to infer logic and to actually know what logic to expect?
I would say so. And I'm still hoping to live to see it!

- Brian
 
On 11 Dec, 04:25, rickman <gnu...@gmail.com> wrote:
Sometimes I wonder if HDLs are really the right way to go.  I mainly
use VHDL which we all know is a pig in many ways with its verbosity
and arcane type conversion gyrations.  But what bothers me most of all
is that I have to learn how to tell the tools in their "language" how
to construct the efficient logic I can picture in my mind. By
"language" I don't mean the HDL language, but actually the specifics
of a given inference tool.
I find the tools weird sometimes, but they have their own style for
logic minimization. For example, I had only considered doubling the
memory size by having two routines handle the two high bits of a jump,
and then using bytes instead of 16-bit words. Strange, but it also
makes the hardware smaller!

I have also been considering presetting special values in the
cycle before a general load, instead of an if/else in the same cycle.

I also think the hardest part is specifying to the synthesis tool how
external memory supplies a result after an access delay, and how to
make this delay relative to the synthesized fmax, not just in ns.

Cheers Jacko
 
On Dec 11, 11:44 am, Brian Drummond <brian_drumm...@btconnect.com>
wrote:
On Fri, 10 Dec 2010 20:25:58 -0800 (PST), rickman <gnu...@gmail.com> wrote:
Sometimes I wonder if HDLs are really the right way to go.  I mainly
use VHDL which we all know is a pig in many ways with its verbosity
and arcane type conversion gyrations.

I usually find that the gyrations are a hint to step back and see what aspect of
the design I have missed... I end up needing a few, but YMMV.

But what bothers me most of all
is that I have to learn how to tell the tools in their "language" how
to construct the efficient logic I can picture in my mind. By
"language" I don't mean the HDL language, but actually the specifics
of a given inference tool.

What I mean is, if I want a down counter that uses the carry out to
give me an "end of count" flag, why can't I get that in a simple and
clear manner?  

Here the issue appears to be how to get at the carry out of a counter...
the syntax of integer arithmetic doesn't provide an easy way to do that by
default, in any language I know (other than assembler for pre-RISC CPUs, with
their flag registers).
Well, no, I'm not trying to force the tool to generate a carry out
since I am not using it for anything. I just want a simple counter
and logic to make it detect a final count value of 1. I am pretty
sure I would have gotten that from my original code. But I also want
the counter to roll over to zero at the max value of the counter which
will give me a max count range of 2**N by specifying a value of 0 in
the limit register. To use the carry out for the final count
detection I would have to require the user to program M-1 rather than
programming M or 0 for max M.


It seems to me that you have two choices ...
(1) implement an n-bit counter, and augment it in some way to recreate the carry
out (unfortunately you are fighting the synthesis tool in the process)

(2) implement an n+1 bit counter, with the excess bit assigned to the carry, and
trust the synthesis tool to eliminate the excess flip-flop at the optimisation
stage...

I am willing to guess the second approach would be simpler. Even if the obvious
optimisation doesn't happen (and I bet it does) it's worth asking if your design
is sensitive to the cost of that FF.

Can you boil down what you are trying to do (and doesn't work) into a test case?
Jan doesn't get what I am saying. I'm not that worried about the
particulars of this case. I am just lamenting that everything I do in
an HDL is about describing the behavior of the logic and not actually
describing the logic itself. I am an old school hardware designer. I
cut my teeth on logic in TO packages, hand soldering the wire leads.
I still think in terms of the hardware, not the software to describe
the hardware. Many of my designs need to be efficient in terms of
hardware used and so I have to waste time learning how to get what I
want from the tools. Sometimes I just get tired of having to work
around the tools rather than with them.


In an ideal design world, shouldn't it be pretty
easy to infer logic and to actually know what logic to expect?

I would say so. And I'm still hoping to live to see it!
I'm not sure I will still be working then, if it ever happens. As
hardware becomes more and more cost efficient, I think there is less
incentive to make the tools hardware efficient. I guess speed will
always be important, and minimal hardware is usually the fastest.
But that's not the case when the tools are doing the optimization. I
recently reduced my LUT count 20% by changing the optimization from
speed to area.

Rick
 
On Dec 11, 4:21 pm, jacko <jackokr...@gmail.com> wrote:
I also think the hardest part is specifying to the synthesis tool, how
external memory supplies a result after an access delay, and how to
make this delay relative to the synthesized fmax, not just in ns.
I'm not sure what you are trying to do, but you should be able to
specify a delay in terms of your target fmax. Just define a set of
constants that calculate the values you want. I assume you mean a
delay value to use in simulation such as a <= b after x ns?
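As a sketch of what that might look like (constant names and values are illustrative, not from any particular design):

```vhdl
-- Hedged sketch: deriving simulation delays from a target fmax
-- instead of hard-coding ns values.
constant FMAX_HZ     : real := 100.0e6;          -- target fmax
constant CLK_PERIOD  : time := 1 sec / FMAX_HZ;  -- 10 ns at 100 MHz
constant MEM_LATENCY : time := 3 * CLK_PERIOD;   -- 3-cycle access delay

-- then, in the memory model:
-- b <= a after MEM_LATENCY;
```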

Rick
 
rickman wrote:

Jan doesn't get what I am saying. I'm not that worried about the
particulars of this case. I am just lamenting that everything I do in
an HDL is about describing the behavior of the logic and not actually
describing the logic itself. I am an old school hardware designer. I
cut my teeth on logic in TO packages, hand soldering the wire leads.
I still think in terms of the hardware, not the software to describe
the hardware. Many of my designs need to be efficient in terms of
hardware used and so I have to waste time learning how to get what I
want from the tools. Sometimes I just get tired of having to work
around the tools rather than with them.
Ok, let's talk about the overall message then.

I remember an article from the early days where some guy "proved" that
HDL-based design would never overthrow schematic entry, because
it is obviously better to describe what something *is* than what
it *does*. All ideas come back, also the bad ones :)

HDL-based design was adopted by old school hardware designers, for
lack of other ones. They must have been extremely skeptical. How
did it happen? Synopsys took manually optimized designs from
expert designers and showed that Design Compiler consistently
made them both smaller and faster, and permitted trade-off
optimizations between the two. The better result was obviously
*not* like the original designer imagined it.

The truth is that HDL-based design works better in all respects
than handcrafted logic. It is a no-compromises-required technology,
which is very rare.

Look no further than this newsgroup for active designers who understand
this very well. Their designs are probably just as efficient as yours.
Yet they use coding styles that are much more abstract, and they
are certainly not concerned about where the last mux or
carry-out goes.

In other words, when you make claims about inefficiencies and the need
to fight tools all the time, you had better come up with some very
strong examples - the evidence is against you.

What do you give us? A vague problem with an example of a modulo
operation on a decrementer. Instead of posting the code and
resolving the issue immediately, you give a verbose description
in prose so that we can all start the guessing game. The example
has a critical problem, but you don't know what it is and you refuse
to track it down. Yet you still refer to it to back up your claims.

If that is your standard, why should I take any of your
claims seriously?

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
 
rickman wrote:

Jan doesn't get what I am saying. I'm not that worried about the
particulars of this case.
I'm sorry to bother you with this again, but I am actually worried.
From your description, I tried to reproduce your problem, to no avail.
With or without modulo, it doesn't make the slightest difference.
(Quartus Linux web edition v.10.0).

Perhaps you stumbled on some problematic use case that
we definitely should know about. After all, HDL-based design is
not about specifying an exact gate level implementation, but
about understanding which patterns work well. Perhaps you stumbled
upon a pattern that doesn't and that we should avoid. Please
post your code. Let's not spoil an opportunity to advance the
state of the art.

Of course, you may have good reasons not to post your code, for
example because you found a bug in the meantime. Perhaps you
did modulo 2^N-1 instead of 2^N, just to mention a mistake that
I once made. Let us know, so that we can stop worrying.

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
 
On 12/11/2010 5:25 AM, rickman wrote:
Sometimes I wonder if HDLs are really the right way to go.
[snip]
I guess what I am trying to say is I would like to be able to specify
detailed logic rather than generically coding the function and letting
a tool try to figure out how to implement it. This should be possible
without the issues of instantiating logic (vendor specific, clumsy,
hard to read...). In an ideal design world, shouldn't it be pretty
easy to infer logic and to actually know what logic to expect?
IMHO, using an HDL as if it were schematic entry is rather limiting
and forgoes the high-level abstraction that is so powerful
in terms of description, implementation and maintenance of the code.
I found the following readings very inspiring:

http://www.designabstraction.co.uk/Articles/Advanced%20Synthesis%20Techniques.htm

and

http://mysite.ncnetwork.net/reszotzl/uart.vhd

Al

> Rick
 
On Dec 10, 8:25 pm, rickman <gnu...@gmail.com> wrote:
Sometimes I wonder if HDLs are really the right way to go.  I mainly
use VHDL which we all know is a pig in many ways with its verbosity
and arcane type conversion gyrations.  But what bothers me most of all
is that I have to learn how to tell the tools in their "language" how
to construct the efficient logic I can picture in my mind. By
"language" I don't mean the HDL language, but actually the specifics
of a given inference tool.
<SNIP>

Back in the 8086 days, I had to do the same thing with compilers. I
spent a fair amount of time learning how the code generation phase
worked so I could get the tools to work properly. I remember a "brand-
name" 'C' compiler very carefully generating code to keep the loop
control variable in the CX register, then at the end of the loop
moving CX to AX and adding minus-one. (For those that don't program
86's in assembly, the CX register is a special purpose register for
the 'LOOP' instruction.) Now that processors are real fast, and
memory is very cheap, I don't worry so much about efficient code.

I would draw a parallel to the frustrations you are having with HDLs.
Another parallel I would draw is that when I wanted 'very fast tight
code' I would code certain modules in assembly, and link them with 'C'
routines. When the synthesizer just won't get it right, I draw it a
picture. (Which is one advantage the Acme-Brand tool has over the
Brand-X tool)

Yeah, it's not portable, and it isn't "right". But I'm getting
product out the door.

RK.
 
On Dec 11, 10:44 am, Brian Drummond <brian_drumm...@btconnect.com>
wrote:
What I mean is, if I want a down counter that uses the carry out to
give me an "end of count" flag, why can't I get that in a simple and
clear manner?  

Here the issue appears to be how to get at the carry out of a counter...
the syntax of integer arithmetic doesn't provide an easy way to do that by
default, in any language I know (other than assembler for pre-RISC CPUs, with
their flag registers).

It seems to me that you have two choices ...
(1) implement an n-bit counter, and augment it in some way to recreate the carry
out (unfortunately you are fighting the synthesis tool in the process)

(2) implement an n+1 bit counter, with the excess bit assigned to the carry, and
trust the synthesis tool to eliminate the excess flip-flop at the optimisation
stage...

I am willing to guess the second approach would be simpler.
I have found the 1st approach far simpler, by using a natural subtype
for the counter. Then (count - 1 < 0) is the carry out for a down
counter. Similarly, (count + 1 > 2**n-1) is the carry out for an n bit
up counter. No fighting required.
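As a fragment (declarations and concurrent statements shown together; n and the names are assumed), the idiom might look like:

```vhdl
-- Hedged sketch of the comparison idiom. Because count is declared as
-- an integer subtype, the expression count - 1 (or count + 1) is a
-- plain integer, so the out-of-range comparison is legal and maps onto
-- the adder's carry/borrow.
signal count  : natural range 0 to 2**n - 1;
signal borrow : boolean;  -- down-counter carry out
signal carry  : boolean;  -- up-counter carry out

borrow <= (count - 1 < 0);
carry  <= (count + 1 > 2**n - 1);
```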

Andy
 
On Dec 12, 3:15 am, rickman <gnu...@gmail.com> wrote:
I'm not that worried about the
particulars of this case.  I am just lamenting that everything I do in
an HDL is about describing the behavior of the logic and not actually
describing the logic itself.  I am an old school hardware designer.  I
cut my teeth on logic in TO packages, hand soldering the wire leads.
I still think in terms of the hardware, not the software to describe
the hardware.  Many of my designs need to be efficient in terms of
hardware used and so I have to waste time learning how to get what I
want from the tools.  Sometimes I just get tired of having to work
around the tools rather than with them.
I seem to recall a similar argument when assemblers gave way to higher
level language compilers...

This change in digital hardware design is not unlike the change from
one-man furniture shops to furniture factories. The craftsmen of the
one-man shops painstakingly treated every detail as critical to their
product: a chair. And the result was an exquisite piece of furniture,
albeit at a very high price, and very low volume (unless you hired a
lot of one man shops at the same time).

Circuit designers are no different (being one myself, dating back to
those "I can do that function in one less part" days gone by). But the
target has changed. We no longer need a chair, we need a stadium full
of them. And we need the elevators, climate control, fire suppression,
lighting, and all the other support systems, to go along with them.

Perhaps we should take a step back, and look at what we really need
(hint: a place for a lot of people to watch an event, while seated
most of the time). Now I can optimize my stadium to recognize that all
of my seats don't need to be finely crafted pieces of furniture. But I
don't know that until I focus on the requirements: "What must my
project do?" So, instead of finding a way to describe the project as
a collection of specific chairs, elevators and fire extinguishers, we
need to describe it as a set of desired behaviors, and then, through
some process (hopefully semi-automated), convert that description into
an optimized design for the stadium. Could the craftsman and his tools
have done that?

What do you want from the tools, a collection of exquisitely crafted
chairs, or an efficient stadium?

Andy
 
On Mon, 13 Dec 2010 09:31:30 -0800 (PST), Andy <jonesandy@comcast.net> wrote:

On Dec 11, 10:44 am, Brian Drummond <brian_drumm...@btconnect.com
wrote:

Here the issue appears to be how to get at the carry out of a counter...

It seems to me that you have two choices ...
(1) implement an n-bit counter, and augment it in some way to recreate the carry
out (unfortunately you are fighting the synthesis tool in the process)

(2) implement an n+1 bit counter, with the excess bit assigned to the carry, and
trust the synthesis tool to eliminate the excess flip-flop at the optimisation
stage...

I am willing to guess the second approach would be simpler.

I have found the 1st approach far simpler, by using a natural subtype
for the counter. Then (count - 1 < 0) is the carry out for a down
counter. Similarly, (count + 1 > 2**n-1) is the carry out for an n bit
up counter. No fighting required.
In which case it is the expression (count - 1) or (count + 1) that must be n+1
bits; then perhaps its size (and, for count - 1, its type: integer) need not be
explicitly expressed.

I believe some synthesis tools used to generate rather large elaborations of
this expression (inc/decrement, then comparator) - hence the fighting - but
perhaps none do so any longer.

- Brian
 
On Mon, 13 Dec 2010 00:23:30 +0100
Alessandro Basili <alessandro.basili@cern.ch> wrote:

http://mysite.ncnetwork.net/reszotzl/uart.vhd

Al
Am I missing something, or is the transmitter slightly flawed in this
code? I seem to see the following:

1. At some point, TxState_v is SEND, and you reach TxBitSampleCount_v =
tic_per_bit_g and hence bit_done is true. Also, TxBitCount_v is 7.

2. You enter the "if" block in the SEND case in procedure tx_state. You
set TxBitSampleCount_v to 0, serial_out_v to Tx_v(TxBitCount_v) =
Tx_v(7). You set TxBitCount_v to TxBitCount_v+1 = 8. You notice that
TxBitCount_v=char_len_g=8 and hence set TxState_v to STOP.

3. tic_per_bit_g clocks later, you enter the "if" block in the STOP
case. You set serial_out_v to '1' and TxState_v to IDLE.

4. From this moment, if the application queries the status register,
you will see that TxState_v is IDLE and hence report transmitter ready.
The application could thus immediately strobe another byte of data into
the transmit data register. Then tx_state will transition to TxState_v
= START and, on the next clock, set serial_out_v to '0'.

Problem: this might not have been a full bit-time since you started
sending the '1' stop bit! You never actually guarantee to wait for the
full stop bit to pass before accepting new data from the application in
the transmit data register!
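If that analysis holds, one common fix - sketched here with names modeled on
the description above, as an assumption about the linked code rather than a
quote from it - is to start driving the stop bit when leaving SEND, and only
return to IDLE after a further full bit time:

```vhdl
-- Hedged sketch of a fix for the suspected stop-bit truncation.
when SEND =>
  if bit_done then
    if bit_count = char_len_g - 1 then
      serial_out <= '1';            -- begin driving the stop bit now
      state      <= STOP;
    else
      serial_out <= tx_reg(bit_count + 1);
      bit_count  <= bit_count + 1;
    end if;
  end if;

when STOP =>
  if bit_done then                  -- a full stop-bit time has elapsed
    state <= IDLE;                  -- only now report transmitter ready
  end if;
```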

Or am I missing something?
Chris
 
On Dec 13, 12:31 pm, Andy <jonesa...@comcast.net> wrote:
On Dec 11, 10:44 am, Brian Drummond <brian_drumm...@btconnect.com
wrote:



What I mean is, if I want a down counter that uses the carry out to
give me an "end of count" flag, why can't I get that in a simple and
clear manner?  

Here the issue appears to be how to get at the carry out of a counter....
the syntax of integer arithmetic doesn't provide an easy way to do that by
default, in any language I know (other than assembler for pre-RISC CPUs, with
their flag registers).

It seems to me that you have two choices ...
(1) implement an n-bit counter, and augment it in some way to recreate the carry
out (unfortunately you are fighting the synthesis tool in the process)

(2) implement an n+1 bit counter, with the excess bit assigned to the carry, and
trust the synthesis tool to eliminate the excess flip-flop at the optimisation
stage...

I am willing to guess the second approach would be simpler.

I have found the 1st approach far simpler, by using a natural subtype
for the counter. Then (count - 1 < 0) is the carry out for a down
counter. Similarly, (count + 1 > 2**n-1) is the carry out for an n bit
up counter. No fighting required.
I won't argue that; both of these will utilize the carry out of an
adder. But that may or may not be the same adder I am using to update
count with. I have looked at the logic produced and at one point
found two apparently identical adder chains, one of which had
all outputs unconnected other than the carry out of the top, and the
other used the sum outputs to feed the register with the top carry
ignored. Sure, there may have been something about my code that
prevented these two adders from being merged, but I couldn't figure out
what it was.

I see a number of posts that don't really get what I am trying to
say. I'm not arguing that you can't do what you want in current
HDLs. I am not saying I want to use something similar to assembly
language to provide the maximum optimization possible. I am saying I
find it not infrequent that HDL gives nothing close to optimal results
because the coding style required was not obvious. I'm saying that it
seems like it should be easier to get the sort of simple structures
that are commonly used without jumping through hoops.

Heck, reading the Lattice HDL user guide (not sure if that is the
actual name or not) they say you shouldn't try to infer memory at all,
instead you should instantiate it! Memory seems like it should be so
easy to infer...
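For reference, the commonly cited inferred-RAM pattern is short. A hedged sketch (generics and names assumed; exact style requirements vary by synthesis tool and device family):

```vhdl
-- A widely used inferred block-RAM template (hedged: some tools are
-- pickier than others about the exact form). Names are illustrative.
library ieee;
use ieee.std_logic_1164.all;

entity inferred_ram is
  generic (
    DEPTH : positive := 256;
    WIDTH : positive := 8
  );
  port (
    clk   : in  std_logic;
    we    : in  std_logic;
    waddr : in  natural range 0 to DEPTH - 1;
    raddr : in  natural range 0 to DEPTH - 1;
    wdata : in  std_logic_vector(WIDTH - 1 downto 0);
    rdata : out std_logic_vector(WIDTH - 1 downto 0)
  );
end entity inferred_ram;

architecture rtl of inferred_ram is
  type ram_t is array (0 to DEPTH - 1)
    of std_logic_vector(WIDTH - 1 downto 0);
  signal ram : ram_t;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        ram(waddr) <= wdata;
      end if;
      rdata <= ram(raddr);  -- registered read, for block-RAM mapping
    end if;
  end process;
end architecture rtl;
```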

I don't know Verilog that well, but I do know VHDL is a pig in many
ways. It just seems like it could have been much simpler rather than
being such a pie-in-the-sky language.

Rick
 
On Dec 13, 12:06 pm, d_s_klein <d_s_kl...@yahoo.com> wrote:
On Dec 10, 8:25 pm, rickman <gnu...@gmail.com> wrote:

Sometimes I wonder if HDLs are really the right way to go.  I mainly
use VHDL which we all know is a pig in many ways with its verbosity
and arcane type conversion gyrations.  But what bothers me most of all
is that I have to learn how to tell the tools in their "language" how
to construct the efficient logic I can picture in my mind. By
"language" I don't mean the HDL language, but actually the specifics
of a given inference tool.

    <SNIP>

Rick

Back in the 8086 days, I had to do the same thing with compilers.  I
spent a fair amount of time learning how the code generation phase
worked so I could get the tools to work properly.  I remember a "brand-
name" 'C' compiler very carefully generating code to keep the loop
control variable in the CX register, then at the end of the loop
moving CX to AX and adding minus-one.  (For those that don't program
86's in assembly, the CX register is a special purpose register for
the 'LOOP' instruction.)  Now that processors are real fast, and
memory is very cheap, I don't worry so much about efficient code.

I would draw a parallel to the frustrations you are having with HDLs.
Another parallel I would draw is that when I wanted 'very fast tight
code' I would code certain modules in assembly, and link them with 'C'
routines.  When the synthesizer just won't get it right, I draw it a
picture.  (Which is one advantage the Acme-Brand tool has over the
Brand-X tool)

Yeah, it's not portable, and it isn't "right".  But I'm getting
product out the door.

RK.
I never said I don't get a working design. I just feel that HDLs are
more complex than useful.

BTW, I don't agree with the analogy between HDLs and compilers. For
one, you are considering the case of PCs where speed and memory are
virtually unlimited. My apps tend to be more like coding for a PIC
with 8K Flash and 1K RAM. A perfect target for a Forth cross-
compiler, but likely a poor target for a C compiler.

Where is the Forth equivalent for hardware design?

Rick
 
On Dec 14, 5:32 am, rickman <gnu...@gmail.com> wrote:
On Dec 13, 12:31 pm, Andy <jonesa...@comcast.net> wrote:



On Dec 11, 10:44 am, Brian Drummond <brian_drumm...@btconnect.com
wrote:

What I mean is, if I want a down counter that uses the carry out to
give me an "end of count" flag, why can't I get that in a simple and
clear manner?  

Here the issue appears to be how to get at the carry out of a counter....
the syntax of integer arithmetic doesn't provide an easy way to do that by
default, in any language I know (other than assembler for pre-RISC CPUs, with
their flag registers).

It seems to me that you have two choices ...
(1) implement an n-bit counter, and augment it in some way to recreate the carry
out (unfortunately you are fighting the synthesis tool in the process)

(2) implement an n+1 bit counter, with the excess bit assigned to the carry, and
trust the synthesis tool to eliminate the excess flip-flop at the optimisation
stage...

I am willing to guess the second approach would be simpler.

I have found the 1st approach far simpler, by using a natural subtype
for the counter. Then (count - 1 < 0) is the carry out for a down
counter. Similarly, (count + 1 > 2**n-1) is the carry out for an n bit
up counter. No fighting required.

I won't argue that, both of these will utilize the carry out of an
adder.  But that may or may not be the same adder I am using to update
count with.  I have looked at the logic produced and at some time
found two, apparently identical adder chains used, one of which had
all outputs unconnected other than the carry out of the top and the
other used the sum outputs to feed the register with the top carry
ignored.  Sure, there may have been something about my code that
prevented these two adders being merged, but I couldn't figure out
what it was.

I see a number of posts that don't really get what I am trying to
say.  I'm not arguing that you can't do what you want in current
HDLs.  I am not saying I want to use something similar to assembly
language to provide the maximum optimization possible.  I am saying I
find it not infrequent that HDL gives nothing close to optimal results
because the coding style required was not obvious.  I'm saying that it
seems like it should be easier to get the sort of simple structures
that are commonly used without jumping through hoops.

Heck, reading the Lattice HDL user guide (not sure if that is the
actual name or not) they say you shouldn't try to infer memory at all,
instead you should instantiate it!  Memory seems like it should be so
easy to infer...
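For reference, the template most synthesis tools document for an inferred single-port synchronous RAM is short. This is a generic sketch (entity name, widths, and port names are illustrative); whether a particular tool, Lattice's included, reliably maps it to block RAM is the question at hand:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity ram_sp is
  generic (AW : positive := 10; DW : positive := 8);  -- illustrative sizes
  port (
    clk  : in  std_logic;
    we   : in  std_logic;
    addr : in  unsigned(AW - 1 downto 0);
    din  : in  std_logic_vector(DW - 1 downto 0);
    dout : out std_logic_vector(DW - 1 downto 0));
end entity;

architecture rtl of ram_sp is
  type ram_t is array (0 to 2**AW - 1)
    of std_logic_vector(DW - 1 downto 0);
  signal ram : ram_t;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        ram(to_integer(addr)) <= din;
      end if;
      dout <= ram(to_integer(addr));  -- registered read: block-RAM friendly
    end if;
  end process;
end architecture;
```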

I don't know Verilog that well, but I do know VHDL is a pig in many
ways.  It just seems like it could have been much simpler rather than
being such a pie-in-the-sky language.

Rick
From all this reading, I'm guessing it's not a problem with the language
you have; it's more the synthesizers.

So my two thoughts:

1. Try AHDL - it's pretty explicit (but you'll be stuck with Altera).
2. Instead of getting pissed off with the tools and pretending it's an
HDL problem, how about raising the issue with the vendors and asking
them why they've done it the way they have.

Personally, I have never had too much of a problem with the tools. The
firmware works as I intend. I'm not usually interested in the detail
because it works, it ships, the customer pays and we make a profit. I
don't care if a counter has used efficient carry-out logic or not - it
works, and that's all the customer cares about. When it's working, or
when I have fit problems, I can then go into the finer detail.
 
Andy wrote:
On Dec 12, 3:15 am, rickman <gnu...@gmail.com> wrote:
I'm not that worried about the
particulars of this case. I am just lamenting that everything I do in
an HDL is about describing the behavior of the logic and not actually
describing the logic itself. I am an old school hardware designer. I
cut my teeth on logic in TO packages, hand soldering the wire leads.
I still think in terms of the hardware, not the software to describe
the hardware. Many of my designs need to be efficient in terms of
hardware used and so I have to waste time learning how to get what I
want from the tools. Sometimes I just get tired of having to work
around the tools rather than with them.

I seem to recall a similar argument when assemblers gave way to higher
level language compilers...

This change in digital hardware design is not unlike the change from
one-man furniture shops to furniture factories. The craftsmen of the
one-man shops painstakingly treated every detail as critical to their
product: a chair. And the result was an exquisite piece of furniture,
albeit at a very high price, and very low volume (unless you hired a
lot of one man shops at the same time).

Circuit designers are no different (being one myself, dating back to
those "I can do that function in one less part" days gone by). But the
target has changed. We no longer need a chair, we need a stadium full
of them. And we need the elevators, climate control, fire suppression,
lighting, and all the other support systems, to go along with them.

Perhaps we should take a step back, and look at what we really need
(hint: a place for a lot of people to watch an event, while seated
most of the time). Now I can optimize my stadium to recognize that all
of my seats don't need to be finely crafted pieces of furniture. But I
don't know that until I focus on the requirements: "What must my
project do?" So, instead of finding a way to describe the project as
a collection of specific chairs, elevators and fire extinguishers, we
need to describe it as a set of desired behaviors, and then, through
some process (hopefully semi-automated), convert that description into
an optimized design for the stadium. Could the craftsman and his tools
have done that?

What do you want from the tools, a collection of exquisitely crafted
chairs, or an efficient stadium?
This analogy suggests the need for a compromise, which I think isn't
there.

I don't see a case where the schematic entry craftsman can
realistically hope to beat the guy with the HDL tools. For example,
for smallish designs, it can be shown that logic synthesis can
generate a solution close to the optimum, *regardless* of the quality
of the starting point. The craftsman can draw any pictures he wants,
even if the tool guy writes the worst possible code, the synthesis
result will still be as good or better.

Of course, for realistic, larger designs, the structure of the
input code becomes more and more significant. But thanks to
powerful heuristics, local optimization algorithms, and the
ability to recognize higher level structures, this is a
gradual process. In contrast, the craftsman's ability to
cope with complexity quickly deteriorates beyond a certain
point. As a result, he has to rely on logic-wise inefficient
strategies, such as excessive hierarchy.

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 
rickman wrote:
On Dec 13, 12:31 pm, Andy <jonesa...@comcast.net> wrote:
On Dec 11, 10:44 am, Brian Drummond <brian_drumm...@btconnect.com
wrote:



What I mean is, if I want a down counter that uses the carry out to
give me an "end of count" flag, why can't I get that in a simple and
clear manner?
Here the issue appears to be how to get at the carry out of a counter...
the syntax of integer arithmetic doesn't provide an easy way to do that by
default, in any language I know (other than assembler for pre-RISC CPUs, with
their flag registers).
It seems to me that you have two choices ...
(1) implement an n-bit counter, and augment it in some way to recreate the carry
out (unfortunately you are fighting the synthesis tool in the process)
(2) implement an n+1 bit counter, with the excess bit assigned to the carry, and
trust the synthesis tool to eliminate the excess flip-flop at the optimisation
stage...
I am willing to guess the second approach would be simpler.
I have found the 1st approach far simpler, by using a natural subtype
for the counter. Then (count - 1 < 0) is the carry out for a down
counter. Similarly, (count + 1 > 2**n-1) is the carry out for an n bit
up counter. No fighting required.

I won't argue that; both of these will utilize the carry out of an
adder. But that may or may not be the same adder I am using to update
count with. I have looked at the logic produced and at one time
found two apparently identical adder chains: one had all outputs
unconnected other than the carry out of the top, and the other used
the sum outputs to feed the register with the top carry ignored.
Sure, there may have been something about my code that prevented
those two adders from being merged, but I couldn't figure out what it
was.

I see a number of posts that don't really get what I am trying to
say.
Probably because many people don't see what you say you are seeing,
so they must think you don't have a case.

I'm not arguing that you can't do what you want in current
HDLs. I am not saying I want to use something similar to assembly
language to provide the maximum optimization possible. I am saying I
find it not infrequent that HDL gives nothing close to optimal results
because the coding style required was not obvious. I'm saying that it
seems like it should be easier to get the sort of simple structures
that are commonly used without jumping through hoops.

Heck, reading the Lattice HDL user guide (not sure if that is the
actual name or not) they say you shouldn't try to infer memory at all,
instead you should instantiate it! Memory seems like it should be so
easy to infer...

I don't know Verilog that well, but I do know VHDL is a pig in many
ways. It just seems like it could have been much simpler rather than
being such a pie-in-the-sky language.
I think Verilog will suit you better as a language; you really should
consider switching one of these days. However, there is no reason why
it would help you with the issues that you say you are seeing here.

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 
On Dec 14, 5:17 am, Jan Decaluwe <j...@jandecaluwe.com> wrote:
Andy wrote:
On Dec 12, 3:15 am, rickman <gnu...@gmail.com> wrote:
I'm not that worried about the
particulars of this case.  I am just lamenting that everything I do in
an HDL is about describing the behavior of the logic and not actually
describing the logic itself.  I am an old school hardware designer.  I
cut my teeth on logic in TO packages, hand soldering the wire leads.
I still think in terms of the hardware, not the software to describe
the hardware.  Many of my designs need to be efficient in terms of
hardware used and so I have to waste time learning how to get what I
want from the tools.  Sometimes I just get tired of having to work
around the tools rather than with them.

I seem to recall a similar argument when assemblers gave way to higher
level language compilers...

This change in digital hardware design is not unlike the change from
one-man furniture shops to furniture factories. The craftsmen of the
one-man shops painstakingly treated every detail as critical to their
product: a chair. And the result was an exquisite piece of furniture,
albeit at a very high price, and very low volume (unless you hired a
lot of one man shops at the same time).

Circuit designers are no different (being one myself, dating back to
those "I can do that function in one less part" days gone by). But the
target has changed. We no longer need a chair, we need a stadium full
of them. And we need the elevators, climate control, fire suppression,
lighting, and all the other support systems, to go along with them.

Perhaps we should take a step back, and look at what we really need
(hint: a place for a lot of people to watch an event, while seated
most of the time). Now I can optimize my stadium to recognize that all
of my seats don't need to be finely crafted pieces of furniture. But I
don't know that until I focus on the requirements: "What must my
project do?"  So, instead of finding a way to describe the project as
a collection of specific chairs, elevators and fire extinguishers, we
need to describe it as a set of desired behaviors, and then, through
some process (hopefully semi-automated), convert that description into
an optimized design for the stadium. Could the craftsman and his tools
have done that?

What do you want from the tools, a collection of exquisitely crafted
chairs, or an efficient stadium?

This analogy suggests the need for a compromise, which I think isn't
there.

I don't see a case where the schematic entry craftsman can
realistically hope to beat the guy with the HDL tools. For example,
for smallish designs, it can be shown that logic synthesis can
generate a solution close to the optimum, *regardless* of the quality
of the starting point. The craftsman can draw any pictures he wants,
even if the tool guy writes the worst possible code, the synthesis
result will still be as good or better.

Of course, for realistic, larger designs, the structure of the
input code becomes more and more significant. But thanks to
powerful heuristics, local optimization algorithms, and the
ability to recognize higher level structures, this is a
gradual process. In contrast, the craftsman's ability to
cope with complexity quickly deteriorates beyond a certain
point. As a result, he has to rely on logic-wise inefficient
strategies, such as excessive hierarchy.

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
I've seen too many examples where a bit more performance can be
obtained by either tweaking the code, or "hard-coding" the solution.
They are getting fewer and farther between, but they are still there.
My point was that the extra performance is seldom, but not never,
needed, and on a larger scale, letting the synthesis tool do the heavy
lifting results in a better overall design MOST of the time.

There ain't no 100% solutions. If you try to hard-code 100%, you
lose; if you try to let the synthesis tool do 100%, you lose.
Compromise is necessary.

Andy
 
