On Jun 23, 12:16 pm, Andy <jonesa...@comcast.net> wrote:
It's really interesting to hear all of the intelligent comments in
this thread, and I can agree to a certain extent with most of the
viewpoints. I think the root issue being debated here is how best to
implement a design - with more emphasis on "hardware-ish pure
structure" or more on the "software-ish procedural approach". They are
certainly closely related, but are definitely not equivalent. The "old-
timers" who grew up with boards filled with 74LS logic and 22V10's
will likely feel more comfortable with the hardware-ish approach -
it's just an extension of hierarchical schematics. The more
intellectually adventurous old-timers will then discover functions and
procedures and use these to neaten up their code. Those with a strong
software background will often see everything as a procedure and not
intuitively see how the entity/architecture/component structuring
would be the right approach in part of the design.
There are some places where E/A structuring is quite simply the right
thing. For example, an FPGA which requires N identical (or nearly
identical, modulo some generics) modules - DRAM controllers, gigabit
transceiver modules, register files, what have you. If an FPGA vendor
tells me they have a big honking hardware block that does function X
for me, for free, then the right thing is to plonk it down as a
component instance. No matter how elegant the academic theory behind
it might be, I'll never try to infer a SONET controller the way I
infer a block RAM. (Even a dual-port RAM doesn't infer
right half the time - anyone who's done it will probably agree.) No
amount of procedural finessing can work around this.
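To make the first case concrete, a minimal sketch of the N-identical-modules style might look like the following; the entity, generic, and signal names (dram_ctrl, CHANNEL_ID, cmd_bus and friends) are invented for illustration, and NUM_CHANNELS is assumed to be a generic or constant already in scope:

  gen_channels : for i in 0 to NUM_CHANNELS-1 generate
    u_dram_ctrl : entity work.dram_ctrl
      generic map (
        CHANNEL_ID => i              -- per-instance variation via a generic
      )
      port map (
        clk   => clk,
        cmd   => cmd_bus(i),         -- one slice of an array-typed signal
        rdata => rdata_bus(i)
      );
  end generate gen_channels;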
But there are other places where the old-time hardware guy is missing
the boat by slavishly implementing every little piece of his design
using the explicit combinatorial logic and flops he sees in his head
as he considers what the schematic for his circuit ought to look
like. The CRC is a nice example - with the right procedures defined,
the circuit using the procedures is short, concise, understandable,
and easy to modify. The manually written mess of gates, or even the
inline instantiation of a separate CRC-32 engine, can be much longer
and less obvious, and can interrupt the "flow" of your description.
These can
become critical things as the complexity of your design grows.
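As a sketch of what the procedural CRC style looks like - this assumes the usual bit-serial CRC-32 update (polynomial 0x04C11DB7, message bits taken MSB-first), and the procedure and signal names are invented, not taken from any particular library:

  -- declared in a package, so any clocked process can reuse it
  procedure crc32_update (
    variable crc  : inout std_logic_vector(31 downto 0);
    constant data : in    std_logic_vector) is
    constant POLY : std_logic_vector(31 downto 0) := x"04C11DB7";
    variable fb   : std_logic;
  begin
    for i in data'range loop            -- one message bit per iteration
      fb  := crc(31) xor data(i);
      crc := crc(30 downto 0) & '0';    -- shift left by one
      if fb = '1' then
        crc := crc xor POLY;
      end if;
    end loop;
  end procedure;

Inside the clocked process the "flow" of the description stays intact:

  if byte_valid = '1' then
    crc32_update(crc_reg, rx_byte);     -- crc_reg is a process variable
  end if;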
Both styles need to be written with the right amount of room to grow.
This is basic good design practice, and it's as much an art as a
science. Room to grow, and good engineering, encompasses using the
right parameterization (generics), the right default values, the right
use of procedural style and instantiation at the right layers, and the
right interfaces, among all of the other choices related to the
implementation of the design itself.
I've seen guys who implement every interface as a std_logic_vector
that just grows and grows as needed. I've also seen their code: it's
an endless sea of meaningless lines like
  if (interface_a(16) and not interface_b(14)) then  -- this means the event triggered
    interface_c(14) := '1';                          -- so clear the reset
The use of records as a conduit (I like that analogy) is one good
technique to manage the OP's original problem. It has other VHDL-
imposed drawbacks though - there are all sorts of rules about
partially associated records in interfaces, resolved vs non-resolved
record types, multiple drivers, interface element modes, and so on.
It's like a poor man's version of parametric polymorphism (not the
same as the ad-hoc polymorphism we get when we overload operators).
It's polymorphism because if your conduit just goes between two
levels, you can make it carry the right type by editing your record
definition in the package. It's the poor man's version because there
is no way of automatically inferring the type of the pipe -- if an entity
does nothing more than carry a unidirectional signal from the outer
ports to an inner component, why can't I just say "connect the two",
rather than spelling it all out, manually, over and over? This sort
of polymorphism, backed by type inference, is a nice feature in
languages like Standard ML and Haskell, which use well-known and
well-understood mechanisms to make
this happen nicely. These solutions are also guaranteed to be type-
safe, which is the reason we have so much explicit type specification
in VHDL to begin with! Unfortunately, I can see this would be
difficult to implement with all of the extra quirks of the language
(port modes, multiple driver rules, resolution, etc, not to mention
possible interactions with conventional VHDL name resolution).
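For what it's worth, the record-as-conduit idea in its simplest form looks something like the sketch below (the package, type, and field names are invented); the point is that a new field gets added in one place, the package, instead of in every port list along the way:

  library ieee;
  use ieee.std_logic_1164.all;

  package conduit_pkg is
    type ctrl_conduit_t is record
      event_trig : std_logic;
      clear_rst  : std_logic;
      addr       : std_logic_vector(7 downto 0);
    end record;
  end package conduit_pkg;

  -- an intermediate level of hierarchy needs only one port for the whole
  -- bundle, and its port list never changes when a field is added
  library ieee;
  use ieee.std_logic_1164.all;
  use work.conduit_pkg.all;

  entity middle is
    port (
      clk  : in std_logic;
      ctrl : in ctrl_conduit_t
    );
  end entity middle;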
What does this all mean? VHDL is quirky, obviously "designed by
committee", and large. But it's also flexible, and you can generally
do most of what you need, once you get your head around it. At the end
of the day it's just a tool, and a tool is no better than the
person wielding it.
(Going off to the woodpile to get my axe... now THERE's a tool.)
- Kenn
On Jun 23, 10:32 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com> wrote:
On Mon, 23 Jun 2008 06:47:40 -0700 (PDT), Andy wrote:
Am I missing something? Procedural interfaces still require data to be
passed in and out, but if the needed data was not passed, the same
problem exists.
Yes, but you can add new procedures without breaking code that
uses the old set of procedures. That ain't true for ports. [*]
Procedures also compose better than ports: if I have a procedure
that *nearly* does what I want, I can usually wrap it in another
procedure that adjusts things to make it do *exactly* what I want;
and again the old interface isn't broken, and the existing users
of my old interface are not disrupted by the new extensions.
I know it's never really quite that simple, but...
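A minimal sketch of that wrapping idea, with invented names (both procedures would sit in a package or some other declarative region):

  -- the existing procedure, and all of its existing callers, stay untouched
  procedure send_word (
    signal   tx_data : out std_logic_vector(15 downto 0);
    constant value   : in  std_logic_vector(15 downto 0)) is
  begin
    tx_data <= value;
  end procedure;

  -- a wrapper adjusts the interface without breaking the old one
  procedure send_byte (
    signal   tx_data : out std_logic_vector(15 downto 0);
    constant value   : in  std_logic_vector(7 downto 0)) is
  begin
    send_word(tx_data, x"00" & value);  -- zero-extend, then reuse
  end procedure;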
I guess I'm still not following. Any procedure that can be wrapped by
another procedure to "fix things" can also be represented by an entity
wrapped by another entity/architecture to do the same thing. Granted,
to do it in one source file, you'd have to have more than one entity/
architecture in the file, but that is a matter of form, not imposed by
the language or tools. If the ports have to be changed, they have to
be changed either way (entity or procedure). A locally declared
procedure inherits visibility of the objects around it, which can
avoid ports in the first place, but that also precludes reuse of that
procedure, since it is out of scope everywhere else and always
operates on the same local objects. Without reuse, the need to keep
it unmodified for fear of breaking another instance (call) of it is
also eliminated.
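To be concrete, the entity-level equivalent of such a wrapper is just this sort of thing - names invented, with work.send_word_unit standing in for the pre-existing, unchanged unit:

  library ieee;
  use ieee.std_logic_1164.all;

  entity send_byte_wrap is
    port (
      clk     : in  std_logic;
      value   : in  std_logic_vector(7 downto 0);
      tx_data : out std_logic_vector(15 downto 0)
    );
  end entity;

  architecture wrap of send_byte_wrap is
    signal value_ext : std_logic_vector(15 downto 0);
  begin
    value_ext <= x"00" & value;           -- adapt the narrower interface
    u_orig : entity work.send_word_unit   -- delegate to the original unit
      port map (
        clk     => clk,
        value   => value_ext,
        tx_data => tx_data
      );
  end architecture;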
The original post described a need to easily plumb new data through
multiple levels of hierarchy in a design. Short of making
a whole project (ASIC or FPGA) one big entity/architecture with nested
local scopes for various procedures (and one huge source file), I
don't see how using procedures solves his problem. In fact, block
statements could be used to do the same thing without procedures, but
they will all necessarily be in the same source file.
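For reference, the block-statement version of a nested scope looks roughly like this (labels and signals invented; start, busy and done_a are assumed to be architecture-level signals), and it does indeed all live inside one architecture:

  stage_a : block
    signal req_local : std_logic;       -- visible only inside this block
  begin
    req_local <= start and not busy;
    done_a    <= req_local;
  end block stage_a;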
Using procedures on the scale being proposed (eliminating all but a
single top-level entity/architecture) is also unworkable because
current synthesis tools do not allow a subprogram to span time (i.e.
to contain a wait statement). So every procedure must live in the
same process. This is likely to complicate managing the order of
operations, which, with variables, determines register usage.
I actually like the use of procedures on a small scale (e.g. within
a modestly sized process, to clarify and separate distinct pieces of
functionality). But there is a practical limit to the scope of their
application in synchronous, synthesizable code.
Andy