VHDL refactoring tools

On Jun 23, 12:16 pm, Andy <jonesa...@comcast.net> wrote:
On Jun 23, 10:32 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com> wrote:

On Mon, 23 Jun 2008 06:47:40 -0700 (PDT), Andy wrote:
Am I missing something? Procedural interfaces still require data to be
passed in and out, but if the needed data was not passed, the same
problem exists.

Yes, but you can add new procedures without breaking code that
uses the old set of procedures. That ain't true for ports. [*]
Procedures also compose better than ports: if I have a procedure
that *nearly* does what I want, I can usually wrap it in another
procedure that adjusts things to make it do *exactly* what I want;
and again the old interface isn't broken, and the existing users
of my old interface are not disrupted by the new extensions.

I know it's never really quite that simple, but...
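That kind of wrapping might be sketched like this (a hypothetical example; the procedure names and parameters are invented for illustration):

```vhdl
-- Hypothetical sketch: the old procedure keeps its interface, so
-- existing callers still compile; the new wrapper adapts it.
procedure send_byte (                  -- old interface, unchanged
  signal tx : out std_logic;
  data      : in  std_logic_vector(7 downto 0)
);

procedure send_word (                  -- new wrapper around it
  signal tx : out std_logic;
  data      : in  std_logic_vector(31 downto 0)
) is
begin
  for i in 3 downto 0 loop             -- reuse the old procedure
    send_byte(tx, data(8*i + 7 downto 8*i));
  end loop;
end procedure;
```

Code written against send_byte never sees the extension; only new code calls send_word.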

I guess I'm still not following. Any procedure that can be wrapped by
another procedure to "fix things" can also be represented by an entity
wrapped by another entity/architecture to do the same thing. Granted,
to do it in one source file, you'd have to have more than one entity/
architecture in the file, but that is a matter of form, not imposed by
the language or tools. If the ports have to be changed, they have to
be changed either way (entity or procedure). The procedure allows
local inheritance, which can avoid ports in the first place, but that
also obviates the possibility for reuse of that procedure, since it is
out of scope everywhere else, and always operates on the same local
objects. Without reuse, the need to keep it unmodified for fear of
breaking another instance (call) of it is also eliminated.

The original post exhibited a need for being able to easily plumb new
data through multiple levels of hierarchy in a design. Short of making
a whole project (ASIC or FPGA) one big entity/architecture with nested
local scopes for various procedures (and one huge source file), I
don't see how using procedures solves his problem. In fact, block
statements could be used to do the same thing without procedures, but
they will all necessarily be in the same source file.

Using procedures on the scale being proposed (eliminating all but a
single top level entity/architecture) is also unworkable, given
current synthesis tools' inability to allow a subprogram to span time
(include a wait statement). So every procedure must be in the same
process. This is likely to complicate managing the order of
operations, which, with variables, implies register usage.

I actually like the use of procedures on a small scale (i.e. within a
modestly sized process for clarification/separation of distinct
functionality). But there is a practical limit to the scope of their
application in synchronous, synthesizable code.

Andy
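The small-scale use Andy describes might look like this (a hypothetical sketch; clk, rst and q are invented signals, and ieee.numeric_std is assumed):

```vhdl
-- Hypothetical sketch: small procedures local to one clocked process,
-- separating distinct functionality. They operate directly on the
-- process's own variables, so no ports or parameters are needed --
-- but they are also invisible (not reusable) outside this process.
process (clk)
  variable count : unsigned(7 downto 0) := (others => '0');

  procedure clear is
  begin
    count := (others => '0');
  end procedure;

  procedure advance is
  begin
    count := count + 1;
  end procedure;
begin
  if rising_edge(clk) then
    if rst = '1' then
      clear;
    else
      advance;
    end if;
    q <= std_logic_vector(count);
  end if;
end process;
```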

It's really interesting to hear all of the intelligent comments in
this thread, and I can agree to a certain extent with most of the
viewpoints. I think the root issue being debated here is how best to
implement a design - using a bigger emphasis on "hardware-ish pure
structure" or more on the "software-ish procedural approach". They are
certainly closely related, but are definitely not equivalent. The "old-
timers" who grew up with boards filled with 74LS logic and 22V10's
will likely feel more comfortable with the hardware-ish approach -
it's just an extension of hierarchical schematics. The more
intellectually adventurous old-timers will then discover functions and
procedures and use these to neaten up their code. Those with a strong
software background will often see everything as a procedure and not
intuitively see how the entity/architecture/component structuring
would be the right approach in part of the design.

There are some places where E/A structuring is plainly the right
thing. For example, an FPGA which requires N identical (or nearly
identical, modulo some generics) modules - DRAM controllers, gigabit
transceiver modules, register files, what have you. If an FPGA vendor
tells me they have a big honking hardware block that does function X
for me, for free, then the right thing is to plonk it down as a
component instance. No matter how elegant the academic theory behind
it might be, I'll never try to infer a SONET controller the way
I infer a block RAM. (Even a dual-port RAM doesn't infer
right half the time - anyone who's done it will probably agree.) No
amount of procedural finessing can work around this.

But there are other places where the old-time hardware guy is missing
the boat by slavishly implementing every little piece of his design
using the explicit combinatorial logic and flops he sees in his head
as he considers what the schematic for his circuit ought to look
like. The CRC is a nice example - with the right procedures defined,
the circuit using the procedures is short, concise, understandable,
and easy to modify. The manually written mess of gates, or even the
inline instantiation of a separate CRC-32 engine can be much longer,
less obvious, and interrupt the "flow" of your description. These can
become critical things as the complexity of your design grows.
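For instance (a hypothetical sketch, names invented), one bit of a CRC-32 update -- the "shift with a twist" Mike mentioned -- fits in a short procedure, and the process that calls it stays readable:

```vhdl
-- Hypothetical sketch: one bit of a CRC-32 update, MSB-first,
-- using the Ethernet polynomial 0x04C11DB7.
procedure crc32_bit (
  crc : inout std_logic_vector(31 downto 0);
  d   : in    std_logic
) is
  variable fb : std_logic;
begin
  fb  := crc(31) xor d;                 -- feedback bit
  crc := crc(30 downto 0) & '0';        -- the shift
  if fb = '1' then
    crc := crc xor x"04C11DB7";         -- the twist
  end if;
end procedure;
```

Calling crc32_bit in a loop over a byte or word keeps the calling code one line per step, instead of a page of hand-written XOR equations.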

Both styles need to be written with the right amount of room to grow.
This is basic good design practice, and it's as much an art as a
science. Room to grow, and good engineering, encompasses using the
right parameterization (generics), the right default values, the right
use of procedural style and instantiation at the right layers, and the
right interfaces, among all of the other choices related to the
implementation of the design itself.

I've seen guys who implement every interface as a std_logic_vector
that just grows, and grows, and grows, as needed. I've also seen their
code: it's an endless sea of meaningless code like

if interface_a(16) = '1' and interface_b(14) = '0' then -- this means the event triggered
  interface_c(14) := '1'; -- so clear the reset
end if;

The use of records as a conduit (I like that analogy) is one good
technique to manage the OP's original problem. It has other VHDL-
imposed drawbacks though - there are all sorts of rules about
partially associated records in interfaces, resolved vs. non-resolved
record types, multiple drivers, interface element modes, and so on.
It's like a poor man's version of type polymorphism (not the same as
the ad-hoc polymorphism we get from overloading operators). It's
type polymorphism because if your conduit just goes between two
levels, you can make it carry the right type by editing your record
definition in the package. It's poor man's because there is no way of
automatically inferring the type of the pipe -- if an entity
does nothing more than carry a unidirectional signal from the outer
ports to an inner component, why can't I just say "connect the two",
rather than spelling it all out, manually, over and over? This feature
(type polymorphism) works nicely in languages like Standard ML and
Haskell, which use well-known and well-understood mechanisms to make
it happen. These solutions are also guaranteed to be type-
safe, which is the reason we have so much explicit type specification
in VHDL to begin with! Unfortunately, I can see this would be
difficult to implement with all of the extra quirks of the language
(port modes, multiple driver rules, resolution, etc, not to mention
possible interactions with conventional VHDL name resolution).

What does this all mean? VHDL is quirky, obviously "designed by
committee", and large. But it's also flexible, and you can generally
do most of what you need, once you get your head around it. At the end
of the day it's just a tool, and the tool is no better than the
person wielding it.

(Going off to the woodpile to get my axe... now THERE's a tool :)

- Kenn
 
Andy wrote:

Sorry, I was too busy rushing the punter to see the ball had already
been punted! I could get a penalty for that, a personal foul no
less...
I should have faked a hand-off ...

I thought you and Jonathan were proposing expanding that to cover the
hierarchy of the whole design.
A good PhD topic for someone.
Some procedural description/synthesis for instances seems inevitable.
But it's a hard problem with no big market evident.
Maybe the Gates foundation would sponsor the work :)

MyHDL sort of does this as a Verilog generator.
An RTL-viewer does the *inverse* algorithm.
And emacs vhdl-compose-wire-components covers obvious wires.

We seem to go off on style about once a year,
so I guess it's back to the salt mines until next time...

-- Mike Treseler
 
On Jun 23, 12:30 pm, Mike Treseler <mike_trese...@comcast.net> wrote:
Andy wrote:
local inheritance, which can avoid ports in the first place, but that
also obviates the possibility for reuse of that procedure, since it is
out of scope everywhere else, and always operates on the same local
objects.

I use procedures for clarity and speed of coding.
Reuse is sometimes a nice side effect.

The original post exhibited a need for being able to easily plumb new
data through multiple levels of hierarchy in a design. Short of making
a whole project (ASIC or FPGA) one big entity/architecture with nested
local scopes for various procedures (and one huge source file), I
don't see how using procedures solves his problem.

I think both Jonathan and I punted that part of the problem.
Your suggestion of structured ports at least addresses Chris's problem.

So every procedure must be in the same
process. This is likely to complicate managing the order of
operations, which with variables, implies register usage.

Yes, I use one process per entity.
Yes, the variables I declare become registers.
The order of operations is a single thread per tick.
Or do you mean the order of piped registers?
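For example (a hypothetical minimal entity in that one-process style; the names are invented):

```vhdl
-- Hypothetical sketch of one process per entity: a single clocked
-- process, with a variable that infers a register because it is
-- read and written once per clock tick.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity counter is
  port (
    clk, rst : in  std_logic;
    q        : out std_logic_vector(7 downto 0)
  );
end entity;

architecture rtl of counter is
begin
  process (clk)
    variable count : unsigned(7 downto 0);
  begin
    if rising_edge(clk) then
      if rst = '1' then
        count := (others => '0');
      else
        count := count + 1;   -- one thread of operations per tick
      end if;
      q <= std_logic_vector(count);
    end if;
  end process;
end architecture;
```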

I actually like the use of procedures on a small scale (i.e. within a
modestly sized process for clarification/separation of distinct
functionality). But there is a practical limit to the scope of their
application in synchronous, synthesizable code.

There is a practical limit to every style.

-- Mike Treseler
Sorry, I was too busy rushing the punter to see the ball had already
been punted! I could get a penalty for that, a personal foul no
less...

I understand your use of procedures and generally accept the idea. I
thought you and Jonathan were proposing expanding that to cover the
hierarchy of the whole design.

Andy
 
On Mon, 23 Jun 2008 06:47:40 -0700 (PDT), Andy <jonesandy@comcast.net>
wrote:

On Jun 21, 11:41 am, Mike Treseler <mtrese...@gmail.com> wrote:
I agree with Jonathan about procedural rather than
structural decomposition. The base function of
a crc check is just a shift with a twist, and
adding procedural layers is something the vhdl
language is good for. VHDL synthesis works better
than most designers believe.

-- Mike Treseler

Am I missing something? Procedural interfaces still require data to be
passed in and out, but if the needed data was not passed, the same
problem exists.

Actually, one
language (ada, from which much of vhdl was borrowed) allows a
procedure to be locally declared and externally implemented in a
separate file, but I don't know if that eliminates local scope
advantages.
If you mean the "separate" mechanism, that is defined to maintain the
"local scope" at the declaration, wherever the "separate" implementation
happens to be.

The ability to "containerize" the interface (procedure or entity),
like running virtual conduit through a building's walls, allows one to
add/subtract interface elements without having to tear into the
intervening walls. And VHDL "conduits" (record types), unlike real
ones, never "fill up". They just don't currently have a flexible means
of defining directionality (port modes).
One record per mode works, but it's untidy.

- Brian
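The one-record-per-mode workaround Brian mentions might be sketched like this (types and fields invented for illustration):

```vhdl
-- Hypothetical sketch: split the conduit by direction, so each
-- record port has a single, legal mode.
type bus_req_t is record              -- driven by the master
  addr  : std_logic_vector(15 downto 0);
  wdata : std_logic_vector(31 downto 0);
  we    : std_logic;
end record;

type bus_rsp_t is record              -- driven by the slave
  rdata : std_logic_vector(31 downto 0);
  ack   : std_logic;
end record;

entity slave is
  port (
    clk : in  std_logic;
    req : in  bus_req_t;
    rsp : out bus_rsp_t
  );
end entity;
```

Adding a field later means editing only the record declaration in the package; entities that merely pass the conduit through are untouched -- at the cost of maintaining two record types per interface.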
 
On Jun 24, 6:21 am, Brian Drummond <brian_drumm...@btconnect.com>
wrote:
If you mean the "separate" mechanism, that is defined to maintain the
"local scope" at the declaration, wherever the "separate" implementation
happens to be.
Yes, the "separate" mechanism is what I was thinking. Thanks for the
memory jog.

Andy
 
KJ wrote:
<kennheinrich@sympatico.ca> wrote in message
news:149d8a46-8a59-417d-aee4-e71317e0ed35@59g2000hsb.googlegroups.com...

Actually the vendor is trying to protect the IP from being generic in any
sense because they are trying to also sell you silicon. Trying to make the
code somewhat opaque and non-generic hinders at least some from reverse
engineering it and creating something that gets used on competitor's
silicon. But remember, you don't have to choose to use the IP, you can
engineer the solution yourself...just like the vendor did. They spent their
money engineering a solution that might be acceptable to a majority of their
users, it's up to the market to sort out the winners and losers.
The naive thought is that it is easy to write a generic component and
use it for any application. If you haven't tried it yet, please do so
once. Remember, the requirement on an IP is that it will work, and that
the customer does not have to look into the internals to get it to work.

Write a component of modest complexity and distribute it to several
designers, and you will find out that what you did is not intuitive at
all. Come back to your own code a month or more later and you will
think the same. Every tiny thing should be documented in detail, and
heavy documentation is frustrating in itself.

Now, this is not all. You will find out that your customers use your
code in a manner you never thought it could be used, and, of course,
it does not work! Also, you have to stop somewhere when testing the 30
possible alternatives and the trillion corner cases, and whatever you
haven't tested will not work.

I am from an ASIC design background, and I assume delivering IP on a
commercial basis has the same demands: it should be error-free on
delivery. Verification is the hard part; often you spend more than 80%
of your time there. I can imagine that developing good documentation is
also not easy, so code development is a really tiny part of the
complete package.

A generic IP is likely to cost a fortune which the customer is not
willing to pay. The only reasonable way is to provide one or two
flavours which work for sure, and the customer has to adapt this module
to his interfaces.

I am not developing IP, but from all I see, this business case is not a
real money maker. We have used IP that should have run out of the box
but often didn't; after a quarter of a year spent getting it to work,
we have wondered whether we would have done better designing it
ourselves. The IP vendor gives support for free in such cases; bad
luck...

Best regards

Wolfgang
 
KJ wrote:
"Jonathan Bromley" <jonathan.bromley@MYCOMPANY.com> wrote in message
news:cdrp54tdqqao7guh0cf1gcv3jfr70mmi7m@4ax.com...
If it's a more fundamental change, then it raises a much
bigger question about the design of component hierarchy
and why you got the interfaces between components wrong
in the first place (that's not a criticism, just a bald
statement of the problem).

Actually not. The software folks who practice agile development insist on
incremental development, frequent testable deliverables and designing in
absolutely no more than is required for the current deliverable.

Agile development - just another buzzword, and even worse, nothing new.
All of this, and more, was described long before by Sommerville, whose
book was a standard work for a time.

I have read through all of their ideas, and what finally put me off was
a contribution from a speaker at the EuroPython 2007 conference: the
optimum for pair programming is about 45 minutes! Pair programming is
very exhausting, and our problems are difficult enough that nobody is
able to switch to another problem every 45 minutes. The result is that
after two weeks of working like that, you need a holiday to recover...

Extreme programming is about delivering a working framework in a short
time; programming hardware is about delivering a fault-free environment
after a short time, and that is a huge difference. It does not cost the
world to fix a software error afterwards, but for hardware it does. A
software error that hits the user every 20 minutes could be a hardware
error that occurs every millisecond, and since hardware is in many
cases not operated interactively, there are no easy workarounds.

I have read about the admiration for the reliability of well-designed
hardware, and that a group wanted to initiate "IC-like software design".

It is not that we can't learn from modern software design, but the
quality of our designs depends entirely on extensive testing and
regression tests. This is often not standard in software design, and
not always in FPGA design either, but for a multi-million-gate ASIC
anything else is a no-no. As testing is often 80% of the effort or
more, the time to write the functional code can be neglected. Real
advances go hand in hand with testing.

Research in the past has shown that it is not the language that makes
you efficient, but the workflow. This could be agile development, of
course, but in an adapted form. For verifying hardware we have very
advanced methods that software engineers can only dream of. Where
reliability is concerned, software can learn from an ASIC design flow.
That also means you can be beaten by antiquated methods at any time, as
long as the working group is familiar with them. A failure on a big
ASIC means costs of $1M+ and a delay of half a year. In the past we
could manufacture ASICs like this without functional failure. Show me
any software as reliable as that.

Best regards

Wolfgang
 
Jonathan Bromley wrote:
Why do we even bother to do hierarchical partitioning in the first place?

Mike wrote:
Exactly. A preference for describing lots
of boxes and wires is a hindrance to agile development.
KJ wrote:
I disagree in that it can (should) encourage design reuse.

I was agreeing with Jonathan that excessive
*internal* structure inside my own code
also slows me down.
I agree that the more useful top entity ought
to have a simple, well-documented interface.

-- Mike Treseler
 
