cell libraries and place and route

fogh wrote:
I find it quite an insightful comment. Do you have more to say on the
matter? Like comments on LBX, NoMachine NX, Citrix, or Tarantella?
Thanks for the compliment.

The only one of these I've had any experience with is LBX (Low Bandwidth
X), though with dxpc (http://www.vigor.nu/dxpc/) instead of the X11R6.3
lbxproxy; from what I understand, they're roughly equal. It helps to
some degree -- especially with those large screen redraws -- but still
suffers from the round-trip-time limitations.

I've heard good things about NX, but haven't gotten a chance to use it
(or Citrix or Tarantella).


Aside:
The fastest remote GUIs I've used are where the GUI is an actual
application running on the head machine and is network aware. I've
written a Tcl app (before my time at Cadence) which allowed me to
control instrumentation in a lab from my cubicle in another part of the
building; because the messages were very small ("move head to 32mm,"
"spin disk," "write track," "get average amplitude," etc.), it felt like
it was the actual control machine (though it was problematic if
something went awry; I was thinking of putting a webcam on it for this
purpose...).

Such applications, though, require much more time to write and debug,
and it's not always clear how to divide responsibilities. In something
like Virtuoso Layout Editor, for example, it could easily be worse to
push the polygons across the pipe than just send a bitmap representation.
 
On Fri, 29 Apr 2005 18:30:41 -0400, David Cuthbert wrote:

fogh wrote:
I find it quite an insightful comment. Do you have more to say on the
matter? Like comments on LBX, NoMachine NX, Citrix, or Tarantella?

Thanks for the compliment.

The only one of these I've had any experience with is LBX (Low Bandwidth
X), though with dxpc (http://www.vigor.nu/dxpc/) instead of the X11R6.3
lbxproxy; from what I understand, they're roughly equal. It helps to
some degree -- especially with those large screen redraws -- but still
suffers from the round-trip-time limitations.

I've heard good things about NX, but haven't gotten a chance to use it
(or Citrix or Tarantella).


Aside:
The fastest remote GUIs I've used are where the GUI is an actual
application running on the head machine and is network aware. I've
written a Tcl app (before my time at Cadence) which allowed me to
control instrumentation in a lab from my cubicle in another part of the
building; because the messages were very small ("move head to 32mm,"
"spin disk," "write track," "get average amplitude," etc.), it felt like
it was the actual control machine (though it was problematic if
something went awry; I was thinking of putting a webcam on it for this
purpose...).

Such applications, though, require much more time to write and debug,
and it's not always clear how to divide responsibilities. In something
like Virtuoso Layout Editor, for example, it could easily be worse to
push the polygons across the pipe than just send a bitmap representation.

I'm not sure how Virtuoso and various X servers transfer data over the
network, or whether it can be done better. It really depends on the type
of data. If your screen is, say, 1024x1024 at 24-bit color, that's 3MB of
data. If the data consists of only a few unique colors (for instance
layout layers), a 256-color index will produce 1MB of data. If the colors
are solid (non-stipple) or mostly black background, you can compress it to
< 100K. Also, if only part of the image changes this will be even less,
especially if the server caches the image around the visible window. That
should in theory be very fast if the client/server understand these
optimizations.
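
(To make the arithmetic explicit, under the assumptions above: 1024 x 1024
pixels x 3 bytes/pixel = 3,145,728 bytes, i.e. roughly 3MB per full-screen
update; with an 8-bit color index it is 1024 x 1024 x 1 byte = 1MB; and on,
say, a 10Mbit/s link the uncompressed 3MB frame alone takes about 2.5 seconds
to push across.)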

Now, vector data will likely be larger for all but the smallest designs,
as rectangles are 16 bytes each and in general can't be compressed nearly
as well as images. Of course, the entire design could be sent over the
network as vector graphics during "load". If you're just viewing it, then
this amounts to copying the design onto the local machine and just viewing
it with a local tool, which is probably what you want to do. If editing is
required and the geometry is changing then it's much harder.
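
(A rough break-even: at 16 bytes per rectangle, the 3MB screen image above
corresponds to about 3,145,728 / 16 = 196,608 rectangles, so any view showing
more shapes than that is cheaper to ship as pixels than as raw vectors,
before compression on either side.)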

I've had to deal with slow graphics many times, and I keep wanting to
create my own solution to it but I'll probably never have the time. Maybe
eventually someone will have a good client/server that is optimized for
layouts/schematics. It doesn't seem too difficult to create this for a
fixed OS/platform. Exceed can work well if you set all of the
optimizations/window settings to the correct values, which is quite a
task, and maybe some day they will add an "Optimize for CAD tools" button
to it.

Frank

 
Frank E. Gennari wrote:
I'm not sure how Virtuoso and various X servers transfer data over the
network, or whether it can be done better.
Generally, it's instructions such as "draw a line from P0 to P1 using
PEN0" and "draw a rectangle from P2 to P3 using BRUSH0", where PEN0 and
BRUSH0 must be defined earlier. Virtuoso's layout canvas probably has
some optimizations built into it, but I'm not familiar with that part of
the code (and probably couldn't divulge much if I did...).

It really depends on the type
of data. If your screen is, say, 1024x1024 at 24-bit color, that's 3MB of
data. If the data consists of only a few unique colors (for instance
layout layers), a 256-color index will produce 1MB of data.
Yep. This is how VNC looks at the world.

If the colors
are solid (non-stipple) or mostly black background, you can compress it to
< 100K.
Whether something is solid or not matters only for run-length encoding
algorithms. Back in the day, this was the only image compression
algorithm that was feasible for realtime operations. Now with
processing power being far greater than network bandwidth,
pattern-matching (LZW/GIF and gzip/PNG) and frequency-space (JPEG)
algorithms are feasible (and used) for image compression.
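
(For example, a 1024-pixel scanline of solid background is 1024 bytes raw but
collapses to a couple of bytes under run-length encoding -- one count plus one
color -- whereas the same line drawn with a fine stipple has runs only a pixel
or two long and barely shrinks at all. A dictionary coder like LZW, on the
other hand, will happily pick up the repeating stipple pattern itself.)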

Also, these are, (mostly) general purpose compression algorithms. If
the compressor/decompressor agree on some "pre-knowledge" about the data
being transferred, compression can get much better. For example,
transferring an image of the string "Cadence", rendered in Helvetica
with antialiasing at a height of 100 points occupies ~60000 bytes
(before compression). On the other hand, if I can tell the client to
render the string "Cadence" in font #1 (agreed upon as Helvetica,
antialiased), point size 100, with a lower left corner at P0, this would
require maybe 28 bytes (and carries *more* information with it -- it's
text with the content Cadence in addition to an image of said text).
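
(One plausible packing that lands near that figure: a 4-byte opcode, a 4-byte
font id, a 4-byte point size, two 4-byte coordinates for P0, a 1-byte length,
and the 7 characters of "Cadence" -- 28 bytes in all, more than two thousand
times smaller than the rendered pixels.)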

Now, vector data will likely be larger for all but the smallest designs,
as rectangles are 16 bytes each and in general can't be compressed nearly
as well as images. Of course, the entire design could be sent over the
network as vector graphics during "load". If you're just viewing it, then
this amounts to copying the design onto the local machine and just viewing
it with a local tool, which is probably what you want to do. If editing is
required and the geometry is changing then it's much harder.
As you note, there's a big difference between just looking at the data
and manipulating it.

The OASIS guys might have a bone to pick with you about the assertion
that rectangles are not very compressible. :) They've done a fair
amount of work in this regard. I've been curious to try to gzip an
OASIS dataset -- this is a good qualitative test for how much
compression has been achieved.

I've had to deal with slow graphics many times, and I keep wanting to
create my own solution to it but I'll probably never have the time. Maybe
eventually someone will have a good client/server that is optimized for
layouts/schematics.
Personally, I don't think that EDA tools are anything special in this
regard. We're just more aware of it because EDA users find themselves
having to use remote displays more often than the general public for a
multitude of reasons:
* Platform and licensing issues (it'd be expensive to get a license
for every laptop/home computer)
* Restrictions on transferring IP (I can't imagine Intel allowing
their engineers to take chip layouts out of the building)
* The users are just more savvy (i.e., most people probably aren't
aware that a computer can be used in this fashion).

I just don't think that the remote display problem has been
satisfactorily solved. Unfortunately, I doubt that we at Cadence or
anyone else in EDA will solve it -- while we're aware of the problem,
it's just not something we have the expertise in.
 
David Cuthbert wrote:
Frank E. Gennari wrote:

I'm not sure how Virtuoso and various X servers transfer data over the
network, or whether it can be done better.


Generally, it's instructions such as "draw a line from P0 to P1 using
PEN0" and "draw a rectangle from P2 to P3 using BRUSH0", where PEN0 and
BRUSH0 must be defined earlier. Virtuoso's layout canvas probably has
some optimizations built into it, but I'm not familiar with that part of
the code (and probably couldn't divulge much if I did...).

It really depends on the type
of data. If your screen is, say, 1024x1024 at 24-bit color, that's 3MB of
data. If the data consists of only a few unique colors (for instance
layout layers), a 256-color index will produce 1MB of data.


Yep. This is how VNC looks at the world.

If the colors
are solid (non-stipple) or mostly black background, you can compress it to
< 100K.


Whether something is solid or not matters only for run-length encoding
algorithms. Back in the day, this was the only image compression
algorithm that was feasible for realtime operations. Now with
processing power being far greater than network bandwidth,
pattern-matching (LZW/GIF and gzip/PNG) and frequency-space (JPEG)
algorithms are feasible (and used) for image compression.

Also, these are, (mostly) general purpose compression algorithms. If
the compressor/decompressor agree on some "pre-knowledge" about the data
being transferred, compression can get much better. For example,
transferring an image of the string "Cadence", rendered in Helvetica
with antialiasing at a height of 100 points occupies ~60000 bytes
(before compression). On the other hand, if I can tell the client to
render the string "Cadence" in font #1 (agreed upon as Helvetica,
antialiased), point size 100, with a lower left corner at P0, this would
require maybe 28 bytes (and carries *more* information with it -- it's
text with the content Cadence in addition to an image of said text).

Now, vector data will likely be larger for all but the smallest designs,
as rectangles are 16 bytes each and in general can't be compressed nearly
as well as images. Of course, the entire design could be sent over the
network as vector graphics during "load". If you're just viewing it, then
this amounts to copying the design onto the local machine and just viewing
it with a local tool, which is probably what you want to do. If editing is
required and the geometry is changing then it's much harder.


As you note, there's a big difference between just looking at the data
and manipulating it.

The OASIS guys might have a bone to pick with you about the assertion
that rectangles are not very compressible. :) They've done a fair
amount of work in this regard. I've been curious to try to gzip an
OASIS dataset -- this is a good qualitative test for how much
compression has been achieved.

I've had to deal with slow graphics many times, and I keep wanting to
create my own solution to it but I'll probably never have the time. Maybe
eventually someone will have a good client/server that is optimized for
layouts/schematics.


Personally, I don't think that EDA tools are anything special in this
regard. We're just more aware of it because EDA users find themselves
having to use remote displays more often than the general public for a
multitude of reasons:
* Platform and licensing issues (it'd be expensive to get a license
for every laptop/home computer)
* Restrictions on transferring IP (I can't imagine Intel allowing
their engineers to take chip layouts out of the building)
* The users are just more savvy (i.e., most people probably aren't
aware that a computer can be used in this fashion).

I just don't think that the remote display problem has been
satisfactorily solved. Unfortunately, I doubt that we at Cadence or
anyone else in EDA will solve it -- while we're aware of the problem,
it's just not something we have the expertise in.
Frank, David,

It seems you have already sketched a portrait of the ideal thin client: use
generic bitmap compressors to start with, but train the compressor at
the same time. The trainer would detect that the X client called Virtuoso
Layout Editor has lately been using a given set of stipples, fonts, and
so forth, and for instance send a dictionary to the client when it is
obvious that this will give better performance than generic bitmap
compression.

That said, I also have the impression that round-trip times and limiting
the number of client-server negotiations matter more than bandwidth
improvements. So this kind of compressor should also know how to
short-circuit those negotiations.

( Sorry if my jargon is not right and this all sounds like hocus-pocus,
but I have not studied CS. )

It is understandable that Cadence does not work on these matters
directly, but it could participate in development efforts in other ways. I
don't find that VNC has made much progress since it was AT&T's. I tried
NX a long time ago, but at the time it was not stable enough to start a GNOME
or CDE desktop.
 
fogh wrote:
It seems you have already sketched a portrait of the ideal thin client: use
generic bitmap compressors to start with, but train the compressor at
the same time. The trainer would detect that the X client called Virtuoso
Layout Editor has lately been using a given set of stipples, fonts, and
so forth, and for instance send a dictionary to the client when it is
obvious that this will give better performance than generic bitmap
compression.
You've hit on a good model -- it's similar to one I thought about for a
web-based chalkboard, but had a fatal flaw there which doesn't apply here.

The goal is to get an updated screen to the user as soon as possible.
Thus, for a large, complex update (maybe the user is joining in to an
existing session), push the bitmap across -- don't bother trying to tell
it about lines, stipples, fonts, etc. We can do that, however, for
small, incremental updates: say you've dragged a device, so erase this
area, put a few lines down, done.

The flaw for the chalkboard was that I wanted to make the shapes behave
as objects that the user could manipulate. Not easy to do if all you've
sent across is a bitmap.

That said, I also have the impression that round-trip times and limiting
the number of client-server negotiations matter more than bandwidth
improvements. So this kind of compressor should also know how to
short-circuit those negotiations.
This is easy to do. First, spec out a virtual machine for this purpose.
Then the display side implements it, while the application side models
the changes being made. The application should never have to
interrogate the display about its state -- it's all contained within the
server.
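
A minimal sketch of that idea in SKILL (all names here are made up, purely
to illustrate keeping a model on the application side so that no query ever
has to cross the wire):

; Hypothetical: the application keeps its own table of what the display
; side is currently showing, so it never has to ask.
displayModel = makeTable("displayModel" nil)

procedure( sendShapeUpdate(objId newBBox)
    unless( equal(displayModel[objId] newBBox)
        displayModel[objId] = newBBox
        ; fire-and-forget one-way message; no round trip, no reply expected
        printf("UPDATE %L %L\n" objId newBBox)
    )
)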

(Thinking about this a bit, I realized that this is essentially what
NASA, ESA, etc., do during their space missions. If you've seen the
Apollo 13 movie, recall the scene where the ground team is trying to rig
up an oxygen scrubber from the parts on the spacecraft. Rather than
waste time asking the astronauts if they have this filter, that torque
wrench, etc., they use their model of the spacecraft to figure out how
to put the scrubber together. Same thing as what I'm describing, just
with higher stakes. :)

( Sorry if my jargon is not right and this all sounds like hocus-pocus,
but I have not studied CS. )
Ah, don't worry. CS lost all jargon credibility when someone decided to
call the client you sit at the "X server" and the server that runs your
application the "X client". :)

(I'm an EE who learned how to do this software stuff because I had
gotten fed up with bad and non-existent design tools. It'll take me
awhile to fix it all, but my little one-man revolution is churning
along... :)

It is understandable that Cadence does not work on these matters
directly, but it could participate in development efforts in other ways. I
don't find that VNC has made much progress since it was AT&T's. I tried
NX a long time ago, but at the time it was not stable enough to start a GNOME
or CDE desktop.
Heh, well, keep in mind that VNC was written as a side project at AT&T.
There's no group whose main charter is to fix and improve it, and it's now
a mature project, which means that those who *do* work on it in their spare
time need to make sure they don't break backwards compatibility. Even so,
there have been
a few interesting efforts out there -- I personally use TightVNC, as
they added a couple compression schemes which perform well.

Oh, and don't forget about some of the cool stuff others are doing with
VNC. I find vnc2swf a nifty tool for generating demos. (SWF =
Shockwave Flash; vnc2swf = tool which records a VNC session to a Flash
movie).

vnc2swf can be found at http://www.unixuser.org/~euske/vnc2swf/
 
G Vandevalk wrote:
When I used it (a long time ago...) the key input was a 2D description
of the cross section of the chip with a resistivity per layer
... and a clean LVS
... and a dependence on the global 0! substrate
Yuck!

This and other careless IC models (including pads, ESD, etc.) that abuse
node zero are discouraging those last few brave designers who have the
will to simulate a chip within a package with external components and an
external ground.

Node zero is baaad, m'kay?
 
On Sat, 30 Apr 2005 13:09:08 -0400, David Cuthbert wrote:

Frank E. Gennari wrote:
I'm not sure how Virtuoso and various X servers transfer data over the
network, or whether it can be done better.

Generally, it's instructions such as "draw a line from P0 to P1 using
PEN0" and "draw a rectangle from P2 to P3 using BRUSH0", where PEN0 and
BRUSH0 must be defined earlier. Virtuoso's layout canvas probably has
some optimizations built into it, but I'm not familiar with that part of
the code (and probably couldn't divulge much if I did...).
That approach works well as long as there aren't so many lines that the
window appears to be filled with a constant color, which occurs with large
designs containing several hundred thousand lines. These are the designs
that make a slow network very painful.


It really depends on the type
of data. If your screen is, say, 1024x1024 at 24-bit color, that's 3MB of
data. If the data consists of only a few unique colors (for instance
layout layers), a 256-color index will produce 1MB of data.

Yep. This is how VNC looks at the world.

If the colors
are solid (non-stipple) or mostly black background, you can compress it to
< 100K.

Whether something is solid or not matters only for run-length encoding
algorithms. Back in the day, this was the only image compression
algorithm that was feasible for realtime operations. Now with
processing power being far greater than network bandwidth,
pattern-matching (LZW/GIF and gzip/PNG) and frequency-space (JPEG)
algorithms are feasible (and used) for image compression.

Also, these are, (mostly) general purpose compression algorithms. If
the compressor/decompressor agree on some "pre-knowledge" about the data
being transferred, compression can get much better. For example,
transferring an image of the string "Cadence", rendered in Helvetica
with antialiasing at a height of 100 points occupies ~60000 bytes
(before compression). On the other hand, if I can tell the client to
render the string "Cadence" in font #1 (agreed upon as Helvetica,
antialiased), point size 100, with a lower left corner at P0, this would
require maybe 28 bytes (and carries *more* information with it -- it's
text with the content Cadence in addition to an image of said text).
Sure, that's why formats such as PS and PDF are so much smaller than
images of presentations or scanned images of documents. Using vector
graphics and fonts allows for essentially unlimited resolution. I assume
stipple patterns could also be applied by the client.

I was just talking
about a simple image compression algorithm that someone could write in a
few days, as proof that it's easy to improve upon sending 3MB of data for
an image.


Now, vector data will likely be larger for all but the smallest designs,
as rectangles are 16 bytes each and in general can't be compressed nearly
as well as images. Of course, the entire design could be sent over the
network as vector graphics during "load". If you're just viewing it, then
this amounts to copying the design onto the local machine and just viewing
it with a local tool, which is probably what you want to do. If editing is
required and the geometry is changing then it's much harder.

As you note, there's a big difference between just looking at the data
and manipulating it.

The OASIS guys might have a bone to pick with you about the assertion
that rectangles are not very compressible. :) They've done a fair
amount of work in this regard. I've been curious to try to gzip an
OASIS dataset -- this is a good qualitative test for how much
compression has been achieved.
Sure you can compress rectangles, but I doubt many existing tools will
compress the visible polygons in a dataset on the fly for transfer of a
vectorized image over the network. Compressing a layout by extracting the
11 repetition types in OASIS might take longer than sending the entire 3MB
of image data if you have to do it every time the user's viewpoint
changes. If you're going to compress the shapes, then why not transfer the
OASIS file itself and render it on the client machine?

I don't have any OASIS files to experiment with, or any software that
reads or writes OASIS. I can say that GDSII files gzip by a consistent
factor of 5.5. In fact, you can get much of the same compression out of
GDSII that you get out of OASIS by restructuring the hierarchy to use
GDSII arrays, extracting repeated shapes into their own cells, and many
other tricks. I have a tool that will compress a GDSII file by as much as
a factor of 100. The problem is that most GDSII writers I've used don't do
a good job compressing GDSII. The reason may be that the compression
process requires modification of the hierarchy since the repetition
sequences of GDSII apply only to references. Users don't like
it when their hierarchy comes out different than they created it. OASIS
allows the repetitions to be hidden from the designer since all of the
objects can be transformed and repeated. The other OASIS techniques for
size reduction probably have less of an impact than the repetitions, so I
won't discuss them at the moment.
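
(The repetition win is easy to see with a small example: a flat field of one
million identical via rectangles is tens of megabytes as individual GDSII
BOUNDARY records -- 16 bytes of coordinates each before record overhead --
but the same field expressed as one cell holding a single rectangle plus one
1000x1000 AREF is only a few hundred bytes. That is the kind of structure a
good writer can recover, and the kind of thing OASIS repetitions let you keep
without disturbing the hierarchy the designer sees.)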


I've had to deal with slow graphics many times, and I keep wanting to
create my own solution to it but I'll probably never have the time. Maybe
eventually someone will have a good client/server that is optimized for
layouts/schematics.

Personally, I don't think that EDA tools are anything special in this
regard. We're just more aware of it because EDA users find themselves
having to use remote displays more often than the general public for a
multitude of reasons:
* Platform and licensing issues (it'd be expensive to get a license
for every laptop/home computer)
* Restrictions on transferring IP (I can't imagine Intel allowing
their engineers to take chip layouts out of the building)
* The users are just more savvy (i.e., most people probably aren't
aware that a computer can be used in this fashion).
And who else needs to work with multi-GB files? The data volume amazes me,
though it's not as much data as in digital movies.


I just don't think that the remote display problem has been
satisfactorily solved. Unfortunately, I doubt that we at Cadence or
anyone else in EDA will solve it -- while we're aware of the problem,
it's just not something we have the expertise in.
Maybe the best solution is better client/server networking software that
applies to many applications and is aware of all the methods of data
compression and efficient transfer of graphical (including vector) data.
VNC is pretty good with this sort of thing.

Frank
 
On 26 Apr 2005 16:46:14 -0700, "pureck" <pureck@hotmail.com> wrote:

hello

I am working on a PA and am trying to figure out how the portAdapter works
in load-pull simulation.
There is an example in the rfExamples library, and the PA application note
says the portAdapter does the load-pull simulation.
I want to find out how the parameters of the portAdapter relate to each
other -- for example, how the phase and magnitude of gamma produce the
capacitance, inductance, and resistance values.
It will be very much appreciated if anyone can help with this.

thanks.
Did you read the SpectreRF User Guide? Chapter 11 covers simulating load-pull
contours, and explains how to set this up using the portAdapter.
(In case it isn't chapter 11 in the version you're using, it's the chapter
called Modeling Transmitters)

Regards,

Andrew.
 
On 26 Apr 2005 00:33:12 -0700, chloe_music2003@yahoo.co.uk wrote:

Hello Andrew,

The versions I'm using are:
- SimVision v5
- VerilogXL v5

Yes, I re-ran the simulation after re-invoking the simulator. The
waveforms not only failed to reload, they disappeared altogether. The
values of all the signals became "????".
Here are the steps I took. Please let me know if I did anything wrong.

1. On a Hummingbird xterm, I type "VerilogXL -s v_file.v v_tb.v +gui".
2. VerilogXL verifies the RTL; if everything is okay, it calls
SimVision.
3. In SimVision Design Browser window, I select the signals which I
want to see simulated, and send them to Waveform Window.
4. I clicked on Simulation, and then Run.
5. The simulation runs and finishes. I perform changes to the RTL.
6. I click on Simulation -> Reinvoke Simulator.
7. The argument is still "-s v_file.v v_tb.v +gui". I re-invoke the
simulator using the argument.
8. When I clicked on Simulation-> Reinvoke Simulator, all the waveforms
disappeared, leaving behind only the signal names and its values (all
became "????").
9. I clicked on Simulation - > Run.
10. All the values are still "????". No waveforms.

The log files do not seem to show any abnormalities.
Please help if you can.
Appreciated :)

Kind regards,
Chloe
Hi Chloe,

Unfortunately I don't know why this is happening - I couldn't find records of
a similar problem (it's hard to search for this, unfortunately) - and to be
honest this is not really my main product area. If you're using LDV50, then
perhaps you might want to try a newer version (IUS54 is the current release)
to see if that helps?

Or contact customer support via your usual channel.

Regards,

Andrew.
 
Hi,

I assume you have to write them out
in the extracted view.

Use the command

saveInterconnect(
( derivedLayer "dfIILayer" )
...
)

at the end of your Diva extract rule file (divaEXT.rul)
to do this.
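
For example, if your rule deck derives net layers called poly and metal1
(placeholder names -- use whatever your own rules derive), the call could
look like:

saveInterconnect(
    ( poly    "poly" )
    ( metal1  "metal1" )
    ( metal2  "metal2" )
)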

Another possibility is that they are not visible at all.
Go to LSW->Edit->Set Valid Layers...
and select them and make them visible.

Bernd

Inderpreet wrote:
Hi
I am making layouts using 0.25u tech in Virtuoso.
I am using diva rules for DRC, extraction and LVS.
When I see the extracted view it shows me only devices but no "nt"
layers.
This gives me problems while checking LVS errors.
Please suggest what I should do.
Thanks and Regards
Inderpreet
 
Go into your LSW and make all layers valid ...

(messy and brute force ... but works ... )

What we did was write a SKILL routine that "made valid" all layers
that existed in a layout (with a lot of other nice features ... we called
it lswpp, meaning Layer Selection Window Plus Plus).

-- G


Too bad Cadence does not have such features built in
"Inderpreet" <induhs@protected_id> wrote in message
news:a5c0fbc3b5477d1bcf1eab622b235aaf@localhost.talkaboutcad.com...
Hi
I am making layouts using 0.25u tech in Virtuoso.
I am using diva rules for DRC, extraction and LVS.
When I see the extracted view it shows me only devices but no "nt"
layers.
This gives me problems while checking LVS errors.
Please suggest what I should do.
Thanks and Regards
Inderpreet
 
I have already done this...
I mean I have already tried selecting all the layers in the LSW,
but the extracted view still does not show any nt layer.
 
First, check that the desired layer purpose pairs are defined in the
technology file. It is not enough for the layers and purposes to be
defined. You have to have the pairs defined as well. This must be done
before running Diva Extraction. Adding them afterward will not make them
visible, for some reason.

Second, check that your Extract rules have saveInterconnect commands for
the layers you are missing. Diva will validate the existence of the
layer purpose pairs when compiling the saveInterconnect commands. If
they were undefined and you did not get an error, upgrade immediately to a
build that is actually worth using.

Third, make sure the LSW has the layer purpose pairs you want to see set
as valid, and visible. Sad, but true, that people often forget to make
the "missing" layers visible.

Now run Diva Extraction and you should see the shapes. Though there was
the one notable case where someone had set the display parameters for
the "net" purposes to be black outlines with black fill. Very
embarrassing to have your P0 PCR rejected with the designation "gross
pilot error" after the developers stop laughing their little pointed
heads off.

On Tue, 03 May 2005 12:55:53 -0400, "Inderpreet" <induhs@protected_id>
wrote:

Hi
I am making layouts using 0.25u tech in Virtuoso.
I am using diva rules for DRC, extraction and LVS.
When I see the extracted view it shows me only devices but no "nt"
layers.
This gives me problems while checking LVS errors.
Please suggest what I should do.
Thanks and Regards
Inderpreet
 
On 3 May 2005 14:12:44 -0700, "danmc" <spam@mcmahill.net> wrote:

I want to be able to netlist a design by running:

icms -nograph -replay make_netlist.il

where make_netlist.il has

simulator('spectre)
design("mylib" "mycell" "myview")
createNetlist(?recreateAll t)

It seems when I'm using versionsync and all cells are checked in, the
design() call doesn't take. Then in my CDS.log I see:

\o You do not have the required cellViews or properties open
\o for this session. You may have purged the data from virtual
\o memory or the schematic data has been closed. You can type:
\o simulator('simulatorName) to reset the session or quit the
\o application that you are using.
\o The design has not been specified as yet. Please specify
\o the design using the design() command, or refer to
\o ocnHelp(' design).

has anyone else seen this? This is a real pain. Any workaround?

I'm using 5.0.33 USR3

-Dan
Most likely the problem is because design() normally opens the
design in edit mode (mostly for historical reasons, as socket-based
interfaces such as spectreS needed this).

I've a solution on http://sourcelink.cadence.com for this - 11011216.

The solution is to use:

design("mylib" "mycell" "myview" "r")

i.e. add a fourth argument "r" to indicate you want to open the design
in readonly mode.
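
So the full make_netlist.il from your example becomes:

simulator('spectre)
design("mylib" "mycell" "myview" "r")
createNetlist(?recreateAll t)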

Regards,

Andrew.
 
I have already made the nt layers as valid layers in LSW and LSW is showing
them.
Also, while doing extraction, if in the Set Switches option I set both
"parasitic capacitance" and "skip soft connect check", then in the extracted
view I am not able to see the nt layers.
But if in the Set Switches option I set only "skip soft connect check", then
in the extracted view I am able to see the nt layers.
Is this a problem because I am using the 0.25u tech file? Extracted views of
layouts in the 0.35u tech show the nt layers.
 
They are different; however, you may have both installed.
Check that your path is pointing to the CDB version.

stéphane


kimo wrote:
Hi

I have a simple question. Our installed Cadence is telling me that
this is the OA version, while I am trying to open a CDB file.

My question is, how do I open the CDB version of cadence. Or, do I have
to install a different version?
 
On Wed, 04 May 2005 07:03:53 -0400, "Inderpreet" <induhs@protected_id> wrote:

I have already made the nt layers as valid layers in LSW and LSW is showing
them.
Also, while doing extraction, if in the Set Switches option I set both
"parasitic capacitance" and "skip soft connect check", then in the extracted
view I am not able to see the nt layers.
But if in the Set Switches option I set only "skip soft connect check", then
in the extracted view I am able to see the nt layers.
Is this a problem because I am using the 0.25u tech file? Extracted views of
layouts in the 0.35u tech show the nt layers.
Almost certainly the extraction rules you are using are only calling
saveInterconnect() in the else branch of the ivIf which looks at the value of
the parasitic capacitance switch. The fact that it does it with some switch
settings and not others is a good indication of that.
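
A typical pattern (purely schematic -- your deck's switch and layer names
will differ) looks like:

ivIf( switch( "parasitic capacitance" ) then
    ; parasitic extraction flow -- no saveInterconnect() here,
    ; so no nt shapes end up in the extracted view
    ...
else
    saveInterconnect(
        ( poly   "poly" )
        ( metal1 "metal1" )
    )
)

Moving the saveInterconnect() call outside the ivIf (or repeating it in both
branches) makes the nt layers appear regardless of which switches are set.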

These things are dependent on the extract rules and how they are written - so
it is impossible to give a general answer as to whether your 0.25u or 0.35u
technology would work one way or another - it depends on who wrote the rules
and how they wrote them, and for what technology they are for, etc, etc.

Regards,

Andrew.
 
On 4 May 2005 04:19:10 -0700, "kimo" <eng_ak@link.net> wrote:

Hi

I have a simple question. Our installed Cadence is telling me that
this is the OA version, while I am trying to open a CDB file.

My question is, how do I open the CDB version of cadence. Or, do I have
to install a different version?
You need to either translate the database from CDB to OA (Under
Tools->Conversion Toolbox in the CIW), or use a different installation with
the CDBA version of the tools.

Regards,

Andrew.
 
So what should I do now?
What are the changes that I should make in the divaEXT.rul file so that it
starts showing nt layers in the extracted view?
 
Clearly you need to understand the flow of the tool data.

If the program is set to not-save the interconnect data on a
branch, then you will not get nets.

Start with a small example (say, an inverter) and run the tool.

Try turning on the display of the rules execution and see what the
log file says.

It should be possible to move the saveInterconnect code into a
more mainline branch ...

-- G

"Inderpreet" <induhs@protected_id> wrote in message
news:5aac0519e3ac3c580a1573c043a7d13a@localhost.talkaboutcad.com...
So what should I do now?
What are the changes that I should make in the divaEXT.rul file so that it
starts showing nt layers in the extracted view?
 
