out of memory when traversing cv->instances

Marcel Preda
Hi,


I have a SKILL function which dumps the list of all instances, and for each of them some attributes, into an ASCII file.

Until now everything was fine, but I have received from a layouter a
huge Assura extracted view with parasitic resistors.
The total number of instances is about 3,000,000.
And now Virtuoso crashes because there is not enough memory.

the loop which scans the instances is:
;; cv == the cellview open in the current Virtuoso window

mapcan( lambda( (inst)
    ;; get some inst properties and print them into a file
    .....
    nil ;; this will be appended by mapcan
  ) ;; lambda
  cv->instances
)

The crash happens when entering this loop.
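To be concrete, here is a stripped-down, self-contained sketch of what the function does; the procedure name, the attributes printed (name, cellName, bBox) and the output path are only placeholders, not my real code:

procedure( MPDumpInstances( cv @optional (fileName "/tmp/instDump.txt") )
  let( ( (out outfile(fileName)) )
    mapcan( lambda( (inst)
        ;; print a few attributes per instance (placeholders)
        fprintf( out "%s %s %L\n" inst~>name inst~>cellName inst~>bBox )
        nil ;; return nil so mapcan appends nothing
      ) ;; lambda
      cv~>instances
    )
    close(out)
  ) ;; let
) ;; procedure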

I use

mapcan(lambda((x) ...)
  l_list
)

statements because they seem to be faster than a foreach() loop.

I don't know too much about the SKILL internals; the question is, when I do a call like:

mapcan(lambda((x) ...)
  l_list
)

is the l_list parameter the original list, or is a copy created and passed as the parameter?
I cannot figure out any other reason.

How can I traverse such a huge list without reaching the memory limit?

I've traced the allocated memory for the icfb process.
When I just open the extracted view cell, the allocated memory is about 2.5 GB.
When my script starts, the maximum allocated memory that I have seen is 3.5 GB, and after that it is crashing.

I've tried to use gcsummary() at some intermediate steps, but I cannot figure out how much memory icfb has allocated.
Is there any function to tell me: at this moment the icfb process is using NNNNNNN bytes?

E.g. I have run icfb.
The top command says that icfb is using 2.5 GB.
Then I call gcsummary().
Honestly, I see no correlation between the top output and gcsummary().
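For now I just watch top from outside. One idea (not a Cadence API, just plain SKILL file I/O reading the Linux /proc filesystem; MPPrintVmUsage is a made-up name) would be something like:

procedure( MPPrintVmUsage()
  let( ( (port infile("/proc/self/status")) line )
    when( port
      while( gets(line port)
        ;; the VmSize / VmRSS lines hold the virtual and resident sizes in kB
        when( index(line "VmSize") || index(line "VmRSS")
          printf("%s" line)
        )
      )
      close(port)
    )
  )
)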

The gcsummary output is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~
\o ************* SUMMARY OF MEMORY ALLOCATION *************
\o GC Policy: 0 (Original)
\o Maximum Process Size (i.e., voMemoryUsed) = 890219344
\o Total Number of Bytes Allocated by IL     = 25410178
\o Total Number of Static Bytes              = 5783552
\o Total Pause Time   = 1.190 sec
\o Longest Pause Time = 0.030 sec
\o Average Pause Time = 0.017 sec
\o -----------------------------------------------------------
\o Type       Size   Allocated      Free    Static   GC count
\o -----------------------------------------------------------
\o binary       20           0         0     32768          3
\o funobj       20      188416     40740    614400          0
\o list         12     5828608   1569648   2195456         62
\o fixnum        8       36864     36616      4096          0
\o fixnumLF     16           0         0         0          0
\o symbol       28           0         0   2240512          0
\o stdobj       12       28672     28380         0          0
\o envobj       12       28672     26172         0          0
\o flonum       16      729088    615456    147456          0
\o string        8      282624     83824    450560          0
\o port         60        8192      7440         0          3
\o array        16      110592     18496     49152          0
\o other         8           0         0         0          0
\o ptrnum        8        4096      3256         0          0
\o TOTALS       --     7245824   2430028   5734400         68
\o -----------------------------------------------------------
\o User Type (ID)               Allocated      Free   GC count
\o -----------------------------------------------------------
\o assocTable (20)                 217088     55556          0
\o wtype (21)                        8192      5000          0
\o hiField (22)                      8192      5104          0
\o hiToggleItem (23)                 8192      7940          0
\o hiMenu (24)                       8192      4708          0
\o hiMenuItem (25)                  16384       512          1
\o hiListBox (26)                       0         0          0
\o hiTreeItem (27)                      0         0          0
\o hiTree (28)                          0         0          0
\o dfStep (29)                          0         0          0
\o dfStepInst (30)                      0         0          0
\o dfFlowchart (31)                     0         0          0
\o dfFlowchartInst (32)                 0         0          0
\o cdfDataUT (33)                    8192      7960          0
\o cdfParamUT (34)                   8192      7632          0
\o ddUserType (35)                   8192      8120          0
\o pcdbobject (36)                      0         0          0
\o dbBagType (37)                       0         0          0
\o rodObj (38)                          0         0          0
\o dbobject (39)                    73728     70140          1
\o geEnvironment (40)                8192      7448          0
\o geProbe (41)                         0         0          0
\o geProbeCxt (42)                      0         0          0
\o geHilightDataUT (43)                 0         0          0
\o ddCatUserType (44)                   0         0          0
\o gdmSpecIlUserType (45)               0         0          0
\o gdmSpecListIlUserType (46)           0         0          0
\o hdbobject (47)                       0         0          0
\o nmpIlUserType (48)                   0         0          0
\o opfcontext (49)                      0         0          0
\o opffile (50)                         0         0          0
\o psInfoId (51)                        0         0          0
\o gcell (52)                           0         0          0
\o layer (53)                           0         0          0
\o aelEnv (54)                          0         0          0
\o aelLineage (55)                      0         0          0
\o adtComplex (56)                      0         0          0
\o adtDoubleInt (57)                    0         0          0
\o adtDpl (58)                          0         0          0
\o adtAnyPtr (59)                       0         0          0
\o adtString (60)                       0         0          0
\o drDataVectorIL (61)               8192      8144          0
\o drWaveformIL (62)                 8192      8144          0
\o pslSemanticIL (63)                   0         0          0
\o drDataFile (64)                      0         0          0
\o drFixedLeafNode (65)                 0         0          0
\o drAllData (66)                       0         0          0
\o drFixedIntrNode (67)                 0         0          0
\o drAnalInst (68)                      0         0          0
\o drRunObjFile (69)                    0         0          0
\o drRunObj (70)                        0         0          0
\o drFixedSwpNode (71)                  0         0          0
\o drAnalInstNode (72)                  0         0          0
\o drDataIntrNode (73)                  0         0          0
\o drDataLeafNode (74)                  0         0          0
\o drInstIntrNode (75)                  0         0          0
\o drInstLeafNode (76)                  0         0          0
\o msp (77)                             0         0          0
\o amsobject (78)                       0         0          0
\o mpsHandle (79)                    8192      7504          0
\o cdsEvalObject (80)                   0         0          0
\o ipcUT (81)                        8192      7920          0
\o TOTALS --                        405504    211832          2
\o -----------------------------------------------------------
\o Bytes allocated for:
\o arrays       =  412444
\o arrays(stat) =    4960
\o strings      =  491212
\o strings(perm)= 1148914
\o vcode        = 1321684
\o vcode(stat)  = 8050928
\o std slots    =      64
\o env slots    =    3920
\o IL stacks    = 196596 + 256000 + 64000
\o (Internal)   =   24576
\o TOTAL GC COUNT = 70
\o ----- Summary of Symbol Table Statistics -----
\o Total Number of Symbols = 80004
\o Hash Buckets Occupied   = 4499 out of 4499
\o Average chain length    = 17.782618
\o Longest chain length    = 35
\o No of sym lookups       = 204934
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Best Regards,
Marcel
 
Why are you using mapcan? This will construct a new list from the appended
results of the lambda (which has to be a list). This means you're going to get
(assuming you're always returning nil) a list with 3 million nils in it. Seems
rather pointless to me.

The act of doing cv~>instances will also create a big list - you might want to
check what happens to the memory when you do that. In the CIW you can type:

mylist=cv~>instances t

(yes, there's a t on the line - it's there to stop it trying to print the list).

You can also run "layout -64" instead of icfb, to run in 64 bit mode.

But you should use mapc rather than mapcan if you're not trying to construct a
new list. Or just use foreach, which can make things simpler for most people -
avoids using lambda (it's effectively the same, because foreach transforms into
mapc internally).
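
For example, a foreach version of your dump might look something like this (the file name and the attributes printed are just placeholders for whatever you actually write out):

let( ( (out outfile("/tmp/instDump.txt")) )
  foreach( inst cv~>instances
    ;; one line per instance with whichever attributes you need
    fprintf( out "%s %s\n" inst~>name inst~>cellName )
  )
  close(out)
)

No result list is built at all, so the only big allocation left is the cv~>instances list itself.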

Andrew.
 
Hi Andrew,

I'm 101% sure that the mapcan(lambda() ...) construction that I use does
not create a list with (3,000,000 x nil).
See the code below:

#########################
let( (l_1 l_2)

  l_1 = list(1 2 3 4)
  l_2 = mapcan( lambda((x)
          ;; do nothing
          nil ;; lambda ret val
        )
        l_1
      )
  printf("l_2 is %L\n" l_2)
)
####################
It prints:
"l_2 is nil"

From my experience foreach is much slower than

mapcan( lambda( (x)
    nil
  )
  l_list
)

That's because foreach considers it a good idea to return l_list (its third
argument).
I know that foreach is much more readable than mapcan(lambda()), but because
of the speed consideration we have rewritten a few foreach loops as mapcan(lambda()).

Probably mapcan is trying to allocate 3,000,000 list cells, because it
assumes it will need them?!

About the 64-bit version, I'm afraid there will also be a per-process limit
set by our IT team.
I have to check it.

Best Regards,
Marcel
 
You're right. I was having a temporary brain aberration - I was seeing mapcar
rather than mapcan. However, I doubt very much that foreach() would be slower -
or mapc() would be slower. The list that is returned is the same list as the
list passed in, so that would take no time. It will of course take time if you
type it in the CIW, because of the display of the return value - but in a
program it should be negligible. In fact from a quick check doing some simple
ops on a list with 3000000 items, foreach was quicker than mapc and mapcan -
both of which were similar times (assuming that the return value of the lambda
was nil).
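
A quick way to convince yourself that no copy is made (just paste these lines in the CIW):

l_1 = list(1 2 3 4)
l_2 = foreach( x l_1 nil )
eq( l_1 l_2 )   ; => t : foreach returns the very same list object it was given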

Perhaps it's just the construction of the cv~>instances list that's pushing you
over the limit?

Running 64-bit is probably the only option if you're exceeding the limit just by doing cv~>instances.

Regards,

Andrew.
 
Hi Andrew,

Thank you for the "layout -64" info.
It did the job :).
I also have to review the "foreach" vs. "mapcan" statements, but first
I have to test them :).

Best Regards,
Marcel
 
