EDK : FSL macros defined by Xilinx are wrong

On Fri, 22 May 2009 11:57:36 +0100, Martin Thompson wrote:

> I noticed whilst delving with FPGA Editor into Xilinx devices that
> there is a latch option within the flip-flop block - have you ever used
> them? Will synth tools map to them, do you know?
I've always steered clear of latches in FPGAs, for all the
standard reasons. You're right, the slice FFs can be configured
as latches, and ISE uses them correctly - at least, it did with
the simple testcases I tried. And ISE's static timing analysis
seems to handle them correctly, though I confess I haven't done
a really detailed examination of how all that works.

I'm still reluctant to use latches for mainstream design,
but I guess that just shows I'm boring and unadventurous :)
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
On May 22, 7:30 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:
<snip>
Hi Jonathan,
I sent another email to you. Have you received it yet?

Weng
 
Weng,

If you don't care to share your communications with the users of this
group, please do not waste the bandwidth seeing if someone has
received your private email. Doing so is no better than the rest of
the spam on this group.

Andy
 
On Tue, 2009-06-02 at 09:53 -0700, Antti wrote:
> Hi
>
> does anybody have real and realistic performance figures for the Xilinx
> GbE solution with XPS_TEMAC/MPMC?
>
> we need to get 60% of GbE wirespeed, UDP transmit only, but it seems
> like a really hard target to reach :(
>
> MPMC has a memory latency of 23 cycles (added to EACH memory access),
> so the Ethernet SDMA takes a lot of bandwidth already, there is another
> DMA writing data at the same speed, and the PPC itself uses the same
> memory too
>
> Antti
With a custom Ethernet core + MPMC we get data rates slightly above
100MBps, depending on MTU. The single memory is shared by the
MicroBlaze/PPC for code and data access, at least one streaming data
source (a custom PIM for NPI), and the custom Ethernet IP (MAC + some
packet composers, decoders, etc.), again connected to NPI.
We rejected XPS_TEMAC because of its low performance. The problem is
I lost my benchmark results. Sorry.

Jan
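
(As a quick illustration of Antti's latency point: a fixed setup latency
added to every access derates the usable NPI/SDMA bandwidth in proportion
to burst length. The port width, clock, and burst length below are
illustrative assumptions, not MPMC specifications -- a minimal Python
sketch:)

def effective_bandwidth(bus_mhz, bus_bytes, burst_beats, latency_cycles):
    # Peak bandwidth derated by a fixed per-burst latency.
    cycles_per_burst = burst_beats + latency_cycles
    bytes_per_burst = burst_beats * bus_bytes
    return bus_mhz * 1e6 * bytes_per_burst / cycles_per_burst  # bytes/sec

# e.g. a 100 MHz, 64-bit port moving 16-beat bursts, with 23 cycles of
# latency added to each access:
bw = effective_bandwidth(100, 8, 16, 23)
print("%.0f MB/s out of a %.0f MB/s peak" % (bw / 1e6, 100 * 8))  # ~328 of 800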
 
<Antti.Lukats@googlemail.com> wrote in message
news:45a07ecd-3a6c-4047-a640-cb5706d0b26b@k2g2000yql.googlegroups.com...
On 2 June, 21:05, Jan Pech <inva...@void.domain> wrote:
<snip>

hum.. my current task is to optimize an XPS_TEMAC based system
(with 1 single DDR2 chip as main memory!)
to reach about 580MBps

:(

I have never said that to be possible, but I need money :(
and if the goal can't be reached there will be none...

over 100MBps is surely possible (with XPS_TEMAC too)
but 580MBps is beyond doable, I think, for sure
Antti

> over 100MBps is surely possible (with XPS_TEMAC too)
really? over GbE? impossible!

I take it you mean over 100Mbps, which is far more plausible.

Phil
 
On Tue, 2009-06-02 at 11:32 -0700, Antti.Lukats@googlemail.com wrote:
<snip>
> hum.. my current task is to optimize an XPS_TEMAC based system
> (with 1 single DDR2 chip as main memory!)
> to reach about 580MBps
Just a simple calculation:
125000000 / 1024 / 1024 = 119.2MBps
That is without protocol overhead, FCS, or IFGs. How do you want to
exceed the limit of Gigabit Ethernet?

Jan
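
(To put numbers on Jan's point: standard Ethernet framing adds, per
MTU-1500 frame, 8 bytes of preamble/SFD, 14 of MAC header, 4 of FCS, and
a 12-byte inter-frame gap. A small Python check of the resulting ceilings:)

RAW = 125000000                  # GbE data rate in bytes/sec (1 Gbit/s / 8)
print(RAW / 1024.0 / 1024.0)     # 119.2 "MBps" -- Jan's figure

mtu, overhead = 1500, 8 + 14 + 4 + 12          # 1538 wire bytes per frame
eth = RAW * mtu / (mtu + overhead)             # Ethernet payload ceiling
udp = RAW * (mtu - 20 - 8) / (mtu + overhead)  # minus IP + UDP headers
print("%.0f Mbit/s Ethernet payload, %.0f Mbit/s UDP payload"
      % (eth * 8 / 1e6, udp * 8 / 1e6))        # ~975 and ~957 Mbit/s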
 
On Tue, 2009-06-02 at 20:47 +0200, Jan Pech wrote:
<snip>
Or did I get you wrong and you are talking about Mbits per second? I was
talking about Mbytes per second.
If so, your goal should be reachable using xps_ll_temac instead of
xps_temac.
Jan
 
On 2 June, 21:46, "Phil Jessop" <p...@noname.org> wrote:
<Antti.Luk...@googlemail.com> wrote in message
news:45a07ecd-3a6c-4047-a640-cb5706d0b26b@k2g2000yql.googlegroups.com...
<snip>

> > over 100MBps is surely possible (with XPS_TEMAC too)
>
> really? over GbE? impossible!
>
> I take it you mean over 100Mbps, which is far more plausible.
>
> Phil
yes, sorry, I did mean
XPS_TEMAC/MPMC, GbE (1000 Base-X fiber)

100MBps is OK
580MBps -- hardly possible

Antti
 
On 2 June, 21:47, Jan Pech <inva...@void.domain> wrote:
<snip>
> Just a simple calculation:
> 125000000 / 1024 / 1024 = 119.2MBps
> That is without protocol overhead, FCS, or IFGs. How do you want to
> exceed the limit of Gigabit Ethernet?
>
> Jan
oops, silly me :(
B as in byte

I meant to say we need 580Mbit/s

Antti
 
On 2 June, 21:50, Jan Pech <inva...@void.domain> wrote:
<snip>
> Or did I get you wrong and you are talking about Mbits per second? I was
> talking about Mbytes per second.
> If so, your goal should be reachable using xps_ll_temac instead of
> xps_temac.
> Jan
oh, I am talking wrong today
yes, Mbit/sec or Mbps
and sure, XPS_LL_TEMAC with ALL hardware options tuned to the maximum,
and we do not copy buffers, and we do not calc the UDP checksum with the PPC

but even Treck's marketing booklet promised only 355 Mbps for MTU 1500,
and I need 580Mbps

Antti
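
(For scale: 580 Mbit/s of UDP payload at MTU 1500 is roughly 49,000
packets per second, which fixes the per-packet CPU budget. The 300 MHz
PPC405 clock below is only an assumed example figure:)

target_bps = 580e6
udp_payload = 1472                      # bytes per MTU-1500 UDP datagram
pps = target_bps / (udp_payload * 8)    # ~49,000 packets/sec
cpu_hz = 300e6                          # assumed PPC405 clock
print("%.0f pkt/s -> %.0f CPU cycles available per packet" % (pps, cpu_hz / pps))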
 
<Antti.Lukats@googlemail.com> wrote in message news:244fac96-a937-48b4-949e-
> but even Treck's marketing booklet promised only 355 Mbps for MTU 1500,
> and I need 580Mbps
Antti,

I think the Treck numbers assume TCP/IP. I am actually in the middle of
evaluating the same thing. I have a design similar to the one described in
XAPP1041 running on a custom V4FX60 board, and I seem to be getting the
numbers you are looking for (raw Ethernet Tx traffic), although it is early
for me to say whether they are "real", i.e. I haven't yet analyzed properly
what the Xilinx perf_app software does.


/Mikhail
 
<snip>
Antti,
The USRP2 (http://en.wikipedia.org/wiki/
Universal_Software_Radio_Peripheral) is a software-defined radio that
uses a Spartan-3 + GigE PHY chip to reach 800 Mbits/sec sustained.
I believe the MAC in their FPGA has a few limitations (it only supports
1000BASE-T) but was originally based on the OpenCores tri-mode MAC
(though significant modifications were needed to make it reliable,
IIRC). The other caveat here is that the USRP2 guys push raw Ethernet
frames into a PC, i.e., they don't use TCP or UDP. I believe their
analysis showed that they needed a custom network layer to support the
sustained high data rates.

So I wouldn't give up hope on making something work here at 580 Mbits/
sec. All of the USRP2 code (software + HDL) is open-sourced, and
should be available through their Subversion repositories.

Good Luck,
John
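
(For reference, the PC side of that raw-frame approach talks straight to
the MAC layer; a minimal, hypothetical Python sketch that puts one raw
frame on the wire. Linux-only, needs root; the interface name is an
assumption, and 0x88B5 is one of the IEEE EtherTypes reserved for local
experiments:)

import socket

ETH_P_EXP = 0x88B5                   # experimental EtherType
iface = "eth0"                       # hypothetical interface name

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_EXP))
s.bind((iface, 0))

dst = bytes.fromhex("ffffffffffff")  # broadcast, just for the demo
src = s.getsockname()[4]             # our own MAC, from the bound socket
frame = dst + src + ETH_P_EXP.to_bytes(2, "big") + bytes(1500)
s.send(frame)                        # one full-size frame, no IP/UDP at all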
 
On Jun 3, 5:33 am, "john.orla...@gmail.com" <john.orla...@gmail.com>
wrote:
<snip>

I'm using UDP and getting a sustained 600-700 Mbits/sec. In fact this
number is limited by the PC side: either the network card or the stack.

- outputlogic
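
(The PC-side limit is easy to measure directly. A minimal UDP blaster in
Python, with a hypothetical target address and an arbitrary 5-second run:)

import socket, time

addr = ("192.168.1.100", 5000)       # hypothetical receiver
payload = bytes(1472)                # max UDP payload at MTU 1500
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

sent, t0 = 0, time.time()
while time.time() - t0 < 5.0:
    s.sendto(payload, addr)
    sent += len(payload)
print("%.0f Mbit/s of UDP payload" % (sent * 8 / (time.time() - t0) / 1e6))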
 
Jan Pech submitted the idea:
<snip>
Hello,

FYI, a while ago we developed a solution we call GEDEK that
delivers 100% GbE performance: we guarantee the simultaneous
generation and reception of back-to-back Gigabit Ethernet frames
without delay or loss, and our hardware stack has UDP, some ICMP &
ARP, without requiring a processor (a hardware stack indeed). Available &
tested on Xilinx & Altera, 100M, 1GbE, or dual-speed. We provide both
ends (FPGA block and PC Win/Linux API, in source code). We have options
for Remote Flash Programming, Virtual UARTs, WOL, etc.
Documentation and demos for both vendors are available on demand at
info at alse-fr not calm.

Bert
 
<news-support@sbcglobal.net> wrote in message
news:1244504405.96258_3078@flph199.ffdc.sbc.com...
> Please note that on or around July 15, 2009, AT&T will no longer be
> offering access to the Usenet netnews service. If you wish to continue
> reading Usenet newsgroups, access is available through third-party
> vendors.
>
> Posted only internally to AT&T Usenet Servers.
Well, there's a bite in the nuts. Thanks a bunch.
 
Hi Dan,


> Developing professional GUIs is very time consuming for me. This has
> been my bottleneck with the program all along. With a command line
> interface, you will execute a script in one window, and view, edit,
> and print the timing diagram shown in another window. Like a Matlab
> interface.
This description sounds somewhat like a program I used some time ago. Of
course it does not have the analysis options you have implemented in your
TimingAnalyzer tool.

http://drawtiming.sourceforge.net/

cheers
ben
 
On Jun 20, 1:33 pm, Benjamin Krill <b...@codiert.org> wrote:
Hi Dan,

Developing professional GUIs is very time consuming for me.  This has
been my bottleneck with the program all along.  With a command line
interface,  you will execute a script and in one window,  and view and
edit and print the timing diagram shown in another window.   Like
Matlab interface.

This descriptions sounds somehow like a program I used some time ago. Of
course not with the analyzing options you have implemented in your
TimingAnalyzer tool.

http://drawtiming.sourceforge.net/

cheers
 ben

Hi ben,

I did see that before, and it is similar, but my approach will be much
higher level and thus easier.

For example:

micro = m68000()
micro.write(add, data, wait_states)
micro.read(add, wait_states)

or

add_clock(......)
add_signal(.....)
add_delay(......)
add_constraint(.....)
add_or_gate(....)
add_and_gate(....)
add_counter(....)
add_clock_jitter(.....)

analyze_clock_domains(.....)
analyze_worst_case_timings(....)
analyze_best_case_timings(....)

read_vcd(....)
vcd_2_timing_diagram(.....)
create_testvectors(.....)
create_testbench(....)


A lot of these functions are built into the program now, so it's a
matter of converting them from Java to Python. I won't have to spend
most of the time getting the user interface to look good and be
friendly. If this is made an open-source project, I would hope that
others would help with the development, and new features and bug fixes
will happen very quickly.

-Dan
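
(To make the idea concrete, here is a toy, self-contained sketch in the
same spirit -- the function names echo Dan's proposed API, but this tiny
dict-of-waveforms ASCII renderer is purely a hypothetical illustration:)

waves = {}

def add_clock(name, period, ticks=16):
    # Square wave: high for half a period, low for the other half.
    waves[name] = [(t // (period // 2)) % 2 for t in range(ticks)]

def add_signal(name, edges, ticks=16):
    # edges: list of (time, value) transitions, starting from level 0.
    level, out = 0, []
    for t in range(ticks):
        for when, value in edges:
            if when == t:
                level = value
        out.append(level)
    waves[name] = out

def show():
    for name, wave in waves.items():
        print("%6s " % name + "".join("-" if v else "_" for v in wave))

add_clock("clk", period=4)
add_signal("req", [(3, 1), (9, 0)])
add_signal("ack", [(5, 1), (11, 0)])
show()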
 
On Jun 20, 10:51 am, chewie <timinganaly...@gmail.com> wrote:
On Jun 20, 1:33 pm, Benjamin Krill <b...@codiert.org> wrote:





Hi Dan,

Developing professional GUIs is very time consuming for me.  This has
been my bottleneck with the program all along.  With a command line
interface,  you will execute a script and in one window,  and view and
edit and print the timing diagram shown in another window.   Like
Matlab interface.

This descriptions sounds somehow like a program I used some time ago. Of
course not with the analyzing options you have implemented in your
TimingAnalyzer tool.

http://drawtiming.sourceforge.net/

cheers
 ben

Hi ben,

I did see that before and that is similar but my approach will be much
more
higher level thus easier.

For example:

micro = m68000()
micro.write(add, data, wait_states)
micro.read(add, wait_states).

or

add_clock(......)
add_signal(.....)
add_delay(......)
add_constraint(.....)
add_or_gate(....)
add_and_gate(....)
add_counter(....)
add_clock_jitter(.....)

analyze_clock_domains(.....)
analyze_worst_case_timings(....)
analyze_best_case_timings.

read_vcd(....)
vcd_2_timing_diagram(.....)
create_testvectors(.....)
create_testbench(....)

A lot of these functions are built into the program now so its a
matter of converting them java to python.  I won't have to spend most
of the time getting the user interface to look good and be friendly.
If this is made an open source project,  I would hope that others
would help with the development and new features and bug fixes will
happen very quickly.

-Dan

I agree. This could grow into quite a useful tool.

- outputlogic

http://outputlogic.com
 
Sorry if this was asked once before ....

AT&T is shutting down Usenet for its residential DSL subscribers.

Is there a web portal to comp.arch.fpga anywhere?

I'd hate to have to pay for third-party Usenet if it can be avoided.

thanks
Bob Smith

news-support@sbcglobal.net wrote:
> Please note that on July 15, 2009, AT&T will no longer be offering
> access to the Usenet netnews service. If you wish to continue reading
> Usenet newsgroups, access is available through third-party vendors.
>
> For further information, please visit http://support.att.net/usenet
>
> Sincerely,
>
> Your AT&T News Team
>
> Distribution: AT&T SBC Global Usenet Netnews Servers
 
