LogiCORE PCI-X issue/question

Mark Schellhorn

Perhaps someone out there has an idea for working around the issue I have...

I want to use the fast (133MHz) version of the PCIX core in a server that boots
with the bus in PCI mode. Once the bus is enumerated, etc. the server resets the
bus and switches it into PCIX mode (Intel SE7501 chipset on an Intel
motherboard). Currently, the fast PCIX core causes the bus to hang when it is
accessed in PCI mode.

I want to make my device appear invisible on the bus until the bus is up and
running in PCIX mode. I know that this is not plug and play friendly, but we
think we can work around it in our driver.

My only thought so far is to internally gate the IDSEL input with PCIX_EN from
the core. Has anyone else dealt with this problem and successfully worked around it?
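For illustration, the gating could be sketched in the core wrapper roughly like this (signal names here are hypothetical and the edit is untested; the actual wrapper port names will differ):

```verilog
// Untested sketch of gating IDSEL inside the core wrapper file.
// PCIX_EN is assumed to be the core's "bus is in PCI-X mode" status;
// IDSEL_PAD and IDSEL_GATED are illustrative names, not real ports.
wire idsel_i;
IBUF idsel_ibuf ( .I (IDSEL_PAD), .O (idsel_i) );   // input buffer from the pad
assign IDSEL_GATED = idsel_i & PCIX_EN;  // core ignores config cycles in PCI mode
```

One caveat with any scheme like this: logic inserted between the IBUF and the core's input flop can keep the flop from packing into the IOB, which puts the IDSEL input setup time at risk.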

Thanks in advance!

Mark Schellhorn

ASIC/FPGA Designer
Seaway Networks http://www.seawaynetworks.com
 
Brannon King wrote:
One would think that for $18k you could get a core that would run both PCI64
at 66Mhz and PCIX at 133MHz depending upon availability.
... snip ...
Consider getting source code.


-- Mike Treseler
 
One would think that for $18k you could get a core that would run both PCI64
at 66 MHz and PCIX at 133 MHz depending upon availability. As it is, though, I
have to recommend using the PCI64 core and running your device at 66 MHz.
Here are three reasons why I did it that way: 1) The core is a pain in the
rear when it comes to passing timing specs, and with a -4 chip the 133 is
impossible; 2) the majority of RAID and multimedia cards run PCI64,
including Intel and Adaptec's own ZCR stuff, and as soon as you put in a ZCR
your whole bus will kick down to 66MHz rendering your board useless; 3) I
have been unable to get one of Xilinx's controllers to successfully hot
swap. Has anyone else? I, like the poster, have been unsuccessful in this.
What's the trick?


"Mark Schellhorn" <mark@seawaynetworks.com> wrote in message
news:twXMb.10066$881.1470800@news20.bellglobal.com...
Perhaps someone out there has an idea for working around the issue I have...
... snip ...
 
Hi Mark,

I hope you don't mind that I've taken the liberty to answer the
questions of several others in this email.

You raise a good question. The behavior you see in your system
is certainly not what is recommended in the PCI-X Addendum in terms
of system initialization. The PCI-X Addendum suggests that a
system evaluate M66EN and PCIXCAP to determine the lowest common
denominator of bus mode, and then use that information to reset
the bus into a mode that is appropriate. Section 9.10, paragraph
four, from PCI-X Addendum 1.0b says:

A PCI-X system provides a circuit for sensing the state of the
PCIXCAP pin (see Section 14).

Perhaps your system does not provide or use this circuit for
sensing the states of M66EN and PCIXCAP. That aside, what your
system is doing should still work, but it requires that your card
support operation in PCI mode (which is actually required for
a compliant design).

With the PCI-X core, you have several implementation options:

* bitstream for PCI, 33 MHz with PCI-X, 66 MHz
* bitstream for PCI, 33 MHz
* bitstream for PCI-X, 66 MHz
* bitstream for PCI-X, 133 MHz

These implementations require different speedgrades. You can
consult the datasheet or implementation guide for details.

The first option is good if you require moderate performance
and you want the simplicity of a single bitstream design.

If you require ultimate performance, some extra steps are
required. You would need to generate a PCI-X 133 MHz bitstream
in addition to a PCI 33 MHz bitstream (this does not require
redesign of the user application, simply a synthesis and place
and route with different options).

Then, you need to perform a run-time reconfiguration whenever
the bus mode changes. The core has an output, RTR, indicating
the wrong bitstream is loaded. One way to implement this is
with a small CPLD and two PROMs. I'm sure there are others.
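One possible shape for that CPLD glue, purely as an untested sketch (the module and pin names are made up here, and real PROG_B pulse-width and timing requirements are not handled):

```verilog
// Untested sketch of a two-PROM run-time reconfiguration controller.
// When the core's RTR output indicates the wrong bitstream is loaded,
// flip the PROM select and pull PROG_B low to restart configuration.
// All names (reconfig_ctrl, prom_sel, prog_b, done) are illustrative.
module reconfig_ctrl (
  input  wire clk,       // free-running CPLD clock
  input  wire rtr,       // from the PCI-X core: wrong bitstream loaded
  input  wire done,      // FPGA DONE pin: configuration complete
  output reg  prom_sel,  // selects which PROM sources the bitstream
  output reg  prog_b     // active-low reconfiguration request to the FPGA
);
  initial begin
    prom_sel = 1'b0;
    prog_b   = 1'b1;
  end

  always @(posedge clk) begin
    if (rtr && done) begin
      prom_sel <= ~prom_sel;  // point at the other bitstream
      prog_b   <= 1'b0;       // request reconfiguration
    end
    else if (!done) begin
      prog_b   <= 1'b1;       // release once the FPGA starts configuring
    end
  end
endmodule
```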

Mike Treseler wrote:

Consider getting source code.

That is considerably more expensive than $18k. If you want to
explore that avenue, you should contact your local FAE.

Brannon King wrote:

One would think that for $18k you could get a core that would
run both PCI64 at 66Mhz and PCIX at 133MHz depending upon
availability.

I think $18k is a great deal for what you get. If you prefer
less expensive:

http://h18000.www1.hp.com/products/servers/technology/pci-x-terms.html

This one is free. However, there are hidden costs in compliance
verification and implementation details. And, you don't get any
support. It may give you the opportunity, though, to pare the
logic down to the bare minimum for your application -- which often
helps performance.

1) The core is a pain in the rear when it comes to passing timing
specs, and with a -4 chip the 133 is impossible

For PCI at 33 MHz with our core, timing is a slam dunk. For PCI-X,
at any frequency, the I/O timing is guaranteed if you use the parts
and speedgrades listed in the datasheet (-4 is not among those...)

The difficulty of the internal PERIOD constraint is a function of the
core AND the user design, so it cannot be guaranteed. I do recognize
that 133 MHz internal operation can be difficult.

2) the majority of RAID and multimedia cards run PCI64, including
Intel and Adaptec's own ZCR stuff, and as soon as you put in a ZCR
your whole bus will kick down to 66MHz rendering your board useless

In order for this to happen, there would have to be two slots on a
133 MHz bus segment. The motherboards I have seen only have one
slot on a 133 MHz bus segment. If you have specific motherboards
that have two slots on a 133 MHz segment, it would be useful to me
to know what brand/part number.

This behavior is more likely to happen if you have a four slot PCI-X
66 MHz bus, and you plug in a mix of PCI-only and PCI-X cards. The
bus does have to run at the lowest common denominator. If one card
doesn't support PCI-X, the bus does have to run in PCI mode. And
if you are using our core, it requires 33 MHz in PCI mode.

3) I have been unable to get one of Xilinx's controllers to
successfully hot swap. Has anyone else? I, like the poster,
have been unsuccessful in this.

You should file a case with Xilinx Support so that someone can
systematically debug the issue with you. Not knowing the failure
mechanism, I can't speculate what might be the issue.

Eric
 
Hi Eric,

Thanks for your response!

RTR might help me out. I'm stuck with using a single 133 MHz PCI-X-only
bitstream, but I should be able to gate IDSELI with RTR to at least make my
device invisible when the bus is in PCI 2.2 mode.

I ran a couple of simulations with the pcix_fast.v core but I'm seeing RTR
_always_ asserted, no matter what initialization pattern I put on the bus at
RST. Any idea what I'm doing wrong? The design and implementation guides for my
build (071) say that I should see this signal asserted when the core detects
that the wrong bitstream is loaded for the bus mode. I am not using the mode
forcing bits in the core's CFG register.

Thanks!

Mark


Eric Crabill wrote:
... snip ...
 
Hi Mark,

RTR might help me out. I'm stuck with using a single 133 MHz
PCI-X-only bitstream, but I should be able to gate IDSELI
with RTR to at least make my device invisible when the bus
is in PCI 2.2 mode.

From a simulation point of view, this will work. However, you
will find a problem in the implementation. If you insert any
logic after the IBUF for IDSEL in the wrapper file, you will
prevent the IFD packing into the IOB. What you'll see when you
run PAR is that you won't be able to meet the input setup specs
for IDSEL.

I haven't tried this, but I think a workable solution for your
case would be to gate the LCK signal in the wrapper file. This
would, I believe, hold the core and the user logic (but not the
bus mode detection logic) in reset. I apologize for offering
a solution that I haven't tested, and I hope it won't take more
than a few minutes of your time to try out... If it doesn't
work, contact me privately and we can look at other options.
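As an untested sketch (the actual wrapper signal names may well differ), the gating could be a single line in the wrapper file:

```verilog
// Untested sketch: deassert LCK while RTR indicates the wrong
// bitstream is loaded, holding the core and the user logic (but
// not the bus mode detection logic) in reset until the bus comes
// up in PCI-X mode. LCK, RTR, and LCK_GATED names are assumptions.
assign LCK_GATED = LCK & ~RTR;
```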

I ran a couple of simulations with the pcix_fast.v core but I'm
seeing RTR _always_ asserted, no matter what initialization
pattern I put on the bus at RST. Any idea what I'm doing wrong?
The design and implementation guides for my build (071) say that
I should see this signal asserted when the core detects that the
wrong bitstream is loaded for the bus mode. I am not using the mode
forcing bits in the core's CFG register.

You do need to set the mode forcing bits in the cfg file. Let me
describe what I did to see the operation of RTR. I'm using the
"example design" in build_071, as delivered, with these changes:

1. In file test_tb.v, changed cfg_test_s.v to cfg_test_x.v
(this sets the mode forcing bits to PCI-X mode only...)
2. In file test_tb.v, changed pcix_core.v to pcix_fast.v
(this changes to the other netlist...)
3. In file test_tb.v, changed ../../src/xpci/pcix-lc.v to
../../src/wrap/pcix_lc_64xf.v (changed to correct wrapper
file...)
4. Set the library search paths to point to appropriate location.

When you run the simulation, the initial bus mode pattern is for
64-bit PCI during RST# assertion. You will see RTR is asserted.
However, you shouldn't act on RTR while the user RST signal is
asserted, because you are basically watching the output of a
transparent latch. If there are transient conditions on the bus
mode pattern, you may see transient RTR behavior. After RST on
the user side is deasserted, the latch closes and RTR will be
stable with the correct information. You should then act on RTR.
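In the user logic, that qualification might be sketched as follows (the clk and rst names are assumptions; this is not from the core deliverables):

```verilog
// Sketch: ignore RTR while the user-side RST is asserted, since it
// then behaves like the output of a transparent latch and can show
// transient values; sample it only after RST deasserts.
reg rtr_valid;
always @(posedge clk) begin
  if (rst)
    rtr_valid <= 1'b0;  // discard transient RTR during reset
  else
    rtr_valid <= rtr;   // stable "wrong bitstream loaded" indication
end
```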

The testbench sees that RTR is asserted, and skips the PCI mode
tests. It then resets the bus again, in 64-bit PCI-X during
RST# assertion. After this completes, you will see that RTR is
deasserted, as expected.

A side note, we are working to improve the RTR mechanism so that
it won't assert spuriously during reset.

Eric
 
Eric,

From a simulation point of view, this will work. However, you
will find a problem in the implementation. If you insert any
logic after the IBUF for IDSEL in the wrapper file, you will
prevent the IFD packing into the IOB. What you'll see when you
run PAR is that you won't be able to meet the input setup specs
for IDSEL.

My IDSEL setup time grew to 1.9 ns, which would probably be fine because of
address stepping but would still be a non-compliance.

I haven't tried this, but I think a workable solution for your
case would be to gate the LCK signal in the wrapper file. This
would, I believe, hold the core and the user logic (but not the
bus mode detection logic) in reset. I apologize for offering
a solution that I haven't tested, and I hope it won't take more
than a few minutes of your time to try out... If it doesn't
work, contact me privately and we can look at other options.

Works. Sweet. We're looking at adding a second PROM in the clean-up spin, but if
our target server BIOSes are tolerant of the bus population changing when the
mode switches, it might not be necessary.

I haven't actually tried the fix in the box yet but the Intel server/BIOS that
we are testing with seems to enumerate the bus periodically. I suspect it is the
hot-plug controller in the chipset. Hopefully it will accept the card magically
appearing following the mode switch.

You do need to set the mode forcing bits in the cfg file. Let me
describe what I did to see the operation of RTR. I'm using the
... snip....
tests. It then resets the bus again, in 64-bit PCI-X during
RST# assertion. After this completes, you will see that RTR is
deasserted, as expected.

I managed to get RTR working after playing around with my testbench reset
timing. I actually see RTR working as I would expect regardless of whether or
not I force the mode bit.

Thanks!

Mark
 
