Dimiter_Popoff
On 1/17/2023 3:49, Don Y wrote:
On 1/16/2023 3:56 PM, Dimiter_Popoff wrote:
I am not sure I get it; I have had such an interrupt controller on
the Power cores I have used (the first one on the MPC8240, in the late '90s).
What happens to the core when the IRQ line from the interrupt
controller is asserted? On Power, this would cause the core
to set the interrupt mask bit and go to a specific address,
roughly speaking. Then it is up to the core to clear *its*
mask bit in its MSR, no matter what the interrupt controller
does.
There is one EXTERNAL interrupt signal available. All of the
INTERNAL (SoC) interrupt sources connect to the NVIC
directly. There are dozens of such sources. The NVIC makes it
possible (to some extent) to prioritize how "important"
each source is.
It is the same with the Power SoCs I use. What I wonder about is
how the single IRQ line/mask bit the core has is handled.
All the peripherals on the SoC go into a *peripheral*, called
the IRQ priority encoder or something, and this encoder resolves
priorities, supplies vectors, etc., one way or another.
But it has only *one* IRQ wire to the core to signal an interrupt.
Normal processors take the interrupt and enter the interrupt
handling routine - selected by whatever vector - with the interrupt
*masked*; otherwise the core would have no chance to clear the
interrupt before being interrupted again.
I see two ways out of this to allow interrupt nesting:
- the IRQ input to the core is edge sensitive, and either the
IRQ is not maskable or the core unmasks itself as soon as it takes
the interrupt; the external (to the core) priority encoder would
deliver another edge only if it is of higher priority than the
last one it delivered which has not yet been reset by the core.
- the IRQ input is still level sensitive but the IACK signal from
the core to the priority encoder makes it look edge sensitive (i.e.
the priority encoder negates its IRQ to the core in response to
IACK).
Basically this would mimic 68k behaviour; the 68k core must be
doing it in some similar sort of way itself anyway.
And I suspect that *my* SoCs might do that as well... I have just never
looked into it; the IRQ latency is good enough (around 1 µs worst
case, I think) as it is...