


 Designing a simple interrupt controller 

Joined: Sat Jun 16, 2018 2:51 am
Posts: 41
I'm attempting to design a simple priority interrupt controller. The goal is to support many external interrupts. The design I have so far looks like this:

Attachment:
simpleInterruptController.png

The "controller" supports 7 external interrupts. These pins can be connected to the miscellaneous peripherals that wish to use interrupts to communicate with the CPU. Before reaching the CPU, the interrupt signals are conditioned as discussed in this post.[1]

Since more than one signal can arrive at a time, a priority encoder is used to tell the CPU which one should be serviced first. When the incoming interrupt number (received by the CPU from the encoder) is greater than zero, the CPU enters the interrupt servicing cycle:

  • The CPU first disables interrupts (by setting EI to low). This "saves" the current interrupt number
  • The CPU then retrieves the respective ISR address by using the interrupt number to index into an IDT somewhere in memory
  • The CPU then executes the ISR
  • When done, (on RETI instruction), the CPU sends an interrupt acknowledge signal to the device associated with the just-serviced interrupt. The IACK signal is held high for a few clock cycles[2]
  • IACK is then lowered, and interrupts are enabled (EI is set to high)
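The steps above can be sketched as a small C model (all names are invented; this just stands in for what the hardware would do, to make the sequence concrete):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical model of the servicing cycle. The encoder outputs 0
   when nothing is pending, so valid interrupt numbers are 1..7. */

typedef void (*isr_t)(void);

static bool ei = true;            /* EI: interrupts enabled        */
static isr_t isr_table[8];        /* the IDT, indexed by number    */
static uint8_t last_serviced;     /* for observing the model       */

void service_interrupt(uint8_t irq_num)
{
    if (irq_num == 0 || !ei)
        return;                   /* nothing pending, or masked    */
    ei = false;                   /* step 1: disable interrupts    */
    if (isr_table[irq_num])       /* step 2: look up ISR in IDT    */
        isr_table[irq_num]();     /* step 3: execute the ISR       */
    last_serviced = irq_num;
    /* step 4: IACK would be raised here and held for two clocks   */
    ei = true;                    /* step 5: drop IACK, enable     */
}
```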

What are your thoughts? Is the circuit likely to perform as intended? Are there any gotchas I am missing?

The protocol assumes that each peripheral device will hold its respective INT signal high until it receives an INTA pulse that tells it the requested interrupt has been serviced. Are there any flaws with this assumption?

[1]: Can I do without the schmitt trigger if the interrupt signals sent by peripherals are expected to be digital and of the logic level expected by the CPU?
[2]: I am thinking two clock cycles will be enough for the pulse to be detected by a peripheral device. What do you think?


Fri Jun 14, 2019 3:05 am

Joined: Sat Feb 02, 2013 9:40 am
Posts: 901
Location: Canada
It looks good. I’m not sure that the two DFFs are needed though; I would think one would suffice. The CPU could be used to “debounce” the interrupt inputs. Prioritizing the interrupts could also be done by the CPU, so an extra prioritizer wouldn’t be needed. Just an OR gate to indicate an interrupt is present, plus some buffers connected to the data bus, might be all that’s needed.

Is this for a retro-style computer built out of discrete components, or is it using PLDs of some sort? If it were an 80xx-style micro I’d suggest using an 8259, which is expandable to more interrupt levels. Although it outputs an INT or CALL instruction, there may be a way for the micro to ignore that. The 8259 can rotate priorities, and it can mask individual interrupts on or off. There are a couple of 8259 cores available that could be modified and programmed into a PLD.

Is there a need for edge sensitive or level sensitive interrupts?
Quote:
The protocol assumes that each peripheral device will hold its respective INT signal high until it receives an INTA pulse that tells it the requested interrupt has been serviced.

Depends on devices attached. There might be some devices which just output a pulse (timer) and continue on.

_________________
Robert Finch http://www.finitron.ca


Sat Jun 15, 2019 3:30 am

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1201
Thanks for sharing your design, quadrant!

Indeed, I think the Schmitt input should only be needed when the source device is likely to output a slow or wavering signal. If it's digital, with a suitably fast edge relative to the sampling clock, that should do.

I think
- the CPU should be told when there is an interrupt, which means a ninth wire which is an AND (or OR) of the interrupt signals, which can be a single interrupt input to the CPU.
- and then you can bring the 8th code into play: the CPU always sees a three bit number which is the ID of the interrupting device

I'd be a bit worried about
- the open-loop timing of the IACK - wouldn't it be better for the device to respond and for the IACK to be deasserted at that time? Or, IACK should be a single cycle, and the device will need to register it suitably in its controlling state machine. Two cycles seems to be neither one thing nor the other.
- nested, overlapping, or simultaneous interrupts. Make sure every interrupting source will be serviced exactly once, in all cases.


Sat Jun 15, 2019 6:19 am

Joined: Sat Jun 16, 2018 2:51 am
Posts: 41
Thanks for the replies!

robfinch wrote:
Is this for a retro-style computer built out of discrete components or is it using PLD’s of some sort?
I am building the CPU using an FPGA. However the code is written with all components being built up from NAND gates (i.e. it is not "behavioral" code).

robfinch wrote:
Prioritizing the interrupts could be done by the cpu. That way an extra prioritizer wouldn’t be needed. Just an ‘or’ gate to indicate an interrupt is present and some buffers connected to the databus might be all that’s needed.
Not included in the diagram is a 3-input OR gate inside the CPU that takes the encoded interrupt number and outputs 1 if a non-zero interrupt is present, indicating that an interrupt is awaiting service. The CPU would then launch the ISR associated with the interrupt number. I'm not sure where the extra prioritizer comes into play? I'm thinking the priority encoder passes on the most important interrupt to the CPU... and the CPU processes the request...

robfinch wrote:
Is there a need for edge sensitive or level sensitive interrupts?
I am not sure. What are the benefits of edge- versus level-sensitive interrupts? Is this how interrupts are commonly detected in CPUs such as the Z80 or 6502, or microcontrollers like the AVR and PIC? I've stared at many of their timing diagrams but still find it hard to decipher their interrupt protocols.

BigEd wrote:
the CPU should be told when there is an interrupt, which means a ninth wire which is an AND (or OR) of the interrupt signals
Ah, I see. Thanks for the advice - I was sad about losing IRQ0!

BigEd wrote:
the open-loop timing of the IACK - wouldn't it be better for the device to respond and for the IACK to be deasserted at that time?
How would the device respond to the IACK? Is yet another wire required "ACK_of_the_IACK"? Or can IACK be driven by both the CPU and device?

BigEd wrote:
Or, IACK should be a single cycle, and the device will need to register it suitably in its controlling state machine. Two cycles seems to be neither one thing nor the other.
What happens if the device misses the IACK pulse? It seems there is potential for the same interrupt to be serviced over and over again if the device misses the IACK pulse and thus does not deassert its IRQ... Two is an arbitrary number, chosen in an attempt to avoid this situation...

BigEd wrote:
nested, overlapping, or simultaneous interrupts. Make sure every interrupting source will be serviced exactly once, in all cases
This is what I hope to achieve by disabling interrupts while one is being serviced, and enabling them only when an IACK has been "deemed" to be acknowledged by the requesting device. Is there something I am missing?

My inexperience working with microprocessors shines through here. I have no idea what the typical protocol (with respect to timing diagrams) is between a processor and a peripheral that wants to use interrupts to communicate with it. The IACK I've come up with is definitely flawed; any insights on how to improve it would be much appreciated.


Tue Jun 18, 2019 7:12 pm

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1201
The usual system on the 6502 is this: any device which can signal an IRQ will also have a status register. The 6502 takes the interrupt, and the entering of the Interrupt Service Routine automatically masks further IRQs. The ISR will then check the status register of devices which might have raised IRQ, and either that status register read or an explicit command register write will cause the device to un-signal the IRQ. Either within the ISR, or on exit, interrupts will be re-enabled, and a subsequent interrupt or another outstanding interrupt would be able to cause a new interrupt.

So, the closed loop involves the device asserting IRQ until it is acknowledged by action of 6502 code, and involves masking of interrupts within (at least the initial part of) the ISR.
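That closed loop might be sketched in C like so (the device struct and flag names are invented; on a real 6502 peripheral the status-register *read* itself un-signals the IRQ, which the model fakes with an explicit write):

```c
#include <stdint.h>

/* Hypothetical memory-mapped devices: bit 7 of the status register
   set means "this device is requesting the IRQ". */
typedef struct {
    volatile uint8_t status;
    volatile uint8_t data;
} device_t;

#define IRQ_PENDING 0x80

/* Polls each possible IRQ source in priority order; touching the
   status register acknowledges and clears the request. Returns the
   index of the device serviced, or -1 if the IRQ was spurious. */
int irq_handler(device_t *devs, int ndevs)
{
    for (int i = 0; i < ndevs; i++) {
        if (devs[i].status & IRQ_PENDING) {
            devs[i].status = 0;   /* modelled side effect of the read */
            /* ...handle devs[i].data here... */
            return i;
        }
    }
    return -1;  /* no device claims it: a spurious interrupt */
}
```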

On the 6502, NMI works rather differently. The 6502 is sensitive to active-going edges of the NMI. Normally there can only be one source of NMIs, so the ISR needs only to deal with the currently activated source device. The device could technically raise a second NMI just two cycles after the first, but that could not practically cause two invocations of the ISR. Software and hardware are arranged so that the ISR can deal with the NMI in time, such that the first NMI will be 'done' by the time the second one is signalled - it need not have returned, but it must be ready for a new invocation. On the BBC Micro, the floppy disk controller raises an NMI for each incoming byte, or to accept each outgoing byte, and that allows 32 microseconds between bytes. I think. See here for the limits of what's achievable. In this case, which is open-loop timing, I'd expect the peripheral to offer a status register which can report overrun or underrun - unhandled events.


Tue Jun 18, 2019 8:45 pm

Joined: Tue Dec 11, 2012 8:03 am
Posts: 265
Location: California
I don't have any experience with the processors or buses that issue an interrupt-acknowledge signal, but I (the interrupts junkie! :D ) would make the following related comment.

Any given IC that can generate interrupts, usually regarding I/O or timers, will usually have several possible interrupt sources within the one IC. If you have more than one of those sources enabled, you will have to poll the IC, even if you have interrupt-prioritizing hardware, to see which source(s) caused the interrupt. They might be, for example, a timer roll-over, an interrupt-on-change pin, receipt of a byte on a serial port, etc. As Ed said, the interrupt will have to be turned off in software, whether by reading the status or another register, or by writing to a register.

If it's possible to turn off the interrupt with an IACK signal, you would have to be careful about a second interrupt source being generated within the same IC while the first one is still pending; in that situation the processor might never become aware that there's another interrupt needing service.

I have a 6502-oriented article on how interrupts work and how to use them and what to watch out for, at http://wilsonminesco.com/6502interrupts/ .

_________________
http://WilsonMinesCo.com/ lots of 6502 resources


Tue Jun 18, 2019 10:04 pm

Joined: Sat Jun 16, 2018 2:51 am
Posts: 41
BigEd wrote:
The usual system on the 6502 is this: any device which can signal an IRQ will also have a status register... The ISR will then check the status register of devices which might have raised IRQ, and either that status register read or an explicit command register write will cause the device to un-signal the IRQ.
With this in mind, I took a closer look at the codebase of the xv6 operating system, and found the following code in the trap handler...

Code:
// Timer interrupt
case T_IRQ0 + IRQ_TIMER:
   ...
   lapiceoi();
   break;

// Disk interrupt
case T_IRQ0 + IRQ_IDE:
   ...
   lapiceoi();
   break;

// Keyboard interrupt
case T_IRQ0 + IRQ_KBD:
   ...
   lapiceoi();
   break;

// Serial interrupt
case T_IRQ0 + IRQ_COM1:
   ...
   lapiceoi();
   break;

After each interrupt is handled, an EOI is sent to the LAPIC! And from the OSDev Wiki:
Quote:
EOI Register
Write to the register with offset 0xB0 using the value 0 to signal an end of interrupt.
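So the EOI itself is just one store. Here is a sketch of what I think `lapiceoi()` boils down to (the `lapic` base pointer and the 32-bit register width are my assumptions; 0xB0 is the offset from the quote above):

```c
#include <stdint.h>

#define LAPIC_EOI 0x00B0   /* byte offset of the EOI register */

/* lapic points at the memory-mapped local APIC; writing 0 to the
   EOI register signals end-of-interrupt. Registers are 32 bits
   wide, hence the /4 to convert a byte offset to a word index. */
static void lapic_eoi(volatile uint32_t *lapic)
{
    lapic[LAPIC_EOI / 4] = 0;
}
```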

Thank you for the very clear explanation on the difference between open vs closed-loop interrupt handling.


--------------------------------

What does updating the status register look like from the perspective of the device? Does the device, after issuing the interrupt, spin in a loop checking whether the contents of the register have changed (and now indicate that its requested interrupt has been serviced)?

Garth wrote:
Any given IC that can generate interrupts, usually regarding I/O or timers, will usually have several possible interrupt sources within the one IC.
If the device can issue multiple requests, what alternative to polling does it have for determining if a request it has sent has been serviced? The only thing I can think of is having separate threads that spin-wait for each individual interrupt the device issues... but that seems too expensive/complicated to be an approach of choice...

Can the status register be evaluated using (non-complex) dedicated circuitry instead of code (maybe an FSM of some kind)? Or is code the way to go (which would require the device to have an embedded microcontroller)... What is the typical/simple approach?


--------------------------------

Garth wrote:
I have a 6502-oriented article on how interrupts work and how to use them and what to watch out for
Thanks! It's on my current reading list. I've been slowly making my way through it, there's quite a bit to take in.


Fri Jun 21, 2019 4:02 am

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1201
Nice idea to look into a device driver! You could surely learn a lot by looking at different ones.

I would expect the peripheral device to be hardware only - although it needn't be - and so yes, somewhere in there would be a state machine, managing the internal interrupt sources, the potential masking of those, the outgoing interrupt line, the various bits of the status register, the updates by the micro to the command register.

A read of the 6522 datasheet might be useful, as it's complex on the inside but still hardcoded. Every reaction can be in one clock cycle (maybe two). The 6551 ACIA too, as it has overrun as a possibility.

A very complex peripheral like a floppy disk controller (8271, 1770) or Advanced CRTC might be worth looking into - the ACRTC surely has an internal microcontroller, but even so, the interrupt handling would probably be handled by a state machine as it has to respond cycle by cycle.
Quote:
The ACRTC recognizes eight separate conditions which can generate an interrupt including command error detection, command end, drawing edge detection, light pen strobe and four FIFO status conditions. Each condition has an associated mask bit for enabling/disabling the associated interrupt. The ACRTC removes the interrupt request when the MPU performs appropriate interrupt service by reading or writing to the ACRTC.


Fri Jun 21, 2019 5:11 am

Joined: Sat Jun 16, 2018 2:51 am
Posts: 41
Here is the revised architecture based on the suggestions:

Attachment:
simpleInterruptController_v2.png


--- Left side ---

On the left side is the circuitry needed to support the IN and OUT instructions that the CPU and device will use to communicate with each other. With regards to handling interrupts, it will be used by the CPU to send the (device-specific) EOI command to a device.

The left side consists of:
  • A decoder - converts the IO address ("port number") specified in an IN/OUT instruction to a chip select (CS) signal. That is, it is used to select the device specified in the IN/OUT instruction.
  • IO databus - allows the CPU and devices to exchange data
  • R/W - R/W output from the CPU goes to each device's R/W input. This allows the CPU to specify whether it wants to read data from the device (IN instruction) or write data to the device (OUT instruction)


--- Right side ---

On the right side is circuitry exclusive to interrupt handling.

Interrupt requests from the various devices first hit the DFF (register). The output of the DFF represents all the requests currently waiting to be serviced.

If interrupts are disabled, the DFF "freezes" its current state. That is, the interrupt requests that the CPU sees as pending will stay the same until it enables interrupts again (at which point the DFF will update and reflect the current state of requests).

The OR gate uses the output of the DFF to tell the CPU when there is an interrupt waiting to be serviced.

The priority encoder uses the output of the DFF to tell the CPU the number of the highest priority interrupt waiting to be serviced.
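In C-model terms, the right side behaves like this (one function per clock edge standing in for the DFF; the encoder gives the highest-numbered request priority, which is an assumption on my part about the priority order):

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint8_t latched;   /* DFF outputs: pending requests, bits 1..7 */
} intc_t;

/* One clock edge: the DFF only samples the request lines while
   interrupts are enabled (EI high); otherwise it holds its state. */
void intc_clock(intc_t *c, uint8_t irq_lines, bool ei)
{
    if (ei)
        c->latched = irq_lines;
}

/* The OR gate: is any request pending? */
bool intc_pending(const intc_t *c) { return c->latched != 0; }

/* The priority encoder: highest-numbered request wins (0 = none). */
uint8_t intc_highest(const intc_t *c)
{
    for (int i = 7; i >= 1; i--)
        if (c->latched & (1u << i))
            return (uint8_t)i;
    return 0;
}
```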


--- Revised protocol ---

When the CPU detects that an interrupt request is pending:

  • The CPU first disables interrupts (by setting EI to low). This "saves" the number of the highest priority interrupt.
  • The CPU then retrieves the respective ISR address by using the interrupt number to index into an IDT somewhere in memory
  • The CPU then executes the ISR
  • An EOI command is expected to be issued by the ISR ("OUT portNumber EOICommand") before it issues the RETI instruction
  • On RETI instruction, the CPU enables interrupts (EI is set to high)

The protocol assumes that each peripheral device will hold its respective INT signal high until it receives an EOI command.


---

Any thoughts on the revised protocol and circuit? I think I got most of the points suggested...

Thanks! :)


Mon Jun 24, 2019 7:52 pm

Joined: Tue Dec 11, 2012 8:03 am
Posts: 265
Location: California
No one else has replied yet, so I'll jump in again.

Quote:
If interrupts are disabled, the DFF "freezes" its current state. That is, the interrupt requests that the CPU sees as pending will stay the same until it enables interrupts again (at which point the DFF will update and reflect the current state of requests).

The OR gate uses the output of the DFF to tell the CPU when there is an interrupt waiting to be serviced.

The priority encoder uses the output of the DFF to tell the CPU the number of the highest priority interrupt waiting to be serviced.

Is the "freezing" necessary? It seems like it would not cause any trouble if the state changes while interrupts are disabled, and in fact there may be an advantage in that propagation delays may be avoided. More on that in a minute. You do obviously want the output to remain stable in the single cycle in which the processor reads it though.

You imply that you want to design your own processor too. Otherwise:
Quote:
  • The CPU first disables interrupts (by setting EI to low). This "saves" the number of the highest priority interrupt.

It is normal for processors, at least the few I've worked with, to automatically disable interrupt reception when they do their interrupt sequence. There may be various ways to detect that and to lock in the interrupt number. Even with lots of interrupts going, you won't often get more than one interrupt firing in the very same instruction; so you could probably delete any need for an EI signal.

Rather than IN and OUT instructions, the only processors I've used have memory-mapped I/O. On the 65xx processors, this has the particular advantage of being able to use many different available addressing modes, including indirect-indexed and indexed-indirect, making the code more efficient. For example, you could have a single routine to handle all four UARTs, and RAM registers or stack locations cause it to read and write to the particular UART you want at the moment, even if you interrupt the routine for another UART to be serviced for more-urgent work. Stacked addresses or register numbers used by the routine automatically keep everything straight.
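For instance, a single hypothetical routine could service whichever UART you index it with (the register layout and names here are invented for illustration, not any real UART's):

```c
#include <stdint.h>

/* Four identical memory-mapped UARTs; one routine handles any of
   them, so indexing replaces four separate copies of the code. */
typedef struct {
    volatile uint8_t status;
    volatile uint8_t rxdata;
} uart_t;

#define RX_READY 0x01

/* Returns the received byte, or -1 if this UART has nothing. */
int uart_service(uart_t *uarts, int which)
{
    if (uarts[which].status & RX_READY) {
        uarts[which].status &= (uint8_t)~RX_READY;
        return uarts[which].rxdata;
    }
    return -1;
}
```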

Quote:
  • The CPU then retrieves the respective ISR address by using the interrupt number to index into an IDT somewhere in memory

If you can write to the controller, can you write the relevant ISRs' actual addresses to it, so you don't need an extra step to look them up? In fact, if that part of the controller could be memory-mapped to the interrupt vector address, that would be the best-performing, as the processor would read the vector from the controller directly.

Quote:
  • On RETI instruction, the CPU enables interrupts (EI is set to high)

This is pretty normal too, as the processor just restores the previous status, which included that interrupts were allowed (otherwise you wouldn't have gotten into the ISR in the first place, unless it was a non-maskable interrupt).

Quote:
The protocol assumes that each peripheral device will hold its respective INT signal high until it receives an EOI command.

I have never seen an IC that puts out an active-high interrupt signal. They've all been active-low. Often (usually?) they are also open-drain (or open-collector), so they can be wire-OR'ed, meaning you can connect two or more IRQ\ outputs to the same line, pulled up passively by a resistor.

This implies that there's a considerable time constant to pull it up, equal to the capacitance on that line times the pull-up resistance; and with all but the slowest clock speeds, you'll need to turn off the source of the interrupt in the IC possibly many instructions before returning from the ISR, in order to give the line time to float up so it doesn't look like the IC is still requesting interrupt (whether from the same or a different source within the IC) when you exit the ISR.

How to deal with these "ghost" interrupts is something to consider. If another source of interrupt within the same IC requests interrupt before you're done servicing the first one, you might choose to let that one wait, which might be fine. It's something to think about too, though.

I've thought a lot about such an interrupt priority encoder or controller over the years, but keep coming back to the fact that there are certain things such a circuit cannot overcome. If each I/O IC has many possible sources of interrupts within it, and these may be of vastly different priority levels, and the IC only has one IRQ\ output pin, you'll still have to poll to see what caused the interrupt, and sometimes figure out what you want to service first. You might get done servicing one interrupt within the IC, and before the RTI you see if it has additional interrupts pending (which might have been generated while you were servicing the first one); but now what do you do if another IC has a higher-priority interrupt pending? You would at least want the processor to be able to get an updated read from the controller's list status, and you'd want that to be able to change while you're servicing an interrupt.

Others have mentioned the 65c22 VIA (versatile interface adapter). It has seven interrupt sources in it, but only one IRQ\ output pin. The interrupts are:
  • T1 (timer 1) time-out
  • T2 time-out
  • CB1 active edge
  • CB2 active edge
  • the shift register (synchronous-serial port) has completed shifting a byte in or out
  • CA1 active edge
  • CA2 active edge
As you can see, they're all unrelated, and their applications might be too. Fortunately, it's rare that you would have many of them enabled at once. (The IC has an interrupt-enable register (IER) too, where you can tell it which ones you want enabled.)

So although I don't consider the controller a dead end, neither would I get my hopes up too high about it solving all the possible problems.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources


Thu Jun 27, 2019 12:33 am

Joined: Sat Feb 02, 2013 9:40 am
Posts: 901
Location: Canada
Quote:
I have never seen an IC that puts out an active-high interrupt signal. They've all been active-low.
It’s fairly common to use entirely active-high logic in systems-on-a-chip internal to ICs. There’s no reason to use active-low logic unless going outside of the chip to be compatible with something else. Depending on how things are implemented, it may be desirable to use active-high signals.

As Garth says, there is an unavoidable need to poll I/O devices when there’s an interrupt, and determining the cause can be quite complex. An interrupt controller I developed outputs an eight-bit cause code, which goes to a port on the CPU along with a single interrupt line. The cause code is passed to the CPU in a register, so there’s only a single interrupt routine, basically one vector. The software looks at the cause code to determine what to do. Interrupt causes also include things like divide-by-zero. Because the CPU is much faster than I/O or memory, it can determine how to proceed via software in a way that might not have been possible many years ago. Software can handle things like altering priorities. It may be much faster for the CPU to evaluate an if/else tree than to look up vectors from memory.

From the ISR for a project I'm working on r22 contains the cause code (this section has no memory loads):
Code:
      xor      r1,r22,#TS_IRQ
      beq      r1,r0,.ts
      xor      r1,r22,#GC_IRQ
      beq      r1,r0,.lvl6
      xor      r1,r22,#KBD_IRQ
      beq      r1,r0,.kbd
      beqi   $r1,#FLT_CHK,.iChk
      xor      r1,r22,#FLT_CS
      beq      r1,r0,.ldcsFlt
      xor      r1,r22,#FLT_RET
      beq      r1,r0,.retFlt
      xor      r1,r22,#240            ; OS system call
      beq      r1,r0,.callOS
      beq      r22,#FLT_CMT,.cmt
      beq      r22,#FMTK_SYSCALL,.lvl6
      beq      r22,#FMTK_SCHEDULE,.ts2
      beq      r22,#FLT_SSM,ssm_irq
;      beq      r22,#FLT_ALN,aln_irq
      jmp      _return_from_interrupt

Sure, there's no vectoring, but it's not needed. It can take 20 clock cycles just to load a vector.

Depending on how complex a system is being implemented, it might be worth looking at some platform-level interrupt controllers to see how they work. They deal with multiple bus masters and a variety of interrupt sources. Seeing how really complex interrupt processing is performed might help.

_________________
Robert Finch http://www.finitron.ca


Thu Jun 27, 2019 6:22 am

Joined: Wed Jan 09, 2013 6:54 pm
Posts: 1201
It might be fruitful to look at prior art: whereas an interrupt controller is rare enough on a 6502 system, in the land of z80 and 8080 it was perhaps more common. By the time of the IBM PC, we find
https://en.wikipedia.org/wiki/Intel_8259
(which provides 8 sources and can be cascaded to provide 64)

Quote:
There are three registers, an Interrupt Mask Register (IMR), an Interrupt Request Register (IRR), and an In-Service Register (ISR). The IRR maintains a mask of the current interrupts that are pending acknowledgement, the ISR maintains a mask of the interrupts that are pending an EOI, and the IMR maintains a mask of interrupts that should not be sent an acknowledgement.

End Of Interrupt (EOI) operations support specific EOI, non-specific EOI, and auto-EOI. A specific EOI specifies the IRQ level it is acknowledging in the ISR. A non-specific EOI resets the IRQ level in the ISR. Auto-EOI resets the IRQ level in the ISR immediately after the interrupt is acknowledged.

Edge and level interrupt trigger modes are supported by the 8259A. Fixed priority and rotating priority modes are supported.

The 8259 may be configured to work with an 8080/8085 or an 8086/8088. On the 8086/8088, the interrupt controller will provide an interrupt number on the data bus when an interrupt occurs. The interrupt cycle of the 8080/8085 will issue three bytes on the data bus (corresponding to a CALL instruction in the 8080/8085 instruction set).

The 8259A provides additional functionality compared to the 8259 (in particular buffered mode and level-triggered mode) and is upward compatible with it.


The z80 at least has several interrupt modes: the interrupting device can supply a vector, or the low byte of the vector, so that the ISR has the minimum amount of work to do. This seems to me quite a good idea for an I/O-heavy system.
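The mode-2 address formation is simple: high byte from the CPU's I register, low byte from the device. A one-line sketch:

```c
#include <stdint.h>

/* Z80 interrupt mode 2: the I register supplies the high byte and
   the interrupting device supplies the low byte; the 16-bit result
   is the address of the table entry holding the ISR address (the
   low byte is conventionally even, since entries are two bytes). */
uint16_t im2_vector_address(uint8_t i_reg, uint8_t device_byte)
{
    return (uint16_t)((i_reg << 8) | device_byte);
}
```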


Thu Jun 27, 2019 7:17 am

Joined: Tue Dec 11, 2012 8:03 am
Posts: 265
Location: California
Rob wrote,
Quote:
It’s fairly common to use entirely active-high logic in systems-on-a-chip internal to IC’s. There’s no reason to use active low logic, unless going outside of the chip to be compatible with something else.

Sure; if it's all on-chip, do whatever you want.

Rob's post gives me the following idea though, which isn't hardware, and we could start another topic if anyone feels there's any discussion to be had; but self-modifying code (SMC), or code that writes the beginning of the ISR, might also be worth considering, so the highest-priority active sources get polled first. I recently posted a 65xx-oriented article on SMC. I'm not as excited about it as I have been about other features of my site, but I'm hoping readers will get interested and send me more material to improve the article. (I'll give credit where credit is due, of course.) Routines that install, delete, and prioritize ISRs could modify the master ISR so it's always optimum.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources


Thu Jun 27, 2019 8:16 am

Joined: Sat Jun 16, 2018 2:51 am
Posts: 41
Garth wrote:
Is the "freezing" necessary? It seems like it would not cause any trouble if the state changes while interrupts are disabled, and in fact there may be an advantage in that propagation delays may be avoided.
Ah, noted!

Garth wrote:
You imply that you want to design your own processor too.
Indeed, I'm designing the CPU.

Garth wrote:
If you can write to the controller, can you write the relevant ISRs' actual addresses to it, so you don't need an extra step to look them up?
I see, will look into this. Thanks for the tip.

robfinch wrote:
The cause code is passed to the cpu in a register. So, there’s only a single interrupt routine basically one vector.
I see. This reminds me of the xv6 approach where all the interrupts are handled by a single interrupt handler, and the "interrupt number" is used to distinguish one from the other (1, 2).

robfinch wrote:
Depending on how complex a system is being implemented it might be worth looking at some platform-level interrupts controllers to see how they work. They deal with multiple bus masters and a variety of interrupt sources.
For the moment, I am going for a very bare-bones interrupt controller, just something that can detect when more than one interrupt request is present and act accordingly.

BigEd wrote:
It might be fruitful to look at prior art...
I have been attempting to read the 8259's datasheet, but most of it is over my head at the moment.

---

Thank you all for the help. :D


Sat Jun 29, 2019 4:16 am