Interrupts
Dave Higton (1515) 3497 posts |
How long can we disable interrupts without causing a problem? How long can an interrupt service routine take? The reason I’m asking is that there is a critical period of a bit over 100 microseconds in the Raspberry Pi IIC code that should have interrupts disabled, otherwise the feature of beginning another transfer without generating a stop bit could fail. I would need to disable interrupts for that period. The window during which interrupts would cause a malfunction is short – much shorter than 100 microseconds – so the chances of causing a malfunction are small, but I would like the chance to be zero, and that requires disabling interrupts from before the last byte is posted into the FIFO, until after the new transfer is under way. |
Theo Markettos (89) 919 posts |
I don’t know the answer to your specific question, but can’t you avoid the problem by not dealing with interrupts received during that period? Turning off interrupts to ensure atomic operation is a fairly old-fashioned way of doing things – it’s very unpleasant on a machine with any kind of concurrency (preemption, multicore etc). Is the need for a lack of interrupts because the ISR isn’t written with re-entrancy in mind, or because the hardware will go wrong somehow if an interrupt is received? If it’s just that the ISR will conflict, how about queuing the interrupt for dealing with when you’re out of the critical period? |
Dave Higton (1515) 3497 posts |
If the CPU is running something other than the IIC code when a linked transfer without stop bit should start, the controller will generate a stop bit after the first transfer. That would be different from what was requested. The controller gives no explicit way to prevent a stop bit at the end of a transfer, but the hardware happens to work that way if you initiate another transfer before the last byte of the present transfer has been transmitted. AIUI, the only thing that can cause the problem is running an ISR, and the only way to prevent the problem is to disable interrupts for that critical period. I’ve always assumed that an interrupt that occurs while interrupts are disabled will be queued for later processing. Now you make me think about it, I’m no longer sure that’s the case. If you’ve got a better solution, I’m very happy to learn it! |
Rick Murray (539) 13806 posts |
I guess the answer here is not to use linked transfers? Is this not a hardware bug? Alternatively – IIC works on two GPIOs doesn’t it? Perhaps you could consider if it is viable to bit-bash your own IIC in place of the hardware one? Neither solution is attractive, but…
I wouldn’t say “queued”, I’d say “not handled”. I think whether or not it can be safely handled later depends on context – for example it might screw up serial or Ethernet comms (that might expect a turn-around within time) but the keyboard and mouse are less likely to be affected. I think the best approach, to be honest, is to set up a machine with the ISR disable code and some test code to do these long transfers every few seconds, then ride the system hard and see if it fails. I remember high res modes on my A3000. There wasn’t enough oomph so when reading floppy, the screen went black (I guess – as floppy is highly time sensitive – it might have disabled interrupts too). Are we looking at something that maybe happens once an hour, or are these long IIC transfers frequent? |
Colin (478) 2433 posts |
Regarding the stop bit happening at the wrong time, i.e. when DLEN reaches 0 because you don’t get back in time to add more transfers: reading the Broadcom manual, it seems to me that getting the timing right may have something to do with the DONE interrupt. It may be that creating a new transfer on DONE cancels the stop. Just an idea. |
Jeffrey Lee (213) 6048 posts |
I’m fairly certain the RISC OS 3 PRMs stated that 100 microseconds is the limit for how long you’re allowed to leave IRQs disabled for. And since the DWC USB driver likes to receive around 8000 interrupts a second, I’d say that that 100 microsecond limit should still hold true today.
That’s a bit of a grey area, since there are a few things which do lengthy processing during their ISRs (e.g. sound buffer filling), and therefore re-enable interrupts in order to allow other interrupts to be processed. This means they could in theory run forever, if they keep getting interrupted by other ISRs. |
Dave Higton (1515) 3497 posts |
As I think about it more, I realise that I don’t need to keep interrupts disabled for the entire duration of the last byte – I need to disable interrupts, put the last byte in the FIFO, wait for it to be removed from the FIFO (the start of transmission of the last byte), set up the registers for the new transaction, and I can enable the interrupts at that point. Perhaps. |
Chris Evans (457) 1614 posts |
I think you’ll find that was the Electron! |
Sprow (202) 1155 posts |
ADFS still has that policy if DMA underruns (line 1757), and of course ChangeFSI will (optionally) swap down to MODE 0 during processing to speed things up. For shared-memory-bus computers it’s still pretty effective (obviously no use on split-bus ones like the Iyonix). |