PCI heap conundrum
Jeffrey Lee (213) 6048 posts |
On startup the PCI module looks for a physically contiguous block of memory, around 32MB in size, to use as the PCI heap. Once it’s found the region, it remembers the base page number, and then requests the appropriate page numbers from within that region in its DA PreGrow handler (sketched at the end of this post).

This works fine so long as there are no other components in the system which are interested in specific physical pages. E.g. if you had a second module which attempted to implement a physically contiguous heap in the same way as the PCI module, there’s a good chance they’d both end up fighting over the same pages, potentially blocking one of the heaps from growing.

This isn’t a problem at the moment (the only other physically-contiguous component in most systems is the kernel-managed video memory, which the PCI module knows to avoid), but it could become one as soon as video drivers are given the ability to allocate and manage their own memory.

Potential solutions I can think of:
Any other options people can think of? |
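As a rough illustration of the page-claiming mechanism described above (this is not the actual PCI module source; the page block layout and handler registers are as documented in the PRM, and the C-level glue is assumed):

```c
/* Sketch of a dynamic area PreGrow handler that claims specific
   physical pages. The page block has three words per entry (page
   number, logical address, physical address, per the PRM); filling
   in a page number asks the kernel for that exact page. Assumes an
   assembler veneer passing R1 (page block) and R2 (entry count). */
#include <stdint.h>

typedef struct {
    uint32_t page_number;   /* physical page number: we fill this in */
    uint32_t logical_addr;  /* filled in by the kernel after mapping */
    uint32_t physical_addr; /* filled in by the kernel after mapping */
} page_block_entry;

static uint32_t heap_base_page;  /* base of the contiguous region found at startup */
static uint32_t heap_pages_used; /* pages handed out so far */

int pci_heap_pregrow(page_block_entry *block, uint32_t entries)
{
    for (uint32_t i = 0; i < entries; i++) {
        /* Ask for the next page of "our" region. Nothing stops another
           component grabbing this page first -- which is exactly the
           fight described above. */
        block[i].page_number = heap_base_page + heap_pages_used + i;
    }
    heap_pages_used += entries;
    return 0; /* no error: allow the grow to proceed */
}
```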
Rick Murray (539) 13850 posts |
I might be stupid here but…
And…
I feel like I’m missing something…? |
Jeffrey Lee (213) 6048 posts |
The heap is used to implement PCI_RAMAlloc, which is used by assorted drivers that want easy access to physically contiguous memory (and don’t have any special constraints, e.g. needing to remap the memory in logical space, or needing to deal with giant video framebuffers which might overflow or seriously fragment the PCI heap).
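For example, a driver might allocate a DMA buffer along these lines (a sketch only: the SWI number and register assignments are from memory of the PCI module docs, so verify them before use):

```c
/* Sketch of a driver grabbing a physically contiguous buffer via
   PCI_RAMAlloc. ASSUMPTIONS: SWI number &50588 (PCI chunk &50580 + 8)
   and the r0 = size / r1 = alignment / r2 = flags interface, with the
   logical address returned in r0, are recollections -- check the
   PCI module documentation before relying on them. */
#include "kernel.h"
#include "swis.h"

#ifndef PCI_RAMAlloc
#define PCI_RAMAlloc 0x50588
#endif

void *alloc_dma_buffer(unsigned size)
{
    void *log = NULL;
    _kernel_oserror *e = _swix(PCI_RAMAlloc, _INR(0, 2) | _OUT(0),
                               size, 4096 /* byte alignment */, 0, &log);
    return e ? NULL : log; /* logical address of the contiguous block */
}
```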
How can it, if it doesn’t know which pages have been “claimed” by the PCI module? E.g.:
|
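To make the conflict concrete, here’s a rough sketch of a second component innocently asking the kernel for contiguous memory (OS_Memory 12 register usage as I recall it: r1 = size in bytes, r2 = log2 of the required alignment, r3 on exit = first page number; check the kernel docs):

```c
/* A second component looking for physically contiguous memory, with
   no way of knowing that the recommended pages may sit inside the
   range the PCI module is planning to grow into. OS_Memory 12
   register usage here is from memory -- verify against the kernel
   documentation. */
#include "kernel.h"
#include "swis.h"

int find_contiguous(unsigned size_bytes, unsigned *first_page_out)
{
    _kernel_oserror *e = _swix(OS_Memory, _INR(0, 2) | _OUT(3),
                               12,         /* reason: recommend page */
                               size_bytes,
                               12,         /* log2 alignment: 4KB    */
                               first_page_out);
    return e ? -1 : 0;
}
```

If this caller then locks the recommended pages into its own DA/PMP, the PCI module’s next PreGrow request for those same pages will fail.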
Rick Murray (539) 13850 posts |
Aha. I see. So the PCI module doesn’t want to claim pages before it actually needs them, but it does want to reserve pages it hasn’t yet allocated for itself. |
Jeffrey Lee (213) 6048 posts |
Thinking about this earlier today: using a service call to influence the OS_Memory 12 results isn’t going to be a very good solution for threaded scenarios, since the act of reserving memory won’t be atomic. Two concurrent calls to OS_Memory 12 could return the same block of memory to two different clients, because those clients are only able to update their service call handlers to report the memory as being reserved once the corresponding OS_Memory 12 call has completed.

So it would probably make sense to go with an explicit reserve/unreserve SWI pair, so that looking for a block of memory and marking it as reserved can be fully atomic. The reserve SWI could be similar to OS_Memory 12 (in that the kernel is able to do most of the work to determine the address range to use), but extending it to support a function callback would also be nice, so programs can fully implement their own logic for determining which range of pages to reserve. (A sketch of what such a pair might look like is below.)

Since we want the reserved (but not-yet-used) memory to be made available for use by non-fussy DAs/PMPs, the reserved memory probably won’t have any DA or PMP associated with it, in the traditional sense. But maybe it would make sense to have some kind of association anyway, just so that the kernel can keep track of the memory more easily. Or maybe that’s needed if we want to stop pages being poached by programs that didn’t reserve them via the official API (e.g. you can only lock a page into your DA/PMP if you can prove that you’re the one who reserved it). |
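Purely as a thought experiment, the pair might look something like this from C (every name and signature here is invented for illustration; the interface that actually landed is in the commit linked in the next post):

```c
/* HYPOTHETICAL bindings for the reserve/unreserve pair discussed
   above -- names, types and signatures are invented for illustration
   only. */
#include "kernel.h"

typedef struct {
    unsigned first_page;   /* first physical page number of the range */
    unsigned page_count;   /* length of the range in pages            */
} phys_reservation;

/* Atomic "find and reserve": the kernel both chooses a suitable range
   and marks it reserved in one operation, so two concurrent callers
   can never be handed the same pages (the flaw in the service call
   idea). */
extern _kernel_oserror *memory_reserve(unsigned size_bytes,
                                       unsigned log2_align,
                                       phys_reservation *out);

/* Callback variant: the kernel enumerates candidate ranges and the
   client's function decides which one to accept, so programs can
   implement their own page-selection logic. */
typedef int (*range_filter)(unsigned first_page, unsigned page_count,
                            void *context);
extern _kernel_oserror *memory_reserve_filtered(unsigned size_bytes,
                                                unsigned log2_align,
                                                range_filter accept,
                                                void *context,
                                                phys_reservation *out);

/* Release a reservation: only the "earmarked" status goes away; pages
   already locked into the owner's DA/PMP stay where they are. */
extern _kernel_oserror *memory_unreserve(const phys_reservation *res);
```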
Jeffrey Lee (213) 6048 posts |
This is the approach that was taken back in 2019: https://gitlab.riscosopen.org/RiscOS/Sources/Kernel/-/commit/1f84ad9f0a7089821d039601527141e98598a47c |