Long descriptor page table support
Jeffrey Lee (213) 6048 posts |
I’ve recently been working on adding support for the “long descriptor” page table format to the kernel. This will give the OS flexible access to a 40-bit physical address space, improving our support for machines with lots of RAM. E.g. the 4GB IGEPv5, where 2GB of the RAM is currently inaccessible to RISC OS since it’s located above the 32bit limit. Or the rumoured 8GB Raspberry Pi model. It’s still early days, but for anyone interested in tracking the progress I’ve got a WIP merge request open on gitlab. https://gitlab.riscosopen.org/RiscOS/Sources/Kernel/merge_requests/15 |
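As background, the “long descriptor” format uses 64-bit page table entries whose output-address field extends to bit 39, which is where the 40-bit physical reach comes from. A rough illustration follows (not the kernel’s actual code; the helper name and attribute choices are my own):

```c
#include <stdint.h>

/* Illustrative sketch only (not the kernel's actual code): a long-descriptor
 * level-2 block entry mapping one 2MB block of physical memory. The output
 * address occupies bits 39:21 of the 64-bit descriptor, which is what gives
 * the format its 40-bit physical reach. */
static uint64_t lpae_l2_block(uint64_t phys /* 2MB-aligned, may be above 4GB */)
{
    uint64_t desc = phys & 0xFFFFE00000ull; /* output address, bits 39:21    */
    desc |= 1ull << 0;    /* bit 0: entry is valid                           */
                          /* bit 1 clear: block entry, not a table pointer   */
    desc |= 0ull << 2;    /* bits 4:2: AttrIndx 0 (first MAIR attribute)     */
    desc |= 3ull << 8;    /* bits 9:8: inner shareable                       */
    desc |= 1ull << 10;   /* bit 10: access flag                             */
    return desc;
}
```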
David Feugey (2125) 2709 posts |
Fantastic… |
Jeffrey Lee (213) 6048 posts |
Current status is that the kernel can now use RAM located above the old 32bit physical address limit. But for this work to be complete, I need to update a few public kernel APIs which deal with (32bit) RAM physical addresses. As far as I can tell, the main ones are OS_Memory 0, OS_Memory 19, and the Service_PagesUnsafe / Service_PagesSafe service calls.
One way of updating things would be to have the 32bit versions of OS_Memory 0 & OS_Memory 19 return an error if they find themselves needing to return a large physical address. This would provide some degree of compatibility with existing programs: OS_Memory 0 & OS_Memory 19 already return errors if they’re given a non-RAM address (like ROM, or “external” VRAM on a PCI graphics card), so DMA drivers already contain fallback paths for when the OS_Memory call fails (a minimal sketch of that pattern is below). So an old DMA driver will be able to use its fast path for low RAM, and will fall back to a slower path for high RAM.

But the problem with this approach is that there’s no good way of dealing with the service calls – if a page of low RAM gets replaced with a page of high RAM then there’s no way for Service_PagesSafe to say “nope, there’s no valid 32bit physical address for this memory anymore”. So for this approach to work the kernel would have to follow the rule that when a page of low RAM is being replaced, it can only be replaced by another page of low RAM. There’d also be some awkwardness if multiple pages are being replaced, some in low RAM and some in high RAM – the kernel would have to break things down into two sets of service calls, so that the new calls see the changes to the high RAM while the old calls don’t.

The above feels a bit cumbersome and restrictive, especially when you consider that most DMA/device drivers are either in the ROOL sources (and so can be updated to use new APIs pretty quickly), or are third-party ones targeting older machines (which will never have high RAM, and so can continue using the old APIs without problems). Plus, the fact that most drivers have a fallback for dealing with non-RAM areas means that (if there’s any high RAM present in the system) there’s no particular need for the old APIs to continue working at all.

So instead, I’m thinking of going with the following approach: introduce 64bit-capable replacements for the APIs above, and on machines where high RAM is actually present, have the old 32bit APIs stop working (returning errors, and refusing old-style “needs specific pages” dynamic areas), so that old software never gets its hands on a physical address in the first place.
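For anyone who hasn’t written one of these drivers, here is a minimal sketch of the fallback pattern mentioned above, assuming the standard C veneer (_swix from swis.h) and the documented OS_Memory 0 page block of three words per entry; the function name and surrounding logic are illustrative, not taken from any real driver.

```c
#include <kernel.h>
#include <swis.h>

/* Minimal sketch of a DMA driver's existing fallback pattern: ask OS_Memory 0
 * for the physical address behind a logical address, and take the slow path
 * (e.g. a bounce buffer) if the call errors - which already happens for
 * non-RAM addresses today, and would also happen for high RAM under the
 * scheme above. try_fast_dma() is an illustrative name, not a real API. */
int try_fast_dma(void *logical)
{
    unsigned int block[3];
    block[0] = 0;                       /* page number (not used here)     */
    block[1] = (unsigned int)logical;   /* logical address (supplied)      */
    block[2] = 0;                       /* physical address (filled in)    */

    /* R0 = &2200: logical address given (bit 9), physical wanted (bit 13) */
    _kernel_oserror *e = _swix(OS_Memory, _INR(0,2), 0x2200, block, 1);
    if (e != NULL)
        return 0;           /* no usable 32bit physical address: slow path */

    /* fast path: program the DMA controller using block[2] ...            */
    return 1;
}
```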
Any thoughts? |
nemo (145) 2552 posts |
And DA handlers, I presume?
Would it not be safer to use the existing 3-word structure for addresses that fit in 32bits, and only use the new structure, reason codes and service calls for those addresses that are greater than 32bits? Therefore existing code would continue to work while ignoring high memory things happening above its head. I’ve no idea what this “existing code” might be, mind. In other words, is it feasible to retain the existing interfaces for 32bit addresses, so that existing code can proceed under the illusion that this is all the RAM there is? Code that uses the newer APIs has access to the full 64bit address range. |
Sprow (202) 1158 posts |
Would the new equivalent of OS_Memory 19 also still support 32b addresses (i.e. 64b but with zero in the high word)? If so, as 19 is so new I expect the only client is the SATA driver so they could just be moved over to the new subreason and dump 19 entirely, if that reduces spaghetti.
I can recommend bit 21 for the job. |
Jeffrey Lee (213) 6048 posts |
I need to update a few public kernel APIs which deal with (32bit) RAM physical addresses Yes; I forgot that the physical addresses can be supplied to PostGrow handlers. (Note that RISC OS 5 doesn’t implement ROL’s physical DAs system, so we don’t need to worry about DA handlers specifying physical addresses or “fake” page numbers in their PreGrow handlers) So to make PostGrow handlers safe, we could either ban all “needs specific pages” DAs which don’t use the new interface, or we could ban users of the old interface from trying to claim pages which are located in high RAM.
That would be a bit cumbersome. The main problem is the Service_PagesUnsafe & Service_PagesSafe pair. When a “needs specific pages” DA grows, the kernel splits the grow operation down into a series of chunks (small enough for it to fit the required page lists on the stack). For each chunk it does the following:

1. Issue Service_PagesUnsafe, listing the pages which are about to be replaced.
2. With IRQs & FIQs disabled, replace each of those pages: copy its contents across to the replacement page and update the mappings.
3. Issue Service_PagesSafe, giving the old and new details of each page.
Since Service_PagesSafe specifies both the new and old details of each page (i.e. old physical info for the page the DA is taking, and new physical info for the replacement), things get awkward if a page with a low physical address is replaced with a page with a high physical address (or vice-versa). One way around that would be to change the kernel’s page replacement strategy so that it’ll only replace low-RAM pages with other low-RAM pages, but that feels a bit restrictive as it’s a strategy that would have to be active all the time (the kernel has no way of knowing whether a recipient of Service_PagesUnsafe actually cares about the pages that are being replaced). About the best we could do is have the kernel keep track of whether it’s ever exposed the address of a low-RAM page via an old API, and if so, handle that page in a compatibility mode which guarantees that any replacements will only come from low RAM.

There’s also the issue of how Service_PagesUnsafe & Service_PagesSafe currently only occur in pairs, with Service_PagesSafe describing the full set of pages that were made unsafe in the previous call. The documentation doesn’t state that Service_PagesUnsafe & Service_PagesSafe always occur in pairs, and the API means that they could occur in other arrangements. E.g. there could be several instances of Service_PagesUnsafe, followed by several instances of Service_PagesSafe, with the pages distributed differently between each call. Or if some pages of low RAM are being replaced by pages of high RAM, you could have Service_PagesUnsafe followed by Service_PagesSafe (for any low RAM replacements) and Service_PagesSafe64 (for the high RAM replacements), so that from the perspective of old software, some of the pages never become safe again. But, since the calls have always occurred in pairs, I’d expect some software to break if we changed to a different scheme. E.g. a quick look at DMAManager’s sources reveals that it will break.
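To make the above concrete, here is a sketch of the kind of client code being discussed, written against my understanding of the documented interface (Service_PagesUnsafe passes a count in R2 and a page block in R3; Service_PagesSafe passes the count in R2, the old page block in R3 and the new page block in R4, three words per entry; check this against the PRM before relying on it). The handler body and names are illustrative only.

```c
#include "kernel.h"

/* Standard page block entry as used by these service calls (three words). */
typedef struct {
    unsigned int page_number;
    unsigned int logical;    /* same in the old and new lists for PagesSafe */
    unsigned int physical;   /* the page number & physical address change   */
} page_block_entry;

#define Service_PagesUnsafe 0x8E   /* values as I recall them from the PRM */
#define Service_PagesSafe   0x8F

/* CMHG-style module service handler for a driver that caches physical
 * addresses (or, as in the SMP case, just needs to pause other activity). */
void service_handler(int service_number, _kernel_swi_regs *r, void *pw)
{
    (void)pw;
    if (service_number == Service_PagesUnsafe) {
        /* stop queued DMA touching the listed pages (r->r[2] entries at
           r->r[3]) until the matching PagesSafe arrives                   */
    } else if (service_number == Service_PagesSafe) {
        int n = r->r[2];
        page_block_entry *before = (page_block_entry *)r->r[3];
        page_block_entry *after  = (page_block_entry *)r->r[4];
        for (int i = 0; i < n; i++) {
            /* entry i has the same logical address in both lists; update
               any cached physical address from before[i].physical to
               after[i].physical, then resume                              */
            (void)before[i]; (void)after[i];
        }
    }
}
```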
Mainly anything that cares about the physical address of RAM – i.e. DMA drivers & similar. However other systems care as well, e.g. my SMP module listens out for Service_PagesUnsafe & Service_PagesSafe so that it can pause the other cores if pages are being replaced (because the kernel’s trick of disabling IRQs+FIQs to protect software from the replacement op will obviously only work for software which is running on the primary core).
Yes.
Yes, for the SATA drivers there’s no real need to support softloading them on older OS’s. |
nemo (145) 2552 posts |
I was imagining that PreGrow, PostGrow, PagesUnsafe and PagesSafe be regarded as having an implicit “32” suffix, and corresponding PreGrow64, PostGrow64, PagesUnsafe64 and PagesSafe64 be coined. DAs would require a new flag to say “use the 64bit interface please”, but it’s reasonable to expect them to handle the existing reason codes too for compatibility. The handler for a “HighDA” will get PreGrow64 and PostGrow64 reasons, not the old ones. Otherwise it will get PreGrow and PostGrow.

As for service calls, there is always a PagesUnsafe – this indicates that there’s remapping taking place, even if one doesn’t care about the pages themselves (such as your SMP example). If any 64bit addresses are involved this is immediately followed by a PagesUnsafe64. The process finishes with a PagesSafe, preceded by a PagesSafe64 if PagesUnsafe64 was issued.

In all cases any 64bit address would be replaced by ‘nowhere’ in the 32bit structures. So from a 32bit API point of view, pages can appear from ‘nowhere’ and be sent to ‘nowhere’. The 64bit API gets the correct address. It’s reasonable to observe that the 32bit version doesn’t need to include any ‘nowhere’ to ‘nowhere’ entries.
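Purely as an illustration of what the 64bit structures in this proposal might look like (nothing in the thread actually defines them, so treat the layout below as a guess rather than a specification): the obvious shape keeps the existing three fields but widens the physical address.

```c
#include <stdint.h>

/* Hypothetical entry for the proposed PagesUnsafe64/PagesSafe64 page blocks.
 * This layout is an assumption for illustration only - it is not defined
 * anywhere in the thread or in the kernel sources. */
typedef struct {
    uint32_t page_number;
    uint32_t logical;     /* logical addresses stay 32bit                 */
    uint64_t physical;    /* room for the full 40-bit physical address    */
} page_block_entry64;
```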
I can’t think of a scenario where one would specify the page number but not care about its physical address, so that seems reasonable, but perhaps there is a usage case. Such code should already be checking that it got the page it requested, so it would be safe to substitute some other low page if available, on the off-chance it didn’t mean it. P.S. Unless there has already been a situation in which PagesUnsafe could have a page count of zero, it would be best to avoid that case due to the risk that something will interpret it as 4.2 billion. A single ‘nowhere to nowhere’ entry would be required in that case. |
Jeffrey Lee (213) 6048 posts |
The problem with that is that ‘nowhere’ is a logical address, not a physical one. In the two lists of pages sent to Service_PagesSafe, entry #X in R3 will have the same logical address as entry #X in R4. It’s the page number & physical address that changes, not the logical address. We could invent a ‘nowhere’ physical address (e.g. &ffffffff), but for that to work software would obviously need to be updated to support it (at which point authors might as well just add support for the new APIs right away). |
nemo (145) 2552 posts |
It may be that there isn’t already an invalid 32bit physical address. If so, it would be worth sacrificing a page to create one. It was to 64bit physical addresses I was referring when I wrote:
Since the word ‘nowhere’ is evidently confusing, please substitute the word ‘high’ to stand for the designated invalid 32bit physical address. i.e. If the physical address is 64bit, use the invalid 32bit physical address ‘high’ in the 32bit structures, but the correct 64bit physical address in the 64bit structures.
PagesUnsafe/PagesSafe would continue to bracket the remapping; would continue to contain all the 32bit remaps; would partially document low pages being swapped with ‘high’ pages; but wouldn’t document a high page swapping with another high page (unless that was the only kind, in which case there would be a single ‘high to high’ mapping). PagesUnsafe64/PagesSafe64 would contain all mappings.

PreGrow/PostGrow would be informed of a low page being replaced by another low page exactly as presently occurs. It would be informed of a low page swapping with a ‘high’ page (which is to be avoided if possible) – this may lead to something going wrong if it uses the physical address, but it is a sacrificial address for safety. It would not be informed of a ‘high’ page being swapped for another ‘high’ page because it clearly isn’t using 64bit physical addresses (would the change of page number be useful even if it isn’t using physical addresses? would it be sensible to include ‘high to high’ remaps anyway? I don’t know). PreGrow64/PostGrow64 would get the whole mapping.

The alternative you suggest is to simply not inform the 32bit code at all, which means it could continue to DMA using the wrong physical addresses, or refuse to create a DA with a 32bit handler on a system with more than 4GB.
I continue to be alarmed by this approach to API compatibility, but I realise I am in a small minority around here, so I shall say no more. I tried. |
Jeffrey Lee (213) 6048 posts |
(if there’s any high RAM present in the system) there’s no particular need for the old APIs to continue working at all. I believe the two approaches can be summarised as follows: your suggestion will cause less software to fail, but the failures will be unobvious to the software (incorrect physical address being used by hardware device); my suggestion will cause more software to fail, but the failures will be obvious (the old APIs return errors, or the operation is refused outright).
I am alarmed by this approach to API compatibility.
The latter. I’d be aiming to stop old software from ever getting its hands on a physical address (with a couple of exceptions, e.g. PCI_RAMAlloc will still work because it can guarantee 32bit addresses, and the pages can’t be claimed by other DAs). Of course, I could revisit the original idea, of preventing low RAM from being replaced with high RAM. Potentially it could be as simple as having the kernel use one of the page flags to indicate whether the physical address of a low-RAM page has been revealed via an old API, and using that flag to either prevent the page from being claimed, or to influence the replacement logic. Making it dynamic like this will resolve one of the issues I was initially concerned about, which was that the kernel would be needlessly restricting page movement on systems where software is only using the new APIs. And I was always planning on making the kernel choose at runtime whether the old APIs should be supported (based on whether any high RAM is present in the system), so this compatibility code to allow the 32bit APIs to be (mostly) functional on 64bit systems might only be a small bit of extra baggage. |
Rick Murray (539) 13850 posts |
I continue to be alarmed by this approach to API compatibility Anyone else? Can we make this a three alarmer? |
nemo (145) 2552 posts |
It is not an ‘incorrect’ physical address, it’s a non-existent physical address.
Let’s be specific. You are suggesting that if a user has more than 4GB of RAM then they should not be able to run PhotoDesk, Zap, Vantage; should not be able to load the SysLog or LineEditor modules; that they be forbidden from using anything that creates a Dynamic Area with a handler; until those authors have all complied with whatever new API you have cooked up… yet just 72 hours ago you were admitting:
So forgive me for trying to keep software running and not making demands on authors.
Do you want to reconsider your wording of that? And if it isn’t bloody obvious, I’m furious that you should make this personal. “My approach”? |
Jeffrey Lee (213) 6048 posts |
Your suggestion will cause less software to fail, but the failures will be unobvious to the software (incorrect physical address being used by hardware device)
Within the context of this discussion, a non-existent physical address is still an incorrect physical address. In addition to giving the “all clear” signal, Service_PagesSafe returns information about how memory has moved. Software relies on this information being correct. If the kernel was to start saying that pages exist at “nowhere” instead of providing their actual addresses then things will break. Consider the following example:

1. A program asks a driver to write some data to disc from logical address &8000. The driver looks up the physical address of that page – &A0123000 – and queues the DMA transfer.
2. Before the transfer happens, a “needs specific pages” DA claims that page. The kernel issues Service_PagesUnsafe, copies the contents across to a replacement page in high RAM at physical address &987654000, remaps logical address &8000 to the replacement, and issues Service_PagesSafe.
3. Under your scheme, the 32bit Service_PagesSafe reports the replacement page’s physical address as ‘nowhere’ (say &BAD00000), so the driver dutifully updates its queued transfer to use physical address &BAD00000.
This will result in the wrong data being written to disc, because at no point was the data from logical address &8000 present at physical address &BAD00000. It’s only ever present at physical address &A0123000 (up until the other DA receives the page and presumably starts overwriting it), or at physical address &987654000 (after the kernel has copied the contents over during the page replacement step). &BAD00000 is an incorrect physical address for the data which the system was trying to write to disc.
Sorry, I probably wasn’t clear enough when I responded to the correction about DA handlers. Dynamic areas which have flag bit 8 set (“needs/requires specific pages”) will be banned, because that uses the PreGrow/PostGrow variant which accepts page lists. Dynamic areas which don’t have bit 8 set still receive PreGrow/PostGrow calls, but they use a different variant which doesn’t accept page lists. There’s no reason for me to ban them, therefore I won’t ban them, and they’ll continue to work fine.
I am alarmed by your approach to API compatibility.
Sorry – I didn’t intend to anger you. I think I can see how it happened; I’ll try and remember to be more careful in the future. (post edited) |
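For clarity, here is a rough sketch of that distinction, using the documented OS_DynamicArea 0 (create) registers; the sizes, name and zero handler are arbitrary placeholders. An area created like this (flag bit 8 clear) is unaffected; only areas created with bit 8 set, using the old page-list PreGrow/PostGrow variant, would be refused on a machine with high RAM.

```c
#include <kernel.h>
#include <swis.h>

/* Sketch only: create an ordinary dynamic area with flag bit 8
 * ("needs/requires specific pages") clear. A real client would normally
 * point R6/R7 at its PreGrow/PostGrow handler veneer and workspace. */
_kernel_oserror *create_plain_da(int *area_number_out)
{
    return _swix(OS_DynamicArea, _INR(0,8)|_OUT(1),
                 0,              /* R0: reason 0 = create                   */
                 -1,             /* R1: let the OS allocate the area number */
                 0x100000,       /* R2: initial size (1MB, arbitrary)       */
                 -1,             /* R3: let the OS choose the base address  */
                 0,              /* R4: flags, in particular bit 8 clear    */
                 0x1000000,      /* R5: maximum size (16MB, arbitrary)      */
                 0,              /* R6: no handler routine in this sketch   */
                 0,              /* R7: no handler workspace                */
                 "ExampleArea",  /* R8: area name                           */
                 area_number_out /* out: R1 = allocated area number         */
                );
}
```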
Steve Fryatt (216) 2105 posts |
Pot, meet Kettle. Your posts come over as very personal towards certain individuals, too… and once again, you’re attacking someone who is actually spending time working on the OS instead of photo-shopping animated GIFs of software that they claim to have written but somehow never seem to actually be able to release to the community¹.
¹ (ETA) How many times have Rick and others asked if that RMA tool that you claim to have is available, for starters? |
Chris Hall (132) 3558 posts |
You are suggesting that if a user has more than 4GB of RAM then they should not be able to run PhotoDesk, Zap, If this is true then please do not proceed. Whatever you are doing is wrong. There must be a backwards compatible method. |
Steve Fryatt (216) 2105 posts |
Jeffrey seems to have said fairly clearly that this is not true, unless those apps are doing something very dodgy. |
Jeffrey Lee (213) 6048 posts |
I need to update a few public kernel APIs which deal with (32bit) RAM physical addresses Actually, double-checking the PRMs, they say that PostGrow handlers are only supplied the page numbers. In reality the list will contain the physical & logical addresses of each page (if specific pages were requested), because the information is needed for the service calls & page replacement process. But the presence of that data has never been officially documented. If software adheres to the PRMs then it should never try looking at the addresses, only the page numbers. So that means we shouldn’t need to add new 64bit PreGrow/PostGrow handlers (unless someone’s aware of some naughty software which is peeking at the physical address fields in the PostGrow handler? Searching the disc image sources & a couple of ROM sources suggests the OS sources are clean) |
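In other words, the page block a bit-8 PostGrow handler receives looks something like the sketch below; the field names are mine, and only the first word is documented.

```c
/* The page block handed to a bit-8 ("needs specific pages") PostGrow handler,
 * as described above. Field names are illustrative. Only the page numbers are
 * documented in the PRM; the current kernel happens to fill in the other two
 * words as well, but software should not rely on them - which is why no
 * 64bit PreGrow64/PostGrow64 variants should be needed. */
typedef struct {
    unsigned int page_number;  /* documented - safe to use                  */
    unsigned int logical;      /* filled in in practice, but undocumented   */
    unsigned int physical;     /* filled in in practice, but undocumented
                                  (and only 32 bits wide)                   */
} postgrow_page_entry;
```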
Jeffrey Lee (213) 6048 posts |
Unfortunately it doesn’t look like this will be as straightforward as I hoped – I think it’s possible for OS_Memory 0 to be called in between the PagesUnsafe & PagesSafe calls (or as part of a PagesUnsafe / PagesSafe handler). This will be problematic if the kernel uses the 32bit PagesUnsafe call to only list the pages which have been exposed via a 32bit API – if a page hasn’t been exposed yet, but is being replaced as part of the page replacement op, then anything which calls OS_Memory 0 (or similar) while the replacement op is in progress won’t know that the page is in an unsafe state, because the kernel never reported it via the 32bit PagesUnsafe call. So if we want to retain compatibility with the 32bit APIs, we’ll probably have to have the restriction that 32bit pages can only ever be replaced with other 32bit pages. Either that, or have the kernel do a complicated dance where it re-plans things on the fly (and issues lots more PagesUnsafe / PagesSafe calls) if one of the pages it’s due to change changes state mid-operation. |
Chris Gransden (337) 1207 posts |
Testing the latest changes on an IGEPv5, everything seems to be working OK, apart from the fact that running any ELF binary always gives the error ‘Address not recognised’. |
Andrew Rawnsley (492) 1445 posts |
Just a thought, but isn’t ARMEABIsupport on 1.02 now (or thereabouts)? I can’t quickly check as my test rig is currently dismembered, but I have a faint memory of some issues in 1.00… |
Jeffrey Lee (213) 6048 posts |
Packman has SOManager 3.01 & ARMEABISupport 1.00, and those appear to work fine (or at least GCC 4 runs). I’ll blow the cobwebs off of my autobuilder install and see if I can get a build of 3.02. Do you know if it’s all ELFs that are failing, or just EABI ones? |
David Pitt (3386) 1248 posts |
Cobwebs blown off yesterday, see here if it helps. |
Chris Gransden (337) 1207 posts |
All ELFs fail to load regardless of which version of SOManager (3.01 or 3.02). I did a search of the OMAP5 source for ‘Address not recognised’; it turned up RiscOS.Sources.HWSupport.STA.SATADriver.c.osmem19 |
Jeffrey Lee (213) 6048 posts |
No luck reproducing the error with either David’s or my own GCC/SOManager builds. Chances are the error is related to a specific combination of high/low RAM pages being used – I’ll keep digging and see if I can find the problem. |
Chris Gransden (337) 1207 posts |
Setting a 2GB RAM disc in PreDesk looks to be the cause. |