The end of the OMAP family?
Rick Murray (539) 13840 posts |
Damn. Just when we’d got TI to a point where actual proper datasheets were available… which is more than you can say for others. |
Jeffrey Lee (213) 6048 posts |
Indeed. Of course there’s no timescale given for when the OMAP line will be discontinued, so we may still have a few years of availability before we have to worry about finding another source of good hardware. And even if OMAP5 gets cancelled (or doesn’t have any cheap dev boards available) there’s still plenty of things we can do to keep ourselves busy with OMAP4 (multithreading!) |
Steve Pampling (1551) 8170 posts |
Samsung seem keen on knocking out new versions of their Exynos designs. |
Rick Murray (539) 13840 posts |
Given that “we aren’t Linux”, good hardware is only part of the story. You need a useful degree of documentation as well.

Take, for example, the Broadcom chip in the Pi. The last released doc I saw (about 205 pages, as compared to OMAP3’s 3500+!) simply didn’t mention the video system at all. Does the chip not even have a “dumb framebuffer mode”? The first thing any OS is going to want to do after basic init code is to paste something onto the screen. The BCM2835 datasheet’s only comment on this is “The VideoCore section of the RAM is mapped in only if the system is configured to support a memory mapped display (this is the common case).” (p6). I’m not asking for GPU docs (even TI’s GPU is fairly undocumented publicly), but without even basic documentation on this (or some video driver code to hack around in), the entire chip is useless for toy/homebrew OSs, as Joe Hacker can’t even paste a bright yellow smiley face on the screen…

I fear if we lose TI and the OMAP, we’ll be back to the dark ages, relying upon specific people who are either well-placed contacts or have signed NDAs. Case in point – who would fancy taking a crack at porting RISC OS to the A10 chip used in many of those knock-off tablets? The datasheet is on Dropbox and there’s no info on the video there either; possibly not even enough detail to bring up a rudimentary kernel.

Meh. I can hope that by the time the OMAP reaches the end of its lifespan there will be others who open up their development tools/docs; but I’m sensing that the fruity approach may be taken as a reason to be less open, not more… |
Rick Murray (539) 13840 posts |
As a separate posting – it would be an interesting challenge to get RISC OS to be multiprocessor capable. I don’t see any reason for the OS itself to utilise multiple processors; this is perhaps something that the Wimp could handle, being able to schedule tasks across CPU cores. I suspect the Wimp itself would need to run on both cores, though, as it wouldn’t be efficient to run software on both cores but have all the graphics ops and UI stuff taking place on only one.

This brings with it some nifty problems to consider, especially given that the Wimp, indeed the OS, carries assumptions – like there being no task active at the point of a SWI call, ’cos the processor will be dealing with the SWI. Well, what happens in a dual-core setup? Does the second core get frozen while the first is handling a SWI?

Is there any documentation / code / etc. for the Simtec Hydra project? I would be interested in seeing how those questions were answered there. Sorry, I’m pretty much just thinking aloud here. ;-) |
Jeffrey Lee (213) 6048 posts |
“before we have to worry about finding another source of good hardware” Yes, I think I accidentally deleted the bit where I said the documentation had to be good as well.
“We choose to do these things not because they are easy, but because they are hard”. I started a thread about this kind of stuff a couple of years ago: https://www.riscosopen.org/forum/forums/5/topics/406 It wouldn’t surprise me if we end up trying a few different techniques before coming up with one that works. Plus there are some interesting philosophical questions – what if making RISC OS multithread safe causes it to lose the qualities that make it “RISC OS”? If it’s no longer RISC OS, or is incompatible with all existing RISC OS software, would anyone want to use it? If nobody’s going to use it, is there any point in trying?
See the above thread; Ben Avison thinks it took the approach you describe, with the Hydra running its own OS/kernel separate from RISC OS, thereby only allowing specially written software to make use of it. |
Greg Race (1437) 5 posts |
Assuming the route forward is eventually to make RISC OS multi-core happy, does it really have to be compatible with current RISC OS software? Obviously completely ignoring current RISC OS software is not what I mean, but could it not be freely developed, knowing that an Aemulor-type program could be created afterwards to allow current RISC OS software to run? Hope I am not insulting anybody’s intelligence here; I am simply thinking out loud too, but sometimes the most outrageous ideas turn out to be not so outrageous. Anyhow, I just want you guys in this thread to know you ROCK, as well as a number of other key programmers. Thanks for keeping my dream alive. RISC OS is awesome. |
Martin Bazley (331) 379 posts |
I don’t think it would be physically possible to convert RISC OS to be thread safe without some kind of compatibility layer, but it should at least be possible to make it so that most applications don’t need said layer. Burning bridges because ‘we can fix it in Aemulor later’ isn’t a good approach to take.

One thing which I really do think needs to be sorted out is per-process state, like system variables, current directories, etc. To take just one example, the gotcha I discovered recently – where Sys$ReturnCode, which is written to by all C programs on exit, is global and extremely prone to spontaneous modification by random other programs – will be completely unacceptable on a multithreaded system. But making all system variables local by default will probably break every application which relies on them to pass data around, which is quite a lot of applications, since they’re pretty much the only way of passing data around. (Some adaptation of the Unix ‘env’ command would be useful here.)

And if you decide to make the CSD local, what about the programs which depend on being able to preset it for the use of other programs? The ‘Set directory’ option was added to the Filer for a reason, right?

There’s no single right answer to all of this, which is why a compatibility layer will be required for the bits which don’t quite fit. But we should still strive to make sure that that’s as few bits as possible. |
Rick Murray (539) 13840 posts |
Just more thinking out loud. ;-)

Is it possible to ‘suspend’ a core? The way I see it, the simplest approach that may give us multi-core benefits is to pass this over to the Wimp. Core 0 will be the core running RISC OS. The Wimp, as it starts, will start applications on Core 0, and Core 1 will be on standby. In other words, for light use, RISC OS will run as it always has.

I think the Wimp deals with PollWordNonZero itself, but in other cases the memory allocations (I think the two CPUs have independent memory maps, yes?) will need to be tightened up to the point where a task can pretty much ONLY access its own memory. If it reads/writes page zero or dicks around in the RMA, then it will (rightfully) be faulted. Not sure how this goes with regard to Dynamic Areas. Are they tied to the task that created them, or are they a free-for-all bit of mapped-in memory?

The big caveat, however, is that during interrupt/FIQ/SWI handling, the second core will need to be either put on pause, or the microkernel capable of busy-waiting until RISC OS is ready. In this way, the flagrant (ab)use of OS_IntOff/OS_IntOn and such will continue to work for the time being.

Let’s face it, it is highly unlikely RISC OS will be rewritten in a way that makes all this concurrency work nicely; and if it were to be rewritten (say, if Google fell in love with it or something), it would probably be in a high-level language, with feature bloat, blah blah, and it would end up as a weird sort of not-Linux about as similar to RISC OS as Bada is.

Anyway, there’s a whole heap of “I’m in charge, whoo-hoo!” situations to deal with, and the simplest option I can currently think of is to just suspend/defer CPU1 while CPU0 is busy. It’s software, it can evolve… so let’s think of something a little less troublesome than “rewrite a PMT Wimp” for now, huh? Another caveat – module tasks will probably need to remain on CPU0.
The Wimp will also offer some support along the way, specifically in suspending/resuming the second core for potentially extended periods of time (think: printer driver). Additionally, whatever maps memory (is this the kernel or the Wimp now?) will need to be aware of two processors and two memory maps (one can, obviously, be rather smaller), and two things will think they are at &8000.

Additionally (you see how this simple idea gets real complicated in a hurry?), we’d need to devise a regime for debugging so that it is possible to see the registers/status of either CPU, plus the ability to view/edit memory according to either CPU. *Memory &8000 would be for CPU0, and *Memory 1:&8000 would be for CPU1. That sort of thing.

Here’s another caveat: broadcast Wimp messages, especially ones that have an ack/nak. What do we do if CPU1 is on pause? Or how about if a task on CPU1 sent the message and was then paused while the message did the rounds?

Pros:
Cons:
I’ll have a look at the linked thread tomorrow. I’m practically a zombie now, so my apologies if this reads like a cross between Engrish and line noise :-) Zzzzzzzzzz….. |
Jess Hampshire (158) 865 posts |
Isn’t one of the big decisions whether you try to make existing software run multithreaded, or leave it on one processor and save the multithreading for new apps? I would think the latter approach is probably far simpler to achieve. I would think the procedure would be to produce a module which provides the new environment (possibly appearing as one thread to the OS), to allow new software to be written for it. Then the OS would be modified to incorporate this into the core system, so that the benefits show. The final stage would be to move the old way of doing things into a module that would be loaded for old apps. |
Steve Pampling (1551) 8170 posts |
Surely any multithreading/pre-emptive OS development already has a suggested route: RPCEmu. If that kind of development is done, then the base OS can be made multicore aware, provided a legacy environment emulation exists for legacy software. I know they had more resources to do it, but Apple did this for their OS migration. However, I suspect the base rebuild and emulation are probably less effort than a magical method of making current (legacy by then) software work directly on the new build. |
Theo Markettos (89) 919 posts |
There’s an interesting narrative on the demise of Nokia’s MeeGo which points out that one of the key factors was that TI had given up on producing baseband chips. Nokia’s MeeGo range were all using OMAP, but there wasn’t a companion baseband processor for OMAP 4, and there’s no LTE support in the existing chips. The other options were Qualcomm (which does have LTE, but wasn’t very interested in MeeGo) or Intel (Nokia’s later partner, whose chips are still lacking LTE support). So it looks like the writing has been on the wall for some time. |
Michael Drake (88) 336 posts |
That may be changing: |
Theo Markettos (89) 919 posts |
There are rumours that Amazon is interested in TI’s OMAP division. That may or may not be good news – it’s unclear whether Amazon would want to keep the chips all to itself (like Apple’s ARM SoC range) or retain existing customers in areas that don’t conflict with its strategy. I suspect they’ll be less customer-focused than TI, though. |
Eric Rucker (325) 232 posts |
So, OMAP isn’t dying, it’s just going to “a broader set of embedded applications with long life cycles”. If that means that boards like the Beagle and Panda keep getting made… that’ll actually be good for RISC OS. |