Any plans to have an open source toolchain to build RISC OS?
Rick Murray (539) 13861 posts |
They were state of the art in the mid ’80s. Times have moved on…
I would love to, but you’re one of the few with access to the source.
Indeed, and it’s why I tend to use DDT as a last resort. [Oh, and on a non-high-vector system, DDT is happy to sanitise memory, yet at the same time gives absolutely NO warning of memory accesses to page zero, which I find rather odd.]
As has everybody else since RISC OS was conceived.
Actually, circa 1987ish. I used RISC OS 2 at school, and that was before 1990.

Anyway, how is it a bodge? A processor of that era, by definition, can only do one thing at a time. Even with today’s multiple cores and hyperthreading, you’re basically juggling the system’s capacity to run multiple things at the same time against the number of things the user is actually running. And, you guessed it, on most normal domestic systems you’ll be able to run from (approx) 2 to 16 things concurrently. And there’s a s**tload of stuff going on in a modern OS. So… nothing has really changed much. It’s an illusion of dividing the available processing time amongst the tasks that require attention.

In RISC OS, it’s co-operative, so everybody has to behave. Elsewhere, it’s generally pre-emptive, so the OS is Nanny and You Will Do As You’re Told. Either way, it’s going to require a method of taking the current task, pushing it aside, calling up a new one, letting it run for a while, then rinse and repeat.

What RISC OS did was to spread tasks around in their own little bit of memory, lie to all of them (tell them they’re at &8000), and simply map them in as and when required. In this way, the tasks all think they’re at &8000. They do indeed all run from &8000, but they’re actually elsewhere and the memory mapping is simply fiddled as appropriate. I’m not sure I’d call it a bodge; it’s a rather ingenious solution to give proper multiple concurrent tasks to a late ‘80s operating system.

Okay, it wasn’t so hot with 28MiB wimp slots, but then the idea was conceived when such a thing was not thought likely – the MEMC could only address 4MiB, and you could precariously daisy-chain them to a maximum of 16MiB, which was as good as it got given the processor could only address a total of 64MiB.

Maybe, like a lot of RISC OS, it hasn’t aged well. Still, not sure I’d call it a bodge.
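To make that concrete, here’s a minimal sketch of the trick in C – invented names, an illustrative page size, and nothing like the actual kernel source (to which few of us have access):

    #define APP_BASE  0x8000u          /* where every task believes it lives */
    #define PAGE_SIZE 0x1000u          /* illustrative only; real hardware varied */

    typedef struct {
        unsigned phys_page[256];       /* physical pages backing this wimp slot */
        unsigned pages_in_slot;        /* current slot size, in pages */
    } Task;

    /* hypothetical MMU primitive: point one logical page at a physical one */
    extern void remap_page(unsigned logical, unsigned physical);

    void switch_to(const Task *incoming)
    {
        /* Nothing is copied or relocated. The page tables are rewritten so
           that &8000 onward now shows the incoming task's memory; the task
           resumes believing it never moved. */
        for (unsigned i = 0; i < incoming->pages_in_slot; i++)
            remap_page(APP_BASE + i * PAGE_SIZE, incoming->phys_page[i]);
    }

That’s the whole trick: a task swap is just a handful of page table updates, which is why it could happen on every Wimp poll without grinding the machine to a halt.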
It would likely be a lot less hassle to get a Pi and run some sort of debugger solution over serial or ethernet, so that the ‘target’ machine can be poked, prodded, and controlled from a host machine (which, of course, will keep on working while the system under test is stalled awaiting input). Actually… I would suggest that adding some sort of remote capability to DDT mightn’t be a bad idea, but I think first we need a debugger that isn’t so prone to falling over in the slightest breeze.
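The sort of remote capability I mean is tiny: a stub on the target that stalls in the breakpoint handler and services peek/poke/continue requests from the host. A rough sketch, with an invented command set and assumed serial primitives:

    #include <stdint.h>

    extern int  serial_get(void);        /* blocking byte read  (assumed) */
    extern void serial_put(uint8_t b);   /* blocking byte write (assumed) */

    static uint32_t get_word(void)       /* 4 bytes, little-endian */
    {
        uint32_t w = 0;
        for (int i = 0; i < 4; i++)
            w |= (uint32_t)serial_get() << (8 * i);
        return w;
    }

    /* Called from the breakpoint/exception handler: the target sits here,
       stalled, while the host inspects it - and the host keeps working. */
    void debug_stub(void)
    {
        for (;;) {
            switch (serial_get()) {
            case 'r': {                              /* peek one word */
                uint32_t v = *(volatile uint32_t *)(uintptr_t)get_word();
                for (int i = 0; i < 4; i++)
                    serial_put((uint8_t)(v >> (8 * i)));
                break;
            }
            case 'w': {                              /* poke one word */
                volatile uint32_t *p = (volatile uint32_t *)(uintptr_t)get_word();
                *p = get_word();
                break;
            }
            case 'c':                                /* continue execution */
                return;
            }
        }
    }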
Hmm… Sounds good as a single sentence. But, like using multiple cores, I wonder how much stuff is liable to break when it is actually implemented? |
David Feugey (2125) 2709 posts |
Correct. But it would be enough for app development (OK, not system dev)… I ‘love’ RISC OS so much when you have to reboot every 10 minutes because of some bug in your application code. Sometimes a second session of RISC OS would be nice. Even if a second Pi can help too :) |
André Timmermans (100) 655 posts |
Or when the application is stuck in an infinite loop from which you cannot break, and you need to unplug/replug the Pi (and the USB hub too, as it leaves the Pi powered). |
Rick Murray (539) 13861 posts |
Wait, what? Noooo! |
Colin Ferris (399) 1818 posts |
It would be interesting to know how the debugger works. A while back I found some 26/32 bit checking code that was the wrong way around – it seemed to work with the Iyonix but not with later machines. Long ago I seem to remember someone saying you could use it with modules!! |
Rick Murray (539) 13861 posts |
Given what I’ve read about it over the years, and Gerph’s comments, I think the best way to handle Debugger is to line up some shot glasses and down one every time you encounter a “no, it can’t possibly…” moment.
Long ago, Acorn were of the (bizarre) opinion that major applications would be written as modules. Actually, it sort of makes sense if you understand that they had created a whizzy new machine but hadn’t really moved on from the mindset of major applications being on (EP)ROM, which is the Beeb MOS equivalent of modules. Remember also that application modules have the application entry point (reached via *RMRun) entered in user mode. It is therefore theoretically possible to run a debugger on these without the issues that you’d encounter doing it in SVC or IRQ mode (as would be the case for the other module entry points).
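For the curious, a module header is just a table of word offsets from the module base – sketched here as a C struct, with the processor mode each code entry is called in noted (the start entry being the odd one out):

    #include <stdint.h>

    /* Standard RISC OS module header: each word is an offset from the
       module base (zero meaning "entry not provided"). */
    typedef struct {
        uint32_t start;        /* +0  entered via *RMRun   - USR mode */
        uint32_t init;         /* +4  initialisation       - SVC mode */
        uint32_t final;        /* +8  finalisation         - SVC mode */
        uint32_t service;      /* +12 service call handler - SVC mode */
        uint32_t title;        /* +16 title string */
        uint32_t help;         /* +20 help string */
        uint32_t commands;     /* +24 help/command keyword table */
        uint32_t swi_chunk;    /* +28 SWI chunk base number */
        uint32_t swi_handler;  /* +32 SWI handler code     - SVC mode */
        uint32_t swi_table;    /* +36 SWI decoding table */
        uint32_t swi_code;     /* +40 SWI decoding code */
    } ModuleHeader;

Since only the start entry runs in USR mode, that’s the only one a user-mode debugger can sensibly get its hooks into.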
Surely it would be the Iyonix but not earlier machines if the 32 bit check was wrong? That said, later machines introduced a number of changes that could cause things to act up. I found that the Castle era compiler quite liked using rotated loads, which would have failed on anything 32 bit other than the Iyonix and the Pi1 in ARMv6 legacy mode. We all know about the Pi3 and SWP. Or later ARMv7 and its enhanced rejection of gibberish. So there’s all manner of reasons that something might cease to work “as of this device”…
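To illustrate the rotated load hazard: the exact same misaligned word load gives three different outcomes depending on the core and its alignment configuration. (This is undefined behaviour in C; it’s written this way purely to show what the compiler-emitted LDR does.)

    #include <stdio.h>

    int main(void)
    {
        unsigned char buf[8] = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88};
        unsigned int *p = (unsigned int *)(buf + 1);  /* deliberately misaligned */

        /* A plain LDR from buf+1 (little-endian):
           - ARMv5 and earlier (e.g. Iyonix): loads the aligned word
             0x44332211 rotated right by 8 bits -> 0x11443322, the
             "rotated load" the compiler was relying on
           - ARMv6/v7 with unaligned access enabled: a true unaligned
             load -> 0x55443322
           - with strict alignment checking enabled: a data abort */
        printf("%08x\n", *p);
        return 0;
    }

Three different behaviours from one instruction – hence code that works on this machine and dies on that one. |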