Idea for discussion: Rewriting RISC OS using a high-level language
Andrew Hodgkinson (6) 465 posts |
Because the underlying forum software is general purpose and formatted code blocks were probably not what the author had in mind; and Textile is an acquired skill! |
|
Trevor Johnson (329) 1645 posts |
If you do manage to fit this in some time, maybe a Broadcom employee would be enticed to enhance it by adding VideoCore support!
…and an acquired taste! |
|
Trevor Johnson (329) 1645 posts |
With a bounty of £80-90k, it’s "the type of OS a team of 4 could write in 2 years from the ground up"! |
|
Steve Revill (20) 1361 posts |
Good luck finding a team of four people with the right skills prepared to work at minimum wage for two years… ;) I think the real price is closer to £400,000 if we accept the four-people-for-two-years figure – roughly £50k per person per year (and that’s also assuming no overheads). |
|
Terje Slettebø (285) 275 posts |
This is just a note to say that once I get my PandaBoard-based computer, I aim to get back to this work. Also, I may switch to the Norcroft compiler, so that the code can be used in the standard build process. I take it that changing the build process to something else like GCC is not currently in the pipeline? |
|
John Sandgrounder (1650) 574 posts |
To quote Paul Daniels: “Not a lot.” If it is not broke, don’t fix it. |
|
Terje Slettebø (285) 275 posts |
I guess it depends on your definition of “broken”. An OS written mostly in ARM assembly won’t work very well on a 64-bit ARM, for example (at least not in native mode). |
|
Trevor Johnson (329) 1645 posts |
|
|
Rick Murray (539) 13840 posts |
This would imply a scenario where ROOL is no longer around (or else the toolchain would be available). The question to ask, I guess, is in the (hopefully very unlikely) event that such a situation did arise, would ROOL be able to open-source the toolkit, or does it contain stuff proprietary to ARM Ltd? In short – I believe three things are required for the long-term continuance of RISC OS:
I feel that as long as the above conditions are met, then RISC OS can continue for as long as there is interest… |
|
Rick Murray (539) 13840 posts |
I’m not sure I see the 32 bit stuff dying out in any sort of hurry. The 64 bit ARM is for higher-end server platforms for expanded memory addressing, bigger/better number crunching, etc. Your lower end kit – dev boards, RPis, most mobile phones and tablets – will likely stick with 32 bit. I mean, if a tablet can use an off-the-shelf A10-alike and achieve HD playback, what’s the impetus to jump to 64 bit? [Compare: the lowly 80386 was produced up until mid-2007; the Z80 is still being produced…] |
|
Steve Pampling (1551) 8170 posts |
OK, that’s the storage and grunt-work beast: 19 inch rack mount, corner of the garage just under the router. |
|
Tim Rowledge (1742) 170 posts |
Surely we can see some attraction in the idea of a RISC OS version/derivative/descendant that can make good use of a quad- or octo-core 3GHz 64-bit CPU? |
|
Terje Slettebø (285) 275 posts |
Thanks for that link, Trevor.
Rick: Yes, 32-bit ARM will probably be around for a long time, but if the PC world is anything to go by, these days hardly a PC is sold that isn’t 64-bit. I mentioned 64-bit because it appears that all new development aimed at increased performance on the ARM will happen on that platform, so if we “only” support 32-bit, then RISC OS computers will hit a performance ceiling sooner or later. I for one don’t use ARM/RISC OS because it uses less power; I’d like to have as much computing power as possible, and I’d think that goes for most of us using it as a PC (and not for mobile or embedded stuff).
Furthermore, I still think that with much of the OS being written in assembly code, this hinders development: it limits the developers to those who are able to program in assembly code, and, as I said in the first post in this thread, assembly code tends to take more time to develop than a higher-level language.
So there are several reasons for moving to a higher-level language for RISC OS, portability to new platforms and ease of development being the major ones. Consider an OS like Unix/Linux: where would it be today if it had been written in 80386 assembly code? |
|
Rick Murray (539) 13840 posts |
I wonder how much is chipfabs wanting to push their latest hardware? That said – there are a lot of applications for a modern PC (primarily in the gaming/multimedia arena) that need a huge amount of oomph. Plus the “PC” way to solve problems since the 386 era has been to throw more power at the problem. Moore’s law is getting in the way, so different technologies crop up to keep moving forward.
Technically we have already hit one, with no support for multi-core devices. However, the difference between wishlists and reality is the lack of applications that would actually benefit from such a thing. Or to put it differently: Linux can play HD H.264 video fullscreen in realtime; on RISC OS on the same hardware, anything over about 320×240 H.263 suffers. Should we not look to ways to better utilise what we already have?
I use RISC OS for pleasure. For the hardcore stuff I use a PC. I am running a photo editor comparable to PhotoDesk (but simpler to use, and nice for PNG transparencies), a web browser with a few dozen tabs, a video transcode (H.264 from my phone to XviD-HD), and so on – stuff that RISC OS has never traditionally been strong at. For me, RISC OS and a PC serve different purposes. There is an irony that I can run a RaspberryPi from the PC’s USB and use a USB video box to receive the composite video output. ;-)
I want power when I need power. I don’t want a bucketload of power running 100% all the time when I’m not doing anything taxing. Thankfully modern PCs are a lot better at idling sensibly than the older generations.
Indeed, but I can give you two-and-a-half reasons against moving to a higher-level language:
1. There is no point rewriting RISC OS if we are going to clone what it is today with all its eccentricities. This means we will start to look at what it is we actually want and what it should do. Assuming we’re in the realms of fantasy where “writing an OS is an easy weekend job if we all pitch in”, you know what we’ll end up with? A funny-looking not-quite-Unix that uses a non-standard filing system.
1.5 (the half reason) How much effort would it take to replicate an API aimed at the assembly language coder? Consider OS_Mouse or OS_Byte – none of this stuff follows any sort of generalised call/return procedure. For OS_Mouse, R0-R2 are used and everything else is preserved. Can you imagine how much glue code it will require to interwork the API with a high-level language? (A rough sketch of such a veneer follows this post.)
2. Who is going to rewrite an entire OS to be exactly what we have now, only in a different language?
Don’t get me wrong – you do have valid points. The problem is that I don’t see anybody with the sort of free time it would require to reimplement RISC OS in C; and I fear that if it were to come to pass, the result would only be RISC OS as we know it via some sort of “legacy layer”. |
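As a rough illustration of the glue code mentioned under point 1.5: a C veneer over OS_Mouse using the SharedCLibrary’s _kernel_swi interface might look something like the sketch below. The mouse_state type and read_mouse name are invented for the example; only the SWI itself (OS_Mouse, &1C, position in R0/R1 and button state in R2) comes from the thread.

```
/* Hedged sketch: a C veneer for the OS_Mouse SWI (&1C) via the shared
 * C library's _kernel_swi call. The SWI returns the pointer position in
 * R0/R1 and the button state in R2, so the veneer has to marshal raw
 * register values into something C code can use. */
#include <kernel.h>

#define OS_Mouse 0x1C                 /* SWI number */

typedef struct {
    int x;                            /* OS units */
    int y;
    int buttons;
} mouse_state;

_kernel_oserror *read_mouse(mouse_state *m)
{
    _kernel_swi_regs regs;
    _kernel_oserror *err;

    err = _kernel_swi(OS_Mouse, &regs, &regs);   /* no input registers needed */
    if (err == NULL) {
        m->x       = regs.r[0];
        m->y       = regs.r[1];
        m->buttons = regs.r[2];
    }
    return err;
}
```

Every SWI with its own ad-hoc register convention would need a veneer along these lines, which gives a feel for the scale of the interworking problem being described.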
|
Terje Slettebø (285) 275 posts |
Yeah, I’ve been a bit puzzled about the 64-bit craze, myself. Moving from 16 to 32-bit was an obvious step, as 16 bit doesn’t let you do much. However, from 32 to 64 bit…? I mean, how often do you need to represent numbers that are 64 bit long? Pointers, I can understand, but all of this also takes twice as much memory and cache space as the 32-bit equivalent, which makes you wonder how much, if any, you gain from it.
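To put a rough number on the memory/cache point above: a small, pointer-heavy structure roughly doubles in size when pointers grow from 4 to 8 bytes. A minimal sketch (the struct node type is just an example):

```
/* On a typical 32-bit (ILP32) target this node is 12 bytes; on a typical
 * 64-bit (LP64) target it is 24 bytes – two 8-byte pointers plus the int
 * and alignment padding – so a linked structure built from it roughly
 * doubles its memory and cache footprint. */
#include <stdio.h>

struct node {
    struct node *next;
    struct node *prev;
    int          value;
};

int main(void)
{
    printf("sizeof(void *) = %lu, sizeof(struct node) = %lu\n",
           (unsigned long) sizeof(void *),
           (unsigned long) sizeof(struct node));
    return 0;
}
```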
Yeah, but competence in utilising DSPs may be an even rarer commodity than assembly programmers… Raising the level of abstraction of the code might be one way to get more people onboard and/or more done.
Likewise. :)
I’m not so sure about that. I’ve done a few “reimplementation projects” in my job, and my experience is that you should generally avoid changing functionality at the same time as you reimplement/refactor code (you should avoid wearing more than one “hat” at a time). Keeping the functionality constant helps testing and ensures backwards compatibility. Once you have a system in a better-organised form, it becomes practical to make more drastic changes. Compare a change like going from integer to floating-point computation, in assembly code versus e.g. C code: the former results in a more or less complete rewrite, while the latter may only need a single typedef change… However, believe me, I have no illusions about the scale of such a project. Yet RISC OS’s modular system makes it feasible to handle a part at a time.
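To make the typedef point concrete, here is a minimal, purely illustrative sketch – the value_t name and the average routine are invented for the example:

```
/* Illustrative only: in C, changing this one typedef from int to double
 * moves the whole routine from integer to floating-point arithmetic; the
 * compiler regenerates every load, store and arithmetic instruction. In
 * hand-written assembler, each of those instructions would have to be
 * edited individually. */
typedef double value_t;            /* was: typedef int value_t; */

value_t average(const value_t *samples, int count)
{
    value_t sum = 0;
    int i;
    for (i = 0; i < count; i++)
        sum += samples[i];
    return sum / count;
}
```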
I guess we’ll find out if we try. I’m a “let the rubber meet the road” kind of guy: instead of speculating too much on what the effect of something will be, I simply take a stab at it and get feedback from that.
I have no idea… :) Then again, I had not expected that someone would be able to almost singlehandedly port RISC OS to a completely new platform, either… Miracles happen… :)
As do you. Thanks for your input. Your assembly language course is a favourite of mine: I’ve always had a soft spot for assembly code, and ARM assembly in particular. I wouldn’t have spent more than a year on an ARM assembly-based assembler if I didn’t. Still, I’m not really attracted to writing in assembly code for any larger systems; it’s a job more suited to a compiler. |
|
Rick Murray (539) 13840 posts |
Or more specifically – that you can’t synthesise with 32 bit ops or using the mathsy parts? That said – does a normal home user need that sort of capacity onboard? Internet? Facebook? Skype? If a mobile phone can make a good attempt at those things, does the desktop machine need to attempt to rival a minicomputer?
Does anybody know what Linux uses on the OMAP3? Is it a binary blob or is it code? I’m just wondering how much is generic and how much is system specific, and of that, how much is ‘open’.
Being realistic, then, I think we would need to move the C library out of a module and into some of the lower level kernel space – because if we’re going to look to providing more of the core modules written in C, incrementally or otherwise, we’re damn well gonna need C to be initialised beforehand! Actually, I’ve just had a psycho-crazy thought (so please ignore me…): I await Jeffrey/Andrew/Steve/etc to tell me why this is a spectacularly dumb idea. ;-)
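For what it’s worth, a rough sketch of why that ordering matters: the C side of a RISC OS module built with CMHG-generated entry veneers cannot run its initialisation entry without a working C run-time and library underneath it. The module_init name and the body below are illustrative, not taken from any existing module.

```
/* Rough sketch of a C-implemented RISC OS module's initialisation entry,
 * as declared to CMHG (names illustrative). The CMHG-generated veneer
 * sets up a C environment before calling this function, which in turn
 * relies on the C library (here, malloc) already being available – hence
 * the point above about getting C initialised early if core modules are
 * to be written in C. */
#include <stdlib.h>
#include <kernel.h>

static void *module_workspace;

_kernel_oserror *module_init(const char *cmd_tail, int podule_base, void *pw)
{
    (void) cmd_tail;
    (void) podule_base;
    (void) pw;

    module_workspace = malloc(1024);        /* needs a working C heap */
    if (module_workspace == NULL) {
        static _kernel_oserror err = { 0, "Out of memory" };
        return &err;
    }
    return NULL;                            /* NULL indicates success */
}
```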
That particular example opens up a world of pain. Should the compiler favour accuracy or speed? Should it use VFP or NEON? On the OMAP3, NEON is considerably faster, but only works with single precision. What should happen if the desired FPU is not present – load an emulator, or attempt an on-the-fly patch to use an alternative FPU? [As long as the basic opcodes have a 1:1 correspondence, this should mostly work, although differing number formats might need fixing up.] Sorry, I’m probably overthinking this.
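One speculative alternative to on-the-fly patching is to pick a floating-point back end at run time and dispatch through a function pointer. In the sketch below, have_vfp/have_neon are hypothetical capability probes – on RISC OS one might instead ask the kernel or a VFPSupport-style module – and the NEON/VFP routines are assumed to live in separate files built with the matching compiler options (e.g. GCC’s -mfpu=neon or -mfpu=vfp):

```
/* Sketch: run-time selection of a floating-point back end instead of
 * patching code. dot_soft is a plain C fallback that needs no FPU at all;
 * dot_vfp and dot_neon are assumed to be compiled elsewhere with the
 * appropriate FPU options. have_vfp()/have_neon() are hypothetical probes. */
#include <stddef.h>

typedef float (*dot_fn)(const float *a, const float *b, size_t n);

extern float dot_neon(const float *a, const float *b, size_t n);  /* NEON build */
extern float dot_vfp (const float *a, const float *b, size_t n);  /* VFP build  */

static float dot_soft(const float *a, const float *b, size_t n)
{
    float sum = 0.0f;
    size_t i;
    for (i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

extern int have_neon(void);   /* hypothetical hardware probes */
extern int have_vfp(void);

float dot_product(const float *a, const float *b, size_t n)
{
    static dot_fn dot = NULL;
    if (dot == NULL) {                      /* choose a back end once */
        if (have_neon())     dot = dot_neon;
        else if (have_vfp()) dot = dot_vfp;
        else                 dot = dot_soft;
    }
    return dot(a, b, n);
}
```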
Twice. Once is crazy-awesome. Makes you wonder what might have been possible if the companies involved, you know, got over it and worked together.
I’ll second that, though these days it has a lot to do with my brain resembling a Swiss mountain cheese (white, smelly, and full of holes! [photo]). |
|
Terje Slettebø (285) 275 posts |
Oh, I think it has everything to do with addressing capabilities (at least for 64-bit processors). Unless you use parts of registers, I’d think that most of the time, that extra 32 bits of registers is simply wasted. For things like SIMD operations, large registers are useful, but that’s kind of specialised.
I guess it depends on what you’re interested in. I’m especially interested in computer graphics and animation, and in that field you can never have too much processing power… :) Yes, I know: These days, that’s generally handled by a GPU, but as long as we can’t use it…
You’re right about that. :)
There might be a compiler flag to choose between VFP and NEON for floating point, but probably VFP will be used, and at least on the Cortex-A9 it’s much faster than on the Cortex-A8. For processors without VFP, I guess it could use a VFPEmulator module. Speaking of high-performance ARM processors: you may have heard the rumours about Nvidia’s Project Boulder. |
|
nemo (145) 2546 posts |
Crucially though, those 64bit machines can still run 32bit software.
One can represent 64bit numbers on an 8bit system if one desires. It’s more efficient on a 64bit system. The reason for the change to 64bit is to deal with:
To put this in context, my day job currently involves dealing with PDF files containing 10,000,000 pages. 32bit ints are a pain in the behind under such circumstances. |
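To illustrate nemo’s first point with a made-up fragment: a 64-bit value can be handled on a 32-bit machine by splitting it into two words and propagating the carry by hand – exactly the kind of work a 64-bit CPU (or a compiler’s built-in long long support) does in a single step:

```
/* Sketch: synthesising a 64-bit addition from 32-bit halves. A 32-bit
 * (or even 8-bit) machine can do this, it just takes extra instructions
 * and registers; on a 64-bit machine it is a single add. */
#include <stdint.h>

typedef struct {
    uint32_t lo;
    uint32_t hi;
} u64_pair;

u64_pair add64(u64_pair a, u64_pair b)
{
    u64_pair r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);   /* carry out of the low word */
    return r;
}
```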