The Future of DTP, Illustration and general GFX/Audio using RISC OS desktop computers (Is 64-bit!)
Sveinung Wittington Tengelsen (9758) 237 posts |
Mr. Semmens, pardon the assumption – Arm Ltd. has engineers making up some 80% of its total staff these days, I believe, so I made a statistical blooper there. Erred on the side of caution. Re-read the subject… “The future” is now, compared to when I did art exhibition catalogs (1), books (1), rave flyers (lots) and a few CDROM covers/inlays plus flyers for the CDROM Dept of Oslo Uni to put butter on my bread. The tools I had were great for those jobs (ArtWorks, Photodesk, Ovation Pro mainly) but are just tangentially relevant to what I wish to do now. I boycott Apple and M$ for very good reasons, and Linux (Fedora w. Cinnamon) isn’t conducive to it either. It’s a tall order, requesting my favorite platform to go where (in a parallel reality?) it went some 12 years ago. Though I’ve yet to encounter a valid argument that 64-bit RISC OS won’t be a huge benefit to the platform as a whole, even if “just” using all four cores on the RPi 4’s CPU for starters. Then R-Comp could order a few SoCs from MediaTek or others and see if it’d run well there too. Otherwise, to heck with it. 8) <-Pixeleyes. |
Clive Semmens (2335) 3276 posts |
This, absolutely. Which was how I eventually ended up working at ARM, after various jobs where writing ARM assembler was actually a useful skill, albeit not the main job anywhere. |
Stuart Swales (8827) 1357 posts |
What Assembler-in-BASIC did bring was an easy-to-use, free developer tool for the hobbyist or small startup that needed to optimise bits of code or move on to producing modules to extend the system. Wasn’t the Desktop Development Environment five hundred quid or some such? |
Clive Semmens (2335) 3276 posts |
No idea. Never had one. Until I started at ARM, I’d only ever written 26-bit ARM, using the embedded assembler in BASIC; at ARM, the only assembler I ever wrote was tiny snippets to check my understanding of instructions. |
Stuart Swales (8827) 1357 posts |
Here we are, from the Acorn September 1989 Price List (£, inc. VAT, sole use): SKL80 ANSI C (Release 3) – 171.35 |
Rick Murray (539) 13850 posts |
There was also Asm by… <rummages around in brain> Nick Roberts? Quite useful given the original “DDE” was split into the assembler and the compiler and sold separately. £171.35 for the compiler, £228.85 for the developer’s toolbox (I have no idea what that is), and £228.85 for the assembler. Prices from the September 1989 price list (on my site). The compiler and linker came with my A5000, Nick provided the assembler, and there was an abandoned Pascal compiler floating around too. So, obviously, just for the sake of it, I wrote a simple little program with a bit in C, a bit in assembler, and a bit in Pascal. Actually, I wasn’t so good at C in those days, so a lot in assembler. Edit: Stuart beat me to the prices |
Chris Mahoney (1684) 2165 posts |
See page 4. Basically it’s “the rest of the DDE”. |
David Pilling (8394) 96 posts |
>could it conceivably be used in 64-bit RISC OS if it existed? Oh yes. But as others have said, a 64 bit ARM BBC Basic was never going to be a problem. >isn’t it the Assembler-in-BBC Basic part of what got us into this Not at the first order. Using assembler in BBC Basic has always been easy. It means there will be many programs which might have been platform independent but are not because someone slipped in a bit of assembler. The same thing must have happened when stuff moved from BBC Micro to ARM. Some lucky people will have been able to take BBC Basic software from BBC Micro to the latest processors with no change. If they’re just crunching numbers. You could see a massive amount of grief with other software. BBC Basic might have been some magic portable environment for writing software – but it wasn’t. |
David J. Ruck (33) 1636 posts |
A quick reality check of where other OSes are on 64 bit.

Linux, where the OS and applications are written in a high level language and sources are available: distros have moved to 64 bit and have largely dropped 32 bit support (some 32 bit distros do still exist and are supported, and there is the ability to run a 32 bit userland over a 64 bit kernel).

Apple mandated the move to 64 bit while still on x86, and dropped 32 bit compatibility entirely; old applications stopped working. But with a large and active developer community updating the most popular applications, this was not a problem.

Windows has been providing 64 bit OSes for years, but still supports a subsystem to run 32 bit applications. It’s not just for legacy applications; many currently developed apps are still supplied as 32 bit – look how much stuff is in C:\Program Files (x86). Indeed, it’s only been a couple of years since Visual Studio itself stopped being a 32 bit application (even though it could build both 32 and 64 bit stuff). So Windows hasn’t completed the transition yet, even with its vast developer community.

It can be argued that RISC OS stopped having a sufficiently large developer community even before it moved to 32 bit (given how much stuff I had to port without sources in the years after the Iyonix came out, and how much still hasn’t been touched since). Even if some wonderful 64 bit, multi-core, pre-emptive version were developed by unicorns tomorrow, not a single 3rd party application would run on it, and it would probably take years to get even a handful of big applications rewritten for it. All the rest of the applications for which people still use RISC OS would remain under emulation, and would never take any advantage of the new OS. 
So I see the situation as this: we’ve had a bit of a golden era of 32 bit RISC OS running natively on ARM hardware such as the Pi 4B and Titanium etc, but as 32 bit support is dropped from newer, faster 64 bit ARMs we’ll move back to emulation, as many did when the StrongARM Risc PC was no longer powerful enough. |
Stuart Swales (8827) 1357 posts |
It’s not as if we have something intrinsically against 64-bit architectures. I have been developing device drivers, hardware support libraries and applications for them for over twenty years (Sparc, Alpha, amd64). It’s just that a total rewrite of RISC OS for AArch64 wouldn’t in my mind justify the needed investment. Applications would get left behind, have to be run in emulation, performing worse than before on the same hardware. Even in 32-bit land, we still have titles that need the helping hand of Aemulor. |
Sveinung Wittington Tengelsen (9758) 237 posts |
So no new Golden Age (or the potential of one) with RISC OS 64 running on ever-faster Arm CPUs, and no looking closely at applying it profitably to other devices in mass-user markets. The technological starting point could be much worse than it is; the attitude and lack of vision could not. For starters, something better must be wished for. |
Stuart Swales (8827) 1357 posts |
Feel free to try to raise some capital! Would you risk your business and home on such a flaky prospect? I think not. |
John WILLIAMS (8368) 495 posts |
We could get ahead of the curve and go straight for 128 bit, thus leaping a generation. |
Rick Murray (539) 13850 posts |
John – 👍 Arguably, if RISC OS were rewritten in a higher level language (with a tiny bit of assembler glue to get going), then the number of bits of the processor, indeed the architecture, should become an irrelevance. |
Rick Murray (539) 13850 posts |
Wishing is useless. It’ll take cold hard cash and a lot of time and resources to design a secure, modern operating system from the ground up.
Sorry, but most of us around here are crusty old gits (I include myself in that group), which means we have enough real world experience to know what’s possible, what’s difficult, and what’s pissing in the wind. Feel free to set up a Kickstarter or whatever to get the ball rolling, but kindly cease accusing us of lack of vision. We’ve already discussed this. Waving pom-poms and wishing isn’t going to change the facts of the situation.

I will happily be a cheerleader for a shiny improved RISC OS, but it starts with a solid plan that is grounded in reality. What needs doing? How does this affect current RISC OS? What is the underlying API? How does it use C from the get-go? How does it retain the flexibility of RISC OS while being secure enough that people aren’t going to be pwned in a heartbeat? How do we convince chip bakers to provide GPU drivers? Alternatively, can we load Linux drivers and use those? How will the desktop work? What about multiple processes? What is the expected time required to get a kernel and command line running? What about a filing system? What about a desktop? How many devs are required to do this? Who pays them? What sort of SDK is required to ease porting existing software? What about POSIX to ease porting stuff from the Unix world? Should it be capable of running Android apps in order to have an easily available source of potential software? Who does all of this? How do they get paid for all of this work?

That’s just for starters. Want imagination and thinking big? There you go. I await your reply. |
Rick Murray (539) 13850 posts |
And yes, supporting POSIX and/or Android is a requirement as an OS without software behind it is a mere toy. The current incarnation of RISC OS does actually have some applications that are good and useful. Less than in the past, but some. The transition to 64 bit will, without emulation, get rid of most/all of them. So, replace them with what? |
Clive Semmens (2335) 3276 posts |
Can we not have a better language than C to write everything in, while we’re dreaming? Surely C is getting a bit long in the tooth now? Let’s think about the desiderata for a Flashy New High Level Language, fit for the 256-bit age… |
Dave Higton (1515) 3534 posts |
Designing a new OS from scratch would be a ludicrous undertaking. We need a translation of the existing RISC OS to 64 bits, for which the shortest and least difficult route would appear to be translating as much as possible to C. The existing API must be retained so as to maximise the chances that applications can be rebuilt. |
Dave Higton (1515) 3534 posts |
Here’s a technical question. To rewrite a module (for example) from assembler to C, a complete start from scratch looks like a very difficult proposition – you’d have to get most or all of it written before you could start serious testing. So what about the possibility of starting at the top level, substituting C and retaining all the calls to assembly language, and then gradually replacing a few functions at a time in C, so that the whole thing is at all times testable. Does that approach make any sense? Has anyone tried it or anything like it? |
Stuart Swales (8827) 1357 posts |
Much assembler code is written without reference to the procedure call standard that C callers mandate, so there would be lots of jumping through marshalling shims. Just bite the bullet. What you could do perhaps is to write the replacement module to sit alongside the existing one, and just vector to new code if possible (e.g. a replacement SpriteOp function), then fully replace. |
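To illustrate the marshalling Stuart mentions: SWI-style entry points receive a block of registers rather than APCS-style C arguments, so a C rewrite sitting behind the old interface needs a shim along these lines. This is a sketch only – the struct mirrors the general shape of the C library’s `_kernel_swi_regs`, and `add_op_c`/`add_op_shim` are invented names.

```c
#include <assert.h>

/* Register block as seen by a SWI-style caller (r0..r9),
   loosely modelled on _kernel_swi_regs. */
typedef struct { int r[10]; } regs;

/* The natural APCS signature for some operation... */
int add_op_c(int a, int b) { return a + b; }

/* ...and the marshalling shim presenting it behind the
   register-block interface the old assembler callers expect:
   arguments come in via r0/r1, the result goes back in r0. */
void add_op_shim(regs *r)
{
    r->r[0] = add_op_c(r->r[0], r->r[1]);
}
```

Every legacy entry point needs one of these, which is the “jumping through marshalling shims” cost.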
Graeme (8815) 106 posts |
Instead of rewriting an OS that is already written, could an assembler be written in something like C to ‘compile’ the 32-bit source code into 64-bit machine code? As a simple example, ADD R0,R1,R2 becomes ADD X0,X1,X2. There are no LDM instructions in 64-bit, so these could become a list of LDR instructions and an ADD/SUB. There is no conditional ADDEQ, but this could become BNE jump_over : ADD : .jump_over. Of course, this raises issues such as accessing PC directly (which 64-bit does not allow), including jump tables. Anything that assumes one line of assembler produces one instruction would need to be looked at. I am also not suggesting that the output code would be the most efficient. There could be workarounds for some of these things. What percentage of the code could be assembled (or compiled) this way? Or, to put it another way, how much would not work and would need to be looked at by hand? |
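A toy sketch in C of the kind of mechanical translation Graeme describes, handling just two of his cases: renaming Rn registers to Xn, and expanding a conditional ADDEQ into a branch over an unconditional ADD (using a GNU-style local label). A real translator would need a proper parser and instruction tables; this only shows the shape of the idea.

```c
#include <stdio.h>
#include <string.h>

/* Rename 32-bit register names (R0..R9) to their 64-bit
   counterparts (X0..X9). Toy version: single-digit regs only. */
void rename_regs(const char *in, char *out)
{
    size_t i;
    for (i = 0; in[i]; i++)
        out[i] = (in[i] == 'R' && in[i+1] >= '0' && in[i+1] <= '9')
                 ? 'X' : in[i];
    out[i] = '\0';
}

/* Translate one source line. ADDEQ has no A64 equivalent, so it
   becomes a branch on the opposite condition over a plain ADD. */
void translate(const char *in, char *out)
{
    char tmp[128];
    rename_regs(in, tmp);
    if (strncmp(tmp, "ADDEQ ", 6) == 0)
        sprintf(out, "BNE 1f\nADD %s\n1:", tmp + 6);
    else
        strcpy(out, tmp);
}
```

Even this trivial case shows the catch Graeme raises: one input line can become three output instructions, so anything that relies on instruction addresses or sizes (jump tables, PC-relative tricks) breaks.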
Paolo Fabio Zaino (28) 1882 posts |
@ Dave
IMHO this is a very good question, and a topic that should probably have its own thread instead of being hidden in this one. Just my 0.02c |
David Pilling (8394) 96 posts |
I probably should not say this… I’ve written a lot of modules in C, using the standard Acorn tool cmhg (C module header generator). I always fancy I have a lot of the code of the Draw module in C – because I had to rewrite it to get Ovation Pro to work on Windows. You could redirect anything you did not have working to the original module. Compare SpecialFX, which redirects some Draw SWIs to the GDraw module. But developing code one SWI at a time would be easy enough. I do wonder, though: if I came along with a Draw module written in C, who would use it? Well, it’s 10% slower and the bugs are in different places. Not so bad if you’re going to replace emulated code, where it would have a speed advantage, but on 32-bit hardware not much use. Are there some modules that have been redone in C? |
Rick Murray (539) 13850 posts |
The irony is that C was intended for writing operating systems. Applications, not so much. But it existed and was standardised before the authors of modern trendy languages were even born…
Yes. Especially the sort of work that would need to done in order to make any sort of ripple in the world (and by ripple I mean hoiking an itty bitty piece of gravel into an ocean). Our claim to fame seems to mostly be “this was the OS that was born with the ARM processor”. Better than obscurity, but not on the same level as, say, Symbian. Which, ironically, seems to have essentially vanished from the face of the earth. At least the people who had the rights to RISC OS had the foresight to make it open source, which is how we’re even having this discussion.
No, NO, and a billion extra times NO. If we’re going to create an OS based around the same API as we have right now – which might preclude the use of anything other than assembler without jumping through a lot of hoops, assuming it’s even possible with AArch64 to have something like that (where’s the link register? which register is the program counter?) – then we’re running a 32 bit API on a 64 bit processor. Why? There’s absolutely no sensible reason to take RISC OS 64 bit and retain the existing API. That would be broken by definition. Might as well just create an emulator and call it a fait accompli.
It’s the same with a module written in assembler. ;) The benefit of C is that you can easily cobble together a test harness to run as an app to exercise the code. I feel that very little of what’s in the RMA running in supervisor mode actually needs to be in supervisor mode. So bits can be tested individually in user mode, and added together until the basis of the code has been written and tested. Then knowing the groundwork is good, it can be patched into the module stuff and tested again “as a module”. Or you could bash out some code, shoving it directly into a module, hope like hell it actually works. I’m guilty of both approaches. One is a useful way to design, the other is just laziness.
I’m not sure that it’s possible to answer that without actually trying. I’m not familiar with AArch64, and no real desire to learn, but I can’t shake the feeling that there’s some sort of lurking gotcha in there somewhere. |
Paolo Fabio Zaino (28) 1882 posts |
@ David Pilling
I am organising a set of Zoom meetings on “Everything coding on RISC OS”. It would be an absolute honour to have you join and share your coding experience, and whatever else you wish to pass on to people wanting to learn more about coding on RISC OS. So if you wish, please let me know – that would be a great place to “say this” and more :) Thanks! |