The Future of DTP, Illustration and general GFX/Audio using RISC OS desktop computers (Is 64-bit!)
Rick Murray (539) 13850 posts |
Me.
A fact, or just musing? I’d trade a slight slow down (probably barely noticeable on a ~1GHz machine) in order to advance towards not having massive wodges of assembler. Maybe twenty years ago when I had a brain that almost sort of functioned (on a good day) and lots of time, I’d have been willing to delve into the piles of assembler to do stuff. These days? I’m afraid “Oh, **** this” and going and putting the kettle on is more my speed. Especially when it comes to programmers who have been “clever”. I won’t name names, but I have some sources to things and trying to trace through what’s going on is a real head against the wall scenario. I completely and utterly empathise with the guy working his way through Impression. I don’t like Impression and I’d not use it (I’m an OvationPro guy), but by god I’d buy that man a beer. This’ll have to do for now: 🍺
By being written in a more readable language, it should hopefully be easier to track down issues. Plus, as I said, rip out chunks of code and run them as an app to test stuff without risking trashing the kernel.
Maintainability.
Julie has rewritten Obey. I thought I saw a reference in the Git that somebody has redone RAMFS in C. Little by little, working through the mass of obscure assembler that has zero tolerance for error plus leaves all of the annoying grunt work up to you (stacking and unstacking stuff, remembering what’s in which register and when… the sort of mundane crap compilers were invented for). Also Cloverleaf have written a replacement Filer, but at half the size of the entire OS (!) and a memory requirement that wouldn’t fit any MEMC machine, I don’t see that becoming a part of RISC OS anytime soon. ;) So, yeah. If you make a Draw module in C and it does the same stuff (more or less) as the existing Draw module, I’ll use it. |
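Rick's point about ripping chunks out and testing them as a plain app is worth a sketch. This is a hedged illustration, not from any real RISC OS source — the function and names are invented: if the core logic of a C rewrite is kept free of OS calls, the same translation unit can be linked into the module or into an ordinary desktop binary for risk-free testing.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical helper of the kind a Draw-style module might contain.
// It touches no OS state, so it can be exercised from a normal
// application without any risk of trashing the kernel.
std::int32_t clamp_coord(std::int32_t v, std::int32_t lo, std::int32_t hi)
{
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}

#ifdef STANDALONE_TEST
// Built with -DSTANDALONE_TEST this file becomes a plain application;
// without it, the same code could be linked into the module proper.
int main()
{
    std::printf("%d %d %d\n",
                clamp_coord(-5, 0, 100),   // below range
                clamp_coord(50, 0, 100),   // in range
                clamp_coord(999, 0, 100)); // above range
    return 0;
}
#endif
```

The same trick works for any routine that takes inputs and returns outputs without touching hardware or kernel workspace — which, in a C rewrite, should be most of them.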
Clive Semmens (2335) 3276 posts |
I’m not sure modern, trendy languages were quite what I had in mind. In my experience, trendy does not correlate positively with well-designed, programmer-friendly, and error-resistant… |
David Pilling (8394) 96 posts |
> A fact, or just musing?

It’s the way to bet – and if you want fast I can write hard to understand C. Experience with Ovation Pro on Windows: the Acorn Draw module does odd things, so does my code, just in different places. Both are strictly wrong, but people who do drawings are happy with what they’re used to. Talking of GDraw – C modules might gain some hold if they offered more than the existing ones. On another subject: “Linus Torvalds, the creator of Linux, has reportedly confirmed that Rust code – barring unforeseen circumstances – will appear in version 6.1 of the Linux kernel, a much-anticipated milestone.” |
Steve Pampling (1551) 8172 posts |
I’ve not seen that, but at a guess it has all the code for the things it does actually IN the module. The Cloverleaf code probably only tries to produce the same API without looking at what could be done to be more efficient. |
Steffen Huber (91) 1953 posts |
But then, C is also not at all well-designed (based on today’s knowledge of course, not necessarily the 70s), programmer-friendly, or error-resistant. Only slightly more so than ARM Assembler, which is why (in addition to “compilers are easy to write, so even RISC OS has one”) we are even thinking about C as a viable choice for an OS to be written today. |
Clive Semmens (2335) 3276 posts |
Indeed it’s not. That’s why I (only half tongue in cheek) suggested we need to actually think about the desiderata for a new language – not a trendy, modern one, but an actually well-designed, programmer-friendly, and error-resistant one. Someone ought to be doing that, but I’m not sure (understatement…) we’re really the people to design it. |
Rick Murray (539) 13850 posts |
Ask Google the best language to write an operating system, and you’ll get many answers but they tend to have one thing in common – C is listed first. However, I think before one worries about what language to use, one needs to work out such things as how apps call out to libraries, how system modules are loaded, how OS calls work, etc etc. What’s the runtime environment that this new RISC OS provides? Dumping some code in at &8000 and saying “you’re on your own now” probably won’t cut it any more. Indeed, the entire structure of the OS is that it responds to user input and application requests. Maybe it ought to be telling the application what to do instead? Plus it’s pretty obvious that there can’t be a single VDU context, nor… It may be that C(++) comes out as the most useful language to use (certainly the most widely spoken). Or something else might be more appropriate. |
Clive Semmens (2335) 3276 posts |
Oh, I know. And of the available languages, it very likely IS the best. But it isn’t well-designed, and it’s not very programmer-friendly, and it’s not very error resistant, but like QWERTY, it’s so well-established that it’s not really likely to get supplanted. I wasn’t suggesting we try to choose a better language from those available. I was suggesting (rather tongue in cheek) that we should work out the desiderata for a language that someone (certainly not me) would then design to fit those desiderata. Not a trendy, modern language – but an actually good one. Did you see that flock of pigs flying over? |
Jeff Doggett (257) 234 posts |
Sorry to take this a bit off topic, but here is a video about the Worst Programming Language. |
David J. Ruck (33) 1636 posts |
I’m surprised no one has mentioned that gerph has already rewritten most of RISC OS in Python. Python doesn’t really have any dependency on the underlying integer size, so could easily transition from 32 to 64, or even 128 or 256 bits. Second choice would be C++11 or later (ignoring some of the old nastiness), for the familiarity with C, but with STL classes for strings and RAII for managing object lifetimes. |
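The RAII suggestion deserves a small illustration. A minimal sketch (the class and function names are invented for the example): the destructor releases the resource on every exit path, which is exactly the mundane lifetime bookkeeping that assembler — and plain C — leaves to the programmer.

```cpp
#include <cstdio>
#include <string>

// RAII wrapper: the file handle is owned by the object and closed by
// the destructor, so early returns and exceptions cannot leak it.
class ScopedFile {
public:
    explicit ScopedFile(const std::string& path)
        : fp_(std::fopen(path.c_str(), "r")) {}
    ~ScopedFile() { if (fp_) std::fclose(fp_); }  // runs on every exit path
    ScopedFile(const ScopedFile&) = delete;       // no accidental copies
    ScopedFile& operator=(const ScopedFile&) = delete;
    bool open() const { return fp_ != nullptr; }
    std::FILE* get() const { return fp_; }
private:
    std::FILE* fp_;
};

// Three exit paths, zero explicit cleanup calls.
bool first_byte(const std::string& path, int& out)
{
    ScopedFile f(path);          // closed automatically when f leaves scope
    if (!f.open()) return false;
    out = std::fgetc(f.get());
    return out != EOF;
}
```

In assembler or C, each of those return paths would need its own cleanup code (or a goto-to-cleanup convention); here the compiler generates it.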
Clive Semmens (2335) 3276 posts |
So Python meets at least ONE of the (presumed) desiderata for a new language. Does it meet any of the others? What are the other desiderata? There are the three I was on about:

1. Well-designed
2. Programmer-friendly
3. Error-resistant

And then there’s the ONE that C meets:

4. Platform independent

Actually, the one desideratum that Python apparently meets ought to be part of (4) – and C notably fails to meet that part… For what it’s worth, the reason I liked (the original, 26-bit) ARM assembly language was that it met (1) and (2) (for me at least) – but of course it spectacularly fails to meet (3) or (4)… There must be other desiderata, but I’ve done my bit. |
Rick Murray (539) 13850 posts |
I’m not sure Python is exactly programmer friendly when not only does whitespace have meaning (contrary to many other languages), but there was a massive shift between two versions of it that is pretty much not forwards compatible. This isn’t to say that the change wasn’t useful, but at least C and C++ don’t think of themselves as being the same language…
C is, and isn’t, platform compatible. Pure C using the standard libraries is, so long as you take differences in what an “int” refers to into account (which is why many use definitions like uint32 to always specify). Where C fails to be platform compatible is when one uses a platform specific library. That’s not the fault of C, that’s just how it is. An example from myself can be found here (about two thirds of the way down). In order to make that work, I needed to write a clone of the BGI graphics library for RISC OS, and pass everything through wrapper code that was platform specific and dealt with stuff like whether the top or the bottom of the screen was 0,0. Once that was done, the exact same code compiled on both systems. It’s worth providing a far better example. Linux itself. Written in C, available on pretty much anything that is physically capable. |
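The “uint32”-style definitions Rick mentions have been standard since C99 (and C++11) as `<stdint.h>`/`<cstdint>`, so there is no need for home-grown typedefs. A small sketch of why they matter for portable code (the `pack_rgb` function is an invented example):

```cpp
#include <cstdint>
#include <climits>

// Fixed-width types pin down sizes that plain "int" and "long" leave
// platform-dependent. This holds on every conforming implementation
// that provides uint32_t, whether the machine is 32- or 64-bit:
static_assert(sizeof(std::uint32_t) * CHAR_BIT == 32,
              "uint32_t is exactly 32 bits wherever it exists");

// A field packed this way has the same layout in a 32-bit RISC OS
// build and a 64-bit Linux build -- no "how big is int here?" guessing.
std::uint32_t pack_rgb(std::uint8_t r, std::uint8_t g, std::uint8_t b)
{
    return (std::uint32_t(r) << 16) | (std::uint32_t(g) << 8) | b;
}
```

Code that instead packed values into an `unsigned long` would silently change layout between ILP32 and LP64 platforms — exactly the trap the fixed-width types close.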
David J. Ruck (33) 1636 posts |
Python is very programmer friendly, and white space is not an issue for anyone who formatted their other language code in any way remotely readable – i.e. for all except for BBC BASIC programmers still putting multiple statements on one line because it saved 24µs on the 2MHz BBC B. Transitioning from Python 2 to 3 was not a massive shift; it was inconvenient for existing code bases, but it needed to be done to give the language a future, and it got done (unlike the prevarication in the RISC OS world over any breaking change). In my current project I’ve ported 919 files totalling 209463 lines, largely using automation and the six (2×3) compatibility library. It wasn’t entirely painless, but it was very useful to deal with some of the technical debt, and several bugs got fixed into the bargain. Python 3 has been out for 14 years now and Python 2 support ended 2 years ago, so this is not something you can use as an excuse going forward. |
Clive Semmens (2335) 3276 posts |
This BBC BASIC programmer puts multiple statements on one line not for speed, but to IMPROVE readability. If statements are closely related, putting them on one line, with space-colon-space not just colon between them, allows them to be seen as a related unit. It also means you get more on screen at the same time, so you don’t have to scroll up and down so much. What I don’t do is put more than about 120 characters on one line, despite having a screen four times that wide – for readability, and to allow me to have several views either into different parts of the same file, or into different files. (Or possibly a Draw window, or Netsurf, or whatever.) |
Sveinung Wittington Tengelsen (9758) 237 posts |
> > It must be wished for, for starters, to have something better.
>
> Wishing is useless. It’ll take cold hard cash and a lot of time and resources to…

With no people, no money. Sound familiar? Except in this case, the hardware (CPUs, GPUs, DSPs etc) is here, very much improved in all respects, present in billions of devices globally. Making an “It’s M$ Windows for ARM!” laptop into a RISC OS (64) laptop would be fun, but at that level the post-Acorn world (RPi, R-Comp) has the ability to make any format computer hardware. So it’s purely a software (operating system) issue which is, as you cogently point out, quite a project. A Torvalds fellow did that a few years ago, and ended up with “Linux” and its hundreds of distros, not all compatible on issues like package format. RISC OS 64 will not have to carry that sort of ballast. Oh, I’m a COG too. Growing up on the coast of north Norway in the 1960s–70s I learned very early on not to pass water upwind. My father taught me how to play chess when I was four. We had lots of bookshelves. Fam got a TV in 1975. There was only one channel in those days (NRK) so I can’t claim to have missed anything there. I’m a reader. Must admit that it’s part nostalgia, a wish to use the same OS as my first computer, the Archimedes 310, on modern processors/GPUs etc. I think what makes RISC OS (the GUI/WIMP system) a delight to use on 32-bit rigs would be even more delightful on 64-bit rigs, even running emulated software. There would at least be something to build on. |
Clive Semmens (2335) 3276 posts |
Blimey. I’d been teaching IT for years by the time the Archimedes appeared. Was lucky enough to be in a position (deputy head of the institution) to order a classroom full of them… I’ve no nostalgia for anything earlier, really – a little for the BBC Micro, which I taught with, and had a work one at home in the holidays, but never owned myself. Played with 6502s on breadboards and soldered into veroboard, used to know 6502 machine code (never mind assembler…) but that’s completely gone now. |
Sveinung Wittington Tengelsen (9758) 237 posts |
Mr. Semmens, it’s basically the productivity of the RISC OS GUI I wish to save – and the Arm architecture has moved on “a bit” since the “Acorn ARM”s. Their latest Neoverse architecture is pretty focused on HPC systems, which is where a SOTA graphics/audio workstation should sit (along with a G715 and maybe an M85 on a SoC) – certainly not an architecture to use on phones. Arm appears to concentrate their development more on the HPC/server market these days, which is exciting because these CPUs (one or more) can do very well in “consumer” level desktops and laptops when made in those volumes. In the meantime there’s the MediaTek Dimensity 9200, which is capable of much more than what’s needed by a mobile phone. Read https://i.mediatek.com/dimensity-9200 closely and think about what operating system GUI you’d rather run on it. |
Stuart Swales (8827) 1357 posts |
Android??? |
Clive Semmens (2335) 3276 posts |
I know. I’ve got one in my Mac Mini. I don’t know the ARMv8 instruction set at all, it’s a completely new architecture – unlike ARMv7, which was a development from earlier architectures, and which I know very well (because I was responsible for documenting it). To be honest, I’m not interested to know ARMv8 – I’m very happy as a user of a computer using it, but only because it’s a good computer, I don’t give a damn what the underlying architecture is. I like RISCOS, because as you say, its GUI is good for productivity. But apart from !Draw, !Zap, and BBC BASIC, I’ve migrated away from it. I should be sad to lose those three, if I outlive the hardware they run on. If RISCOS can be successfully migrated to any other architecture to prolong its life, that’s good so far as I’m concerned. But I’m not holding my breath. I say “any other architecture” because ARMv8 is no easier a target than any other architecture, being utterly different from earlier ARM architectures. Making RISCOS architecture agnostic would be wonderful, but I honestly don’t see it happening. |
Rick Murray (539) 13850 posts |
Yup. I think he’s missed the fact that a ridiculously overwhelming majority by orders of magnitude 1 of mobile phones and similar devices use ARM processors. Plus, of course, since ARM design cores and leave others to bake the silicon, they’re not a one trick pony. The A series, the R series, the M series (hmm, does that spell something?) plus targeting server workloads as well as ultra low power for mobile devices. They have their fingers in many pies.
Me? Now? Whatever one can stream Bollocks Radio 2 and play Netflix videos on.

1 Estimates are in the order of 95%.
Rick Murray (539) 13850 posts |
This. I like Android over Apple because I have freedoms. I can wander the filesystem (not all of it, my devices aren’t rooted, but the non-OS parts), I can install apps from other places (like NewPipe that Google will never allow on the app store)… …but I have no desire to program an app, nor learn how to, and my affection for Android is only insofar as I know more or less how it works and behaves (even if they screw around with that in every major release). It’s a requirement only insofar as I have a number of apps that I use (banking, Amazon, music, movies…) that run on Android. Thanks to my phones and tablets, I’ve pretty much stopped using the PC. Never went beyond XP. I didn’t see any reason to upgrade to a shiny new machine to run the same stuff I use right now slightly faster. The stuff I use? Handbrake to rip DVDs for the tablet (though I rarely buy them now), and a very nice photo editor. Plus, once in a while, copying the mass of photos off my phone onto harddisc with DVD-R copies. RISC OS? That’s for fun. Keeping the geekier parts of my brain functioning. Also for writing many of my blog articles because, let’s face it, Android editors are still a pain in the arse and the on-screen keyboard is annoying and Android with Bluetooth keyboards still, in 2022, can’t handle dead keys for easy access to îñtérñâtìöñål characters like RISC OS since 1992 and Windows since 1995… It’s an embarrassing s**tshow when Zap on RISC OS can offer a greater ease of use than an OS and editor created by a company worth one and a quarter trillion dollars. When I was in my twenties, I cared about what operating system and processor I used. That’s why I use Windows about 0.25%, RISC OS about 5%, and Android for the rest. |
Graeme (8815) 106 posts |
Oh yes, there are a few. Here is an example: AArch64 has no STM or LDM instructions, so a list of LDR/STR and ADD/SUB instructions is going to be needed instead. Of course that would be too simple. The system stack pointer also needs to be 16-byte aligned. Put one 64-bit or one, two or three 32-bit values on the system stack, then pull it back off, and a failure happens. Yet your code looks right. AArch32 was designed for humans to write.
Clive Semmens (2335) 3276 posts |
AArch32 was designed for humans to write This, exactly*. And I’m a human (an odd one, probably, but still…a human). But the fact is that apart from being like AArch32 in being a lot more energy efficient than other architectures, AArch64 is totally different from AArch32. * Almost exactly. AArch26 was designed for humans to write. AArch32 was AArch26 stretched a bit, trying to remain sort of human-writable but accepting to a large extent that it’d be written by machines. (That’s pretty much a direct quote from some of the engineers responsible.) |
Colin Ferris (399) 1818 posts |
I was looking at one of the ARM64 books on programming – the author was waxing lyrical on the new format, saying how much better it was doing away with the XXXeq instructions. I wonder if anyone could make up a Module that could recreate old instructions – like the SWP module? |
Clive Semmens (2335) 3276 posts |
That suggests the author simply didn’t understand the logic of the old ARM instruction set.
That’s virtually the whole of the old ARM instruction set you’ve got to recreate – which is basically emulation. Of course someone could. RPi4Emu. Well, better than no RPi4, but for the moment, a real RPi4 is a hell of a lot better. |