Is RISC OS still a participation sport?
Rob Andrews (112) 164 posts |
I was thinking of being able to take out all of the grunt work: set it on a task, let it do all the boring endless stuff, and see what comes out the other end. |
Stuart Swales (8827) 1357 posts |
Hilariously badly – see https://www.riscosopen.org/forum/forums/2/topics/17593?page=3#posts-136657 |
Rick Murray (539) 13850 posts |
Yes. It’s not hard to translate assembler into C.
No, the process is simply translating assembler into C that is logically equivalent. There’s no understanding, so what you’ll end up with is a sort of bastardised version of the assembler code, written in C. It will be as much a pain in the arse to maintain as the raw assembler. Perhaps more so, as it won’t quite look like assembler to an assembler programmer, and it won’t look enough like C to a C programmer – see the sketch below.
Which is another way of saying No. ;)
Another problem is that the C runtime, if you’re actually going to use any of the useful parts of C, has certain prerequisites. Take a look at what CMHG does in order to bash a regular RISC OS entry point (SWI, vector, etc.) into something C-friendly – a sketch of that follows below too. Given that pretty much all of RISC OS is an assembler API with a largely immutable format (you cannot randomly mess with other registers), I wonder how much C-ification is actually possible? |
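To make the first point concrete, here is a sketch of what such a mechanical translation might produce. Both the ARM fragment and the resulting C are invented for illustration:

    /* Original (hypothetical) ARM assembler:
     *     CMP     r0, #0
     *     MOVEQ   r0, #1
     *     MOVNE   r0, #0
     *     ADD     r1, r1, r0, LSL #2
     *     MOV     pc, lr
     */
    #include <stdint.h>

    void translated(uint32_t *r0, uint32_t *r1)
    {
        /* CMP becomes a flag variable; the conditional MOVs
           become tests on that flag. */
        int z = (*r0 == 0);
        if (z)  *r0 = 1;            /* MOVEQ r0, #1 */
        if (!z) *r0 = 0;            /* MOVNE r0, #0 */
        *r1 += *r0 << 2;            /* ADD r1, r1, r0, LSL #2 */
        /* MOV pc, lr becomes the function return. */
    }

A human would simply have written *r0 = (*r0 == 0); – the mechanical version preserves the assembler’s shape, which is exactly why it stays as painful to maintain as the original.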
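On the CMHG point: roughly, cmhg reads a description file and emits an assembler veneer that saves registers and builds a C-compatible environment before calling your handler. A minimal sketch, assuming the documented SWI-handler interface – the module name, SWI chunk number and handler are all hypothetical:

    /* Hypothetical CMHG description (a separate file fed to cmhg):
     *     title-string:           Example
     *     help-string:            Example 0.01
     *     swi-chunk-base-number:  0x58000
     *     swi-handler-code:       example_swi_handler
     *     swi-decoding-table:     Example, DoThing
     */
    #include <stddef.h>
    #include "kernel.h"

    /* The veneer hands the C code a copy of the registers rather
       than the raw r0-r9 the SWI interface actually uses. */
    _kernel_oserror *example_swi_handler(int swi_offset,
                                         _kernel_swi_regs *r,
                                         void *pw)
    {
        (void) pw;                  /* module private word */
        if (swi_offset == 0)        /* SWI Example_DoThing */
            r->r[0] += 1;
        return NULL;                /* no error */
    }

All that marshalling in and out of the register block is exactly the cost being pointed at: the assembler API stays register-based, and the C world has to be shimmed into it on every call.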
nemo (145) 2553 posts |
Herbert missed several points with
Like Basic they are not tied to a specific CPU. So they aren’t a problem. But unlike Basic they won’t have inline ARM assembly, so they’re even less relevant to this discussion.
CPU-specific.
CPU-specific.
CPU-SPECIFIC.

What does “natively” mean? I’ve been using a JIT-compiling emulator for twenty years – so it IS running natively. You mean native code sitting on disc? Why would you care? What happens when your silicon changes?

Regardless of what you do to the Kernel, the programs it exists to run are written in ARM code and, as you described above, most are written in 26bit ARM code. So why are you rewriting existing Kernel code into another language to compile onto a specific CPU? The programs have to be emulated. What do you think you’re gaining by rewriting?

This is such a Twentieth Century way of looking at computers! Why should I care which CPU I have this week? Android Apps don’t care about CPU. Web apps are written in JavaScript and Wasm. ChromeOS, Fuchsia, .NET/Mono and even Windows are multiple-CPU. Why would anyone think “We’ve learned the lesson of targeting one physical silicon design with ARM written on it by targeting this other one physical silicon design with ARM written on it”?

Make RISC OS virtual and it lasts forever. |
Rick Murray (539) 13850 posts |
That’s such a last-decade way of looking at things. 😜 Why should I care what OS I have this week? My primary OS these days is Android, on a phone or on a tablet. I don’t pick Android because I’m a fanboy; I pick it because I don’t like Apple, because (note – my experience of Apple ended with iOS 7):
But I also appreciate:
With all of those considerations, my current preference is Android. These days I only use Windows for ripping DVDs and backups to harddisc.

So, today, my preference is Android. If Google crashes and burns, or the EU decides to pull their finger out and roll their own OS… does it have a reasonably active app ecology? Is there an SFTP app? Netflix? A decent music player? A browser with configurable blocking? The app my bank needs for their so-called security?2 Etc etc. My device does the stuff I want it to do in the way I’d like it to do it.

You’ll notice my commentary on Apple was mostly things I didn’t like about iOS. Because it didn’t work in the way I would have wanted. But above and beyond that, it’s not that big a deal because, really, when you get down to it, an OS these days is just the thing that provides the “look and feel” of the device and runs the software you choose to install. What it actually is? Far less important than what apps are available.3

1 My iPad Mini’s embarrassingly useless WiFi hasn’t yet been beaten. Even a €60 ebook and a €25 internet camera can catch and hold a weak signal better than the iPad. Of course, encasing the thing in a big piece of metal probably wasn’t ever going to help, but “it looks cool”.

2 I don’t particularly want a banking app, but as part of their especially awkward interpretation of the recent EU banking rules, they require that I periodically enter a special PIN into the app to, somehow, prove that I’m me. It’s worth noting that the other bank I use (I have two accounts) does not require all this crap.

3 As Microsoft expensively discovered. |
Paolo Fabio Zaino (28) 1882 posts |
@ nemo
Thanks for reminding me; I’ve had Lua DASM/DynASM half ported for a while now, but forgot to finish the work – too many spare-time projects. Lua, however (through the aforementioned), can do in-line assembly, and it’s way more powerful than what BBC BASIC can do (just for the record).
True, and even more so if one considers that the so-called “binary” actually gets interpreted into a “lower”-level form called micro-code. |
GavinWraith (26) 1563 posts |
Excellent news. A LuaJIT for RISC OS is something I have long wanted to see implemented. Plenty of speed without having to wrestle with a compiler. |
nemo (145) 2553 posts |
Paolo pointed out
Excellent. I keep meaning to take my Basic Assembler Dialects work past the proof-of-concept stage – that allows Basic to use any kind of assembly language, and target the precise instruction set, such as ARMv4+FPU, or 65SC102. Arduino perhaps.
On non-ARM processors, that’s very true. It’s turtles all the way down. And though I’m sure everyone here is aware of it, Archimedes Live is a virtual RISC OS experience in any browser on the planet. It’s early days, but it’s a promising start.
|
John WILLIAMS (8368) 495 posts |
Except for RISC OS NetSurf! |
John Rickman (71) 646 posts |
Archimedes Live is a virtual RISC OS experience
It works for me. I would be happy to run RISC OS this way if it had hi-res screen modes and a way of keeping files over a shutdown. |
Colin Ferris (399) 1818 posts |
Arc 3.10 on an Android phone :-) |
Paul Sprangers (346) 525 posts |
But how do I run my own programs on it? Dragging them as an archive into the emulator, as the ‘Load Software’ suggests, doesn’t do anything. EDIT: Forget about that. I have to drag them over the desktop rather than over the download page. |
Paolo Fabio Zaino (28) 1882 posts |
@ nemo
Sorry, I just noticed that. To be clear, I do not want to start an emotional reaction from ARM purists; however, ARM also uses microcode, and has done for a while now. The way the ARM microarchitecture uses microcode is hardwired, not held in a ROM like the x86 does; this allows ARM to still maintain a hardwired control unit, but yes, the AArch32/AArch64 ISAs do get translated into micro-operations like every other architecture these days (see the sketch below). On x86 the microcode can be updated via software, while on ARM it’s defined at design time (this is one of the important elements that makes the Apple M1 ARM core faster than another ARMv8 core, for example). Here are some examples below for anyone interested in knowing more.

There are tons of advantages in doing so, so it’s not a bad thing. Also, just for the sake of the conversation, ARM did implement full interpreters in the past – for example Jazelle, which used to interpret Java bytecode into ARM instructions by being placed between the local cache and the instruction fetcher, IIRC (it has been a while, sorry).

In any case, ARM maintains a RISCy architecture by hardwiring the decoding process into micro-ops, which, besides being fast, is also fully recognised as a form of proper RISC architecture. Reference here: https://en.wikibooks.org/wiki/Microprocessor_Design/Instruction_Decoder (read the 2nd variant in the Instruction Decoder chapter). Full documentation here: https://developer.arm.com/documentation/uan0015/b/ (read 2 Introduction → Pipeline Overview): “Instructions are first fetched, then decoded into internal micro-operations (µops). From there, the µops proceed through register …”

With that said, for whoever uses this info from here onwards: please, please, please do not gloss over the details. ARM micro-ops are hardwired and so not a traditional form of microcode, so NO, it’s not the same as x86 – it’s just the 2nd variant of instruction decoding for a RISC architecture – but YES, AArch32/AArch64 are interpreted instruction sets, not executed directly the way an instruction would be on a 6502, 8086 etc.

Hope this helps and it’s not too confusing. Thanks for bringing this up btw, cheers |
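A toy model of the hardwired-versus-ROM distinction described above, in C rather than silicon – the µop names and mappings are invented, and real µops are bundles of control signals, not enum values:

    /* Toy µops; real ones are control signals, not bytes. */
    typedef enum { UOP_READ, UOP_SHIFT, UOP_ALU_ADD, UOP_WRITEBACK } uop;

    /* "Hardwired" decode, ARM-style: the instruction-to-µop mapping
       is fixed logic, decided at design time – like a switch baked
       into the compiled binary. */
    static int decode_hardwired(unsigned insn, uop *out)
    {
        switch (insn) {
        case 0:  /* plain ADD */
            out[0] = UOP_READ; out[1] = UOP_ALU_ADD;
            out[2] = UOP_WRITEBACK;
            return 3;
        case 1:  /* ADD with shifted operand */
            out[0] = UOP_READ; out[1] = UOP_SHIFT;
            out[2] = UOP_ALU_ADD; out[3] = UOP_WRITEBACK;
            return 4;
        default:
            return 0;
        }
    }

    /* "ROM" decode, x86-style: the same mapping lives in a table,
       and a microcode update can rewrite the table at run time. */
    static uop ucode_rom[2][4] = {
        { UOP_READ, UOP_ALU_ADD, UOP_WRITEBACK },
        { UOP_READ, UOP_SHIFT, UOP_ALU_ADD, UOP_WRITEBACK },
    };

    int main(void)
    {
        uop seq[4];
        int n = decode_hardwired(1, seq);            /* ADD with shift */
        return (n == 4 && seq[1] == ucode_rom[1][1]) ? 0 : 1;
    }

The observable difference is patchability: the switch can only change with a new chip; the table can change with a firmware update.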
Rick Murray (539) 13850 posts |
Something that I wonder, more so with x86 than ARM, is whether there would have been any benefit to exposing the microcode instructions directly. Or is there too much collusion between the different bits to make such a thing viable? |
Paolo Fabio Zaino (28) 1882 posts |
@ Rick
That is a fascinating argument, IMHO. The actual reason why microcode (in whichever form) was invented (at IBM, IIRC, back in the 60s, but I may be wrong on dates, sorry!) is a fundamental principle of electronics: reuse of components. In other words, if 2 ISA instructions present identical portions of hardware logic, why duplicate the transistors needed to implement those instructions? That would lead to higher costs, more heat to dissipate, etc.

So, using micro-ops makes it possible to create an ISA (let’s remember that the ISA belongs to the Architecture level, and a CPU Architecture is a contract, an interface between the software and the hardware logic) that reuses the same hardware components where possible, reducing the number of transistors required to implement that ISA – am I making sense? This reduces costs, complexity etc. So it’s a good thing.

However, to go back to your thought: why not expose the micro-ops? Well, the micro-ops are NOT meant to be a contract between the software and the hardware (they belong to the realm of microarchitecture, which is a different thing from Architecture, also called big-A Architecture), so they can change with every single implementation – literally even between 2 different ARMv8 chips. That would make software written for such a CPU really, really hard to maintain and, given the nature of micro-ops, even harder to write (remember, an ARM or x86 instruction is a combination of micro-ops). For the x86, the number of micro-ops per instruction varies from 2 or 3 up to hundreds, just for a single instruction. And every time we update the microcode blob, these numbers can change. So exposing it would most likely break things.

Having an ISA, on the other hand, that is a contract means that everything compiled for that specific ISA release will always run on all the CPUs that adhere to that ISA release (regardless of how that ISA is implemented) – for instance, all the ARMv8 chips (see the sketch below). This is one of the things that made it possible for our beloved RISC OS to still run on ARM to this date, btw, so a good thing I’d say. The way those AArch32 instructions truly get executed underneath has probably changed dramatically over the years – it has certainly happened with the way x86 instructions are executed – and yet we can still run MS-DOS on the latest and greatest Intel x86 chip.

Hope this helps somehow, |
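The “contract” idea can be shown with a toy in C: two “implementations” of the same architectural multiply-accumulate, with different internal µop splits but identical architectural results (everything here is invented for illustration):

    #include <assert.h>
    #include <stdint.h>

    typedef struct { uint32_t r[4]; } cpu_state;

    /* Implementation A: three separate µops. */
    static void mla_impl_a(cpu_state *s)
    {
        uint32_t a = s->r[1], b = s->r[2];     /* µop: read operands    */
        uint32_t p = a * b;                    /* µop: multiply         */
        s->r[0] = p + s->r[3];                 /* µop: add + write-back */
    }

    /* Implementation B: one fused µop. */
    static void mla_impl_b(cpu_state *s)
    {
        s->r[0] = s->r[1] * s->r[2] + s->r[3]; /* fused multiply-add */
    }

    int main(void)
    {
        cpu_state a = { { 0, 6, 7, 5 } }, b = a;
        mla_impl_a(&a);
        mla_impl_b(&b);
        assert(a.r[0] == b.r[0] && a.r[0] == 47); /* same contract kept */
        return 0;
    }

Software compiled against the ISA cannot tell the two apart; software written against the µops would break the moment the split changed.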
Rick Murray (539) 13850 posts |
Well, yes. I suppose if your architecture has a GF2P8AFFINEINVQB instruction, then it might just perhaps require a few bits of microcode to make it happen. 😂 Definitely putting the C in CISC, there.
Because they take backwards compatibility to insane lengths. The processor might be happy booting DOS, but the UEFI firmware might not agree.

ARM, unfortunately, goes in the other direction. As we well know. After all, ARM mostly existed in the embedded-device world, so the imperative to support an OS from five years ago isn’t really an issue, never mind one from the eighties. Indeed, it wasn’t until Linus blew his stack that anybody considered that maybe supporting device trees might be an idea, so an OS can have some idea of what it’s running on (see the sketch below).

So we’re left with Linux on x86 basically being two varieties – old 32 bit and newer 64 bit (note: I’m talking about the kernel, not the many distros). For ARM? Well, there’s one for this board and one for that board and one for the other board (this site alone provides nine different builds of RISC OS), and if the manufacturer doesn’t provide a binary blob to talk to the hardware, or adequate documentation, you’re screwed. And certainly you can’t really probe-and-pray to see what’s there. Hell, most of them don’t even boot in the same manner.

But as ARM starts to enter the realm that was once held by x86 hardware, I think they’re either going to remain as custom single-OS devices (like all the Android things) or will need to start to develop an x86 style of openness. I’m not going to hold my breath, however.

Anyway, it’s just a shame there’s this fast, simple underlying architecture and no means of directly utilising it. |
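On the device-tree point: the idea is that the firmware hands the OS a description of the hardware, which the OS queries instead of probing. A sketch using the standard libfdt library – the compatible string and property layout here are just illustrative:

    #include <stdio.h>
    #include <libfdt.h>   /* standard flattened-device-tree library */

    void find_uart(const void *fdt)
    {
        if (fdt_check_header(fdt) != 0)
            return;                          /* not a valid FDT blob */

        /* Ask for the first node claiming a 16550-style UART,
           rather than poking addresses to see what answers. */
        int node = fdt_node_offset_by_compatible(fdt, -1, "ns16550a");
        if (node < 0)
            return;                          /* no such device described */

        int len;
        const fdt32_t *reg = fdt_getprop(fdt, node, "reg", &len);
        if (reg != NULL && len >= 4)
            printf("UART at %#x\n", (unsigned) fdt32_to_cpu(reg[0]));
    }

The tree travels with the board rather than with the OS image, which is exactly the “some idea of what it’s running on” part.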
Rick Murray (539) 13850 posts |
For anybody interested in a little light reading, here’s a rather in-depth description of the operation of a microcoded processor. I will warn you, though: the author’s favourite pet word may well have been “plurality”. ;) |
Steve Pampling (1551) 8172 posts |
Technically the OS doesn’t really change – the included driver modules and HAL elements, yes; the OS itself, not really. That’s why the main thing that needs to be done is to strip bits out of the HAL/ROM and push bits in. |
Mr Rooster (1610) 21 posts |
|
Paolo Fabio Zaino (28) 1882 posts |
Indeed :) Which is why I wonder how this legend has emerged in the RISC OS world; I guess someone must have read some early RISC CPU design papers and assumed ARM was designed as such. Another element might be that there seems to be a perception that uCode is some kind of “hidden lower-level assembly”, when it isn’t. |
nemo (145) 2553 posts |
The ARM’s PLA is not much different in operation from the 6502’s PLA, which is hardly surprising considering its inspirational role. What the ARM’s PLA certainly isn’t is microcode in the way a CISC processor uses microcode. The ARM’s PLA is merely a unit sequencer and forms only part of the decoding and execution of an instruction – much like the 6502, but even simpler. In fact, IIRC, the 6502 PLA has over 120 output bits whereas the original ARM PLA produced less than 40. To claim that the ARM is “interpreted microcode” but that the 6502 “executes instructions” is just not sensible. It was Steve Furber himself who said that the PLA did not constitute microcode. I take his word for it. And ARM Ltd make this distinction: |
Rick Murray (539) 13850 posts |
I’m afraid I can’t put much faith in such an advertising-heavy site that doesn’t respect the GDPR or the committee rules (stating in big letters “Nous partageons également des informations sur votre usage de notre site avec nos réseaux sociaux, publicitaires et partenaires d’analyse.” – that is, “We also share information about your use of our site with our social media, advertising and analytics partners.”!), not to mention numerous messages about wanting to play protected content…

…and while there seems to be some degree of information on the original ARM, the page describing “microcode” says “There is currently no text in this page.”. Hmm… You’d have thought a site talking about processors would have defined that, wouldn’t you? 🤷🏻‍♀️ |
Paolo Fabio Zaino (28) 1882 posts |
@ Nemo (thanks for your comments, this is definitely an awesome chat!)
True, but there are some substantial differences as well that should be mentioned:

1) They both have sequential timing states; however, the implementation of their sequential state is different.

2) AFAIR the 6502 uses a shift register to move sequentially through states, while the ARM1 has a much more complex sequence controller which is capable of looping on a state (this is a big difference, because being capable of looping on a state allows the ARM1 to handle larger instructions).

3) The 6502 should (IIRC) have more states, like 7 or 8, while the ARM1 has only 4 states.

4) Another big difference is that the ARM1 SC is highly structured, and this is where the two differ the most: the ARM1 does have sequence “commands” being generated by the PLA (if you observe the behaviour of the ARM1 PLA you’ll be able to clearly see the END command that always terminates a sequence), while the 6502 uses a PLA combined with hard-wired logic to determine when to terminate a sequence – IIRC it has a specific signal that resets the shift register (but I need to double-check this last bit, not sure I remember well).
Yes and no: as I have mentioned above, the 6502 uses a combination of PLA and hardwired logic to process states, while the ARM1 doesn’t. Actually, if you look at the output of the PLA on the ARM1, you can clearly see the sequence of commands being generated, and that includes (again) even the END sequence.
True, but most likely Prof Furber stated that for a number of reasons:

1) Each instruction on the ARM1 is broken down into 1 to 4 microinstructions, which are stored in the instruction-decode PLA (so the PLA does act as a ROM, but (a) it cannot be changed, as I mentioned in my previous detailed post, and (b) there are only 4, not that many). However, the ARM microcode is stored as 42 rows of 36-bit microinstructions, and the 42 rows are divided into 18 classes of instructions (again, each consisting of 1 to 4 microinstructions). The PLA outputs a sequence at each state cycle, and each sequence is composed of bits that trigger micro-operations (ALU add etc.), so we can represent a single sequence cycle like:

CN  | SS    | uInstruction
[0] | [seq] | [36 bits forming a microinstruction]

where CN is the Cycle Number and SS is the Sequence (a toy version of this table appears below). So, while not officially a microcode architecture, the CU on the ARM1 does indeed generate microinstructions. There are also other reasons to consider; for instance, implementing a pipeline actually means adding internal registers that need to store the above while it is being executed.

On the other hand, one could also not consider it heavily microcode-based, because the ARM1 microcode only represents a small portion of the CU. For instance, the famous CC fields are handled by a dedicated subunit, the Conditional Unit (so not by the microcode), and therefore the control signals can be heavily modified by the instruction “ignore” portion, which is something that is indeed NOT microcode-based.

But saying that the ARM1 isn’t microcoded is probably not the correct definition; it is in fact microcode-based, but it’s an Acorn, and since when did Acorn follow standards? XD

Modern ARMs are microcode-based and I have provided enough info on the matter, including the advantages of such an approach, and it’s still RISC – there is nothing in microcode that relates only to CISC CPUs, although it does benefit CISC CPUs far more than RISC CPUs. With modern CPU processing optimisation techniques it’s something that benefits RISC architectures too, and helps simplify their design (which is a very RISC thing to do btw).

@ Rick
Totally agree with you on this, Rick. Not just that – the site also has no technical information to actually say WHY the ARM1 could be considered microinstruction-based… so yeah, quite a poor internet resource, something that is all too common :( |
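The “rows with an END bit” description above maps quite naturally onto a toy table in C – the control bits and row contents here are invented, and the real ARM1 rows are 36 bits of actual unit-sequencing signals:

    #include <stdint.h>
    #include <stdio.h>

    #define UI_END       (1u << 0)  /* last row: sequencer stops   */
    #define UI_SHIFT     (1u << 1)  /* toy control bits, not the   */
    #define UI_ALU       (1u << 2)  /* real ARM1 signal layout     */
    #define UI_WRITEBACK (1u << 3)

    /* One instruction class: a fixed run of 1-4 rows, frozen in
       the PLA at design time. */
    static const uint64_t class_data_op[] = {
        UI_SHIFT,
        UI_ALU,
        UI_WRITEBACK | UI_END,
    };

    static void run_class(const uint64_t *rows)
    {
        for (int cycle = 0; ; cycle++) {       /* CN: cycle number */
            uint64_t ui = rows[cycle];
            printf("cycle %d: row %#llx\n",
                   cycle, (unsigned long long) ui);
            if (ui & UI_END)   /* nothing at run time can patch this */
                break;
        }
    }

    int main(void)
    {
        run_class(class_data_op);   /* prints the 3-row sequence */
        return 0;
    }

Whether one calls such a table “microcode” is, as this thread shows, mostly a question of definition – either way the mechanism is a fixed sequencer that no software can patch.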
David Gee (1833) 268 posts |
@nemo There are a number of Android apps that will NOT run on devices with an Intel processor. Virtualising an operating system doesn’t guarantee it will last forever if the virtualising software can no longer be used – Virtual RISC PC for MacOS being a case in point. |
Michael Stubbs (8242) 70 posts |
It has gotten better at fooling people with supposed “intelligence”, but that just means that better hardware and implementations mean it can do its pattern-matching wizardry much more rapidly.
Spot on. If it were intelligent, it would know when it didn’t really know.

All that aside, any firm that wants full control and understanding of the code that runs its business is not going to have its programmers using ChatGPT. Also, if it wants its processes and intellectual property to remain private, it won’t have its programmers use ChatGPT either. Samsung found that out the hard way. These wrongly-named AI shenanigans can only go so far before they become problematic in any number of ways. They already are in some areas.

What on earth is the point of pouring time and resources into creating technology that can spit out symphonies in seconds, or produce deep fakes of well-known artists singing a song it created in seconds? There’s no inspiration behind such music, no experience, no emotion. It just turns the beauty of human inspiration and creativity into pure mathematics.

The race to create and deploy AI, especially without any oversight, is the dumbest thing humans have done yet. The only thing dumber is to voluntarily embrace it, because you’re then going along with the process of making yourself and everyone around you an irrelevancy. Humans have always had jobs and roles, and you can go to any place unfortunate enough to have high unemployment to see how badly things go when people don’t have a purpose. |