No more big 32-bit cores for RISC OS from 2022
Steve Pampling (1551) 8173 posts |
I already gave you a link to the existing Roadmap document. |
|
Stuart Swales (1481) 351 posts |
A question of developer resources and priorities, methinks. |
|
Gulli (1646) 42 posts |
You do realise I’m talking about the situation 8 years ago? “this is exactly what I wanted to help avoid” |
|
Steve Fryatt (216) 2106 posts |
The interesting thing about the 2012 thread that Gulli highlights is how new ideas and new people get dismissed out of hand. We’re keen to get new people involved, but:
You only have to look at the patronising way that people have responded to some of the “new” posters in this thread to see why people don’t hang around. “Your ideas are different to ours? Well, you can’t have been around here long enough to realise why you’re wrong, then.” |
|
Paolo Fabio Zaino (28) 1882 posts |
@ Stuart
Yes, also on the Commodore 64, and I still have people reporting that they run my code from the 90s (which I think is crazy, lol). So, sure, there are and there will be, but honestly, does that mean RISC OS IS stable and robust for the average Joe user? Just as an example: I copied a CD-ROM (not even full, so ~500MB) into a zip file using SparkFS on RISC OS 5.28 (SparkFS allowed the use of more than 32MB RAM). It worked until the very end, and then, when 0 bytes were left to copy, the entire system crashed… This was on an Iyonix in fully working order, with a new PSU, 512MB RAM, and powered via a pure sine power unit. So, Stuart, I am not trying to argue with you, OK? But are you absolutely sure you would recommend the RISC OS desktop over a Linux desktop, or even a clean Windows, for stability? Just curiosity. |
|
Rick Murray (539) 13855 posts |
My experience with a live CD of Ubuntu – whatever the one is that annoyingly and counterintuitively puts all the menus at the top like a Mac – was not good. It failed to deal with my fairly generic sound hardware, while giving a smiley face to say all was well. Brasero (CD writer) ignored my chosen speed setting and burned at maximum speed, turning two discs into coasters in the process. A lot of things seemed bug-ridden and unfinished. And eventually, for some reason, the whole thing threw in the towel with a kernel panic. As for Windows, it’s better these days, but the BSOD is infamous for a reason. Sometimes in the W95 era, ejecting a CD at the wrong time would bring down the system. On my XP box, if I use a USB serial port (programming the ESP32) while the USB WiFi adaptor is connected: instant freeze. No error, just completely frozen. In other words – systems can and do crash. They’re not supposed to, but it happens. |
|
Steve Pampling (1551) 8173 posts |
I, for one, am not criticising. |
|
Steve Pampling (1551) 8173 posts |
Well, it got faster and, erm, yes… 64-bit. But if you look at things like that, then you have to realise that 64-bit is going nowhere either, because 128-bit will be along in a similar time period, and I’m sure 128-bit will in turn be “so last year”. Of course, some of the “32-bit is going nowhere” comments will have come from people who didn’t want to let 26-bit go and hadn’t quite realised how many years had already passed since the last 26-bit CPU was made. So, here, now: 32-bit, and hopefully a move to a less limiting source language. To do what? |
|
David J. Ruck (33) 1636 posts |
Most 64-bit chips (not just ARM) have only 48-bit-wide address buses, so there’s plenty of headroom before 128-bit is needed – if ever (think about the size of the numbers). As for data bus widths, that’s largely uncoupled from the core ISA due to SIMD instructions and GPUs. So 64-bit is going to be where the price and performance sweet spot is for a few decades yet. |
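As a rough illustration of the numbers involved (not taken from the post above, just a sketch of the arithmetic):

```c
/* Rough illustration only: address-space sizes implied by the widths
 * mentioned above. 48-bit translation gives 256 TiB; the full 64-bit
 * space is 2^16 (65536) times larger again.
 */
#include <stdio.h>

int main(void)
{
    unsigned long long tib_48 = (1ULL << 48) >> 40;  /* 2^48 bytes expressed in TiB */

    printf("48-bit addressing: %llu TiB of virtual address space\n", tib_48); /* 256 TiB */
    printf("64-bit addressing: 16 EiB, i.e. 65536 times as much again\n");
    return 0;
}
```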
|
Steve Fryatt (216) 2106 posts |
In a way that could come across as somewhat patronising. If someone is asking about things that they could do, we’re not talking about the “limited developer resource” – we’re talking about them. So it doesn’t matter a bit if it seems useful to you, unless it’s also interesting to them. In the case of the 2012 thread, there was no reason at all for Terje not to re-write the sprite code in C++ and benchmark it. They were clearly interested in writing and benchmarking it, so at the end it would be clear if it worked or not, and there was no obligation on ROOL to include their work in the OS anyway. And since they weren’t working on the OS in any other way, it wasn’t tying up the “limited developer resources”. But no, that wasn’t to be. ROOL, Jeffrey and a few others were supportive, but other feedback was rather less so.
To be clear, these are all comments in response to someone who is spending their own time recoding a single module into C++ to be able to benchmark it and see how it compares to the ARM original. They’re not proposing to forcibly include it in the ROMs, or require anyone else to use GCC and C++. They’re doing it out of interest. They’re proposing to spend their own time, as someone who wasn’t currently working on the OS, trying out an idea that interested them and might have given us all useful information at the end. And still we drove them away.

Eight years on, going by the replies in this thread, we’ve learned very little. RISC OS is now Open Source, so if someone fancies tinkering with C++ or porting some of the OS over to another build system, the response shouldn’t be “Seriously – I don’t have GCC here. I have the DDE, so I don’t have a need (I’m not interested in C++ and GCC can’t build the OS yet).” The response should be “Have a go – we can’t help with time, but if you need advice, we’re here and we’d really like to see how well it works”. In the end, there’s no requirement to use their work, but it might just prove very valuable indeed. And if they have fun doing it, that’s what matters.

But no, eight years on, the same faces from that 2012 thread are tutting at people for not buying ARMBooks, or telling people that their ideas are wrong because they don’t involve hand-optimised ARM code or Norcroft, or use ELF binaries. It doesn’t matter. If someone is interested in something, welcome them and let them experiment, even if you personally don’t like the idea. By all means point them towards the “right” direction, prior work and resources that might help them along the way, but if they’re clear that they’re not interested in any of that, support them in tinkering with what they are interested in and consider the finished result if and when it arrives. |
|
Steve Fryatt (216) 2106 posts |
Oh, and remember that anyone coming new to the platform is going to be expecting better dev tools than the DDE plus Zap/StrongED and a bunch of folders on their hard disc. If they want to cross-compile and utilise the facilities that they can get via that, let them. It doesn’t affect you, and may just bring the familiarity required to keep the project interesting to them. The learning curve for someone coming new to RISC OS code is likely to be steepish, so don’t add the complexity of having to lose the tools they’re familiar with as well. |
|
Clive Semmens (2335) 3276 posts |
Where’s the “like” button? |
|
Charlotte Benton (8631) 168 posts |
The clash of future visions is the other big reason why financial investment is required¹. Money doesn’t just buy labour; it buys unity and cooperation. People are far more amenable to doing things the “wrong way” if they get a cheque for doing so. (That’s partly the reason why we don’t have volunteer pilots, spacecraft engineers, etc, even though there are numerous qualified people who’d gladly do the job for free.)

¹ Where the money comes from is left as an exercise for the reader. |
|
Paolo Fabio Zaino (28) 1882 posts |
+1 |
|
Rick Murray (539) 13855 posts |
Just to clarify, as it seems a bunch of different things have gotten mixed up here: my point regarding GCC is that I don’t feel the existing build process needs two different and independent toolchains just because of one small bit of code. It’s a lot of added complexity. However, what I feel shouldn’t be a reason for somebody not to try stuff. When something exists (the “show me the code” part), it can be evaluated on its merits. |
|
Rick Murray (539) 13855 posts |
Citation? Because anybody seriously promoting the use of pure assembler has clearly failed to learn the multiple lessons of the last, oh, three decades?
Well, if something is intended to be built into ROM, how well that thing can interact with the existing build process is a rather fundamentally important question, don’t you think? |
|
Steve Fryatt (216) 2106 posts |
How about this? If you want something more recent, then there’s been talk in this thread of re-writing it all in AArch64 as not being a totally daft idea.
Yes, and no. The fact that once again you’re raising this straw man suggests that you’ve either not understood, or that you’re still trying to impose your will on anyone who wants to tinker.

If someone wishes to rewrite the sprite handling, or anything else, in C++ and then benchmark it against the existing ARM code for their own amusement, that’s their call, and we should be welcoming them and encouraging them. It might not be fun for them if they did it in C, and they wouldn’t bother. It might not be fun for them if they couldn’t use the development tools that they’re familiar with, and they wouldn’t bother. But if they do bother, in the way that they choose, then at the end of the project we might have some sprite code written in C++ with benchmarks, and we can decide what to do next. Is it fast enough? If it is, do we want to investigate how to use it in the OS build? If we do, how do we progress from there? Perhaps, with several months’ RISC OS experience under their belt, the person who did the work might be interested in making some of that their next learning project.

Otherwise, if we tell them to get lost or set the bar impossibly high at the start, we don’t get any of that. We don’t get the code, or the chance to hook a new developer, or anything else. We’ve just got our “Norcroft purity” and the same shrinking pool of developers. Working on stuff in your own time has got to be fun, and interesting to you. After a time, that “fun” might include “helping out the platform”, but it won’t at first. And if something’s not fun, people go and do something which is. |
|
Timo Hartong (2813) 204 posts |
And indeed, let people write in C++; OO is certainly the way to go. Give them support. But if people want to contribute something in assembler (64-bit), why not? But back to 64-bit: yes, we should try to get RISC OS working on 64-bit. But there are still 8-bit processors for sale (the 8051, for example). |
|
Steve Pampling (1551) 8173 posts |
Read that; and when you look at the start of the post and the end, Andrew seems to be pointing out the original reason for assembler, and that maybe new machines are fast enough that the issue is no longer significant:
and
In which Mr. Swales side-steps the nomination to do the donkey work of what some see as a required step, and he thinks going direct would be easier. Then there’s my joke about Asperger’s (I have no idea where I sit on that spectrum, but if you can find me a techie, artist, or musician that isn’t on the spectrum then I shall bow to the wondrous search abilities you have – and demand a retest), and then there’s a vague suggestion from me that might spark something¹, or not. So, in that, I presume you were nominating Mr. Swales as the keen party? Although he doesn’t look that keen to me.

¹ Hmmm, Timo’s half interested. |
|
Steve Pampling (1551) 8173 posts |
I’d have thought most people would want C (both, depending on how you boot)
I agree the higher-level language is likely to be more flexible, and platform newcomers might have skills there already, while ARM assembler isn’t something you’d expect a Windows person to have.
For the benefit of the future I think I’d rather they did it in C or whatever, but if they are set in an assembler mentality and this is something brand new – why not? Is it possible to run existing SoCs with different cores in different modes? (32 & 64) |
|
David J. Ruck (33) 1636 posts |
It’s quite reasonable for an OS to specify that a single toolchain is used to build all the deliverable code, but only if it is up to the job.

Norcroft is a fast and efficient compiler for integer C, and produces native file formats, but lacks support for VFP and NEON, and has no meaningful C++ capability (CFront is a 1990s anachronism). GCC supports all ARM integer and FP instructions, provides a migration path for building 64-bit, and has support for C++ and other languages. The drawback is non-native file formats. LLVM/clang has similar advantages and disadvantages, but also a lot of very useful code analysis functions, and would be well worth porting to RISC OS.

Given the above issues, the RISC OS build process needs to start incorporating a new toolchain, and eventually move to it exclusively.

As for 64-bit assembler in the OS, outside the bootstrap process and a few hand-crafted bits of critical hardware vector code, I say no. It’s no longer the best way to produce code: quite apart from the main argument of portability between architectures, it can’t take any advantage of new instructions and compiler technology. |
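As a hedged illustration of the gap being described (this is not code from the thread, and the function name is made up): the kind of NEON code that GCC can compile for 32-bit ARM when built with an option such as -mfpu=neon, but which Norcroft has no way to express:

```c
/* Sketch only: add two float arrays four lanes at a time using NEON
 * intrinsics, which GCC understands via <arm_neon.h>.
 */
#include <arm_neon.h>

void add_floats(const float *a, const float *b, float *out, int n)
{
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);      /* load 4 floats */
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(out + i, vaddq_f32(va, vb));  /* store the 4 sums */
    }
    for (; i < n; i++)                          /* scalar tail */
        out[i] = a[i] + b[i];
}
```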
|
Clive Semmens (2335) 3276 posts |
Yes. If an instruction to change mode is issued on one core, only that core is affected. |
|
Theo Markettos (89) 919 posts |
I upvote a lot of what SteveF’s said above. I think there’s a tension in ‘RISC OS modernisation’ due to those who fundamentally don’t want it to be modernised. They are quite happy doing things the way they’ve done them for 30 years, and they would rather things didn’t change. Meanwhile the rest of the world changes underneath them and RISC OS falls further out of step.

I had this issue when working on packaging. There’s a subset of users who are adamant that they absolutely don’t want any automation, and want to be hand-copying everything and keeping track of their module versions in a spreadsheet. That subset is rather vocal. What you don’t hear from is the set of users who have been installing apps on other platforms using app stores for years and wouldn’t know what a module is, let alone how to install one. That group is very much larger (since it covers people who don’t use RISC OS), but they don’t have a voice, because they aren’t RISC OS users yet and would be turned off by this kind of thing if they tried.

As Steve says, just let people do their personal thing, make that as easy as possible, help them where they get stuck, and review their pull requests if they come. If they don’t come, then someone had some fun on their personal project and the rest of the world doesn’t need to worry about it. The bazaar, not the cathedral.

It’s also worth saying that I have a use case for spending a little work time on RISC OS – if it were AArch64. It isn’t, so it doesn’t work for my purposes. |
|
Bryan (8467) 468 posts |
I used to think that. But my view now is that I am happy to see RISC OS develop. And, if it does go somewhere I am not keen on, my Raspberry Pi 4s and 5.28 will probably last longer than me. |
|
Rick Murray (539) 13855 posts |
Yeah, there seems to be an aspect of cherry picking historical quotes mostly devoid of context.
Clarity. Good. That’s helpful.
I have no problem with people who want to tinker, fiddle, hack, or rewrite the Wimp in Ada. But what happens next?
Which raises the question – what is ObjAsm/cc/Link written in? What might (and this is just an idea) benefit us is if the toolchain were able to run on x86 (under Windows, under Debian…) so that people could use the sorts of tools that we’re lacking to build bits. That’s not to say it would be flawless: there’s no x86 BASIC unless Brandy is up to the job¹, and there are those pesky filetypes to sort out…
In your average bread maker or toaster, anything more than that is overkill. You’ll note that it’s either an 8051 clone or something in the Sam88 family, with maybe a couple of kilobytes of Flash and, if you’re lucky, 256 bytes of RAM (and some of that is actually internal registers). You will, on the other hand, have a slew of timers, ADC channels (but probably only one actual ADC) and a bucketful of digital I/O. I’m not sure I’d want to see a 64-bit processor in a toaster. The damn thing would probably run a ten-line Perl script for the toast functions, and forty-seven other processes for the built-in spyware.
That’s the point exactly. If a person is a jerk, they shouldn’t play a sympathy card in a place where a good number of the people are probably undiagnosed with exactly the same thing.
I “probably am” but when I was young, it was not something that anybody chose to admit to. All sorts of euphemisms instead (like “special” or “different”). As to where on the spectrum? That part is easy – Cerulean. ;-)
That’s why the 64 bit question is fascinating. The platform either stays as it is, under emulation, sort of frozen in time…
Might explain those still using RiscPCs.
Don’t bother closing the stable door. That horse has not only bolted, it died of old age, got buried under stagnant landmass, and is now being extracted to refine as middling grade diesel.
Oh, I don’t know. Until sort of recently, various installers for Windows were quite notorious for not correctly dealing with adding DLLs to the system, so that “removal when count equals zero” could trash installed applications. Don’t forget, also, that RISC OS has a long history of “open archive, drag app somewhere, installed”. That said, as more stuff uses the shared libraries, there are more complicated dependencies, which would be a pain to sort out manually. Having that automated is only going to be beneficial.

¹ I don’t count BBfW as we’d want something platform-agnostic to cater for Windows and Linux (and maybe Mac?) |