Are RISC OS folk meanies or just poor?
|
I might be poor! but .. no I am on £5 / month to general bounty :D It is nice to pay for being able to “work” with RISC OS :) |
|
Examples please. Especially given as how:
now, the big one:
*BASIC
ARM BBC BASIC V (C) Acorn 1989
Starting with 4190460 bytes free
>10 CRAP
>RUN
Mistake at line 10
>SAVE "$.__crap"
>*Filer_Run $.__crap
>
[error box on screen says, simply, “Mistake” – where’s the “at line 10” part?]
And, finally:
<Columbo> Oh, and one final thing… </Columbo>
10SYS "OS_GenerateError", "Life and everything about it SUCKS!"
>RUN
SWI &2B returned a bad error pointer at line 10
The reason is simple – an error pointer must point to four bytes of error number followed by the message, so the bare string lacked what, to the OS, looks like a valid error number (though it probably shouldn’t be the OS’ job to vet the number passed). That particular error is fairly new, and if you prefix the string with CHR$(0) four times, then it’s clear that one can pass practically anything to a BASIC program to get reported as an error message. So, yeah, what was the argument again?
>10SYS "OS_GenerateError", CHR$(0)+CHR$(0)+CHR$(0)+CHR$(0)+"Something happened at line at line at line at line"
>RUN
Something happened at line at line at line at line at line 10
> |
|
Byran, I appreciate your enthusiasm but shouting doesn’t make the technical barriers go away! I’m going to quote Andrew Hodgkinson here:
So, you can make recurring donations to ROOL, because there’s already a button for that, but not to any of the other bounties – because if you do, we’ll ask you not to because it breaks the bounty system’s poor, little brain. |
|
Sorry for the shouting ;-) But you are misunderstanding me. I don’t want a recurring donation option on every bounty, I just want an obvious link to the existing recurring donation option! It doesn’t require any changes to the bounty system or PayPal integration; all it needs is an extra HTML button above the bounties list and/or in the sidebar Donate section, that links to the General bounty page. Like this: Set up a regular donation to ROOL Job done! |
|
You might have to reassess your outlook – I never use the swimming baths where I live, it’s not something I’m interested in, but I still pay council tax which funds them. OK, taxes aren’t optional, but when I go to see a film I might like 4 of the 5 main actors but still buy a full price ticket that goes towards the mis-cast wooden actor I’m not interested in.
Considering just the default RISC OS ROM/disc apps, very few are keyboard driven or lend themselves to the clipboard (eg. because they don’t claim focus or don’t manipulate document types). Edit, Draw, and icons implemented by the Wimp are the only ones I can think of off hand. Edit already supports the clipboard thanks to CJE Micro’s, so “Further clipboard support” is exactly what it says – it’s finishing the job, no? Actually, I missed SciCalc, but that already uses the clipboard after its recent makeover. At a stretch ChangeFSI could, so you could paste into Paint, but the mob-driven design process for the Paint bounty didn’t come up with clipboard as something overly desirable. Just Draw and the Wimp then. Draw is actually written in (reasonably) sane C, so I’m not sure carving it off into another bounty would achieve much – it’s almost certainly the smaller part of the task. You’d probably just end up with a £2800 Wimp part and a £500 Draw part (note: they don’t add up to the current target, since that’s 2 people that now have to get up to speed). If Draw’s such a tiny task, I wonder what’s stopped anyone adding clipboard support since the sources were published in 2007? Perhaps we need to have some kind of community funded incentive scheme where potential developers who could do these tasks are… |
|
I’m late to the party on this thread but just wanted to say something about one of Peter’s points: Bounties that contain simple small goals are aggregated into ‘super’ bounties instead of being listed individually. The perfect example of this is the ‘Paint improvements’ bounty, which realistically should have been ~20 bounties that people could individually choose to contribute to the features that they considered the most important.

For the Paint bounty, I have already said I’m happy to split the proceeds (subject to ROOL approval) if anyone else wants to pick up some of the remaining tasks. But any other developer that currently has the knowledge or interest to do it almost certainly already has other commitments to RISC OS that their time would be better spent on – in which case I probably remain the best placed to get it all finished, even at the regrettably slow pace I’ve managed so far. On a more positive note, I made a lot of headway with the various Colour tasks, so things haven’t ground to a halt by any means.

Speaking more generally, I don’t think having large, aggregate bounties such as this is too much of a problem. When there’s a logical division of the tasks, such as with Paint, we can roll them out as and when they are completed, so the community don’t have to wait for the entire thing to be completed.

Regarding donations, if you’re only interested in one small subset of the features, then why not simply donate a correspondingly smaller amount to the aggregate bounty? Just because you are only interested in one particular feature doesn’t mean that’s the same for all the other users. Relative importance of features can and does get discussed in the bounty threads in any case. You can put forward views during the proposal stage on features as well as how the bounty should be structured. I think this community is pretty open to suggestions and fair.
For what it’s worth, I also have to say that if the Paint bounty had been split into lots of smaller ones, it would have increased the likelihood that I as a developer would not attempt every one of the tasks, just choosing the ones I preferred to work on; in which case you might find your own pet feature would never get implemented! So, in conclusion I don’t think it helps RISC OS hugely to keep trying to reinvent the bounty process. That effort would be better spent making more direct contributions to the code, the wikis, donations or technical discussion, or, hey, even promoting RISC OS via discussion on other websites. |
|
My personal dream would be to see RISC OS turned into a 64-bit, multi-processor aware, security first, processing powerhouse! But I believe, in order to accomplish that, you’d likely break a whole lot of RISC OS. If we started at the bottom (the kernel), how much would it cost (assuming you could find someone able to do it) to redesign just the kernel? I live in the USA, so it’s USD, not EURO or British Pounds, for me. :-D |
|
The Linux kernel (and just the kernel) has an estimated cost of around 14.7 billion dollars. So it’s probably better to enhance things than to rewrite them :) To give you an idea of costs, the new TCP/IP stack (taken from FreeBSD) will probably need £30.000 to be ported. |
|
Since the UK use of decimal separators involves “.” for decimal fractions and “,” for 10^3, that looks like a rather cheap porting job :) You probably meant £30,000. |
|
I’d be wary of anything that quotes computer stuff in “billions” of dollars. This is likely to be a Facebook “billion” and not the sort of billion that is a number that people like us don’t even bother dreaming about. Don’t tell me that it needs a software ecosystem and that it can’t be done – we’ve seen in recent times two new operating systems, one based on Linux and one a little less so, each with their own specific ecologies: Android and iOS. I noticed this:
Isn’t this argument fundamentally invalid because the code is GPL – hence the authors can choose to cease development, but they pretty much can’t rescind their GPL licensed contributions?
There’s a part of me that suspects that figure might be bigger than Acorn’s entire software development budget. You’ve read the transcript of the speech about the development of Arthur – does that sound like it was done with lots of money to splash around? But even so, money is irrelevant if the desire just isn’t there. Look at Google. In many parts of our lives there is some aspect of Google, however people that remember when it started will know that for a ridiculously wealthy company with a notoriously insane hiring process, they have an appalling track record of creating pretty awesome things and then just killing them off when they get bored. As to our own preferred operating system, I think quite a lot of it is doing it for geek points, scratching personal itches, or “just because”. Things harder to put a price to, but ironically things that may well end up with a better result and better coding than if money was being flung around. So be careful attaching dollar prices to things, especially with calculations as simplistic as “this many lines of code at this hourly rate”. Remember how Linux itself was born… |
|
The first RADIUS authentication setup on the LAN at work was a FreeRADIUS install with a custom GUI based on WebMin¹. Eight years down the line certificate based authentication is the requirement and last week the config didn’t exist; this morning it did, which is somewhat faster than the estimate² of several weeks for build and test.

¹ Yes, I know there isn’t a downloadable GUI for that and the modules in WebMin are Perl CGI (which might as well have been Hungarian when I first looked at it) but somewhere I have a copy of that custom setup that I learned enough Perl to write. |
|
I just checked the “around a while” element for something less vague than my memory, and a reference to high cpu from fed and stack-mgr processes is mentioned here, which is dated four years ago last month. Clearly paying people substantial amounts doesn’t generate good code particularly fast. Assuming the “fixed” version doesn’t also contain a bucket load of bugs. |
|
Ok, in the context of Bounties, there is a person (or team) who, at a certain Bounty amount, will begin working on a given project. Obviously, it is not BILLIONS of dollars. I have seen Bounty amounts ranging from $500-$1,000+. Provided there is anyone who CAN do it, how much would the bounty have to be to make the kernel 64-bit? How much to make it SMP? I sense there is either a lack of interest or a lack of ability, as to why certain Bounties have not been reached. If you have the funding, but no devs, nothing happens. Likewise, if you have devs but insufficient funding, same story. But, if you have indifference, you lose, period. If nobody really CARES, then you could have both the funding AND the devs, but nothing will happen. There must be no “cart before the horse” mentality. Everything must be done in proper order, to accomplish the proper end. Haiku (OS) has been hogtied by this very situation, since the very beginning. And this is why, lo 15-17 years later, it has ONLY just reached beta1. True, Open Source projects are mostly “funded” on free-time and personal desire, but if there are others (users) who want to USE that OS (whatever one it may be) they find themselves complaining that X, Y, and Z are not working and that just bonks everything overall. |
|
There’s no reasonable figure large enough. Seriously – take a look at ARM 64 code. You might as well be porting the kernel to MIPS or something. It’s quite different to the 32 bit code of RISC OS. Going 64 bit will break (off the top of my head) every single kernel API that isn’t the vdu stream and anything that makes clever use of conditional execution, and given how ARM 32 is not ARM 64, it isn’t as if one could conditionally compile. And that’s just for starters. There would also be an extremely long list of issues to resolve (namely around reentrancy), plus the fact that callbacks and event claims jump into user supplied code (usually modules), which would pretty much require 64 bit software, as jumping to and from 32/64 could become the mother of all messes in no time. There are few real benefits to going 64 bit, and plenty of other things of greater value that could do with attention first. Such as Bluetooth support, to name one.
I think it would probably have to be – that reentrancy thing again… |
|
On the other hand, Arthur was not really useful and complex.
That depends. SMP is possible. It’s even available. 64-bit, hum… RISC OS 64 will be all that you want BUT RISC OS. |
|
The main thing with 64-bit is the whole thing where there’s now AArch64-only CPUs – the Apple A11 and A12, Cavium ThunderX and ThunderX2, and Qualcomm Centriq come to mind. Granted, only the Apple SoCs are meant for anything other than servers (where there is no legacy code). The question becomes, how long before AArch32 is dropped by ARM themselves? (Android effectively requires AArch32 due to the number of AArch32 binary blobs in the Play Store, but Google could just decide to pull the plug on AArch32. Windows 10 on ARM somewhat requires it too, due to the Windows Store having a lot of AArch32 code as the only ARM option.) So, the benefit to going 64-bit could be “continued availability of compatible CPUs”. But, yeah, it would also involve either lots of emulation, or throwing away all of the existing codebase (that hasn’t been already thrown away by the various compatibility breaks over the years). Conversely, ARM is still launching new AArch32-only designs – Cortex-A32 comes to mind. So, it may be possible to keep coasting on AArch32 for quite a long time. |
|
Probably never. I suspect that ARM 32-bit will slowly replace 8-bit things, and then stay in place for decades. The problem is not the lack of ARM 32-bit SoCs, but the slip from SoC to µC for the 32-bit market. RISC OS will need to adapt to this… or to find other solutions. Remember that we can now legally provide ARMv2a class components (no patent on the instruction set). And perhaps even ARMv3 chips in a few years. Borrow the MMU, FPU and SIMD from other projects (RISC-V?) and you’ll have an unlimited source of processors for RISC OS. For these reasons (µC and OpenCores support) I think that RISC OS 5 should not forget to support Amber Core. Amber Core is the guarantee that RISC OS will never die. IMHO, a generic bounty around RISC OS 5 on Amber (Core improvements, HDK, SDK and OS) would attract more retro/purist people.
And a bunch of 64-32-bit offers. |
|
While Amber Core is nice, with current FPGA technology it is just too slow to be a meaningful alternative for SoCs. On affordable FPGAs, Amber does not even reach ARM2 speed, and I don’t honestly know how much optimization potential there really is. I know that the Amiga guys are struggling to provide a 68000-compatible FPGA implementation that significantly outperforms “real” 68040 CPUs. So it looks like FPGA “soft” CPUs are probably 20 years behind performance-wise. While nice for retro stuff, it will not be any help keeping RISC OS 5 alive as a generally acceptable computing platform. And it does not look like that gap is closing. Regarding the AArch32 vs. AArch64 thing: it does not help RISC OS if AArch32 just survives. It needs to survive in SoCs that are ideally available on cheap SBCs and more powerful than what we already have. I don’t think this is particularly likely, unfortunately, as for most power-hungry platforms as well as Linux tinker boards there is no reason to stay with AArch32, and the cost of moving to AArch64 is nearly zero. The next RPi might be 64bit only, depending on SoC availability.
Starting to hack away on the Archie core on a MIST or MiSTer is really easy – the cores and development tools are all open source and/or free – but this has not attracted anyone from the retro/purist people yet. As Jeffrey once put it: nobody wants to tackle the hard things. I have to agree with that. |
|
Also, the microcontrollers are Thumb – they don’t include the 32 bit ARM instruction set. So as well as lacking an MMU, you’re going to have to translate the assembler parts to a different instruction set. One option that could be interesting is to use FPGAs like the Xilinx Zynq or Intel Cyclone SoC. Currently these have 32 bit ARM cores (eg Cortex A9, A53) which typically run Linux. But if it was no longer 32 bit capable, a soft ARM core in the FPGA logic could talk to Linux for its I/O – you’d get native speed communication at the expense of slow ARM speed. If things like graphics and disc I/O were offloaded, the performance hit for running a 100-200MHz ARM might not be so bad. |
|
Duplicating an ARM2 or ARM3 isn’t a great idea, because you’re tied into the 3-pipeline-stage, no/limited-cache model. 68K is even worse. You could however implement an ARMv2 or ARMv3 core (which I think are now out of patent) using a better microarchitecture and get a much higher clock rate. You might need to relax a few things (eg exact exception behaviour) which might need some OS-level changes, and you might end up implementing a few newer instructions (carefully avoiding the patents) like synchronisation barriers, but those should be relatively minor. On current FPGAs 100MHz is doable, 200MHz with some careful design. Next-gen FPGAs (Stratix 10 etc) might push that towards 400MHz, but those aren’t likely to be affordable any time soon. |
|
That’s true, but it isn’t easy writing an operating system from scratch, especially “by Thursday”. ;-) The guys who were paid to write the all-singing, all-dancing OS created something hideously unweildly in a now-dead language. Arthur was “plan B”. Seriously… people say that the speech about Arthur is obviously going to be biased towards Arthur, but if there is any truth in the story of it swapping and slowing to a crawl with just a bunch of clock faces running (and it is surely true, as it was ditched in favour of Arthur even if Arthur cheated!), what on earth do you imagine would happen if the system had to do something challenging, like run a database or a compiler? I recall back in the RISC OS 2 days, I ran about fifty eight copies of !Clock (there was a limit of 64 tasks in RISC OS 2) and they all did their thing. The machine didn’t freak. It didn’t swap. It only had a floppy disc! Sometimes it is a case of “as needs must”, and Arthur was the 32 bit BBC MOS that Acorn shipped in their new computer. Yeah, it was a bit rubbish, but it got refined into RISC OS 2, and that was further (greatly) refined into RISC OS 3. I think Arthur → RISC OS 2, and RISC OS 2 → RISC OS 3 were the two greatest iterations of the operating system that Acorn ever did. In comparison, the RiscPC version was little more than “some patches to make use of new hardware”. I guess it’s telling that it was called 3.5 instead of 4. ;-) But, as I said at the beginning of this ramble, writing an OS is hard. It is even harder when the machine is almost ready to ship and an operating system is required, like, NOW!
Why not? A 64 bit OS with 64 bit apps and software is going to have no need whatsoever for the 32 bit stuff. That’s the thing with ARM cores. They are licenced cores, not entire processors, so a chip baker that doesn’t need the 32 bit world can just ditch it entirely. Save die space, possibly save energy too, and make the internals of the processor less complicated.
Depends on how much it is still being used. The thing is, there is a huge amount of code and resources for 32 bit, so I don’t imagine it will vanish for a while. By then we’ll probably be looking at ARMvX and 128 bit or somesuch. And it’ll probably look like a really weird CISC processor…
That’s the only way to run RISC OS on alien processors. For the purposes of discussion, AArch64 is very alien. As I said a while ago (was it this thread?), making RISC OS 64 bit would be akin to porting it to something like MIPS. It’s that different.
Uhh… Isn’t that a 26 bit PC+PSR era processor? That’s… not so useful.
I don’t. You’re forgetting three things:
That’s a hard market for RISC OS to exist in. The thing with microcontrollers (as opposed to microprocessors) is that things are very much simpler for microcontrollers. Why does a µC need an MMU? Why would it need a GPU? Why would it need huge amounts of RAM? Why would it need to, say, boot from an SD card when it’s a world where embedded firmware is placed into on-chip ROM? I don’t think we need to panic just yet, but I do wonder where AArch32 fits into the grand scheme of things. The Thumb/Cortex-M range deals with the lightweight microcontroller stuff, the AArch64 processors deal with the needs of the likes of Linux. This leaves an awkward middle ground – AArch32…
Is that 100MHz ARM operation, or running the FPGA itself at 100MHz? So all this effort to get a simpler processor that is going to have its ass handed to it by a sixteen year old Iyonix; and maybe even have its ass kicked by a twenty two year old StrongARM RiscPC? |
|
This is a non-starter for numerous reasons:
Basically if you’re going down the ASIC route you need something that looks like a Cortex A, and I’m sure ARM would sell you a licence for one of those. But you’d need a few million dollars to begin with. |
|
Both (you may feed in a different clock on a pin, but you can multiply it up to whatever your design can handle). |
|
I just looked up ARX on Wikipedia. That page needs an editor, stat:
|
|
Oh my god! “something hideously unweildly in a now-dead language” describes that entire paragraph! |