C and RISC OS
GavinWraith (26) 1563 posts |
Alas that the RISC OS community is now sustained by only a handful of developers, who generously give their time to projects which become increasingly urgent. As lovely new cheap hardware becomes available, RISC OS needs lovely new software to exploit it; and not only for the updating of existing software. It seems to me that there are currently just two choices for programming with C in RISC OS: ROOL’s DDE and GCC 4.7.4. The former is a development of the Norcroft C compiler. I used to prefer it because it offered the possibility of smaller binaries, by reason of the shared C library being provided in ROM. On the negative side, compared with GCC, it only offers floating point in software (slow). I would be grateful if those who know better than I what is going on could correct my remarks, because they are probably inaccurate. What I would like to see, of course, is the availability to RISC OS of a C compiler that combines the advantages of both: the more advanced features of GCC and the simpler integration offered by a shared C library in ROM. |
Rick Murray (539) 13840 posts |
This. More than the dynamic library, the hanging on to an outdated emulated API means that anything compiled with DDE that uses FP suffers horribly. There have been developments with the DDE, so I remain hopeful that it will arrive some day… |
Steffen Huber (91) 1953 posts |
Thinking about the long-term viability of RISC OS, I am sure that we need a solution based on a virtual machine. ARM is not interested in maintaining long-term backwards compatibility – see 26bit vs 32bit, ARMv5 vs ARMv7, ARMv7 vs ARMv8, AArch32 vs AArch64. So we need a compatibility layer. LLVM would be a possible solution. So it would be Clang instead of GCC and Norcroft. Not sure how that would integrate with ARM assembler, but in the face of ARM inventing an ever-so-slightly-incompatible architecture every few weeks, we should get rid of as much assembler as possible.

But no matter if it is ELF-GCC or Norcroft or Clang: RISC OS needs developer manpower. Not sure where those developers should come from, as RISC OS is an increasingly hostile environment for modern software development, because of the aging tooling. No IDEs, no modern programming languages, no modern compiler infrastructure, no modern VCS. It is more and more becoming a typical embedded platform – develop on a “real PC” and (integration-)test it on RISC OS.

The biggest problem I see with Norcroft is that RISC OS development itself is dependent on Norcroft at the moment. And since RISC OS has the source available, but not the Norcroft compiler itself, it could be considered a kind of risk for future development. I guess further development of Norcroft is dependent on some extremely limited available manpower (perhaps just part-part-time from Ben). This is far from an ideal situation. |
Steve Pampling (1551) 8170 posts |
Truthfully, would I be greatly upset if the RPC didn’t run software as fast as the newer kit? If it doesn’t run some software at all due to lack of a feature? Oh, well, time to use the new kit which, despite inflation, still doesn’t seem to total up to what the RPC cost. |
GavinWraith (26) 1563 posts |
I have always liked virtual machines, ever since encountering Sweet16 on the Apple ][. But the whole point of hard float is speed, in which case interpretive overhead has to be avoided. In the old days VMs tended to be stack-based; good for simplicity but not speed. The Lua 5 VM broke new ground by being register-based, so that each instruction does more work, and other VMs have copied its success. But for emulating an ARM-type VM on an ARM platform, a JIT approach would seem preferable to static compilation. Already we have more than three versions of the Raspberry Pi. No doubt the Raspberry Pi organization is contemplating the implications of this. We should update Heracleitus: from “everything flows” to “everything forks”. |
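(To make the stack-vs-register distinction concrete, here is a toy sketch in C – the bytecodes are invented for illustration, not Lua’s actual instruction set. Adding two numbers takes three dispatches on the stack machine but only one on the register machine, which is where the speed difference comes from.)

    /* Toy illustration: the same "2 + 3" on a stack VM and a
       register VM.  Opcodes are invented for this sketch. */
    #include <stdio.h>

    enum { S_PUSH, S_ADD, S_HALT, R_ADD, R_HALT };

    int main(void)
    {
        /* Stack-based: PUSH 2; PUSH 3; ADD -- three dispatches. */
        int stack[8], sp = 0;
        int sprog[] = { S_PUSH, 2, S_PUSH, 3, S_ADD, S_HALT };
        for (int pc = 0; sprog[pc] != S_HALT; ) {
            switch (sprog[pc++]) {
            case S_PUSH: stack[sp++] = sprog[pc++]; break;
            case S_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
            }
        }
        printf("stack VM: %d\n", stack[0]);

        /* Register-based: ADD r0, r1, r2 -- one dispatch does all
           the work, because the instruction names its operands. */
        int reg[4] = { 0, 2, 3, 0 };
        int rprog[] = { R_ADD, 0, 1, 2, R_HALT };
        for (int pc = 0; rprog[pc] != R_HALT; ) {
            switch (rprog[pc++]) {
            case R_ADD:
                reg[rprog[pc]] = reg[rprog[pc + 1]] + reg[rprog[pc + 2]];
                pc += 3;
                break;
            }
        }
        printf("register VM: %d\n", reg[0]);
        return 0;
    }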
Steffen Huber (91) 1953 posts |
LLVM code can be statically compiled to “real machine code”, giving maximum performance – but the important point is that the “object code” remains available to be compiled to machine code forever. Until such a static compiler is amended to support ARM’s latest creation, the VM can be used to execute the LLVM code instead – it still works, but slower. So it is win-win for everyone. |
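(As a concrete illustration of that workflow, using the stock LLVM tools rather than anything RISC OS specific – a sketch, assuming clang, llc and lli are installed:)

    /* mean.c -- any ordinary C source will do.  The point is the build
       flow: the .bc bitcode file is the portable "object code"; it can
       be statically compiled for whatever CPU exists today, or executed
       directly when no such compiler exists yet:
           clang -O2 -emit-llvm -c mean.c -o mean.bc   (bitcode, keep forever)
           llc mean.bc -o mean.s                       (native code for this CPU)
           lli mean.bc                                 (run the bitcode directly)
    */
    #include <stdio.h>

    int main(void)
    {
        double total = 0.0;
        for (int i = 1; i <= 10; i++)
            total += 1.0 / i;                /* 10th harmonic number */
        printf("H(10) = %f\n", total);
        return 0;
    }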
Rick Murray (539) 13840 posts |
Steffen:
Not really – who is going to be writing this fancy new not-entirely-written-in-assembler version of RISC OS? Work on the sources for the version that we have is fairly slow, due to… well… limited developer time. I do not see 32 bit going away any time soon. It is widely deployed, and to be honest there’s an awful lot of stuff out there that doesn’t need AArch64. So the 32 bit version will sit in between Thumb and AArch64. For anything beyond that, we might be looking at some form of emulation. But, of course, this is assuming that AArch32 vanishes. And I don’t think ARM is that daft.
If that’s what suits you, go for it. You’ll also get benefits like full versioning, and we know that RISC OS’s connectivity is good enough that it’s a doddle to mount a share on another machine and run stuff from it. I mean, we aren’t talking about Apple here, are we? ;-) Back in the land of the viable now… Steve:
This would be the overall best solution – it should be VFP that is emulated on older hardware, not newer hardware (with hardware FP) doing the emulation. I did some simple tests a while back and the difference between native FP and emulated FP was dramatic. And I’m talking chunks of minutes vs fractions of seconds kind of dramatic. But we’re back to the main problem. Who understands VFP maths and low level assembler well enough to write a module to emulate the VFP? It’s all-round hardcore wizard level stuff. I’ll be honest, I looked at the FPE out of idle curiosity and I don’t understand a bit of it…
That’s the approach I’m taking these days. I’m developing for RISC OS 5, and testing on such. If the software happens to work on older versions, that’s good, but making it work on older versions is not a priority unless the fix is really really simple. RISC OS 3.7 is twenty years old now, and I don’t have any 4.x versions to test with anyway (and they’re for obsolete hardware too). |
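(For anyone wanting to reproduce the kind of floating point measurement described above, a minimal sketch follows. The float-ABI options are the standard ARM GCC ones; the RISC OS specific detail – the DDE emitting FPA instructions that the FloatingPointEmulator traps one by one – is what produces the minutes-versus-seconds gap.)

    /* fptest.c -- time a tight floating point loop.  With ARM GCC, the
       standard options select the FP strategy, e.g.:
           gcc -O2 -mfloat-abi=hard -mfpu=vfp fptest.c -o fptest-hard
           gcc -O2 -mfloat-abi=soft          fptest.c -o fptest-soft
       Hard float uses the VFP unit directly; soft float does the maths
       in integer code.  FPA instructions (as emitted by the DDE) are a
       third case: each one traps to the emulator, which is the slowest
       of all on modern hardware. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        volatile double x = 1.0;
        clock_t t0 = clock();
        for (long i = 0; i < 10000000L; i++)
            x = x * 1.0000001 + 0.5;
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("result %f in %.2f seconds\n", x, secs);
        return 0;
    }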
Steffen Huber (91) 1953 posts |
My point was not RISC OS the OS itself, but software written for RISC OS. It is comparatively easy to keep a few central pieces of code (like the OS, or an LLVM compiler and VM) compatible with ARM’s efforts to break compatibility. It is – as the past has shown – comparatively hard to make the thousands of pieces of code where no source is available compatible again. |
Chris Mahoney (1684) 2165 posts |
Right, but given the glacial pace of RISC OS development, “soon” may be closer than you think. But as noted, it’s tricky. If everything was in C and the bulk of the OS (kernel and the like excepted, of course!) could just be recompiled then that’s one thing. But with huge chunks of assembler, things quickly become “fur and tusks” (to borrow your quote from the other thread!) |
Rick Murray (539) 13840 posts |
Sorry, I interpreted “we should get rid of as much assembler as possible” as referring to, well, most of RISC OS.
This is surely an argument in favour of some sort of “open source”? Yup. I’m right there with you. I can understand why a commercial project is not going to be willing to embrace open source; however, when a product reaches the point where nobody wants to provide any sort of official support any more, what does it hurt to make the source code available, even if this means securing some permissions? Surely it is better than having the software and its legacy relegated to being nothing more than an artefact of history. |
Steve Fryatt (216) 2105 posts |
That’s not a Norcroft-specific thing. GCC is perfectly capable of using the Shared C Library, in exactly the same way. I’ve not compared binaries, but I can’t see why they would be significantly different in size. All of my software is compiled (cross-compiled, in fact) using GCC, and all of it uses the SCL.
It’s called GCC. It isn’t quite that simple, because it depends on what you’re trying to do with it. If you want UnixLib’s extra functionality, you can’t benefit from the SCL (hence stuff like NetSurf being as big as it is). I’m not sure what the situation is with floating point; it shouldn’t be too much hassle to experiment, however. Norcroft’s main advantages, as far as I can see, are that it’s friendlier to someone who’s never used a CLI-based C compiler before, and that it’s what is used to build the OS. |
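(To make that concrete: with GCCSDK the choice of runtime is just a compile switch. A minimal sketch – the -mlibscl option and the arm-unknown-riscos-gcc name are GCCSDK cross-compiler conventions:)

    /* hello.c -- the same source, two runtimes:
           arm-unknown-riscos-gcc hello.c -o hello            (UnixLib, self-contained)
           arm-unknown-riscos-gcc -mlibscl hello.c -o hello   (Shared C Library in ROM)
       The SCL build is smaller because the library lives in ROM, but it
       forgoes UnixLib's extra POSIX-style functionality -- the trade-off
       described above. */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello from RISC OS\n");
        return 0;
    }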
Steffen Huber (91) 1953 posts |
Yes, but past experience shows that this will not work out. And it is still a major job to gather all those sources and recompile them. Just look how hard it was to make UnixLib-based stuff ARMv8-ready. You would really need to make sure that all the open source stuff is gathered in a way that allows it to be built automatically. Oh, the autobuilder – already invented :-)
You are certainly correct, but if someone has lost interest in the software for our beloved OS, why would he want to put in a great amount of work to make it releasable as open source? Past experience shows: in the majority of cases, this will just not happen. So it can’t be a good strategy for the future. We need to make sure that “binaries” will run unchanged on future CPUs. |
David Feugey (2125) 2709 posts |
Integrated emulator? |
Steffen Huber (91) 1953 posts |
This is another possibility, but it needs a lot of work to achieve adequate performance. Having a generic high-performance ARM-on-ARM JIT would be attractive for many different projects, but is a big task. And if we look at Aemulor and ADFFS, there are rather a lot of small and big obstacles on the way to a seamless user experience. I think the “intermediate representation binary, compile-on-demand for the target system” way is much easier, but would of course only help with future problems, while the emulation idea also helps with past problems. |
Rick Murray (539) 13840 posts |
Wasn’t that sort of what Java was supposed to be? |
Alan Robertson (52) 420 posts |
What are the roadblocks stopping GCC from being able to build RISC OS? I’m aware of the RISC OS Open shared source licence restrictions and the inability of GCC to produce modules, but was wondering if those are the only two things? |
Rick Murray (539) 13840 posts |
Eh? It can’t build modules? Are you sure about that? BTW, RISC OS comprises a HAL, a kernel, and about a hundred and forty modules chained together. :-) |
Alan Robertson (52) 420 posts |
Oh, I might be wrong, but I thought modules were something that GCC could not produce. So can GCC produce a build of RISC OS? |
Chris Mahoney (1684) 2165 posts |
Nope. |
Jeffrey Lee (213) 6048 posts |
I’m not sure offhand what the licence conditions are for GCC-built code, but I’d be surprised if there were any significant blockers to using GCC for building RISC OS. You need to bear in mind that there are several different aspects to licensing when it comes to most compilers – the licence of the compiler itself, and the licence of the runtime libraries that get linked into your code. |
Lee Noar (2750) 16 posts |
GCC can produce modules. |
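(For the curious, a minimal sketch of what that looks like. The CMHG keywords and the initialisation signature are the standard ones; the exact cmunge/GCC invocations below are from memory, so treat them as an assumption and check the GCCSDK documentation.)

    /* hello.c -- the C side of a minimal relocatable module.
       The module header comes from a CMHG description, hello.cmhg:
           title-string:        HelloMod
           help-string:         HelloMod 0.01
           initialisation-code: hello_init
       Build (invocations from memory -- check the GCCSDK docs):
           cmunge -tgcc -o hellohdr.o hello.cmhg
           arm-unknown-riscos-gcc -mmodule hello.c hellohdr.o -o HelloMod,ffa
    */
    #include "kernel.h"

    /* Standard CMHG initialisation entry: return NULL for success,
       or a pointer to an error block to refuse to load. */
    _kernel_oserror *hello_init(const char *cmd_tail, int podule_base, void *pw)
    {
        (void)cmd_tail;
        (void)podule_base;
        (void)pw;
        return NULL;
    }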
Alan Robertson (52) 420 posts |
Looks like the award for the most ill-informed post goes to me. Glad to see that GCC is more than capable of building modules, and potentially, one day, RISC OS itself. |
Steve Pampling (1551) 8170 posts |
It was true once; then the work was done. |
Steffen Huber (91) 1953 posts |
That change in status was in July 2005, by the way. GCC 3.4.4 Release 3. |
Tristan M. (2946) 1039 posts |
The last page or two could be separated and labelled as the “major stumbling blocks” thread. It breaks down to the same things every time. Slight change of subject: a few days ago I “ported” LibKern.s to GAS. It was just for my own education. Then I tried doing the same in C. Next I tried Kernel.s. |
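(For anyone attempting the same exercise, the mechanical part of the translation looks roughly like this – a trivial hand-made example, not taken from LibKern itself. Directives, label colons and comment characters are the main differences; GAS also prefers plain register names over the APCS aliases.)

ObjAsm source:

            AREA    |C$$code|, CODE, READONLY
            EXPORT  add_one
    add_one
            ADD     a1, a1, #1      ; a1 is r0 under APCS
            MOV     pc, lr
            END

GAS equivalent:

            .text
            .global add_one
    add_one:
            ADD     r0, r0, #1      @ GAS comments use @; labels need colons
            MOV     pc, lr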