C DDE Armv7 bounty
William Harden (2174) 244 posts
Hi. Just pondering a little about the DDE bounty (with EDID looking good, Filecore in progress, and work being done on USB indirectly through the sound system discussions, this is really the only one seeing no active work, to my knowledge). Fundamental question: is it sensible to continue to develop the DDE at all? I know the question has been asked in a different way as ‘can it compile with GCC?’ – I’m asking ‘should it compile with Norcroft C in future?’

Expertise in compiler development is rare. I suspect adding inline ARMv6/7 instructions is probably not too bad (the patterns are no doubt there to build from), but adding or changing peephole optimisations is probably technically challenging (depending upon the degree of optimisation afforded). The benefits of a good compiler for RISC OS are massive – an ARMv7-optimised OS 5.22 may well offer significant speed increases over SA-era ARM, and support for the various extensions may offer additional benefits still. However, adding those things will take development time. £500 is nice, but isn’t going to feed a compiler developer for long – so we’re down to hobbyists in their spare time.

By comparison, GCC is likely to be used for a lot of ARM development, and is being maintained by a massive community – and by nature, I would imagine, it will become increasingly efficient at targeting ARMv5/6/7 over time. Indeed, I would be surprised if Norcroft is still more efficient than GCC.

So – the question is: what is Norcroft’s USP? Is there a particular strength in using the Norcroft compiler over GCC? Or would the effort to modernise it to support newer CPUs not be better spent on a roadmap of transition to GCC compilation by default?
Jeffrey Lee (213) 6048 posts
Although I haven’t done any performance comparisons, I know that in some situations Norcroft produces significantly smaller code than GCC.
A roadmap to support GCC compilation would be nice, although the “by default” bit depends on whether we find GCC suitable for the job! I think John Tytgat’s idea of using a custom linker script to link modules together into a ROM image is a good one. It would (AIUI) allow us to easily replicate the way in which C modules are currently statically linked to run at their start address within the ROM and are statically linked to the ROM version of the SCL.
Rick Murray (539) 13840 posts
Why not? It isn’t a bad package, even if it pales a little next to the sort of development environments you find on other platforms. Whether you like them or not is a different matter – I actually find coding in Zap and working with raw files to be a hell of a lot friendlier than a complicated IDE that tries its damnedest to outsmart you, plus all the incessant auto-completion of variables and such.

Is it harder to write code with Zap? Yes. A lot harder. You need to know what is inside structs, and you need to write it out in full. But then, I know that array “thingy” contains “this” and “that”, not that array “t dot” contains “t cursor down enter”.

Granted, maybe it is just me who likes the long-winded approach. But given the amount of <expletive> floating around calling itself “software”, maybe others should try doing it the lengthy way too? How can you think you know your code when the IDE tries so hard to abstract you from it?
Yup. I’d offer to help, but I know diddly-squat about compilers. It is dead-easy to write a compiler. I wrote a simple BASIC compiler many years ago (long gone, and a good thing too – it was CRAP). It is much harder to write a useful compiler, and writing a decent compiler is an exercise in pure wizardry.
Well… it has been around for a decade or so and still hasn’t quite caught up with Norcroft. It can apparently build some versions of RISC OS, which is pretty good going. I wonder how they would compare if placed side by side with a Norcroft build. Anybody got a recent IOMD version I can chuck at RPCEmu?
To keep ROOL supplied with toilet paper and doughnuts?
I think, personally, it would be better to work on supporting newer CPUs, because RISC OS on a wider range of hardware is a good thing for our exposure in the world. The way I see it, if GCC wants to build RISC OS, GCC can adapt. :-) It appears to be doing so, so let’s leave it at that, and leave RISC OS’s dusty corners in peace, for fear of disturbing the spiders asleep there.
Oh, you’d better believe it. The beauty of RISC OS on an ARM machine is that it is small, tight, and fast. While the build fails for reasons as yet unknown (not that I’ve put any effort into looking!), trying to build RISC OS on a Beagle xM running the entire dev setup from a huge RAMdisc is… quite nippy. Understatement. [Of course, you’ll say it can build so much faster with GCC on a PC. I bloody well hope that a quad-core, microwave-clockspeed hunk of Intel with insane lookaheads and branch prediction logic could outpace a single-core, single-GHz ARM!]
William Harden (2174) 244 posts
The relative efficiency of the compilers (in code size / code speed) is what I was getting at. I’ve only ever used Norcroft and haven’t really dealt with GCC at all on RISC OS, but I do know earlier versions of Xcode used it as their compiler prior to Clang, so – given the handheld ARM platforms – I am assuming that code size and speed were relevant constraints in the iPhone / iPhone 3G era. I’m surprised if its efficiency is still less than Norcroft’s (pleasantly surprised for Norcroft!), given the relative development work that both have seen in the last 15 years – but if that were the case, and Norcroft were still producing more efficient, smaller code now, then the answer to ‘why Norcroft?’ is a very simple one indeed.

I’m not arguing the case of ‘free’ versus ‘not free’; more asking ‘what’s the best tool for the job?’. If GCC produced tighter, faster code already, or indeed if it produced equivalent code but offered a greater feature set and wider processor support, then there is a case for switching.

The other issue is developer resources. I agree that having ARMv5/6/7 support in the compiler is very important if Norcroft remains the best tool for the job. If, however, GCC produced faster, more efficient code, then looking at the things that prevent using GCC to build the OS components would perhaps be more important, as this would leave the ARM core support in the compiler to the myriad of developers working on GCC, rather than the small handful of RISC OS developers who may be able to take up such work on Norcroft.
Steve Pampling (1551) 8170 posts
From various things I’ve read over the last few years, the output from GCC was larger because certain items that would otherwise come from a shared library were being compiled in. Wouldn’t taking a number of simple projects (one small source file or similar) and compiling them with both GCC and Norcroft demonstrate the current state of the two compilers? William is agnostic and Rick is a Norcroft acolyte – anyone volunteering for the GCC acolyte position?
Rick Murray (539) 13840 posts
I wonder if a “small” test would show up the differences? A larger project may also present more variety of code constructs. An interesting test could be a jump table (like a switch block?). Norcroft does quite well. Microsoft C++ v1 (on x86) was abysmal (back in ’96). ;-) |
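For anyone who fancies trying that, a fragment along these lines would do. It is a purely hypothetical test case (not taken from any existing source), but a dense switch like this is exactly the construct a compiler may lower to a single table-indexed branch rather than a long chain of compares:

/* Hypothetical test case: a dense switch over small consecutive values.
   A good compiler can turn this into one table-indexed branch; a weaker
   one emits a long run of compare-and-branch pairs. */
#include <stdio.h>

static const char *opcode_name(int op)
{
    switch (op) {
    case 0:  return "NOP";
    case 1:  return "LOAD";
    case 2:  return "STORE";
    case 3:  return "ADD";
    case 4:  return "SUB";
    case 5:  return "MUL";
    case 6:  return "BRANCH";
    case 7:  return "CALL";
    case 8:  return "RETURN";
    default: return "UNDEFINED";
    }
}

int main(void)
{
    int op;
    for (op = 0; op < 10; op++)
        printf("%d -> %s\n", op, opcode_name(op));
    return 0;
}

Comparing each compiler’s disassembly of opcode_name would show straight away whether it emits a table-driven branch or falls back to successive compare/branch instructions.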
Steffen Huber (91) 1953 posts
The question is simply “where do we get more for less?”. Keeping Norcroft C at the bleeding edge of ARM consumes a lot of manpower. Cortex-A53 and Cortex-A57, anyone? How about NEON and VFP support? Support for C11? Taking a wild guess, maintaining a RISC OS version of GCC is likely less time-consuming, and it would mean that we use the same tools as the rest of the ARM world.

Switching to GCC also means that there is no longer a barrier to starting to tinker with the OS. We have significantly lowered the barriers on the hardware (RPCEmu, RaspPi) and OS (free to download) sides, but you still have to buy the DDE. Being able to cross-compile on powerful machines is also a big plus for GCC – building RISC OS is no small task on current RISC OS machines.

On the other hand, Norcroft is the tried-and-trusted solution: it consumes less memory and is faster on RISC OS hardware. There is of course that philosophical thing of “every credible platform has its own toolchain”, and it helps to make some money for ROOL. It would be nice to hear from Ben whether he can quantify the maintenance work vs. the income from licences in recent years.
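To put the C11 point in concrete terms, here is a small made-up fragment (nothing to do with the actual OS sources) using two C11-only features – _Static_assert and anonymous struct/union members – which a C90/C99-only compiler will simply reject:

/* Hypothetical illustration of two C11 features; not taken from any
   RISC OS source, it only shows what "C11 support" would buy. */
#include <stdint.h>
#include <stdio.h>

struct colour {
    union {                              /* anonymous union (C11) */
        uint32_t word;
        struct { uint8_t b, g, r, a; };  /* anonymous struct (C11) */
    };
};

/* Checked at compile time, costs nothing at run time (C11) */
_Static_assert(sizeof(struct colour) == 4, "colour must pack into one word");

int main(void)
{
    struct colour c;
    c.r = 255; c.g = 128; c.b = 0; c.a = 255;
    /* The packed value depends on byte order, so no particular result is assumed */
    printf("packed: 0x%08lX\n", (unsigned long) c.word);
    return 0;
}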
Steve Pampling (1551) 8170 posts
I suggested small and simple as off-the-shelf items requiring little if any modification of makefiles (AMU creates wonderfully non-portable examples). The ‘small’ was also applicable for other reasons:
1. The additional compiled-in code reportedly showed up more obviously in small binaries
Chris Johnson (125) 825 posts
I did play around with both GCC 4 and Norcroft last year when compiling up some recent libraries/binaries such as the IJPG stuff. Certainly GCC took longer, particularly with having to convert ELF to AOF. In terms of size, it depends on whether you are linking with the SharedCLibrary. The RISC OS shared C library is over 200K in size, so that should be taken into account when comparing sizes. I found that when linking with the RISC OS SharedCLib, the sizes produced by the two compilers were not that different: on final binaries of around 100–200 KB, the GCC output was about 10% larger.
Theo Markettos (89) 919 posts
Just to unhelpfully confuse matters, another compiler that hasn’t been mentioned yet is LLVM (and the Clang frontend for C). Of course, this doesn’t build code for RISC OS yet. But it seems to be much easier to develop than GCC, and ARM is putting a lot of effort into developing it. So it may be worth looking at doing the necessary to make LLVM usable for RISC OS rather than putting more effort into developing Norcroft, and then hopefully all of ARM’s improvements will come for free. (detail detail devil detail detail) In any case, making the toolchain compiler-agnostic is probably a good start. |