Porting ARM32 BBC BASIC to ARM64
Glenn R (2369) 125 posts |
I think we may have just discovered where the final batch of RIFA capacitors from the Beeb PSUs ended up. |
Piers (3264) 42 posts |
If you put in an initialisation for safety, it’s impossible for a sanitiser to know why – you probably don’t know why either, apart from it being habit. It’s effectively masking bugs. I suppose it’s down to whether you’re an advocate of defensive programming or not. If I have a pointer that (according to my algorithm) must always point somewhere, I would prefer not to have to check it for null at runtime. I’d prefer an analysis tool to tell me it hasn’t been initialised (probably at compile time, but also at runtime under the sanitiser when running my extensive automated test suite :-) ). I could assert that it isn’t null, but putting asserts everywhere is error-prone compared to an automatic sanitiser. That’s not to say asserts aren’t equally necessary, just for different things, like checking the members of what it’s pointing to. I personally dislike defensive programming because it’s usually impossible to backtrack to a safe state if something weird goes wrong. This is why I love asserts – the programme should abort (assert or sanitiser exception) and the bug be fixed before release.
Its “uninitialised variable” checker fails to spot any bugs because you’re initialising every variable. You’re masking all the bugs it’s designed to spot. Also “more of a problem with pointers”, only in that they’re more likely to result in a visible crash. An uninitialised buffer size variable is more likely to result in, say, truncated files, or excessive memory allocations. Much more subtle to spot.
I checked WSS’s website before I mentioned it, and it’s still listed, though I can’t imagine they sell many copies at that price. But I loved it. It’s always surprised me that clang/gcc took 20 years to reinvent (or copy?) it. |
Rick Murray (539) 13806 posts |
Yes. Never make assumptions. ;) So, we’ll have to agree to disagree. ;)
There’s not really any such thing when things go screwy. That’s why copious amounts of tracing/debug messages are helpful to follow what’s going on. Of course, elsewhere in the world with proper development environments, it’s probably more like TurboC used to be where it’ll halt the program and say “here”. But alas this is RISC OS, so if you’re lucky you’ll get a backtrace that bears some resemblance to reality.
I was thinking more the 26/32 thing. Or does it come with source so you can build your own? |
Piers (3264) 42 posts |
Oh, didn’t think of that. I can’t imagine it’d be technically tricky for Robin or Julian to recompile it, if they could be bothered. But, I’d be pretty amazed if Julian still had a RISC OS machine. I wonder if gcc10’s runtime sanitisers could be made to target RISC OS somehow. I’ve no idea what’s required, but I doubt much of it is platform-specific. |
tymaja (278) 172 posts |
I’m using gcc for everything (except the BASIC code itself) – the BASIC code is in GNU asm format, but I have written it to be very easily portable to the RISC OS assembler format (except that it is 64 bit); the labels have a : at the end, but that can be removed. I couldn’t figure out how to do the workspace allocation in the Acorn way, so I just make a workspace file with ‘VARS’ as the pointer, then all the workspace variables starting with a _ symbol … then copy/paste, and declare the variable names without the _ : VARNAME = _VARNAME - VARS, which then lets me do LDR X28,[ARGP,#VARNAME] in the actual assembler files!

Where it gets interesting is debugging (both asm, and with gcc). I made a basic SWI emulator that let me do stuff like that Mandelbrot I posted recently, but am now working on a more complete SWI system… The challenge is that I can’t get debug to work 🤪 so the ‘error message’ I get when something goes wrong is that the software instantly closes all the windows and is gone (so anything on the display is lost too). The debugger won’t link to the software for some reason. Outputting text to a console window is the only way to ‘keep’ data, but that is challenging, because the console locks up if you dump too much data at once, so you can’t dump ‘everything’ leading up to a crash either. Writing RISC OS software for years makes this style of debugging acceptable, but it would be nice to have an advanced debugger.

That ‘RiscPC with an ARMv8 CPU with ARM610 MMU’ emulator I made a few months ago booted to desktop (and was a RISC OS program written in ARM32). With more development, it would be possible to upgrade it further, and allow it to be paused at any moment, for inspection of all memory, registers, memory mapping, page tables, etc. When I return to work on that, it will run on RISC OS (only), and will emulate a Raspberry Pi (I am thinking RPi4 hardware, 512MB RAM – a slight modification to the HAL.Top file will allow this). |
tymaja (278) 172 posts |
(continued) – I did add a feature I think was quite interesting to that ARMv8 emulator. I redefined the ARM32 ‘SWINV -1’ (a SWI with all the SWI number bits set) as a new instruction, ‘AARCH64’. That instruction causes an instant switch to ARMv8 ARM64 emulation. This is actually rather easy; I just keep the registers stored as 34 × 64-bit registers, and ARM32 mode reads R0–R15 as the low 32 bits of the 64-bit register bank, while switching to AARCH64 mode copies R14 to X30, and R15 to the PC. The basic functions of the CPSR (NZCV) map to the same bits in the PSTATE NZCV register! In the ARM64 emulation, I defined SVC #&FFFF to be the ‘AARCH32’ instruction, which reverses the above changes.

While I didn’t use those features in the RPC ARMv8 emulator, I did use them in the ‘BASIC emulator’ I made (also on RISC OS), and started converting parts of BASIC32 into ARM64, using the AARCH32 and AARCH64 instructions to switch back and forth. I also have, on RISC OS, the BASIC source code, converted (by hand) into a series of BBC BASIC files (organised in the same way as the original source code), so that I can double-click and create new BASIC modules in a second or two! (I mislaid my copy of the ROOL development software, so this allowed me to still experiment with the BASIC code.) I added a fair few ARM64 instructions to the assembler as well (including the AARCH32 and AARCH64 instructions), and set it up so that those instructions also switch the assembler between the two architectures, so you can code back and forth between the two instruction sets, then run the module on the BASIC emulator for testing. This is actually what made me think converting ARM32 BASIC into ARM64 was feasible, despite the architecture differences.

I will return to the above work, but back to BASIC for now; I have finally learnt C, and progressed quite far on ‘a C interface that handles video, IO and all SWIs, compatible with the RISCOS32 way of doing things, except with 64-bit pointers etc’ for my ARM64 BASIC. 
I will continue the BASIC port. I am understanding the code more and more, which will allow me to properly put 64 bit and long strings into ARM32 BBC BASIC. Once BASIC is complete, I will convert it into C, so that it is more processor-agnostic. I have already designed some compatibility structures for this; the ‘A%–Z%’ integer variables are stored in 64 bits, but, with a minor performance penalty, the low 32 bits can be stored in the original ARM32 32-bit places (good for the CALL interface etc). Strings are also ‘256’ bytes, and continue in another place once they become ‘long strings’, meaning that the classic STRACC, OUTPUT and ERRORS buffers can be accessed as they used to be.

Once work on this is a lot further ahead, I will make an RPi4 emulator that runs RISC OS 5, has 128–1024MB RAM and the ‘new’ style MMU. It will be an interpreted emulator, so a bit slow, but it will have full debugging, breakpoint setting, system register viewing, etc. And it will have the AARCH64 instruction. I have actually done a lot of the work on this and have it safely backed up for later. What stopped me previously was that I couldn’t get the keyboard / mouse to work – but I know I could now, with what I have learnt in the last few days. Anyway, back to BASIC (it has MOS compatibility like Basic V, and what I am working on now is the MOS32 compatibility layer) :) (edit in progress; a paragraph that annoyingly vanished reappeared after posting. iOS…) |
Ralph Barrett (1603) 153 posts |
After spending the summer chugging around on my canal boat, I’m now back 32-bitting Theo van der Boogaert’s !ARM_Debug utility, so that it will run on a RPi. !ARM_Debug can break into any RAM-based RISC OS programme at a preselected address, and then continue to run that programme in a sandbox with all register values etc. being shown on a simple (BBC-like!) screen. One very useful option is that you can log all the instructions to a file (with or without register values). You also have access to a ‘*’ prompt screen, so that you can perform *MemoryI and save out memory maps to file. There are lots of other useful options too, that Theo added over the years. Another useful option is to be able to trap any SWI or OSCLI call and break into that programme. I’ve just started looking at this code, and my first test was to trap ‘OS_Find’ and capture the subsequent code (from MessageTrans). Here is some random code as an example of the text capture format: FC14CAF8: SWI XOS_Find |
tymaja (278) 172 posts |
If there was anything I could do to help with 32-bitting that code (permissions etc allowing), I would be interested to help. It sounds like a very useful thing for anyone developing for RISC OS (or those developing RISC OS itself!) |
tymaja (278) 172 posts |
(in case embedded image doesn’t work): https://ibb.co/kJz5N4G

Current state of the BASIC port: temporarily, I can’t load or save stuff any more (I was using SWI macros for those commands, which called some handler code which allowed loading / saving of programs to a directory on my desktop). So I had to design and type in the demo in the attached picture! BASIC really needs access to Acorn-format SWI commands, and VDU drivers. I have gone through my BASIC port, and am now using a new ‘SWI macro’. All it does is call a ‘SWI call’ C function, with the usual information given to the SWI / returned from the SWI, as one would expect. All the VDU stuff goes through OS_WriteC etc.

The VDU drivers are quite complex, and are heavily tied into the kernel. So, over a few days, I have ported the required parts of the RISC OS Kernel over to C code. It obviously isn’t a ‘proper’ kernel, but technically it is an implementation of parts of the RISC OS Kernel, in C. In summary: aside from the lack of MMU, it does have software vectors linked in to SWIs as usual, and has the SWI handler. The SWI handler calls a null SWI handler for stuff not implemented. The SWI handler includes the VecSwiDespatch stuff, and so far supports VDU: most of the text display stuff, and I just got line drawing working, hence the picture above :) The SWI code that is there is done in the same way as RISC OS does it. I am using a simplistic conversion method: ‘32-bit pointers become 64-bit’. It is all done in C (not a single assembler instruction anywhere, except for BASIC itself). It is obviously running ‘in a window’ on Linux. The only extra code I added is a simple ‘HAL’, and all that does at the moment is handle the ‘VRAM’ and keyboard IO, so it is tiny. This allows the RISC OS keyboard handler code to interface with the ‘outside world’ without modifying how RISC OS does it. 
The video handler runs in a different thread, so the ‘RISC OS’ C code can do what it wants, and the ‘display’ just refreshes in the background without any input needed from RISC OS (so, kind of like a video chip on real hardware). I’m not trying to make a fork of RISC OS 64, but it does show that it isn’t impossible. I’ve just made a framework so that I can write some decent VDU drivers in C, and later use it to convert almost all of ARM64 BASIC into C as well. The benefit of the ‘minimalistic RISC OS Kernel’ is that I can then create these VDU drivers, in C, in an Acorn-compatible way. Even if it ends up being totally useless, I am learning C, which I have never used before, so that is a benefit! Regarding the screenshot above: everything is ‘real’, except that the processor report string – which is real, because that is what is in the Pi5 it is running on – doesn’t use any autodetect code for the CPU. The memory is real (DRAM + VRAM), although the AplSpace slot is 512K, so BASIC starts with 512K for now. |
tymaja (278) 172 posts |
(on page 3 of thread): (regarding the practice of initialising variables on creation, even if they are only initialised later)
I do agree with this; while learning C, I have occasionally had ‘warning: variable used uninitialised’ during compilation (when I declared a plain uint64_t variable;) – which was helpful, and which could have wasted a lot of time if I had set it to = 0 when I declared it, then forgotten to set it to the proper value a fair few lines down!
I will still keep writing my comparisons out explicitly, though, because the compiler I am using (gcc) gives only a very mild warning if I accidentally type = rather than ==. It says ‘warning: suggest parentheses around assignment’ :O. I do make sure my code compiles without warnings, but having used BASIC since before RISC OS existed, and never learning C till now, it is all too easy to do = rather than ==. A bonus (for me, in the short term) is also that, in the VDU drivers, there is a lot of flag testing, and also passing registers around between functions, so a lot of changes are needed. I find that writing the test out explicitly
makes it easy to see that it is a test for 0 or 1 in the chosen flag, which helps a lot given all the other stuff I need to track (data handed between functions in registers, functions relying on registers incidentally set two functions ago, status flags used to pass data etc); :) |
Stuart Swales (8827) 1349 posts |
C’s operator precedence is there to trap you! I strongly suggest use of parentheses. |
tymaja (278) 172 posts |
:) I actually do if (0 == (CursorFlags & stuff)) { – I tend to rely on ‘BIDMAS’, but use parentheses for everything except ‘MAS’ … and often use brackets for even those! I was impressed by the layout of the Acorn VDU drivers today; I added 4-bit mode support, added the code that calculates character and colour tables for 4-bit modes, then when testing it, I typed DRAW 1000,1000 … and it actually worked! The weirdly complex graphics code with AND, OR, EOR etc. does have its advantages! I will slow down for a few days on this project … I am doing it by first creating ‘disgusting code’ in C (using a global array reg17), converting to C but using that global array, often expanding the code a lot to get things like ‘carry flag signalling’ sorted, then removing the dependence on global registers. It is quite rewarding when nice code comes out after this – even simple things like making a struct for the ‘Line Control Block’, which is usually just R0-R8, can make things a lot ‘tidier’ overall :) |
Piers (3264) 42 posts |
But much of the time you compare variables with variables, not with a constant. That would just give you a ‘very mild warning’. It’s easier to be clear and consistent and just have zero warnings. Given you say you’re using gcc: I also tend to use -Wall to include extra warnings, though sometimes it’s excessive. You can individually turn some off if necessary. |
Rick Murray (539) 13806 posts |
I consider Norcroft giving any warnings to be a problem. Accordingly, the number of messages that pop up while building the OS causes me anxiety… |
Piers (3264) 42 posts |
Sadly, there was no source-level quality control at all that I recall. You could commit what you wanted – if it didn’t build you had to fix it, but that’s about it. Certainly no code reviews, CI or automated testing of any sort – to be honest, it wasn’t until about 2006 that I ever came across such advanced methodologies. Of course, my code was perfect. And because I manually did the Java builds for development, testing, and distribution (mostly on Solaris), no one knew any different. |
tymaja (278) 172 posts |
I definitely have a ‘no warnings’ policy for builds too – mainly because warnings are at least pointers to ambiguous code / potential bugs, but also because the IDE I am using is quite ‘clunky’ visually, so I tend to leave 3 lines for the build reports, and any warnings mean you can’t see whether or not the build even completed! I will try and find my copy of the ROOL dev software – probably time to upgrade it anyway, but it would be interesting to see how close I can get the VDU drivers, speed-wise, to the asm ones. (Current progress: 1, 2, 4 and 8 bpp work – 16 and 32 will be easy to add, but need to process mode selector blocks first – and text works; still need to add VDU 5, teletext, and some more graphics stuff!) |
tymaja (278) 172 posts |
(link in case image fails to show) : https://ibb.co/nbpY8hz Current state of BASIC / VDU port: Edit : Need Mode 7 support. Also need VDU7 technically… |
Simon Willcocks (1499) 509 posts |
Nice work, I look forward to stealing it! |
tymaja (278) 172 posts |
:D it needs a fair amount of work still, but it is getting there! It is called by a ‘SWI handler’, and calls SWIs as well. It runs in a ‘partial C port of RISC OS’ (which starts at RISCOS_Start in kernel.s.HAL); it is a minimal port, just enough to host BASIC and VDU (and Supervisor / OSCLI).

The RISC OS VDU drivers are fairly well self-contained; they do call some memory management stuff (so far mainly on mode change) – just using OS_Heap and OS_ChangeDynamicArea. They also call vectors at times (GraphicsV), and some of them are vectorised SWIs too (I have coded all of this, so the ‘SWI number matches vector number’ SWIs actually go through a vector which can be changed). They are ‘tied in’ to the kernel in RISC OS, and take up a large part of the kernel workspace. They are needed by the kernel (and do a lot of the window drawing, sprite rendering etc).

I don’t have an answer to this yet, but do wonder: what if they could be ‘instanced’? When coding on RISC OS, I sometimes redirect VDU output for easy window framebuffer redraws – if an error happens during the redraw, it takes down the whole system, because the VDU drivers are ‘global’ (my workaround was to switch to / from sprite and do anything that could raise an error before redirecting to sprite!).

So – instancing. The VDU driver workspace is mostly in one place – the ‘Main VDU driver workspace’ part of the kernel workspace. They do use a few scattered variables, such as in the zero-page OS_Byte space, as well as a block of ‘workspace’, also in the kernel workspace (used for various things, with offsets in one of the vdudecl* files). They also read ‘core’ system variables, in particular the various versions of ‘display memory start’ throughout the kernel workspace. I do think they could be instanced, but am currently sticking to a faithful port (to C) of the RISC OS drivers. If I could make them good enough to be a part of RISC OS, then instancing would need to be considered. 
It would be great if the OS could have an instance for the desktop, generate another instance for the F12 command line, and for apps to get their own instance (there are finer details, such as apps having two different windows containing sprites to which they redirect, although in reality the apps themselves would be managing redirection and VDU safe areas already). I don’t have an answer, but do think ‘instancing’ (and re-entrant VDU drivers) is possible. What got me thinking is that I made a small bare-metal ‘hello world’ program as a kernel.img for a Raspberry Pi 3B+ (in QEMU) recently, and the initial output to the serial port was: HeHello Worldllo WHello WororldHello ldWorld (or something like that). This changed to ‘Hello World’ once I put the ‘spare’ cores into an infinite loop based on MPIDR – but it made me think about what would happen if the RISC OS VDU drivers were called in a re-entrant way :D. I won’t waste time on this (bare-metal RPi coding), as I have noticed ‘commented-out C-style function headers’ scattered throughout the core kernel code, suggesting there is a C port underway – so for now I can ignore all the core kernel and MMU stuff, because a C port is likely in the works! I will do some ‘code consolidation’ for the next few days; my tasks are: |
David J. Ruck (33) 1629 posts |
That has been in C since C99 – see stdbool.h |
tymaja (278) 172 posts |
Wow – I hadn’t realised that – thank you! :) – I added stdbool.h and set a bool var to false in my RPi3 bare-metal test code (and it works!). While I am not making anything ‘bare metal’, I am trying to follow ‘bare metal rules’ for the VDU drivers, so that they don’t rely on any libraries that may not be available if they were used in a bare-metal project in the future. I hadn’t started the retrospective conversion yet, so only a few parts of the code use the (non-bool) TRUE / FALSE; I used a global define for TRUE / FALSE, so I will remove the #define now, then let the compiler show me where I used those defines!
|
Rick Murray (539) 13806 posts |
Me too. ;) I know about stdbool, but it is lowercase so… |
David J. Ruck (33) 1629 posts |
It’s C not BASIC, pretend you are using a proper language, and by that I mean C++. |
tymaja (278) 172 posts |
There is method in my madness :D – I used #define with uppercase TRUE / FALSE because I am using a C++ compiler to compile my current C code! Originally because I used a class (not in VDU or BASIC, just in some code that calls it); the class isn’t needed, so I will remove it soon. The problem was that lowercase true / false were already defined, since I was using C++. A benefit was that it was super easy to convert the TRUE / FALSE back to true / false by removing the #define, recompiling, and fixing the errors that came up! |
Rick Murray (539) 13806 posts |
For me, things that are
Sorry Druck, I’ll go get my coat… |