Any plans to have an open source toolchain to build RISC OS?
Theo Markettos (89) 919 posts |
A RISC OS developer conference is something I suggested to ROOL a few years ago, and had no response. We’ve previously done such things for the FreeBSD community and it’s worked well, and I have access to facilities in Cambridge that could probably host it for free (and with a supply of nearby accommodation if needed). The main issues are: would people come, on say a Saturday or Sunday? Is Cambridge a suitable location? Is there a better time of year (university vacations at Easter and summer are better for accommodation)? And what would be the best way to structure things for people to get the most out of it? Obviously planning anything like this now is a bit tricky, but when things resume I’d be happy to look into it again if there is demand. I take the point about being colocated with a show, though. Would people find it easier to do two days together, or is a couple of day trips easier to manage? |
Alan Adams (2486) 1150 posts |
I’ve tripped over that while debugging on several occasions. It usually comes about when modifying code written years ago, without going through the calling sequence in detail. With my early Fortran background I tend to use I% and J% a lot for loop counters and the like (Fortran 77 treated variables beginning I to N as implicit integers, saving the need to declare them), and forgetting to declare them LOCAL causes some hard-to-trace behaviour. I’m not trying to justify it, just saying… |
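The failure mode described here – a shared loop counter that was never made LOCAL being clobbered by a called routine – has a loose analogue in most languages. A Python sketch (hypothetical names; `global` stands in for BBC BASIC’s default-global variables):

```python
# Analogue of forgetting LOCAL in a BBC BASIC procedure: the helper's
# loop counter is the very same variable the caller is iterating with.
counter = 0  # stands in for a global I%

def helper():
    """A routine whose author forgot to make the loop counter local."""
    global counter
    for counter in range(3):   # silently rebinds the shared counter
        pass

def caller():
    global counter
    counter = 10
    helper()                   # clobbers counter behind the caller's back
    return counter

print(caller())  # 2, not the 10 the caller set
```

In BASIC the fix is `LOCAL I%` at the top of the FN/PROC; here it would be an ordinary local variable inside `helper`.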
Alan Adams (2486) 1150 posts |
I did something similar using a piece of copper water pipe. While running towards the car, I could hear pieces of copper hitting it. I didn’t know about work-hardening then. In retrospect, adding flash powder in the hope of projecting a fireball wasn’t a great idea. (Flash powder is an explosive. Gunpowder can be a detonator.) |
Fred Graute (114) 645 posts |
The 4.70a13 test release is on the StrongED website: go to the download page and then click on the test release link. You’ll need to use the UK resources; the German resources are not fully up-to-date yet. The included AppBasic mode demonstrates the use of insertion templates as I’ve outlined above. The Dump mode has seen lots of updates and brings better syntax colouring, annotations (function names, CLib calls) and more. Have fun, and let me know what you think. |
Richard Walker (2090) 432 posts |
I will have to check out the test StrongED. I have used Visual Studio for years, and the code completion (well, really the built-in pop-up function prototypes and type definitions) and ease of ‘find all in files’ are life changing. I found trying to use regular StrongED and opening lots of c/h/StrongHelp files a bit of a grind at times. Now, the debugger… :) |
Michael Gerbracht (180) 104 posts |
@Theo Yes, it is difficult to guess how much demand there is for a RISC OS developer conference. But it might not only be interesting for people starting to program; it could also be a good occasion for experienced developers to meet and discuss things. Coming from Germany, a two-day event (either two days of development workshop, or mixed with a show) would suit me better, but that’s just me. @Fred Sounds great, I will test the new version soon (probably not today) and let you know what I think. |
Steffen Huber (91) 1958 posts |
I am always happy to join Eclipse bashing attempts, but startup time is really not an issue at all. On a mid-price laptop (Ryzen 7, SSD), my Eclipse with my private workspace, including 32 Java projects in 18 Git repos, with 12 source files opened and readily Sonar-Linted, starts in 16 seconds. If I could get an IDE with Eclipse qualities on RISC OS, I would happily wait 5 minutes for it to start. Proper SCM integration alone saves a lot of work and therefore time. |
Alan Adams (2486) 1150 posts |
TextSeek covers this requirement for me. |
Fred Graute (114) 645 posts |
“and ease of ‘find all in files’ are life changing”
I just use StrongED for this. Load all files, ensure the List-of-Windows (LoW) is open (c-L), then hide all texts (cs-H). The text(s) to work on are then made visible via the LoW. Having all files for a project loaded makes it very easy to jump to function definitions (even if in a different text), and to find all occurrences of function names and variables. |
Rick Murray (539) 13862 posts |
The Other One™ but same reasons. :-) Especially useful when tracing code that isn’t your own, you might look for X, then Y, then Z… and it is useful to have the results windows all available with click-to-jump links for all. |
Rick Murray (539) 13862 posts |
Today, perhaps not. I’m talking several years ago when computing power was less and things took longer.
Sure? There’s a reason I don’t turn on my PC much these days, and that reason is how long it takes to get itself going. Sure, XP shows a splash screen pretty quickly, and gets to a desktop rapidly… but that’s only half way into the boot sequence. The Start menu and the like is inactive a good while longer as everything loads, phones home, stops for a coffee break, and then decides whether or not it feels like doing something…
I believe unicorns are a paid extra. |
Colin (478) 2433 posts |
I use Find from the DDE, which puts the results in a throwback window. I don’t like putting a lot of files in StrongED – does anyone search a ROM distribution by dropping all the files into StrongED? |
Alan Adams (2486) 1150 posts |
There was a time at work where we used Dell desktops with Windows NT4 (no SP at that stage). The drill on arriving in the office was to switch the computer on, then fill the kettle, boil it and put the coffee etc in the cups. Now go back and wait for the login screen. Type in details, go back to the coffee and pour the water. Talk about last night’s TV, then back to the computer and wait. Total time 7 minutes! Remember that the standard method of fixing Windows problems is to reboot – yeah, right. Mind you, when I was working in primary schools it was worse. There was a strong drive to use laptops for everything, and they wanted everything locked down so that, for example, saved work was on the server, not local. This meant using Active Directory, which dumps a load of data to the computer during startup, and another load at login. Wireless networks at the time were 54Mbit/s at best, and with only three non-overlapping channels, a maximum of 3 access points in/close to a classroom, bandwidth was limited. The reality is that the laptops would not split across access points well, so when starting up, about 20 of them would use the same access point. 20 laptops sharing an effective bandwidth of around 30Mbit/s meant that startup and login time was around 20 minutes. With a primary lesson period typically being 25 minutes, it wasn’t great. Add to that a battery runtime of around 90 minutes, meaning that the laptops had to be returned to the charging rack AND PLUGGED IN during breaks. (Do you have a way to teach Year 3 pupils to find the power cable and plug it in when they just want to get to the playground?) At that point I retired, so I don’t know how/if that was resolved. |
Rick Murray (539) 13862 posts |
My brief time at a tech college (before moving), back in the early ‘90s. A really half-assed setup using a bunch of Nimbus machines (I think 286s?) hooked to a 10base2 network snaking around the building, running DOS and/or Windows 3.11. I don’t actually recall if the machines even had hard discs; it was all set to boot from the network. Yup. Two dozen PCs all trying to start up Windows at the same time on a rubbish network setup with an even more rubbish (flaky as hell) server. I lost count of the number of lessons where we’d be looking at OHP slides because our attempts to log in to DOS and start up TurboPascal for the assigned work would cause the server to crap itself and keel over dead. And that’s not even counting the people who <gasp> broke the wire </gasp> by unplugging one of the BNC connectors. Do that and instantly the network dies. Logically only everything downstream ought to die, but the server detected this somehow and shut down with a fault message, so the IT people would have to go and check behind every single machine. I never unplugged anything, but given that they revoked my access to the system “for playing games” on a day when I was absent (went to see Hugo Fiennes!), and sent me on a wild goose chase to try to get that sorted out, I certainly don’t have any sympathy with the incompetent twats. I should point out, I was using a crappy (salvaged from a skip) 10baseT hub and 10baseT cabling in my bedroom at a time when they were using dumb coax. Anyway, the only expectation one ought to have for educational IT providers (and staff? 1) is that they’ll charge well. As for knowing what they are doing, making sensible decisions, or coming up with solutions that don’t make things worse… yeah… not so much… as the above post demonstrates. :-) 1 I’ve come across very few dedicated IT staff who knew what they were doing. I’ve come across very few dedicated IT staff, period. |
Rick Murray (539) 13862 posts |
The entire set of ROM sources? No. However things can often be narrowed down to areas. So “all the Wimp sources” or “FileSwitch’s many parts” or “the entire kernel”. I’ve never used Find…<cough>never actually noticed it before</cough>… |
Chris Johns (8262) 242 posts |
That was how 10base2 worked. If you broke the cable everything died, I guess because of reflections – or at least systems detecting the lack of termination and saying “no”. If you were really evil you could shove a bit of your chewing gum wrapper into one of the T-pieces to short it out. I can neither confirm nor deny whether that was something I ever did. It really wasn’t suitable for a school setup. Thankfully 10baseT came along. |
Rick Murray (539) 13862 posts |
It seems as if http://sendiri.co.uk/udp.html hasn’t really gone anywhere, so… we’re stuck with DDT.
Exactly. Which makes debugging module code somewhat tricky. That said, the “fix” probably isn’t a better debugger, but to stop running module code with kernel level privileges. It’s a bit dumb that forgetting to unstack one single register in SVC mode code will freeze the machine. Crashing out the errant module is one thing, but crashing out the OS?
Which is probably the most high-tech and trendy RISC OS has ever been this millennium!
Blame that on FontManager. You can’t take over the world until you’re able to tell everybody who is in charge…
That would be a pretty standard feature of a debugger – “watchpoints” to interrupt the session when a variable or memory location changes. I suspect this might actually be broken in DDT as watching a memory location didn’t appear to do anything even though the contents changed; but since it died shortly afterwards, I gave up and decided to just use a lot of spewage to DADebug to track down the problem. Not a lot has changed in nearly forty years (fx: nostalgic sigh as the bagpipes part of John Farnham’s “You’re The Voice” kicks in) |
Rick Murray (539) 13862 posts |
Ah, yes. I forgot about the terminators at each end.
Oh, no, that’s just moderately evil. Really evil is soldering a piezo cooker lighter wand to a BNC socket and sticking that into the network and clicking it a bunch of times.
Oh, I can categorically state that I never zapped the network (though I will deny knowing who did) as shoving powerful pulses of several hundred volts directly into the signal wire had the expected effect. |
Alan Robertson (52) 420 posts |
Two of the main points that developers seem to want on RISC OS are
To add to my own understanding and perhaps help this conversation, what is wrong with RISC OS’ current debugging tools, and is it an app issue or is it an OS issue? i.e. How would the problem be fixed? |
Jeffrey Lee (213) 6048 posts |
It’s both an app & OS issue.
|
Charles Ferguson (8243) 429 posts |
[apologies for long reply; I’ve spent a lot of time working towards some of these goals, and have written a lot about it in the past, so this collects together some of these thoughts]
There are two points that you stated as needs, and then asked about just one of them. Since an IDE (Integrated Development Environment) incorporates a debugging environment 1, this becomes just a single question really, but a very wide one. I’m sure that people will have different views on this, but for an IDE to be worthwhile, there are a number of things that it needs to be able to do. - Being able to edit code (source or binary). I think that’s pretty much a given. There are surely more features that you can add, and things that you can say are unnecessary in that list, but let’s look at how you might want to make this work and we can then find problems in how you might approach this. For a start, there are many ways to skin a cat. That is, an IDE can be built in many ways. It can be a single monolithic application that comprises all the parts that you need. It can be lots of small things that are gathered together into a whole. Or it can be somewhere in between. IDEs that I used back in DOS and Windows days tended toward (or were) the monolithic side. Many environments that I’ve seen on Linux have been somewhere in the middle – leveraging large amounts of interactions and control of smaller tools. The important thing – for the purposes of the goal – is that the IDE integrate those parts to be more than they were and more useful to the user. It is the RISC OS way to be modular (maybe not in the way that Unix-like systems are, but it still holds). The principle of having applications that do things well, and which interact with others on the system at the user’s will, works both in favour of and against an integrated environment. Do you decide that you want to allow the user to use their editor of choice, or do you force an editor on them? The same goes for your resources – Sprites, templates, resource files, etc.
On other systems, you might just link in a bunch of good tools for that and you get ‘a text editor’ or ‘a sprite editor’ in your monolithic application. This makes development much easier because you just dump that code in and all’s well. Probably it just runs on another thread and you don’t need to worry except to interact with it. Do that on RISC OS and you’ve hit a problem in that threads are not easy, and the cooperative environment goes against it (q.v. Niall’s preemptive Wimp that did an admirable job of showing how it could be done at a process level). That’s not generally the way you do things on RISC OS (although RISC_OSLib is an example where you could just do that). Another alternative is to work out a way to interact with the user’s editor so that they get their preferred environment. Clare’s PCA was a great example of that, which wasn’t picked up by people. Computer Concepts’ OLE system tried to do parts of that, and is used within more things (DataPower, the Impression suite and associated applications being the things that spring to mind). The External Edit protocol was less ambitious and more focused on interactions with objects than with their presentation. External Edit is supported by Zap and StrongED for text files, and used by applications like Messenger 2, and other mail clients 3. I added External Edit support to my template editor 4, and later to !ResEd 5. I planned to add it to !Paint as well 6. This work was specifically to allow for someone to use those facilities to build tools which bring these resource editors together. Those parts are still missing the additional control that you might want in an integrated environment (6 talks a little about automation interfaces that might allow you to direct the application to do things, in a more structured way than the limited Message_AppControl that currently exists).
With interfaces between the constituent applications to make them work better together, the user could feel that they were using a single application. Consider !Configure (Select specifically, but to a lesser degree, RISCOS 3.8 derivatives). The individual parts of !Configure are applications and they work together to try to appear to be a single application. Consider !ResEd. The object editors within !ResEd are actually distinct applications and it can be extended relatively easily (relative to what, I shall leave as a separate discussion). In the Select stream, this went further, with individual gadgets being loaded dynamically and linked to the !ResEdWindow application at runtime, so that it could be extended without needing to update the whole of !ResEd itself (and allowing user upgrades) 5. There’s still a lot missing there – being able to name Toolbox Resource File objects is possible, but not gadgets, and not custom events. This means that it’s not possible to (for example) use the Res editing tool to click on a gadget (or a key code, or a menu entry, or object), select a named event that you defined earlier, and then create a stub function in your editor to act on that event. Or to list all the events used by a Resource, or find unattached events. !ResTest has the ability to be launched by !ResEd, and so there’s a little interaction there, but although it has the internal ability to understand special messages, that isn’t shared with the !ResEd application, and you can’t jump from an event in its log to the object that created it in !ResEd. So there’s some things missing there, but they’re still monolithic. I began work towards providing introspection for the Toolbox modules, and quickly realised that although it was good for Toolbox it could be generally useful. If you just queried the module that was loaded what its interfaces were, then you could generate stub code to call them, or those events that I was just discussing. 
The toolbox window gadgets already know how to render themselves for the purposes of drawing !ResEd windows, so why not incorporate the information about how to call the gadgets and what outputs they have, as well? At the very least you could get short help blobs in your editor to tell you how the functions worked. That’s some of the cases of how the editor systems could be integrated, but what about other parts of the system? If you have a separate editor (or even if you have a built-in editor) how do you get from the user choosing a line, marking it for a breakpoint, and then executing the code (or continuing the execution), or finding out for the current line of code what the variable context is? The solution used by !DDT was to ignore the editor part, just be a debugger and provide you with a view on to the source, which you could set breakpoints and watchpoints on. This means that it has its own source viewing system. Why does it do that? Well, take a look at the source and cry for a little bit. It essentially looks at the vector chains, strips out all claimants but the last, and uses those for the rendering system. It captures the screen at the point of interrupt and overlays its own content, redrawing parts as necessary. As you resume, the environment and screen is restored and the execution is continued 18. A system of patching of instructions is used to ensure that the execution happens as expected. I forget how watchpoints are implemented – probably it’s using the page mapping system to mark the pages as aborting and then emulating if they’re not the addresses that are of interest. These days it’d be better managed through the AbortTrapping interface (the CPU level watchpoints only cover you so much). Of course, whilst it’s executing, the desktop (and everything else) is stalled and you only have a single tasking debugger application.
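The instruction-patching scheme mentioned above can be sketched abstractly (a toy simulation only, not DDT’s actual code; “memory” is a list of named instructions and BKPT stands in for a breakpoint opcode):

```python
# Toy model of software breakpoints via instruction patching, the scheme
# DDT-style debuggers use in outline: save the original instruction,
# write a trap in its place, and restore it when the trap fires.
BKPT = "BKPT"

class ToyDebugger:
    def __init__(self, memory):
        self.memory = memory          # the debuggee's code
        self.saved = {}               # address -> original instruction

    def set_breakpoint(self, addr):
        self.saved[addr] = self.memory[addr]   # remember the real opcode
        self.memory[addr] = BKPT               # patch in a trap

    def run(self, start=0):
        """Execute until a breakpoint traps; return (addr, trace)."""
        trace = []
        pc = start
        while pc < len(self.memory):
            insn = self.memory[pc]
            if insn == BKPT:
                # Trap: put the original instruction back so execution
                # can later resume from here, and hand control back.
                self.memory[pc] = self.saved.pop(pc)
                return pc, trace
            trace.append(insn)
            pc += 1
        return None, trace

dbg = ToyDebugger(["MOV", "ADD", "SUB", "BL", "MOV"])
dbg.set_breakpoint(2)
addr, trace = dbg.run()
print(addr, trace)   # stops at 2 having executed ["MOV", "ADD"]
```

A real debugger additionally has to flush caches, cope with the patched word being read by the program itself, and single-step over the restored instruction before re-arming the breakpoint – which is where much of the pain lives.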
Everything you see in DDT is rendered by DDT through the OS vectored calls (which is why it doesn’t work on Select; there’s more sitting on those vectors than the Kernel, that needs to be there and if you take it away, the system stops working). This is why it looks like an early paleolithic desktop with hand drawn icons and windows, and system font text. The reason that DDT’s like that is that RISC OS is a single tasking system, that fakes multiple tasks by utilising known switch points – which we call co-operative multitasking, but that’s really just putting a fancy name on a bit of a bodge 7. The lack of a constrained process model means that the current user mode application is, essentially, in full control of the system. If it doesn’t want to return to the OS, it doesn’t have to. That’s powerful, but it makes many things difficult – one of which is debugging. You cannot easily suspend the application and allow the user to do things in other applications, whilst you work out what’s going on. In a debugger you want to be able to look at the application and see what’s happening, maybe poke a few things, check your documentation, make tweaks to the code, update breakpoints, and then continue. If you make a lot of assumptions about the type of application you can debug, like only debugging a single tasking application, you can get a little distance into making this safe in the desktop. We may need to consider that your ‘single tasking application’ could itself have installed vector handlers, maybe pointed the sound system into its own application space, or even changed the CPU handlers to point to different parts of the system. Maybe it tweaked the page table mappings for application space (AMB is going to have a bad day if you go back to the desktop). Maybe it’s in the middle of a VDU sequence – you need to capture all the VDU state, and not just the bits that are held by the Sprite redirection context. 
Maybe it’s trying to use a call that came from a module (application upcall or event handler, let’s say) which comes from a module which isn’t reentrant (and how would you know?). And if it came from a module, the SVC stack isn’t going to be empty, so you’re going to want to deal with that too. Suddenly you’re feeling a lot like you’re in the middle of dealing with what TaskWindow does. Because it has to deal with all this stuff to make it look like you’re running a tool in a preemptive environment, and it does a lot of bad stuff to make that happen 8. If at any point the application, or your debugger, does something that’s not expected, you kill the whole system. This is probably the biggest no-no of a debugger. Having your debugging tool kill the system it’s trying to debug, or be unable to trap the failure case of the application because the system died, is a terrible failing. So you’ve got a number of things here that are a problem. There’s a lack of process model that means that the application is in control, not the debugger. There’s a whole load of context that the system holds that you have to preserve, subvert, or otherwise bypass in order to interrupt the debuggee. Because DDT is having to handle this, it doesn’t offer the ability to interact with anything else – it just says that’s not the problem it’s trying to solve. Clearly a limited debugger is better than none at all. But that’s not what you want for an IDE. So something needs to happen to help with that: introducing process isolation and the ability to identify which parts of an application are able to be suspended. Maybe you can’t catch everything – a case where the sound system points into application space is a hard one to deal with – but maybe you can hit many of them by providing OS-level (not hacks, but actual OS-level) support and design to make that happen.
If you have a way that a part of the system can say “I’m debugging this other part”, and the OS can be aware of it, then there can be a controlled and managed way of dealing with that situation. The general ‘free for all’ with applications able to do whatever they want gets in the way of that. Introducing some restrictions for if you want to be able to be debugged may be the way to go. If you have the ability for a part of the system to act as a debugger, then you should be able to have another part of the system able to control that debugger. In unix-world this is ‘ptrace’, utilised by many tools, and at a higher level GDB, which has protocols for controlling the debugging. This is what makes it possible for those systems to debug applications 10. With a form of multi process model (even if it was just a ‘debuggee/everyone else’ model), it might be possible to provide an integration of the debugger to an external environment. I’m completely ignoring the issue of debugging modules here. With a better process model you might be able to handle that better, but it’s better (IMO) to isolate the issue of driver and support out of the realm of an interactive debugger. As I’ve mentioned many times before, building your code so that it can run in user mode, and testing it there means that you can run it in a debugger before you put it into the privileged environment, and you can crash it in ways that you can recover from. Some of this has been discussed in the future direction rambles 9. In lieu of a debugger, it is useful to be able to obtain diagnostics from a failed application. These commonly come in the form of backtraces, and possibly contextual information from the application. Connecting the execution failure and the IDE together is vital to understand what’s going on if you don’t have a debugger. 
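The structured-diagnostics idea – capture a backtrace plus context at the point of failure, in a form a separate tool can pick up – can be sketched in Python (field names here are invented for illustration; this is not the Select Diagnostic Dump format):

```python
import json
import traceback

def capture_dump(exc):
    """Build a structured 'diagnostic dump' for a failed call: a
    backtrace plus a little context, serialised so a separate tool
    (an IDE, say) can consume it later."""
    frames = [{"file": f.filename, "line": f.lineno, "function": f.name}
              for f in traceback.extract_tb(exc.__traceback__)]
    return json.dumps({
        "error": type(exc).__name__,
        "message": str(exc),
        "backtrace": frames,
    })

def broken():
    return 1 / 0   # the failure we want diagnostics for

try:
    broken()
except ZeroDivisionError as e:
    dump = capture_dump(e)
    print(dump)
```

The point is the closed loop: because the dump records file, line and function per frame, an editor receiving it can jump the user straight to the offending code, without a live debugger ever being attached.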
Debug information usually gives you enough to be able to connect the location of the failure to the part of the code that failed, even if you don’t have an actual core dump (non-RISC OS term for a capture of the program state at the time of failure). The systems built into Select provide for an equivalent of core dumps (Diagnostic Dumps, styled after the Windows MiniDumps 11), which incorporate structured information about the state, and which are raised as services out of the SCL (or other systems). This allows the debugger (or IDE) to capture this information from the service and then handle it differently 12. I seem to remember reading something about a reinvention of this system in parts of the current RISC OS 5. Anyhow, closing the loop from ‘thing has broken’ to ‘user can edit the offending code’ is doable without major changes to the way that the OS functions, if an application were to wish to pick up the information. Moving on to other IDE features… linting, code completion, type hinting, help on types and functions. Commonly on other systems linting, and sometimes code completion and help, is offloaded to a separate process which does the work. Possibly even on a separate machine (the Language Server 14 system which is promoted by the VSCode people is able to be remote). The editor fires off requests for linting whilst you’re editing, and the processing runs in the background and you’re oblivious to it until a second or two later it tells you about the bug you just fixed whilst it was doing its thing 13. The problem with doing this on RISC OS is… yes, we don’t have a good process model to manage this. Yes, you can fire off a TaskWindow to go and do stuff, but it’s not nice, and if that job turns out to be redundant, you’ve got to kill it and play Russian roulette with the system, because TaskWindow just isn’t great and sometimes things go bad if you kill the applications at the wrong time.
Not to mention the fact that if you end up firing off a bunch of these because you’re editing other applications, the system isn’t really designed to handle them in a balanced way. You can’t ‘nice’ a TaskWindow job, or pause it because you’ve run out of memory. Of course, if you’re allowing the user to provide their own editor, then they would have to support this system as well, so that they can get those hints and code completion information. I touched on the issue of help, and of getting introspection information on the system, but there’s larger information about APIs, which is also useful to integrate. StrongHelp provided part of this solution in the past, and it was an awesome system – being able to hit a key and get exactly the API documentation I needed. But things have moved on a little. Mostly people use web browsers to look up API docs, so it’s possible there’s no need for a dedicated help integration. That said, Dash provides awesome structured documentation linked from a keypress 15, and its documentation sets are where I was (just last year) trying to take the PRM-in-XML work, before I got distracted by another project. Still, it’s an area to consider as part of the integrated environment. I mentioned being able to select options for builds and targets. Those are all important, but they’re entirely application based, and don’t need any support from others. Maybe having a way to define linkage options and the library names would be useful, and doing so in a structured way even more so. I’m thinking of pkgconfig, or conan 16 as examples of how such information can be packaged and utilised as part of a build environment. In the bulleted list I skipped over a little one, which I’ll just touch on now. Being able to get back messages about problems in the build in the IDE. Throwback, as we know it on RISC OS, is commonly implemented on other systems as a set of rules for parsing the output from the toolchain, within the IDE or editor.
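Those “parsing rules” can be surprisingly small. A rough sketch of turning toolchain output into throwback-style records (this assumes gcc/clang-style `file:line:col: kind: message` diagnostics; the DDE tools format their output differently):

```python
import re

# gcc/clang-style diagnostics: "file:line:col: severity: message".
DIAG = re.compile(r"^(?P<file>[^:\s]+):(?P<line>\d+):(?:(?P<col>\d+):)?\s*"
                  r"(?P<kind>error|warning|note):\s*(?P<msg>.*)$")

def parse_diagnostics(build_output):
    """Turn raw toolchain output into throwback-style records that an
    editor could present as a clickable list of file/line/message."""
    records = []
    for line in build_output.splitlines():
        m = DIAG.match(line)
        if m:
            records.append({
                "file": m.group("file"),
                "line": int(m.group("line")),
                "kind": m.group("kind"),
                "message": m.group("msg"),
            })
    return records

output = """\
main.c:42:7: warning: unused variable 'tmp'
main.c:57:3: error: expected ';' before 'return'
"""
for rec in parse_diagnostics(output):
    print(rec["file"], rec["line"], rec["kind"], rec["message"])
```

This is the weakness of the approach, too: every tool needs its own pattern, whereas RISC OS Throwback hands the editor structured data directly.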
Whilst the solution used by RISC OS of invasively adding system calls to notify the environment of problems in a structured way, within the build tools, is a little icky, its effectiveness is amazing 17. I’ll not say anything about integrations with source control, review systems, or deployment, because I suspect there’s very little that can be done about them except to make the tooling necessary for them available (source control libraries and tools, libraries for accessing review systems like GitHub, and interfaces for deploying to artifact servers). 1 A debugger can exist apart from the IDE, but almost by definition the IDE is expected to integrate the tools you need to develop, so I suspect that this is one of those larger issues. 2 IIRC; I’m pretty sure that Messenger also had a built-in editor, but I believe it could work with External Edit. 3 My own Fido mail reader used External Edit as far back as ’94 I think. 4 !FormEdExt; not really mine, just another in the long line of hacks on the original FormEd. 5 http://gerph.org/riscos/ramble/buildtools.html#ResEd 6 http://gerph.org/riscos/ramble/futuredirection-desktop.html#app_paint_0 – also includes some other discussion of how the desktop might be extended which works well if you’re trying to integrate applications together. 7 Oh it’s an awesome bodge… when it was created back in the ’90s, but it hobbles a lot of the things in the system. 8 GraphTask is even more impressive, taking TaskWindow’s job to greater levels and closer to what you’d /want/ from a debugger. But it too will be more hacks on to a system that wasn’t designed for it. (no offense meant – it does a great job, but it’s still trying to fight a bad system). 9 http://gerph.org/riscos/ramble/futuredirection-resilience.html talks about process isolation; http://gerph.org/riscos/ramble/futuredirection.html#Diagnostics talks a little about !DDT’s failings.
10 Together with not having mixed user and privileged mode state, having managed process models, and defined interfaces to the Kernel. 11 Somehow those Windows MiniDumps seemed to always fail entirely to be ‘Mini’… but I digress. 12 http://gerph.org/riscos/ramble/testingdebugging-2.html#BTSBTSDumpDiagnosticDump_Applicationdiagnosticdumps – talks about exactly this case, in amongst discussion about how it was integrated to SCL and what it means. 13 Hopefully that’s not just me; linting live is great, but tends to be more frustrating than useful. 15 https://kapeli.com/dash; I believe that Zeal offers similar functionality on Linux and I used to use Velocity on Windows. 17 Replacing the Throwback handler with an HTTP response inside JFPatch-as-a-service, which gave me the same experience but in the browser, was such a lovely bit of nostalgia when it worked. 18 http://gerph.org/riscos/ramble/testingdebugging-3.html#DDT talks a little about how it does things, and some of the extensions that I began writing. |
Michael Gerbracht (180) 104 posts |
Regarding the debugging, I wonder if it would be possible to utilise a second CPU core to run a second instance of RISC OS (maybe optimised for the debugging process) and watch/control it from the main instance? Sorry, I don’t know whether this is in any way possible – just an idea to use those other CPU cores for something meaningful ;-) |
David Feugey (2125) 2709 posts |
Definitely a good idea. Michael has made it happen on one platform: there is another thread where he said that RISC OS could run on the other cores. A good solution to protect some very sensitive/big processes (web browser?), for light threads, or for debugging an application with full control of the targeted system. Jeffrey provided some API to access the other cores of the Pi. I don’t know if it could be a start for this. But an AMP system could be very interesting for RISC OS, as it will not break its “CMT on mono core kernel” behaviour. SMP is cool, but (almost) everyone knows something about giant locks and the problems of making an SMP-compliant graphical interface. Main processes on the main core and selected processes or threads on the others could be the way to go. And it would be cool to get xx hardware VMs, RISC OS or NetBSD. I hope Michael will follow this path for his RK3399 port. |
Jeffrey Lee (213) 6048 posts |
Yes, we’ve had Debugger Exception Dumps for a few years now. It’s probably not quite as comprehensive or user-friendly as Diagnostic Dumps, but it does a pretty good job of annotating stack dumps without the need to add extra backtrace structures to the stack itself.
It would be possible, but it would be a lot of work to implement the virtualised I/O devices, and you’d end up with a situation where your debugging environment isn’t the same as the live environment that users will be running the code in. And rather obviously, it won’t work with single-core machines! (Raspberry Pi 1, RPCEmu, etc.) If we’re looking for a universal solution, extending RISC OS to allow in-system debugging is the only solution. |
Steve Drain (222) 1620 posts |
The lack of names for Window gadgets, Menu entries and custom (Other) events has bugged me for all of this century. With BASIC and the Oobi library I now have a working method of names for components, using the Help message field. Such names are also unique to their parent object, so an attribute might be manipulated thus: text$=FN0 MyWindow#_InputField.Value
I do not have a method for independent names for Other events, but they are registered against unique functions of the form: FNMyWindow_Option1_ValueChanged .
The numbering of Other events can also be automatic, which is a further simplification. This means that a Res file can be written so that only the necessary functions have to be included in the !RunImage, and to help with this I have modified the Messages file of ResEd to give guidance. Oobi is still quite fragile and barely documented, but it gives me an extended project to work on while confined at home. ;-) |