OS_ClaimProcessorVector again
Stuart Swales (8827) 1357 posts |
Those are all very good reasons why we didn’t do that! |
Colin Ferris (399) 1814 posts |
It would be nice to have something like Druck’s stats programs that reported calls to the CLib – for each program that uses it. |
David J. Ruck (33) 1635 posts |
Interesting… Pretty easy to do for new tasks starting: you intercept the SCL initialise SWI and substitute your own shims to count calls. More difficult if you want to capture things already running – you would have to sniff out all the AIF tasks and find their SCL jump tables. You couldn’t do it for ROM tasks unless you did a lot of horrible ROM patching. |
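A minimal sketch of what the bookkeeping half of such a shim could look like, in C. The interception itself (saving each stub jump-table entry at SCL initialisation and pointing it at a small assembler veneer) is not shown; the table size and the scl_count/scl_report names are illustrative assumptions, not taken from any existing tool.

    /* Hypothetical bookkeeping for an SCL call-counting shim.  Assumes a
       tiny assembler veneer per jump-table slot calls scl_count() with
       its slot index and then branches to the saved original entry. */
    #include <stdio.h>

    #define SCL_SLOTS 256                 /* assumed jump-table size */

    typedef struct {
        void         *original;           /* saved jump-table entry        */
        unsigned int  count;              /* calls seen through the veneer */
    } scl_slot;

    static scl_slot slots[SCL_SLOTS];

    /* Called from the veneer before it branches to the real routine. */
    void scl_count(int index)
    {
        if (index >= 0 && index < SCL_SLOTS)
            slots[index].count++;
    }

    /* Dump the non-zero counters, e.g. when the task exits. */
    void scl_report(void)
    {
        for (int i = 0; i < SCL_SLOTS; i++)
            if (slots[i].count != 0)
                printf("stub entry %3d: %u calls\n", i, slots[i].count);
    }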
Rick Murray (539) 13840 posts |
I can’t help but think that analysing CLib might be overcomplicating things. The core of the library is in assembler, so there’s not a lot of scope for optimisation there. Instead, perhaps measure how long the system has been running (as of the start of the program), then how long the application itself is running for (from one Wimp poll to the next). This should give a count of how much time the app gets relative to all the other apps. Maybe there’s some scope for fine-tuning how often polling happens? Also, if it isn’t too onerous, track how much time is spent in filesystem calls. The filesystem is not especially fast. Perhaps a speed boost could be achieved by caching more things in memory? |
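A minimal sketch of that measurement, assuming a normal single-tasking poll loop and using OS_ReadMonotonicTime (1 cs resolution, so the figures are coarse); handle_event() and the variable names are placeholders, not code from any real application.

    #include "kernel.h"
    #include "swis.h"

    extern void handle_event(int reason, void *block);   /* placeholder dispatcher */

    static unsigned int busy_cs;        /* centiseconds spent outside Wimp_Poll */
    static unsigned int session_start;  /* monotonic time when the loop began   */

    static unsigned int now_cs(void)
    {
        int t = 0;
        _swix(OS_ReadMonotonicTime, _OUT(0), &t);
        return (unsigned int) t;
    }

    void poll_loop(void)
    {
        char block[256];
        int  reason;
        unsigned int resumed;

        session_start = resumed = now_cs();

        for (;;)
        {
            busy_cs += now_cs() - resumed;           /* our share so far     */

            _swix(Wimp_Poll, _IN(0)|_IN(1)|_OUT(0),
                  0, block, &reason);                /* other tasks run here */

            resumed = now_cs();
            handle_event(reason, block);             /* app does its work    */
        }
    }

    /* busy_cs / (now_cs() - session_start) approximates the share of
       wall-clock time this task gets relative to everything else.   */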
Rick Murray (539) 13840 posts |
Easy for you, perhaps. ;-) |
David J. Ruck (33) 1635 posts |
Easy for me 30 years ago, these days very rusty on assembler modules. |
Stuart Swales (8827) 1357 posts |
I would wager that Iris isn’t using the SCL but a whole wodge of dynamically loaded .so modules – can that mechanism co-exist with the SCL? Dunno. Anyhow, unless somebody whips out a profiler we are guessing… How good is modern gcc? I always found gcc-generated ARM code to be pretty poor. |
Martin Avison (27) 1494 posts |
Which is exactly what TaskUsage does to measure processor usage for each task. Usage, I think, may use a different method. |
Stefan Fröhling (7826) 167 posts |
@Stuart Swales
Can you be more specific? If there is really bad code generation in GCC then we should try to fix it. That means putting a working group on the task, as it will affect more and more applications, not only Iris. And Iris is one of the most important applications for RISC OS in decades, as using a browser is 30% to 90% of the time spent on “real” OSes. |
David J. Ruck (33) 1635 posts |
Back when I was writing ARMalyser in the early 2000s, ARM GCC 3 code wasn’t great – you could spot its nastiness a mile off, and Norcroft ran rings around it performance-wise. These days gcc code looks and performs a lot better, and Norcroft hasn’t really moved on. |
Stuart Swales (8827) 1357 posts |
Thanks, that’s good to know. |
Theo Markettos (89) 919 posts |
Iris is compiled with GCC 10, so I would expect the code generation to be a lot better than 1990s GCC. It is, after all, what the Linux kernel and much of userland is compiled with. Has anyone done a SWI profiler? Record the time on entry and exit to every SWI and build a histogram of where time is spent. I suppose that would require access to a high-resolution timer (see the ARM generic timer thread), ideally of nanosecond resolution (since if you’re accumulating lots of short intervals, the rounding errors of a 1 MHz timer will build up rapidly). But for the time being the HAL timer interface should suffice. |
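A sketch of the histogram side of such a SWI profiler. It assumes the entry/exit hooks are installed elsewhere (for instance by a module claiming the SWI vector via OS_ClaimProcessorVector) and that read_timer() wraps whatever high-resolution source is available – the HAL timer for now, the generic timer later. The names, table size and single-level nesting are all illustrative assumptions.

    #include <stdio.h>
    #include <stdint.h>

    #define MAX_SWI 0x800                  /* only track SWIs &0-&7FF here */

    static uint64_t swi_ticks[MAX_SWI];    /* accumulated time per SWI  */
    static uint32_t swi_calls[MAX_SWI];    /* call count per SWI        */

    extern uint64_t read_timer(void);      /* placeholder timer wrapper */

    static uint64_t entry_tick;            /* one nesting level only    */

    void profile_enter(uint32_t swi_no)
    {
        (void) swi_no;
        entry_tick = read_timer();
    }

    void profile_exit(uint32_t swi_no)
    {
        uint32_t n = swi_no & ~0x20000u;   /* fold X SWIs onto the base */
        if (n < MAX_SWI)
        {
            swi_ticks[n] += read_timer() - entry_tick;
            swi_calls[n] += 1;
        }
    }

    void profile_dump(void)
    {
        for (uint32_t n = 0; n < MAX_SWI; n++)
            if (swi_calls[n] != 0)
                printf("SWI &%03X: %u calls, %llu ticks\n",
                       (unsigned) n, swi_calls[n],
                       (unsigned long long) swi_ticks[n]);
    }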
Peter Howkins (211) 236 posts |
In 1987 a crack coding unit was sent to a software factory for a crime they didn’t commit. These men promptly escaped from a maximum security office to the Cambridge underground. Today, still wanted by the userbase, they survive as coders of fortune. If you have a problem, if no one else can help, and if you can find them, maybe you can hire the R-Team. pew pew pew pew pew pew |
Martin Avison (27) 1494 posts |
How about Druck’s SWIstat? |
David J. Ruck (33) 1635 posts |
SWIstat counts but doesn’t time calls (same for SERVstat and VECstat), as most calls would take less than the timer resolution. APPstat both counts and times applications’ handling of Wimp reason codes, which generally take longer. It could be done with better timers, but I daren’t touch that code these days. |
Stefan Fröhling (7826) 167 posts |
@ Jon Abbott
This sounds like a good idea to make a clean cut. And with multi-threading as a major feature there is a good reason to move RISC OS to version 7. |
Rick Murray (539) 13840 posts |
I think the main stumbling block is that a lot of people use emulation, which is basically a RiscPC class machine. |
Steve Pampling (1551) 8170 posts |
I think that particular stumbling block fell into place in the RO3.7/4.02 era; part of the argument for not needing to go 32-bit was that, although the hardware was drying up¹, you could emulate 26-bit happily on a PC.
¹ and indeed dried up – hence RO5.0 |
Stefan Fröhling (7826) 167 posts |
The main stumbling block is that the current user base of RISC OS cannot support the survival of RISC OS. |
David J. Ruck (33) 1635 posts |
That’s been the case since at least 1997
Completely agree
There I disagree: the vast majority of emulator users are doing so to get 100% compatibility with old machines, and new incompatible features are the last thing they want. |
Clive Semmens (2335) 3276 posts |
Don’t understand this. The software might not exist, but given the software, existing Pis can do multithreading and have decent GPUs, and don’t cost that much. |
Rick Murray (539) 13840 posts |
Well, then they can’t complain if RISC OS stops being developed for them.
Again with the “who?”. Modern machines (I’m thinking host PCs) are powerful enough that we don’t need to be stuck with emulations of quarter century old kit. As has been previously discussed, it may suffice these days to have a thin layer so one isn’t really “emulating” anything other than an ARM core. Just enough to actually run RISC OS, passing everything else to the host.
That price is probably about right for a basic Pi setup if one isn’t imaginative and doesn’t recycle kit from previous machines/owners. Me? I bought the Pi, a VGA adapter, the Vonets WiFi gizmo, and a power brick. Oh, and a case to put the Pi in. The monitor? Boot sale. The keyboard? Previous machine. The mouse? Rescued from a bin at work and lovingly repaired. Outlay? Maybe €60ish? But not all at the same time, as I had the “disposable cash”. You can knock off a little chunk if you can do wired networking. Knock off another little chunk if your salvaged monitor does HDMI natively. Where there’s a will, there’s a way. |
Clive Semmens (2335) 3276 posts |
For me, all I needed was the Pi, an SD card, and an additional HDMI lead. Everything else was either salvaged from old systems, or shared with the Mac (the monitor). I’ll probably spend another £54 for a 4GB Pi4 soon, and I’ll need a suitable HDMI lead and another SD (micro)card. I’ll probably buy some thermally conductive paste to put a heat sink or three on it – plenty of old heat sinks in the scrap box, ready to be cut to size. |
Paolo Fabio Zaino (28) 1882 posts |
From GCC 6/7 things started to get a lot better for ARM; GCC 10 can do a better job than Norcroft on modern ARM (say ARMv7/ARMv8). The other advantages when using GCC 10 are a more modern C++, and that GAS is far better documented than OBJAsm is (not a small detail).
Agreed.
The retro scene is happy playing with the old RISC OS; there seems to be no need for new features (if anything, maybe just some bug fixing at most). Emulation is mostly to support old software on new hardware/OS, so yes, at first look it may seem that there is no need for emulators of newer hardware (let’s remember that an emulator is not a “RISC OS emulator”, it’s a “RiscPC emulator” or a “Raspberry Pi emulator”). However, what about the following two use cases? a) Some people are running an emulator to code on the latest RISC OS from their Intel/AMD machines. |
Steve Pampling (1551) 8170 posts |
I’m sure a lot of people would like that, but as Rick said: “Who?”
What Paolo said +1 |