Cooperative Multitasking
Paolo Fabio Zaino (28) 1882 posts |
As I already said, no programmer is going to make the effort to do a proper port today. So what would a solution be, then? I mean, if there is no solution, then you seem to implicitly justify what Druck is trying to say, and with which I agree…
In the 90s most of the code available to us was much simpler than it is nowadays, and that made it easy for most developers to port things or rewrite stuff from the ground up in many cases. The OSes were simpler too, so the porting process was intrinsically easier. Modern applications have far more features: an internet-orientated nature, a deeply multimedia nature, loads of different formats and file types (MIME types nowadays), the assumption that the OS takes care of a lot of details for the developer, and that he/she has a ton of libraries available to build his/her own applications… in other words, we have moved on from the 90s quite a bit…
This is the part where we agree…
This is where you seem to go off again… What would these dynamic linking methods be? AIF has no native support for DLLs; it was added as an R&D project and never really made it through to the RISC OS sources… DLLs also introduce another issue (well, according to what a good portion of the community constantly reports): an increase in OS loading time OR in application loading time. So, for example:
This is a typical engineering situation where we need to make a compromise. That would be true even if we finished the R&D and added DLLs to AIF, so this exact problem would stand for AIF too, and in the AIF case we do not get the other advantages that ELF brings… IMHO it’s probably time to learn how to make compromises… but as always, this is my 0.5c |
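For anyone wondering what run-time dynamic linking actually looks like in practice, here is a minimal sketch using the POSIX dlopen interface found on ELF-based systems. The library name libplugin.so and the symbol plugin_init are made up purely for illustration, and nothing like this exists natively for AIF, which is exactly the point under discussion:

/* Minimal sketch of run-time dynamic linking on an ELF/POSIX system.
 * "libplugin.so" and "plugin_init" are hypothetical names used only
 * for illustration. Build with: cc demo.c -ldl
 */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* Load the shared object at run time rather than at link time. */
    void *handle = dlopen("libplugin.so", RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up a symbol exported by the library. */
    int (*plugin_init)(void) = (int (*)(void)) dlsym(handle, "plugin_init");
    if (plugin_init != NULL)
        printf("plugin_init returned %d\n", plugin_init());

    dlclose(handle);
    return 0;
}

The trade-off mentioned above is visible here: the cost of locating and loading libplugin.so is paid either when the OS boots (if it pre-loads shared libraries) or when the application starts, which is exactly the OS-loading-time versus application-loading-time compromise.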
Paolo Fabio Zaino (28) 1882 posts |
@ Rick
:) Sure I will, and you do the same when an application goes into an infinite loop, the hourglass stays up forever, and the user has no idea what’s going on :D P.S. I am not pushing for PMT; I mentioned at the beginning that I am OK with CMT… the reason for the details on PMT is, however, the set of FACTS that drove the rest of the world’s shift from CMT to PMT. |
Chris Evans (457) 1614 posts |
O.k. I knew: Google told me: But what is: |
Charlotte Benton (8631) 168 posts |
At the risc of repetition, the only way Desktop RISC OS for the masses is ever going to happen is by adapting a modern OS into something outwardly similar, with all legacy software running via a compatibility layer. |
Charlotte Benton (8631) 168 posts |
AMP is asymmetric multiprocessing.1

1 Or heterogeneous multiprocessing. |
Paolo Fabio Zaino (28) 1882 posts |
@ Chris Evans

SMP = Symmetric Multi-Processing. Basically, all the CPU cores of a computer system are equivalent, so every “load” unit can be scheduled to run on any core. This is modern x86 and modern ARM multi-core.

AMP = Asymmetric Multi-Processing. Basically, one CPU core works as a “coordinator” (or master, if you want) while the others execute some form of “load” unit. The load could be symmetric across the slave cores, or it could also be asymmetric across the slaves (load partitioning, even more complex). Processing can be based on PMT with barriers (where basically each load unit has a “goal” to reach, and when it gets there a signal is sent to the master: “dinner is ready!”).

HMP = Hybrid Multi-Processing (though a better term might be HMT, Hybrid Multi-Tasking, so as not to confuse it with AMP/SMP and vSMP) simply means a mix of different task scheduling techniques. For example, one implementation could use PMT for kernel thread scheduling and CMT for user-space process scheduling. Other forms of mixing could include RTM (Real Time Multi-tasking) together with regular PMT (basically PMT with priorities, as happens in Linux for example).

HT (or Hyper-Threading) can be seen as an extra layer on top of all the above, because it is simply a form of multiplexing a CPU by adding twice as many (and in some cases up to four times as many) control and status registers, plus circuitry that can automatically switch between the duplicated sets so as to simulate the presence of an extra CPU core. The advantage of HT is that in all of the above there is always going to be a moment when a load unit (aka a thread) has to be set as not-running, and switching in hardware to execute another thread makes that latency even lower. The issue with HT is security, but that is a story for another time!

There is also heterogeneous computing to mention; on ARM this means the ability to have different types of cores, for example low-power cores (little cores) and high-performance cores (big cores), as in the now famous big.LITTLE architecture… |
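To make the “dinner is ready” barrier idea a little more concrete, here is a minimal sketch in plain POSIX C (not RISC OS code; the worker count and the dummy “goal” are arbitrary choices for illustration): each worker finishes its share of the load and waits at a barrier, and the coordinator only carries on once every worker has arrived.

/* Minimal sketch of "PMT with barriers": N workers each finish their
 * share of the load, then meet at a barrier before the result is used.
 * Generic POSIX C for illustration only -- not RISC OS code.
 * Build with: cc barrier.c -pthread
 */
#include <pthread.h>
#include <stdio.h>

#define WORKERS 4

static pthread_barrier_t barrier;
static long partial[WORKERS];

static void *worker(void *arg)
{
    long id = (long) arg;

    /* Each worker's "goal": here, just a dummy partial result. */
    partial[id] = id * 1000;

    /* "Dinner is ready!" -- wait until every worker has finished. */
    pthread_barrier_wait(&barrier);
    return NULL;
}

int main(void)
{
    pthread_t t[WORKERS];
    long total = 0;

    pthread_barrier_init(&barrier, NULL, WORKERS + 1);  /* +1: the coordinator */

    for (long i = 0; i < WORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *) i);

    /* The coordinator also waits at the barrier, so it only proceeds
     * once every worker has reached its goal. */
    pthread_barrier_wait(&barrier);

    for (long i = 0; i < WORKERS; i++) {
        total += partial[i];
        pthread_join(t[i], NULL);
    }
    printf("total = %ld\n", total);
    return 0;
}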
Chris Evans (457) 1614 posts |
Thanks Paolo for your comprehensive reply. I am now much better informed. I obviously wasn’t putting the correct terms into google! |
Paolo Fabio Zaino (28) 1882 posts |
Yup, that’s definitely a good path forward for whoever wants the modern desktop for the masses… btw, a funny little note on your comment: “At the RISC of repetition…” :D So here are some potential candidates for such a task; I have mentioned them on other threads, and am just leaving them here for whoever is interested in such a challenge:
There are more; Acorn themselves (AFAIR) did some studies on the Mach kernel. |
Paolo Fabio Zaino (28) 1882 posts |
No problem Chris, always happy to help if I can :) |
David Feugey (2125) 2709 posts |
Yes. That doesn’t mean PMT can’t be done. It’s already the case with TaskWindows, for example.
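For anyone who has not used them, a TaskWindow is started simply by handing a command string to the Wimp; a rough sketch of spawning one from a C Wimp application is below. The command name mytool is made up, and the exact TaskWindow command options should be checked against the documentation.

/* Rough sketch: spawning a pre-emptively scheduled TaskWindow from a
 * RISC OS Wimp application. Assumes the caller is already a Wimp task
 * (i.e. Wimp_Initialise has been called). "mytool" is a made-up command.
 */
#include "kernel.h"
#include "swis.h"
#include <stdio.h>

void spawn_taskwindow(void)
{
    _kernel_oserror *err;

    /* Wimp_StartTask takes a pointer to a command string in R0. */
    err = _swix(Wimp_StartTask, _IN(0),
                "TaskWindow \"mytool\" -name MyTool");
    if (err != NULL)
        fprintf(stderr, "Wimp_StartTask: %s\n", err->errmess);
}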
And more than that, each core can load its own OS. Basically, what Michael did on the RK3399 (RISC OS on one core, NetBSD or RISC OS on the other cores) is AMP. Of course, this is not SMP, but you can spread light threads across cores quite easily with AMP. And the really good point is that you’ll be able to use the same tools/methods for clustering. So it’s more work to exploit it, but once it’s done, it really is open bar for true massive multicore systems :)
I agree. This is highly non-standard compared to other desktop OSes, but also very fun. And if you really need to use one big app that only runs well under SMP on RISC OS, why not launch a suitable OS on another core? And best of all: it will not break RISC OS and existing apps. |
Paolo Fabio Zaino (28) 1882 posts |
@ DavidS I am not sure how static linking would help port software that needs source changes in the first instance, nor how it would help applications load faster. The benefits of static linking lie elsewhere, such as better runtime performance and possibly better stability of an app (this last one is arguable), etc… But I’m happy to see how you would envision this in a practical use case, sir :)
Rick Murray (539) 13851 posts |
Hmm, now would that possibly be a standard known as POSIX? It’s about the only coherent “this is what a machine should have” standard around that is, sadly, heavily tied into the Unix way of working.
??? Then how is it going to work? Pretty much every interaction with an operating system is “specific”. Ask yourself how you open a window (even an empty one) on RISC OS. Windows. macOS… Now ask yourself how you check whether the Shift key is held down. Or enquire about the screen geometry. Or even “show me a list of files”.
It seems so. Because writing a kernel is hard. GNU leapt all over Linux because they never got anywhere with HURD. All the fiddly low level stuff needs to be done right, and it’s a lot easier to use something that is known to work than to spend forever writing something new.
No, I think they just need an operating system that can tell the difference between a process and a thread.
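As a one-screen illustration of that distinction (generic POSIX C, not RISC OS): a forked child gets its own copy of the address space, while a created thread shares memory with its parent.

/* Sketch of process vs thread on a POSIX system (illustrative only).
 * A forked child gets a copy of the address space, so its writes are
 * invisible to the parent; a thread shares the address space, so its
 * writes are visible. Build with: cc procthread.c -pthread
 */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared_value = 0;

static void *thread_fn(void *arg)
{
    (void) arg;
    shared_value = 42;              /* shares the creator's memory */
    return NULL;
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                 /* child process */
        shared_value = 99;          /* only changes the child's own copy */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork:   %d\n", shared_value);   /* still 0 */

    pthread_t t;
    pthread_create(&t, NULL, thread_fn, NULL);
    pthread_join(t, NULL);
    printf("after thread: %d\n", shared_value);   /* now 42 */
    return 0;
}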
Yeah. I remember. It was mostly command line stuff. As for the browser? There’s a part of me that suspects that the problem isn’t so much the rendering engine as everything else. Once upon a time web pages were simple. I think it was ArcWeb that rearranged HTML into DrawFiles, which it then displayed. These days? It is possible to turn a modern browser into a word processor with margins, caret, drop-down menus, styles, and so on. Google Docs does exactly that. I can imagine that the more cleverness is needed in a browser, the more it may need to depend upon OS help. After all, it’s what the OS is there for…
And applications written with porting in mind ought to have all the OS-specific code in separate places, where it can be switched at compile time for the equivalent code for other systems.
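Taking the “is Shift held down?” question from earlier as an example, a portable program might keep that in one tiny function and select its body per platform; a sketch is below. Only two platform branches are shown, and the OS_Byte 121 reason codes are from memory, so they should be double-checked against the PRM.

/* One small function per OS-specific question, selected at compile time.
 * Only two branches are shown; the OS_Byte 121 details are from memory
 * and should be verified against the PRM.
 */
#if defined(__riscos) || defined(__riscos__)
#include "kernel.h"
#include "swis.h"

int shift_is_down(void)
{
    int state = 0;
    /* OS_Byte 121 (keyboard scan): R1 = internal key number EOR &80;
     * Shift is key 0, so R1 = &80, and R1 comes back as &FF if the
     * key is currently pressed. */
    _swix(OS_Byte, _INR(0,1) | _OUT(1), 121, 0x80, &state);
    return state == 0xFF;
}

#elif defined(_WIN32)
#include <windows.h>

int shift_is_down(void)
{
    /* High bit set means the key is currently down. */
    return (GetAsyncKeyState(VK_SHIFT) & 0x8000) != 0;
}

#else

int shift_is_down(void)
{
    return 0;   /* other platforms: add the equivalent call here */
}
#endif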
Program problem, not OS problem. Also why a percentage indicator is useful (the user can see it has stalled).
Namely – once upon a time bugs could make or break a company. Software was written. It was committed to ROM. All 16K of it. Which is about large enough that a programmer could have his wife read the source and fix it for him. Because fixing stuff in the field would be costly. Nowadays programs are tens of megabytes, operating systems hundreds at the small end, up to many gigabytes. The operating system needs to be able to cope with all this shitty coding. PMT is more resilient, because CMT kind of requires that the programmer know what the program is doing and be nice with the rest of the system. Well, you can imagine a Google app on RISC OS, can’t you? Eighty centiseconds for me, twenty for everything else…
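Being “nice” under CMT basically means structuring long jobs so control returns to the Wimp regularly; a bare-bones sketch of the usual null-event pattern is below. The SWI names are real, but the window handling and the job itself are stubbed out for brevity.

/* Bare-bones cooperative "niceness" under the RISC OS Wimp: do a small
 * slice of work on each null event and get back to Wimp_Poll quickly,
 * instead of hogging the machine until the whole job is done.
 * Assumes Wimp_Initialise has already been called; a real application
 * handles many more event reason codes than this.
 */
#include "kernel.h"
#include "swis.h"

static int work_remaining = 1000;

static void do_one_slice(void)
{
    if (work_remaining > 0)
        work_remaining--;       /* a small, bounded chunk of the long job */
}

void poll_loop(void)
{
    int block[64];              /* the 256-byte Wimp poll block */
    int reason;

    for (;;) {
        /* Mask 0 = deliver all events, including null events. */
        _swix(Wimp_Poll, _INR(0,1) | _OUT(0), 0, block, &reason);

        switch (reason) {
        case 0:                 /* Null_Reason_Code */
            do_one_slice();     /* then drop straight back into Poll */
            break;
        case 17:                /* User_Message */
        case 18:                /* User_Message_Recorded */
            if (block[4] == 0)  /* message action 0 = Message_Quit */
                return;
            break;
        default:
            break;              /* redraws, clicks, etc. omitted */
        }
    }
}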
:-)
Yeah, my Pentium 4 proudly boasts its Hyper-Threading technology. And Windows ‘sees’ two processors. It’s a lie. There is only one core; it is kept busy by being fed bits of what processor ‘0’ is supposed to be doing and bits of what processor ‘1’ is supposed to be doing, and magically it appears as if there are two distinct processors.
You’re way off. It’s already forty odd megabytes.
I consider there to be something of an absence of multimedia ports. VLC? MPlayer? Well, there’s probably no point given that the (mostly) single-tasking ffplay runs like treacle in January without GPU assist. Audacity? MuseScore? Uh-huh.
More time slicing. Besides… I’m not sure you should use TaskWindow as an example. ;-) I wonder if Charles is reading and would fancy stepping up to dump some vitriol on TaskWindow? Suffice to say, I read the code. Quite a lot I didn’t really understand. The bits I did gave me nightmares. |
Paolo Fabio Zaino (28) 1882 posts |
@ Rick
Ever tried to run a ray trace, go to the kitchen to make a coffee (erm, a tea for you, sorry!), leave your computer to do other chores and come back, and it’s frozen: no email received, messages missed, the code you were compiling did not compile, and oh, you’d forgotten the document you were writing for work was open and you hadn’t saved it… C’mon Rick, it really sounds like you’re trying to justify one approach over the other. I would expect someone who claims knowledge to have reached the conclusion that there is NO single better approach; they all have pros and cons. PMT is just more convenient for both users and developers: it makes it easier to handle multiple cores, and a generic developer can still write an application as if he were using a BBC Micro while that app still multitasks, etc… otherwise Atari TOS was fine, well documented, and worked beautifully as a single-task WIMP OS… but wait, we abandoned it, and it still had crashes (yup, even single-tasking). Sure, a bunch of old-school geeks may complain about zombie processes, but to the average user a zombie process is fine, because the rest of the OS stayed up and running, and so did the other apps he/she was running. Nothing needed to be done: they finish their work, shut down the computer, and tomorrow it will start clean… In my work experience this is how the masses perceive stuff; no general user cares about beauty of code this or beauty of code that, PMT vs CMT, DLL vs static linking. What users care about is this:
So again, for the modern desktop for the masses the situation is clear; then we can argue/debate/discuss for the rest of eternity… meanwhile the world moves on… |
Rick Murray (539) 13851 posts |
Yes, thank you. :-)
What’s the point of this? That running a single-tasking program stalls the computer while it is busy, or that things go wrong when the raytracer crashes? Because RISC OS is fragile, delicate, and easily stressed. You don’t need a raytracer if you want to see it fall over in a heap. Just run Zap and dump thirty-odd files into it a couple of times. Or run Aemulor with a new app that mightn’t be entirely ‘safe’. Or… Besides, things crashing and losing work doesn’t depend upon the multitasking model used. The moment I get the ArduinoIDE to talk to my ESP32, if I do that while the USB WiFi dongle is plugged in, the machine instantly goes BSOD. Everything lost. Total meltdown. Boo-hoo. Well, I believe that Windows 10 even shows an unhappy smiley, which is… yeah… probably responsible for some impact-damaged monitors.
Nah, I’m just not convinced that PMT is the holy grail of multitasking in a GUI. In a command line, Unix style, it makes sense. But in a UI…. I guess PMT helps lousy coders write lousy code as there’s no real need to consider the rest of the system, which is a factor in CMT. ;-)
This.
One shouldn’t expect RISC OS to be particularly stable – there are just too many holes that things can poke into. That said, I don’t know if I have a magic touch or just don’t tend to install piles of rubbish, but for me, Win98SE and XPSP3 have both been pretty reliable. Hell, I usually shut both (well, I don’t use 98 any more, but the point still stands) by hibernating instead of a full shutdown and restart. It’s a little quicker at getting itself going that way.
Yeah, I hear Fortnite is a big thing.
I shall leave Authentic Steve to pass comment on random home machines connecting to work systems. While he is thinking of how to phrase his reply, I’ll go charge the batteries in the cattle prod.
This. That would be my question. ;-)
Ah, catering for the self-obsessed crowd. That’ll be the beginning of the end.
Indeed. Not this one. |
Rick Murray (539) 13851 posts |
Hmm, I think there have been holy wars and flames a’plenty on both sides of that one. [and no, I am never generically calling it “GNU/Linux”, it’s just “Linux” and that other option is just a mouthful no matter how you say the “GNU” part]
It’s a perfectly valid argument when you recall that the part of the discussion it relates to is about the difficulty of porting browsers and the like, nothing to do with ELF at that point. Do keep up.
My anti-JS rant would be those fools who use JS instead of, say, simple links. But as for complex stuff…
The thinking here is that a web app only needs to be written once. A good example that I can point to is the difference between Google Docs as a web thing, and the Android app. But, the app, with documents saved locally, is the only way of supporting…
Which might be about the only reason the app exists at all.
Especially when you consider that a lot of sites pull in popular code directly from its source. Which means when npm or node.js get pwned, so does everybody else. Again and again. |
Rick Murray (539) 13851 posts |
Hmm… You’re American. The land where hospitals are a for-profit business. Just think, if they couldn’t access your records (kept offsite to keep costs down) then something could go wrong and they’d need to keep you in for longer. Ker-ching! [this is also another example of why dumping stuff in “the cloud” is a really poor idea] |
Steve Pampling (1551) 8172 posts |
OK, my weird status kicked in.
Ah, for the masses. So when do we ditch the middle mouse button menu stuff and bung a “ribbon” across the top of things? |
Dave Higton (1515) 3534 posts |
The compromise is greater bandwidth and storage space. You can’t have it both ways, at least not by any algorithms that we know yet. |
Steve Pampling (1551) 8172 posts |
Or when the system is on the same site and there’s a problem with the core routers1, so people have to use paper to record what the manual readings were. That one was years ago, and these days it’s more acute – all the ECG kit, BP kit, pulse-ox and so on just have to talk to the central system via a network connection. On site, no internet in the mix.

1 I think I mentioned losing a Boxing Day family visit (they were here, my mind wasn’t)
Rick Murray (539) 13851 posts |
People want better quality, far higher resolution, and a much smaller bitrate, in order to sensibly stream HD content to mobile devices over the mobile network, or on slow dilapidated broadband links, or in cities with high contention. The only way to get bigger and better and smaller is to throw a lot more processing power at it.
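To put rough numbers on that (back-of-envelope figures only; the 5 Mbit/s streamed rate is an assumed typical value, not a measurement): uncompressed 1080p at 30 frames per second is around 1.5 Gbit/s, so a streamable encode already represents a compression ratio of a few hundred to one, and squeezing it further without visible loss is exactly where the extra processing power goes.

/* Back-of-envelope arithmetic only; the 5 Mbit/s streamed figure is an
 * assumed typical rate, not a measurement.
 */
#include <stdio.h>

int main(void)
{
    const double width  = 1920, height = 1080;
    const double bits_per_pixel = 24;           /* 8 bits per RGB channel */
    const double fps = 30;

    double raw_bps      = width * height * bits_per_pixel * fps;  /* ~1.49e9 */
    double streamed_bps = 5e6;                                    /* assumed */

    printf("raw:      %.2f Gbit/s\n", raw_bps / 1e9);
    printf("streamed: %.2f Mbit/s\n", streamed_bps / 1e6);
    printf("ratio:    %.0f : 1\n", raw_bps / streamed_bps);
    return 0;
}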
Steve Pampling (1551) 8172 posts |
Rick did say “mobile devices”, which covers a wide variety of items. Mobile phones are a tricky example; some are so big they qualify as small tablets.
Steve Pampling (1551) 8172 posts |
On shared libraries: you could read the link from earlier, where Dr. Simpson builds on the earlier work he references (it says dead link, but the Wayback Machine will get you to the referenced pages) when creating a shared library using the AOF format. Having created a working item, he then notes that GCC had produced a shared library system, and declares his own code – which is all there – obsolete.
Rick Murray (539) 13851 posts |
My screen is 2960×1440. Even taking into account the PenTile pixel arrangement, it still ought to be capable of resolving Full HD.
My screen is 5.8 inches (according to Aida64). I use it as a home cinema. Since I’m short sighted, I can lie in bed and hold it fairly close so it looks just a little smaller than a projected screen. At that proximity, individual pixels on lesser screens are obvious (my 10" tablet is 1024×600 and I can tell). Additionally, there is a visible difference between 480p and 720p, as well as between 720p and 1080p. Tested using downloads from YouTube.
On a mobile device, it’s not worth losing sweat over. Much of the video decoding will be hardware-assisted by the GPU. The processor? It will be busy pulling packets of data over the WiFi link, running flat out, and shuffling it all into the right place at the right time to give you non-stop video. The processor is doing a lot of things, but video decode isn’t one of them.
A portmanteau of “phone” and “tablet” to give the horrible “phablet”.
Most of my phone activity ceased at the end of September 2019. I think I’ve made maybe half a dozen calls since then. Now, non-phone behaviour, on the other hand… Most of my messages here are written on the “phone”. I read El Reg, check the weather, pop over to The Guardian to assess the level of stupid in the world, and read XKCD. I stream music. Listen to my own MP3s. And videos. Lots and lots of videos. Netflix has just told me Flatliners is available (a remake of the film from the ‘90s?) so I’m going to finish this, make a tea, go lie in bed (it’s cold!) and watch it. If my back is okay, there’s Cursed to finish watching. If my back isn’t okay, well, I’ll probably finish watching it anyway… :-) Speaking to people? Why would I want to do that? |
Steffen Huber (91) 1953 posts |
Everybody who bought a new mid-range phone in the last 5 years.
Depends how close you are to the screen. Screen size says nothing without considering viewing distance. My Full HD projector creates a picture 2.5m wide, but I sit at 4m distance.
CPU bandwidth is irrelevant, as modern video compression formats are decoded in special hardware with nearly 0% CPU usage. On non-RISC OS systems only, sadly. Streaming bandwidth, on the other hand… But the difference between Full HD and slightly lower resolutions that would still give acceptable quality is not that big. |
Steffen Huber (91) 1953 posts |
You have RISC OS on a phone? Or are you just switching topics rapidly and mixing everything together? |