ExtdVars
Thomas Milius (7848) 116 posts |
I have just released a new small module called ExtdVars. ExtdVars simply provides a couple of variables which all start with “ExtdVars$”. You can read these variables as usual, e.g. in programs written in C (getenv, SWI OS_ReadVarVal) or in BBC BASIC.

Built-in variables:

1. UUID – Every time it is read it returns a unique value according to RFC 4122 version 1. The current version allows the generation of up to 65536 unique values within 1/100 second. Such UUIDs are required, for example, inside the iCalendar format to identify an event in a unique way. But you can also use it to write command scripts of which you can run several instances in parallel, by passing the UUID as a parameter and using it to create unique variable names (see the Instances_main example).

2. ActualDir – Contains the currently selected directory. In theory you should be able to obtain this from the FileSwitch$CurrentFilingSystem and FileSwitch$*CSD variables, but practice shows that these are often not updated in the way you expect.

Additionally you can create and delete a nearly arbitrary number of variables by commands, which can be of one of two types.

The command ExtdVars_UnsetVar, followed by the variable name without “ExtdVars$”, will delete such variables. A regular *Unset isn’t enough, as the ExtdVars module won’t notice the deletion.

The command ExtdVars_CreateCommandVar, followed by the variable name without “ExtdVars$” and a command, will create a so-called command variable. When you read the content of such a variable, the assigned command is executed (it must be a real command and not a program, as that will lead to internal problems!). The first line of the output generated by the execution of the command is returned as the value of the variable. E.g.
this can be the first line of a certain file (by using Type as the command), or the result of an evaluation defined as an Alias, as the Prompt example shows, which changes the CLI prompt into a UNIX/Linux-like variant showing part of the current directory.

The command ExtdVars_CreateQuestionVar, followed by the variable name without “ExtdVars$”, a decimal flag value, a title and a question text (optionally including a list of buttons at the end, separated from the question by a “|”), will create a question variable.

ExtdVars can be downloaded from my homepage http://thomas-milius-stade.dnshome.de/indexE.htm (section Computer activities) or directly from http://thomas-milius-stade.dnshome.de/Download/Freeware/ExtdVars.zip

I hope someone finds the tool useful. Any feedback is welcome. |
Raik (463) 2061 posts |
Very nice.
Especially for me ;-) |
Charles Ferguson (8243) 427 posts |
The UUID module defines a system variable, UUID$Generate, for the same purpose.

The CommandVar interface is likely to be incredibly dangerous. Consider a tool that enumerates the variables on a key press – any command can start a new application, and doing so from within a variable read is not safe.

It looks like the filename used for writing the command output is on PipeFS; so your commands may hang after 256 characters and get switched away from if you’re inside a TaskWindow, which probably is ok, if the evaluation of the commands is safe for reentrancy (pretty sure you cannot guarantee that, as commands are able to do pretty much anything). However, the PipeFS filename doesn’t look right to me – it’s specifying the disc name:

  Read variable <ExtdVars$foo> using code variable at &04109b24
  3830420: SWI XOS_CLI
  => r0 = &04107aec 68188908 pointer to string command "echo bar{ > Pipe::$.ExtdVars.0 }"
  CLI Redirection block '> Pipe::$.ExtdVars.0 '

which might be accepted by PipeFS but I don’t think it’s right. Also there are no calls around the CLI call to preserve the redirection, so when the command ends all redirection will have stopped. You probably need to preserve and restore the redirection around the call.

Hehe. The number increases when you nest the CommandVars inside one another. Nice.

As for the ‘CreateQuestionVar’ interface… I suspect that this will have bad effects on many things. Reading variables is used during the enumeration, so just enumerating the variables will pop up an error box. Similarly, if you’re in an environment where the error box is not appropriate you’re going to cause problems for the user – consider the error box appearing when you’re not in the desktop, when you’re redirecting output to a sprite, or trying to print (it’ll cancel your print job). Additionally, consider that the way people use this type of variable may surprise you. Why not have the question asked at the point of invocation instead of deferring it until read time? 
That would avoid at least some of the problems with the reading of the variable being in an uncontrolled location. You even suggest that people should immediately read the variable, so why bother introducing the race condition at all? Avoid it entirely by just setting the value directly. You wouldn’t even need to make it a code variable, and would thus simplify the situation significantly. I’m not sure how RISC OS Classic reports error boxes with additional buttons when you’re not in the desktop, but I was amused that on RISC OS Pyromaniac this came out just fine:

  *ExtdVars_CreateQuestionVar question 2304 "Question test" Who are you?|Raik,Thomas,Other
  *echo <ExtdVars$question>
  ----------------------------------------
  [ Question: Error from Question test ]
  Who are you?
  <Raik:1/Return>, <Thomas:2>, <Other:3> ? |
Thomas Milius (7848) 116 posts |
Thanks for the feedback.

Regarding the UUID module: I must say I have been using RISC OS since 1994 now, and over the years a lot of software (freeware and commercial programs) has found its way onto my disc. Sorry, but I wasn’t aware of such a module. Even though it is quite simple to generate a UUID, it wasn’t my intention to write things for the second or third time :-(. I think we are all suffering from the problem that there is no up-to-date overview of all RISC OS modules and programs: what they do in detail, where you can obtain them, and so on. This is especially a problem for new users. Even looking in PlingStore you are only seeing a very small part of the software which was ever written for RISC OS. UUID$Generate can be emulated by using macros, like SetMacro UUID$Generate <ExtdVars$UUID>, and vice versa.

Regarding the CommandVar interface: I must say that I am not seeing a big problem in this. I personally would always make rare usage of it. Of course you can do a lot of things with commands and Obey files, but why assign them to variables? E.g. in the case I used it (to change the prompt) it shouldn’t end up in a disaster. I agree the redirection could perhaps be improved, but is there a use for doing so and mixing it with other redirections? For simple tasks this should be enough. Yes, I integrated a recursion level in the pipe redirection, even though I didn’t check whether it works. I think the bigger problem is recursive access to command variables, which will end up in an infinite loop. I think I have to add a flag here to detect and reject this. I am aware of the 256-byte limitation when using pipes within a task window; I just checked this. If desired I can easily expand the PipeFS buffer to e.g. 32K by creating the file before using it. But I think you will use such constructions only inside a known environment, so you may know whether the expected maximal output is small enough. 
Regarding questions: In my example I remove the question definitely just after I have got the reply, to avoid trouble during enumeration, *Show etc. I regard it just as a small help inside Obey files during installation and the like. The output you are showing above is really interesting, but I fear it may not work in all environments. |
Charles Ferguson (8243) 427 posts |
I don’t expect that you would be; I created it in the early 2000s and it would have gone into a Select version, but I don’t think it made it. But it was registered, and as such it’s worthwhile mentioning that a global namespace variable to do the same thing was defined.
Depends on whether you see it as a general usage tool, or something very limited, and whether it’s documented that way. If you always use %-specified commands, you would avoid many of the problems. As it is, a simple *Set Alias$… could divert your command to something entirely different. At the very least, make your examples use % prefixes so that they’re not going to get diverted to aliases that’ll give you a bad day.
I reckon just blocking it by reporting an error if your counter gets high. Or checking how much stack space is left and stopping if there’s less than 1K or something? That’d probably protect the user from a nasty abort. It was nice to see it working when it handled that it was inside itself.
I’m pretty sure that’s not how it works. PipeFS always writes to the end of the file. If you create a 32K file in PipeFS and then write data to it, you get data at the end, regardless. I could be wrong, and PipeFS is freakin’ weird, but I’m pretty sure you can’t do that.
You can’t be sure that you won’t be preempted away in that time, though. If that obey file is run in a TaskWindow, or some module is using a ticker to do things, you’re potentially going to get a failure. Again, why go to the trouble of making the Wimp_ReportError happen in an environment you don’t understand by putting it on the end of a code variable, if you expect the user to just unset it immediately afterward. You can avoid the whole problem by not using code variables at all for that operation. |
Raik (463) 2061 posts |
Very strange. I have been a RISC OS user since 1992, and use Select/Adjust on my Risc PC, and 4.42 on my A9home. |
Thomas Milius (7848) 116 posts |
Ok. I think questions inside Command files could also be solved by changing the concept in such a way that the command used to define such a variable immediately shows the query and assigns the value to a variable of choice. |
Thomas Milius (7848) 116 posts |
Regarding “commands”: Due to the internal concepts of RISC OS, not all “commands” can be used in the same way. Whilst scripts and module commands are handled entirely without problems, starting a program leads to trouble in memory handling – e.g. when terminating a BASIC program but not a C program – because “OSCLI” and “system” behave entirely differently. Inside BASIC you can overcome these limitations, e.g. by using !Routines. Also, starting “commands” out of modules is still one of the big adventures of our time… :-( Unfortunately I don’t know another term to distinguish this situation, so I chose “real commands” in the absence of a better one. |
Charles Ferguson (8243) 427 posts |
It’s not that strange, honest! There are a lot of things that I created which never made it into a release. I’m a big fan of trying things out and seeing what works and what does not. Some components had a long gestation period before they saw the light of day, and some may never have been seen outside of prototypes. Somewhere lying around there was a list of all the modules and components that I created which were functional enough to have registrations, but many of which may never have escaped to the wild. |
Steve Pampling (1551) 8172 posts |
With module replacement on a test basis in an emulated environment1 that sort of thing is almost begging to be implemented as a fork for alpha test. How much of your old code have you got, or how much do you think is sitting on a file server that Aaron has? The latter is probably an arbiter of whether any of it could “escape” to the wild anyway. 1 I did wonder whether you could replace on load with something like the Fileswitch module. It seems that if you drop your replacement into the podules directory in RPCEmu, then you can do exactly that without the normal errors about things being in use. |
Thomas Milius (7848) 116 posts |
I have updated ExtdVars now. Links are the same ones as before. Changes: |
Charles Ferguson (8243) 427 posts |
I’m not quite sure what you mean there, so I’ll take a stab at a reply. Testing this sort of thing in an emulated environment is exactly what Pyromaniac is for. Although there aren’t many tests implemented in the things I’ve made available on GitHub, it’s quite possible to test most of the example `riscos-ci` components that I placed on GitHub (https://github.com/topics/riscos-ci), so adding testing for such things should be easy. And I did actually wonder how easy it’d be to shove some simple tests around ExtdVars in a GitHub repo, because some of the things that it’s doing are quite testable, and it’d be nice to see it being exercised. And then I realised that I haven’t implemented some of what it relies on. On the other hand, some of the other parts of it should be able to be exercised.
Of my code, I have most of it (modulo loss of discs, time, forgetting where I’ve put things, and having entirely imagined that I wrote them – something that is also possible) I think. And if it’s original it belongs to me. |
Charles Ferguson (8243) 427 posts |
That’s a weird thing to do, but I suspect it’s kinda useful in a similar sort of way to Repeat, but for parameterised commands. |
Steve Pampling (1551) 8172 posts |
We seem to have an increasing number of people who are enthusiastic testers of the latest beta and various experimental items. When it comes to testing parts of the ROM-based module set, soft replacement doesn’t work – unless you can test that item on an IOMD release, in which case dropping the module into the podules sub-directory will do the replacement before it is in use. (Shame there isn’t the same facility in the other releases, or did I miss it?) Pyromaniac is a wonderful test bed for people to use, but it doesn’t simulate the normal use situation of Josephine Bloggs with her varied and possibly quirky personal software set. More later, being rushed off to do something… Edit
Would you consider making the code and/or binaries available? |
Charles Ferguson (8243) 427 posts |
I’ll answer your comment from a few different angles. [And I’ll apologise now if I’m explaining things that are obvious or you feel I’m ‘teaching grandma to suck eggs’ 1; this is a wide audience, technical levels vary greatly, and I’m hopeful that some of what I think about this is useful to someone.]

a) If people are only happy with testing a ROM, and not with softloading modules, they’re most likely not going to be able to deal with the expectation that they debug problems. So you may not wish to care; those people are not your demographic for beta testing. I realise that that’s assuming a technical competency from the fact that they have an interest in installing a test image, but even if they’re highly competent but don’t want the additional burden of testing things, then you’re probably not going to get a good experience from them if something goes wrong – and a good report when something goes wrong is exactly what you’re trying to achieve. However, that also highlights the (possibly large) group of individuals who would give feedback and would like to test things, but for whom installing ROM + softload testing is not something they want to put the effort into. For those people, if you want to capture them as testers at that stage, you need to change what you’re doing to enable them to do what they are able to do, without the burdens that they’re unable (or unwilling) to bear.

…and so…

b) Change your user’s experience. If building a ROM is hard, make doing so easier. In Select development building a ROM was one thing, but you could patch the ROM to replace existing modules easily – a command line tool and a desktop application were able to replace individual modules, delete them, or put modules on the end. This removes the complaint that building the ROM is hard – you’re no longer building the ROM, you’re just modifying the existing one by replacing modules in it.

…but also…

c) Change your expectations. You mentioned IOMD, possibly because you’re talking about RPCEmu only supporting IOMD. 
Who cares? Only the software that talks to the hardware should care about that, and except for device drivers, you shouldn’t be writing to the needs of the hardware. Even in the case of device drivers you should be able to author them in such a way that your testing obviates the need for hardware except in the system integration testing (which is what your ROM is, but by that time you should be very confident).

In response to the particular case you’re asking about (I’m not sure which, but both apply), my UUID module and Thomas’ ExtdVars have no dependence on the hardware. And they’re external enough that they would never be tested in ROM anyhow. In particular, ExtdVars would never go into ROM because its ability to destroy the world by executing commands at unsafe times means that it’s just not suitable for general purpose use (sorry Thomas, that’s how I see it, and I think you’ve agreed that it’s fine for users’ specific purposes if they understand the restrictions, but that’s not true of something that goes into ROM).

Anyhow, regardless of whether they’re suitable for ROM, softloading such components is not an issue – they are not dependent on any other components (at least not in any specific way that means they need to go in ROM), nor do any depend on them. They’re perfect candidates for softload testing – do not waste your time trying to invent ways of testing in system integration if they can be tested in integration testing or mere unit tests.

So, there are modules which require testing in a ROM image, but they’re actually pretty few and far between. And those people that are working with those specific modules will know how to softload them. FileSwitch is one of the simpler ones to softload IIRC. Even FileCore can be tested as a softload module. SharedCLibrary, accepting its limitation of one safe load per session, is actually one of the more fiddly of things. 
ColourTrans may cause you problems because of the potentially cached translation tables that other users will have references to. Resolver is a dangerous one, too, as it also provides references directly into its workspace. But developers of those modules, and those that are actively testing the components, won’t need to build a ROM to do so. And they can be reasonably safe in the knowledge that you can drop the module into a user’s hands and, providing they load it early in the boot sequence, they won’t have any issues.

…which brings me to…

d) Change your delivery mechanism. If you need people to test modules that are not ROM-based, or which are ROM-based but you want to load separately, introduce a mechanism for people to do that. Off the top of my head I’m thinking of something in your PreDesktop directory which loads a set of modules that the user has chosen, and an application which allows those modules to be selected at runtime (or which the user just edits to insert them). It /could/ just be a directory of modules that is always loaded, but my feeling is that this makes it harder to opt in and out of testing specific things without introducing another user-specific ‘not in use’ directory for things that aren’t going to be tested right now. Go the whole hog and create a !Boot.Resources.Beta directory that contains a set of directories for each beta trial that the user may opt into, with a set of modules and resources in each directory, and a description file that explains what it is and how to raise bugs. Then you can have a small Configure plugin that will allow you to opt in and out of these betas. 
(You could even make the little description file indicate areas that the user might have concerns about so that they can provide feedback, and a way to generate error reports back to the developer when things go wrong, to provide a means for developer feedback… and then you’d have built !Bugz…)

The principle of loading modules during the boot sequence has been done for years (ooh, I get a chance to say ‘for decades’ here, and be accurate! I could even say ‘in the last century’, but it doesn’t seem quite as fun :-( ), so it’s not like it’s going against the grain there. Yeah, there are rare modules which are by their nature harder to inject this way. FileCore and FileSwitch are particular examples that I can say are like this. FileCore doesn’t cope so well because the disc is already mounted and you’re booting from it. FileSwitch is similar in that you’re probably executing files from it, too. Both of these are surmountable – either by using separate discs for the primary boot, or by using the Obey file caching and injecting the load very early in the boot sequence. But honestly, those are incredibly specialised cases and you probably don’t want to be sending around betas to people who don’t know how to handle them or how to build a ROM themselves. Remember that betas of FileCore and FileSwitch were tested by people long, long before those people had access to any of the sources, so it’s not a problem that only exists now. And of course, those two are ideal examples of where patching the ROM is not only viable, but the most appropriate option.

…and leads me to…

e) Change your software. You say that for some things ‘soft replacement doesn’t work’. So make it work. It’s a requirement of a modular system (ok, I’m pretty certain I’ve written about this quite a few times, but then I think I’ve written most of this a few times and I feel like I’m just a stuck record, but anyway…) that components be able to coexist and be replaced in a largely arbitrary manner. 
This may be a requirement that I place on OS development in RISC OS, but it’s one that is very important to user (in this case developer) experience. Here’s a little bit of reasoning…

If you have to know the right order in which to load things, then it implies that there is a combination that is unsafe. If there is an unsafe combination, that is unacceptable. As an operating system developer you must not introduce a situation where the user can do something that is to the detriment of the system as a whole by doing expected and normal operations. This holds for application development too, but as an OS developer you have to care that you’re not allowing parts of the system to hurt other parts of the system. Loading modules is a normal operation, so you shouldn’t have to care about ordering (as a user/developer). Therefore, as the module author, you must ensure that ordering and timing don’t matter.

By way of concrete example: if you kill the Internet module, it will release all its handles and notify clients that it has gone away. There is no mechanism for it to inform applications, so… they’ll be unhappy, and as OS developers we are unhappy about that, but they’re secondary to the module system (as it underpins the applications). For the purposes of this example, I’ll ignore applications. On the Internet module’s death, modules which care about sockets sit up and take notice – they stop polling for data, they note that they don’t have sockets any more and they go quiescent (think of things like ShareFS, Freeway, LanManFS, NFS, Resolver, and others in this stack). To the user, they’ll see connections drop and the like, and the ResourceFS list of discs will empty because we now don’t know about them (actually, I’m not sure it /does/ empty here; it may actually just stay there until the records time out, which is a better solution, but it certainly could remove the remote discs). The new Internet module loads, and announces itself to the world. 
It enumerates the network interfaces that are available. Finding them, it registers packet handlers with those modules, and announces that there are interfaces available. Some components take notice of this and claim sockets; others try but find that there isn’t an address present, so they go back to sleep. Components which configure addresses are woken there and may begin autodiscovery (in Select that means InternetIP wakes up DHCPClient and ZeroConf, and may configure addresses using the old AUN addressing scheme; in RO5 I guess that means that DHCP does its work there; possibly Freeway might configure the old AUN addressing? I can’t remember how it worked before InternetIP… anyhow, the principle is the same if the detail differs). Addresses will turn up after a few seconds, and this will be announced by the Internet module so other network modules will wake up again and try again to claim sockets. With an assigned address to work with they will be happy and may then begin doing their work. Let’s follow Freeway here… Freeway will broadcast the shared discs (and the host) it has available, and will receive back off the network a collection of discs available from other users. This will cause it to register those discs in ResourceFS, and ResourceFS will then issue upcalls to notify clients that the contents of its folder have changed – and the user sees the discs on the network appear in the remote discs window (or just refresh, because they were just updated, not actually removed in the earlier change). [There are components which cannot be replaced in this way (as easily) – MBufManager, for example, refuses to exit whilst it has clients. If you want to bootstrap a replacement MBufManager you terminate all its clients and then load it, and restart the clients – it’s not ideal, but it’s structured, and you only need to terminate its /direct/ clients. 
You don’t need to kill Resolver, for example, because you know that Resolver relies on Internet and is happy to survive if Internet goes away. But you do need to kill the network drivers and Internet (and maybe LanManFS, AppleTalk, if you have them). All this can be achieved by a simple scripted restart sequence – think of UnplugTBox for a similar example which orchestrates the correct termination and restart sequence for modules that are unable to be terminated cleanly (and then think about how Toolbox should be tidied up so that that doesn’t happen :-) )] All parts of the system should be able to be replaced. There will be increasing levels of carnage as you terminate parts of the system for which other components do not cope. But that carnage can be reduced by changing the software to be resilient to it – because it’s expected that users be able to kill and restart parts of the system (and required for developers to be able to do so reliably). I can’t speak for anyone else, but it is satisfying (and a huge part of my RISC OS testing) to lobotomise the OS to see at what point it becomes unusable. Kill a module, see how the system copes. If you get errors, you may be able to fix them. If you get aborts you must fix them. If you lose the desktop, that’s pretty poor, but may not be the end of the world. If you lose the video or the input, it’s probably game over. At what point the system fails catastrophically is a measure of the reliability of the system in the face of modular replacement. And as modular replacement is a /normal/ and expected part of user behaviour, it is therefore a measure of reliability to the user (obviously only one of many, but nothing is ever done or measured in isolation). Select can survive a lot of punishment before it fails catastrophically – oh there are still huge holes, but all these things are ongoing work that you pretty much never finish. Ok, so that was a long-winded example. 
What does that have to do with the need to change software? Well, in the example, any part of the components I discussed can be loaded at any time. Should any of those components be reloaded, they provide notice to others that there are changes necessary in the state of the system, and that users of their resources should be aware. Largely this happens through service calls. If there are components that you have identified that cannot be softloaded (or are unreliable to softload), that is a bug. The workaround, as you’ve identified, may be to load it into ROM so that it is available at the right time in the right order, but… the ROM is only the end goal. Everything should lead up to a ROM, not be the only way in which things are exercised and tested.

This does have side effects – there will be a happy path for initialising parts of the system which is fast: one which does the fewest notifications that parts of the system are available. Commonly the happy path is device drivers loading first, followed by controller modules, then modules that use them, and so on. This reduces the number of ‘I’ve started, oh, I can’t do anything yet’ at each level. The more inefficient path is usually to start the highest level modules first and then work down to the lowest level. Consequently the system should be organised with the lowest level first and the highest level last – which is pretty much the default for ROM builds, which is nice.

And why does this sort of thing help developers and testers? 
Because your development time isn’t wasted looking for odd interactions between components when they load in the wrong way (strictly it’s not wasted, because it’s a required part of your implementation that it cope with that – you probably still spend that time looking for those things, but you’re doing it because you accept that your work isn’t complete until the module is resilient, rather than the lack of such resilience being a secondary problem), and because when you’ve made these components able to be tested more easily, you open up easier doors to testing. That might mean that development-test cycles become shorter because you don’t have to build ROMs or reboot between loads of your components, or it might mean that you can offer softloaded components to your users because you’re confident that they’re safe. The former is a goal of most software engineering, and the latter is the specific thing that you’re trying to work around in your comment.

Further, you say that ‘some people may feel happy tweaking a module but not building a new ROM’. If you make it so that those modules don’t require a new ROM, you’re just leading into making the system more resilient and reliable, as a way to enable your users (developers) to add value to the product. You’re meeting multiple goals at once – increasing your developer base, meeting design goals, and making the system more stable.

There’s some discussion of this modularity in https://gerph.org/riscos/ramble/modularity.html (which includes a short discussion of the lobotomisation process), and https://gerph.org/riscos/ramble/buildsystem.html contains a little detail on how the ROM building ensured that the happy path was observed for ROM construction. (I was sure I’d written about this before!)

…and finally (and relating to comments about Pyromaniac)…

f) Change your testing. 
This kinda relates to point c (change your expectations), in that testing in ROM is irrelevant for 1) components that aren’t going into ROM, and 2) components that can be tested outside the system integration tests. I’ve mentioned the levels of testing in previous points above, but to be clear, what I’m talking about here is the degree of integration you have at each level of the tests – how much of the OS you care about when you’re testing things.
Unit, component, integration, system and field testing – those are the terms that I’m using, and the expectations that you can have for each. Each level of testing comes later in the development lifecycle, and each provides a greater level of confidence in the whole. And if each layer builds on the earlier layers, it will have to exercise less of the system. If you have good unit tests, your component tests will only need to exercise as deeply as the interface layer. For example, if your unit tests exercise the functions that are called by the module SWIs, you only need to check that the module SWIs are doing the right thing to be sure that the module is good. And if your module doesn’t use anything else outside those functions, then you don’t /need/ to do integration testing (because there’s nothing for it to integrate with) and may be able to skip anything more than rudimentary system testing.

The example you cite of testing components in RPCEmu by making them a separate podule module is essentially system testing (or possibly field testing, but field testing is usually non-technical people acting as users, not as testers). You’re trying to test that the system as a whole works together in a target environment, by testers who will give you feedback. It has the benefit of being able to be distributed to users in that form and tested by developers, but it’s targeting the level of system testing. And if you’re talking about components like UUID or ExtdVars it’s not really relevant.

Why not relevant? Because those components have no real dependence on the rest of the system (UUID doesn’t rely on anything else), or wouldn’t be distributed in a form where they needed to be used like that (ExtdVars wouldn’t be a candidate to go in ROM, so isn’t appropriate for that type of testing). By a similar token, FileSwitch should have a lot of integration testing present to try to exercise things at a lower level, so that you have less need to test at the higher level of system testing. 
That it is hard for users to softload; that it is hard to test in combination with the user’s system; that it is hard to build ROMs… these are all problems that should be alleviated by making them unnecessary: through testing, through easing the software requirements, and through easing the distribution mechanisms.
The point is that the testing necessary in the Ms Bloggs case should be low. If you have done the job right in testing at the integration testing level (Pyromaniac straddles the line between Integration and System testing, I believe), the variables that are left at the field test level will be far smaller. And in the case of UUID it is incredibly unlikely ever to cause any problems – there is little of the system and its specialised cases to interact with, so very little point in going further. As I may have said, Pyromaniac isn’t intended to provide a whole operating system (though it’s depressingly functional), but to provide a way to do the testing so that you don’t have to rely on end users.

Each of those layers of testing is more expensive than the one before it (in terms of the time taken to run, the time taken to get feedback, and the time taken and complexity involved in finding problems). So if you’re testing something at a higher level which could be tested at a lower level, it is costing you more than it should. There are other opportunity costs in doing so as well – by pushing your testing on to users for things that could be tested automatically, they cannot be doing testing on things that inherently need a human end-user to test them. And there are costs to you in terms of getting feedback – users are poor mechanisms for feedback in general, and, as with all the layers of testing, they introduce another complexity in collecting the information that is needed to find problems. That is, they don’t give you the right information, they make the cycle of testing longer, and they mean that you will go down wrong rabbit holes (that applies to each increasing level of testing as well as to humans – it just gets harder to diagnose the more moving parts there are). So placing tests at the lowest appropriate level simplifies the behaviour. 
And should it turn out that you have problems at a higher level (eg a user reports an issue when used in circumstance X), and you recognise how the issue arises, then if you have lower-level tests you can introduce that fault in the lower-level testing and ensure that it does not happen (or, if it does, that it is detected and reported appropriately). So, to answer your comment: it doesn’t simulate the normal use situation, no. But then it’s not meant to. It’s meant to allow you to make those quirky setups and debug what happens in really weird-ass circumstances, so that it never reaches Ms Bloggs, or so that their quirks are captured and /can/ be simulated, and when Tom Smith has the same quirks in the future, they don’t see the issue.

I think my point is that if you push out your problems to your users because you believe that only they can find your problems, you have issues with the manner in which your software is developed, your testing, and the complexity of your system. Additionally, if your users find it hard to do that testing, then you are placing the burden in the wrong place.
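As a sketch of that “capture the quirk at a lower level” step (hypothetical names and a made-up fault, purely illustrative – not real Pyromaniac code): once a field report is understood, the exact circumstance is frozen into a low-level regression test so the same fault can never silently return.

```python
# A field report said: "parsing fails when the path has a trailing dot".
# Once the cause is understood, the quirky input is pinned down as a
# low-level regression test, run on every build thereafter.

def normalise_path(path):
    # The fix: strip a trailing '.' that an earlier version mishandled.
    if path.endswith('.'):
        path = path[:-1]
    return path.rstrip('/')

def test_regression_trailing_dot():
    # Reproduces the reported circumstance exactly.
    assert normalise_path('docs/readme.') == 'docs/readme'

def test_regression_plain_path_unchanged():
    # Guard: the fix must not disturb ordinary inputs.
    assert normalise_path('docs/readme') == 'docs/readme'

test_regression_trailing_dot()
test_regression_plain_path_unchanged()
```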
What would be the benefit to me? I have made things available, and they have been met with silence. My concerns when doing the talk last year (in addition to nerves at talking to people) were that what I had released would be met with criticism that I’d not made things available or that I wasn’t doing things right. So…
The open source components were all released under the most permissive licences (MIT or BSD), and there is no charge, registration or tracking involved in build.riscos.online. They’re as available as possible. The device controller that I mentioned recently on the Pyromaniac announcement thread, a few weeks back, is an ongoing project with a friend – once they’re happy with it, I’ll register it and make it available as open source. And I’ve had nobody respond to any of those things at all.

I don’t know whether there are people using build.riscos.online as, although I have logs of errors, I didn’t bother to include usage tracking (not even counts) – the CPU usage implies that there’s not been anything happening with it except when I submit jobs. It costs around $36/month, so that’s … eek, about 310 quid a year. Meh. I could run it for less if I put the effort in to rework how the SSL termination is done, but honestly I can’t really be bothered.

My feeling is that Select is deemed passé, and badly so, and each time I see someone make such a comment I’m less inclined to do anything about it. As you say, much of that is 15 years old, and yet nobody else has bothered to do many of the comparable things – in fact my view is that where there was an opportunity to do things compatibly, or to take the design that was used in Select and reproduce/use it, the mechanisms were actively ignored or done in contrary ways.

So think about what I have done to make things available already, the general lack of response, and the cost to me of the service that I’m already incurring in money with no return. Add to that the fact that what I have put out recently has had no response, and the general feeling that what I had done for Select wasn’t worthwhile and hasn’t been seen to be important enough to recreate or follow up on. Now ask yourself: what’s the benefit for me? Anything I do has an associated cost – I don’t get to watch telly, or add more fun things into Pyromaniac. Or sleep. Sleep’s good. 
I’m not complaining; just explaining how I see things. I’ve done these things because I enjoy them. I’ll probably continue to do so, but I’m not going to put effort in where it’s not fun for me, or where I don’t see it as worthwhile. Also consider that I’ve spent 3 hours on a Sunday writing an explanation in reply to a couple of paragraphs of comment, because I feel that I have something worthwhile to say and that if I say it, someone might listen.

1 For non-English speakers, that idiom is used to apologise to (or criticise) another for explaining something that the other already knows. I have no idea where it comes from and honestly it makes no sense to me. But there you go.

2 System integration testing is testing that you’ve got it integrated properly (does it work when it’s in the ROM, in RISC OS world). System testing is higher up, checking that the system as a whole, not just the integration, works in its target environment (does it work when it’s in the ROM on a Pi, for example). The distinction is generally pretty blurry by this point, and commonly “System testing” is used for both. |
Steve Pampling (1551) 8172 posts |
Thank you for taking so much time to answer. I think I will go and re-read and think. |
Charles Ferguson (8243) 427 posts |
Thanks; you’ve made me laugh a lot :-) Hopefully /something/ is useful in there. And sorry to Thomas for the slight hijack of the thread.

I’ve just tried out the latest version of ExtdVars and I get an error when trying to execute the commands using ExtdVars_Repeat. That means 2 things – 1) I’ve got an invalid error number in my GSTrans error code, and 2) I’ve got a bug in my GSTrans code, because I think your code is right and Pyromaniac isn’t interpreting it properly. So… Yay! More bugs for me :-) |
WPB (1391) 352 posts |
Pyromaniac leaves me lost for words – in a good way! It’s just mind boggling. All your work on RO over the years is very much appreciated here. (Apologies Thomas for more thread hijacking.) |
Steve Pampling (1551) 8172 posts |
It may help to look at the thread from January, which seemed to unearth a bug carried through from RO2. Thomas has probably written his code to match/use the fixed behaviour which appeared in the ROM of February 11. |
Charles Ferguson (8243) 427 posts |
So… I’ll follow up on myself as this is kinda related to my prior posts.
And yes, looking at the code and comparing to RISC OS Classic, my implementation was off. Specifically, the handling of a string with an unterminated substitution (like `<-`) was wrong.

But, what does that mean for my testing? Well, firstly there’s a unit test for the GSTrans module, in the Python code. This unit testing code actually did have a test of an unterminated string, which was being tested incorrectly. So I fixed that one, and I added a dedicated test for the unterminated escape case:

    class Test80Unterminated(GSTransTestCase):
        def test_01_hyphen(self):
            self.assertEqual(self.gs.translate('<-'),
                             (3, '<-'))

This essentially means: if you translate `<-`, you should get back `<-` unchanged.

And we have an integration test on the RISC OS side. It’s actually using this code as its harness: https://github.com/gerph/riscos-tests/blob/master/testcode/s/gstrans

The new section of tests that I’ve added to my GSTrans test suite is:

    Group: GSTrans: Unterminated variable
    File: bin/gstrans

    Test: Unterminated hyphen (ExtdVars)
    Command: $TOOL $FILE 0 "hello <-"
    Expect: expect/gstrans/0_unterminated_hyphen

    Test: Unterminated hyphen space close
    Command: $TOOL $FILE 0 "hello <- >"
    Expect: expect/gstrans/0_unterminated_hyphen_space_close

All the integration tests use RISC OS binaries or commands, executed through Pyromaniac ($TOOL is the Pyromaniac binary), and the gstrans tool (for which the source is at that link) can be supplied a number of parameters to allow the tests to be varied without writing excessive amounts of assembler. In this case, the parameters to the tool are first a set of flags which indicate what bits to set (‘0’ here means don’t set any bits), followed by the string we want to GSTrans. The program (just a plain RISC OS utility) is executed, and the output captured and compared to the expectation file. If the files don’t match, you have a bug, and the test will report a failure. Yay.

But how does this test things? Well, if the output changes because the utility did something different, then we know that something’s gone wrong. 
So the utility outputs its results so that we can see what the parameter results were. This form of expectation testing is a lot simpler than explicitly writing “I want the SWI to return …”. It means writing less assembler – you just make the assembler write whatever it wants to check is invariant to the output. There are other ways to do this, but this mechanism is a nice trade-off. At times I do put checks in the code to see that the values are correct, which is particularly important when it is one value changing relative to another. But those checks are either done by calling OS_GenerateError to report the problem (then the test will fail, because we expect an RC of 0 from all runs), or by printing out something like “ERROR: foo was not correct”, which will then fail the expectation check.

What do we get in the expectation files for the above? Well, they’re not that exciting really.

0_unterminated_hyphen:

    Flags: &00000000
    String: hello <-
    Calling OS_GSTrans
    Returned
    Consumed: 9
    Wrote: 8
    Terminator: <0> 0
    Output: >>>hello <-<<<

0_unterminated_hyphen_space_close:

    Flags: &00000000
    String: hello <- >
    Calling OS_GSTrans
    Returned
    Consumed: 11
    Wrote: 10
    Terminator: <0> 0
    Output: >>>hello <- ><<<

With this fix in place, I now run Thomas’ examples and we now see:

    charles@laputa ~/projects/RO/pyromaniac (master)> pyrodev --common --load-module extdvars,ffa --command 'dir extdvarsegs' --command obey repeat
    First Line
    line content Hinz <-
    All lines
    line content Hinz <-
    line content und <-
    line content Kunz <-
    line content Meier <-
    line content Müller <-
    line content Schulze <-
    Line 2 and two more
    line content und <-
    line content Kunz <-
    line content Meier <-
    Last Line
    line content Schulze <-

Which, if I’ve read it right, is what we expect. So, thank you Thomas – that’s a cute command, and it has fixed another bug in Pyromaniac. The more you test, the more you find. 
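The expectation mechanism described above could be sketched in a few lines of Python (a hypothetical harness, not the actual riscos-tests runner): run the tool, capture its output, and compare it against a stored expectation file, with a non-zero return code also counting as a failure.

```python
import subprocess

def run_expectation_test(command, expect_file):
    """Run a command; fail if its output differs from the expectation file."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        # A non-zero RC fails the test, so the code under test can
        # signal problems just by generating an error.
        return False, "non-zero return code %d" % result.returncode
    with open(expect_file) as f:
        expected = f.read()
    if result.stdout != expected:
        # Any change in output - even an incidental one - is flagged,
        # which is the point: the output itself is the invariant.
        return False, "output mismatch"
    return True, "ok"
```

A mismatch doesn’t tell you what is wrong, only that something changed – which is the trade-off described above: the tests are cheap to write, and any behavioural drift shows up immediately.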
And the more tests you have, the greater the confidence that you’re doing things right and that changing something doesn’t have Unforeseen Consequences. Of course your tests have to be correct, but that’s a different problem… Though honestly, with 1235 tests in Pyromaniac now 1 … that’s not a great number for an Operating System to have, but it’s better than 0. And if every time a bug is fixed a test is added which would catch it, the confidence goes up.

1 It’s also just trickled over 65% code coverage again in the last few days, after lots of work to improve that coverage. Code coverage figures can be artificial, but they tell you that you’re at least exercising the code, even if they don’t tell you that it’s right. |
Charles Ferguson (8243) 427 posts |
That looks like something that can’t be changed though. It’s so pervasive that someone will have relied on it somewhere in their Obey file, and changing it will break things. It’s a potential injection bug, but one that you may have to live with unless you want to break users’ software. Sometimes that’s the right choice, though – at nearly midnight, I’m not sure which way I’d go on that one.
Nah; it’s just my GSTrans is bust. Was bust :-) |
Steve Pampling (1551) 8172 posts |
I wonder if your fix matches the ROOL one. |
Charles Ferguson (8243) 427 posts |
No, the issues are unrelated; in my case, I decided that if I terminated with a substitution still present, that was an error. The diff in `riscos/gstrans.py`, as we exit the iteration through the string that was supplied, is:

      # If we terminated normally then:
      # * inquotes will be False (if True, a '"' has been missed off the end)
      # * insubstitution will be None (if a string, the trailing '>' has been missed off)
      # * inescape will be False (if True, a '|' was used at the end of the line)
      # * topbitset will be False (if True, a '|!' was used but no complete character followed it)
    + # If the substitution was not completed, it just gets appended.
    + if insubstitution is not None:
    +     acc.append('<')
    +     acc.extend(list(insubstitution))
    +
      if inquotes:
          raise GSTransBadQuotesError(self.ro, "Bad quotes during GSTrans operation")
    - if insubstitution is not None:
    -     raise GSTransUnterminatedSubstitutionError(self.ro, "Unterminated substitution during GSTrans operation")
      if inescape:
          raise GSTransBadEscapeError(self.ro, "Unterminated escape during GSTrans operation")
      if topbitset:
          raise GSTransBadEscapeError(self.ro, "Unterminated top-bit escape during GSTrans operation")

It’s possible that some of those other 3 errors are also incorrectly raised. I didn’t look into them, although I should really. Yes, I should check them … I’ve left a bug on my issues board to look at them. At the very least they should have error numbers specified, but I’ve been lazy there – if you don’t specify an error number, a RISCOSError exception gets the number &7FFFFF, which is pretty obviously garbage, but shouldn’t cause problems in most cases. But those errors should really have codes that people can detect.

Aside: The selection of &7FFFFF is not completely arbitrary. Error numbers are allocated, in general, but there are a number of regions that people can rely on, and so you need to be able to report things in sensible ways.
So &7FFFFF sits just below the reserved range, but in an obviously unlikely value. Ideally, anyone checking for error codes using the above rules will still see it as a regular interface error, but it won’t match any specific special cases. Which is fine for hackiness, but if someone is properly checking the error code from GSTrans they won’t get the error codes they expect. So I should fix that. Sorry, way more of an aside than it needed to be. |
Rick Murray (539) 13850 posts |
Interesting discussion: https://www.riscosopen.org/forum/forums/11/topics/11133

I think the extra checks are currently disabled because of the difference between what is supposed to be used as an error number, and what actually is… ;-)
I use |