Packman/PlingStore Obscurity
GavinWraith (26) 1563 posts |
Am I the only one to feel frustrated by the way Packman and PlingStore display what can be downloaded? It may be just my obtuseness but it is not obvious on which versions of RISC OS or on which platforms it is claimed that the software will run, or has been successfully tested. It would be good to have more information about which resources are required. RISC OS platforms are becoming more diverse, so legacy software may need diverse therapies to be usable. This could entail a lot of work for maintainers. |
Rick Murray (539) 13850 posts |
I would imagine there are a good few cases where there is no comprehensive answer. For example, I develop/test on an ARMv7 Pi2. The code should be good for ARMv8, but there’s no guarantee as I don’t have such a machine to test upon.
I thought PackMan was supposed to find and install required dependencies? As for !Store, I think any application there should come with anything additional that is not part of the OS ROM or the standard Harddisc4 installation (in other words, assume ChangeFSI exists, don’t assume your favourite Coconiser player does…).

It would certainly make sense for software to give a better indication of what sort of machines it is targeted at (26/32, 32 only, 26 only, or 32 with emulation [1]), and whether there are any specific requirements (needs broadband [2], needs at least half a GB of memory [3]…).

For my part, what would be nice to see in !Store is the ability to show the date when something was last updated, and to list in descending date order so I can take a quick look at what’s new. And please, somebody, delete those four phantom Drag ’N Drop entries at the top of the list!

Oh, and Gavin – Head_Tail – version zero-point-zero-zero? Really? :-)

[1] Impression-X
[2] Manga
[3] Otter |
GavinWraith (26) 1563 posts |
Touché 😰 |
Theo Markettos (89) 919 posts |
I can’t comment on PlingStore, but PackMan packages don’t have a field that says ‘works on hardware X’. It wouldn’t be hard to add, but the main issue is what to set it to. A package might be compiled for ARMv3, v4, v5, v6, v7 or v8, and some subset of those might work on particular hardware (26-bit, so not for RO5; uses halfword instructions, so not for a RiscPC; uses unaligned loads, so not for >=v7, or v6 with exceptions enabled; uses NEON, so not for <=v6; uses v8 crypto instructions; requires at least RO 5.2x). Because of the complexity of these dependencies, it isn’t sufficient to say the RISC OS equivalent of ‘requires Windows 7 or newer’ and leave it at that. I think it would need some consensus on how to codify these decisions before any implementation can be written. The alternative is exhaustive testing (‘works on RPi 1, 2; doesn’t work on Titanium’), which nobody has time for.

The riscos.info/packages.riscosopen.org package lists attempted an ARMv5/ARMv7 split because, at the time of the RPi release, a lot of the packages weren’t v7-safe. Hence there’s a script that blacklists packages from the v7 list if they’re known to be incompatible. This is a mess, and it would be much better to throw it away and have the decisions made client-side (the way Google Play won’t show you apps that don’t work on your device), but it would first need codifying what the distinctions are (and somebody to implement it, as well as updating the package files). |
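To make the ‘what would the field even contain?’ question concrete, here is a purely illustrative control record. The Environment field name and all of its values are hypothetical (Alan floats an Environment field later in this thread); settling that vocabulary is exactly the consensus problem described above:

```
Package: Example
Version: 1.23-1
Depends: SharedLibs (>=1.10)
Environment: 32bit, no-unaligned-loads, vfpv2, ro5.22+
Description: Hypothetical entry showing a machine-readable requirements field
```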
Jeffrey Lee (213) 6048 posts |
There’s one simple thing you can do to the package list on riscos.info which I think will be a big help: SHOW THE DATE THE PACKAGE WAS LAST BUILT!!! E.g. for the SWP issues with ARMv8, there’s currently no way for a user to know whether a package has been rebuilt with the fix or not. If the build date was displayed, a user could at least look at the date and go “oh, this was last built 5 years ago so chances are it’s still buggered” |
Rick Murray (539) 13850 posts |
As I pointed out in my earlier post, Zap ran on the BxM. It failed on the Pi2. Both are ARMv7. So simply saying “works on ARMv7” is clearly not enough…
Yup, and also allow sorting by date so it’s dead easy to see “what’s new”. This matters more for !Store (which doesn’t handle updating existing things) than for PackMan (which does). |
Alan Buckley (167) 233 posts |
Yes it does
There are no dates on the packages, but you can see what’s got an update available, and which packages are new since the last package list update, by using the drop-downs in the toolbar of PackMan. |
Alan Buckley (167) 233 posts |
I’m willing to make the modifications to PackMan/LibPkg to filter things client-side; I’m just not sure what the contents of the new field should be.

I’ve been thinking about how to have the same package repeated for different architectures, and this should be considered as well. E.g. we could have a package built for the pre-VFP machines and packages built for VFP machines, and we would want to select the correct one depending on the machine. My original thinking was along these lines: mark the packages with the types they support, and give the types some sort of priority – i.e. on a modern machine you would pick a VFP version of the package, falling back to the ARM version if there was no VFP one.

It’s worth thinking about what happens if another code-breaking change comes along. Do all the packages need to be updated? If we can agree a set of sensible rules I can implement them. As with any new system (especially on RISC OS) we have to consider how to deal with the existing packages which don’t have the new field. For the programs on the autobuilder website, as I see it currently (and this is likely to change after discussion) we should:- |
Jeffrey Lee (213) 6048 posts |
There are a few different aspects to the problem.
Breaking changes can happen at any time, and it’s not really feasible to have packages describe themselves in enough detail for the package manager to be able to automatically detect whether a breaking change has affected a package or not. So instead, we’re going to have to rely on the package maintainers doing their jobs and updating their packages to say whether there are any compatibility issues. Essentially, I’m proposing the following:
Potentially the rules which are built into the package manager (or the package repository) could also contain the ability for the rule to examine the package. E.g. breaking changes are only likely to affect programs, so packages that don’t contain executable files shouldn’t be affected by them. Or simple BASIC programs could be analysed to see if they call any SWIs which are relevant to the breaking change. Things get more complex when you consider that breaking changes can be the result of other packages, not just changes to the OS.
Meandering on to how to handle alternate packages (for processor/OS version optimisations): I haven’t really looked at PackMan in any detail before. Am I correct in thinking that there’s no way for a package to list dependencies which aren’t PackMan packages? If so, that’s a pretty big oversight. There should at least be the ability for packages to depend on OS version numbers (or maybe just the OS branch – Acorn/NCOS/ROL/ROOL) and module version numbers (with PackMan checking the usual places to see if the module is present – RAM, ROM, !System).

Once you’ve got that sorted out, you can extend the system to cope with more types of dependency, like ARM architecture versions or processor features. RISC OS 5 now has a comprehensive API for querying which instruction set features are supported, and that API is also available via CallASWI for older OS versions. So programs which depend upon certain CPU instructions for optimisations can list those in their control file, and PackMan can easily check for them.

The next step will be working out what to do if you find there are multiple variants of the package available which are capable of running on the host machine (i.e. optimised and un-optimised versions). There will probably have to be some kind of weighting system for each of the optimisations. Ultimately only the package maintainer will know how much difference the different optimisations make to the performance of the package, so it might be best to leave it in their hands: e.g. each package could have a “performance score” field, and out of all the applicable packages the one with the highest score would be used. |
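A minimal BBC BASIC sketch of that last suggestion – the arrays and names are all invented, and applicable%() stands in for whatever environment checks each variant passed:

```
REM Pick the variant with the highest maintainer-supplied "performance
REM score" out of those whose environment checks passed (sketch only).
best% = -1
bestscore% = -&7FFFFFFF
FOR i% = 0 TO nvariants% - 1
  IF applicable%(i%) AND score%(i%) > bestscore% THEN best% = i% : bestscore% = score%(i%)
NEXT
IF best% = -1 THEN PRINT "No installable variant" ELSE PRINT "Install "; variant$(best%)
```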
Theo Markettos (89) 919 posts |
The idea of a separate compatibility list alongside the package lists is an interesting one. However I think a problem is this:
Unlike Debian, from where a lot of the concepts are borrowed, in practice there aren’t separate ‘upstream’ and local ‘maintainers’ – there’s usually only one person who is the RISC OS developer for that software. Maybe ‘upstream’ is source code from another platform, but (barring a few packages whose build I scripted) today nobody is taking RISC OS binaries built by somebody else and packaging them.

The problem is that a maintained package likely has someone who can fix the incompatibility – given that a lot are fixed by recompiling. So the requirement is upside down: many maintained packages don’t need compatibility lists because they have someone who can fix the problem, while unmaintained packages need compatibility lists but there’s nobody to make them, because they’re unmaintained. That raises the question of whether it’s possible to crowdsource compatibility hints – it could work, but would depend on infrastructure that doesn’t exist.

Using it as a more general hint towards which version to pick could be interesting, though – along the lines of ‘multiarch’ support (where you can mix eg 32-bit Intel and 64-bit Intel binaries on the same system). It would probably need the ability to have multiple instances of the same package in the repos (same version number, different arch tag(s), different hash).

I don’t think RISC OS version or module version were considered (RiscPkg originated in the mask ROM era), but an issue is that PackMan tries to make updates atomic, for consistency (ie if you need dependencies A, B and C then either all of them are installed or we roll back to none installed). The OS version and modules are mostly out of its control, so you can get into a state where things break because dependencies change that PackMan can’t see. So generally its scope has been limited to things it can control. |
Matthew Hambley (3084) 17 posts |
This may be a job for the hot technology of mid-2000s websites: tags. Maybe all the package needs is a field which takes a space-separated list of string identifiers. The package manager can then have a set of tags which represent the platform (and the preferences of the user) on which it is running. The intersection of these two sets tells you which packages are a good bet. (Nothing is guaranteed, after all.)

The advantage of this plan is that it limits code changes to a very well-defined specification. The tricky job of agreeing on a useful set of tags is hived off into a separate task, which can be achieved using all the appropriate tools like wikis and discussion fora. |
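A sketch of that intersection test in BBC BASIC – the tag values are invented, and the machine’s set would really be derived from the platform plus user preferences as Matthew suggests:

```
REM Space-separated tag sets (values invented).  machine$ is what this
REM machine offers; package$ is what the package says it needs.
machine$ = " armv7 vfpv2 ro5 "
package$ = " armv7 vfpv2 "
ok% = TRUE
rest$ = package$
WHILE LEN(rest$) > 1
  rest$ = MID$(rest$, 2)
  tag$ = LEFT$(rest$, INSTR(rest$, " ") - 1)
  rest$ = MID$(rest$, INSTR(rest$, " "))
  REM Every tag the package asks for must be in the machine's set.
  IF INSTR(machine$, " " + tag$ + " ") = 0 THEN ok% = FALSE
ENDWHILE
IF ok% THEN PRINT "Good bet" ELSE PRINT "Not suitable"
```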
Theo Markettos (89) 919 posts |
Yes, tags are an answer to the ‘how’ question, even if the exact implementation details are a bit vague (what the set of tags should be, and what logic should be used to decide what is and isn’t appropriate on a given system).

However, they aren’t an answer to the ‘who’ question: who is going to do the tagging, given that the original question was to do with packages which don’t work on a modern system because nobody has recompiled them – and if somebody is maintaining them, they can simply fix the incompatibility?

While in principle crowdsourcing it is feasible, it also opens up a can of worms in terms of integrity: it only takes one person to do malicious things and potentially the whole system breaks down. Designing the system to mitigate that is somewhat more complicated. |
Alan Buckley (167) 233 posts |
I’ve now got a reasonable idea of how to implement this – please let me know if I’m way off the mark.

Updating packages to use the new field and implementing this will give a relatively quick improvement in the quality of the lists from PackMan. I’m also thinking of adding separate Date, Homepage and Tag properties, as they would be relatively straightforward. Not for the next release, but possibilities for the future may be an OSDepends field, to check for modules that are only shipped as part of the OS image, and consideration of the OS version (4, 5 or 6).

I’m assuming I can just use the VFPSupport module and other RISC OS 5 OS_PlatformFeatures calls to check for VFP and SWP, with an error on the call indicating they are not present. I’m also happy to add other Environment fields if the checks can be provided as well, but I’d like to keep them to a minimum and only add values where there is an existing (or about-to-be-released) package that needs them.

I’m trying to keep the scope of the changes to PackMan/LibPkg down to a minimum as I don’t get a lot of RISC OS development time, but I’m hoping that the above would be possible in a matter of months, not years! |
Jeffrey Lee (213) 6048 posts |
Assuming this is OpenGL stuff – I’d assume it’s more the presence of the VCHIQ module that the programs are interested in, rather than BCMVideo. (And specifically it would be the dispmanx service within VCHIQ – but for now I suspect checking for VCHIQ should be sufficient) OS_Module 18 or *RMEnsure should do the trick.
VFPSupport_Features 0 is the main SWI you’ll want to use to detect VFP support. Assuming the baseline VFPv2 support is all you care about, I think you just need to check that each nibble of the MVFR0 register is non-zero, apart from the nibble at bits 24-27, which can be zero.

For SWP, it’s a toss-up between OS_PlatformFeatures 0 and OS_PlatformFeatures 34. They can both tell you the information, but they also both have a couple of caveats to be aware of when it comes to OS versions that don’t have the latest OS_PlatformFeatures implementation available. For both of them, RISC OS 5.24 is the first stable OS release able to indicate that SWP isn’t supported (since 5.24 is the first stable release to run on the Pi 3). But if you care about development OS versions then the flag in OS_PlatformFeatures 0 came first. |
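Pulling those suggestions together as a rough BBC BASIC sketch – the register assignments for VFPSupport_Features 0 are my reading of the documentation rather than something stated in this thread, so verify them before relying on this:

```
REM VCHIQ presence: OS_Module 18 looks a module up by name; with the
REM X form, V set (bit 0 of the returned flags) means "not found".
SYS "XOS_Module", 18, "VCHIQ" TO ;flags%
vchiq% = ((flags% AND 1) = 0)

REM Baseline VFPv2: call VFPSupport_Features 0 (MVFR0 assumed in R1) and
REM require every nibble of MVFR0 bar bits 24-27 to be non-zero.
REM If the SWI itself errors, there is no VFP support at all.
SYS "XVFPSupport_Features", 0 TO ,mvfr0% ;flags%
vfpv2% = ((flags% AND 1) = 0)
IF vfpv2% THEN
  FOR shift% = 0 TO 28 STEP 4
    IF shift% <> 24 AND ((mvfr0% >> shift%) AND &F) = 0 THEN vfpv2% = FALSE
  NEXT
ENDIF
PRINT "VCHIQ: "; vchiq%; "  VFPv2: "; vfpv2%
REM SWP detection is left to OS_PlatformFeatures 0 or 34 as described
REM above; the exact flag bit isn't reproduced here, so consult the docs.
```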
Colin Ferris (399) 1818 posts |
Don’t forget – some people might want to try programs on a machine other than the one they were downloaded on (or use some form of emulator). |
Matthew Hambley (3084) 17 posts |
Part of the point of such systems is that they move the decision of what tags mean out of the database (packages, in this case) and into the application. There is no meaning inherent to the tag values, only that which is agreed by external actors. Of course this doesn’t remove the need to have that agreement, it just decouples it from implementing the database. The arguments still have to be had, but they can happen in parallel with the development of the database.
Is crowdsourcing really what was being suggested? For the packages to exist there must be a packager, and my first thought would be that it is their responsibility to mark up their packages with appropriate details.

There is the issue of orphaned packages which no longer have a maintainer. However, I think the logic for dealing with those shakes out of the suggested implementation: if a tag is not included, it must be assumed that the capability it represents is not supported. Therefore, by default, the package is considered unsuitable. The package manager can provide an option to show unsuitable packages (i.e. packages without suitable tags) and even install them, on the understanding that they probably won’t work.

The logic can be as elaborate as the package manager developer cares to make it. For instance, you could say that absence of a particular tag marks the package as having unknown suitability, but presence of a tag marks it as incompatible. So if the package has no tags (because no one has added any), or it doesn’t have the armv6 tag, then it might work on your ARMv6 machine; on the other hand, if it does have the armv8 tag then it won’t. |
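A tiny sketch of that last policy, reduced to a single tag per package for brevity – the tag values and function name are invented, and real packages would carry a list:

```
machinetags$ = " armv5 armv6 "
PRINT FNsuitable("", machinetags$)      : REM unknown - nobody tagged it
PRINT FNsuitable("armv6", machinetags$) : REM might work - tag satisfied
PRINT FNsuitable("armv8", machinetags$) : REM won't work on this machine
END

REM One-tag sketch of the policy above: no tag means unknown suitability;
REM a tag this machine can't satisfy means incompatible.
DEF FNsuitable(tag$, machinetags$)
IF tag$ = "" THEN ="unknown"
IF INSTR(machinetags$, " " + tag$ + " ") = 0 THEN ="incompatible"
="might work"
```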
Rick Murray (539) 13850 posts |
There lies the problem: not every developer has access to every machine, so in this respect crowdsourced compatibility would be useful, as others can feed back what does and doesn’t work.
Not really. Use heuristics: if ten people say it works and one says it doesn’t, it probably does work… |
nemo (145) 2554 posts |
No, it’s a tristate: Does work; Doesn’t work; Don’t know. |
Theo Markettos (89) 919 posts |
If this is voting, then is this equivalent to packages with an average score of fewer than 3 stars being hidden from downloaders? Bearing in mind there is no strong identity in the packaging system (ie user accounts of some form for downloaders), that means someone who votes 10 times beats a crowd of 5 people.

Also, packages have dependencies. If I downvote some critical library enough times, does that mean I can prevent anyone downloading anything that depends on it? Can I downvote my competitor’s app? This all gets rather messy rather quickly.

Instead of building a complicated system to manage the fact that humans do bad things, the alternative is simply to have a manual report procedure and someone vetting reports as they come in. But it needs somebody to do that. (This is all somewhat moot, since there’s nobody to build such a system even if we had a volunteer to run it.) |
Steffen Huber (91) 1953 posts |
Ten real users? I.e. half of the active RISC OS community? |
Jon Abbott (1421) 2651 posts |
While you’re all considering how to maintain packages, perhaps you’d like to consider my dilemma, which I’ve emailed Alan about several times in the past few years. I’d like to have a way to script package building, as I have over 300 packages that need building/maintaining, which is simply impractical to do manually when they’re updated quite regularly.

Essentially every package is in its own directory, but the metadata needs to come from a data table elsewhere – currently that’s in a BASIC library that I export from Excel, but it could be converted to any format. Ideally I’d like the thing to simply rebuild packages on a daily basis and update the distribution point if they’ve changed, along with any distribution point metadata, such as compatibility, version, release date etc. |
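No such tool exists as far as I know, but the data-table half of that job is easy to sketch in BBC BASIC – the file names and CSV column layout below are invented, and each package is assumed to sit in its own directory with a RiscPkg subdirectory already created:

```
REM Sketch: turn "name,version,description" CSV rows (exported from the
REM spreadsheet) into minimal RiscPkg control files, one per package.
in% = OPENIN("PkgTable/csv")
WHILE NOT EOF#in%
  row$ = GET$#in%
  c1% = INSTR(row$, ",")
  c2% = INSTR(row$, ",", c1% + 1)
  pkg$  = LEFT$(row$, c1% - 1)
  ver$  = MID$(row$, c1% + 1, c2% - c1% - 1)
  desc$ = MID$(row$, c2% + 1)
  REM Assumes the directory <pkg$>.RiscPkg already exists.
  out% = OPENOUT(pkg$ + ".RiscPkg.Control")
  BPUT#out%, "Package: " + pkg$
  BPUT#out%, "Version: " + ver$
  BPUT#out%, "Description: " + desc$
  CLOSE#out%
ENDWHILE
CLOSE#in%
```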
Jeffrey Lee (213) 6048 posts |
I guess the tricky part with that is that (for most people) it’ll need to be a script which can run on RISC OS, since zipping things on RISC OS is the easiest way of making sure the filetypes are correct. ~10 years ago I did produce a similar tool to help update my website – it would check a list of projects for changes, package them up, and upload them via FTP. It shouldn’t be too hard to resurrect that in a form which works well with packman (including things like auto-incrementing the ‘package’ part of the version number). And it’s the kind of thing I’d want for when I eventually get round to switching my website over to using packman for all the RISC OS stuff. |
Vince M Hudd (116) 534 posts |
Are the packages not just zip files by another name? Zip files can easily be built from a script – I do this via an obey file for my software; when I compile a new version that is ready for upload, I run the relevant obey file, which handles all the source code backups, builds the zips, and copies the results into the website directory, ready for upload.

Is there something peculiar about packages that means you can’t do something similar – though possibly from something a little more capable than an obey file, to handle the metadata etc.? (I don’t distribute my stuff via packaging, so I’ve no idea.) |
Jeffrey Lee (213) 6048 posts |
Correct. However, for the system to be useful you’d want it to be capable of extracting the upstream-version from somewhere, or, if it’s a re-upload of the same upstream-version, increasing the package-version (to use RiscPkg policy manual terminology). And if the packages have dependencies, the dependency version numbers in the control file may need updating as well (which would also require the packages to be built in the correct order).

Looking at the tool I wrote, it’s little more than a collection of obey files which run in a task window, and some extra BASIC scripts to handle the more complex aspects (getting timestamps for files/folders, uploading via !FTPc’s BASIC library). The main source of complexity is that all my projects were a mess (“let’s put the source code within the app folder!”); if I was to do it now I’d restructure things so that every project has a makefile which provides a set of well-defined build rules/targets. |
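The version auto-increment mentioned above is pure string handling, so here is a hedged BBC BASIC sketch of it (no real PackMan or LibPkg API is involved; the function name is invented):

```
PRINT FNbump("1.23-4") : REM -> 1.23-5 (same upstream, new package-version)
PRINT FNbump("2.00")   : REM -> 2.00-1 (no package part yet)
END

REM Increment the package part of an "upstream-package" version string,
REM splitting on the last hyphen since upstream versions may contain one.
DEF FNbump(ver$)
LOCAL h%, p%
h% = 0
p% = INSTR(ver$, "-")
WHILE p% > 0
  h% = p%
  p% = INSTR(ver$, "-", p% + 1)
ENDWHILE
IF h% = 0 THEN =ver$ + "-1"
=LEFT$(ver$, h%) + STR$(VAL(MID$(ver$, h% + 1)) + 1)
```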