Secure PackMan/RiscPkg
Jeffrey Lee (213) 6048 posts |
The comments in the UpdCaCert thread about how things would be better if the CertData file was distributed via PackMan reminded me that currently PackMan/RiscPkg doesn’t provide much protection against bad actors publishing compromised packages. All the trust is placed in the websites that host the packages and index files; if one of those sites gets compromised (or you get caught in a MITM attack), there’s nothing to stop the system from installing a bad package on the user’s machine. There’s also no defined method of dealing with package naming collisions – if two indexes have two packages with the same name, which one gets used? As it stands, PackMan would be a terrible way of distributing updates to security-focused components like the CA certificates. I don’t know what the current best practices are for building a secure software update/delivery mechanism, but I’d imagine a good place to start would be something like the following:
Presumably looking at how Linux, BSD, etc. handle things will provide some good insight into the do’s and don’ts for building open & secure software update/delivery systems. |
Paolo Fabio Zaino (28) 1855 posts |
Jeffrey, first of all THANKS for pointing this out, it has been worrying me for quite a while, after seeing how easy it is to do a MITM with the PackMan repository file (it points at the place where a package is stored, in plain text, and can be replaced by a MITM with a single sed instruction…)
Yes, what Dave H. has done is great, but I think it should be part of RISC OS (just my personal opinion) and not 3rd party software. However, on Linux what distributes the CA certificates is still the RPMs/Debs etc., so I don’t necessarily see PackMan as being unable to do this. But again it’s a question of perception: one could argue that if the upgrade goes bad, then that PackMan instance is going to be hard to fix.
Most Linux distros use HTTPS, signed packages and checksums to validate and verify the identity, the integrity and the provenance of a package. It’s not perfect, but it definitely raises the bar a lot.
Yes, for Apple and Microsoft Windows based systems, those companies release such certificates and keys themselves. For the open source world it is mostly GPG based. I think GPG would suit RISC OS.
I am not sure I understand what you mean here, so just to be clear: a package validation key (a public key used to verify package signatures) could be provided via the website using HTTPS as a transport; this also makes it a bit more reliable for the user (the HTTPS certificate would be verified before one downloads the public key). PackMan should definitely implement a public key database as is done for RPM and Deb based package managers, so that the public GPG key is added once. Package signing, however, may cause some issues for ROOL when they pull packages from 3rd parties to be redistributed by ROOL. Each 3rd party would have their own public key.
There is a particularly useful tool, GNU GPG-Zip: https://www.gnupg.org/documentation/manuals/gnupg/gpg_002dzip.html which encrypts, signs and tars files. It could be used to sign a PackMan ZIP file, adding a signature that can be verified before opening the zip file. Some useful links:
As I have mentioned in other topics: 1) PackMan repositories should migrate ASAP to proper HTTPS, to mitigate an existing issue. Again, thanks for looking into this! :) |
Jeffrey Lee (213) 6048 posts |
Packages can contain a copy of the public key, and a signature generated from the private key. I was thinking that it might make the packages easier to manage, by keeping them self-contained. The certificate included in the package would still be part of a chain which links back to one of the trusted certificates stored in PackMan, so the entire chain can still be verified.

The traditional way of signing Android apps is that the developer signs the APK as part of the build process. The signed APK contains the public key associated with the developer. The Play Store doesn’t replace this signature/certificate, it just adds an extra one on top to prove that the app was distributed by the Play Store. However this blog post covers a number of flaws with the original system, and details of the new system that developers are now encouraged to use instead.

MSI files on Windows are also signed by the developer, but I’m not sure if the MSI contains the public key/certificate chain or if it’s distributed separately.

Looking at the RPM docs, I’m wondering whether the requirement that mirror servers have to re-sign the packages is because the entire system is built around the idea that 99% of packages are compiled from source code by the organisation that’s running the package repository, unlike Windows/Android/iOS/etc. where the different stores/repositories are distributing packages that were prepared by the software author. |
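The “chain links back to a trusted certificate” check can be sketched separately from the cryptography. A toy model in Python – the per-link signature verification is deliberately elided, and all the names are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Cert:
    subject: str   # who this cert identifies
    issuer: str    # subject of the cert that signed it
    # a real certificate also carries a public key and a signature
    # over its own contents, checked at every step of the walk below

def chains_to_trusted(cert, certs_by_subject, trusted_subjects, max_depth=8):
    """Walk issuer links from the package's cert up towards a root that
    PackMan already trusts. Cryptographic checks on each link are elided."""
    for _ in range(max_depth):
        if cert.subject in trusted_subjects:
            return True
        if cert.issuer == cert.subject:
            return False   # self-signed root, but not in the trust store
        cert = certs_by_subject.get(cert.issuer)
        if cert is None:
            return False   # broken chain: issuer unknown
    return False           # chain too long
```

The point of the self-contained design is that the package ships the dev certificate (plus any intermediates), so PackMan itself only needs to hold the handful of trusted roots.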
Paolo Fabio Zaino (28) 1855 posts |
Oh OK, I understand now. So yes, in this case one repository may contain packages signed by different authors, and PackMan should still be able to correctly verify each package signature. But it would be a bit more complex to implement, I think.
This reminds me of another set of issues on RISC OS:
1) RISC OS does not support any form of protection for a private key, so storing it on a regular RISC OS filesystem to be used during application packaging may not be the best way to protect it.
2) I have thought of implementing an encrypted filesystem for RISC OS, just to store sensitive data, but obviously, until there is some support for ARM TrustZone or for talking to a secure enclave that would store the encrypted filesystem keys, it’s pretty much nonsense to create such a thing.
This has led me to work on the Fetch a URL module, to basically have a separate entity (for instance a Linux server on another RPi?) store the private key, and request it from RISC OS during a build process by asking the user to type in (for example) a username and password, or a magic number displayed on a web page by the server. In other words, asking for something that is not stored on RISC OS itself, using it to request the private key, and securely erasing both after the packaging is completed. It’s just an idea, open to thoughts.
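That fetch-the-key-from-elsewhere flow could look something like this sketch, where the URL, the one-time-code scheme and both function names are hypothetical:

```python
import urllib.request

def fetch_signing_key(url, code, fetcher=None):
    """Fetch the private signing key from a trusted helper machine
    (e.g. a Linux box on the LAN). 'code' is a one-time value displayed
    by that machine, so nothing reusable is stored on RISC OS itself."""
    if fetcher is None:
        fetcher = lambda u: urllib.request.urlopen(u).read()
    key = fetcher(url + "?code=" + code)
    return bytearray(key)   # mutable, so it can be zeroed after use

def erase_key(key):
    """Best-effort wipe of the key material once packaging is done."""
    for i in range(len(key)):
        key[i] = 0
```

A real implementation would of course use HTTPS for the transfer and discard the one-time code server-side after a single use.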
The old “onion-like” signing is something that we could do with tools like GPG-Zip, but yes, it’s not that secure. It also does not help with issues like developers losing their private keys, and/or having defined one that is not strong enough, and/or allowing it to be updated when and if new security requirements come into play (and they will). For a modern way to sign apps (like the new Android process you’ve mentioned), is ROOL willing to take the responsibility for storing the developers’ keys? And to put in place all the necessary effort to secure their key repository and constantly check it against attacks? I mean, one of the reasons to trust the new Google process is the fact that Google invests a lot of money and resources in keeping their servers and repositories safe, definitely more than an average Joe can do, but ROOL’s position is “slightly” different ;)
Yes they are, and the certificate is applied using a tool called signtool.exe (which was part of the Platform SDK; now it is in the Windows 10 SDK or MS Visual Studio). Given that it is the Microsoft world, the certificate can be obtained from multiple sources and has a temporal validity (I forget how long it lasts, but surely it lasts for some years). The way the verification works is by using WinVerifyTrust: https://docs.microsoft.com/en-us/windows/win32/api/wintrust/nf-wintrust-winverifytrust (while on Win32, IIRC, it should use msi.h’s MsiVerifyPackageA function: https://docs.microsoft.com/en-us/windows/win32/api/msi/nf-msi-msiverifypackagea). For the general discussion, Windows verifies a package using the Microsoft CryptoAPI and the MsiDigitalCertificate table. Basically each certificate is stored in a binary format and associated with a primary key. Each MSI can contain multiple certificates, and they can be checked and/or verified without trying to install the package; AFAIR the libraries mentioned above are also used by Windows Explorer (for example), so one can check a file’s certificate from the filer itself.
I am not sure I fully understand this. Anyone can create a (for example) CentOS mirror, no need to re-sign anything; here are the instructions: https://wiki.centos.org/HowTos/CreatePublicMirrors This is a big problem, because RPM doesn’t actually sign the repodata files (the repository metadata), so the YUM package manager is vulnerable to MITM attacks (as always, DNS poisoning to redirect a yum client to a different mirror, like it can be done for PackMan, unless we use well-configured HTTPS ;) ) OR (worse) by creating a mirror that will later get compromised. Just my 0.5c |
Steve Pampling (1551) 8155 posts |
Looks like a requirement for a small(ish) encrypted file store. Non-GPL, given the level of integration into the OS that is likely. Something encrypted with the Tiny Encryption Algorithm, or is that not strong enough? |
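TEA really is tiny – the whole cipher fits in a few lines. A Python transcription of the published algorithm (32 rounds, one 64-bit block, 128-bit key), though as the follow-up points out, the hard part is key storage rather than the cipher itself:

```python
DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF   # keep arithmetic in 32 bits, as the C original does

def tea_encrypt(v0, v1, key):
    """Encrypt one 64-bit block (two 32-bit halves) with a 4-word key."""
    k0, k1, k2, k3 = key
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + ((((v1 << 4) & MASK) + k0) ^ ((v1 + s) & MASK) ^ ((v1 >> 5) + k1))) & MASK
        v1 = (v1 + ((((v0 << 4) & MASK) + k2) ^ ((v0 + s) & MASK) ^ ((v0 >> 5) + k3))) & MASK
    return v0, v1

def tea_decrypt(v0, v1, key):
    """Exact inverse of tea_encrypt: run the rounds backwards."""
    k0, k1, k2, k3 = key
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - ((((v0 << 4) & MASK) + k2) ^ ((v0 + s) & MASK) ^ ((v0 >> 5) + k3))) & MASK
        v0 = (v0 - ((((v1 << 4) & MASK) + k0) ^ ((v1 + s) & MASK) ^ ((v1 >> 5) + k1))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1
```

(A real file store would want a mode of operation and something newer such as XTEA or XXTEA, since plain TEA has known related-key weaknesses.)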
Paolo Fabio Zaino (28) 1855 posts |
The issue is not the algorithm used to create an encrypted file system. Each encrypted file system uses at least 1 key (in the case of symmetric encryption), and the problem on RISC OS is “where” to store that key. Encrypting that key into another encryption vault presents the exact same problem: where do we store the key that decrypts the vault holding the previous secret key, and so on and so forth? RISC OS has no concept of security; everything is accessible to everyone.

Hence the idea to store the secret key for package signing on a different OS like Linux, and create a process that allows the recovery of such a key under “specific circumstances” (for example using a user/password, or typing a number displayed on a web page, or whatever). Another approach could be to use ARM TrustZone with a secure OS – we could use something like Genode for this – but again we need an API to talk to it, to send the usual “credentials” to retrieve the key stored in the secure vault. On the RISC OS on Linux project one could use the Linux side, etc. One could also say we do not care about the secret key and just store it anywhere on RISC OS; it’s not safe, but oh well. In general:
It’s not perfect, but it’s definitely better than how it is now, and as far as HTTPS is concerned, it works already; we only need to ensure that the CA certificates on RISC OS are up-to-date (although it’s also important to ensure they are not altered by some malicious software, but that is another story). To sign packages we probably do not want to be bound only to RISC OS; people also use GCCSDK to build, and they package from Linux. So just storing the secret key on a Linux system should suffice at the beginning for most developers. Those who do not want to use Linux could store it on a RISC OS system and make sure that SD card is used only to build applications and not for everything else (that should be safe enough at the beginning). Again, not ideal, but better than it is now. |
Jon Abbott (1421) 2641 posts |
Moving to HTTPS for distribution is certainly the first step; I’ve been providing an HTTPS source for the JASPP distribution site since day 1. The problem however is that it relies on the OS supporting HTTPS and having a maintained root certificate store, so until that is provided as part of the OS updates, it can’t be forced.

Signing packages, although I agree in principle, is problematic, as the packages then need to be maintained and updated when certs are compromised or expire. This would mean either dropping the majority of packaged software that isn’t actively maintained, or shifting the signing to the host server – which doesn’t really achieve the main purpose of trust between the author and the consumer.

A slight correction: MSIs do not have to be signed; I can’t recall one instance where I’ve modified an MSI from a supplier and it’s then become untrusted. Signing hasn’t been a requirement until very recently, with AppX and MSIX. Because of the trust issues with LOB packages distributed from “the cloud”, Microsoft came up with Device Guard signing, which is heavily reliant on Code Integrity (or AppLocker if you’re old school), which are integral trust elements in the OS.

Considering the cost of a code signing cert, we would need to consider implementing something similar to DG signing, or risk most developers simply not signing their packages due to the costs involved. When I last purchased a cert to sign a package a few months back, it was £300. DG signing is free. |
Steve Pampling (1551) 8155 posts |
I was thinking that the user would supply a password. Of course, then you rely on the user not sticking that on a post-it note. |
Steffen Huber (91) 1949 posts |
A post-it note is a lot more secure than storing it inside RISC OS. Or unencrypted on any medium that is connected to any computer. |
Andrew McCarthy (3688) 605 posts |
LMAO, ;) that depends, … :D |
Theo Markettos (89) 919 posts |
Yes, this is one of the things that keeps me awake at night (well, some way down that list :-) However, it was a design decision I made when I built the ROOL packaging site. It’s better to have a packaging system that’s being used for something, rather than a secure one that’s rejected by users and not adopted. Ever since RiscPkg days (~2005) there was considerable pushback from users who didn’t like having packaging interfere with the way they ran their machines, and the people who actually used the system were few and far between. And so any kind of complexity, in particular complexity that was unfriendly to typical small-scale RISC OS developers, risked killing things off before they got going. Thanks to Alan’s excellent work, PackMan has smoothed off a number of the sharp edges of the original RiscPkg design and made it more friendly, and it’s getting used by people, so I think we’re at the point where now is a good time to have this conversation. I think the main thing is to focus on the usability aspect, because if it’s not friendly to developers they’ll simply walk away.

One of my further design decisions was not to overly centralise package distribution, nor to rely on some server that needs to be maintained and updated. Package servers are just regular web servers, and anyone can serve their own package repo. That means there’s no central authority – if a random dev wants to ship a PackMan feed for their builds, they can do that – just output the right list file and stick it on their website. The packages.riscosopen.org/thirdparty setup is merely an aggregation on top of some private feeds, with a really simple way to fetch them from devs. It’s designed in such a way that it’s possible to exert some control over those third-party feeds – while it can’t prevent a bad actor pushing a package, it’s possible to block the bad actor later and push a replacement package to override the bad one.
This is why it only uses the devs’ pointer files to fetch packages, which it then serves itself (that also allows better caching behaviour, and avoids problems with devs’ websites being slow, going offline etc).

I think you essentially do need a public key infrastructure (PKI) that ties the package to the person who created it. That needs a signing process at the point of creation, which means it’s no longer ‘just a zipfile’ that anyone can construct with regular tools. Although perhaps a signing tool that generates/holds the private key and adds a signed manifest file to an existing zipfile wouldn’t be too onerous. That would also allow signing of existing unsigned packages.

Next up you need something to manage keys as part of the wider network. That brings us onto issues of identity. I’d suggest the easiest identifier here is an email address, given that email addresses are already incorporated into packages, everyone has one, and methods to assert ownership are well understood. You need something to manage the mapping from email addresses to public keys, including revocation. That does look a bit like a centralised server, which was something I was trying to avoid. Perhaps one way to go is a PGP keyserver, which already does most of this – although I’m not sure how well they check ownership of email addresses. Either running our own, or piggybacking on an existing public server.

Finally there’s the question of who to trust. A collection of packages is signed by a collection of devs. Which ones are ‘good’ and which aren’t? We don’t have a centralised authority like Debian which manages the gold-standard package list signed by a single key – it’s much more federated than that. It would be possible to have a list of ‘good’ devs, signed by some authority, and if you turn bad your identity is removed. But that still doesn’t really protect against a newly-bad dev pushing a bad package, or a bad update to an existing package.
And so I think that’s the hard problem: it’s fine to associate a package with a dev, and maybe complain if the dev/signing key behind a particular package changes over time (Android does this). But how do we prevent a new dev advertising a new package that does bad things, given that’s trivial under RISC OS (just drop something in !Boot and job done)? So it boils down to: what is the threat model, which problems are we trying to solve, and, by not solving others, do we leave a coach and horses to drive through the ones we do solve? |
Theo Markettos (89) 919 posts |
To follow up on a few points in other posts: I don’t think storing developer keys on RISC OS is a priority problem, because who is going to be hacking into machines to steal them? RISC OS has a defence to network attacks – it just crashes at the first sign of anything ;-). More seriously, I’d suggest a setup where it’s recommended that dev keys are kept on a USB stick that’s only plugged in for specific actions – such ‘cold’ storage is good enough for bitcoin and friends, so would be fine for much lower-value dev signing keys.

Moving to HTTPS is a good plan (riscos.info already offers that), but things get messy when you need to keep your clock and certificates up to date before you can do that. Maybe a tool like UpdCaCert is the right solution here, but then what cert are you using to fetch the new certificates? One answer would be to bake a ROOL signing cert into the ROM and have any update process check that the root cert file is signed with that. Then at least you can establish a root of trust, and one whose expiry can be more carefully managed than random sites on the internet. A multitude of chicken-and-egg problems that need to be thought through… |
Paolo Fabio Zaino (28) 1855 posts |
That is true, but luckily RISC OS supports NTP, and there is a process going on to ensure the CA certificates stay up-to-date.
Hmmm, are you sure about that? I have tested a few and RISC OS falls victim without crashing at all; here are a few for you to check out:
There is more: for example, the AIF format is extremely easy to infect, and worms can be built using runnable modules, using IRQs to do work in the background. I am thinking of designing some security layers that use vector interception to check what’s going on (an end-point agent), but RISC OS doesn’t guarantee that such a module can remain active (external code can still potentially remove a vector claim).
I agree on this |
Steve Pampling (1551) 8155 posts |
Well, none – unless you mean: what publicly verifiable certificate on the hosting server do you trust? |
Dave Higton (1515) 3497 posts |
I have to admit that I don’t understand the question. UpdCaCert can operate without a certificate (one chicken and egg problem is already solved, at least), and the address to get new certs from is baked into UpdCaCert’s !Run file. So the first cert can be downloaded, and subsequent certs can be downloaded, all from the same place. There is a question of whether that place can be compromised, of course. As for any other site: AIUI, any site’s own cert has to agree with the site’s name, and has to have a valid chain of trust. There should only be a problem if the user points at a bad site that has obtained a valid cert matching the bad site’s name. This is usually a problem in only two cases: a typosquatter site, or a bad site that convinces people it’s good. What have I failed to understand? |
Steve Pampling (1551) 8155 posts |
From my viewpoint, nothing. |
Theo Markettos (89) 919 posts |
That was a joke :-) There’s no security, the only mitigation is that RISC OS servers often aren’t very stable under load and have a tendency to crash. That’s not security, it’s fragility.
I understand you’re making a web request to a server to get the root certificate file. If you’re making that with HTTP, you aren’t checking any certificates; somebody could MITM that and feed you any root certificate you fancy. Likewise if you’re doing HTTPS and not checking certs, it’s the same problem.

If you’re making that with HTTPS, you need to already have a valid certificate chain for the site you’re fetching the certificates from. If that site changes its root CA to one you don’t already have, the chain of trust is broken. You can’t fetch a new certificate file because you don’t have the cert for the server, but because you can’t fetch the new file you can’t update the cert to allow you to fetch the new file. This is probably OK if you’re fetching new certs once a month, because you’ll be covered by overlaps. But if you go a few years between fetches you may find your root cert file is out of date and you can’t fetch a new one. Which is an issue if people pick up old systems they haven’t updated for a while. |
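The overlap point can be made concrete with a tiny sketch (all validity windows below are made up):

```python
from datetime import date

def can_fetch(held_roots, today):
    """held_roots: (not_before, not_after) validity windows for the root
    certs already on disc. An HTTPS fetch of a fresh cert bundle only
    succeeds if at least one held root is still valid today."""
    return any(nb <= today <= na for nb, na in held_roots)
```

A machine that fetches monthly always holds a root whose window overlaps the new bundle’s; a machine left in a cupboard for years may hold only expired roots, and is locked out of the very update that would fix it.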
Dave Higton (1515) 3497 posts |
UpdCaCert always uses HTTPS, but is open to MITM on the first fetch, assuming no existing certs in place (or they’re so out of date that they have to be deleted). Subsequent fetches will use HTTPS and check the certs. In the Help file (assuming anyone ever reads it), I point out that the target CertData file is updated several times a year, and suggest two possible ways to get them to be checked/updated once per week: automatically by TaskAlarm, or weekly alarm reminder to go do it manually. No system is perfect. I’m open to reasonable suggestions as to how to improve what I’ve put up. |
Steve Pampling (1551) 8155 posts |
That statement is true for any OS. I thought people were discussing what could be done to deal with the existing security issues of RO and to ameliorate or remediate. |
Theo Markettos (89) 919 posts |
As I suggested upthread, have the root CA file signed by a long lived certificate of our own. Ideally that would be built into the ROM so it’s ‘always there’, but if not a file on disc would suffice. That way we can manage expiry, rather than depending on the expiry of a website that doesn’t know anything about us. There’s a secondary question about how to run regular tasks – is there a way to do it when Alarm isn’t loaded? I don’t know how many people run Alarm all the time, but if it’s not the default then some won’t get updates. (not suggesting these are necessarily within your remit, but thinking of it in the context of a key piece of OS update infrastructure) |
Paolo Fabio Zaino (28) 1855 posts |
Great point. I am not aware of any other way, so I have started to code a task module that does the same as !Alarm, but works in the background and could eventually have a front-end like !Alarm. It will be available as part of the Desktop Modernisation project, unless someone has a) better ideas and b) already-working code. |
Chris Hughes (2123) 336 posts |
Paolo, what is wrong with using, say, !Organizer instead of !Alarm to do regular timed updates? Not sure we need your ‘module’ – it just seems to duplicate existing things that do the job already. |
Grahame Parish (436) 480 posts |
What if you don’t run Alarm and don’t have Organizer? I personally do run Organizer and keep it up to date – it runs my scheduled daily backups, but not everyone has it. |
Chris Hughes (2123) 336 posts |
Fair point, but I thought every RISC OS computer had at least the free version of Organizer anyway, as well as Alarm; they just need to use them. Neither uses lots of memory. |
Stuart Swales (8827) 1349 posts |
I think what Paolo is after is to have the lightweight scheduling backend running invisibly, and have a GUI front end (take your pick) interact with it to set/modify scheduled tasks only when the user wants to. I would go with this. |