Build environment and CVS work
W P Blatchley (147) 247 posts |
Thanks, Jeffrey. The detail there is fantastic. It’ll be a big help.
It says somewhere on the wiki that the scripts work with the (old) version of the RISC OS Perl port supplied with the tool chain, so I guess it should be possible to do everything from RISC OS if I really wanted to. But it’s probably easier to use Linux/Cygwin for the CVS interaction, as long as I can easily move the files between there and my RISC OS setup (e.g. via HostFS on RPCEmu). |
Andrew Hodgkinson (6) 465 posts |
This gets mentioned a lot, but I’m curious about what it would achieve, aside from personal preference or the philosophical differences between the systems. We’ve not yet found the performance of the actual SCS to be an issue – an argument sometimes cited when advocating things like p4 or git – and the technical side basically suits our way of working for numerous reasons, not least the extensive repository history, even if through grotty hacks like FilerAction$Skip :-)

AFAICT most other source control systems write control files and/or directories into the checked-out tree, so we don’t necessarily gain anything in terms of avoiding SCS data appearing in exported built components. SVN is as prolific as CVS, while Git appears to use a single .git directory at the top of a checked-out tree; I’m not familiar enough with it to know how many it would write if someone checked out only parts of a RISC OS source tree, so there’d clearly always be some concern about “.git” files being exported. Worse, “.git” is a silly name and some users might even be offended if they found “/git” showing up on the RISC OS side, unaware of what it actually meant!

We may be better off instead modifying those components which export more than a single file when built so that they don’t export the CVS control files, and making sure that ‘clean’ phases don’t damage them. Lots of work, though. At least then you’d be able to do “cvs -nq update” after a build and clean to see what parts of the build tree had been incorrectly deleted, or not properly cleaned.

Another point of concern – does a stable RISC OS git or hg client exist?
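To illustrate that last point (the file names here are invented for the example), a dry-run update lists anything that differs from the repository without touching the working copy:

  # -n = dry run (change nothing), -q = quiet. After a build-and-clean cycle,
  # 'U' lines are files the clean phase has deleted and 'M' lines are files
  # it has modified (paths below are examples only):
  cvs -nq update
  U Sources/Kernel/Makefile
  M Sources/Kernel/s/ARMops
 |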
A.C.Daniel (376) 15 posts |
Hello everyone! Nice to see so much splendid work going on :) Would it perhaps be possible to have an image filing system that hides all the CVS stuff and the different licence folders from the build system? |
Andrew Hodgkinson (6) 465 posts |
Not a bad idea – HideFS, say, could mount some tree on another available filing system (typically ADFS) and present the same set of files and folders, but with user-configurable sets of files or folders matching given patterns simply excluded from what it shows (it would be useful to hide Unix-like .foo files too). OTOH, integrating that into FileSwitch (or lower – I’m not sure) may be more generally useful.

Merging trees wouldn’t really work, though: if someone saves a file into this new FS, how would it know where to actually save it? There may be multiple places the file could have come from, and there’s nothing in the ‘save’ action to indicate that either to the user or to the filesystem. Sorting that out at the build system level makes much more sense IMHO (it also means there’s less likely to be confusion about licence conditions while editing sources).
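For comparison (a sketch of the idea, not a design), the Unix-side analogue of that kind of pattern-based hiding is what rsync’s exclude rules already provide:

  # copy a tree for building, leaving CVS admin directories and dotfiles behind:
  rsync -a --exclude 'CVS/' --exclude '.*' Src/ BuildTree/
 |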
Peter Naulls (143) 147 posts |
I have, although I am 8 time zones away. The difference with SVN is that a local pristine copy is kept too, so ‘svn diff’ is much, much quicker. CVS has to reference the server, which can be hopelessly slow. Even on closer connections, on a relatively slow DSL setup, the difference can be significant.
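A concrete illustration (the path is invented):

  # svn diffs against the pristine copy it keeps under .svn/, so this is
  # entirely local and effectively instant:
  svn diff Sources/Kernel/s/ARMops
  # cvs has to contact the repository server to make the same comparison:
  cvs diff Sources/Kernel/s/ARMops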
No, indeed. The GCCSDK autobuilder packaging does a number of things when generating zips, which include removing all such control files. This is typical on any system. Every SCS has its own “issues”, whether real or invented, and I appreciate the immense complexity of moving away from your legacy system. But what you should remember about SVN is that it specifically addresses shortcomings in CVS, including: atomic commits; proper versioning of directories, renames and file metadata; cheap branching and tagging; and local pristine copies, so that operations like diff and status don’t need to contact the server.
See Google for other comparisons. It’s also sufficiently close to CVS that understanding its use is easy for CVS users. Git and others have a completely different approach to source control, although they do allow eventual merging of distinct trees. And in fact, I’ve found both Git and Mercurial infuriating and hard to understand at times (and I’m far from a novice software engineer).
No, but I suspect that’s a quickly remedied point. However, we do have a port of the latest 1.6 Subversion. |
Jan-Jaap van der Geer (123) 63 posts |
I have actually tried to build the git-core package. There is a problem with strlcpy not being available, which I presume is easily fixed. I did not try that, though. But I think there might be a bigger problem: a friend of mine who is familiar with git (I know next to nothing about it) told me that there are no properties for files in git. So that might mean there are problems with retaining filetypes, I suppose.
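For what it’s worth, git’s own Makefile treats strlcpy as an optional platform function and carries a bundled replacement, so (assuming the rest of the build environment is already in place) it should just be a matter of:

  # tell git's Makefile to compile its compat/ strlcpy instead of expecting
  # the C library to provide one:
  make NO_STRLCPY=YesPlease
 |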
Andrew Hodgkinson (6) 465 posts |
Filetypes are a good point. The filenames in CVS have “,xxx” extensions for non-text types. Historically these were checked out onto remote servers usually accessed via NFS; when the RISC OS side mounts the NFS share, it presents the files without the filename extension but with the appropriate filetype set. Thus the whole “your checked-out tree and your working copy are two independent things” process was rather essential, with the copy to/from NFS a simple way of getting the filename extensions either added or resolved, depending on the direction of transfer. In an ideal world where the checked-out and working trees were one and the same, the source control system binary would have to handle the filename/filetype translations transparently.
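To make that concrete (the directory and file names are invented; the hex after the comma is the standard suffix encoding of a RISC OS filetype):

  # a checkout as it appears on the Unix/NFS server side:
  ls Sources/SomeApp/Resources
  Messages        Sprites,ff9     Templates,fec
  # Messages is plain text so carries no suffix; ,ff9 and ,fec encode the
  # Sprite (&FF9) and Template (&FEC) filetypes, which the RISC OS NFS
  # client strips from the leafname and applies as real filetypes.
 |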
Peter Naulls (143) 147 posts |
Hm, I must say that the regular ~11PM GMT login breakage is getting tiresome – often right around when I want to comment on something ;-( Anyway, filetypes are almost a non-issue. Any port of a VCS will need to add the ,xxx extension when checking in, and set the filetype when checking out. Most of this is already done inside UnixLib. This means the ,xxx extension will be there when checking out on Unix, and therefore translated properly again over NFS/Samba. |
Andrew Hodgkinson (6) 465 posts |
UnixLib doesn’t like HostFS paths, though; that’ll need sorting some time.

BTW, there isn’t a 23:00 GMT login breakage. There’s a 5:00 GMT server reset, and according to the log rotation timestamps it’s happening at the correct time of day. Unfortunately there’s not much I can do for people in the wrong time zone. Maybe if “version 3” of the public site ever happens, on Rails 2.3.5, the restarts will be less necessary (e.g. weekly, or something). The biggest problems at present are the Wiki and the DRb server, though; the latter is a Ruby core problem, and without moving to 1.9 (which would cause catastrophic breakage) it’s not likely ever to get fixed.

I must admit, open source development methodology really gets on my nerves sometimes. It’s one thing to decide you’re going to remove or change APIs because you can’t be bothered with – or have some kind of bizarre moral objection to – backwards compatibility. It’s quite something else to change an entire language so that entirely legitimate code from a previous release (1.8.x) becomes syntactically invalid on the next point release (1.9.x). Sigh…! |
Steve Revill (20) 1361 posts |
The site is invariably broken when I use it around midnight – so I usually have to give it a kick using the live restart script. Anyway, this is off-topic… |
Peter Naulls (143) 147 posts |
This is definitely the first I’ve heard of this. You’ll need to file a bug report over on riscos.info, with some examples if you can. |
Andrew Hodgkinson (6) 465 posts |
OK, I’ll look into characterising it more. |
Peter Naulls (143) 147 posts |
The time is a guess, but it’s around then that it normally becomes impossible to log in. This wouldn’t be so bad if the site didn’t log me out whenever I look at it the wrong way – that is, I seem to be logged out after a small number of minutes, or sometimes, I think, if I visit the forums over non-HTTPS. I’m not sure of the exact logic, but it’s certainly annoying :-( |
Andrew Hodgkinson (6) 465 posts |
If you visit via non-HTTPS, you will have the illusion of being logged out; change to HTTPS and it’ll be fine. The login persistence cookie cannot be safely transmitted in the clear. Why not force-redirect to HTTPS? Because some users still use browsers without HTTPS capability. |
Peter Naulls (143) 147 posts |
Yes, I know. But apart from that, there is still a short timeout. Firefox tells me the cookies “expire at end of session”, which presumably means when I close the browser. |
Steve Revill (20) 1361 posts |
Yes, my login to this site is forever timing out so I have to log in again to post to a forum thread I opened an hour ago but didn’t get around to reading. Then I have to find it again. :( |
Steve Revill (20) 1361 posts |
Anyway, having gone through all that nonsense again, back to the topic: I’ve been doing some experiments which include:
It’s getting there, but there are various aspects I’m still going to have to fix. I searched all the Makefiles for ’^’ (the parent-directory element in RISC OS paths), which gives a rough indication of where components include hard-wired relative paths to other components – each of these is somewhere my new build could break. Looking only at the ‘Disc’ build, it seems that some components did fail to build, so I’ll investigate that at some point. If I get to the point where I’m happy with this, I’ll release a quick archive and braindump so other people can test it. After that, we have to look at doing builds without first having removed all of the CVS directories…
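On the Unix side of a checkout, a search along these lines turns up the suspects (grep’s -F keeps the ’^’ literal rather than treating it as a regexp anchor):

  # list every Makefile that mentions a parent-directory ('^') path element:
  find . -name Makefile -print0 | xargs -0 grep -lF '^'
 |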
Andrew Hodgkinson (6) 465 posts |
Whinge, whinge, whinge! Fortunately, being the mild-mannered and terribly helpful soul I am (!), your wish is my command… I’ve increased the Hub login timeout from one hour to four. There may be other issues causing login sessions to fail early, but we’ll cross that bridge if we come to it. |
Steve Revill (20) 1361 posts |
You can just drive it from the command line (assuming the Filer has ‘seen’ !UnTarBZ2).
Well, I’ve written just that – a FrontEnd GUI app (with associated CLI tool) to turn a directory (or application) into a compressed, self-extracting program that doesn’t require any additional, non-standard components – the code only calls OS_File, OS_DynamicArea, OS_ChangeDynamicArea, OS_Module and Squash_Decompress. It seems to be working fine in my early tests (apart from a completely unrelated hard disc and kettle lead failure, ouch). All you’d do on a RISC OS system is ensure the filetype is set to Utility (&FFC) and then double-click it. I’ll release both the creation tool and a self-extracting UnTarBZ2 (and HardDisc4 image) once I’ve finished testing.

Note: it’s not super clever right now. The self-extracting transient utility (with the compressed data built into it) has to fit entirely in your available memory (I think transients are loaded into the RMA), and it uses a dynamic area which has to be able to grow large enough to hold a complete copy of the largest decompressed file. A later revision might decompress files in multiple steps so memory requirements can be bounded.
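In other words, usage should amount to no more than this (the leafname is invented for the example):

  | 'SelfHD4' is a made-up leafname
  *SetType SelfHD4 Utility
  *Run SelfHD4

or simply a double-click from the desktop. Since a transient utility gets no application slot of its own, a dynamic area is the natural home for the decompression buffer. |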
Steve Revill (20) 1361 posts |
And eventually I might make UnTarBZ2 handle zip files, too… |
Jeffrey Lee (213) 6048 posts |
Ah, that’s better. Perhaps the help file should mention that :)
Excellent! But, just to keep you busy, I have a couple of suggestions for improvements:
Although, thinking further about point 1, having an Absolute file that’s larger than the default wimpslot size isn’t going to be too friendly to people using versions of RISC OS that throw an error when you try to load a program larger than the wimpslot. At some point I’ll remember to update the OS to perform the same checks that my Absolutely module does (if RISC OS 5 doesn’t do those checks already – I don’t think I’ve really checked).

Oh, and don’t feel too pressured to go out of your way to implement my suggestions; a dumb self-extracting archive is still better than none at all, and it should be easy enough for someone else to improve upon the code once it hits CVS.
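For older systems, the traditional workaround is a Run/Obey file that raises the slot before starting the binary (the size and the SelfHD4$Dir variable are invented for illustration):

  | make sure the application slot is big enough before running the Absolute
  *WimpSlot -min 8192K
  *Run <SelfHD4$Dir>.SelfHD4
 |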
Steve Revill (20) 1361 posts |
I’ve updated the help file, too, as it happens.
Maybe in version 2.00. Version 1.00 just extracts into the same directory where you put the utility itself. |
That’s certainly doable. It’d probably only take a couple more ARM instructions, actually. :) |
Yep. At least it’s not quite as dumb as you perhaps thought – it extracts wherever you put it, rather than into the CSD. The latter would be one step too far down the annoying path, even for me. |
W P Blatchley (147) 247 posts |
Steve, I’m really interested in getting to the point where I can build the sources from CVS without having to merge the directories together first, so I’d be happy to test this once you’re ready to release it into the wild. I’m still getting myself set up, so there’s no rush. Thanks for putting the time in to try to make this workable. I might be alone in thinking so, but as far as I’m concerned it would be a big step forward.

I’m wondering if, having got to the point of being able to build with the same directory structure as resides in CVS, it might be possible to move all the CVS admin files out into a parallel directory structure on your local machine (rather than remove them completely), then restore them whenever you want to do a CVS update or commit. I’d have thought that would work, wouldn’t it? Or is my surface-level understanding of CVS letting me down again!?
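Something along these lines is what I have in mind, on the Unix side of the checkout (paths invented; GNU find and tar assumed):

  # stash: archive every CVS/ admin directory, then remove them from the tree
  cd Src
  find . -type d -name CVS -print0 | tar cf ../cvs-admin.tar --null -T -
  find . -type d -name CVS -prune -exec rm -rf {} +
  # ...build...
  # restore before the next cvs update or commit:
  tar xf ../cvs-admin.tar
 |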
W P Blatchley (147) 247 posts |
Probably a fool’s errand, but I was trying to check out the sources on RISC OS. I’m having problems with Perl and the ‘checkout’ script. I get the following errors:

  *perl checkout OMAP3Dev
  Use of uninitialized value at /PerlPrivLib:zip/./File/Spec/Unix.pm line 161.
  Use of uninitialized value at /HostFS::HardDisc4/$/ROOL/Src/bin/perllib/Pace/Cvsrc.pm line 50.
  Use of uninitialized value at /HostFS::HardDisc4/$/ROOL/Src/bin/perllib/Pace/Cvsrc.pm line 51.
  --- ERROR: I don't know who you are - go away!
  --- ERROR: open of errors file Products/OMAP3Dev/ERRORS failed

The first error, in Unix.pm, looks to come from code trying to dissect the @PATH variable. I had understood that @PATH was set from the RISC OS system variable PerlScript$Path, so I set that as:

  *set PerlScript$Path <PerlScript$Path>,HostFS::HardDisc4.$.ROOL.Src.bin.

But that didn’t help. I’m no Perl expert. Can anyone help me? Thanks!
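One guess, offered with no certainty at all: the “who you are” error suggests Cvsrc.pm reads your identity from the environment and finds nothing. If the port picks up Unix-style environment variables from RISC OS system variables, something like the following might get further (the variable names are pure assumption on my part):

  | NB: the variable names below are guesses, not verified against the script
  *Set UnixEnv$USER wpblatchley
  *Set UnixEnv$LOGNAME wpblatchley
 |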
Jeffrey Lee (213) 6048 posts |
Can’t say I’m much of a Perl expert myself. And I’m definitely not an expert in the quirks of the RISC OS port. You’re probably the person in the best position to fix the problems, since you’re the only one who’s mad enough to try running the scripts under RISC OS! ;) |