Free SparkFS
David Feugey (2125) 2709 posts |
From https://www.riscosopen.org/forum/forums/1/topics/15048?page=3#posts-99213

The problem is that sometimes my (stupid) ideas go too far. So let’s continue the discussion here. David Pilling, Old Brit God of RISC OS, where are you? :) |
David Pilling (8394) 96 posts |
(where am I) …trying to log in.

My preference would be a new open source archive filing system for RISC OS. I’d be prepared to donate source code and do some work. It would do what SparkFS does, and could fix some shortcomings. The structure of SparkFS splits up nicely: archive modules are free-standing, the front end application is another separate piece, and there’s the core filing system.

The “Shrink Wrap” thing was designed by Acorn for the Network Computer when they ran out of space (a long, long time ago). Err, this is all about speed – I never did any particularly fast decompression code. The best I ever saw was John Kortink’s in PackDir; Dominic Symes’ in CFS was very good. For Shrink Wrap, Acorn said GZip, and later on I added CFS and Acorn Squash modules. I’m not sure of the context, but if you’re hoping to get a speed-up by expanding data – back to the past for optimised code. Browsers use GZip compression, so another project: fast un-GZip. Somewhere on the web someone has worked out how to use new ARM instructions to speed it up.

Going back to SparkFS, I would allow the front end to do the single file operations with external programs – this would allow people to add things like bzip2. Other changes: allow 32-bit file lengths; a modern Zip module; support rename (zip64 – 64-bit file lengths…).

I do not know how collaborative projects work in the RISC OS world these days, or how many people would want to be involved. I am not offering to go away for a year and do it all myself. |
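As an aside on the GZip point: the gzip format is a small header and checksum wrapped around the same deflate stream that raw zlib produces, which is what a filing-system module would typically wrap. A minimal round-trip sketch using Python's standard gzip/zlib modules, purely for illustration (nothing RISC OS-specific, and not the ARM-optimised decompressors mentioned above):

```python
import gzip
import zlib

# Sample data to round-trip through both wrappers of deflate.
data = b"RISC OS archive filing system test data " * 100

# gzip: header + deflate stream + CRC32 trailer (what browsers
# and Shrink Wrap use).
compressed = gzip.compress(data, compresslevel=9)
assert gzip.decompress(compressed) == data

# Raw zlib: the same deflate compression, minus the gzip framing.
raw = zlib.compress(data, 9)
assert zlib.decompress(raw) == data

print(f"original: {len(data)} bytes, gzip: {len(compressed)} bytes")
```

Decompression speed in practice depends on the deflate implementation, which is why hand-optimised (e.g. SIMD-assisted) decompressors matter for this use case.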
Frederick Bambrough (1372) 837 posts |
Would it be greedy to mention encryption? Inter-OS? |
David Feugey (2125) 2709 posts |
Glad you found a way to connect to the forum ;) There are two subjects here: a CFS clone, and a free SparkFS.

A mix of R/O and R/W would be great: a read-only behaviour that would not permit going inside archives when doing recursive operations on the host filesystem, and a read/write mode on demand (archive by archive), just to be sure that the other archives stay untouched. Then the tool could be loaded on start-up.

For the project itself, Shrink Wrap can be a one-man project. I should really take time to get into OS programming. For SparkFS, it’s more complex. Will RISC OS Open host it, or GitHub? I don’t know. But if no one wants to take care of it, I can manage it the manual way (and we’ll see later about improvements).
In fact, it would be cool for both projects: system-level encryption (for the whole file system, or specific directories) and archive encryption. |
Rick Murray (539) 13840 posts |
…is the wrong answer. What needs to happen is for FilerAction (or, preferably FileSwitch) to have an option to not recurse into images. The filing system knows what’s an image and what’s a directory. Trying to fix it in the image filing system leaves you with two choices:
The only logical option is to fix the caller, not the callee.
Reading from David’s post, I think it was a no to SparkFS, but rather an offer to assist (not do entirely) something akin to SparkFS only better. So SparkFS+ then. :-p
Weren’t you expecting it to be an OS component? If so, then it would need to be hosted here in order to be auto-built into the ROM (with a compatible licence).
Encryption is useful for files to be sent elsewhere. It is a lot less useful within RISC OS (as in encrypted file system and/or directories) given the general kernel security situation. |
Steve Pampling (1551) 8170 posts |
You missed out a word. Fixed it for you. :) |
David Feugey (2125) 2709 posts |
That’s what I wanted to say. |
Chris Hall (132) 3554 posts |
The filing system knows what’s an image and what’s a directory. Not quite accurate. A zip file will be an image file whilst SparkFS is loaded but will be a file when it is not. Same for StrongHelp files (depending whether StrongHelp is loaded). But I agree it would be useful to be able to turn off recursive Filer operations within image filing systems. I added such an option to !Cat so that it would either treat an image file that was recognised as such as a directory or as a file when cataloguing a disc or directory. |
David Pilling (8394) 96 posts |
I have problems logging on here… hence the delay.

What often happens to me is that I try to do a search and it comes to grief because of bad archives. Sometimes I am organised and quit SparkFS first. So I’d welcome some control over recursing into image files. I wonder how useful this would be for other image files – PC partitions – what else? Well, there’s the software known as “ImageFS” – was that ever an image filing system? It did work like the ShrinkWrap software.

My thoughts on SW are that I’ll attach any copyright licence anyone wants to it. It would be possible to write modules for it to support things like GIF files. A lot of these files use the same compression algorithm – CFS, GIF, Spark archives, Acorn Squash, PackDir – they’re all the same. Meaning if, for example, John Kortink could be talked into donating his source, then they could all be improved (or, more likely, by making an LZW module).

I thought for an OS component a new project could be called “ArchiveFS” – does anyone know if that name has been used before? Is there some way to set up projects “here”? |
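For readers unfamiliar with the shared algorithm David refers to: GIF, Acorn Squash, and the rest are all variants of LZW. A minimal sketch of the core algorithm follows – real formats differ in code widths, bit packing, and dictionary-reset rules, and this is illustrative Python, not SparkFS or PackDir code:

```python
def lzw_compress(data: bytes) -> list[int]:
    """Minimal LZW compressor: emits a list of dictionary codes."""
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = next_code   # learn the new phrase
            next_code += 1
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    """Inverse of lzw_compress, rebuilding the dictionary on the fly."""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    it = iter(codes)
    w = table[next(it)]
    out = [w]
    for code in it:
        if code in table:
            entry = table[code]
        elif code == next_code:      # the classic KwKwK corner case
            entry = w + w[:1]
        else:
            raise ValueError("bad LZW code")
        out.append(entry)
        table[next_code] = w + entry[:1]
        next_code += 1
        w = entry
    return b"".join(out)

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(data)
assert lzw_decompress(codes) == data
```

Because the decompressor rebuilds the same dictionary the compressor used, no dictionary is stored in the archive – which is why one well-optimised LZW core could serve several of these formats.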
David Feugey (2125) 2709 posts |
So it could really be used as a generic extension of the filesystem, for compression, encryption, conversion, etc.
I like this name :) |
Steve Pampling (1551) 8170 posts |
If it’s fresh code then adopt the licence of the rest of the OS, if elements are another licence then deal with that according to the relevant licence, but if it is GPL of any form then scatter salt, raise a silver cross and hang onto a bible and cast it out. |
Rick Murray (539) 13840 posts |
Sea salt, ground charcoal (, black pepper if you want it to smell nice), and a candle. |
Matthew Phillips (473) 721 posts |
I believe that if you go to the Forum, and then try to log in, it fails to keep you logged in when you re-enter the Forum. The trick is to go to the home page and log in there, and then enter the Forum. |
Clive Semmens (2335) 3276 posts |
Clear any riscosopen cookies first! |
David Feugey (2125) 2709 posts |
This trick works here. Back to topic: I’ll try to look at the ShrinkWrap code, as it’s a good starting point for understanding your way of coding. I definitely volunteer for the ArchiveFS project too. You have my mail. |
David Pilling (8394) 96 posts |
I will start this ArchiveFS project… |
Frederick Bambrough (1372) 837 posts |
S’great! The DeLorean gains a forward gear. |
Rick Murray (539) 13840 posts |
The filing system knows what’s an image and what’s a directory. Yup. When the filing system isn’t active, its data will appear as a file, which of course would never be recursed into. So what I said still stands – it’s never “just a directory” 1 so it shouldn’t be blindly recursed into.
Yup, another fun one is counting files and getting a result larger than the disc’s actual size! 1 Note – not looked to see what ImageFSFix does. |
Richard Walker (2090) 431 posts |
David, that’s interesting to hear. It would be nice if RISC OS could end up with a built-in archiving solution. As clever as SparkFS is, there are still some edge cases (as discussed in this thread), and explaining the whole thing to a new user is a bit odd. I wonder how some of these strange cases can be dealt with? Maybe other bits of the OS need some thought? |
Steve Drain (222) 1620 posts |
There is one other reason for an archive system such as this – the use of the same storage media on RO5 machines and VirtualAcorn while preserving file types. However much I like and use SparkFS, it does not suit this purpose except for smallish transfers. Its main problems are the inability to rename files and the way it slows down for large archives. Compression is not necessary, so you might use another image filing system, but from my investigations the ones available are not sufficiently reliable, particularly as they get larger, and they can leave you with garbage. |
John WILLIAMS (8368) 493 posts |
I don’t remember TBAFS being mentioned in the prequel to this thread where David commented on the pros and cons of some other systems. A big fuss was made in 2011 when it was 32-bitted. I wonder what features it had/has which could inform this thread. |
David Pitt (3386) 1248 posts |
TBAFS 1.03 needs to be downloaded with a real world browser. Runs on the Titanium, with a turn of speed. |
John WILLIAMS (8368) 493 posts |
So it does! I must have been having an almost competent week when I downloaded it in October 2016! |
John WILLIAMS (8368) 493 posts |
Another feature I would appreciate is a simple-to-use passwording facility to secure confidential files. Perhaps all such files could be kept in one place which was then password-protected. |
David Pilling (8394) 96 posts |
TBAFS – remind me what it does.

Rename is easy to add. It is an area where I have behaved badly. If there had been someone else in charge of the project, they would have said “do rename” and I would have done it. For many years I told people that rename would be hazardous to data and slow; those points still apply. Spark files support rename, Zips don’t.

Speed – archive file formats are not designed for speed, they’re compacted. File systems are designed for speed and are not compacted. I don’t know what era we are talking about – SparkFS for Zip files is a lot faster than it was in the 90s. Mark Smith, when he was designing the ArcFS file format, put some thought into making it quicker to update.

I’ve often turned off Zip compression and dumped large quantities of files into an archive for transport via other systems. It is an effective way to do things. The problem with compression is the time to compress the data – more data, more time – and the fact that often the data will be found not to compress: wasted time. The other time problem is at the end of an archive update, writing the directory of contents; this is proportional to the number of files in the archive, not the size of the archive. If you’re going to start editing zip file contents – write 500 MB of data, and then delete the first file in the archive, resulting in 499.999 MB of data being moved along by 1000 bytes – then yes, painful, but it’s the file format.

My thought is that if I can get an open source archive file system out there, then you can do what you want with it. It didn’t take me that long to write SparkFS in the first place; 30 years later it would be easier, so someone could just do their own better thing. I think that access to archive contents should be transparent – I don’t know if (Richard W above) that would be more confusing or less for users from other systems. |
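The zip behaviours described above (uncompressed “stored” entries, the central directory written at the end, and the absence of any cheap delete) can be demonstrated with a short sketch using Python’s `zipfile` module; the filenames are invented for illustration and this has nothing to do with SparkFS itself:

```python
import io
import zipfile

# Build an archive with compression turned off (ZIP_STORED) -- the
# "dump files in for transport" approach described above. allowZip64
# enables the 64-bit extensions mentioned in the thread.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED,
                     allowZip64=True) as zf:
    for i in range(3):
        zf.writestr(f"file{i}.txt", b"payload " * 64)

# The central directory lives at the END of the file, so rewriting it
# after an update costs time proportional to the number of entries,
# not the size of the archive.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    assert names == ["file0.txt", "file1.txt", "file2.txt"]
    # Stored entries: compressed size equals uncompressed size.
    info = zf.getinfo("file0.txt")
    assert info.compress_size == info.file_size

# Note: zipfile offers no delete operation. Removing an early entry
# means rewriting all the data after it -- the painful case above.
```

This is also why turning compression off is a reasonable strategy for bulk transport: writing is fast, and already-compressed data would not shrink anyway.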