Wakefield Acorn & RISC OS Computer Show 2017
George T. Greenfield (154) 748 posts |
Thanks for the explanation. I count DiscKnight as one of the best tenners I ever spent on anything, in fact if the price had been ten times higher it would still have been worth it for the grief it has saved me over the years. |
Doug Webb (190) 1158 posts |
Also my thanks for the explanation and details on the behind-the-scenes work that the ROOL team do to ensure things progress in a logical manner, bringing together all the resources available to ensure the best outcomes. That’s why I’d urge people who have downloaded RISC OS images to make sure at least a little money goes ROOL’s way, be it by donations (regular or one-off), buying their goods, or pledging to the bounties. Or, if not, why not help with the documentation updates that are going on as well, since all that costs is time. |
George T. Greenfield (154) 748 posts |
Agreed and seconded: I’ve just installed RC15 on a Pi2 (it’s running now) and have set up a modest monthly donation by way of thanks and encouragement :-). |
Steffen Huber (91) 1949 posts |
And FileCore supports at least 8 drives per FS (I remember various SCSI solutions back then that allowed 8 partitions), so it would be 16TiB. I dimly remember the original Castle RISC OS 5 blurb, where the new FileCore limits were announced, and I am fairly sure that it said “number of drives per FS: 256”. Did this really happen? |
Sprow (202) 1155 posts |
“you could get to 8TB”
Though typically 4 of the 8 drives are set aside for floppy (or, more correctly, ‘removable’) drives; certainly ADFS and SCSIFS split that way, though you’d have to be slightly mad to want a machine with that many floppy drives!
You’re thinking of the DiscOp64 SWI added in RISC OS 5. The real motivation behind that SWI was that the alternative disc record in DiscOp and SectorDiscOp was packed into only 26 bits along with some flags, so doesn’t work on a 32 bit memory map where RAM exists above 64M. Since there was a new SWI being invented, the opportunity was taken to expand the disc address (from a 3-bit drive number plus a 29-bit byte or sector offset) back into a simple byte offset, extend it to 64 bits, and allow for up to 256 drives. However, at present FileCore just maps that internally back to 3+29 internal addresses. Assuming ROOL continue to roughly follow my suggested filing system bounty order, then step 3 of 5 is where the limit of 8 drives is tackled. But we’re still saving up for step 2, it’s a whopper. |
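The old packed disc address Sprow describes can be sketched in C. This is only an illustration of the 3-bit drive + 29-bit offset layout mentioned above, not RISC OS source; the field names and helper functions are invented for the sketch.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative layout of the old FileCore disc address: a 3-bit drive
 * number in the top bits and a 29-bit byte (or sector) offset below it,
 * packed into one 32-bit word. Names are hypothetical. */

#define DRIVE_BITS   3u
#define OFFSET_BITS  29u
#define OFFSET_MASK  ((1u << OFFSET_BITS) - 1u)   /* 0x1FFFFFFF */

static uint32_t pack_disc_addr(uint32_t drive, uint32_t offset)
{
    /* drive occupies bits 29..31, offset bits 0..28 */
    return (drive << OFFSET_BITS) | (offset & OFFSET_MASK);
}

static uint32_t addr_drive(uint32_t addr)  { return addr >> OFFSET_BITS; }
static uint32_t addr_offset(uint32_t addr) { return addr & OFFSET_MASK; }
```

With a plain byte offset, 29 bits caps each drive at 2^29 = 512 MiB, which is presumably why sector-based addressing was used to stretch the range and why DiscOp64 moved to a flat 64-bit byte offset instead.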
Alan Robertson (52) 420 posts |
Is there anyone that’s keen to carry out Step 2, but just needs more financial incentive? If so, feel free to share the target amount here or let me know privately. After several years of being open it’s stalled at ~2.8K. I’d love to see us move to Step 3! |
Steffen Huber (91) 1949 posts |
I think step 3 consists of too many largely unrelated things. My step 3 would be “64bit file size for FileSwitch”. Nothing more, nothing less. |
Jeffrey Lee (213) 6048 posts |
Really? Adding 64bit APIs to FileSwitch sounds almost too trivial to be a bounty. |
Steffen Huber (91) 1949 posts |
Trivial or not, it is a precondition for a lot of work that builds upon this. And designing the API is perhaps not the easiest thing. |
Jeffrey Lee (213) 6048 posts |
If you’re designing a new API then you’ll want to take into account several of the other things from part 3 – unicode filenames, improved file permissions, arbitrary metadata, etc. Otherwise each new feature potentially adds a new API version that needs to be supported. A bounty which covers all of the FileSwitch-related work makes sense, but trying to split it down to individual API features doesn’t make much sense. |
Steffen Huber (91) 1949 posts |
To be honest, I don’t understand why FileSwitch has to know about the filename encoding – it does not bother at the moment, or have I missed something? |
Chris Hall (132) 3554 posts |
At the moment it assumes Latin 1. |
Steffen Huber (91) 1949 posts |
Because it is defined to be case-insensitive? |
Jeffrey Lee (213) 6048 posts |
Case-insensitive string comparison is a good reason for FileSwitch to know the encoding, yes. When it comes to filename encoding, there are four options I can think of:
|
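Jeffrey’s point about case-insensitive comparison can be made concrete with a toy sketch (not RISC OS source, and the function names are invented): in Latin-1, ‘À’ is the single byte 0xC0 and ‘à’ is 0xE0, so an ASCII-only case fold treats them as different names while a Latin-1-aware fold matches them; under UTF-8 the same characters become multi-byte sequences, so the filing system must know which encoding it is folding.

```c
#include <assert.h>

/* ASCII-only fold: only A-Z are mapped to lower case. */
static unsigned char ascii_fold(unsigned char c)
{
    return (c >= 'A' && c <= 'Z') ? c + 32 : c;
}

/* Latin-1-aware fold: additionally maps the accented uppercase
 * letters 0xC0..0xDE (excluding 0xD7, the multiplication sign)
 * down by 0x20 to their lowercase forms. */
static unsigned char latin1_fold(unsigned char c)
{
    if (c >= 'A' && c <= 'Z') return c + 32;
    if (c >= 0xC0 && c <= 0xDE && c != 0xD7) return c + 32;
    return c;
}
```

The ASCII fold leaves 0xC0 (‘À’) untouched, so “À” and “à” would be two different files; the Latin-1 fold maps it to 0xE0 (‘à’) and they collide, as a case-insensitive filing system expects.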
Steffen Huber (91) 1949 posts |
Showing my ignorance – how does Linux do it? |
Jeffrey Lee (213) 6048 posts |
Some googling suggests that Linux’s core filing system layer is completely ignorant of encoding schemes, so behaviour is mostly down to individual filesystems (AIUI some have mount options to control what on-disc encoding is used) and applications (whether they treat all filenames as UTF-8 or as the current locale). However things are moving towards the convention of assuming filenames are UTF-8. Since Linux (always?) uses case-sensitive filenames it has fewer issues to worry about than RISC OS. Although Wikipedia does raise the subject of Unicode normalisation. |
Steffen Huber (91) 1949 posts |
Does anyone know how encoding is handled on Windows and MacOS, both of which are (IIRC) case-insensitive? I know that NTFS is Unicode-enabled, but how is this handled in the API, considering the long history of different FS APIs, which surely started with “system encoding, 8 bits per character”? More than 30 years of doing IT, and I have never programmed against any assembler/C API in the DOS or Windows world…

Currently, I think that the initial extension of FileSwitch should provide an encoding-aware API, but not implement anything to support it.

Looking again at “step 3” of Sprow’s plan, I think this is too much to be done in the near future. We have lived with “no encoding support” for a long time now, and most users can cope with it. Going UTF-8 would need massive amounts of work in the applications and FSes to provide real value to the interested user. On the other hand, large file support would benefit many users.

Personally, I would remove points 3, 4, 5, 6, 9, 10, 11 and 12 from step 3. E.g. journalling and metadata are surely dependent on the decision of which FS we want to use in the future, and some FSes will never handle those things, so is that a priority? |
Chris Mahoney (1684) 2165 posts |
Case sensitivity on MacOS is down to the individual file system.
As for encoding, not a clue :) [Rewrote as a list to make it easier to read] |
Rick Murray (539) 13806 posts |
Since forever1 Windows has had a dual API: calls with the suffix A (ANSI) accepting eight-bit data, calls with the suffix W (wide) being Unicode. Actually there is a third oddity, but that’s for Japanese systems. https://msdn.microsoft.com/en-us/library/windows/desktop/dd317748(v=vs.85).aspx 1 Thought about since Win3.1’s Win32S, implemented properly in Win98. https://msdn.microsoft.com/en-us/library/cc194796.aspx |
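The A/W convention Rick describes can be mocked up in portable C. This is not the real Win32 headers: `OpenThingA`/`OpenThingW` are invented names standing in for any API pair, and the macro trick (selecting one variant depending on whether `UNICODE` is defined) is the pattern the Windows headers use.

```c
#include <assert.h>
#include <string.h>
#include <wchar.h>

/* Mock of the Win32 A/W naming convention: every API exists as an
 * ...A (narrow/ANSI) variant and a ...W (wide; UTF-16 on Windows)
 * variant, with an unsuffixed macro choosing between them. */

static int OpenThingA(const char *name)    { return (int)strlen(name); }
static int OpenThingW(const wchar_t *name) { return (int)wcslen(name); }

/* The real headers do exactly this for CreateFile, MessageBox, etc. */
#ifdef UNICODE
#define OpenThing OpenThingW
#else
#define OpenThing OpenThingA
#endif
```

Code compiled without `UNICODE` defined transparently calls the A variant; defining it switches every unsuffixed call over to the W variant without source changes.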
Frederick Bambrough (1372) 837 posts |
At present MacOS by default is case-insensitive generally, but case-sensitive for Time Machine backups. Might reflect future direction/insurance? |
Matthew Harris (1462) 36 posts |
On Windows, it depends on the underlying filesystem. FAT filesystems are case-insensitive and, essentially, force everything to UPPERCASE, except in the case of long filename support (via the VFAT ‘overlay’) where it becomes case-preserving. NTFS as a filesystem is fully case-sensitive, but the Windows APIs access it in a case-preserving way. |
Chris Mahoney (1684) 2165 posts |
I didn’t know that, but it seems like it’s just a case of simplifying things by using the lowest common denominator: A case-insensitive volume can be backed up onto a case-sensitive one without any problems, but the opposite is not true. |