Filing system bounties
Ben Avison (25) 445 posts |
At ROOL, we’ve been trying to think up suitable projects to assign bounties to. One large area which seems suitable is filing systems, especially local filing systems (FileCore, ADFS, SCSIFS, RamFS etc). Common areas that people cite include
A naive approach might be to assign each of these a bounty, but with a bit of thought, it becomes obvious that there’s a fair amount of overlap between them – for example, at many levels, lots of partitions look the same as lots of discs. It can be easy to see these simply in terms of API changes, but the more I’ve started digging into this, the more I see lots of fundamental issues that need addressing. Bear in mind that a bounty needs to be:
Where there is an unavoidable dependency of one bounty upon another, I suspect we may have to only entertain donations for those with no dependencies, and add others as and when their dependencies have been completed – although in some cases this may pose a problem in justifying to donors why the available bounties are necessary, if they have no immediately obvious benefit. On the other hand, it would help keep the total number of bounties down, which is important to avoid dilution of available funds. Some would advocate the only way to achieve this is to throw away the whole filing system stack and start again from scratch. My personal view is that this is probably unnecessary – that this would waste lots of reusable code which had been thoroughly debugged over many years of use, and most of which doesn’t actually need changing. Most of the changes involve “simply” changing the size of variables, which is amenable to “keyhole surgery” development, as has been proved successful before with things like the 32-bitting process, or the move from DiscOp to DiscSectorOp. I also think this approach is more readily divisible into orthogonal subtasks which would be suitable for assigning bounties to. What follows is a collection of observations. There are a number of unknowns in there – it’s possible that there are other people reading the forums with more knowledge on the subject than me can rapidly fill these in. Consequently, I haven’t attempted to draw up candidates for bounties yet. FileCore 31-bit file size limitThis is probably the most easily isolated task, and has no impact upon any APIs. Although FileSwitch already supports 32-bit file pointers (files up to 4GB) on many filing systems such as NFS, there is something in the way FileCore is implemented which further restricts it to 31-bit pointers (up to 2GB). This is why DOSFS partitions are currently limited to 2GB. ADFS improvementsADFS already knows the ATA LBA48 instruction set, and uses it instead of LBA28 for DiscSectorOp calls where bit 28 is set (bits 29-31 are the physical disc number). It should be relatively simple to fully implement the DiscOp64 API as defined here, and to test it, at least via a test harness which does block-level accesses to a large disc on an Iyonix. Discs up to 3TB in size are currently easily available. Because the new API supports more drives, you would probably implement support for more physical discs at the same time (perhaps testing it by installing multiple SATA PCI cards in an Iyonix, although drivers for these would need to be written). Some investigation needs to be done into the impact of ATA-8’s larger sectors (4K rather than 512 bytes). Do any drives require this, or are they likely to do so in future? Is there a significant performance penalty to using 512-byte sectors on drives that natively use 4K sectors? FileCore is documented to only support sector sizes of 256, 512 or 1024 bytes. How hard would it be to extend it to support 4K sectors natively? Nowadays, it wouldn’t be much of a restriction for the LFAU to be given a 4K minimum, so if it’s just a matter of support for LFAUs smaller than 4K sectors, that’s not a problem. Note that one implication of 4K sectors would be that zones get bigger. One potential problem with that would be the size of the free space links in the map – but at 15 bits, they allow for links of 2^15 = 32 kbits from one free space block to the next within the map, which as luck would have it is just barely big enough! SCSIFS improvementsSCSIFS would need extending similarly to ADFS to properly support DiscOp64. I’m not familiar enough the SCSI command set to say whether we’re facing a need to switch to larger commands yet. Testing large numbers of devices is easier than with ADFS – simply add lots of USB mass storage devices to an Iyonix or Beagleboard. Large discs can be tested using the same 3TB hard drives used for ADFS testing, but via an ATA-to-USB adaptor. 64-bit file pointersThis impacts FileCore upwards. New variants of various interfaces need to be added at both the high and low APIs of FileSwitch:
FileSwitch really just acts as a glorified telephone exchange between applications and filing systems, so I expect it will have limited amounts of processing to actually do on the pointers, beyond converting back and forth between applications using the 64-bit APIs and legacy filing systems that only support the 32-bit APIs. Although I don’t agree with everything in this wiki page, it does at least begin to tackle defining 64-bit versions of the APIs. One common user of the high FileSwitch interface is of course the C library. This is an issue which has been faced by many other OSes before us and 64-bit extensions to stdio.h have been developed – I suggest we copy these for compatibility’s sake. A new chunk to the stubs will be needed to support these new entries. Any C applications wishing to use large files would need to be recompiled using the new headers, new functions and new stubs. The Filer would need updating to use 64-bit directory enumeration calls, and display suitably large file sizes correctly. FileCore’s E+ directory format is unfortunately limited to 32-bit file lengths, so another new variant would be needed (the obvious name is E++ format). Although this need not differ in other aspects from E+ format, reducing necessary code changes, it’s too good an opportunity to miss to evaluate whether other fields need extending. For example, while it’s not obvious to me from the Ursula FileCore spec what the limit it imposes on idlen is, my best guess is that this is 23 bits; we need to evaluate if that is big enough for our needs for (say) the next decade, or whether it should be increased. Ideally, we need to ensure that existing versions of DiscKnight won’t attempt to “correct” E++ format discs. DOSFS improvementsThis is dependent on the above 64-bit file pointer work, but in other respects is an independent task. Once DOSFS is extended to use the above new ImageEntry reason codes, it will be possible to use FAT-formatted devices bigger than 4 GB via DOSFS (or 2 GB if you assume the FileCore 31-bit pointer issue hasn’t already been addressed). This is arguably not a high priority because of the existence of FAT32FS, but we should aim to fix it anyway. PartitioningThere’s an obvious task to survey all commonly-used partitioning schemes used on third-party RISC OS IDE interfaces, as well as on other OSes, and add support to parse them in FileCore, and create/modify them in HForm. I would suggest that each partition appears as a separate RISC OS drive, since for the majority of software won’t care about the partitions being on the same physical disc. We will need auditing the internals of FileCore to ensure no confusion between the “logical” drive number parsed from pathnames, and the “physical” drive number passed to ADFS/SCSIFS via the DiscOp API, but once this is done, there isn’t necessarily a direct dependency on work to support more than 8 physical drives per filing system. Letting the icon bar filer know what’s a floppy and what’s a hard disc may need some thought; using bit 2 of the drive number doesn’t scale well. What’s actually important from a user interface perspective is which drives feature removable media, although it doesn’t appear to be easy to identify this directly via the SCSI command set at least (currently the only things identified as hard discs are literally sealed spinning platters with an ATA-to-USB interface attached). It might be possible to do better by passing information through from the USB layer, but this is unclear – certainly, other OSes don’t seem to do a better job of identifying when a drive has removable media, but then other OSes tend not to have such good separation of the concepts of drive and media as RISC OS has. We will probably need some additional APIs developing (probably at both high and low ends of the FileSwitch API) to enable software that cares to identify when multiple partitions are on the same device. This is needed for formatting software and disc information software, but also to handle ejection properly (you dismount a partition, but eject a physical disc). Some thought needs to be given to the icon bar filer user interface. For one thing, having a large number of drives on the icon bar can be confusing. For another, partitioning support, while fine for fixed discs (which are displayed using the disc name, which would now be the partition name), doesn’t work as well for removable discs. Removable disc icons deliberately use the drive number rather than the disc name because they’re not mounted until you click on them – what happens if once it’s mounted it turns out to have two partitions? Do we have two “:0” icons? Does clicking on any “:0” icon remount all the partitions on that disc, or only if a media change had been detected? Perhaps if you dismount any “:0” partition, the icon disappears, except for the last one which then goes back to being a “click me to mount all partitions in this drive” icon? This needs some more thought. Perhaps the only sensible thing to do with “: Or perhaps we could use a new notation to access partitions – say, using the special field so that “ADFS#0::0” means the first partition on physical drive 0, “ADFS#1::0” means the second partition. But that breaks the hierarchical nature of sequential elements of the pathname, so perhaps “ADFS::0:0” is better, though that may break any software that tries to parse pathnames independently of FileSwitch, so maybe we have to settle for something like “ADFS::0/0” even though that would previously have been a valid disc name. But then that’s also going to be true for “ADFS::8” through “ADFS::255”, so it’s not an isolated problem. Granularity of file allocationSome have suggested that one way round this is to add partition support and split big discs into lots of partitions. Apart from this being an inconvenience to the end user, I don’t think this solves the problem given present limitations, especially since there are other pressures to increase the number of logical drives (both due to demands for more physical drives, and for partition support on each of those physical drives). I’ll try to explain. Each instantiation of FileCore (one for each filing system) currently has a hard limit of 8 drives. For each drive, FileCore maintains a dynamic area containing a copy of the drive’s free space map. ADFS doesn’t support hot-plugging, so as many dynamic areas as there are drives area are allocated; SCSIFS doesn’t have this luxury since it’s the filing system used for USB mass storage, so dynamic areas for all 8 drives are allocated. These don’t use up RAM for drives which are absent, but they do use up quite a lot of logical address space. You may notice that at present, these dynamic areas are limited to 4MB each – so SCSIFS alone takes up 8*4MB = 32MB of precious global logical address space. The size of these dynamic areas is already a compromise – a 4MB map can’t make full use of formats with idlen=21, let alone idlen=23 which is enforced by current FileCore formats (as far as I can tell). Why is this important? Well each bit in the map corresponds to a unit of file allocation granularity (LFAU) on the disc. So if the map is limited to 4 MB, one LFAU can be no smaller than 1/(8*4*2^20) of the disc. For a 2 TB disc this is already 32 KB. This may not sound too bad, but the smallest object on a disc is (idlen+1) LFAUs, so in this case it’s 672 KB – nearly as big as a double-density floppy disc! The impact is lessened to an extent by the fact that “small” files can share the same disc object as their parent directory, but this still sets a minimum size for each directory And it’s only going to get worse. If we apply Moore’s Law to disc sizes, then in a decade we’ll be looking at 100 TB discs, and a 4 MB map would mean an LFAU of 4 MB and a smallest object size of 88 MB – 1/7th of a CD! We could allow the disc map dynamic areas to be as large as is allowed by idlen=23, which is approximately 25 MB. Even this would only permit today’s 3 TB discs to have an LFAU of 16 KB and a minimum object size of 384 KB. And tomorrow’s 100 TB discs would be limited to 1 MB LFAU and 24 MB minimum object size. This doesn’t sound terribly efficient to me, but then are you going to care that much about the odd several megabytes of wastage if your disc is 100 TB large? Leaving this aside for a moment, let’s look at the pressures on memory allocation for maps:
So now we’re looking at a logical address space requirement for maps of 25 MB * 8 * 32 = 6400 MB. Per filing system. But the ARM architecture only has a 4 GB address space, and that has to include everything – applications, RMA, screen memory, I/O space etc, and in a typical current 32-bit RISC OS system, even with dynamic area clamping in use, about half the logical address space has already been claimed before any third-party software gets installed. It should be noted that partitioning doesn’t help the tradeoff between LFAU and map size at all, it just changes the way you look at the problem. If you partition a disc in two in order to halve the LFAU, you just have two maps each of the same size as before you halved the LFAU – this takes just as much RAM and address space as simply halving the LFAU for the whole disc would have done, it just uses up an extra drive numbers and poses maintenance problems for the end user, who becomes responsible for ensuring that neither partition fills up. So something radical has to change. This requires serious design work. What are our options? We could use an application slot per filing system (512 MB – 32 K available). This would permit 195 logical drives with idlen=20, 21 logical drives with idlen=23, or 1 logical drive with idlen=27. There’s little point in trying to address this by increasing the application slot size at the moment, as only the Iyonix can exceed 512 MB RAM (and only up to 1GB RAM at that). But there are potential performance bottlenecks here, since most file operations involve loading/saving the application slot – and so this would need lots of remapping. idlen=27, by the way, would allow a 4TB drive to be mapped using an LFAU of only 2KB. This is back in the sort of granularity that appears to have been typical when the Ursula FileCore spec was written, and drives were nearer 4 GB than 4 TB. However, the efficiency of handling the corresponding map of 470 MB in size has got to be questionable – the search time to find a free space block could be considerable. Perhaps we need to be considering on-demand paging of the disc map? This would save on RAM utilisation, but only while the disc map remains in a dynamic area since we don’t currently have support for holey application slots (not that that would be a bad thing to add to the bounty list in its own right). And this still doesn’t solve the logical address space exhaustion issues. Something better than just allocating one 4 MB dynamic area for each drive on hot-pluggable filing systems might be another approach, maybe one big dynamic area for all maps for a given filing system, and then have the maps allocated using a crude heap within that dynamic area. Would have all the usual fragmentation issues that heaps have, of course. Going up a stage in complexity but also in potential benefits would be a disc map cache, probably on the granularity of a disc map sector to avoid too much thrashing when zone auto-compaction occurs. So in summary, it’s a complete can of worms. It’s far from obvious what the best approach is for most of these “more drives” / “bigger drives” bounties, and there is a lot of overlap between them, behind the scenes. It may be foolish to try to specify a technical solution up front at this stage, I can only guess at what the best approach is. What’s needed is a proper up-front study – more in-depth than this post – of all the important issues and how best to proceed, much like Acorn did before starting any of the Ursula FileCore work. But who’s going to do that, and should that have a bounty in its own right? Of course, having had to investigate all these issues just to the extent that I had to do to write the above, I’ve gone and got my own interest piqued. Must… finish… objasm… |
Martin Bazley (331) 379 posts |
And BASIC. What’s going to happen to
If we need an E++ format anyway, how about allowing customised icon bar sprites to be allocated to a disc, in addition to a name? Also, something along the lines of the ‘hide USB’ functionality would be useful – maybe a ShareFS-like implementation which reduced icon bar bloat by only having commonly used drives in permanent residence? Actually, now I think about it, ShareFS is an ideal user interface candidate for the philosophical issues you mentioned regarding the insertion of partitioned removable media. How about just displaying ‘Discs’, with none initially mounted but the number and details of partitions scanned, ready for the user to mount them and (optionally, see above) add them to the icon bar? Of course, some kind of configuration to add certain commonly-used partitions by default at the point of insertion would be useful, too. In short: for large numbers of discs, think hierarchical rather than linear, but with optional shortcuts. Much like the RISC OS Filer and Pinboard, really. I think it dovetails quite nicely.
Incidentally, what actually happens if you try to name a disc “0”? The infamous “Ambiguous disc name”? Other than that, I don’t think that’s going to be an issue. It’s certainly preferable to inserting an extra colon, which will break every single program at once, right there.
Humble suggestion: devices connected via USB are removable, devices connected via IDE/SCSI are not. Not sure of the implications for floppy and CD drives, but surely RISC OS has some way of distinguishing them already? |
Steffen Huber (91) 1963 posts |
Random thoughts:
|
Jeffrey Lee (213) 6048 posts |
Also remember that another much-needed SCSIFS improvement is to add support for background transfers.
I think the best solution to deal with large free space maps eating RAM/logical address space is to change the way they’re stored in memory. Assuming that filecore only needs to access small amounts of the map at a time (or at least just a handful of pages), it may make sense to implement a system where a small dynamic area with can be used to access a much larger set of physical pages. This would keep the impact on the logical address space low, without requiring the wimpslot to be mapped in/out every time a free space map needs to be accessed. The system could also easily be reused for things like bigger RAM discs, bigger disc caches, etc. And somewhat tangentaly connected is the subject of supporting display rotation on OMAP – in order to support that we need a system where we can allocate physical RAM without mapping it into the logical address space (because it needs to be accessed via the rotation engine instead). |
Andrew Rawnsley (492) 1445 posts |
Just a quick comment to answer a question in the first post. A number of drives are already available that utilise 4k sectors, I believe. If memory serves, recent iterations of Western Digital Green drives 500Gb, 1Tb, 2Tb are 4k and have caused problems for some PC users. I think Seagate and Samsung also have some drives kicking around, although I don’t have time to look it up. But, they are here. |
André Timmermans (100) 656 posts |
DOSFS improvements … Partitioning … FileCore’s E+ directory format is unfortunately limited to 32-bit file lengths, so another new variant would be needed (the obvious name is E++ format). why do we want to extend filecore? I don’t think it scales very well, even today the map takes up a lot of valuable RAM – we should be able to find a better filing system somewhere in BSD land! I think the first thing to do is have a proper architecture in place first to make it easy for everyone. What a disc drive is connected this it what should happen:
This scheme is I think easy to implement and test step by step:
Once you get that running properly you can work your way step by step:
|
Jess Hampshire (158) 865 posts |
Even though Jeffrey appears to have some ideas to avoid this, I don’t think it’s a good idea for compatibility reasons. Do we want a new format that only works on RISC OS? The main reasons for a new format would be not leaving a big disk mostly unused, and allowing big files. The first would be solved by a partitioning system, with 256GB filecore and Fat32 (8TB according to some documentation) partitions. However what would be the main use of files of over 2GB on RISC OS? The two uses I can see are DVD ISO images and video files. Both these would often involve other platforms, so to be really useful, ext3, NTFS or/and UDF support would be needed anyway. (an example would be downloading a large file to use with a media player.) So I would agree with the comment for a separate reason. |
Jess Hampshire (158) 865 posts |
Is there any need to support anything other than MSDOS and GUID partition tables? It is unlikely that anyone would require (or even be able) to use an existing interface both under its 26 bit RO and RO 5. It appears to be possible for an MSDOS type partition table to co-exist with a current style ADFS format, so this should mean that an ADFS partition at the start of a partitioned drive should be usable on non partition aware RISC OS. It would also imply that a partitioner could be written before the system can use partitions properly. (e.g partition 256 GB ADFS plus 750GB NTFS on a terabyte USB drive, for use by a PC and an RO system).
Would it be possible to mount disks with the partitions appearing as subfolders in a read only root directory? Using a linux style example: hda containing partitions 1,2,3 You could mount hda and see folders 1,2,3 which would correspond to the partitions, or you could mount the individual partitions directly. You would see hda, hda1, hda2, hda3 in a sharefs style chooser. |
Jess Hampshire (158) 865 posts |
Since this will involve a new API, could it be non-blocking? |
Trevor Johnson (329) 1645 posts |
If the “windowed approach” is to be implemented, we mustn’t forget to deal with users dragging a file directly to the iconbar drive icon (rather than a filer window)? Also, different hardware would be suited to different implementations, e.g. all icons shown would be less of a problem on a high resolution desktop than a UMPC/netbook. (Sorry to be stating the obvious to the technical wizards on here.) |
Peter van der Vos (95) 115 posts |
It would be the same as with OmniClient. You can not drag a file to Omniclient but you can to the mounted remote discs. |
Theo Markettos (89) 919 posts |
I hate to slightly derail this thread, but I agree with Jess. From what I hear of the FileCore codebase, it’s somewhat lacking in maintainability. If we’re going to transition to a new format, I don’t see how we can justify keeping FileCore going much further, particularly given the major changes Ben has outlined above. I can accept that E++ might be worthwhile if it supports 64 bit file sizes with relatively minor changes, but if we have to address the map issues it sounds like it’s time to have a new filesystem. This might not be so much work if it re-uses code from elsewhere. We would need a filesystem that’s:
There are also some related issues that the FS and/or FileSwitch might need to deal with (eg what happens if I plug in a volume containing hard links?) |
Jeffrey Lee (213) 6048 posts |
I don’t think things are quite as bad as you think they are. Although there’s a lot of source code to FileCore, it does seem to be structured in a fairly logical manner, making it easy to identify the different subsystems. Apart from the 32bit conversion, the last big update that was made was the addition of the E+ format – about 15 years ago. 64bit disc addresses/file pointers will last us for much longer than 15 years. In fact, the only update I can think of us wanting to make after this one would be to update FileCore to be suitable for use in a threaded/multi-core version of RISC OS, which is still very much a pipe dream. Also remember that if we were to opt to use a different disc format for 64bit disc/file support then we’d still have to keep and maintain the existing FileCore code base, for compatability with old format discs. One way to make FileCore easier to maintain, in my mind at least, would be to convert it to C. Since the disc format is well understood and the source is fairly well documented this shouldn’t be too hard, and could even be done in a piece wise fashion, one file or function at a time. In some cases it may even be possible to run both routines side-by-side at runtime in order to verify their behaviour. Since we’ll have to touch quite a lot of the source code in order to perform the 64bit updates, it would probably make sense to do the rewrite first. However since the rewrite won’t provide any immediate benefits, and would be a fairly large task, it may be difficult to justify it to people as a bounty item.
While idly browsing through the FileCore source, I found a comment relating to thils in FileCore.s.FileCore80:
So it may be that the only thing that’s stopping >2GB files from working is the file cache (although judging by that comment, at least some of the cache code supports big files already) |
Jess Hampshire (158) 865 posts |
Wouldn’t it be possible to use add-on filesystems with other licenses? (as with FAT32FS) I would think it unlikely that 256GB volume and 2GB file size limits would be a problem for the boot volume. If it were desired for tidyness to have a single big partition (on a beagleboard for example), couldn’t provision be made for loading the required filesystem from a separate file either from the bootloader, or using a similar method to the CMOS replacement. (Like a podule ROM, in effect.)
This would give us 4GB files, like on FAT32? |
Andrew Hodgkinson (6) 465 posts |
Jess – it’ll be easier to read your posts if you use the "bq. " prefix for quoting, rather than "> ". There’s a link to the Textile reference given in the “Formatting Help” section immediately underneath the text area where you type in messages and replies. Instead of this: > So it may be that the only thing that’s stopping >2GB files from working is the file cache (although judging by that comment, at least some of the cache code supports big files already) …which doesn’t stand out from surrounding text, you’ll get this:
The only difference is the use of “bq.” instead of “>”. |
nemo (145) 2611 posts |
Martin wrote:
I have 88MB Syquest and 1GB Jaz SCSI drives and an IDE 105MB Syquest that say you’re wrong. ;-) |
Jess Hampshire (158) 865 posts |
“>” used to display like that, did it get lost with the last update? |
Andrew Hodgkinson (6) 465 posts |
It’s not part of the Textile specification so if it ever worked, it worked by accident :-) |
David J. Ruck (33) 1649 posts |
Good point, DiscKnight would very aggressively try to turn any E+ format extensions back in to vanilla E+. On RISC OS 6 it already strips out the saved filer display options from directories, because if ROL refuse to supply the documentation, it doesn’t get implemented. Luckily RO6 users haven’t complained too much as long as their files are restored. DiscKnight does have some built in leg room for increasing disc sizes, in that it wont fault standard E+ with idlen up to at least 27, as I expected this to have been increased years ago. Existing RPC and Iyonix hardware being physically limited or practically limited to 128GB, hasn’t made that worth while up to now. If a E++ format supports 64 bit file lengths, I’d expect there to be a new set of magic values in directories which should prevent them being regressed by DK – I’ll have to check the source though! Some incompatible changes to the disc record in the boot block and map would prevent DiscKnight from recognising it all, and attempting any repair. But the only guaranteed way to ensure DiscKnight doesn’t trash any new format would be for me to get a new version out before E++ hits the user base! |
Andrew Conroy (370) 740 posts |
Not sure if this is the best thread to bring this up on, but just to add to the filing system suggestions, I know it’s been mentioned before but can we have some hooks added to the filer so that deletes & overwrites can be trapped by a suitable WasteBin-type App? (I’m naively envisioning something like a wimp_message being sent out when a file is about to be deleted/overwritten so that the wastebin can take a copy first.) |
nemo (145) 2611 posts |
Not really necessary – the underlying OS calls can be trapped easily and handled appropriately. Regardless, messages being broadcast (and then ignored, and then bounced, and only then acted upon by FilerAction) would make deleting multiple files insanely slow, so that’s not the solution. |
Andrew Conroy (370) 740 posts |
Ok, hopefully someone will be able to implement it fairly easily then :-)
I did think after posting that wimp_messages weren’t really the way to go, but being a humble BASIC programmer of little talent that’s all I know about! Hopefully there’s a much more efficient system that can be used (well, BlackHole, Recycler, etc. etc. do it, so it must work somehow!). |
nemo (145) 2611 posts |
One would expect a wastebin application (of which many have been written – doesn’t ROSelect come with Recyclone?) to work outside the desktop too. Consequently Wimp messages don’t really come into it. The Filer asks Filer_Action to do recursive operations, so that can be trapped with a system variable redefinition if one wanted a UI to be able to select whether one wanted to really delete stuff – eg Shift-Del in Windows. |
Andrew Conroy (370) 740 posts |
Just checked, and Recyclone doesn’t work outside the desktop! I saved a file to RAM, dropped to a command line and *Deleted it, and it wasn’t trapped! I think this one needs to work outside the desktop too, yes. Yes, Recycler & Recyclone both offer Shift-Del to force a delete. I think we’d want to incorporate that too. |
Andrew Hodgkinson (6) 465 posts |
I’m not sure I agree. Deleting a file at either the Windows command prompt or in the Mac OS Bash shell results in it being deleted immediately. |