Filer_Action enhancement!
Rik Griffin (98) 264 posts |
Just want to apologise for the delay in getting this finished – things are rather busy at work, which leaves me less time to work on unrelated tasks! But I’ll get it done ASAP. |
Rik Griffin (98) 264 posts |
The new version is now in the CVS. Apologies for the rather poor summary text – I forgot that the first line is used as the title. |
Rik Griffin (98) 264 posts |
I see that the latest ROM build has the new Filer Action in it. Therefore I’ve removed the test version from my web site. Anyone who’s got that version installed in their Boot sequence should probably remove it and use the new ROM instead :) |
Rik Griffin (98) 264 posts |
I checked in a bug fix to Filer_Action the other day, so the next ROM build should contain v0.50. The bug was not serious – merely that the progress bar would be wrong under certain circumstances. |
Sprow (202) 1155 posts |
Question for Rik: having just given FilerAct a bruising for files > 2G, I noted (but couldn’t figure out what to do about) line 1091 of listfiles.c. As total_entries could be up to 4 billion, I think the product will easily overflow; or, put another way, progress_per_object doesn’t need to be very large for most people’s hard discs (with 50,000+ files) to spill. I think the intermediate calculation may need widening to 64 bit? |
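A minimal sketch of the kind of wrap being described, reusing the two variable names from the thread; everything else below is illustrative and is not the actual listfiles.c code:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* Made-up values, chosen only to show the wrap */
        uint32_t total_entries       = 3000000000u;
        uint32_t progress_per_object = 3u;

        /* 32-bit intermediate: wraps modulo 2^32, giving 410065408 */
        uint32_t narrow = total_entries * progress_per_object;

        /* 64-bit intermediate: gives the exact product, 9000000000 */
        uint64_t wide = (uint64_t)total_entries * progress_per_object;

        printf("narrow = %" PRIu32 "\nwide   = %" PRIu64 "\n", narrow, wide);
        return 0;
    }

Widening only the intermediate, as suggested, is enough provided the final result is known to fit back into 32 bits.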
Rik Griffin (98) 264 posts |
Can you be a bit more specific? The code I think you are referring to counts the objects in a directory and calculates how much “progress” each is worth. At this stage the size of any files in the directory is not relevant.

One place that might need attention to cater for large files would be in memmanage.c, line 1653 onwards – calculating the progress value represented by copying a block of a file. FilerAct uses a signed int to store file size, for one thing, so even though the progress calculation uses long longs to avoid overflow, other parts of the code will need changing.

As I’m now reasonably familiar with the code, I don’t mind taking on the task of ensuring FilerAct is “large file safe” – whatever we decide that means. For future proofing, I think it’ll mean using 64 bit ints to store any file sizes and offsets…? |
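A sketch, under the assumptions in the post above, of what 64-bit sizes and a widened per-block progress calculation could look like; the type and function names here are hypothetical, not FilerAct’s actual structures:

    #include <stdint.h>

    /* Hypothetical replacement for the signed int currently used for sizes */
    typedef struct {
        uint64_t size;    /* total file length in bytes                */
        uint64_t offset;  /* how far through the copy we currently are */
    } file_copy_state;

    /* Progress earned by copying one buffer-sized block.  block_len is a
       single copy chunk (well under 4GB), so the 64-bit product cannot
       overflow even when the file itself is many gigabytes long. */
    static uint32_t block_progress(const file_copy_state *f,
                                   uint32_t object_progress,
                                   uint32_t block_len)
    {
        if (f->size == 0) return object_progress;
        return (uint32_t)(((uint64_t)object_progress * block_len) / f->size);
    }

The design point is simply that every size, offset and intermediate product is held in 64 bits, with a narrowing cast only on the final progress value, which is bounded by the per-object budget.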
Sprow (202) 1155 posts |
Um, line 1091 of listfiles.c, variable name total_entries, seemed pretty specific.
Not any more it doesn’t, per my opening statement “just given FilerAct a bruising for files > 2G”. I’ll let you go do a CVS update/review the commit comments. In the other places I changed I’ve assumed there can be up to 4 billion objects on a disc and each can be up to 4 billion (-1) bytes long, but I think the progress calculation might wrap earlier than that. |
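As a quick check that 64 bits is enough under those assumptions: 2^32 objects of up to 2^32 − 1 bytes each gives a worst-case byte total of 2^32 × (2^32 − 1) = 2^64 − 2^32, which still fits in an unsigned 64-bit accumulator, so widening the sums and intermediate products to 64 bits covers even that extreme (illustrative arithmetic only, not taken from the commits).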
Rik Griffin (98) 264 posts |
I must be looking at an old version of the code – I’ll grab the latest and take a look. The progress calculations start with a total progress value of 2^31-1 (i.e. 0x7fffffff) and divide it equally among the objects being operated on. But rounding errors are accounted for and I think it should all be ok if presented with large files, or lots of files. |
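As a worked example with illustrative numbers (not taken from the code): a directory of 50,000 objects would give progress_per_object = 2147483647 / 50000 = 42949 by integer division; 50,000 × 42949 = 2,147,450,000, so the 33,647 “spare” progress units are what the rounding compensation has to add back.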
Rik Griffin (98) 264 posts |
search_nest_level *nf = context->nested_filenames;
int p = 0;

/* Compensate for rounding errors by adding the 'spare' progress.
   Isn't totally accurate if copying, as the actual progress values
   used will have been halved. */
p = nf->total_progress - (nf->total_entries * nf->progress_per_object);
if (p > 0) *progress += (uint32_t)p;

Ok, in this case nf->progress_per_object is always equal to nf->total_progress / nf->total_entries – in other words, it calculates the progress value of each object in a directory, given the progress value of the directory. The multiplication can’t overflow, as it’s always the reverse of a previous division, so I think we’re safe there. |
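Spelling that last point out with the same names: because progress_per_object = total_progress / total_entries is an integer division, total_entries × progress_per_object ≤ total_progress ≤ 0x7fffffff, so the product always fits in 32 bits and p ends up as a small non-negative remainder rather than a wrapped value.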
Sprow (202) 1155 posts |
Excellent, I’ll stop worrying about it then. |