Raspberry Pi Model B+
Wouter Rademaker (458) 197 posts |
The nightly beta development build zip is from 2014-07-16 06:43:02, but the ROM inside is still dated 04-Jul-14. LanManFS is still at 2.45, DWCDriver is still at 0.15 and DragASprite is still at 0.18, so it looks like something is wrong. |
David Pitt (102) 743 posts |
That download is as it should be here. (On a Raspberry Pi B, no +.)
Beta RPi ROM 2014-07-16 06:43:02:
*fx0: RISC OS 5.21 (16 Jul 2014)
LanManFS 2.46 (14 Jul 2014)
DWCDriver 0.16 (08 Jul 2014)
DragASprite 0.19 (05 Jul 2014) |
Chris Hall (132) 3554 posts |
That download is as it should be here. And here. The ROM version inside the RC12a SD card image is 5.21 (04-Jun-2014) with LanManFS 2.44 etc., though. Surely it is time the ROM image for the Raspberry Pi was ‘stable’ now? |
Tank (53) 375 posts |
Still throwing logs on the fire…. |
Wouter Rademaker (458) 197 posts |
Chris/David, found it: a bug in Fat32fs. It wasn’t my new riscos.img; the old renamed ROM still carried the 8.3 DOS name RISCOS.IMG, so that was what got picked up. |
Chris Hall (132) 3554 posts |
I downloaded a fresh BCM2835DEV tarball and built from that. Still no luck on the B+. You do have the updated firmware, don’t you? You need it on the Model B+: bootcode.bin, start.elf and fixup.dat have all changed. |
Chris Evans (457) 1614 posts |
Mmm, 5MB. I wonder why that is not compressed? |
Chris Hall (132) 3554 posts |
That is what is inside the RC12a SD card image. Uncompressed ROM. |
Chris Evans (457) 1614 posts |
The answer from:
Both require a reboot of your Pi. |
Chris Dewhurst (1709) 167 posts |
Hi all. Or can just the firmware be on the micro-SD, with the boot sequence on another, ADFS medium, e.g. a memory stick (like on the Beagleboard)? If so, how? Thanks :) |
Chris Evans (457) 1614 posts |
Nothing has changed apart from the SD slot shrinking to microSD. You can have your boot drive on other media, as on a BeagleBoard, but the SD drive is the quickest medium1 on a Pi. 1 As SDFS & SCSIFS don’t yet have caching, FAT32FS devices can be faster under some circumstances, but the SD interface has the fastest hardware. |
Sprow (202) 1158 posts |
I’ve seen that mentioned as a possible silver bullet a few times but remain unconvinced it’ll be such a step change. FileCore’s buffering is all based on speculatively reading to C/H/S boundaries within a fragment, but with SCSIFS and SDFS a large proportion of use is with solid state media (USB sticks/SD cards/SSDs) where there’s practically zero random seek time – though spinning hard discs in USB caddies would still benefit. The write-behind buffering does decouple the foreground from having to wait for the IO to complete, but again a 255k buffer is probably quite easily filled. It’d be interesting to instrument up either SDFS or SCSIFS to try to get a guess at what improvement might be had for something that gives the drive a good workout (a ROM build, email debatching, that sort of thing), as that’ll probably be less effort than trying to add background transfers to either of them only to discover that it doesn’t help much. |
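The instrumentation Sprow suggests can start very simply: time a sequential read through a large file and see how throughput changes with transfer size. Below is a minimal sketch in ANSI C, so it should build with the DDE compiler or GCC; the default path $.BigFile and the 64KB chunk size are arbitrary assumptions, not anything prescribed.

```c
/* Crude sequential-read benchmark - a sketch of the kind of
 * instrumentation suggested above. Vary CHUNK to see how the
 * transfer size affects throughput on a given filing system. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define CHUNK (64 * 1024)          /* bytes per fread() call */

int main(int argc, char *argv[])
{
    static char buf[CHUNK];
    const char *path = (argc > 1) ? argv[1] : "$.BigFile";
    FILE *f = fopen(path, "rb");
    size_t n;
    unsigned long total = 0;
    clock_t t0, t1;
    double secs;

    if (f == NULL) {
        fprintf(stderr, "Cannot open %s\n", path);
        return EXIT_FAILURE;
    }
    t0 = clock();
    while ((n = fread(buf, 1, CHUNK, f)) > 0)
        total += (unsigned long)n;
    t1 = clock();
    fclose(f);

    secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    printf("%lu bytes in %.2f s = %.1f KB/s\n",
           total, secs, secs > 0 ? (total / 1024.0) / secs : 0.0);
    return EXIT_SUCCESS;
}
```

Running the same binary over the same file on SDFS, SCSIFS and FAT32FS media would give exactly the sort of before/after numbers being asked for here.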
Chris Evans (457) 1614 posts |
Well, I’m basing the comment on information from Ben, but I may be extrapolating more from it than is warranted! Apart from mentioning the ‘no caching’, Ben said that the SD bus on the Pi is significantly faster than on OMAP, and I think I’ve seen numerous mentions that the SD bus is faster than the USB bus. I expect (as you say) FAT32FS is faster than SCSIFS or SDFS in software terms, but I would hope that they can all drive the hardware close to its maximum when a large file is written or read, and that the faster hardware of the SD bus would compensate for the slower FS. The benchmark programs seem to show a massive slowdown for SDFS and SCSIFS relative to FAT32FS (even on the same hardware); I suspect some of the test results would change significantly with caching. But as few RISC OS programs do much file manipulation, it probably won’t make much difference to the user experience! |
Jeffrey Lee (213) 6048 posts |
I think the point that Sprow is making is that FileCore’s current caching implementation is likely to be insufficient for most modern use cases. So apart from taking a look into SDFS and SCSIFS to see if there are any latent performance bottlenecks (e.g. I suspect we aren’t getting full performance out of USB), we should also be looking into writing a new caching layer for FileCore (or the filing system stack as a whole). Modern machines have hundreds of megabytes of memory sat in the free pool doing nothing; we should be doing what Linux, Windows, etc. have been doing for years and using that as our cache for disc ops. And ideally that cache memory should be usable by anything else that needs it (font cache, image cache for browsers, etc.) |
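To make the idea concrete, here is a toy sketch of the kind of LRU block cache such a layer might keep over free-pool memory. Everything here – the names, the sizes, and the disc_read() stand-in for the real low-level transfer – is hypothetical illustration, not a real RISC OS interface.

```c
/* Toy LRU block cache, illustrating the structure a new caching
 * layer might maintain. All names here are hypothetical. */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE   4096
#define CACHE_BLOCKS 256            /* a 1MB cache, for example's sake */

struct cache_entry {
    unsigned      disc_addr;        /* block number on disc */
    int           valid;
    unsigned long last_used;        /* LRU timestamp */
    unsigned char data[BLOCK_SIZE];
};

static struct cache_entry cache[CACHE_BLOCKS];
static unsigned long tick;

/* Dummy transfer standing in for the real one (e.g. via SDFS):
 * fill the buffer with a recognisable pattern. */
static void disc_read(unsigned disc_addr, unsigned char *buf)
{
    memset(buf, (int)(disc_addr & 0xFF), BLOCK_SIZE);
}

static unsigned char *cached_read(unsigned disc_addr)
{
    int i, victim = 0;
    unsigned long oldest = (unsigned long)-1;

    for (i = 0; i < CACHE_BLOCKS; i++) {
        if (cache[i].valid && cache[i].disc_addr == disc_addr) {
            cache[i].last_used = ++tick;    /* hit: refresh LRU stamp */
            return cache[i].data;
        }
        if (!cache[i].valid) {
            victim = i;                     /* empty slot beats eviction */
            oldest = 0;
        } else if (cache[i].last_used < oldest) {
            victim = i;                     /* least recently used so far */
            oldest = cache[i].last_used;
        }
    }
    /* Miss: evict the LRU (or empty) slot and fill it from disc. */
    disc_read(disc_addr, cache[victim].data);
    cache[victim].disc_addr = disc_addr;
    cache[victim].valid     = 1;
    cache[victim].last_used = ++tick;
    return cache[victim].data;
}

int main(void)
{
    cached_read(7);                 /* miss: fetched from 'disc' */
    cached_read(7);                 /* hit: served from the cache */
    printf("block 7 starts with byte %d\n", cached_read(7)[0]);
    return 0;
}
```

A real implementation would also have to hand pages back when the free pool shrinks, which is the part Jeffrey highlights as needing to cooperate with other cache users (fonts, browser images, and so on).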
Malcolm Hussain-Gambles (1596) 811 posts |
One thing that strikes me is that the whole I/O layer might benefit from a rewrite. It would be nice if we could make it more POSIX compliant, providing open() as well as fopen(). Caching is all well and good, but there are certain operations where you don’t want caching at all (databases etc.). Adding an I/O scheduler would be great as well, but then we need to take into account the underlying hardware (i.e. is it an SD card, SSD or hard disc). I’m not answering these questions, as I’m not in a position to do so technically (I may know a little about RISC OS, but that’s about it). My personal view is that it would be beneficial – but I do have rose-tinted glasses and tend to try and avoid reality as much as I can ;-) |
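For readers who haven’t met the distinction: on a POSIX system the two levels look roughly as below. This is Unix-flavoured C, not something that compiles against today’s SharedCLibrary – open() with O_SYNC is precisely the kind of cache-bypassing call being asked for.

```c
/* Buffered stdio versus POSIX unbuffered I/O, as found on Unix-like
 * systems. RISC OS's C library stops at the fopen() level. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const char *rec = "commit 42\n";

    /* stdio: data sits in the library's buffer (and any OS cache)
     * until flushed - fast, but vulnerable to a power cut. */
    FILE *f = fopen("journal.log", "a");
    if (f != NULL) {
        fputs(rec, f);
        fclose(f);              /* flushes the stdio buffer */
    }

    /* POSIX: with O_SYNC, write() does not return until the data has
     * reached the device - no write-behind caching at all, which is
     * what a database wants for its journal. */
    int fd = open("journal.log", O_WRONLY | O_APPEND | O_SYNC);
    if (fd >= 0) {
        write(fd, rec, strlen(rec));
        close(fd);
    }
    return 0;
}
```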
David Feugey (2125) 2709 posts |
The cool thing would be a ‘shadow cache’ (or, if you prefer, a free-memory manager): something that automatically uses all free memory, without blocking it, and keeps a list of pages that are no longer available (i.e. new allocations for apps or modules). It would be good for read-only needs (images, fonts, disc data). Perhaps not very fast, but cool :) (Sorry if it’s not very clear. It’s in my head.) |
Rick Murray (539) 13840 posts |
If the plan is to consider a large cache, please please please plan from the outset to be able to either disable caching or switch it to read-only caching.

I say this because I live in a rural area. Little towns are expanding due to ‘lotissement’ housing estates (generic Monopoly-piece new-builds) with an electricity infrastructure that just isn’t keeping pace with all of this. From time to time we get power cuts – a little more than a brownout, but not long enough to be a problem for daily living. Certainly long enough, though, for every electrical device without a battery on board to reboot.

It seems to me that FileCore formats do not always cope “gracefully” with incomplete writes to disc (“One copy of map broken”…), there is no journalling, and holding many megabytes of cached data waiting to be written just seems like a huge accident waiting to happen.

Right now, with NetSurf, the DDE and my server running, I have 185MB free on a 256MB (32MB GPU) system, so it makes sense to think of using that as a cache. However, if the time should come that it does, I’d prefer to manually disable write caching and just accept the speed hit that it would cause.

Thinking, also: when would the data be written to disc? You can’t access FileCore on a CallAfter (unless some re-entrancy-safe hooks are provided), and if the machine crashes, we’re not guaranteed to ever see a CallBack. Does this leave the data in limbo? |
Rick Murray (539) 13840 posts |
Probably, yes. We ought to have some idea of “stuff happens in the background”. The problem is: who is going to do this? And how much munging will be necessary so as not to break older software?

Oh. God. Full POSIX is an IEEE standard (probably costing $$$$ to certify) and it describes a command line and scripting system based upon the Korn shell, services and utilities (what must be present and their command-line options), the structure of the filesystem, a threading library, and a bunch of library functions. It is worth pointing out that Linux, Haiku, Minix, NetBSD and FreeBSD are not fully POSIX compliant (mainly due to extensions and “POSIX sucks, we think this is better” implementations).

Why? Both open files. Neither opens files using the native RISC OS mechanisms. I use DeskLib’s File_Open() :-)

There are certain operations where a large cache would be hugely beneficial (databases etc.). [Experience: the DOS version of FoxPro, with and without a cache manager that used XMS, talking to a fairly slow early-IDE hard disc.]

I’m not sure I’d agree with this. 64-bit is great for high-end systems and servers, it might well be overkill for mobile phones, and I wonder how many domestic printers use Thumb2 because the simpler hardware design is worth it?

The answer I prefer is that it is mostly being done as a labour of love: to stand as a beacon of coding efficiency against the ever-increasing inefficiency and bloat of other systems. Plus, it stands as one of the few open source operating systems with its insides written in assembler. In this day and age that’s astonishing, but it gives a system that is simple, efficient and responsive. My personal opinion is that while threads and such may be of assistance if supported at kernel level, if you want POSIX, install some flavour of Unix.

Uh… one does not just “write an OS”. If you want an example, look at MINIX. It is a Unix-like OS with some brilliant experimental features and a hugely detailed book to go with the source code. I got it out of Woking library ages ago and learned quite a bit about how operating systems are put together. It is now up to MINIX 3 with more features… but it appears still to be tied to x86 hardware and still stuck in the world of academia.
Best thing, I think. :-) |
Jeffrey Lee (213) 6048 posts |
AIUI FileCore currently only deals with write caching for files, and it does so on a per-file basis, which seems perfectly sensible considering the problems you can get yourself into with excessive write caching on non-journalling filesystems (consider FAT on Windows). There are also well-defined mechanisms through which the cache can be flushed (OS_Args 255, fflush, closing the file, etc). So as long as any new caching system continues to honour those interfaces then the risk of data loss shouldn’t be any greater than it is now. |
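In application terms, those flush points look like this from C. A small sketch; I am assuming (not asserting) that the SharedCLibrary’s fflush() ends up honouring the same OS_Args 255 mechanism on the underlying handle.

```c
/* The defined flush points, from C: push buffered data out explicitly
 * before claiming a save has succeeded. Error handling kept minimal. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *data = "important record\n";
    FILE *f = fopen("TestFile", "wb");

    if (f == NULL)
        return 1;
    fwrite(data, 1, strlen(data), f);

    /* Explicit flush point: empty the C library's buffer down to the
     * filing system (OS_Args 255 is the SWI-level equivalent). */
    if (fflush(f) != 0)
        perror("fflush");

    /* Closing the file is itself a defined flush point. */
    fclose(f);
    return 0;
}
```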
Sprow (202) 1158 posts |
I do expect background transfers in SCSIFS and SDFS to speed things up – it certainly ought to, because RAM is the fastest store available (after the on-chip cache, of course). My words of caution are to do some benchmarking first, before embarking on implementing it, since the gains might only come out in tens of percent, not factors of 2! Does this leave the data in limbo? The speculative read-ahead/write-behind is only applied to things using the byte read/byte write interfaces (OS_GBPB, OS_BGet, OS_BPut); whole-file operations (OS_File) don’t go that way, as clearly any file bigger than the cache would just evict everything in it, while wasting time copying it twice in memory. Additionally, when FileCore is being used for image files (e.g. DOSFS) none of that gets read-ahead/write-behind applied either, because the scheme requires FileCore to know which C-H-S fragments correspond to which parts of a file, which it clearly doesn’t for foreign images. Of course, FileSwitch buffers stuff for OS_GBPB, OS_BGet and OS_BPut too, so often you’re paddling around in its sector-sized buffers; FileCore only kicks in when the file pointer spills outside FileSwitch’s buffer. |
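To illustrate the two paths: from C, a whole-file OS_File save can be reached through the DDE’s kernel.h veneers, while byte-wise stdio ends up on the OS_GBPB/OS_BPut side. A sketch under those assumptions – the filenames are arbitrary and error handling is omitted.

```c
/* The two transfer paths described above: OS_File moves the whole file
 * in one go and bypasses read-ahead/write-behind; byte-wise writes go
 * through FileSwitch's buffers, where write-behind can apply. */
#include <stdio.h>
#include <string.h>
#include "kernel.h"

static char buffer[256 * 1024];

int main(void)
{
    _kernel_osfile_block blk;
    FILE *f;
    size_t i;

    memset(buffer, 'x', sizeof buffer);

    /* Whole-file save: OS_File 10 (save a block of memory, setting
     * the file type). */
    blk.load  = 0xFFF;                         /* file type: text (R2) */
    blk.exec  = 0;                             /* unused for op 10     */
    blk.start = (int)buffer;                   /* start address (R4)   */
    blk.end   = (int)(buffer + sizeof buffer); /* end address (R5)     */
    _kernel_osfile(10, "WholeFile", &blk);

    /* Byte-wise writes: each fputc() works through the C library and
     * FileSwitch buffers, eligible for read-ahead/write-behind. */
    f = fopen("ByteFile", "wb");
    if (f != NULL) {
        for (i = 0; i < sizeof buffer; i++)
            fputc(buffer[i], f);
        fclose(f);
    }
    return 0;
}
```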
David Feugey (2125) 2709 posts |
Write cache should be small anyway; read cache is the most important part, and it can be lost at any time… hence my idea to use free memory and let applications overwrite parts of the cache. I’m not up on system internals, but is it possible to read/write any part of system memory from an application/module? Is there a simple way to know if some part of the RAM is in use by a module/app/stack? My idea is: if it’s not used, use it as a page for cache, without allocating it (to the system, it’s still free memory). If it’s allocated later, then when you try to read the page, simply consider that the page is lost. You’d also need a mechanism to add released RAM back to the map of free/available RAM. The good point is that the system could continue to allocate RAM as usual, with no need to manage the cache (that’s done by the cache manager itself). Stupid idea? |
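For concreteness, that idea might look like the sketch below: every cache entry must be revalidated on each use, because the page may have been handed to an application in the meantime. All names here are hypothetical, since no such free-memory manager or lost-page notification exists in RISC OS today.

```c
/* 'Shadow cache' sketch: entries live in pages the system still counts
 * as free, so any entry can vanish between uses and every lookup must
 * revalidate. Entirely hypothetical - page_generation() stands in for
 * a free-memory manager that bumps a counter whenever it hands a page
 * out to an app or module. */
#include <stdio.h>

struct shadow_entry {
    unsigned  disc_addr;       /* which disc block this caches */
    void     *page;            /* page in nominally-free RAM */
    unsigned  generation;      /* manager's counter when we filled it */
};

/* Dummy stand-in for the hypothetical free-memory manager. */
static unsigned current_generation = 1;
static unsigned page_generation(void *page)
{
    (void)page;
    return current_generation;
}

/* Returns the cached data, or NULL if the page was reclaimed and the
 * caller must fall back to reading from disc. */
static void *shadow_lookup(struct shadow_entry *e)
{
    if (page_generation(e->page) != e->generation)
        return NULL;           /* page was given away: entry is lost */
    return e->page;
}

int main(void)
{
    static char page[4096];
    struct shadow_entry e = { 42, page, 1 };

    printf("lookup: %s\n", shadow_lookup(&e) ? "hit" : "lost");
    current_generation++;      /* simulate the page being allocated */
    printf("lookup: %s\n", shadow_lookup(&e) ? "hit" : "lost");
    return 0;
}
```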
Malcolm Hussain-Gambles (1596) 811 posts |
@Rick I pretty much agree with you on most of what you say. |
Chris Hall (132) 3554 posts |
I agree – the OMAP5 has a SATA port and I would say that getting that to work correctly under RISC OS is a much higher priority than a more generalised cache. |
Chris Evans (457) 1614 posts |
It does sound like efforts in other areas may be a lot more fruitful! Chasing benchmarks is a fool’s game, especially when RISC OS spends very little time actually reading or writing, and benchmarks do not represent how users actually use their computers. |
Jochen Lueg (1597) 13 posts |
My B+ and NOOBS arrived today, just in time to test my new 40-pin interface. One strange thing, though: the first thing I always do with a new version of RISC OS is to turn off the textured window background, and nowhere could I find a button to do this. Am I overlooking something obvious? The OS reports itself as 5.21, 4 Jun 14. Jochen |